\section{Introduction} \begin{figure}[t] \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NonSmth3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/Lasso3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NoMin3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NonSmth2D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/Lasso2D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NoMin2D.png} \end{subfigure} \caption{Graphical depiction of simple objective functions that have a smooth value function but either do not attain a minimum or are not differentiable at the minimizer. (Top row) The objective $ f $ is plotted against $ x $ and $ u $ for some toy examples: (left) $ \mathbb{R} \times \mathbb{R} \ni (x, u) \mapsto f(x, u) = \exp(x) + \delta_{[u, +\infty)} (x) $, (middle) $ \mathbb{R} \times [0, 4] \ni (x, u) \mapsto f(x, u) = (a x - b)^{2}/2 + \delta_{[-u, u]} (x) $ where $ a, b \neq 0 $, and (right) $ \mathbb{R} \times \mathbb{R} \ni (x, u) \mapsto f(x, u) = \exp(x) + u^{2} / 2 $. The curve $ u \mapsto (u, x^{*}(u), f(x^{*}(u), u)) $ is sketched in red in the first two examples (left and middle). No such curve is shown for the third example because the minimum w.r.t. $ x $ is not attained. The black dashed line (left and middle) shows the boundary of the feasible set in $ \mathbb{R} \times \mathbb{R} $. 
(Bottom row) Value function $ p $ plotted against $ u $ for the corresponding examples.} \label{fig:IntroEx} \end{figure} Given a function $ f : \mathbb{R}^{N}\times\mathbb{R}^{P} \to \mathbb{R} $ with values $ f(\bm{x}, \u) $, we consider the following parametric optimization problem: \begin{equation} \tag{$ \P $} \label{eq:PrimalProb} p(\u) \coloneqq \inf_{\bm{x} \in \mathbb{R}^{N}} f(\bm{x}, \u) \,. \end{equation} The optimal value of $ f(\cdot, \u) $, which we denote by $ p (\u) $, depends on the parameter $ \u $ and is commonly referred to as the \textit{value function} of \eqref{eq:PrimalProb} or the \textit{infimal projection} of $ f $. When the minimum is attained at some $ \bm{x}^{*}(\u) \in \mathbb{R}^{N} $ for a given $ \u \in \mathbb{R}^{P} $, the value function is given by $ p(\u) = f(\bm{x}^{*}(\u), \u) $. For many applications, quantifying the change in $ p $ with respect to $ \u $ is key, which is achieved by computing gradient or subgradient information of $ p $. This is particularly true for Machine Learning applications, in which parametric dependencies occur naturally, for example in min-min or minimax optimization problems, Structured Support Vector Machines \cite{TGK04, TJHA05}, Sparse Dictionary Learning \cite{MBPS10}, Generative Adversarial Networks \cite{GPM+14} and Matrix Factorization. Another important area where such derivative information is crucial is the Sensitivity Analysis of an optimization problem, which finds applications in the shadow price problem \cite[Section 4.3]{Sti18} and also in bridge crane design or breakwater modeling \cite{CMC08}. There, decision-making is based on a measure of how sensitive the model is to changes in the parameters $ \u $. 
If $\bm{x}^{*}(\u)$ is available and differentiable, the gradient information can be computed by differentiating $ p(\u) = f(\bm{x}^{*}(\u), \u) $ with respect to $ \u $, i.e., \begin{equation*} \grad{}{p}(\u) = [D_{\bm{u}} \bm{x}^{*} (\u)]^{T} \grad{\bm{x}}{f} (\bm{x}^{*}(\u), \u) + \grad{\u}{f} (\bm{x}^{*}(\u), \u) = \grad{\u}{f} (\bm{x}^{*}(\u), \u) \,. \end{equation*} However, this approach clearly demands strong smoothness conditions on the parametric function $ f $ and the solution mapping $ \bm{x}^{*}(\u) $, which are not satisfied for common Machine Learning applications. Consider, for example, the following sparsity-constrained linear regression problem: \begin{equation} \label{eq:Lasso} \min_{\bm{x}} \norm{A \bm{x} - b}_2\,,\ \mathrm{s.t.}\ \norm{\bm{x}}_1 \leq u\,, \end{equation} where $A\in \mathbb{R}^{M\times N}$, $b\in \mathbb{R}^M$, and $u\geq 0$. As a constrained optimization problem, the objective (including the constraint in terms of an indicator function) is not differentiable. The value function, however, is continuously differentiable on $ (0, \infty) $ and subdifferentiable at $ u=0 $ \cite{BF08}. As noted in \cite{BF08} and more generally in \cite{BF11, ABF13}, its gradient can be used to solve the following minimal norm problem: \begin{equation} \label{eq:LassoEqv} \min_{\bm{x}} \norm{\bm{x}}_1 \,,\ \mathrm{s.t.}\ \norm{A \bm{x} - b}_2 \leq w\,. \end{equation} The problem in \eqref{eq:Lasso} is one of many instances where the parametric function $ f $ is jointly convex in its arguments, yet algorithmic differentiation strategies based on differentiating approximations to the solution mapping cannot be applied. This is due to the fact that the boundary of the feasible set changes with $ \u $ and, when the solution $ \bm{x}^{*} (\u) $ lies at the boundary for some $ \u $, the subdifferential of $ f $ with respect to $ \u $ at $ (\bm{x}^{*}(\u), \u) $ is a shifted non-trivial cone, hence, in particular, not single-valued. 
We explain this phenomenon more concretely in \sref{prob:DirDiff} (see also \fref{fig:IntroEx}). As a remedy, we invoke standard results from convex duality for the function $f$ to derive the above-mentioned differentiability property of the value function for a large class of optimization problems including \eqref{eq:Lasso}. In fact, beyond differentiability, we explore the formula \begin{equation*} \subdiff{}{p} (\u) = \arg\max_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,, \end{equation*} which expresses the convex (Fr\'echet) subdifferential of the value function $p$ at $\u$ as the set of solutions to a certain optimization problem that depends on the convex conjugate $f^{*}$ of $f$. For a function $f$ that is jointly convex in $(\bm{x},\u)$, the validity of the formula is asserted under the weak assumptions that $p(\u)$ is finite (i.e., the infimum of $f(\cdot,\u)$ in \eqref{eq:PrimalProb} is finite) and that $ \u $ lies in the relative interior $ \ri (\dom{p}) $ of the domain of $p$. Therefore, in these situations, the problem of differentiating the value function $p$ is equivalent to solving a convex optimization problem, which allows us to exploit the large literature on convex optimization algorithms. Since single-valuedness of the subdifferential of $p$ implies its differentiability, strict convexity of $\bm{y}\mapsto f^{*}(0, \bm{y})$, for example, implies differentiability of $p$ without the need for $f(\bm{x},\u)$ to be differentiable. Our approach therefore allows us to compute the variation (gradient) of the value function $p$ in situations where commonly used direct differentiation strategies, for example those based on automatic differentiation, cannot be applied. Nevertheless, even if the parametric function $f$ is sufficiently smooth, the flexibility to apply various (optimal) convex optimization algorithms for computing this derivative information compares favorably with those direct differentiation strategies. 
For the large class of optimization problems that we consider, we summarize algorithms with their convergence guarantees based on the properties of the objective function. \paragraph{Remark.} Differentiation of the value function $p$ in \eqref{eq:PrimalProb} is not to be confused with differentiating the optimal solution mapping $\bm{x}^{*}(\u)$ with respect to $\u$. Besides its use in computing the automatic and implicit gradient estimators, the argmin derivative is used in optimization layers, that is, neural networks whose output is given by solving an optimization problem \cite{AK17, AAB+19}. It is also required in bilevel optimization \cite{DKPK15}, the most well-known applications of which are gradient-based hyperparameter optimization and parameter learning \cite{Dom12, KP13, DVFP14}. A problem similar to ours is the differentiation of a function $ g(\bm{x}, \u) $ with respect to the parameter $ \u $ evaluated at a solution $ \bm{x}^{*}(\u) $ of a system of parametric nonlinear equations $ h(\bm{x}, \u) = 0 $, where $ g : \mathbb{R}^{N} \times \mathbb{R}^{P} \to \mathbb{R}^{M} $ and $ h : \mathbb{R}^{N} \times \mathbb{R}^{P} \to \mathbb{R}^{N} $ satisfy some regularity conditions. We can also replace the function $ g $ with a functional and the non-linear system with a parametric ordinary differential equation. The two problems are related (but not equivalent) because, when $ f $ in \eqref{eq:PrimalProb} is continuously differentiable and has a minimum $ \bm{x}^{*}(\u) $ in $ \bm{x} $ for a given $ \u $, we can set $ h = \grad{\bm{x}}{f} $ and $ g = f $ with $ M = 1 $, and our goal is to differentiate $ p(\u) = f(\bm{x}^{*}(\u), \u) $. To differentiate $ g(\bm{x}^{*}(\u), \u) $ with respect to $ \u $, we can make use of Piggyback differentiation \cite{GF03} or the Adjoint-state method \cite{Pon61, PLA18}. 
These techniques find their use in solving constrained optimization problems (where constraints are often given as ODEs or PDEs) with various applications in Geophysics \cite{Ple06}, Medicine \cite{KFTC12} and Neural Networks \cite{CRBD18}. \section{Problem Setting} \label{sec:PS} We consider parametric optimization problems of type \eqref{eq:PrimalProb} and seek to compute $\nabla p(\u)$, i.e., the variation of the value function $p$ with respect to $\u$. One of our major goals is to characterize the properties of $f$ under which various numerical differentiation strategies with theoretical convergence guarantees can be used. We emphasize differentiation strategies based on iterative algorithms and provide convergence rates. First, in Section~\ref{prob:AAIG}, we recall the most widely used approaches for smooth parametric functions $f$, and demonstrate their limitations on several examples in \sref{prob:DirDiff}. Then, in \sref{sec:DualGrad}, as a remedy for such situations, we leverage a well-known result from convex duality theory for the numerical estimation of the variation of the value function, which allows us to identify problem classes with corresponding convergence rates. \subsection{Analytical, Automatic and Implicit Gradient Estimator} \label{prob:AAIG} Ablin et al.~\cite{APM20} analyze three different methods for iterative derivative approximation of smooth parametric functions $f$, provide convergence rates, and highlight a super-efficiency phenomenon of the automatic differentiation strategy. We recall their results. Let $f$ be twice continuously differentiable on $ \mathbb{R}^{N}\times\mathbb{R}^{P} $ and $ \bm{x}^{*}(\u) $ be the unique minimizer for every $ \u \in \mathbb{R}^{P} $ such that $ \hess{\bm{x}}{f} (\bm{x}^{*}(\u), \u)$ is positive definite. 
From the Implicit Function Theorem, we derive that $ \bm{x}^{*} : \mathbb{R}^{P} \to \mathbb{R}^{N} $ is continuously differentiable with derivative $ D_{\bm{u}}\bm{x}^{*}(\u) = \varphi (\bm{x}^{*} (\u), \u) $, where we define the mapping $ \varphi : \mathbb{R}^{N}\times\mathbb{R}^{P} \to \mathbb{R}^{N \times P} $ as: \begin{equation} \label{eq:Dmin} \varphi (\bm{x}, \u) = -[\hess{\bm{x}}{f} (\bm{x}, \u)]^{-1} \grad{\bm{x}\u}{f} (\bm{x}, \u)\,. \end{equation} The value function and its gradient are then given by: \begin{equation*} p(\u) = f (\bm{x}^{*}(\u), \u)\ \mathrm{and}\ \grad{}{p}(\u) = \grad{\u}{f} (\bm{x}^{*}(\u), \u) \,. \end{equation*} The expression for $ \grad{}{p} $ follows from the chain rule and the optimality condition $ \grad{\bm{x}}{f} (\bm{x}^{*}(\u), \u) = 0 $. The minimizer $ \bm{x}^{*} (\u) $ is estimated by an iterative optimization method which yields a sequence $ \seq{\bm{x}}{k} $ with limit $ \bm{x}^{*}(\u) $. In a realistic setting, such a process is terminated after $ K $ iterations to yield a so-called sub-optimal solution $ \bm{x}^{(K)}(\u) $ for each $ \u $. To compute $ \grad{}{p} $, we either substitute $ \bm{x}^{(K)}(\u) $ in place of $ \bm{x}^{*}(\u) $ in the expression for $ \grad{}{p} $ to obtain the \textbf{analytic gradient estimator}: \begin{equation} \tag{AnG} \label{eq:AnG} \bm{g}^{(K)}_{1} (\u) \coloneqq \grad{\u}{f} (\bm{x}^{(K)}, \u) \end{equation} or substitute it in the expression for $ p $ and then differentiate with respect to $ \u $, assuming that the sequence $ \seq{\bm{x}}{k} $ is differentiable, meaning that the mapping between successive iterates and the dependence on the parameter $\u$ are differentiable, giving us the \textbf{automatic gradient estimator}: \begin{equation} \tag{AuG} \label{eq:AuG} \bm{g}^{(K)}_{2} (\u) \coloneqq [D_{\bm{u}}\bm{x}^{(K)}]^{T} \grad{\bm{x}}{f} (\bm{x}^{(K)}, \u)\ + \grad{\u}{f} (\bm{x}^{(K)}, \u) \,. 
\end{equation} The term $ D_{\bm{u}}\bm{x}^{(K)} $ is an estimator of $ D_{\bm{u}}\bm{x}^{*} $ and is obtained by applying automatic differentiation to $ \bm{x}^{(K)} $, hence the name. Using the expression in \eqref{eq:Dmin} to estimate $ D_{\bm{u}}\bm{x}^{*} $ yields the \textbf{implicit gradient estimator}, i.e., \begin{equation} \tag{IG} \label{eq:IG} \bm{g}^{(K)}_{3} (\u) \coloneqq [\varphi (\bm{x}^{(K)}, \u)]^{T} \grad{\bm{x}}{f} (\bm{x}^{(K)}, \u)\ + \grad{\u}{f} (\bm{x}^{(K)}, \u) \,. \end{equation} Ablin et al.~\cite{APM20} provide the following error bounds for \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG}. \begin{theorem} \label{thm:AAIG} Let $ D \coloneqq D_{x} \times D_{u} \subset \mathbb{R}^{N}\times\mathbb{R}^{P} $ be compact and $ f $ be $ m $-strongly convex with respect to $ \bm{x} $ and twice differentiable over $ D $ with second derivatives $ \grad{\bm{x}\u}{f} $ and $ \hess{\bm{x}}{f} $ respectively $ L_{xu} $- and $ L_{xx} $-Lipschitz continuous. Then the first derivatives $ \grad{\u}{f} $ and $ \grad{\bm{x}}{f} $ are respectively $ L_{u} $- and $ L_{x} $-Lipschitz continuous and, for $ \bm{x}^{(k)} $ produced by $ \bm{x}^{(k+1)} \coloneqq \bm{x}^{(k)} - \tau \grad{\bm{x}}{f} (\bm{x}^{(k)}, \u) $ with $ \tau \leq 1/L_{x} $ and $ \omega \coloneqq 1 - m\tau $, the following statements hold: \begin{enumerate}[label=\textnormal{(\alph*)}] \item The analytic estimator converges and we have: \begin{equation*} \norm{\bm{g}^{(K)}_{1} - \grad{}{p} (\u)}_{2} \leq L_{x} \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{K} \end{equation*} \item The automatic estimator converges and for $ C_{k} \coloneqq \tau (L_{x} k + \omega/2) (L_{xu} + L_{1}L_{xx}) $ with $ \norm{D_{\bm{u}}\bm{x}^{(k)}}_{2} \leq L_{1} $ we have: \begin{equation*} \norm{\bm{g}^{(K)}_{2} - \grad{}{p} (\u)}_{2} \leq C_{K} \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{2K - 1} \end{equation*} \item The implicit estimator converges and for $ C \coloneqq (L_{xu} + L_{1}L_{xx}) / 2 + L_{2}L_{x} $ 
with $ \norm{\varphi (\bm{x}^{(K)}, \u)}_{2} \leq L_{2} $ we have: \begin{equation*} \norm{\bm{g}^{(K)}_{3} - \grad{}{p} (\u)}_{2} \leq C \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{2K} \end{equation*} \end{enumerate} \end{theorem} \tref{thm:AAIG} shows the faster convergence of the automatic and implicit estimators compared to the analytic estimator. The automatic estimator is more stable than the implicit estimator, as shown experimentally in \cite{APM20}. This makes the automatic method a strong contender for estimating $ \grad{}{p} $. It is also not computationally expensive thanks to reverse-mode AD. The memory overhead is overcome by discarding the iterates $ \bm{x}^{(k)} $ for $ k = 0, \dots, K-1 $ and using $ \bm{x}^{(K)} $ only in all the calculations when going backward \cite{Chr94, MO20}. Ablin et al.~\cite{APM20} also study these methods under weaker conditions, for instance, when $ f(\cdot, \u) $ is $ \mu $-{\L}ojasiewicz \cite{AB09}, which generalizes strong convexity. However, their results depend on strong smoothness assumptions on $ f $. \subsection{Problems with Direct Differentiation} \label{prob:DirDiff} Obviously, the setting for which Ablin et al.~\cite{APM20} provide convergence rate guarantees is quite limited. We would like to emphasize the fact that differentiability of the parametric function $f$ is not required for that of the value function $ p $. In this section, we show with simple examples that necessary conditions such as differentiability of the objective and existence of the minimizer are the key limitations of the above methods. 
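Before turning to these examples, the smooth setting itself is easy to reproduce numerically. The following is a minimal Python sketch (not from the paper) on the toy problem $ f(x, u) = (x - u)^{2}/2 + x^{2}/2 $, for which $ x^{*}(u) = u/2 $, $ p(u) = u^{2}/4 $, $ p^{\prime}(u) = u/2 $ and $ \varphi \equiv 1/2 $; gradient descent with step size $ \tau = 1/4 $ produces the iterates, and the forward-mode derivative $ D_{u}x^{(k)} $ is propagated alongside.

```python
def estimators(u, K, tau=0.25):
    # toy problem: f(x, u) = (x - u)**2/2 + x**2/2, so grad_x f = 2x - u
    x, d = 0.0, 0.0                      # iterate and forward-mode derivative D_u x^(k)
    for _ in range(K):
        d = d - tau * (2.0 * d - 1.0)    # differentiate the gradient step w.r.t. u
        x = x - tau * (2.0 * x - u)      # gradient step on x
    gx = 2.0 * x - u                     # grad_x f(x^(K), u)
    gu = u - x                           # grad_u f(x^(K), u)
    g1 = gu                              # analytic estimator (AnG)
    g2 = d * gx + gu                     # automatic estimator (AuG)
    g3 = 0.5 * gx + gu                   # implicit estimator (IG), here phi = 1/2
    return g1, g2, g3

u = 1.3                                  # true gradient: p'(u) = u/2
for K in (5, 10, 20):
    g1, g2, g3 = estimators(u, K)
    print(K, abs(g1 - u / 2), abs(g2 - u / 2), abs(g3 - u / 2))
```

On this quadratic toy problem the implicit estimator is exact (since $ \varphi $ is constant), while the analytic and automatic errors decay like $ \omega^{K} $ and $ \omega^{2K} $, matching the rates of \tref{thm:AAIG}.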
\begin{example} \label{exmp:NonSmthToy} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f (x, u) = \exp(x) + \delta_{[u, +\infty)} (x) \,, \end{equation*} then \eqref{eq:AnG} and \eqref{eq:IG} fail to converge for all $ u \in \mathbb{R} $ while \eqref{eq:AuG} converges only when $ x^{(k)} > u $ for all $ k $ (eventually) and $ D_{u} x^{(k)} $ converges to $ D_{u} x^{*}(u) $. \end{example} \paragraph{Detail.} $ f $ is jointly convex in $ x $ and $ u $ and, for all $ u \in \mathbb{R} $, $ f(\cdot, u) $ is $ \exp(u) $-strongly convex, and $ x^{*} : \mathbb{R} \to \mathbb{R} $ and $ p : \mathbb{R} \to \mathbb{R} $, given by $ x^{*}(u) = u $ and $ p(u) = \exp(u) $ respectively, are continuously differentiable on $ \mathbb{R} $. On the other hand, $ f $ is neither differentiable with respect to $ x $ nor $ u $ at $ (x^{*}(u), u) $ for any $ u \in \mathbb{R} $. To see why $ f $ is not differentiable with respect to $ u $, note that $ f $ can alternatively be written as $ f(x, u) = \exp(x) + \delta_{(-\infty, x]} (u) $. The subdifferential of $ f $ with respect to $ u $ is $ \mathbb{R}_{+} $ when $ x = u $ and $ \{ 0 \} $ when $ x > u $. Thus, when $ x^{(k)} = u $ for some $ k \in \mathbb{N} $, none of the above methods is useful here. If $ x^{(k)} > u $ for all $ k $, we get $ g^{(k)}_{1} (u) = 0 $ and $ g^{(k)}_{3} (u) = 0 $ since $ \partial f / \partial u (x^{(k)}, u) = 0 $ and $ \partial^{2} f / \partial u^{2} (x^{(k)}, u) = 0 $. The automatic estimator is given by $ g^{(k)}_{2} (u) = D_{u}x^{(k)} (u) \exp (x^{(k)}) $ because $ \partial f / \partial x (x^{(k)}, u) = \exp (x^{(k)}) $. It converges to $ p^{\prime} (u) = \exp (u) $ only if $ D_{u}x^{(k)} (u) $ converges to $ D_{u}x^{*} (u) = 1 $. The convergence of $ D_{u} x^{(k)} (u) $ to $ D_{u} x^{*} (u) $ in \exref{exmp:NonSmthToy} is possible only under limited conditions which we do not establish here. 
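The failure of the analytic estimator in \exref{exmp:NonSmthToy} is easy to reproduce numerically. The following minimal Python sketch (not from the paper) runs projected gradient descent on $ f(\cdot, u) $ over $ [u, +\infty) $ and reports the analytic estimator: it equals $ 0 $ while $ x^{(k)} > u $, and once the iterate hits the boundary the subdifferential with respect to $ u $ is the whole cone $ \mathbb{R}_{+} $ (reported here as \texttt{None}); in either case it misses $ p^{\prime}(u) = \exp(u) $.

```python
import math

def analytic_estimator(u, K, tau=0.1):
    # projected gradient descent for min_x exp(x) s.t. x >= u
    x = u + 1.0
    for _ in range(K):
        x = max(u, x - tau * math.exp(x))
    # grad_u f(x, u) = 0 for x > u; at x = u the subdifferential
    # w.r.t. u is the cone R_+, so no single value can be returned
    return x, (0.0 if x > u else None)

x, g1 = analytic_estimator(0.0, 50)
print(x, g1, "vs. p'(0) =", math.exp(0.0))
```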
The next example considers the non-smooth parametric objective of \eqref{eq:Lasso} in an analogous one-dimensional setting. \begin{example} \label{exmp:NonSmthLasso} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f (x, u) = \frac{1}{2} (a x - b)^{2} + \delta_{[-u, u]} (x) \,, \end{equation*} where $ a, b \in \mathbb{R} \backslash \{0\} $, then for all $ u \in (0, \abs{b/a}) $, \eqref{eq:AnG} and \eqref{eq:IG} fail to converge while \eqref{eq:AuG} converges only when $ x^{(k)} \in (-u, u) $ for all $ k $ (eventually) and $ D_{u} x^{(k)} $ converges to $ D_{u} x^{*}(u) $. \end{example} \paragraph{Detail.} $ f $ is jointly convex in $ x $ and $ u $ and, for all $ u \in (0, \abs{b/a}) $, $ f(\cdot, u) $ is $ a^{2} $-strongly convex and we have $ x^{*}(u) = \sgn{b/a} u $ and $ p(u) = (a x^{*} (u) - b)^{2}/2 $. Since $ f(x, u) = (a x - b)^{2}/2 + \delta_{[\abs{x}, +\infty)} (u) $ and $ \subdiff{u}{f} (x, u) = N_{[\abs{x}, +\infty)} (u) $, $ f $ is not differentiable with respect to $ u $ at $ (x^{*}(u), u) $ for any $ u \in (0, \abs{b/a}) $. Given a sequence $ x^{(k)} \in (-u, u) $ with limit $ x^{*}(u) $ we have $ g^{(k)}_{1} = 0 $ and $ g^{(k)}_{3} = 0 $ because $ \partial f / \partial u (x^{(k)}, u) = 0 $ and $ \partial^{2} f / \partial u^{2} (x^{(k)}, u) = 0 $. The automatic estimator $ g^{(k)}_{2} (u) = D_{u}x^{(k)} (u) a (a x^{(k)} - b) $ converges to $ p^{\prime} (u) = D_{u}x^{*} (u) a (a x^{*}(u) - b) $ when $ D_{u}x^{(k)} (u) $ converges to $ D_{u}x^{*} (u) $. \begin{example} \label{exmp:NoMin} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f(x, u) = \exp(x) + \frac{1}{2} u^{2} \,, \end{equation*} then \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} fail to converge for all $ u \in \mathbb{R} $. \end{example} \paragraph{Detail.} This case is obvious because $ \inf_{x} \exp(x) = 0 $ with the infimum not attained, giving us $ p(u) = u^{2} / 2 $ and $ \argmin_{x} f(x, u) = \emptyset $. 
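In \exref{exmp:NonSmthLasso} the value function itself is perfectly well-behaved even though every estimator based on $ \partial f / \partial u $ returns $ 0 $. A minimal Python sketch (not from the paper; $ a = 2 $, $ b = 3 $ are arbitrary illustrative values) evaluates $ p $ through the closed-form minimizer $ x^{*}(u) = \mathrm{clip}(b/a, [-u, u]) $ and checks its derivative by central finite differences against the closed form $ p^{\prime}(u) = a^{2} u - \abs{ab} $, valid on $ (0, \abs{b/a}) $:

```python
def p(u, a=2.0, b=3.0):
    # value function of min_x (a*x - b)**2/2  s.t.  |x| <= u
    xstar = min(max(b / a, -u), u)          # clipped unconstrained minimizer b/a
    return (a * xstar - b) ** 2 / 2

a, b, u, h = 2.0, 3.0, 1.0, 1e-6
fd = (p(u + h) - p(u - h)) / (2 * h)        # central finite difference of p
print(fd, "vs. closed form", a * a * u - abs(a * b))
# the analytic estimator would return 0 here, since grad_u f = 0 on the interior
```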
While one may still use the methods of \cite{APM20} or cvxpylayers \cite{AK17, AAB+19} to efficiently estimate $ \grad{}{p} $ in situations like those presented above, it should be noted that differentiating the solution mapping is not a strictly more general approach than differentiating the value function. This is due to the failure of the applicability of the chain rule for evaluating $ \nabla_{\u} [f(\bm{x}^{*}(\u), \u)] $ for non-smooth functions in general \cite{BP20}. (The concept of a subgradient is not defined for a vector-valued non-smooth function and must be replaced by graphical derivatives and coderivatives; see \cite[Section 9.D]{RW98} and the chain rule in \cite[Theorem 10.49]{RW98}.) This calls for a theoretically justified approach for estimating $ \grad{}{p} $ beyond those which are currently available \cite{APM20, AK17, AAB+19}. \section{Dual Gradient Estimator} \label{sec:DualGrad} The discussion in the previous section suggests that a different method is needed, one that is independent of directly differentiating the parametric objective function $ f $. Trading the differentiability assumption for a joint convexity assumption on $f$ in $(\bm{x},\u)$, we invoke the powerful machinery of convex duality to compute derivative information of the value function $p$ in cases beyond differentiability of $f$. Moreover, the same duality result provides an expression for the convex subdifferential of $p$. Denoting the convex conjugate of a function $p$ by \[ p^*(\bm{y}) := \sup_{\u} \innerprod{\bm{y}}{\u} - p(\u) \,, \] and its biconjugate by $p^{**}:=(p^*)^*$, the following result can be derived when strong duality, i.e., $p^{**}=p$, holds. The dual of the problem defined in \eqref{eq:PrimalProb} is given by: \begin{equation} \tag{$ \mathcal{D} $} \label{eq:DualProb} p^{**}(\u) = \sup_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,. 
\end{equation} Moreover, for $ \u \in \ri{(\dom{p})} $, we have \begin{equation*} \subdiff{}{p} (\u) = \arg\max_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,. \end{equation*} When $ \subdiff{}{p} (\u) $ is single-valued, $ p $ is differentiable at $ \u $ and therefore solving \eqref{eq:DualProb} yields the gradient of the value function, which does not require differentiability of $ f $. These results rely on the following standard convex duality result, which we restate from \cite[Theorem~4.1]{Dru20} and \cite[Section~11.H]{RW98}. \begin{theorem} \label{thm:Main} For $ \mathcal{X} \subset \mathbb{R}^{N} $ and $ \mathcal{U} \subset \mathbb{R}^{P} $, let $ f : \mathcal{X} \times \mathcal{U} \to \overline{\mathbb{R}} $ be a proper, lower semi-continuous and convex function. Then the following statements hold for all $ \u \in \mathcal{U} $: \begin{enumerate}[label=\textnormal{(\alph*)}] \item \textbf{Weak Duality:} $ p^{**}(\u) \leq p(\u) $. \item \textbf{Subdifferential:} If $ p(\u) $ is finite, then \begin{equation} \label{eq:thm:Subd} \partial p(\u) \subset \argmax_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,. \end{equation} If, in addition, the inclusion $ \u \in \ri (\dom{p}) $ holds, then \eqref{eq:thm:Subd} holds with equality. \item \textbf{Strong Duality:} If the subdifferential $ \partial p(\u) $ is nonempty, then the equality $ p^{**}(\u) = p(\u) $ holds and the supremum $ p^{**}(\u) $ is attained. \end{enumerate} \end{theorem} Therefore, \eqref{eq:DualProb} is key for computing the variation of $p$ with respect to $\u$. Our goal is reduced to the problem of solving \eqref{eq:DualProb}, for which the machinery of convex optimization can be invoked to state algorithms and convergence rates. Moreover, in contrast to the automatic differentiation strategy (backpropagation), there is no need to store the iterates, which dramatically reduces the memory requirements. 
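To make the duality formula concrete, consider the illustrative example (not from the paper) $ f(x, u) = (x - u)^{2}/2 + \abs{x} $, whose value function $ p $ is the Moreau envelope (Huber function) of $ \abs{\cdot} $. A short computation gives $ f^{*}(0, y) = y^{2}/2 + \delta_{[-1, 1]}(y) $, so the dual problem is $ \max_{y \in [-1, 1]} uy - y^{2}/2 $ with unique solution $ p^{\prime}(u) = \mathrm{clip}(u, [-1, 1]) $. A minimal Python sketch solving the dual by projected gradient ascent:

```python
def huber(u):
    # p(u) = min_x (x - u)**2/2 + |x|, the Moreau envelope of |.|
    return u * u / 2 if abs(u) <= 1 else abs(u) - 0.5

def dual_gradient(u, K=200, tau=0.5):
    # projected gradient ascent on y -> u*y - y**2/2 over [-1, 1]
    y = 0.0
    for _ in range(K):
        y = min(1.0, max(-1.0, y + tau * (u - y)))
    return y                                   # dual gradient estimator

u, h = 0.3, 1e-6
fd = (huber(u + h) - huber(u - h)) / (2 * h)   # finite-difference check of p'(u)
print(dual_gradient(u), fd)                    # both approximate p'(0.3) = 0.3
```

Note that neither $ f $ nor the iteration ever differentiates $ \abs{\cdot} $; only the conjugate $ f^{*}(0, \cdot) $ enters.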
Let $ \seq{\bm{y}}{K} $ be a sequence generated by an algorithm for solving \eqref{eq:DualProb}; we call the gradient computed by this method the \textbf{dual gradient estimator}: \begin{equation} \tag{DG} \label{eq:DG} \bm{g}^{(K)}_{4} (\u) = \bm{y}^{(K)} \,. \end{equation} The dual estimator is computationally efficient since it only requires solving a convex optimization problem and does not require computing any additional gradient or Hessian terms. The computational expenses depend only on the method used to solve the problem and its rate of convergence. Such an estimator also has no memory overhead such as storing the iterates $ \seq{\bm{y}}{k} $. \subsection{A Large Class of Parametric Optimization Problems} As an application of our approach, we consider the following class of parametric optimization problems \begin{equation} \tag{$\mathcal{R}_{f}$} \label{eq:Roc:obj} f(\bm{x}, \u) = \innerprod{\c}{\bm{x}} + h(\b - A\bm{x} + \u) + k(\bm{x}) \,, \end{equation} where $ h : \mathbb{R}^{P} \to \overline{\mathbb{R}} $ and $ k : \mathbb{R}^{N} \to \overline{\mathbb{R}} $ are proper, lower semi-continuous and convex, $ A : \mathbb{R}^{N} \to \mathbb{R}^{P} $ is a linear map and $ (\c, \b) \in \mathbb{R}^{N}\times\mathbb{R}^{P} $. The convex conjugate of $ f $ is given by: \begin{equation*} f^{*}(\v, \bm{y}) = -\innerprod{\b}{\bm{y}} + k^{*}(A^{*}\bm{y} - \c + \v) + h^{*}(\bm{y}) \,, \end{equation*} which yields the conjugate of the value function as: \begin{equation} \tag{$\mathcal{R}_{p^{*}}$} \label{eq:Roc:vfc} p^{*}(\bm{y}) = -\innerprod{\b}{\bm{y}} + k^{*}(A^{*}\bm{y} - \c) + h^{*}(\bm{y}) \,. \end{equation} Therefore, in order to compute the variation of $p$ with respect to $\u$ by using \eqref{eq:thm:Subd}, we must solve a problem of the form: \begin{equation} \label{eq:dual-grad-estimator-problem} \min_{\bm{y}\in\mathbb{R}^{P}} k^{*}(A^{*}\bm{y} - \c) + h^{*}(\bm{y}) - \innerprod{\b + \u}{\bm{y}} \,. 
\end{equation} In the following section, depending on the properties of $k$, $h$, and $A$, we provide algorithms and convergence rates for solving \eqref{eq:dual-grad-estimator-problem} and, hence, for approximating the variation of the value function $p$. As a generic algorithm for solving \eqref{eq:dual-grad-estimator-problem}, we mention here the Primal--Dual Hybrid Gradient Algorithm by Chambolle and Pock~\cite{CP11}. A sufficient condition for uniqueness of the solution of \eqref{eq:dual-grad-estimator-problem} is strong convexity of $ h^{*} $, which follows from Lipschitz continuity of $ \grad{}{h} $. For a weaker condition, we state the following result: \begin{proposition} Let $ h, k, A $ and $ \c $ in \eqref{eq:Roc:obj} be such that $ h $ is differentiable on $ \intr{(\dom{h})} $ and there exist $ (\bm{x}, \u) \in \dom{k} \times \intr{(\dom{h})} $ with $ A^{*} \grad{}{h} (\u) - \c \in \subdiff{}{k} (\bm{x}) $; then $ \subdiff{}{p} (\u) = \{ \grad{}{p} (\u) \} $ is single-valued for all $ \u \in \ri (A\dom{k} + \dom{h} - \b) $. \end{proposition} \begin{proof} The condition $ A^{*} \grad{}{h} (\u) - \c \in \subdiff{}{k} (\bm{x}) $ guarantees the existence of some $ \bm{y} \in \dom{h^{*}} $ with $ A^{*} \bm{y} - \c \in \dom{k^{*}} $. In this case, the expressions $ h^{*}(\bm{y}) $ and $ k^{*}(A^{*} \bm{y} - \c) $ are finite-valued and $ \dom{p^{*}} $ is non-empty. Since $ p $ is proper, lower semi-continuous and convex \cite[Theorem~3.101]{Hoh19}, for every $ \u \in \ri (\dom p) $ with $ \dom{p} = A\dom{k} + \dom{h} - \b $ \cite[Example~11.41]{RW98}, $ \subdiff{}{p} (\u) $ is non-empty \cite[Theorem~23.4]{Roc70}. The single-valuedness of $ \subdiff{}{p} (\u) $ then follows from the strict convexity of $ h^{*} $ (see \lref{lem:basic}\ref{basic:strict:diff}). 
\end{proof} To understand how this works, we consider the example where $ h = \norm{\cdot}_{2}^{2}/2 $ and $ k = \lambda\norm{\cdot}_{2}^{2}/2 + \gamma \norm{\cdot}_{1} $ for $ \lambda > 0 $ and $ \gamma \geq 0 $. By choosing $ \u = 0 $ and $ \bm{x} $ as: \begin{equation*} \bm{x}_{i} = \begin{cases} (-\c_{i} - \gamma)/\lambda &, \qquad\enspace\; \c_{i} < -\gamma \\[5pt] \qquad 0 &, -\gamma \leq \c_{i} \leq \gamma \\[5pt] (-\c_{i} + \gamma)/\lambda &, \enspace\; \gamma < \c_{i} \,, \end{cases} \end{equation*} we observe that $ A^{*}\grad{}{h}(\u) - \c = -\c \in \subdiff{}{k} (\bm{x}) $. Parametric optimization problems of the form \eqref{eq:Roc:obj} are ubiquitous in Machine Learning, Computer Vision, and Signal Processing. In Signal and Image Processing, the parameter $ \u $ represents the observed variable, while the mapping $ A $ represents the operation performed on the optimal hidden variable $ \bm{x} $ (which is to be determined) to obtain the observed variable. In Machine Learning, $ \u $ is the target or label vector, $ A $ represents the feature matrix obtained from the independent variable, and $ \bm{x} $ denotes the weights of the mapping to be learned which fits the training set $ (A, \u) $. In this model, $ h $ measures the dissimilarity between $ A\bm{x} $ and $ \u $. The second term puts a penalty on $ \bm{x} $ and therefore encodes prior information about the optimal $ \bm{x} $, which is necessary when $ P < N $. In many applications like supervised learning, image denoising and segmentation, $ N \leq P $, while in those like compressed sensing and deconvolution, $ N > P $. Beyond the above applications, \eqref{eq:Roc:obj} also generalizes the classical infimal convolution \cite{Roc70}. For example, such expressions occur in Image Processing applications in the context of regularization via Total Generalized Variation \cite{BKP10}. 
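The choice of $ \bm{x} $ above is componentwise soft-thresholding of $ -\c $, and the inclusion $ -\c \in \subdiff{}{k}(\bm{x}) $ can be verified numerically. A minimal Python sketch (with arbitrary illustrative values of $ \lambda $, $ \gamma $ and $ \c $, not from the paper):

```python
import numpy as np

lam, gam = 0.5, 1.0
c = np.array([-2.0, -0.3, 0.7, 3.0])

# componentwise soft-thresholding: the choice of x from the display above
x = np.sign(-c) * np.maximum(np.abs(c) - gam, 0.0) / lam

# -c must be a subgradient of k = lam/2*||.||_2^2 + gam*||.||_1 at x:
# the residual r = -c - lam*x has to lie in gam * subdiff ||.||_1(x),
# i.e. r = gam*sign(x_i) where x_i != 0 and |r| <= gam where x_i = 0
r = -c - lam * x
ok = np.all(np.where(x != 0, np.isclose(r, gam * np.sign(x)), np.abs(r) <= gam))
print(bool(ok))
```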
Moreover, the Moreau envelope \cite{RW98} of a non-smooth function is of the presented form; it is employed for solving non-smooth optimization problems and is key to the interpretation of many convex optimization algorithms such as proximal splitting methods \cite{LM79, CP11a}. Also, the penalty approaches for approximating the minimization of $ f(\bm{x})+g(\bm{x}) $ via $ \min_{\bm{x}} ( f(\bm{x}) + \min_{\bm{z}} g(\bm{z}) + 1/2\norm{\bm{x}-\bm{z}}^{2} ) $ have the same form, which shows relations to alternating minimization approaches and has been employed in real-world machine learning problems \cite{LWC19}. \subsection{Rate of Convergence} \label{DG:RoC} By invoking convex duality, as described in the previous section, the computation of the value function's variation is reduced to solving problems of type \eqref{eq:dual-grad-estimator-problem}, for which a large literature of optimization algorithms is available in several special cases. We consider the following situations: \begin{enumerate}[label=\textnormal{(\alph*)}] \item Let \eqref{eq:dual-grad-estimator-problem} be a quadratic problem with matrix $ Q \in \mathbb{R}^{P \times P} $ and let $ L = \lambda_{\max} (Q) $ and $ m = \lambda_{\min} (Q) $; then \eqref{eq:DG} computed by the conjugate gradient method converges like $ \mathcal{O} (\omega^{K}) $ with $ \omega \coloneqq (\sqrt{L} - \sqrt{m})/(\sqrt{L} + \sqrt{m}) $ \cite[Section 1.6]{Ber99}. \textbf{Discussion.} Depending on whether $ P $ is smaller (resp.\ larger) than $ N $, this rate is better (resp.\ worse) than those provided in \tref{thm:AAIG} (see the first column of \fref{fig:Exp} for a comparison). 
\item Let $ h^{*} $ be possibly non-smooth with an efficiently computable proximal mapping and let $ k^{*} \circ A^{*} $ have an $ L $-Lipschitz continuous gradient; then the following statements hold for solving \eqref{eq:dual-grad-estimator-problem} by proximal gradient descent (ISTA) and accelerated proximal gradient descent (FISTA): \begin{itemize} \item \eqref{eq:DG} converges with ISTA \cite[Theorem~4.9]{CP16} and FISTA \cite[Theorem~3]{CD15} to $ \grad{}{p} (\u) $. \item If $ h^{*} $ and $ k^{*} \circ A^{*} $ are strongly convex with parameters $ \delta \geq 0 $ and $ \gamma \geq 0 $ and $ \mu = \delta + \gamma > 0 $, \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (\omega_{1}^{K}) $ with ISTA \cite[Theorem~4.9]{CP16} and like $ \mathcal{O} (\omega_{2}^{K}) $ with FISTA \cite[Theorem~4.10]{CP16}, where $ \omega_{1} = (1 - \tau \gamma)/(1 + \tau \delta) $ and $ \omega_{2} = 1 - \sqrt{\tau \mu / (1 + \tau \delta)} $. \end{itemize} \label{enum:(F)ISTA} \textbf{Discussion.} This general setting is beyond the theory that is provided by \tref{thm:AAIG}. \item Let $ h^{*} $ and $ k $ be possibly non-smooth such that their respective proximal mappings can be computed efficiently; then the following statements hold for solving \eqref{eq:dual-grad-estimator-problem} by the Primal--Dual Hybrid Gradient Algorithm with $ L = \norm{A^{*}} $: \begin{itemize} \item \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ \cite[Theorem~5.1]{CP16}. \item If either $ h^{*} $ or $ k $ is strongly convex, then \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (1/K^{2}) $ \cite[Theorem~2]{CP11}. \item If $ h^{*} $ and $ k $ are both strongly convex with parameters $ \delta $ and $ \gamma $ respectively, then \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (\omega^{K/2}) $ with $ \omega = (1 + \theta)/(2 + \mu) $ and $ \mu = 2\sqrt{\delta\gamma} / L $ \cite[Theorem~3]{CP11}. 
\end{itemize} \label{enum:PDHG} \textbf{Discussion.} Similarly, this setting is more general than that of \tref{thm:AAIG}. \end{enumerate} \paragraph{Remark.} For non-strongly convex settings in \eqref{eq:dual-grad-estimator-problem}, the sequence of values $ p^{*}(\bm{y}^{(K)}) - \innerprod{\u}{\bm{y}^{(K)}} $ converges like $ \mathcal{O} (1/K) $ with ISTA \cite[Theorem~4.9]{CP16} and PDHG \cite[Theorem~1]{CP11} and like $ \mathcal{O} (1/K^{2}) $ with FISTA \cite[Theorem~4.4]{BT09}, i.e., we have a potentially accelerated rate of convergence of the objective values. However, this rate does not directly translate into a convergence rate of the iterates and, hence, into a rate for the dual gradient estimator. Such a conclusion requires additional properties of the optimization problem, such as local strong convexity, error bounds, or growth conditions \cite{AB09, FGP15, DL18}. In order to assess the potential of the dual gradient approach directly from the primal functions in \eqref{eq:Roc:obj}, we trace the conditions for the various convergence rates listed above back to properties of these functions. These results are based on the following lemma, whose statements are proved in most standard texts on Convex Analysis, e.g., \cite{HL12} or \cite{Roc70}. \begin{lemma} \label{lem:basic} Let $ g, h : \mathbb{R}^{P} \to \overline{\mathbb{R}} $ be proper, lower semi-continuous and convex functions, let $ \d \in \mathbb{R}^{P} $ and let $ B : \mathbb{R}^{N} \to \mathbb{R}^{P} $ be a linear mapping. Let $ l : \mathbb{R}^{N} \to \overline{\mathbb{R}} $ be defined by $ l(\bm{x}) = g(B\bm{x} + \d) $. Then the following results hold: \begin{enumerate}[label=\textnormal{(\alph*)}] \item If $ g $ is $ m_{g} $-strongly convex on $ \mathbb{R}^{P} $ for $ m_{g} \geq 0 $, then $ l $ is $ \lambda_{\min} (B^{*} B) m_{g} $-strongly convex on $ \mathbb{R}^{N} $.
\item If $ g $ and $ h $ are strongly convex on $ \mathbb{R}^{P} $ with parameters $ m_{g} $ and $ m_{h} $ respectively, then $ g + h $ is $ (m_{g} + m_{h}) $-strongly convex on $ \mathbb{R}^{P} $. \item If $ g $ has an $ L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $ for $ L_{g} \in (0, +\infty) $, then $ l $ has a $ \lambda_{\max} (B^{*} B) L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{N} $. \item If $ g $ and $ h $ have Lipschitz continuous gradients on $ \mathbb{R}^{P} $ with parameters $ L_{g} $ and $ L_{h} $ respectively, then $ g + h $ has an $ (L_{g} + L_{h}) $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $. \item If $ g $ is differentiable on $ \Omega \coloneqq \intr{\dom{g}} $, then $ g^{*} $ is strictly convex on each convex subset $ C \subset \grad{}{g} (\Omega) $. \label{basic:strict:diff} \item $ g $ is $ m_{g} $-strongly convex on $ \mathbb{R}^{P} $ if and only if $ g^{*} $ has a $ 1/m_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $. \label{basic:strong:lip} \item $ g $ has an $ L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $ if and only if $ g^{*} $ is $ 1/L_{g} $-strongly convex on $ \mathbb{R}^{P} $. \label{basic:lip:strong} \end{enumerate} \end{lemma} Let us look at \eqref{eq:Roc:obj} when the regularity conditions given in \tref{thm:AAIG} are satisfied for $ f $. Let $ h $ and $ k $ be strongly convex with parameters $ m_{h} > 0 $ and $ m_{k} > 0 $ respectively and twice differentiable with Lipschitz continuous first and second derivatives. Let $ L_{h} $ and $ L_{k} $ be the Lipschitz constants of $ \grad{}{h} $ and $ \grad{}{k} $, and let $ L_{A} = \lambda_{\max} (A^{*} A) $, $ m_{p} = \lambda_{\min} (A^{*} A) $ and $ m_{d} = \lambda_{\min} (A A^{*}) $; then $ f(\cdot, \u) $ is $ (m_{h} m_{p} + m_{k}) $-strongly convex and has an $ (L_{h}L_{A} + L_{k}) $-Lipschitz continuous gradient.
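The calculus rules of the lemma can be sanity-checked numerically on quadratic instances, where all constants are eigenvalues. The following sketch (with an arbitrary random $ B $ and diagonal $ H $, chosen purely for illustration) verifies parts (a), (c) and (f):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 6, 4
B = rng.standard_normal((P, N))

# g(z) = 1/2 z^T H z with diagonal H, so m_g and L_g are the extreme
# eigenvalues of H.
H = np.diag(rng.uniform(0.5, 3.0, P))
m_g, L_g = H.diagonal().min(), H.diagonal().max()

# l(x) = g(Bx + d) has Hessian B^T H B (independent of d).
eigs = np.linalg.eigvalsh(B.T @ H @ B)
BtB = np.linalg.eigvalsh(B.T @ B)

# Part (a): l is at least lambda_min(B^*B) m_g strongly convex.
assert eigs.min() >= BtB.min() * m_g - 1e-9
# Part (c): grad l is at most lambda_max(B^*B) L_g Lipschitz.
assert eigs.max() <= BtB.max() * L_g + 1e-9

# Part (f): for this quadratic g, g*(y) = 1/2 y^T H^{-1} y, whose
# gradient is 1/m_g Lipschitz.
L_conj = np.linalg.eigvalsh(np.linalg.inv(H)).max()
assert np.isclose(L_conj, 1.0 / m_g)
```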
Using these parameters, the optimal convergence rate for gradient descent is given by $ (L - m)/(L + m) $ \cite{Pol87}, which along with \tref{thm:AAIG} gives us the rates for the analytical, automatic and implicit estimators as $ \mathcal{O} (\omega_{p}^{K}), \mathcal{O} (K\omega_{p}^{2K}) $ and $ \mathcal{O} (\omega_{p}^{2K}) $ respectively, where: \begin{equation*} \omega_{p} = \frac{(L_{h} L_{A} - m_{h} m_{p}) + (L_{k} - m_{k})}{(L_{h} L_{A} + m_{h} m_{p}) + (L_{k} + m_{k})} \,. \end{equation*} The strong convexity parameters of $ k^{*} \circ A^{*} $ and $ h^{*} $ are $ m_{d}/L_{k} $ and $ 1/L_{h} $, and the Lipschitz constants of the gradients of these functions are $ L_{A}/m_{k} $ and $ 1/m_{h} $. These parameters similarly give us the convergence rate for the dual estimator as $ \mathcal{O} (\omega_{d}^{K}) $ for: \begin{equation*} \omega_{d} = \frac{L_{h} m_{h} (L_{k}L_{A} - m_{k} m_{d}) + L_{k} m_{k} (L_{h} - m_{h})}{L_{h} m_{h} (L_{k}L_{A} + m_{k} m_{d}) + L_{k} m_{k} (L_{h} + m_{h})} \,. \end{equation*} Assuming that $ A $ has full rank, the convergence rates depend on whether $ P $ is larger or smaller than $ N $. The condition of strong convexity of $ h $ or $ k $ can be relaxed to non-strong convexity: the expression for the convergence rate of the primal problem stays the same with $ m_{h} $ or $ m_{k} $ set to $ 0 $, while for the dual problem we make use of the results listed in \ref{enum:(F)ISTA} and \ref{enum:PDHG} to compute the rate. We note that theoretical guarantees for the primal gradient estimators are difficult to establish beyond strong convexity and twice continuous differentiability of $ f $ in \eqref{eq:Roc:obj}. The dual gradient estimator, on the other hand, is quite powerful, as it converges in a very broad setting and its convergence rates are theoretically justified.
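The dependence of $ \omega_{p} $ and $ \omega_{d} $ on the shape of $ A $ can be checked numerically. The sketch below is a rough illustration with $ h = \frac{1}{2}\norm{\cdot}_{2}^{2} $, $ k = \frac{\lambda}{2}\norm{\cdot}_{2}^{2} $, a random Gaussian $ A $ and an arbitrary $ \lambda = 2 $ (all illustrative choices); it evaluates both rate expressions for an overdetermined and an underdetermined $ A $:

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam = 50, 2.0
L_h = m_h = 1.0          # h = 1/2 ||.||_2^2
L_k = m_k = lam          # k = (lam/2) ||.||_2^2

rates = {}
for P in (90, 10):       # P > N versus P < N
    A = rng.standard_normal((P, N))
    s = np.linalg.svd(A, compute_uv=False)
    L_A = s.max() ** 2                       # lambda_max(A^*A) = lambda_max(AA^*)
    m_p = s.min() ** 2 if P >= N else 0.0    # lambda_min(A^*A)
    m_d = s.min() ** 2 if P <= N else 0.0    # lambda_min(AA^*)
    w_p = ((L_h * L_A - m_h * m_p) + (L_k - m_k)) / \
          ((L_h * L_A + m_h * m_p) + (L_k + m_k))
    w_d = (L_h * m_h * (L_k * L_A - m_k * m_d) + L_k * m_k * (L_h - m_h)) / \
          (L_h * m_h * (L_k * L_A + m_k * m_d) + L_k * m_k * (L_h + m_h))
    rates[P] = (w_p, w_d)

# The primal rate is smaller for P > N, the dual rate for P < N.
assert rates[90][0] < rates[90][1] and rates[10][1] < rates[10][0]
```

This reproduces the qualitative prediction: the primal strong convexity constant $ m_{p} $ vanishes when $ P < N $, while the dual constant $ m_{d} $ vanishes when $ P > N $.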
\section{Experiments} \label{sec:Exp} We compare the performance of the four different gradient estimators, i.e., \eqref{eq:AnG}, \eqref{eq:AuG}, \eqref{eq:IG} and \eqref{eq:DG}, for the estimation of $ \grad{}{p} (\u) $ in different settings. To this end, we fix $ N $ and run these methods for different values of $ P $ and for different choices of $ h $ and $ k $ in \eqref{eq:Roc:obj}. Changing $ P $ affects $ L_{A}, \lambda_{\min} (A^{*} A) $ and $ \lambda_{\min} (A A^{*}) $, while changing $ h $ and $ k $ modifies $ L_{h}, L_{k}, m_{h} $ and $ m_{k} $; this also includes cases of non-differentiability of $ k $ and non-strong convexity of $ h $. For each problem and for each $ P $, we generate error plots of the sequences $ \bm{g}^{(n)}_{i} $ for a given $ \u $. Since the convergence rates of the methods \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} depend on that of the original sequence, we also show the plots for $ \bm{x}^{(n)} $ for each of the examples. We consider the following four examples to experimentally verify our observations: \begin{equation} \label{eq:Exmps} \begin{aligned} f_{1}(\bm{x}, \u) &= \frac{1}{2} \norm{\u - A\bm{x}}_{2}^{2} + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} \\ f_{2}(\bm{x}, \u) &= h_{\delta}(\u - A\bm{x}) + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} \\ f_{3}(\bm{x}, \u) &= \frac{1}{2} \norm{\u - A\bm{x}}_{2}^{2} + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} + \gamma \norm{\bm{x}}_{1} \\ f_{4}(\bm{x}, \u) &= h_{\delta}(\u - A\bm{x}) + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} + \gamma \norm{\bm{x}}_{1} \,, \end{aligned} \end{equation} where $ h_{\delta} : \mathbb{R}^{P} \to \mathbb{R} $ in the second and fourth equations in \eqref{eq:Exmps} is the Huber function defined by: \begin{equation*} h_{\delta}(\u) \coloneqq \begin{cases} \frac{1}{2}\norm{\u}_{2}^{2} & \text{if } \norm{\u}_{2} \leq \delta \\[10pt] \delta \Big( \norm{\u}_{2} - \frac{\delta}{2} \Big) & \text{if } \norm{\u}_{2} > \delta \,.
\end{cases} \end{equation*} The conjugates of the corresponding value functions are given by: \begin{align} p^{*}_{1}(\bm{y}) &= \frac{1}{2\lambda} \norm{A^{T}\bm{y}}_{2}^{2} + \frac{1}{2} \norm{\bm{y}}_{2}^{2} \nonumber \\ p^{*}_{2}(\bm{y}) &= \frac{1}{2\lambda} \norm{A^{T}\bm{y}}_{2}^{2} + h^{*}_{\delta}(\bm{y}) \nonumber \\ p^{*}_{3}(\bm{y}) &= k^{*}(A^{T}\bm{y}) + \frac{1}{2} \norm{\bm{y}}_{2}^{2} \nonumber \\ p^{*}_{4}(\bm{y}) &= k^{*}(A^{T}\bm{y}) + h^{*}_{\delta}(\bm{y}) \nonumber \,, \end{align} where the conjugate of the elastic-net term $ k \coloneqq \frac{\lambda}{2}\norm{\cdot}_{2}^{2} + \gamma\norm{\cdot}_{1} $ is given by: \begin{equation*} k^{*}(\v) = \sum_{i=1}^{N} \max (0, \abs{v_{i}} - \gamma)^{2} / (2 \lambda) \,. \end{equation*} \begin{figure*} \begin{subfigure}{0.98\textwidth} \centering \includegraphics[width=\linewidth]{figs/legends.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering
\includegraphics[width=\linewidth]{figs/HubRid/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/10X50.png} \end{subfigure} \caption{Error plots shown for the gradient sequences computed by the four methods, i.e., \eqref{eq:AnG}, \eqref{eq:AuG}, \eqref{eq:IG} and \eqref{eq:DG}, as well as the original (primal variable) sequence computed using (proximal) gradient descent (solid lines) and its inertial variant (dashed lines). The sequences are evaluated on four problems $ f_{1}, f_{2}, f_{3} $ and $ f_{4} $ ($ f_{j} $ changes from left to right in the given order) for five different values of $ P $, i.e., $ 90, 70, 50, 30 $ and $ 10 $ (P changes from top to bottom in the given order). 
Each cell shows the error plots for the ten sequences for a fixed $ P $ and $ f_{j} $: five for gradient descent or ISTA and five for the Heavy-ball method \cite{Pol64} or iPiasco \cite{OBP15}.} \label{fig:Exp} \end{figure*} For the evaluation we set $ N $ to $ 50 $ and choose $ P $ from $ \{ 10, 30, 50, 70, 90 \} $; this gives us five different plots for each problem, and every value of $ P $ corresponds to a row in \fref{fig:Exp}. We set $ \lambda $ to $ 2 $, $ \gamma $ to $ 0.1 $ and $ \delta $ to $ 0.1 $. We keep $ \delta $ small because for sufficiently large values of $ \delta $, $ f_{2} $ behaves like $ f_{1} $ and $ f_{4} $ behaves like $ f_{3} $. Each element of $ A $ and $ \u $ is drawn from a normal distribution with mean $ 0 $ and standard deviation $ 1 $; thus $ A $ has full rank almost surely. We also scale each column of $ A $ differently to introduce ill-conditioning. For all our problems, we use gradient descent when the problem is entirely smooth and proximal gradient descent when it has a non-smooth component, in both cases with the optimal step size $ 2 / (L + m) $. To study the effect of inertia, we additionally employ the Heavy-ball method \cite{Pol64} and iPiasco \cite{OBP15} with the optimal step size $ 4 / (\sqrt{L} + \sqrt{m})^{2} $ and momentum parameter $ (\sqrt{L} - \sqrt{m})^{2} / (\sqrt{L} + \sqrt{m})^{2} $ on these problems. The gradients and the Hessians are computed using the autograd package \cite{MDA15}. For $ f_{1} $, an analytical expression exists for both $ \bm{x}^{*} (\u) $ and $ \grad{}{p} (\u) $. To obtain a good estimate of these quantities for the remaining problems, we solve the primal and dual problems, respectively, for a large number of iterations. We verify the correctness of the obtained estimate of $ \grad{}{p}(\u) $ by comparing it with the numerical gradient computed using central differences. We then run each algorithm for $ 250 $ iterations for each $ P $ and $ f_{j} $ and generate the respective plots.
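For $ f_{1} $, the dual gradient estimator is easy to check against the closed form: by the envelope theorem, $ \grad{}{p}(\u) = \u - A\bm{x}^{*}(\u) $, while the dual estimator is the minimizer of $ p^{*}_{1}(\bm{y}) - \innerprod{\u}{\bm{y}} $. The sketch below (with random illustrative data, not the exact experimental setup) runs plain gradient descent on the dual and compares the two:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, lam = 50, 30, 2.0
A = rng.standard_normal((P, N))
u = rng.standard_normal(P)

# Analytic gradient: p(u) = min_x 1/2||u - Ax||^2 + (lam/2)||x||^2,
# so grad p(u) = u - A x*(u) by the envelope theorem.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ u)
g_true = u - A @ x_star

# Dual estimator: minimize p_1*(y) - <u, y> with
# p_1*(y) = (1/(2 lam))||A^T y||^2 + (1/2)||y||^2; its Hessian is Q.
Q = A @ A.T / lam + np.eye(P)
eigs = np.linalg.eigvalsh(Q)
tau = 2.0 / (eigs[0] + eigs[-1])          # optimal step size 2/(L + m)
y = np.zeros(P)
for _ in range(500):
    y -= tau * (Q @ y - u)                # gradient of p_1*(y) - <u, y>

assert np.linalg.norm(y - g_true) < 1e-8  # dual iterate matches grad p(u)
```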
Each cell in \fref{fig:Exp} displays plots of $ \norm{\bm{x}^{(n)} (\u) - \bm{x}^{*} (\u)}_{2} $ and $ \norm{\bm{g}^{(n)}_{i} (\u) - \grad{}{p} (\u)}_{2} $ against the number of iterations $ K $. For $ f_{1} $ (first column), we note that all methods converge, as predicted by the theory. We see that for $ P \geq N $ (first column; first three rows) the dual method is slowest to converge, while for $ P < N $ (first column; last two rows) it outperforms the analytical and automatic methods. Since the problem is quadratic, the implicit method yields $ \grad{}{p} $ in a single step. For $ f_{2} $, the dual method converges faster than all other methods for every choice of $ P $. The remaining two problems (third and fourth columns) are not continuously differentiable and therefore \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} show an erratic behavior: the implicit method (red) performs very poorly in most cases, while the analytical (orange) and automatic (green) gradient estimators manage to converge but do so in an irregular manner. Like for $ f_{1} $, the dual method converges slowly for $ f_{3} $ when $ P \geq N $ and quickly when $ P < N $; just like for $ f_{2} $, its performance on $ f_{4} $ is better than that of all other methods for every $ P $. The difference between the error plots generated by gradient descent or ISTA (solid lines) and the Heavy-ball method or iPiasco (dashed lines) is also visible: all the methods benefit from inertia. The fast convergence of the automatic method stems from the fact that the acceleration in the convergence of $ \bm{x}^{(K)} $ is also reflected in that of $ D_{\bm{u}}\bm{x}^{(K)} $ \cite{APM20, MO20}. In conclusion, we note that for the given non-smooth problems, especially $ f_{4} $, the dual gradient estimator is not only stable but also performs better than its primal counterparts.
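The stability of the dual estimator on the non-smooth problems can be sketched on $ f_{3} $: ISTA solves the primal to high accuracy, giving $ \grad{}{p}(\u) = \u - A\bm{x}^{*}(\u) $ by the envelope theorem, while plain gradient descent on the smooth dual $ p^{*}_{3}(\bm{y}) - \innerprod{\u}{\bm{y}} $ gives the dual estimate. The data and iteration counts below are illustrative choices, not the exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, lam, gam = 50, 30, 2.0, 0.1
A = rng.standard_normal((P, N))
u = rng.standard_normal(P)

# Primal f_3 via ISTA: 1/2||u - Ax||^2 + (lam/2)||x||^2 + gam||x||_1.
Lp = np.linalg.eigvalsh(A.T @ A)[-1] + lam
x = np.zeros(N)
for _ in range(5000):
    z = x - (A.T @ (A @ x - u) + lam * x) / Lp
    x = np.sign(z) * np.maximum(np.abs(z) - gam / Lp, 0.0)  # soft threshold
g_primal = u - A @ x          # envelope-theorem gradient at the solution

# Dual: minimize k*(A^T y) + 1/2||y||^2 - <u, y>, where
# k*(v) = sum max(0, |v| - gam)^2 / (2 lam) is the elastic-net conjugate.
def grad_k_conj(v):
    return np.sign(v) * np.maximum(np.abs(v) - gam, 0.0) / lam

Ld = np.linalg.eigvalsh(A @ A.T)[-1] / lam + 1.0
y = np.zeros(P)
for _ in range(5000):
    y -= (A @ grad_k_conj(A.T @ y) + y - u) / Ld

assert np.linalg.norm(y - g_primal) < 1e-6  # the two estimates agree
```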
\section{Conclusion} Access to the variation of the value function of a parametric optimization problem is desirable in a wide range of Machine Learning and Image Processing applications. The methods for computing this gradient usually rely on directly differentiating the objective and are thus limited to settings in which the objective satisfies strong smoothness conditions. We emphasize that the gradient of the value function can also be computed by using a well-known result from convex duality. This method provides enormous flexibility for the numerical approximation of the value function's derivative, allows one to leverage convergence rate results for convex optimization algorithms, and does not rely on differentiability: it can compute a subgradient of the value function. \section*{Acknowledgments} Sheheryar Mehmood and Peter Ochs are supported by the German Research Foundation (DFG Grant OC 150/4-1). {\small \bibliographystyle{ieee}
\section{Introduction} \qquad The exterior Dirichlet problem (EDP)\ for the minimal surface equation consists in the study of existence/nonexistence and uniqueness of solutions of the PDE boundary problem% \begin{equation} \left\{ \begin{array} [c]{l}% \mathcal{M}\left( u\right) :=\operatorname{div}\left( \frac{\nabla u}% {\sqrt{1+\left\Vert \nabla u\right\Vert ^{2}}}\right) =0\text{, }u\in C^{2}\left( \Omega\right) \cap C^{0}\left( \overline{\Omega}\right) \\ u|_{\partial\Omega}=\varphi \end{array} \right. , \label{exDP}% \end{equation} where $\Omega\subset\mathbb{R}^{n}$, $n\geq2$, is an exterior domain, that is, $\Lambda:=\mathbb{R}^{n}\backslash\overline{\Omega}$ is a relatively compact domain, and $\varphi\in C^{0}\left( \partial\Omega\right) $ is a given function. In addition to the existence or not of solutions of (\ref{exDP}), one is also interested in global properties of their graphs in $\mathbb{R}^{n+1}.$ In $\mathbb{R}^{2}$ the EDP has a history which goes back to J. C. C. Nitsche, who proved (Section 4 of \cite{N}) that any solution of (\ref{exDP}) has a $C^{1}$ expansion,$\ $for $\left\Vert x\right\Vert $ large enough, of the form% \begin{equation} u\left( x_{1},x_{2}\right) =c_{1}x_{1}+c_{2}x_{2}+c\log\left\Vert x\right\Vert +O\left( \left\Vert x\right\Vert ^{-1}\right) . \label{expan}% \end{equation} Regarding the existence/nonexistence problem, R. Osserman \cite{O} proved that there is a boundary data on the disk for which the EDP (\ref{exDP}) on the complement of the disk has no bounded solution. R. Krust \cite{Kr} proved that Osserman's boundary data has no solution with horizontal end, that is, with $c_{1}=c_{2}=0$ in (\ref{expan}) or, equivalently, with vertical Gauss map at infinity, leaving open the question of the existence or not of a boundary data for which the EDP has no solution at all, that is, with no end-type restriction. This was solved by N. Kutev and F.
Tomi \cite{KT}, who proved the existence of a boundary data, with arbitrarily small oscillation and with bounded $C^{0,1}$ norm, for which (\ref{exDP}) has no solution, irrespective of the asymptotic behavior. As to the existence problem, it is proved in \cite{KT} and \cite{RT} that (\ref{exDP}) has\ a solution with horizontal end under conditions involving the curvature of the boundary of the domain, the Lipschitz constant and the oscillation of the boundary data. Regarding the behavior in $\mathbb{R}^{n+1},$ $n\geq2,$ of the graphs of the solutions of (\ref{exDP}), we remark that the fundamental solutions (see next section) on the exterior of any given open ball $B$ of $\mathbb{R}^{n}$ provide examples of foliations with horizontal ends of the open subset of $\mathbb{R}^{n+1}$% \[ \left\{ \left( x,z\right) \in\mathbb{R}^{n}\backslash\overline{B}% \times\mathbb{R}\text{ such that }-v\left( x\right) <z<v\left( x\right) \right\} , \] where the graph of $v$ is the top of a generalized catenoid with neck size determined by $B.$ This foliation is parametrized by the angle that the Gauss map of the graph of the fundamental solution at the boundary of the domain makes with the positive vertical axis (note that if $\gamma$ is such an angle relative to a fundamental solution $u\in C^{2}\left( \mathbb{R}% ^{n}\backslash B\right) $, then $\tan\gamma=\sup_{\partial B}\left\Vert \nabla u\right\Vert $). A question that arises is whether a similar phenomenon happens for an arbitrary exterior domain. This question was partially answered by the third author in $\mathbb{R}^{2}$ (Theorem 1 of \cite{R})$.$ A complete answer in the two dimensional case was obtained in \cite{RT}, where the authors prove that the limit of the leaves in Theorem 1 of \cite{R} can be included in the foliation. We recall that R.
Krust proved in \cite{Kr} that if there are two different solutions in $\mathbb{R}^{3}$ with the same Gauss map at infinity, then there is a continuum of solutions foliating the space in between$.$ The case $\mathbb{R}^{n}$ for $n\geq3,$ to the authors' knowledge, was investigated only in the work of E. Kuwert \cite{K}, where it is proved that the Krust foliation theorem \cite{Kr} is true in any dimension, leaving open, however, the problem of the existence or not of such foliations. In the present paper we investigate the existence of foliations for the EDP in arbitrary exterior domains of $\mathbb{R}% ^{n}$ for $n\geq3,$ in the special case that the boundary data $\varphi$ in (\ref{exDP}) is zero. We use in part the technique of \cite{R} to prove that an exterior domain $\Omega$ of $C^{2,\alpha}$ class in $\mathbb{R}^{n},$ $n\geq3,$ determines a non trivial foliation by minimal hypersurfaces of $\Omega\times\mathbb{R}\subset\mathbb{R}^{n+1}$ containing the trivial solution as a leaf. As in the $2-$dimensional case, this foliation has horizontal ends and is parametrized by the maximal angle that the Gauss map of the leaves in $\mathbb{R}^{n+1}$ makes with the positive vertical axis at $\partial \Omega.$ Moreover, any leaf has a limit height at infinity which can be estimated by the geometry of the domain (see Theorem 1 for a precise statement). A natural problem is to extend our result to more general boundary data. To succeed with the technique used here (or that of \cite{RT}), one needs to guarantee the existence of at least one solution with the given boundary data. However, although we do not have a counterexample, we believe that, as in the $2-$dimensional case, without hypotheses on the boundary data such a solution may not exist. And even if one solution exists, it can possibly be the only one. This happens in the $2-$dimensional case on the exterior of a disk for certain boundary data, as proved in Theorem 2.9 of \cite{RT}.
Even so, it seems to us that a more difficult part of the nonzero boundary data case is to estimate the values at infinity of the solutions: as done here, one needs the fundamental solutions as barriers, and the way they are used applies, in principle, only to zero boundary data. \section{Fundamental solutions} Given $\lambda>0$ and $p\in\mathbb{R}^{n}$, let $B_{\lambda}\left( p\right) $ be the ball centered at $p$ and with radius $\lambda,$ $n\geq2$. The radial function \begin{equation} v_{\lambda}\left( x\right) =\lambda\int_{1}^{\frac{r}{\lambda}}\frac {dt}{\sqrt{t^{2\left( n-1\right) }-1}}\text{, }r=\left\Vert x-p\right\Vert \text{, }x\in\mathbb{R}^{n}\backslash B_{\lambda}\left( p\right) , \label{ncat}% \end{equation} is a solution of (\ref{exDP}) in $\mathbb{R}^{n}\backslash B_{\lambda}\left( p\right) $ vanishing at $\partial B_{\lambda}\left( p\right) .$ We call $v_{\lambda}$, or any vertical translation of $v_{\lambda},$ a fundamental solution. The graph of $v_{\lambda}$ is half of an $n-$dimensional catenoid. By using isometries and homotheties one obtains a family of radial solutions, which we also call fundamental solutions, defined in the exterior of any fixed ball, whose gradient at the boundary of the ball varies from $0$ to $\infty.$ In this paper we are interested only in the case $n\geq3.$ We then have% \begin{equation} 0<\sigma_{n}:=\int_{1}^{\infty}\frac{dt}{\sqrt{t^{2\left( n-1\right) }-1}% }<\infty\label{sig}% \end{equation} so that, from (\ref{ncat}), $v_{\lambda}\left( x\right) $ has a limit as $\left\Vert x\right\Vert \rightarrow\infty$, not depending on $p,$ which we denote by $v_{\lambda}\left( \infty\right) $ and which is given by \begin{equation} v_{\lambda}\left( \infty\right) =\sigma_{n}\lambda.
\label{vlinf}% \end{equation} \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{fig1} \caption*{Fundamental Solutions with $\lambda=1$} \label{fig:fig1}% \end{figure} \section{The result and its proof} A fundamental tool in PDE, used several times in the proof of Theorem \ref{mt}, is the comparison principle. In our case it states that if $\Omega$ is a bounded domain in $\mathbb{R}^{n}$ and $u,v\in C^{2}\left( \Omega\right) $ satisfy $\mathcal{M}\left( u\right) =\mathcal{M}\left( v\right) =0$ and $u\leq v$ on $\partial\Omega$, that is,% \[ \lim\sup_{k}\left( u(x_{k})-v(x_{k})\right) \leq0 \] for any sequence $x_{k}$ in $\Omega$ which leaves any compact subset of $\Omega,$ then $u\leq v$ in $\Omega$ (Proposition 3.1 of \cite{RT2})$.$ An easy consequence of the comparison principle is the maximum principle, which asserts that if $u,v\in C^{2}\left( \Omega\right) \cap C^{0}\left( \overline{\Omega}\right) $ satisfy $\mathcal{M}\left( u\right) =\mathcal{M}\left( v\right) =0$ in $\Omega$ then \[ \max_{\overline{\Omega}}\left\vert u-v\right\vert =\max_{\partial\Omega }\left\vert u-v\right\vert \] (Proposition 3.2 of \cite{RT2})$.$ The maximum principle has a useful application in Differential Geometry, known as the tangency principle. In our case it says that if $M_{1}$ and $M_{2}$ are minimal hypersurfaces of $\mathbb{R}^{n+1}$ (with or without boundary and not necessarily graphs) that are tangent at some interior or boundary point $p\in M_{1}\cap M_{2},$ and if $M_{1}$ lies on one side of $M_{2}$ in a neighborhood of $p,$ then $M_{1}$ coincides with $M_{2}$ in a neighborhood of $p$ \cite{FS}. We also remark that once we have a priori $C^{1}$ estimates for the solutions of the minimal surface equation (or of more general quasilinear elliptic PDEs), we also have $C^{1,\alpha}$ a priori estimates from the H\"{o}lder theory (Ch 13 of \cite{GT}).
Then well known arguments (see, for example, Section 2.1 of \cite{RT2}) allow one to reduce the $C^{2,\alpha}$ a priori estimates and the regularity of solutions of quasilinear elliptic PDEs to a priori estimates and the regularity theory of linear elliptic PDEs (Ch 6 of \cite{GT}). In the statement of Theorem \ref{mt} we set, for convenience, $s:=\tan\gamma$, $\left\vert \gamma\right\vert \leq\pi/2$, and we use $u_{s}$ and $u_{s}\left( \infty\right) $ instead of $u_{\gamma}$ and $c_{\gamma}$ as in the Abstract. \begin{theorem} \label{mt}Assume that $\Omega$ is an exterior domain of $C^{2,\alpha}$ class such that $\Lambda:=\mathbb{R}^{n}\backslash\overline{\Omega},$ $n\geq3,$ satisfies the interior sphere condition with maximal radius $\rho$, namely: given $p\in\partial\Lambda$, there is an $\left( n-1\right) -$dimensional sphere $S_{p}$ of radius $\rho$ such that $p\in S_{p}$, $S_{p}\subset \overline{\Lambda}$ and $\rho$ is maximal under these conditions. Let $\varrho$ be the radius of the smallest open ball $B_{\varrho}$ of $\mathbb{R}^{n}$ such that $\partial\Omega\subset\overline{B}_{\varrho}.$ Given $s\in\left[ -\infty,\infty\right] $ there is a bounded function $u_{s}\in C^{\infty}\left( \Omega\right) $ satisfying $\mathcal{M}\left( u_{s}\right) =0$ in $\Omega$ and $u_{-s}=-u_{s}$, and such that: If $-\infty<s<\infty$ then $u_{s}\in C^{\infty}\left( \Omega\right) \cap C^{2,\alpha}\left( \overline{\Omega}\right) ,$ \begin{equation} u_{s}|_{\partial\Omega}=0 \label{bou}% \end{equation} and% \begin{equation} \max_{\partial\Omega}\left\Vert \nabla u_{s}\right\Vert =\max_{\Omega }\left\Vert \nabla u_{s}\right\Vert =\left\vert s\right\vert .
\label{gr}% \end{equation} The graph of $u_{\infty}$ is contained in a $C^{1,1}$-manifold $M\subset \overline{\Omega}\times\mathbb{R}$ with boundary $\partial M=$ $\partial \Omega.$ For any $s\in\left[ -\infty,\infty\right] $ there exists the limit \begin{equation} u_{s}\left( \infty\right) :=\lim_{\left\Vert x\right\Vert \rightarrow\infty }u_{s}\left( x\right) , \label{cinf}% \end{equation} and% \begin{equation} \lim_{\left\Vert x\right\Vert \rightarrow\infty}\left\Vert \nabla u_{s}\left( x\right) \right\Vert =0. \label{gauss}% \end{equation} Moreover, the maps $s\mapsto u_{s}\left( x\right) $, for fixed $x\in\Omega$, and $s\mapsto u_{s}\left( \infty\right) $ are strictly increasing and bounded, and we have the inclusions \begin{equation} \left[ -\sigma_{n}\rho,\sigma_{n}\rho\right] \subset\left[ -u_{\infty }\left( \infty\right) ,u_{\infty}\left( \infty\right) \right] \subset\left[ -\sigma_{n}\varrho,\sigma_{n}\varrho\right] \label{inc}% \end{equation} where $\sigma_{n}$ is given by (\ref{sig}). If one of the inclusions is an equality then $\rho=\varrho$, $\Omega$ is the exterior of a ball of radius $\rho$ and the $u_{s}$ are the fundamental solutions. Finally, the graphs of the solutions $u_{s},$ $s\in\left( -\infty ,\infty\right) ,$ foliate the open subset of $\mathbb{R}^{n+1}$ \begin{equation} O:=\left\{ \left( x,z\right) \in\Omega\times\mathbb{R}\text{ such that }u_{-\infty}\left( x\right) <z<u_{\infty }\left( x\right) \right\} . \label{O}% \end{equation} \end{theorem} \begin{proof} We first consider the case $-\infty<s<\infty$. The case $s=0$ is trivial and, if $u_{s}$ is a solution satisfying (\ref{bou}), (\ref{gr}), (\ref{cinf}) and (\ref{gauss}), then $u_{-s}=-u_{s}$ is a solution satisfying the same conditions, so we may assume $s>0$. Let $a\in\mathbb{R}$ be such that $B_{a}=B_{a}\left( 0\right) $, the open ball in $\mathbb{R}^{n}$ of radius $a$ centered at the origin, contains $\overline{\Lambda}$.
Let $v_{a}\in C^{0}\left( \mathbb{R}^{n}\backslash B_{a}\right) $ be given by (\ref{ncat}) with $p=0$. Since $\left\Vert \nabla v_{a}\left( x\right) \right\Vert \rightarrow0$ as $\left\Vert x\right\Vert \rightarrow\infty$, we may choose $k\in\mathbb{N}$, $k>a+1$, large enough that \begin{equation} \left\Vert \nabla v_{a}\right\Vert _{\partial B_{k}}\leq\frac{s}{2}. \label{vas}% \end{equation} Set $\Omega_{k}=B_{k}\cap\Omega$ and% \begin{equation} T_{k}=\left\{ t\geq 0\ ;\begin{aligned} \ &\ \exists~w_{t}\in C^{2,\alpha}\left( \overline{\Omega }_{k}\right) \text{ s.t. }\mathcal{M}\left( w_{t}\right) =0,\\ &~\sup\nolimits_{\overline{\Omega}_{k}}\left\Vert \nabla w_{t}\right\Vert \leq s, \ w_{t}|_{\partial\Omega}=0,~w_{t}|_{\partial B_{k}}=t \end{aligned}\right\} .\qquad\label{Tk}% \end{equation} The set $T_{k}$ is not empty since $0\in T_{k}$. Moreover, $\sup T_{k}<\infty$: since \[ \sup_{\overline{\Omega}_{k}}\left\Vert \nabla w_{t}\right\Vert \leq s \] and $w_{t}|_{\partial\Omega}=0$, we get $t\leq s\,d_{k}$ for all $t\in T_{k}$, where $d_{k}$ denotes the intrinsic diameter of $\overline{\Omega}_{k}$. We will prove that \[ t_{k}:=\sup T_{k}\in T_{k}% \] and that \begin{equation} \sup_{\Omega_k}\left\Vert \nabla w_{t_{k}}\right\Vert =\sup_{\partial\Omega_k }\left\Vert \nabla w_{t_{k}}\right\Vert =s\text{.} \label{wtk}% \end{equation} Taking a sequence $\left( t_{m}^{k}\right) $ in $T_{k}$ converging to $t_{k}$ as $m\rightarrow\infty$, the corresponding functions $w_{t_{m}^{k}}$ have uniformly bounded $C^{1}$ norm. By elliptic PDE theory (\cite{GT}, \cite{RT2}) there is a subsequence of $w_{t_{m}^{k}}$ converging in the $C^{2}$ norm on $\overline{\Omega}_{k}$ to a function $w_{k}\in C^{2,\alpha }\left( \overline{\Omega}_{k}\right) $ which satisfies $\mathcal{M}\left( w_{k}\right) =0$ in $\Omega_{k}$. Clearly $w_{k}|_{\partial\Omega}=0$, $w_{k}|_{\partial B_{k}}=t_{k}$ and $\sup_{\Omega_{k}}\left\Vert \nabla w_{k}\right\Vert \leq s$. It follows that $t_{k}\in T_{k}$ and that $w_{k}=w_{t_{k}}$.
From the maximality of $t_{k}$ we claim that we cannot have $\sup_{\Omega_{k}% }\left\Vert \nabla w_{k}\right\Vert \nolinebreak<\nolinebreak s.$ Indeed: consider a function $\phi\in C^{2,\alpha}\left( \mathbb{R}^{n}\right) $ such that $\phi|_{B_{k-1}}=0$ and $\phi|_{\mathbb{R}^{n}\backslash B_{k}}=1$, set \[ C_{0}^{2,\alpha}(\overline{\Omega}_{k})=\left\{ \left. \omega\in C^{2,\alpha}(\overline{\Omega}_{k})\text{ }\right\vert \text{ \ \ }% \omega|_{\partial\Omega_{k}}=0\right\} , \] and define $T\colon\,[-1,1]\times C_{0}^{2,\alpha}(\overline{\Omega}% _{k})\rightarrow C^{\alpha}(\overline{\Omega}_{k})$ by \[ T\left( t,\omega\right) =\mathcal{M}\left( \omega+w_{k}+t\phi\right) . \] Then $T\left( 0,0\right) =0.$ One may see that the Fr\'{e}chet derivative $\partial_{2}T\left( 0,0\right) =d\mathcal{M}_{w_{k}}$ is invertible (Theorem 3.3 of \cite{GT}) so that, from the implicit function theorem on Banach spaces (Theorem 17.6 of \cite{GT}), there exists a continuous function $t\mapsto\omega\left( t\right) \in C_{0}^{2,\alpha }(\overline{\Omega}_{k})$ (continuous in the $C^{2,\alpha}$ topology)$,$ with $\omega(0)=0$, such that $T\left( t,\omega(t)\right) =0,$ $t\in\left( -\varepsilon,\varepsilon\right) .$ Therefore, since $\left\Vert \operatorname{\nabla}w_{k}\right\Vert _{\Omega_k}<s$, there exists $t\in\left( 0,\varepsilon\right) $ such that \[ \sup_{\Omega_k}\left\Vert \nabla\left( \omega\left( t\right) +w_{k}+t\phi\right) \right\Vert <s. \] Since \[ \mathcal{M}\left( \omega\left( t\right) +w_{k}+t\phi\right) =T(t,\omega (t))=0, \] $\omega\left( t\right) +w_{k}+t\phi=0$ at $\partial\Omega$ and $\omega(t)+w_{k}+t\phi=t_{k}+t$ at $\partial B_{k},$ it follows that $t_{k}+t\in T_{k},$ a contradiction since $t_{k}=\sup T_{k}.$ We then have $\sup_{\Omega_{k}}\left\Vert \nabla w_{k}\right\Vert =s$. We claim that% \begin{equation} \sup_{\partial B_{k}}\left\Vert \nabla w_{k}\right\Vert \leq s/2.
\label{was}% \end{equation} Indeed: Since the graph of $v_{a}$ is vertical at $\partial B_{a}$ it follows from the comparison principle (see \cite{GT}, Ch 10, or Proposition 3.1 of \cite{RT2}) that \begin{equation} v_{a}+t_{k}-v_{a}(x_{0})\leq w_{k}\leq t_{k} \label{in}% \end{equation} where $x_{0}$ is an arbitrary but fixed point of $\partial B_{k}.$ From (\ref{vas}) and (\ref{in}) we get (\ref{was}). By the gradient maximum principle (\cite{GT}, Ch 15) we obtain% \[ \sup_{\Omega_k}\left\Vert \nabla w_{k}\right\Vert =\sup_{\partial\Omega _k}\left\Vert \nabla w_{k}\right\Vert =s. \] Letting $k\rightarrow\infty$ and using the diagonal method we obtain a subsequence of $w_{k}$ converging in $C^{2}$ on compact subsets of $\overline{\Omega}$ to a function $u_{s}\in C^{2,\alpha}\left( \overline {\Omega}\right) $ satisfying $\mathcal{M}\left( u_{s}\right) =0$ in $\Omega,$ (\ref{bou}) and (\ref{gr}). From elliptic PDE regularity \cite{GT}, $u_{s}\in C^{\infty}\left( \Omega\right) $. Now, for any $s\in\left[ 0,\infty\right) ,$ the graph $G_{s}$ of $u_{s}$ is, by construction, of uniformly bounded slope (see \cite{S}). It follows from Proposition 3 of \cite{S} that $G_{s}$ is \emph{regular at infinity}, that is, $u_{s}$ has a twice differentiable expansion% \begin{equation} u_{s}\left( x\right) =c_{s}+a_{s}\left\Vert x\right\Vert ^{2-n}+\sum _{j=1}^{n}c_{s,j}x_{j}\left\Vert x\right\Vert ^{-n}+O\left( \left\Vert x\right\Vert ^{-n}\right) \label{exp}% \end{equation} from which it follows that% \begin{equation} u_{s}\left( \infty\right) :=\lim_{\left\Vert x\right\Vert \rightarrow\infty }u_{s}\left( x\right) =c_{s}. \label{cs}% \end{equation} It also follows from (\ref{exp}) that \[ \lim_{\left\Vert x\right\Vert \rightarrow\infty}\left\Vert \nabla u_{s}\right\Vert \left( x\right) =0, \] which implies that $G_{s}$ is horizontal at infinity, that is, (\ref{gauss}) is satisfied.
This proves that (\ref{cinf}) and (\ref{gauss}) are satisfied for $s\in\left[ 0,\infty\right) .$ Let $v_{\varrho}$ be the fundamental solution on $\mathbb{R}^{n}\backslash B_{\varrho}$ whose gradient is infinite at $\partial B_{\varrho}.$ Given $s\in\left[ 0,\infty\right) $ we claim that $u_{s}\left( \infty\right) <v_{\varrho}\left( \infty\right) .$ Indeed, translating the graph $G_{\varrho}$ of $v_{\varrho}$ vertically upwards from $-\infty$, since the gradient of $v_{\varrho}$ at the boundary of $B_{\varrho}$ is infinite, it follows from the tangency principle that the first contact between $G_{\varrho}$ and the graph of $u_{s}$ has to be at infinity and with the boundary of $G_{\varrho}$ strictly below the level $x_{n+1}=0.$ Hence, at the level $x_{n+1}=0$ one necessarily has $u_{s}\left( \infty\right) <v_{\varrho}\left( \infty\right) .$ It follows from the claim and from (\ref{vlinf}) that $u_{s}$ is bounded by $\sigma\varrho$ for all $s\in\left[ 0,\infty\right) .$ Clearly we have $u_{s}\leq u_{t}$ and also $u_{s}\left( \infty\right) \leq u_{t}\left( \infty\right) $ if $s\leq t$. Hence, for any increasing sequence $s_{m}\rightarrow\infty$ the sequence $u_{s_{m}}$ converges uniformly on compact subsets of $\Omega$ to a $C^{\infty}$ function $u_{\infty}$ in $\Omega$ satisfying $\mathcal{M}\left( u_{\infty}\right) =0$.
To prove that the graph $G_{\infty}$ of $u_{\infty}$ is contained in a $C^{1,1}$ manifold with boundary $\partial\Omega$, consider a fixed ball $B_{a}$ with $a>\varrho.$ By \cite{Mi}, given $s\in\left[ 0,\infty\right] $ there is a minimizer $v_{s}$ on the space $\operatorname*{BV}\left( \Omega_{a}\right) $ of bounded variation functions on $\Omega_{a}$ (see \cite{G})$,$ for the functional \[ \mathcal{F}_{s}\left( w\right) =\int_{\Omega_{a}}\sqrt{1+\left\Vert \nabla w\right\Vert ^{2}}+\int_{\partial\Omega_{a}}\left\vert w-\phi_{s}\right\vert ,\text{ }w\in\operatorname*{BV}\left( \Omega_{a}\right) , \] where $\phi_{s}\in C^{\infty}\left( \partial\Omega_{a}\right) $ satisfies $\phi_{s}|_{\partial\Omega}=0$, $\phi_{s}|_{\partial B_{a}}=u_{s}|_{\partial B_{a}}.$ Since $u_{s}$ is also a minimizer for $\mathcal{F}_{s}$ for $0\leq s<\infty,$ we have $u_{s}|_{\Omega_{a}}=v_{s}$ by uniqueness \cite{Mi} (the equality is in\ $\operatorname*{BV}\left( \Omega_{a}\right) $). Noting that \begin{align*} \lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( w\right) & =\mathcal{F}% _{\infty}\left( w\right) ,\text{ }w\in\operatorname*{BV}\left( \Omega _{a}\right) ,\\ \lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( u_{s}\right) & =\lim_{s\rightarrow\infty}\mathcal{F}_{\infty}\left( u_{s}\right) , \end{align*} we have (writing only $u_{s}$ instead of $u_{s}|_{\Omega_{a}})$% \begin{align*} \mathcal{F}_{\infty}\left( v_{\infty}\right) & =\lim_{s\rightarrow\infty }\mathcal{F}_{s}\left( v_{\infty}\right) \geq\lim_{s\rightarrow\infty }\mathcal{F}_{s}\left( v_{s}\right) =\lim_{s\rightarrow\infty}% \mathcal{F}_{s}\left( u_{s}\right) \\ & =\lim_{s\rightarrow\infty}\mathcal{F}_{\infty}\left( u_{s}\right) \geq\mathcal{F}_{\infty}\left( u_{\infty}\right) , \end{align*} where, in the last inequality, we used that $\mathcal{F}_{\infty}$ is lower semicontinuous.
It follows that $\mathcal{F}_{\infty}\left( v_{\infty }\right) =\mathcal{F}_{\infty}\left( u_{\infty}\right) $ and hence, by uniqueness, $v_{\infty}=u_{\infty}$ in $\Omega_{a}$. From Theorem 4.2 of \cite{Bo} applied to the functional $\mathcal{F}_{\infty}$, by choosing $\Phi=\partial\Omega$, $\phi_{i}\equiv0$, and using also Theorem 4.7, we conclude that the graph of $u_{\infty}$ is contained in a $C^{1,1}$ manifold $M$ whose boundary is $\partial\Omega.$ We have seen that $s\mapsto u_{s}\left( \infty\right) $ is increasing and bounded by $\sigma\varrho.$ If $c:=\lim_{s\rightarrow\infty}u_{s}\left( \infty\right) $ then we have $u_{s}\leq u_{\infty}\leq c,$ $s\in\left[ 0,\infty\right) ,$ by the comparison principle, and hence there is the limit $u_{\infty}\left( \infty\right) $ of $u_{\infty}\left( x\right) $ as $\left\Vert x\right\Vert \rightarrow\infty$ and $u_{\infty}\left( \infty\right) =c,$ proving the second inclusion of (\ref{inc}). We shall prove now (\ref{gauss}) for $s=\infty.$ From the way $u_{\infty}$ is obtained we cannot conclude directly that the graph of $u_{\infty}$ is of uniformly bounded slope and hence we do not know if $u_{\infty}$ is regular at infinity and admits an expansion as (\ref{exp}). But this is actually the case. Indeed: since $u_{\infty}\left( \infty\right) =c$ the tangent cone to the graph of $u_{\infty}$ at infinity is the hyperplane $\mathbb{R}^{n}=\left\{ x_{n+1}=0\right\} $ of $\mathbb{R}^{n+1}$ (see \cite{Si}) and hence, from Theorem 1 of \cite{Si}, it follows that $\nabla u_{\infty}$ has a limit at infinity and $\left\Vert \nabla u_{\infty}\right\Vert $ is bounded outside some compact set. Since $u_{\infty }$ is bounded this limit has to be zero and this proves (\ref{gauss}) for $s=\infty.$ Let $c\in\lbrack0,\sigma_{n}\rho]$ be given.
We prove that there is a nonnegative solution $w_{c}\in C^{0}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right) $ of (\ref{exDP}) such that $w_{c}% |_{\partial\Omega}=0$ and% \[ \underset{\left\Vert x\right\Vert \rightarrow\infty}{\lim}w_{c}\left( x\right) =c. \] Define \begin{equation} \digamma=\left\{ f\in C^{0}\left( \overline{\Omega}\right) ;\begin{aligned} \ &~f\text{ is a subsolution of }\mathcal{M}\text{ in }\Omega,\\ &~f=0\text{ on }\partial\Omega\text{ and }\limsup \nolimits_{\left\Vert x\right\Vert \rightarrow\infty}f\left( x\right) \leq c \end{aligned}\right\} .\qquad\label{per}% \end{equation} Clearly $\digamma\neq\varnothing$ and it follows from the comparison principle that $f\leq c$ for all $f\in\digamma.$ We may then apply Perron's method (\cite{GT}, Section 2.8) to conclude that \[ w_{c}\left( x\right) =\sup\left\{ f\left( x\right) ;\text{ }f\in \digamma\right\} \text{, }x\in\overline{\Omega}, \] is $C^{\infty}$ and satisfies $\mathcal{M}\left( w_{c}\right) =0$ in $\Omega$. To prove that \begin{equation} \lim_{\left\Vert x\right\Vert \rightarrow\infty}w_{c}\left( x\right) =c \label{wc}% \end{equation} take $a>0$ large enough that $\overline{\Lambda}\subset B_{a}$ and $v_{a}\left( \infty\right) >c$. We have that $f\in C^{0}\left( \overline{\Omega}\right) $ given by% \[ f\left( x\right) =\left\{ \begin{array} [c]{l}% 0\text{ if }x\in\overline{\Omega}\cap B_{a}\\ \max\{0,v_{a}\left( x\right) -\left( v_{a}\left( \infty\right) -c\right) \}\text{, if }x\in\mathbb{R}^{n}\backslash B_{a}% \end{array} \right. \] is a subsolution of (\ref{exDP}) satisfying $f|_{\partial \Omega}=0$ and \begin{equation} \underset{\left\Vert x\right\Vert \rightarrow\infty}{\lim}f\left( x\right) =c. \label{win}% \end{equation} It follows that $f\in\digamma$ and then $f\leq w_{c}\leq c,$ which proves (\ref{wc}). It remains to prove that $w_{c}$ extends $C^{0}$ to $\overline{\Omega}$ and that $w_{c}|_{\partial\Omega}=0$.
Given $p\in\partial\Omega$, by hypothesis there is an open ball $B_{\rho}$ contained in $\Lambda$ such that $\partial B_{\rho}$ is tangent to $\partial\Omega$ ($=\partial\Lambda)$ at $p$. Since \[ c\leq\sigma_{n}\rho=v_{\rho}\left( \infty\right) \] and $v_{\rho}=0$ at $\partial B_{\rho}$ it follows from the comparison principle\ that $0\leq w_{c}\leq v_{\rho}$. Since $p$ is arbitrary this proves the claim, that is, $w_{c}$ extends $C^{0}$ to $\overline{\Omega}$ and $w_{c}|_{\partial\Omega}=0$. Now, assume that $0\leq c<\sigma_{n}\rho.$ Then we may find a fundamental solution $\widetilde{v}$ defined on the exterior of a ball of radius $\rho,$ contained in $\Lambda$, tangent to $\partial\Omega$, with bounded gradient at the boundary of the ball and such that \[ \widetilde{v}\left( \infty\right) =\frac{c+\sigma_{n}\rho}{2}. \] By the comparison principle it follows that $0\leq w_{c}\leq\widetilde{v}.$ This proves that $w_{c}$ extends $C^{1}$ to $\overline{\Omega}$ and, by PDE regularity \cite{GT}, $w_{c}\nolinebreak\in\nolinebreak C^{2,\alpha}\left( \overline{\Omega}\right) \nolinebreak\cap\nolinebreak C^{\infty}\left( \Omega\right) .$ Setting% \[ s_{c}=\max_{\partial\Omega}\left\Vert \nabla w_{c}\right\Vert , \] we prove that $u_{s_{c}}=w_{c}.$ By contradiction, assume the opposite. Then, setting% \begin{equation} d:=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u_{s_{c}} \label{hav}% \end{equation} we cannot have $d>c$ or $d<c$. Indeed: Assume, by contradiction, that $d>c.$ Let $p\in\partial\Omega$ be such that $\left\Vert \nabla w_{c}\right\Vert \left( p\right) =s_{c}.$ If $\left\Vert \nabla u_{s_{c}}\right\Vert \left( p\right) =s_{c}$ we cannot have $w_{c}\left( x\right) \leq u_{s_{c}}\left( x\right) $ for all $x\in\overline{\Omega}$ because of the boundary tangency principle. But if $w_{c}>u_{s_{c}}$ somewhere, this inequality can hold only on a bounded open subset of $\Omega$ since $c<d$.
One can then make a vertical translation of the graph of one of the solutions to get a tangency between their graphs, with one of them on one side of the other, contradicting the tangency principle. The remaining possibility% \[ \left\Vert \nabla u_{s_{c}}\right\Vert \left( p\right) <s_{c}=\left\Vert \nabla w_{c}\right\Vert \left( p\right) \] also implies that $w_{c}>u_{s_{c}}$ can hold only on a bounded open subset of $\Omega$, leading, as before, to a contradiction with the tangency principle. The case $d<c$ cannot happen by the same arguments. This proves that $c=d$ and, arguing with the tangency principle again, that $w_{c}=u_{s_{c}}.$ Finally, take an increasing sequence $c_{m}\in\left[ 0,\sigma_{n}\rho\right) $ converging to\ $\sigma_{n}\rho$ as $m\rightarrow\infty$. The sequence $s_{c_{m}}$ is increasing and then has a limit $s\in\left[ 0,\infty\right] .$ The sequence $\left( u_{s_{c_{m}}}\right) $ converges in $C^{2}$ on compact subsets of $\Omega$ to a solution $u_{s}\in C^{0}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right) ,$ $u_{s}|_{\partial\Omega}=0$ and $\sup_{\partial\Omega}\left\Vert \nabla u_{s}\right\Vert =s.$ As before we obtain $u_{s}=w_{\sigma_{n}\rho}$, proving that \[ \left[ 0,\sigma_{n}\rho\right] \subset\left[ 0,u_{\infty}\left( \infty\right) \right] . \] This concludes the proof of (\ref{inc}). If one of the inclusions in (\ref{inc}) is an equality and the corresponding graphs of the solutions with infinite gradient at $\partial\Omega$ are not the same, then either one is below the other or they intersect in interior points. The first case cannot occur because of the boundary tangency principle. Nor can the second, since otherwise one could make a vertical translation of one of them to get a tangency between the graphs, with one on one side of the other, contradicting the tangency principle.
Hence, in case of equality in some of the inclusions (\ref{inc}), $\Omega$ is the exterior of a ball of radius $\rho=\varrho.$ It is a particular consequence of the proof of the foliation property, given below, that the solutions $u_{s}$ are necessarily the fundamental solutions. To prove that the graphs of the solutions $u_{s},$ $s\in\left( -\infty,\infty\right) ,$ foliate the open subset $O$ of $\mathbb{R}^{n}$ (defined in (\ref{O})) we apply Theorem 2 of \cite{K}. It is enough to prove that any solution $u\in C^{0}\left( \overline{\Omega}\right) $ of the minimal surface equation in $\Omega$ with horizontal end and such that $u|_{\partial\Omega}=0$ coincides with $u_{s}$ for some $s\in\left[ -\infty,\infty\right] .$ By using Theorems 4.2 and 4.7 of \cite{Bo}, as above, we may conclude that the graph of $u$ is a $C^{1,1}$ manifold $M$ with boundary and, since $u\in C^{0}\left( \overline{\Omega}\right) $ is a solution of the minimal surface equation in $\Omega$, $M$ is a minimal hypersurface of $\mathbb{R}^{n+1}$ with boundary $\partial\Omega$. Representing $M,$ locally, as a graph near any given point of $\partial\Omega\ (=\partial M),$ we may use PDE regularity theory to conclude that, indeed, $M$ is a $C^{2,\alpha}$ manifold. Moreover, the assumption that $u$ has horizontal end implies, as already argued before, that $u$ is bounded and that there exists the limit \[ d:=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u\left( x\right) . \] If $M$ has no vertical tangent space at any point of $\partial\Omega$ then it follows by PDE regularity that $u\in C^{2,\alpha}\left( \overline{\Omega }\right) \cap C^{\infty}\left( \Omega\right) $.
Setting $s=\max _{\partial\Omega}\left\Vert \nabla u\right\Vert ,$ we can argue as before to prove that $u=u_{s}.$ Assume that $M$ has a vertical tangent space at some point of $\partial \Omega.$ We claim then that $u=u_{\infty}$ or $u=u_{-\infty}.$ We first prove that $d=u_{\infty}\left( \infty\right) $ or $d=u_{-\infty}\left( \infty\right) $. By contradiction, first assume that $0<u_{\infty}\left( \infty\right) <d.$ Arguing with the tangency principle it is easy to see then that $u_{\infty }\leq u.$ But then $u_{\infty}\in C^{0}\left( \overline{\Omega}\right) $ and the graph $G$ of $u_{\infty}$ is a minimal hypersurface of $C^{2,\alpha}$ class with boundary $\partial\Omega$ which has a vertical tangent space at some point $p\in\partial\Omega.$ The hypersurfaces $G$ and $M$ then must have a tangency at $p.$ By the boundary tangency principle it follows that $G=M,$ a contradiction. If $0\leq d<u_{\infty}\left( \infty\right) $, since $u_{s}$ converges uniformly on compact subsets of $\Omega$ to $u_{\infty},$ as $s\rightarrow\infty,$ there is $s$ large enough such that $u_{s}\left( \infty\right) >d$. By using the tangency principle one may see that this leads to a contradiction. For similar reasons one excludes the cases $d<u_{-\infty}\left( \infty\right) $ and $u_{-\infty}\left( \infty\right) <d\leq0.$ It then follows that $d=u_{\infty}\left( \infty\right) $ or $d=u_{-\infty }\left( \infty\right) $, from which one easily obtains, using the tangency principle once more, that $u=u_{\infty}$ or $u=u_{-\infty}.$ This concludes the proof of the theorem. \end{proof} \noindent\textbf{Remarks.} \noindent(a) It is true that the graph of the limit solution $u_{\infty}$ of the EDP in $\mathbb{R}^{2}$ is a $C^{1,\alpha}$ surface with boundary. Moreover, it holds that $u_{\infty}\in C^{0}\left( \overline{\Omega}\right) $ in this case \cite{RT}.
In higher dimensions, as proved in Theorem \ref{mt}, the graph of the solution $u_{\infty}$ is part of a $C^{1,1}$ manifold with boundary $\partial\Omega.$ However, we do not know if $u_{\infty}\in C^{0}\left( \overline{\Omega}\right) $. The $2$-dimensional case is studied in \cite{RT} using classical Plateau problem techniques, which are intrinsically $2$-dimensional. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{fig2} \caption*{Possible solutions on arbitrary domains} \label{fig:fig2}% \end{figure} \noindent(b) The EDP for the minimal surface equation is studied in the Riemannian setting in \cite{ARS} and \cite{ER}. \bigskip
\section*{Acknowledgments} SM, CHS-T and HW are financed in part through the NExT Institute. SM is also funded by the STFC consolidated Grant No. ST/L000296/1. HW acknowledges financial support from the Magnus Ehrnrooth Foundation, the Finnish Academy of Sciences and Letters and STFC Rutherford International Fellowship (funded through the MSCA-COFUND-FP Grant No. 665593). The work of AC is funded by the Department of Science and Technology, Government of India, under Grant No. IFA18-PH 224 (INSPIRE Faculty Award). Some of us also acknowledge the use of the IRIDIS High Performance Computing Facility and associated support services at the University of Southampton. We thank Nishita Desai, Terhi J\"arvinen, Emmanuel Olaiya, Ian Tomalin and Jose Zurita for useful discussions. \section{Collider analysis} We shall look at the process $pp\rightarrow H,A \rightarrow \tilde{N}\tilde{N}$, where both sneutrinos decay visibly, giving a charged lepton ($\ell = e, \mu$) and a charged higgsino. In this model the sneutrinos do not have a well-defined lepton number and the lepton number violating mass splitting is larger than the decay width, so each sneutrino has a $50\%$ chance of decaying to a lepton of either charge. As the backgrounds for same-sign dileptons are smaller, we choose events with two same-sign leptons and veto events with a third hard lepton with $p_{T}>15$~GeV. If the higgsinos have a relatively small mass splitting, say close to $5$--$10$~GeV, then the sneutrino decay width into visible final states is small. Numerically the minimal width is close to $10^{-14}$~GeV, leading to a mean decay length of a couple of millimeters. In contrast, in the region of phase space where the production of heavy Higgses is possible with a reasonable cross section and the decays are kinematically allowed, the decay widths are typically a few times $10^{-13}$~GeV.
This implies that the mean decay lengths are around hundreds of micrometers, which leads to final states with DVs. Thus for a large fraction of the signal events we should have two DVs with charged leptons and soft charged tracks. In this work, we require both of the leptons ($\ell = e, \mu$) to be displaced. Such a requirement means that the backgrounds come from processes involving either $b$-quarks or $c$-quarks. The same-sign requirement further reduces these backgrounds since, with only two heavy-flavour quarks, same-sign leptons are possible only through flavour oscillations. Our signal events will have missing transverse momentum in the form of the LSPs and, if the spectrum is not compressed, it is possible to require large $\slashed{p}_{T}$ with a rather good signal acceptance. Heavy flavour events rarely satisfy this requirement. In order to have neutrinos with significant transverse momentum, the quarks in the hard process must have large $p_T$, and the cross section falls quickly with increasing $p_{T}$. Also, $t\overline{t}$ events with both tops decaying hadronically can kinematically mimic the signal events. \subsection{Event generation procedure} For detailed analysis we choose the three Benchmark Points (BPs) given in Table \ref{tab:benchmark}. While they differ slightly in their mass spectra, their main difference is in the sneutrino lifetimes. BP-I represents a ``typical'' benchmark for EW scale seesaw with a decay width corresponding to a mean decay length around $1$~{\rm mm}. The second benchmark has a shorter lifetime with a mean decay length less than $0.3$~{\rm mm}, while the third one has a mean decay length around $2$~{\rm mm}, which is about as long as one can get without going to very compressed spectra. For such compressed cases any leptons would be soft and triggering the event would be more difficult. We refer interested readers to \cite{Fukuda:2019kbp,Bhattacherjee:2020nno} for recent proposals on these issues.
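As an illustrative cross-check (not part of the analysis chain), the mean decay lengths quoted above follow from the sneutrino widths via $c\tau=\hbar c/\Gamma$, up to an $O(1)$ boost factor $\beta\gamma$ which we neglect here; the widths are those listed for the BPs in Table \ref{tab:benchmark}:

```python
# Illustrative sketch: proper mean decay length c*tau from a decay width.
# The O(1) boost factor beta*gamma of the produced sneutrino is neglected.
HBARC_GEV_M = 1.97327e-16  # hbar*c in GeV*m

def ctau_mm(width_gev):
    """Proper mean decay length in mm for a width given in GeV."""
    return HBARC_GEV_M / width_gev * 1e3

# Sneutrino widths of the three benchmark points
for bp, gamma in [("BP-I", 1.6e-13), ("BP-II", 8.5e-13), ("BP-III", 9e-14)]:
    print(bp, round(ctau_mm(gamma), 2), "mm")
```

This gives about $1.2$~mm for BP-I, $0.23$~mm for BP-II and $2.2$~mm for BP-III, consistent with the lifetimes discussed above.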
Regarding the constraints on the spectrum, the higgsinos need to be heavier than about $160$~GeV with a mass splitting of $10$~GeV \cite{Sirunyan:2018iwl}. Since $m_{H}>2m_{\tilde{N}}>2m_{\tilde{H}}$, the heavy Higgses need to be beyond $400$~GeV. However, the production cross section falls off rather quickly beyond $500$~GeV. In this range $\tan \beta$ needs to be low to avoid the constraints from $H\rightarrow \tau^{+}\tau^{-}$ searches \cite{Sirunyan:2018zut,Aad:2020zxo}. For our BPs we have $2<\tan\beta < 3$ as this both evades the experimental constraints and gives a large BR($H\rightarrow \tilde{N}\tilde{N}$). We simulate 100,000 signal events and $\mathcal O(10^{7})$ events each for the $t\bar{t}$ and $b\bar{b}$ backgrounds using {\tt MadGraph5 v2.6.6} \cite{Alwall:2011uj} at LO. Parton showering and hadronisation are modelled with {\tt Pythia v8.2} \cite{Sjostrand:2014zea} and fast detector simulation is performed with {\tt Delphes v3.3.3} \cite{deFavereau:2013fsa} using the ATLAS card. We use a modified version of the default ATLAS card to implement the impact parameter smearing effects. The event rates are then corrected to NLO accuracy with a $k$-factor of 2 for the signal \cite{Spira:1993bb,Spira:1995rr,Muhlleitner:2006wx} and to next-to-next-to-leading order (NNLO) accuracy with a $k$-factor of 1.8 for the two dominant backgrounds \cite{Czakon:2011xx, Aliev:2010zk, Catani:2020kkl}\footnote{Note that, as pointed out in Ref.~\cite{Catani:2020kkl}, at high transverse momenta of the bottom quarks, large logarithmic terms of the form $\ln(\frac{p^b_T}{m_b})$ become important and need to be resummed properly while estimating the NNLO cross section for the $b\bar{b}$ process. For our study, bottom quarks are pair produced with a minimum $p^b_T$ = 200 GeV, so we make a conservative choice of $k$-factor = 1.8.}.
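The per-event weights used to normalise the samples to the target luminosity (quoted in Table \ref{tab:events}) are simply $w=\sigma\mathcal{L}/N_{\rm gen}$; a minimal sketch of this bookkeeping, with the cross sections as listed in that table:

```python
# Sketch: per-event weight w = sigma * L / N_generated for scaling a Monte
# Carlo sample to the target integrated luminosity L = 137 fb^-1.
LUMI_PB_INV = 137 * 1000.0  # 137 fb^-1 expressed in pb^-1

def event_weight(sigma_pb, n_generated):
    """Weight applied to each generated event of a sample."""
    return sigma_pb * LUMI_PB_INV / n_generated

print(round(event_weight(0.0666, 100_000), 2))    # signal BP-I
print(round(event_weight(369.0, 10_000_000), 1))  # hadronic ttbar
```

This reproduces the quoted weight factors of 0.09 for BP-I and 5.1 for the hadronic $t\bar t$ sample.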
Further, in order to generate events efficiently, we demand that the top quarks are decaying hadronically and the $b$-hadrons, obtained after parton shower and hadronisation of the $b$-quarks, are decaying through leptonic final states. We generate two sets of $b\bar b$ samples by varying the $p_T$ of the bottom quarks at the generation level. We find that the one with generation level cut $p^{b}_{T, {\rm min}}$ = 200 GeV has better sensitivity. In Table \ref{tab:events}, we show the details of event generation of individual signal and background events. \begin{table}[!htb] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Observable & BP-I & BP-II & BP-III \\ \hline \hline Lightest Higgs mass & 125.0 & 125.2 & 125.6 \\ \hline 2nd Higgs mass & 338.2 & 322.1 & 370.4\\ \hline 3rd Higgs mass & 462.2 & 484.0 & 483.6 \\ \hline Lightest Pseudoscalar Higgs mass & 259.0 & 256.8 & 261.7\\ \hline 2nd Pseudoscalar Higgs mass & 446.9 & 470.2 & 468.1 \\ \hline \hline Lightest Sneutrino mass & 219.4 & 219.9 & 228.4 \\ \hline Lightest CP-odd Sneutrino mass & 220.0 & 219.8 & 229.2 \\ \hline \hline Lightest Chargino mass & 185.8 & 177.6 & 205.1 \\ \hline Lightest Neutralino mass & 177.3 & 168.2 & 196.0 \\ \hline Next-to Lightest Neutralino mass & 200.3 & 193.2 & 218.6 \\ \hline \hline BR($h_3 \to \tilde{\nu_1} \tilde{\nu_1}$) (in \%) & 5.3 & 4.9 & 4.9 \\ \hline BR($A_3 \to \tilde{\nu_1} \tilde{\nu_1}^\prime$) (in \%) & 1.2 & 1.3 & 1.6 \\ \hline BR($\tilde{\nu_1} \to \ell \tilde{\chi^{\pm}_1}$) (in \%) & 48.2 & 48.6 & 45.3 \\ \hline BR($\tilde{\nu_1} \to \nu \tilde{\chi^{0}_1}$) (in \%)& 51.8 & 51.4 & 54.7 \\ \hline $\Gamma(\tilde{\nu_1})$ (GeV) & $1.6 \times 10^{-13}$ & $8.5 \times 10^{-13}$ & $9 \times 10^{-14}$ \\ \hline \end{tabular} \caption{Details of the BPs (all the masses are in GeV). 
The leptonic BRs include electrons, muons and taus.} \label{tab:benchmark} \end{center} \end{table} \begin{table}[!htb] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Process & Cross section (pb) & Events generated & Event weight factor \\ \hline Signal (BP-I) & 0.0666 & 100,000 & 0.09 \\ \hline Signal (BP-II) & 0.0558 & 100,000 & 0.08 \\ \hline Signal (BP-III) & 0.0508 & 100,000 & 0.07 \\ \hline $t\bar{t}$ (hadronic)& 369.0 & 10,000,000 & 5.1 \\ \hline $b\bar b$ ($p^{b}_{T, {\rm min}}$ = 30 GeV) & 1183654.3 & 5,000,000 & 32432.1 \\ \hline $b\bar b$ ($p^{b}_{T, {\rm min}}$ = 200 GeV) & 378.0 & 10,000,000 & 5.2 \\ \hline \end{tabular} \caption{Event simulation details at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$.} \label{tab:events} \end{center} \end{table} \subsection{Definition of displaced objects} We now provide the details of the observables used to probe the displaced signal events. \begin{itemize} \item \underline{Displaced leptons}: The isolated leptons ($\ell = e, \mu$) with $p_{T}(\ell)>10$~GeV and $|\eta(\ell)| < 2.5$ must satisfy $|{d_\perp}|> 0.2~{\rm mm}$ \cite{CMS:2014hka, Aad:2019tcc}, where $d_{\perp}$ is the transverse impact parameter relative to the primary vertex. The lepton isolation is achieved by demanding that the angular separation between the lepton and jets, $\Delta R (\ell, jet)$, be greater than 0.4. Additionally, we demand that the leptons carry at least 80\% (90\%) of the transverse momentum within a cone of radius R = 0.5 in the case of a muon (electron). Note that we have used a modified version of the default ATLAS card available within {\tt Delphes} to implement the impact parameter smearing effects and obtain the displaced leptons. \item \underline{DVs using displaced tracks}: The tracks used in the DV reconstruction must satisfy the following requirements: $p_{T}>1$~GeV and $|{d_\perp}| > 2~{\rm mm}$.
Further, the significance of $d_\perp$ with respect to the beam axis ({\it i.e.}, $|d_\perp|$ divided by its uncertainty $\sigma_{d_\perp}$) should be at least 4 \cite{Sirunyan:2018pwn,Aad:2019kiz,Aad:2019tcc}. The final requirement on the track $\frac{|d_\perp|}{\sigma_{d_\perp}}$ improves the identification of displaced tracks associated with the DVs. We collect the displaced tracks and construct the DV. To combine the tracks, we use the truth information of the vertices ({\it i.e.}, vertex position) obtained from the detector emulator and merge those tracks if $|\Delta X (t_i, t_j)| < 0.001$, $|\Delta Y (t_i, t_j)| < 0.001$ and $|\Delta Z (t_i, t_j)| < 0.001$, where $t_i$ denotes the $i$-th displaced track. The invariant mass of a DV is calculated using the summed 4-momenta of the associated tracks, {\it i.e.}, $m^2_{\rm DV} = {(\sum E_i)}^2 - {(\sum {\vec p}_i)}^2$, where the sum runs over the tracks associated with the DV \cite{Aad:2019kiz}. \item \underline{Jets and displaced jets}: The jets are constructed from calorimeter tower elements using {\tt Fastjet v3.3.2} \cite{Cacciari:2011ma} and the anti-$k_T$ jet clustering algorithm \cite{Cacciari:2008gp} with jet radius $R = 0.5$. We demand that the jets satisfy $p_{T}>20$~GeV and $|\eta| < 3.0$. For signal events, hadronic decay of the charginos leads to displaced hadronic final states. Note that long-lived hadrons ({\it e.g.}, $b$-hadrons, $c$-hadrons) as well as soft particles coming from the prompt decay of the displaced hard processes are also present in the events. So, we calculate the angular separation $\Delta R$ between the jet and the displaced tracks and then demand that a displaced jet has at least two displaced tracks satisfying $\Delta R (j,t) < 0.4$. The displaced jets are constructed following Refs.~\cite{Nemevsek:2018bbt, LLPtalk}.
We check that, for signal events, the final-state stable objects are not energetic enough to pass the jet $p_T$ threshold and to provide significant separation from the background events; therefore, we do not consider the displaced jets in the further analysis. \end{itemize} \subsection{Distribution of different observables} Here we show the distribution of several kinematic variables relevant for the collider analysis. All the histograms are drawn for events which satisfy the basic selections on the leptons and jets discussed in the previous section. Distributions are scaled to $\mathcal L = 137 ~{\rm fb}^{-1}$ of integrated luminosity at the $\sqrt s = 13$ TeV run of the LHC. \begin{figure}[htb!] \centering \includegraphics[scale=0.3]{nlep.png} \includegraphics[scale=0.3]{nlep_dv.png} \caption{Lepton multiplicity: all (left) and displaced (right).} \label{fig:nlep} \end{figure} \begin{figure}[htb!] \centering \includegraphics[scale=0.3]{dxy_l1.png} \includegraphics[scale=0.3]{dxy_l2.png} \caption{Transverse impact parameter ($d_{\perp}$) of leading (left) and sub-leading (right) lepton. } \label{fig:dxy} \end{figure} \begin{figure}[htb!] \centering \includegraphics[scale=0.3]{m_dv1.png} \caption{Leading DV mass.} \label{fig:mdv} \end{figure} In the left panel of Figure \ref{fig:nlep}, we show the multiplicity of isolated leptons present in the signal and background events. The right panel displays the same distribution but for the isolated displaced leptons with minimum transverse impact parameter $d_{\perp} > 0.2 ~{\rm mm}$. The $d_{\perp}$ distributions of the two leading (in $p_T$) leptons are shown in Figure \ref{fig:dxy}. Significant overlap is observed, especially at small values of $d_{\perp}$. As mentioned in the previous section, we construct the DVs using the displaced tracks and calculate the mass of the DV. In Figure \ref{fig:mdv}, we plot the mass of the leading (in mass) DV.
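The DV mass definition used here, $m^2_{\rm DV} = (\sum E_i)^2 - (\sum \vec{p}_i)^2$, is the invariant mass of the summed track 4-momenta; a minimal sketch (the track 4-vectors below are purely illustrative):

```python
import math

def dv_mass(tracks):
    """Invariant mass of a DV from its tracks' 4-momenta (E, px, py, pz):
    m^2 = (sum E)^2 - |sum p|^2, clipped at zero against rounding."""
    E = sum(t[0] for t in tracks)
    px = sum(t[1] for t in tracks)
    py = sum(t[2] for t in tracks)
    pz = sum(t[3] for t in tracks)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 5 GeV massless tracks give m_DV = 10 GeV
print(dv_mass([(5.0, 5.0, 0.0, 0.0), (5.0, -5.0, 0.0, 0.0)]))
```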
It is evident that the presence of high $p_T$ displaced tracks, associated with the displaced leptons originating from the sneutrino decay, results in DVs with relatively larger masses. Therefore, we can control the background events by selecting events with a DV mass greater than about 5 GeV. Also, the distribution has a kinematical endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{\pm}}$, so it can be used to estimate the sneutrino mass once the chargino mass is known. \subsection{Event selection and signal significance} After looking at the distributions of several kinematic observables, we apply the following selection cuts, optimised for our process of interest. \begin{itemize} \item C1: Two same-sign same-flavour leptons (electrons or muons) satisfying the basic lepton selection criteria. \item C2: The leading (in $p_T$) lepton must satisfy $p_{T}(\ell_{1})>25$~GeV. \item C3: The subleading (in $p_T$) lepton must satisfy $p_{T}(\ell_{2})>15$~GeV. \item C4: Veto on a third lepton with $p_{T}(\ell_{3})>15$~GeV. \item C5: For opposite-sign same-flavour lepton pairs, veto di-lepton invariant masses around the $Z$ mass, {\it i.e.}, $m_{\ell^{\pm}\ell^{\mp}} \notin [80,100]$ GeV. \item C6: Select events with $\slashed{p}_{T} > 30$ GeV. \item C7: Both leptons have a transverse impact parameter $d_{\perp} > 0.2~{\rm mm}$. \end{itemize} Even though our main backgrounds are from heavy-flavour jets, in this study we did not want to impose a $b$-veto, so that we would be free of uncertainties related to $b$-tagging when we show the viability of our approach. As the displacement of a secondary vertex is a key input for $b$-tagging algorithms, the background displacement distributions would be affected by imposing a $b$-veto. The acceptance of the signal events would also become uncertain. If experimental collaborations were to do this type of analysis, they may be able to further improve the background rejection with the use of a $b$-tagger.
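The signal significance is estimated with the naive $S/\sqrt{S+B}$; as an illustrative check, the numbers below use the event counts obtained after cut C7 (see the cutflow and significance tables), with a total background of $B=10.3$ events:

```python
import math

def significance(s, b):
    """Naive signal significance S / sqrt(S + B)."""
    return s / math.sqrt(s + b)

# Post-C7 yields with total background B = 10.3 events at 137 fb^-1
for bp, s in [("BP-I", 35.9), ("BP-II", 1.7), ("BP-III", 13.5)]:
    print(bp, round(significance(s, 10.3), 1))
```

This reproduces the quoted significances of 5.3, 0.5 and 2.8 for the three BPs.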
The complete cutflow is presented in Table \ref{tab:cutflow}. The signal significance ($S/\sqrt{S+B}$), calculated at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$, is shown in Table \ref{tab:signi}. One can see that the displacement requirement rejects almost all of the signal of BP-II. For such a case the prompt signature can still be visible --- this is actually BP3 of \cite{Moretti:2019yln}, for which a cut-based analysis gave a $\sim 3\sigma$ excess. From Table \ref{tab:cutflow} it is interesting to note that, even though cuts C4 and C5 do not reduce the dominant backgrounds, they are important in reducing sub-dominant backgrounds such as the WZ, ZZ, tW, tZ and $t\bar tZ$ processes, where both prompt and displaced leptons can be present. \begin{table}[!htb] \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c| } \hline & BP-I & BP-II & BP-III & $t\bar{t}$ & $b\bar b$ (30 GeV) & $b\bar b$ (200 GeV) \\ \hline C1 & 420.4 & 354.9 & 160.6 & 2637.8 & 2.7 $\times 10^7$ & 440.2 \\ \hline C2 & 354.7 & 335.8 & 71.9 & 857.0 & 2.4 $\times 10^6$ & 362.5 \\ \hline C3 & 315.3 & 309.1 & 57.6 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline C4 & 314.7 & 307.8 & 56.9 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline C5 & 314.7 & 307.3 & 56.9 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline C6 & 265.8 & 270.4 & 49.6 & 123.2 & 32432.1 & 150.2 \\ \hline C7 & 35.9 & 1.7 & 13.5 & 5.1 & 0 & 5.2 \\ \hline \end{tabular} \caption{Cutflow table ($\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$).} \label{tab:cutflow} \end{center} \end{table} \begin{table}[!htb] \begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline & Signal (S) & B ($t\bar{t}$) & B ($b\bar b$) & B (total) & Significance = $S/\sqrt{S+B}$ \\ \hline BP-I & 35.9 & 5.1 & 5.2 & 10.3 & 5.3 \\ \hline BP-II & 1.7 & 5.1 & 5.2 & 10.3 & 0.5 \\ \hline BP-III & 13.5 & 5.1 & 5.2 & 10.3 & 2.8 \\ \hline \end{tabular} \caption{Signal significances estimated at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L
= 137~{\rm fb}^{-1}$.\label{table:backgrounds}} \label{tab:signi} \end{center} \end{table} Before we proceed to extract neutrino Yukawa couplings, we discuss a few kinematic observables which can be used, in addition to cuts C1-C6, to reduce the SM backgrounds. For example, we can optimise the $d_\perp$ requirement for both of the leading two leptons, especially the second lepton, which, for signal events, has larger $p_T$ and a longer decay length (see the plots in the upper panel of Figure \ref{fig:aftercuts}). Another important quantity is the ratio of the missing transverse momentum to the square root of the scalar sum $H_T$, defined as $\alpha = \frac{\cancel{p_T}}{\sqrt{H_T}}$. Signal events have large missing transverse energy and, therefore, relatively larger values of $\alpha$. These additional observables provide us with an extra handle to minimise the SM backgrounds and improve the sensitivity to sneutrino events. \begin{figure}[htb!] \centering \includegraphics[scale=0.3]{dxy_l1_cut.png} \includegraphics[scale=0.3]{dxy_l2_cut.png} \includegraphics[scale=0.3]{xalpha2_cut.png} \caption{Distribution of the transverse impact parameter ($d_{\perp}$) of the leading (top-left) and sub-leading (top-right) lepton. Bottom: the distribution of $\alpha = \frac{\cancel{p_T}}{\sqrt{H_T}}$. All the figures are drawn for events satisfying cuts C1-C6.} \label{fig:aftercuts} \end{figure} \section{Introduction} \label{intro} The Standard Model (SM) has survived nearly all experimental tests. Besides the strong evidence for Dark Matter (DM), the only unexplained experimental phenomenon is neutrino oscillations \cite{Athanassopoulos:1997pv,Fukuda:1998mi,Aguilar:2001ty,Ahn:2002up,Abe:2011sj,An:2012eh}, which in turn imply that neutrinos have a tiny, but non-zero, mass.
The standard explanation is that neutrino masses are generated through a seesaw mechanism \cite{Minkowski:1977sc,Konetschny:1977bn,Mohapatra:1979ia,Magg:1980ut,Schechter:1980gr,Foot:1988aq}, where the effective dimension-five operator responsible for neutrino masses is suppressed by a heavy mass scale. The possible seesaw scales range from eV-scale sterile neutrinos to masses of the order of $10^{14}$~GeV. Taking the type-I seesaw as an example, the neutrino Yukawa couplings are $\mathcal{O}(1)$ at the upper end of this range, while they are tiny at the lower end. One interesting option is that the seesaw scale is around the Electro-Weak (EW) scale, which requires the neutrino Yukawa couplings to be somewhat smaller than the electron Yukawa coupling. As the Right-Handed (RH) neutrinos are singlets under the SM gauge group, their only interactions are the Yukawa couplings, which, being small, can lead to Displaced Vertices (DVs) \cite{Basso:2008iv,Helo:2013esa,Izaguirre:2015pga,Accomando:2016rpc,Liu:2019ayx}. Supersymmetry (SUSY) is a well-motivated framework for Beyond the SM (BSM) physics. SUSY is the only space-time symmetry that can be added to the Poincar\'e algebra \cite{Haag:1974qh} and it relates particles with different spins, specifically, bosons to fermions. This relation leads to the cancellation of the quadratic divergences emerging in the calculation of the Higgs boson mass in the SM (the so-called hierarchy problem). Furthermore, SUSY may induce the convergence of the Electro-Magnetic (EM), weak and strong couplings at some high energy scale, unlike the SM: a precondition for a theory embedding the unification of forces. Finally, if one removes the baryon and lepton number violating couplings by requiring $R$-parity, one obtains as a by-product of SUSY a DM candidate in the form of the Lightest Supersymmetric Particle (LSP).
Needless to say then, in order to pursue BSM physics that addresses all the aforementioned SM flaws, SUSY is one of the possible paths to follow, so long as it embeds a mechanism for neutrino mass generation. In doing so, it is then necessary to go beyond its minimal realisation and consider non-minimal ones \cite{Book}, wherein the gauge and/or Higgs structures are enlarged with respect to the case of the Minimal Supersymmetric Standard Model (MSSM). An attractive framework in this respect is the Next-to-MSSM (NMSSM), wherein a singlet Superfield, containing an extra singlet Higgs state and its SUSY counterpart, is added to the MSSM particle content. This way, the so-called $\mu$-problem \cite{Ellwanger:2009dp} of the MSSM is overcome. If such a construct is supplemented with RH neutrinos and their SUSY counterparts, a viable model for neutrino mass generation based on a type-I seesaw is established. By adopting this theoretical framework, we will show that the heavy Higgs states belonging to it (both CP-even and -odd) can have significant couplings to RH sneutrinos, even in the alignment limit, as required by measurements of the SM-like Higgs boson discovered at the Large Hadron Collider (LHC) in 2012. Furthermore, given that in this model RH (s)neutrinos and higgsinos get their masses through the same mechanism, one can expect these SUSY states to be rather degenerate; yet, suitable soft SUSY-breaking mass terms can render the RH sneutrinos somewhat heavier than the higgsinos. In such a case a RH sneutrino can decay through its Yukawa interactions either to a charged lepton ($\ell = e, \mu$) and a chargino, which is visible, or to a neutrino and a neutralino. Therefore, a typical signal that may emerge at the LHC in this theoretical scenario is heavy-Higgs-mediated production of a sneutrino pair, eventually yielding a di-lepton signature together with soft jets and missing transverse energy.
As the seesaw mechanism has a source of lepton number violation, we get both opposite-sign and same-sign dileptons, the latter giving better discovery potential due to smaller backgrounds. Remarkably, the aforementioned mass degeneracy may make the sneutrinos long-lived, so that the visible tracks of this signature may be displaced, which in turn implies a smaller background with respect to the one affecting similar prompt signatures \cite{Moretti:2019yln,Moretti:2020zbn}. Here we shall prove that this signature with two DVs can be extracted at the LHC and, moreover, we shall also show how the kinematics of the displaced (visible) tracks could allow for a measurement of the (s)neutrino Yukawa couplings, thereby enabling one to probe the underpinning neutrino mass generation dynamics. Our paper is organised as follows. In the next section we introduce our theoretical framework. In the following one we illustrate the properties of the track displacements and how these can be related to the discussed Yukawa couplings. Then we perform our MC analysis aimed at extracting both the relevant signature and its underlying (s)neutrino mass parameters. We then conclude. \section{NMSSM with RH neutrinos} We shall study the NMSSM with RH neutrinos. It is based on the following Superpotential \cite{Kitano:1999qb,Cerdeno:2008ep} \begin{equation}\label{eq:superpotential} \begin{split} W={}&y^{u}_{ij}(Q_{i}\cdot H_{u})U^{c}_{j}-y^{d}_{ij}(Q_{i}\cdot H_{d})D^{c}_{j}-y^{\ell}_{ij}(L_{i}\cdot H_{d})E^{c}_{j}+y^{\nu}_{ij}(L_{i}\cdot H_{u})N^{c}_{j}\\ &+\lambda S(H_{u}\cdot H_{d})+\frac{\lambda_{Ni}}{2}SN_{i}^{c}N_{i}^{c}+\frac{\kappa}{3} S^{3}. \end{split} \end{equation} As mentioned, this model cures some problems of the MSSM, namely the $\mu$-term will be generated through the Vacuum Expectation Value (VEV) of the scalar component of the singlet Superfield $S$ and we also have a mechanism for neutrino mass generation.
As the $\mu$-term should not be too far above the EW scale, the RH neutrino masses are at the EW scale too, hence the neutrino Yukawa couplings need to be very small, of the order of $10^{-7}$. We shall look at a scenario with light higgsinos and RH neutrinos roughly degenerate with them. The soft SUSY-breaking masses should then make the RH sneutrinos heavier than the higgsinos, so that the decays $\tilde{N}\rightarrow \tilde{\chi}^{0}\nu,\tilde{\chi}^{\pm}\ell^{\mp}$ are kinematically open. The decay width is set by the neutrino Yukawa couplings, which, as mentioned, are tiny and may thus lead to DVs. This model exhibits two important features: EW Symmetry Breaking (EWSB) generates both a lepton-number violating mass term for the RH sneutrinos and a coupling between the Higgs states and the sneutrinos. The coupling in the alignment limit is (neglecting doublet-singlet mixing) \begin{eqnarray} C_{h\tilde{N}\tilde{N}} & = & \pm\frac{1}{2}\lambda\lambda_{N}v\sin 2\beta,\\ C_{H\tilde{N}\tilde{N}} & = & \pm\frac{1}{2}\lambda\lambda_{N}v\cos 2\beta, \end{eqnarray} where the upper (lower) sign is for CP-even (CP-odd) sneutrinos. If $\tan \beta > 1.5$, the heavy Higgs state has a stronger coupling to sneutrinos. If $\lambda$ and $\lambda_{N}$ are large, RH sneutrinos can be pair-produced through the heavy Higgs portal and detected through lepton-number violating signatures \cite{Moretti:2019yln,Moretti:2020zbn}. The singlet field is essential in achieving this, as in the MSSM with RH neutrinos the Higgs-RH sneutrino couplings would not exist. As intimated, our aim is to use DVs to both improve background rejection and allow for a quantitative estimate of the neutrino Yukawa couplings.
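The alignment-limit couplings above are straightforward to evaluate numerically. The sketch below assumes $v \simeq 246$~GeV and drops the overall CP-dependent sign, keeping only magnitudes; the function name and the sample inputs are illustrative choices of ours, not benchmark values from the text.

```python
import math

def higgs_sneutrino_couplings(lam, lam_N, tan_beta, v=246.0):
    """Return (|C_hNN|, |C_HNN|) in GeV for the alignment-limit formulas,
    with the CP-even/odd sign dropped."""
    beta = math.atan(tan_beta)
    c_h = 0.5 * lam * lam_N * v * math.sin(2.0 * beta)  # light Higgs coupling
    c_H = 0.5 * lam * lam_N * v * math.cos(2.0 * beta)  # heavy Higgs coupling
    return abs(c_h), abs(c_H)

# At sizeable tan(beta), cos(2*beta) dominates over sin(2*beta),
# so the heavy Higgs couples more strongly to the sneutrinos:
c_h, c_H = higgs_sneutrino_couplings(lam=0.6, lam_N=0.6, tan_beta=5.0)
```

Since both couplings scale as $\lambda\lambda_{N}v$, large singlet couplings directly enhance the heavy Higgs portal production rate discussed in the text.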
\section{From displacements to Yukawa couplings} \label{sec:yukawa1} We shall assume that the only kinematically available decay channels for the sneutrino are $\tilde{N}\rightarrow \tilde{\chi}^{0}\nu,\tilde{\chi}^{\pm}\ell^{\mp}$ and that the neutralino and chargino are higgsino-like. The sneutrino-lepton-chargino vertex factor is \begin{equation} \lambda_{\tilde{N}\ell^{+}\tilde{\chi}^{-}}=\frac{i}{\sqrt{2}}y^{\nu}_{ab}V_{12}\frac{1+\gamma_{5}}{2}, \end{equation} where $a,b$ refer to the flavours of the charged lepton and the sneutrino, respectively, and $V_{12}$ gives the higgsino component of the lightest chargino. For CP-odd sneutrinos we only need the replacement $\frac{i}{\sqrt{2}}\rightarrow \frac{1}{\sqrt{2}}$. This leads to the partial width (neglecting the lepton mass) \begin{equation}\label{eq:charginowidth} \Gamma(\tilde{N}_{i}\rightarrow \ell^{\pm}_{j}\tilde{\chi}^{\mp})=\frac{(m_{\tilde{N}}^{2}-m_{\tilde{\chi}^{\pm}}^{2})^{2}}{16\pi m_{\tilde{N}}^{3}}|y^{\nu}_{ji}|^{2}|V_{12}|^{2}. \end{equation} We shall assume $|V_{12}|=1$ in the following. If we neglect the mixing between Left-Handed (LH) and RH neutrinos\footnote{This mixing introduces a vertex factor $\lambda_{N}N_{j5}$ times the RH neutrino component of the light eigenstates. The elements of the neutrino left-right mixing matrix are of the same order as the elements of $y^{\nu}$.
As the singlino component $N_{j5}$ is not negligible, this correction to the vertex factor is numerically only an order of magnitude smaller than $|y^{\nu}N_{j3}|$, so it does introduce an $\mathcal{O}(10\%)$ correction to the partial width.\label{mixing}}, the sneutrino-neutrino-neutralino vertex factor is \begin{equation} \lambda_{\tilde{N}\nu\tilde{\chi}^{0}}=-\sum_{a}\frac{i}{\sqrt{2}}\left( N_{j3}^{*}P^{*}_{ab}y^{\nu}_{bc}\frac{1-\gamma_{5}}{2}+ N_{j3}P_{ab}y^{\nu *}_{bc}\frac{1+\gamma_{5}}{2}\right), \end{equation} where $P$ is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $b$ and $c$ are the flavours of the neutrino and sneutrino, respectively, while $N_{j3}$ gives the $\tilde{H}_{u}$ component of the neutralino $j$. For CP-odd sneutrinos we shall again replace $\frac{i}{\sqrt{2}}\rightarrow \frac{1}{\sqrt{2}}$ in the prefactor. This gives the partial width \begin{equation}\label{eq:neutralinowidth} \Gamma(\tilde{N}_{i}\rightarrow \nu\tilde{\chi}^{0}_{j})=\sum_{k}\frac{(m_{\tilde{N}}^{2}-m_{\tilde{\chi}^{0}}^{2})^{2}}{16\pi m_{\tilde{N}}^{3}}|y^{\nu}_{ki}|^{2}|N_{j3}|^{2}. \end{equation} Here $j$ can take the values $1,2$ and $|N_{j3}|^{2}\simeq 1/2$. From Equations (\ref{eq:charginowidth}) and (\ref{eq:neutralinowidth}) we see that the total decay width is proportional to $\sum_{i} |y^{\nu}_{ij}|^{2}$. Hence a measurement of the lifetime of the sneutrino will give an estimate of the sum of the squared Yukawa couplings, while the ratios $|y_{ik}/y_{jk}|^{2}$ are proportional to BR$(\tilde{N}_{k}\rightarrow \ell_{i}^{\pm}\tilde{\chi}^{\mp})/{\rm BR}(\tilde{N}_{k}\rightarrow \ell_{j}^{\pm}\tilde{\chi}^{\mp})$. Measuring the lifetime and the ratios of the Branching Ratios (BRs) would then give us the absolute values of the individual neutrino Yukawa couplings. We shall assume that the chargino and neutralinos would have been observed already and their masses would be known reasonably well.
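Equation (\ref{eq:charginowidth}) translates directly into a decay-length estimate. The sketch below assumes $|V_{12}|=1$, works in natural units with masses and widths in GeV, and uses $\hbar c \simeq 1.97\times 10^{-13}$~GeV$\cdot$mm to convert a total width into a proper decay length; treating a single effective Yukawa coupling instead of the full flavour sum is our illustrative simplification.

```python
import math

HBARC_GEV_MM = 1.973269804e-13  # hbar*c in GeV*mm

def width_to_chargino(m_snu, m_cha, y_nu, V12=1.0):
    """Partial width Gamma(N~ -> l chargino) of Eq. (charginowidth),
    lepton mass neglected; all masses in GeV."""
    return (m_snu**2 - m_cha**2)**2 / (16.0 * math.pi * m_snu**3) \
        * y_nu**2 * V12**2

def mean_decay_length_mm(total_width_gev):
    """Proper decay length c*tau in mm for a given total width in GeV."""
    return HBARC_GEV_MM / total_width_gev
```

Because the width scales as $|y^{\nu}|^{2}$, doubling the Yukawa coupling shortens the mean decay length by a factor of four, which is what makes the displacement such a sensitive probe of the coupling.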
From the decay mode $\tilde{N}\rightarrow \ell^{\pm}\tilde{\chi}^{\mp}$ and the subsequent chargino decay $\tilde{\chi}^{\pm}\rightarrow \tilde{\chi}^{0}+$ hadrons, the invariant mass of the visible decay products should have an endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{0}}$, from which the sneutrino mass can be estimated. We also expect that the mass of the heavy Higgs bosons would be known from one of their fermionic decay modes. In order to extract the lifetime of the sneutrino in its rest frame, we need to measure the position of the secondary vertex and the relativistic $\gamma$-factor of the sneutrino, which can be computed, \textit{e.g.}, through $\gamma_{\tilde{N}}=E_{\tilde{N}}/m_{\tilde{N}}$. This can be done on an event-by-event basis as follows. In the Center-of-Mass (CM) frame, energy-momentum conservation gives us \begin{eqnarray} E_{\tilde{N}}^{*} & = & \frac{m_{H}}{2},\\ p_{\tilde{N}}^{*} & = & \sqrt{\frac{m_{H}^{2}}{4}-m_{\tilde{N}}^{2}}.\label{eq:labmomentum} \end{eqnarray} To determine the sneutrino momenta in the laboratory frame, we use the following facts. \begin{itemize} \item The initial transverse momentum of the heavy Higgs boson is nearly zero as long as there are no hard prompt jets in the event. \item The three-momentum of the sneutrino is directed from the primary vertex to the secondary one, since the sneutrino is a neutral particle and its trajectory will not be curved. \end{itemize} \begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{kinematics.png} \end{center} \caption{The kinematics can be solved as shown. As the sneutrinos are neutral, their momentum vectors are aligned with the displacements of the secondary vertices from the primary vertex. We then construct the momenta in the lab frame by using these direction vectors and the fact that the transverse components are equal. Then we may boost to the center-of-mass frame, where we know the total momentum $p^{*}$ from Eq. (\ref{eq:labmomentum}).
\label{fig:kinematics}} \end{figure} We draw vectors from the primary vertex to the secondary vertices as shown in Figure \ref{fig:kinematics}. Next we scale them so that $|p_{T,\tilde{N}_{1}}|=|p_{T,\tilde{N}_{2}}|$, based on the small initial transverse momentum. Due to the difference between the initial-state gluon momenta, the final state will have a net momentum in the $z$-direction, but we may boost to the CM frame simply by adding equal vectors to both laboratory-frame momenta. Once we are in the CM frame, we know $p_{\tilde{N}}^{*}$ from Equation (\ref{eq:labmomentum}) and hence may solve for $p_{T,\tilde{N}}$. This then allows us to compute \begin{equation} p_{\tilde{N}}=p_{T,\tilde{N}}\sqrt{1+\frac{d_{z}^{2}}{d_{\perp}^{2}}}, \end{equation} where $d_{z}$ and $d_{\perp}$ are the longitudinal and transverse displacements, respectively. In Figure \ref{fig:dv}, we show a representative image of the trajectory of a long-lived particle in a 2-dimensional plane (from \cite{Allanach:2016pam}). \begin{figure}[!htb] \begin{center} \includegraphics[width=0.5\textwidth]{DV_image.png} \end{center} \caption{Schematic view in the transverse plane of a long-lived particle decay.} \label{fig:dv} \end{figure} We can then deduce $E_{\tilde{N}}=\sqrt{p_{\tilde{N}}^{2}+m_{\tilde{N}}^{2}}$ and from this $\gamma_{\tilde{N}}$. As the total displacement is \begin{equation} d=\gamma_{\tilde{N}} v \tau, \end{equation} where $v$ is the velocity and $\tau$ the lifetime of the sneutrino, we may solve for the lifetime \begin{equation} \tau =\frac{d}{\gamma_{\tilde{N}}\sqrt{1-\frac{1}{\gamma_{\tilde{N}}^{2}}}}. \end{equation} The lifetimes follow an exponential probability distribution \begin{equation} P(\tau)\propto e^{-\tau/\tau_{0}}, \end{equation} where $\tau_{0}$ is the average lifetime.
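The per-event reconstruction chain above (momentum from the displacement direction, then $\gamma_{\tilde{N}}$, then the proper lifetime) can be sketched as follows. We work in units with $c=1$, so the lifetime is quoted as a decay length $c\tau$ in mm; the function name and the sample numbers are illustrative.

```python
import math

def proper_decay_length_mm(p_T, m, d_perp, d_z):
    """Recover c*tau (mm) of a neutral particle of mass m (GeV) from its
    reconstructed transverse momentum p_T (GeV) and the measured transverse
    and longitudinal displacements d_perp, d_z (mm)."""
    p = p_T * math.sqrt(1.0 + (d_z / d_perp) ** 2)  # momentum along the DV direction
    E = math.sqrt(p ** 2 + m ** 2)
    gamma = E / m
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    d = math.sqrt(d_perp ** 2 + d_z ** 2)           # total displacement
    return d / (gamma * beta)                       # since d = gamma*beta*(c*tau)
```

As a consistency check, note that $\gamma\beta = p/m$, so a sneutrino with $p=100$~GeV, $m=250$~GeV and a total displacement of $0.8$~mm corresponds to $c\tau = d\,m/p = 2$~mm, independently of how the displacement splits between $d_{\perp}$ and $d_{z}$.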
Plotting the lifetime distribution with a suitable binning on a logarithmic scale should then produce a straight line, whose slope gives the inverse of the average lifetime\footnote{We refer to \cite{Banerjee:2019ktv,Liu:2020vur} for elaborated discussions on determining the lifetime of long-lived particles at the LHC.}. When Next-to-Leading Order (NLO) and even higher-order corrections are taken into account, the Higgs bosons will have a finite initial transverse momentum $p_{T}^{H}$. This will introduce a relative error in the measurement of the momenta, which is of the order of $p_{T}^{H}/p_{\tilde{N}}$. As we are interested in events that do not have a large longitudinal boost, \textit{i.e.}, all decay products are in the barrel region of the detector, we typically expect the $\gamma$-factor of the sneutrino to be rather small. In such a case the error on $\gamma_{\tilde{N}}$ will be of the order of $(p_{T}^{H}/m_{\tilde{N}})^{2}$. If we veto hard jets, a typical $p_{T}^{H}$ is about $10$~GeV \cite{Harlander:2014uea} and the mass of the sneutrino is $200$~GeV or more, so the error from the finite initial transverse momentum on the lifetime measurement is very small. \section{Conclusions and outlook} We have studied sneutrino pair production via the heavy Higgs portal in the NMSSM with RH neutrinos. In this model both higgsinos and RH neutrinos get their masses through the singlet VEV, so we expect them to be at the EW scale. If RH sneutrinos are heavier than higgsinos, they can decay either to a charged lepton and a chargino or to a neutrino and a neutralino, the former channel giving a visible signature at colliders. Since the RH sneutrinos are gauge singlets, their decay modes are dictated by the neutrino Yukawa couplings, which are tiny in an EW-scale seesaw model.
The smallness of the decay width, together with the lepton number violation arising from the RH (s)neutrino mass term, leads to a signature with same-sign dileptons emerging from displaced vertices together with missing transverse momentum. The SM backgrounds for such a signal topology are therefore low. We have indeed performed an analysis proving that the study of the emerging signatures with DVs can lead to the extraction of the neutrino Yukawa couplings; its salient features are as follows. \begin{itemize} \item We searched for displaced leptons and jets, with the leptons originating from sneutrino decays. Several BPs of the aforementioned BSM scenario were introduced, with varied displacement lengths, and studied by MC simulation, with a simple cut-based analysis providing a good signal-to-background ratio already at the end of the 13 TeV run of the LHC. \item The kinematic distribution of the displaced vertex mass has an endpoint which can be used to estimate the sneutrino mass once the chargino mass is known. \item Additional variables can make regions of phase space effectively background-free, thereby providing a better handle on the extraction of the intervening Yukawa couplings. \item The decay width of the RH sneutrinos is dictated by the sum of the squares of the Yukawa couplings, so a measurement of the sneutrino lifetime would allow one to measure the Yukawa couplings. The individual Yukawa couplings can then be extracted from the sneutrino BRs to different lepton flavours, once corrected with identification efficiencies. \item We ultimately showed that, for lifetimes corresponding to mm-scale displacements, lifetimes can be measured with reasonable accuracy, which can then give the absolute values of the Yukawa couplings with a $10\%$ accuracy, so long as sufficiently large data samples can be accrued for the signal, like, {\it e.g.}, at the HL-LHC.
\end{itemize} In short, such an approach can lead to the extraction of the underlying neutrino dynamics parameters from the study of sneutrino signatures at hadronic colliders, albeit under a specific SUSY paradigm. The results obtained here from events with DVs therefore complement those obtained in Refs.~\cite{Moretti:2019yln,Moretti:2020zbn} for the case of prompt signatures. \section{Extraction of Yukawa couplings} In order to extract the Yukawa couplings, we need to know the spectrum. The heavy Higgs masses can be measured reasonably accurately from their fermionic decay channels. The higgsinos would also be discovered from other searches, and those would give an estimate for the masses of the neutralinos and charginos. The sneutrino mass can be estimated by measuring the invariant mass of the DV related to the sneutrino decay to a lepton and a chargino. The invariant mass distribution has an endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{\pm}}$ (see Figure \ref{fig:mdv}), which will give us the sneutrino mass once we know the chargino mass. We assume that we are able to obtain an essentially background-free sample (say, over $95\%$ purity) using the cuts given in the previous section (both those given in Table \ref{tab:cutflow} and those discussed at the end of the section) and the usage of $b$-tagging. We then apply the event-by-event correction to the displacements described in Section \ref{sec:yukawa1} to obtain the actual lifetimes. As both the CP-even and CP-odd Higgs bosons contribute to the signature, we need to pick which mass we use in the boost correction. As discussed in \cite{Moretti:2019yln}, the CP-even Higgs state usually has the largest BR to sneutrinos, although the amount of available phase space also has an impact. If the CP properties of the two heavy Higgses have been measured, the CP-even mass would give the better estimate; otherwise it would be a reasonable choice to pick the heavier one due to the larger phase space.
This ambiguity leads to a systematic error in the measurement of the Yukawa couplings, which is of the order of $10\%$, as estimated by studying the impact of choosing the other Higgs mass on the lifetime distribution. The background originating from heavy-flavour hadrons that survives the cuts is heavily boosted (with the $p_{T}$ of a typical $b$-jet being $200$~GeV or more), leading to an average displacement greater than $5$~mm. As the boost correction assumes the particle to be heavy, the lifetime of heavy-flavour hadrons will be overestimated and will on average correspond to mean decay lengths around $5$~mm, although the exponential distribution of course gives events at all displacements. We shall use a binning of $0.1$~mm. As the total number of background events at the HL-LHC is expected to be around $200$ (scaled from Table \ref{table:backgrounds} to $3000$~fb$^{-1}$) and these are distributed into some $100$ bins or more, there will not be many background events in a single bin. The background distribution may be estimated, \textit{e.g.}, by looking at events with small vertex mass and small $\slashed{p}_{T}/\sqrt{H_{T}}$, and then subtracted from the distribution. The signal region includes a cut on the transverse distance of the leptons, requiring $d_{\perp}> 0.2$~mm. The distribution of lifetimes close to the cut will be modified and hence we put a lower bound on the lifetime when fitting. We fit the exponential to the distribution in the interval $0.5$~mm$ <c\tau<5$~mm. This leads to rather robust results if the true lifetime is around $c\tau \simeq 1$~mm but, if the lifetime is shorter, the number of events in the fit region becomes so low that the statistical error increases. In such a case it might be reasonable just to use this method to find an upper limit for the lifetime and derive a lower limit for the neutrino Yukawa couplings. An upper limit for the Yukawa couplings can be obtained from the constraint on the sum of neutrino masses.
That can be expressed as $\sum m_{\nu}=\mathrm{Tr}(m^{\nu})=\sum_{i,j}|y^{\nu}_{ij}|^{2}v^{2}\sin^{2}\beta/2m_{N_{j}}$. Once we had studied the fitting method with one BP, we analysed the other BPs blindly: one of the authors generated the events, another then analysed them, and the result was then compared to the unknown input. As long as the decay lengths were at the mm scale, a reasonable agreement was achieved. \begin{figure} \begin{center} \includegraphics[width=0.75\textwidth]{lifetimefit_BP1.png} \end{center} \caption{The fit of an exponential to the lifetime distribution of the sneutrino for BP-I. The amount of data corresponds to $3000$~fb$^{-1}$ at $\sqrt{s}=14$~TeV.\label{fig:sneutrinofit}} \end{figure} We show in Figure \ref{fig:sneutrinofit} a fit of the sneutrino lifetime for BP-I. We fit a simple exponential to the lifetime distribution. The fit, with a number of events corresponding to the HL-LHC energy ($\sqrt s =14$ TeV) and luminosity ($3000$~fb$^{-1}$), gives us $\sum_{i}|y^{\nu}_{i1}|^{2}=(2.62 \pm 0.13 \pm 0.26) \times 10^{-13}$, where the first error is statistical and the second one the theory error (assumed to be $10\%$) from the ambiguity in choosing the CP-even or CP-odd Higgs, as discussed above. On top of these there will be experimental systematic errors, which are mostly related to measuring the primary and secondary vertices\footnote{As this is a simple lifetime measurement, many typical sources of systematic errors, like parton distribution functions, do not matter.}. The true value is $2.28\times 10^{-13}$, which is slightly more than one standard deviation off. For BP-II the event sample is so small that the Yukawa couplings cannot be reliably estimated. The best-fit value would have been $3.57\times 10^{-13}$ but, as the actual average decay length is shorter than the lower end of our fitting region, this estimate is based on only a few events and individual outliers can significantly change the result.
The estimate was off by a factor of two. With such a short decay length it makes sense only to give a lower bound on the Yukawa couplings. We give the results for our three BPs in Table \ref{tb:yukawas}. When doing the estimate, we added $10\%$ to the invisible decay width of Equation (\ref{eq:neutralinowidth}) due to the mixing effect of LH and RH neutrinos (see the footnote on page \pageref{mixing}). \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|} \hline & BP-I & BP-II & BP-III\\ \hline Measured $\sum |y^{\nu}_{1i}|^{2}$ & $2.62\pm 0.13\pm 0.26$ & $>2.94$ & $3.73\pm 0.67 \pm 0.37$\\ Actual $\sum |y^{\nu}_{1i}|^{2}$ & $2.28$ & $7.81$ & $2.40$\\ \hline \end{tabular} \end{center} \caption{The sums of neutrino Yukawa couplings from displacements and the actual input values in units of $10^{-13}$. The first uncertainty is statistical and the second one is a $10\%$ theoretical uncertainty related to the ambiguity between the CP-even/odd Higgs states, as discussed in the text. For BP-II the decay length was so short that we only quote a lower limit for the Yukawa couplings.\label{tb:yukawas}} \end{table} From this value we can then determine the absolute values of the individual Yukawa couplings from the number of events with a given lepton flavour, which will be proportional to $\epsilon_{i} |y^{\nu}_{i1}|^{2}$, where $\epsilon_{i}$ is the efficiency of identifying a lepton of flavour $i$. Since the error on $|y^{\nu}|^{2}$ will often be below $20\%$ with high enough statistics, the error on the Yukawa couplings themselves will be around $10\%$. Hence even this rather simple fitting method can give a reasonably good estimate of the Yukawa couplings.
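The fitting procedure described above (bin the per-event decay lengths, take the logarithm of the counts, fit a straight line, and read the mean lifetime off the slope) can be sketched as follows. The binning ($0.1$~mm) and fit window ($0.5$-$5$~mm) follow the text; the unweighted least-squares implementation and the toy exponential sample are our own simplifications.

```python
import math
import random

def fit_lifetime_mm(decay_lengths, lo=0.5, hi=5.0, width=0.1):
    """Estimate the mean proper decay length c*tau0 (mm) of an exponential
    sample by fitting log(counts) vs bin centre with an unweighted
    least-squares line; the slope of that line is -1/(c*tau0)."""
    nbins = int(round((hi - lo) / width))
    counts = [0] * nbins
    for d in decay_lengths:
        if lo <= d < hi:
            counts[int((d - lo) / width)] += 1
    pts = [(lo + (i + 0.5) * width, math.log(c))
           for i, c in enumerate(counts) if c > 0]  # skip empty bins
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# Toy closure test: an exponential sample with true c*tau0 = 1 mm.
random.seed(1)
sample = [random.expovariate(1.0) for _ in range(20000)]
ctau_est = fit_lifetime_mm(sample)
```

With mm-scale true decay lengths and HL-LHC-sized samples, the estimator recovers the input lifetime to within a few percent, consistent with the statistical uncertainties quoted in Table \ref{tb:yukawas}; a weighted fit or an unbinned likelihood would be the obvious refinements in a real analysis.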
\section{Introduction} The Supermembrane theory was originally expected to describe the microscopic degrees of freedom of M-theory; however, when formulated on an 11D Minkowski background, it has a continuous spectrum spanning $[0,+\infty)$ \cite{deWit6,deWit2}. This behavior does not change under circle compactifications alone \cite{deWit3}. The continuity of the spectrum represents an obstruction to interpreting the M2-brane as a fundamental object. Indeed, it led to the formulation of the matrix theory conjecture \cite{Susskind}, where the supermembrane was interpreted as a second quantized theory. In \cite{Restuccia}, a new topological condition was found, associated with the presence of a non-trivial $U(1)$ bundle on the worldvolume of the supermembrane induced by an irreducible wrapping. In \cite{Ovalle3}, its Hamiltonian formulation was found. The spectrum of the regularized supersymmetric theory was rigorously shown to be purely discrete, with finite multiplicity \cite{Boulton}, in distinction to the general case. This sector was denoted as the Supermembrane (i.e. M2-brane) with central charges, since the topological term associated with an irreducible wrapping induces the presence of a non-vanishing central charge in the supersymmetric algebra. It has been formulated for different target spaces \cite{Bellorin,mpgm14,mpgm15}. Other sectors of the M2-brane with good quantum properties have also been identified: the M2-brane on a pp-wave background \cite{Sugiyama,Dasgupta}, whose regularization is described by the BMN matrix model \cite{Maldacena3} and whose spectral properties were proved in \cite{mpgm11}, as well as a toroidally compactified M2-brane in the presence of a quantized constant three-form that induces worldvolume 2-form fluxes \cite{mpgm6}. Recently, a new sector associated with the compactification of the M2-brane on a particular 10D spacetime with punctures has also been obtained.
It can describe other types of non-trivial M2-branes \cite{mpgm16} once a suitable regularization is provided. The regularizations of these non-trivial M2-brane sectors satisfy the sufficiency condition for discreteness found in \cite{Boulton}. These M2-brane sectors describe \textit{part} of the fundamental M-theory degrees of freedom and therefore they may represent a restriction on the String sectors with a quantum-consistent uplift to M-theory. The need to obtain other non-trivial sectors of the M2-brane theory formulated on more general backgrounds becomes increasingly clear, as does the need for their String counterparts. In this paper we concentrate on the String description of the non-trivial M2-brane bosonic sector associated with the presence of a central charge condition or subject to a quantized constant three-form background that induces a 2-form flux condition on its worldvolume. The relation between M2-branes compactified on target spaces with isometries and D2-branes under scalar/vector dualization is well-known \cite{Ovalle3,Duff5,Townsend4,Schmidhuber,Polchinski3,Manvelyan1,Duff6, Bergshoeff7,Ovalle1,Leigh}. In this sense, these two theories have sometimes also been referred to as duals. We will also use the word `duality' in this sense when we refer to it in sections 5 and 6. Some of the distinctive properties of these non-trivial M2-brane sectors will be inherited by their String duals, as we will see. In particular, the M2-brane with central charges and the M2-brane with $C_{\pm}$ 2-form fluxes have a non-trivial U(1) monopole connection and a symplectic gauge field, which can be properly combined to produce a topologically non-trivial dynamical gauge field, as recently proved in \cite{mpgm10}. This extra field is not present in an ordinary supermembrane theory and it is therefore natural to expect that new fields may arise in the associated D-brane theory.
When Dp-branes contain worldvolume fluxes, they can be expressed in terms of Dp-brane bound states \cite{Maldacena4,Dasgupta2,Roy}. We will discuss this point briefly at the end of the paper.\newline The paper is organized as follows: In section 2, we review recent results on the main properties of two nontrivial M2-brane sectors: the M2-brane with central charges and the M2-brane with $C_{\pm}$ fluxes. We discuss their equivalence relation, which can be interpreted as a duality connecting these two, a priori inequivalent, M2-brane sectors. In section 3, we obtain the Hamiltonian of the M2-brane with $C_{\pm}$ fluxes in which the transverse components of the supergravity three-form become explicit. We denote it the CM2-brane and characterize its properties. In section 4, we obtain its D-brane description. It contains RR and NSNS quantized charges that generate the presence of 2-form fluxes and a new dynamical field. In section 5, we discuss the physical properties of the new symmetries and their global description. In section 6, we show that the same result can be obtained by starting from a D2-brane on a toroidally compactified $10D$ target space when some specific fluxes are turned on; they generate an extra constraint that modifies the original Hamiltonian. We discuss its differences with respect to a D2-brane with RR and NSNS background fields with generic fluxes. In section 7, we present a brief discussion and our conclusions. \section{Toroidally compactified Nontrivial M2-branes} In this section, we briefly review former results concerning the local \cite{Ovalle3,Ovalle1,Ovalle2} and global aspects of nontrivial M2-branes associated with the presence of central charges \cite{mpgm7}, those of the M2-brane formulated in the presence of 2-form fluxes $C_{\pm}$ \cite{mpgm6,mpgm10}, and their relationship.
The Light Cone Gauge (LCG) $D=11$ bosonic Hamiltonian formulation of an M2-brane on a general background was found in \cite{deWit}. Its supersymmetric extension on a flat superspace coupled to a non-vanishing constant supergravity three-form is given by \cite{mpgm6} \begin{eqnarray}\label{HCM2} \mathcal{H}&=&\left[\frac{1}{(P_--C_-)}\left(\frac{1}{2}(P_a-C_a)^2+\frac{1}{4}(\epsilon^{uv}\partial_u X^a \partial_v X^b)^2\right) - \bar{\theta}\Gamma^-\Gamma_a \left\lbrace X^a,\theta \right\rbrace \right. \nonumber \\ &-& \left. C_{+-}- C_+ \right] \end{eqnarray} subject to \begin{eqnarray} P_a\partial_u X^a + P_- \partial_u X^- + \bar{S}\partial_u \theta &\approx& 0\\ S - (P_--C_-)\Gamma^- \theta &\approx& 0 \end{eqnarray} where $X^a,X^-$ denote the embedding maps from the worldvolume $\Sigma$, assumed to be a Riemann surface of genus one, onto the target space. The indices $a,b=1,\dots,9$ denote the target space transverse components and $u,v=1,2$ the spatial directions of $\Sigma$. $\theta$ represents a Majorana spinor of 32 components that acts as a scalar on the worldvolume, and $\Gamma$ are the Gamma matrices in 11D. The canonical conjugate momenta of $X_a$, $X^-$, $\theta$ are given by $P_a$, $P_-$ and $S$, respectively. The LCG supergravity three-form components \cite{deWit} correspond to \begin{equation}\label{CaLCG} \small \begin{aligned} & C_a = -\varepsilon^{uv}\partial_uX^- \partial_vX^b C_{-ab} +\frac{1}{2}\varepsilon^{uv}\partial_uX^b \partial_vX^c C_{abc} \, \\ & C_{\pm} = \frac{1}{2}\varepsilon^{uv}\partial_uX^a \partial_vX^b C_{\pm ab} \,, \qquad C_{+-} = \varepsilon^{uv}\partial_uX^- \partial_vX^a C_{+-a} \, \end{aligned} \end{equation} The gauge invariance of the three-form allows us to fix $C_{+-a}=0$ and $C_{-ab}=0$. In \cite{mpgm6} the authors restrict to backgrounds in which $C_{\pm ab}$ and $C_{abc}$ are the only nonzero, constant components.
The Hamiltonian and the constraints contain nonphysical degrees of freedom associated with $X^-$ that must be eliminated \cite{deWit}. A way to solve this problem without introducing non-localities was proposed in \cite{mpgm6}, eliminating this dependence through the following canonical transformation of the phase space variables, \begin{eqnarray}\label{TransCanonica} \widehat{P}_a &=& P_a - C_a \quad , \quad \widehat{P}_- = P_- - C_- \end{eqnarray} Indeed, it can be seen that this transformation preserves the kinetic terms and all the Poisson brackets. Furthermore, one can set $\widehat{P}_-=P^0_-\sqrt{W}$, with $\sqrt{W}$ a regular density on the worldvolume $\Sigma$ corresponding to the determinant of the spatial part of the worldvolume metric. If the target space is now toroidally compactified to $M_9^{LCG}\times T^2$, the embedding maps $X^a (\sigma^1,\sigma^2, \tau)$ decompose into the maps $X^m$ with $m=3,\dots,9$ from the base $\Sigma$ to $M_9^{LCG}$ and the maps $X^r$ with $r=1,2$ from $\Sigma$ to $T^2$. The winding condition on the compact sector, \begin{eqnarray}\label{M} \oint_{C_{s}} dX^r = M_s^r \end{eqnarray} where $M_s^r$ are the elements of the winding matrix, reads in complex coordinates $M^1_s+iM^2_s = 2\pi R(l_s+m_s\tau)$, with $R$, $\tau$ the moduli of the $T^2$ and $l_s,m_s$ the winding numbers. It allows us to define $dX_{h}^r=M^r_sd\widehat{X}^s$ in terms of the orthonormalized harmonic basis $d\widehat{X}^r$, $$\oint_{C_s}d\widehat{X}^r=\delta_s^r$$ with $C_s$ the homology basis of the torus $T^2$. Since the components of the three-form are constant, they can always be expressed as $C_{\pm rs}=c_{\pm} \epsilon_{rs}$ with $c_\pm \in \mathbb{Z}\setminus\{0\}$. One can define $\displaystyle \widetilde{F}_{\pm}=\frac{1}{2}C_{\pm rs}M^r_pM^s_q d\widetilde{X}^p\wedge d\widetilde{X}^q$ with $\widetilde{X}^p$, $p=1,2$, the $T^2$ coordinates.
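As a simple consistency check of these winding conventions (a sketch, under the assumption that the torus area is $A_{T^2}=(2\pi R)^2\operatorname{Im}\tau$; the variable names below are ours), one can verify symbolically that the determinant of the winding matrix $M^r_s$ built from $M^1_s+iM^2_s=2\pi R(l_s+m_s\tau)$ equals $A_{T^2}$ times the integer $l_1 m_2-l_2 m_1$, anticipating the wrapping integer $n$ of the next subsection:

```python
import sympy as sp

# Torus moduli: radius R, complex structure tau = tau1 + i*tau2 (tau2 > 0)
R, tau1, tau2 = sp.symbols('R tau1 tau2', positive=True)
# Winding numbers l_s, m_s with s = 1, 2
l1, l2, m1, m2 = sp.symbols('l1 l2 m1 m2', integer=True)

# Winding matrix M^r_s read off from M^1_s + i M^2_s = 2*pi*R*(l_s + m_s*tau):
# row r = 1, 2 is the real/imaginary part, column s = 1, 2 labels the cycle.
M = sp.Matrix([[2*sp.pi*R*(l1 + m1*tau1), 2*sp.pi*R*(l2 + m2*tau1)],
               [2*sp.pi*R*m1*tau2,        2*sp.pi*R*m2*tau2]])

# Standard area of the T^2 with these moduli (assumption of this sketch)
A_T2 = (2*sp.pi*R)**2*tau2

# det(M)/A_T2 reduces to the moduli-independent integer l1*m2 - l2*m1
n = sp.simplify(M.det()/A_T2)
print(n)
```

The quotient is independent of the moduli $(R,\tau)$, which is what makes the wrapping number a purely topological, integer datum.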
When a quantization condition is imposed on the three-form components $C_{\pm rs}$, the 2-form flux condition is defined \cite{mpgm6}, \begin{eqnarray}\label{fluxpullback} \int_{T^2}C_{\pm}= \int_{T^2}\widetilde{F}_{\pm} = k_{\pm}\in \mathbb{Z}, \quad k_{\pm}\ne 0. \end{eqnarray} The target space flux condition induces a worldvolume flux condition through its pullback. The minimal embedding maps $\widehat{X}^r$ are identified with the $T^2$ torus coordinates $\widetilde{X}^s$, and the worldvolume and target-space fluxes are in one-to-one correspondence \cite{mpgm10}, \begin{eqnarray} \int_{T^2}\widetilde{F}_{\pm} = c_{\pm}\int_\Sigma \widehat{F}, \end{eqnarray} with $\displaystyle \widehat{F}=\frac{1}{2}\epsilon_{rs}dX_h^r\wedge dX_h^s$ defined in terms of the harmonic one-forms. The flux units are $k_{\pm}=nc_{\pm}$, with $n$ the first Chern class associated with $\widehat{F}$. Once the flux condition is imposed, applying the Hodge decomposition the closed one-forms can be written as $dX^r=M^r_sd\widehat{X}^s+d\mathcal{A}^r$, where $d\mathcal{A}^r$ are exact one-forms that transform as symplectic connections under symplectomorphisms. The flux condition acts as a new constraint that modifies the Hamiltonian. The Hamiltonian of the M2-brane on a $C_{\pm}$ flux background then becomes \begin{eqnarray} \label{hamiltoniancpm} H^{\tiny{C_{\pm}}}&=&\int_\Sigma d^2\sigma\sqrt{W}\left[\frac{1}{2}\Big(\frac{\widehat{P}_m}{\sqrt{W}}\Big)^2+\frac{1}{2}\Big(\frac{\widehat{P}_r}{\sqrt{W}}\Big)^2 + \frac{1}{4}\left\{X^m,X^n\right\}^2 + \frac{1}{2}(\mathcal{D}_rX^m)^2 \right. \nonumber \\ &+& \left.
\frac{1}{4}(\mathcal{F}^{rs})^2+\frac{1}{4}(\widehat{F}^{rs})^2 -\bar{\theta}\Gamma_-\Gamma_r\mathcal{D}_r\theta-\bar{\theta}\Gamma_-\Gamma_m\left\{X^m,\theta\right\} +C_+\right]\ \end{eqnarray} where \begin{eqnarray} \left\lbrace A, B \right\rbrace = \frac{\epsilon^{uv}}{\sqrt{W}}\partial_u A \partial_v B \end{eqnarray} is the Lie bracket as defined in \cite{deWit5} and $\widehat{F}^{rs} = \left\lbrace X_h^r,X_h^s \right\rbrace = \frac{1}{2}\epsilon^{rs}\frac{\epsilon^{uv}}{\sqrt{W}}\widehat{F}_{uv}$, with $\widehat{F}_{uv}$ the components of the worldvolume two-form $\widehat{F}$. The action of the worldvolume flux constraint induces the appearance of a monopole contribution in the Hamiltonian and a dynamical symplectic gauge field $\mathcal{A}^r$ \cite{Ovalle3}. The degrees of freedom of the theory are $X^m,\mathcal{A}^r,\theta$. The theory also contains a symplectic covariant derivative and a symplectic curvature, both formerly identified in \cite{Ovalle1}, given by \begin{eqnarray}\label{simplectica} \mathcal{D}_rX^m =D_rX^m+\left\{ \mathcal{A}_r,X^m\right\}, \qquad \mathcal{F}_{rs}= D_r\mathcal{A}_s-D_s\mathcal{A}_r+\left\{ \mathcal{A}_r,\mathcal{A}_s\right\} \end{eqnarray} with $D_r$ a covariant derivative such that $D_1+iD_2=2\pi R (l_s+m_s\tau)\Theta^{s}_r\frac{\epsilon^{uv}}{\sqrt{W}} \partial_u \widehat{X}^r\partial_v$. It is defined in terms of the torus moduli $(R,\tau)$, the windings $l_s,m_s$, and a matrix $\Theta$ associated with the monodromy induced on the base manifold, which is contained in the conjugacy classes of $SL(2,Z)$ \cite{mpgm17}. Its presence is due to the invariance of the theory under the full symplectomorphism group, in particular under those transformations not connected with the identity on $\Sigma$, which change the homology basis on $\Sigma$ and the corresponding basis of one-forms $d\widehat{X}$ \cite{mpgm10}.
This Hamiltonian is subject to the local and global constraints associated with the Area Preserving Diffeomorphisms (APD) as a residual symmetry, \begin{eqnarray} \small \left\{ (\sqrt{W})^{-1}\widehat{P_m} , X^m\right\} + \mathcal{D}_r\left( (\sqrt{W})^{-1}\widehat{P}_r\right)+\left\lbrace (\sqrt{W})^{-1}\bar{S},\theta \right\rbrace &\approx& 0 \label{APDconstraints1}\\ \oint_{C_S}\left[\frac{\widehat{P}_m dX^m}{\sqrt{W}} + \frac{\widehat{P}_r dX^r}{\sqrt{W}} + \frac{\bar{S} d\theta}{\sqrt{W}}\right] &\approx& 0. \label{APDconstraints2} \end{eqnarray} The regularized supersymmetric Hamiltonian \cite{mpgm6} satisfies the sufficiency criteria for discreteness found in \cite{Boulton}. The theory is $N=1$, since it preserves $1/2$ of the supersymmetry \cite{mpgm6}. \subsection{The M2-brane with $C_{\pm}$ fluxes dual to the M2-brane with central charge} The so-called M2-brane with central charges \cite{Ovalle1} corresponds to an irreducibly wrapped toroidal M2-brane that contains an extra topological constraint \cite{Restuccia} associated with an irreducible wrapping, \begin{equation}\label{central charge} \int_{\Sigma}dX^r\wedge dX^s =\epsilon^{rs}nA_{T^2} \end{equation} with the integer $n=\det(\mathbb{W})\ne 0$ defined in terms of the wrapping matrix $\mathbb{W}$, and $A_{T^2}$ the 2-torus area. As shown in \cite{Restuccia2}, the irreducible wrapping condition represents a generalization of the Dirac monopole condition over Riemann surfaces of arbitrary genus $g\ge 1$. Classically, the dynamics of its associated Hamiltonian does not contain string-like spikes with zero energy cost \cite{mpgm}, which at the quantum level would render the supersymmetric spectrum of the theory continuous. The discreteness of its supersymmetric spectrum was rigorously proved in \cite{Boulton} and in \cite{mpgm11}.
The central charge (CC) condition (\ref{central charge}) corresponds to the quantization condition of $\widehat{F}$ and hence establishes a one-to-one relation with the quantization condition of $C_{\pm}$ over the 2-torus target space. In fact, $dX^r\wedge dX^s=\epsilon^{rs}\widehat{F}$, with $\widehat{F}$ defined in the previous section in such a way that it satisfies (\ref{central charge}). As shown in \cite{mpgm6}, this establishes a relation between the two associated Hamiltonian densities as follows: \begin{eqnarray} \mathcal{H}^{C_{\pm}} = \mathcal{H}^{CC} + C_+ \end{eqnarray} For $C_+=0$, the Hamiltonian with central charges and the one with $C_-\ne 0$ fluxes exactly coincide. Hence the discreteness property of the former automatically implies that of the latter. For $C_+\ne 0$, since it is also quantized, the spectrum is discrete and shifted by a constant value proportional to $k_+$. Globally, the M2-branes considered are described in terms of symplectic torus bundles over a torus with a monodromy contained in $SL(2,Z)$ and with a topologically nontrivial $U(1)$ connection \cite{mpgm3}. In \cite{mpgm10} it was proved that these structures generate a twisted torus bundle, where the base manifold is given by the worldvolume Riemann surface $\Sigma$, the fiber is a twisted torus $\mathbb{T}^3$, and the structure group is the group of 2-torus symplectomorphisms, $Symp(T^2)$. The consistency of the global description and all of the details are discussed in \cite{mpgm10}. \section{CM2-brane on Twisted Torus Bundle} \label{Sec3} We now obtain a Hamiltonian formulation of the M2-brane in the presence of a quantized constant three-form on the same target space $M_9\times T^2$, in which the transverse components of the three-form become explicit. To distinguish it from the previous case, we will refer to it as the CM2-brane.
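To make the shift explicit (a schematic sketch consistent with (\ref{hamiltoniancpm}), not an additional result), integrating the relation between the densities over $\Sigma$ gives
$$ H^{C_{\pm}} = H^{CC} + \int_{\Sigma} d^2\sigma\, \sqrt{W}\, C_{+}\,, $$
and since, by the flux condition (\ref{fluxpullback}), the $C_+$ contribution is a constant fixed by $k_+$, every supersymmetric energy level of $H^{C_{\pm}}$ is rigidly shifted with respect to the corresponding level of $H^{CC}$ by a constant proportional to $k_+$.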
Let us note that we can also decompose $C_a$ (\ref{CaLCG}), in order to avoid the nonphysical degrees of freedom, as $C_a=C_a^{(1)} + C_a^{(2)}$ with \begin{eqnarray} C_a^{(1)} &=&-\epsilon^{uv}\partial_u X^- \partial_v X^b C_{-ab} \quad , \quad C_a^{(2)}= \frac{1}{2}\epsilon^{uv}\partial_u X^b \partial_v X^c C_{abc} \end{eqnarray} Therefore, instead of the previous canonical transformation (\ref{TransCanonica}), it is enough to assume \begin{eqnarray}\label{TransCanonicaaa} \widehat{P}_a &=& P_a - C_a^{(1)} \quad , \quad \widehat{P}_- = P_- - C_- \end{eqnarray} to obtain the Hamiltonian in terms of its physical degrees of freedom. In fact, it can be verified that this also constitutes a consistent canonical transformation of the phase space variables, preserving the kinetic term and the Poisson brackets of the theory. The associated $11D$ LCG Hamiltonian becomes \begin{eqnarray}\label{CM2} \mathcal{H}_{CAN}=\left[\frac{1}{2}\frac{( \widehat{P}_a - C_a^{(2)})^2}{\sqrt{W}} + \frac{1}{4}\sqrt{W}\left\lbrace X^a,X^b \right\rbrace^2 - \sqrt{W} \bar{\theta}\Gamma^-\Gamma_a \left\lbrace X^a,\theta \right\rbrace - C_+\right] \end{eqnarray} where, in distinction to the case previously analyzed (\ref{hamiltoniancpm}), the component $C_a^{(2)}$ is present, carrying the information of the $C_{abc}$ components of the three-form, with $a=1,\dots,9$ denoting the transverse components. The trivial dynamics of $\widehat{P}_-$ allows us to set $\widehat{P}_-=\widehat{P}_-^0\sqrt{W}$. If we now perform a toroidal compactification and impose the flux condition (\ref{fluxpullback}), which acts as an extra constraint, the LCG CM2-brane Hamiltonian with $C_{\pm}$ fluxes becomes \begin{eqnarray}\label{HCMIM2} H_{CM2} &=& \int d^2 \sigma \sqrt{W} \left\lbrace \frac{1}{2}\left(\frac{\widehat{P}_m-C_m^{(2)}}{\sqrt{W}}\right)^2 + \frac{1}{2}\left(\frac{\widehat{P}_r-C_r^{(2)}}{\sqrt{W}}\right)^2 + \frac{1}{4} \left\lbrace X^m,X^n \right\rbrace^2 + \frac{1}{4}(\widehat{F}^{rs})^2\right.
\nonumber \\ &+& \left. \frac{1}{2}\left( \mathcal{D}_r X^m\right)^2 +\frac{1}{4} (\mathcal{F}^{rs})^2 -\bar{\theta}\Gamma^-\Gamma_r\mathcal{D}_r\theta - \bar{\theta}\Gamma^-\Gamma_m\left\lbrace X^m,\theta\right\rbrace-C_{+} \right\rbrace \end{eqnarray} where \begin{eqnarray} \small C_m^{(2)} &=& \frac{\epsilon^{uv} }{2}\left[ \partial_u X^{\bar{n}} \partial_v X^n C_{m\bar{n}n} + 2\partial_u X^n \partial_v X^r C_{mnr}+\partial_u X^r \partial_v X^s C_{mrs}\right] \\ C_r^{(2)} &=& \frac{\epsilon^{uv}}{2}\left[\partial_u X^m \partial_v X^n C_{rmn} + 2\partial_u X^m \partial_v X^s C_{rms}\right] \end{eqnarray} subject to the same APD constraints (\ref{APDconstraints1}) and (\ref{APDconstraints2}). The only nontrivial contribution to the $C_+$ term comes from the $C_{+rs}$ components. In the Hamiltonian (\ref{HCMIM2}) the contribution of the transverse components of the supergravity three-form $C_{abc}$, with $a=(m,r)$, appears explicitly, in distinction to the Hamiltonian of an M2-brane with $C_\pm$ fluxes (\ref{hamiltoniancpm}). However, a redefinition of the phase space variables based on the following canonical transformation reveals that both Hamiltonians, i.e. the CM2-brane and the M2-brane with $C_{\pm}$ fluxes, are equivalent: \begin{eqnarray}\label{ct3}\widetilde{P}_m = \widehat{P}_m - C_m^{(2)},\quad \widetilde{P}_r = \widehat{P}_r - C_r^{(2)}. \end{eqnarray} Consequently, the two theories previously obtained and this new one must share the same qualitative spectral property, namely the discreteness of the supersymmetric spectrum. This result is not obvious from a direct examination of (\ref{HCMIM2}) at the regularized level, due to the coupling between the pullback of the scalar fields and the canonical momenta. Indeed, the sufficiency criteria for the discreteness of the supersymmetric spectrum \cite{Boulton} cannot be directly applied.
Because the sector of the M2-brane with central charges and the one with a quantized constant three-form, represented by the CM2-brane, can also be connected by canonical transformations, this can be interpreted as additional evidence of a duality between these two previously unconnected sectors, realized at the level of the Hamiltonian and of the mass operator. Globally, the CM2-brane can also be described in terms of a twisted torus bundle with monodromy contained in $SL(2,Z)$. \section{D-branes from nontrivial M2-branes} In this section, we obtain the D-brane description from the CM2-brane Hamiltonian formulation. To this end, we use the scalar/vector duality, and we will refer to these sectors as duals in the aforementioned sense. In this paper we choose the simplest possible background where the nontriviality of the M2-brane becomes manifest. The formulation is done in the LCG, since the toroidally nontrivial M2-branes have been formulated in this context in order to establish their spectral properties. The LCG fixing commutes with the dualization procedure. Previous formulations of the LCG D2-brane Hamiltonian in ten dimensions, in the absence of RR and NSNS background fields, were given in \cite{Manvelyan1,Manvelyan2,Lee}. For the case of toroidally compactified target spaces, the Hamiltonian was obtained in \cite{Ovalle1,Ovalle2}. The virtue of working with the CM2-brane formulation is that, under dualization, the presence of the $C_{abc}$ transverse components generates the B-field coupling inside the Born-Infeld action explicitly. From the CM2-brane dual formulation, the duals of the other nontrivial M2-brane cases previously discussed can be obtained. The 11D flux condition is dualized to produce 10D RR and NSNS flux conditions. When the D2-brane theories contain fluxes associated with the presence of RR and/or NSNS charges, they have been conjectured to admit a description in terms of D-brane bound states.
\subsection{D-brane description of the CM2-brane} The action corresponding to the CM2-brane with $C_{\pm}$ fluxes subject to the APD constraints (\ref{APDconstraints1}) and (\ref{APDconstraints2}) can be obtained from (\ref{HCMIM2}) by a Legendre transformation. Following \cite{Townsend4}, let us consider a compactification on an extra circle, $X^m=(X^\alpha,X^9)$ with $\alpha=3,\dots,8$, and isolate the contribution of $X^9$ in the action. If we promote $L=dX^9$ to an independent one-form on the worldvolume by adding a topological term $AdL$, such that $dL=0$, we have \begin{eqnarray}\label{dualaction} S &=& S_0 + \int d^3\xi\left[ \frac{1}{2}\sqrt{W}\left(L_0+\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u\Lambda L_v\right)^2 - \frac{1}{2}\sqrt{W}\left(\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u X^\alpha L_v\right)^2\right. \nonumber \\ &-& \left. \frac{1}{2}\sqrt{W}\left(L_0+\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u\Lambda L_v\right)\widehat{F}^{rs} B_{rs} - \frac{1}{2}\sqrt{W}\left(\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u X^r L_v\right)^2 \right.\nonumber \\ &-& \left. \sqrt{W}\left(L_0+\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u\Lambda L_v\right)(\mathcal{F}^{rs} B_{rs})-\frac{1}{2}\sqrt{W}\left(\frac{\epsilon^{uv}}{\sqrt{W}}\partial_u X^s L_v B_{rs}\right)^2\right. \nonumber \\ &-& \left. \sqrt{W}\left(\frac{\epsilon^{uv}}{\sqrt{W}}L_u X^r B_{+r}\right) \right]- \frac{1}{2}\int d^3\xi \left\lbrace \epsilon^{uv}L_0F_{uv} + 2\epsilon^{uv}L_uF_{v0} \right\rbrace \end{eqnarray} where $L_0=\Dot{X}^9$, $L_u=\partial_uX^9$, $F=dA$, $\Lambda$ is the Lagrange multiplier, and $B_{rs}=C_{9sr}$ are the components of the NSNS background field on the compact sector. The term denoted $S_0$ corresponds to \begin{eqnarray} S_0&=& \int d^3\sigma \left\lbrace \frac{1}{2}\frac{\widehat{P}_\alpha \widehat{P}^\alpha}{\sqrt{W}} + \frac{1}{2}\frac{\widehat{P}_r \widehat{P}^r}{\sqrt{W}} - \frac{1}{4} \left\lbrace X^\alpha,X^\beta \right\rbrace^2 - \frac{1}{4}(\widehat{F}^{rs})^2\right. \nonumber \\ &-& \left.
\frac{1}{2}\left( \mathcal{D}_r X^\alpha\right)^2 -\frac{1}{4} (\mathcal{F}^{rs})^2+C_{+} \right\rbrace \end{eqnarray} For simplicity we have considered $C_{\pm rs}^{(10)}$, $B_{\pm r}$ and $B_{rs}$ to be the only nontrivial, constant components of the background fields in 10D. However, it is easy to verify that $B_{-r}$ and $C_{-rs}$ do not appear explicitly in the Hamiltonian.\newline From the equations of motion we get \begin{eqnarray} L_0&=&\left[\frac{\epsilon^{uv}F_{uv}}{2\sqrt{W}} - \left\lbrace \Lambda,X^9\right\rbrace + \frac{1}{2}\widehat{F}^{rs} B_{rs} + \frac{1}{2}\mathcal{F}^{rs}B_{rs} \right] \\ L_\omega &=& \sqrt{W}\frac{\Tilde{\gamma}_{\omega v}}{\Tilde{\gamma}} \left[ \frac{1}{2}\frac{\epsilon^{\rho\sigma}F_{\rho\sigma}}{\sqrt{W}}\epsilon^{v\bar{v}}\partial_{\bar{v}}\Lambda - \epsilon^{v\bar{v}}F_{0\bar{v}} - \epsilon^{v\bar{v}}\partial_{\bar{v}} X^r B_{+r}\right] \end{eqnarray} By inserting these expressions in the previous action and performing a Legendre transformation, the corresponding Hamiltonian on $M_8\times T^2$ becomes \begin{eqnarray}\label{HCMIM22} \mathcal{H}_{dual} &=& \frac{1}{2} \frac{\widehat{P}_\alpha \widehat{P}^\alpha}{\sqrt{W}} + \frac{1}{2}\frac{(\widehat{P}_r - B_r)^2}{\sqrt{W}} + \frac{1}{2}\frac{\Pi^u\Pi^v \widetilde{\gamma}_{uv}}{\sqrt{W}} + \frac{G}{2\sqrt{W}} + \sqrt{W}\frac{1}{4}(\widehat{F}^{rs})^2 \nonumber \\ &+& \frac{1}{4}\sqrt{W}(\mathcal{F}^{rs})^2+ \frac{1}{2}\sqrt{W}(\mathcal{D}_r X^\alpha)^2 - C_{+}^{(10)} - B_+ \end{eqnarray} where\footnote{By $\gamma$ we denote $\gamma=\det(\gamma_{uv})= \det(\partial_uX^\alpha\partial_vX_\alpha)$ and $\widetilde{\gamma}_{uv}=\gamma_{uv}+\partial_uX^r\partial_v X_r$.} $G=\gamma + \mathcal{F}^{DBI}$, $\quad B_r=\Pi^u\partial_u X^s B_{rs}, \quad B_+=\Pi^u\partial_u X^s B_{+s}, \quad \Pi^u=\epsilon^{uv}L_v$, and $\mathcal{F}^{DBI}= \det(F_{uv} + B_{uv})$ is given by \begin{eqnarray} \small \label{Fcursiva} \mathcal{F}^{DBI} &=& F +
b_2(\sqrt{W})^2\left[\frac{1}{2}b_2\left(\widehat{F}^{rs}\epsilon_{rs} + \mathcal{F}^{rs}\epsilon_{rs}\right)^2 + (*F)\left( \widehat{F}^{rs} + \mathcal{F}^{rs}\right)\epsilon_{rs}\right] \end{eqnarray} with $B_{uv}=\frac{1}{2}\partial_u X^r \partial_v X^s B_{rs}$ and $B_{rs}=b_2 \epsilon_{rs}$, and where the RR ten-dimensional $C_+^{(10)}$ has the same form as its 11D dual counterpart $C_+$ (\ref{CaLCG}). The LCG Hamiltonian (\ref{HCMIM22}) is subject to the Gauss law and to the local and global APD constraints, \begin{eqnarray}\label{D2constraints} \partial_u \Pi^u &\approx& 0. \label{Gaussconstraint}\\ \left\{ (\sqrt{W})^{-1}\widehat{P_\alpha} , X^\alpha\right\} + \mathcal{D}_r\left( (\sqrt{W})^{-1}\widehat{P}_r\right) + \epsilon^{uv}\partial_u\left[ \frac{\Pi^\omega F_{v\omega}}{\sqrt{W}} \right] &\approx& 0 \label{APD1}\\ \oint_{C_S}\left[\frac{\widehat{P}_\alpha \partial_u X^\alpha}{\sqrt{W}} + \frac{\widehat{P}_r \partial_u X^r}{\sqrt{W}} + \frac{\Pi^\omega F_{u\omega}}{\sqrt{W}}\right]d^u\sigma &\approx& 0. \label{APD2} \end{eqnarray} The M2-brane flux conditions are used to derive the D-brane flux conditions. As originally observed in the covariant formulation by \cite{Townsend4}, if one performs, in the LCG, the dualization of the WZ term in eleven dimensions, the background fields become $C_{\pm} = C_{\pm}^{(10)} + B_{\pm}$\label{C+11} and $C_a = C_a^{(10)} - B_a$, with $B_-=\Pi^u\partial_u X^s B_{-s}$. Gauge invariance of the two-form allows us to fix $B_{-s}=0$.
Hence, it can be seen that the $C_{\pm}$ quantization condition in $D=11$ implies, in the dual action, \begin{eqnarray}\label{quantization} \int_{\widetilde{\Sigma}} C^{(10)}_{\pm}=k_{\pm}^{(10)},\quad \int_{\widetilde{\Sigma}} B_+ = b_+,\quad \int_{\widetilde{\Sigma}} B_2 = b_2 \int_{\widetilde{\Sigma}} \widehat{F} =b_2 n \end{eqnarray} where the components $C_{\pm rs}^{(10)}=c_{\pm}^{(10)}\epsilon_{rs}$ and $B_{rs}=b_2\epsilon_{rs}$, with $k_{\pm}^{(10)}=n c_{\pm}^{(10)}$, satisfy $k_{\pm}=k_{\pm}^{(10)}+b_+$. The components of the 10D RR three-form are constant and they generate 2-form fluxes whose pullback on the worldvolume of the D2-brane, in analogy with the M2-brane analysis, is associated with a topologically nontrivial $U(1)$ curvature. There is an extra contribution to the B-field that appears in the DBI term, associated with the pullback of the Kalb-Ramond field $B_2=\frac{1}{2}B_{rs}dX^r\wedge dX^s$. Since its coefficient is also constant, this contribution also generates a two-form flux condition on the worldvolume. At first sight, it could seem that no flux quantization condition acts on it. However, because the coefficient is constant, the worldvolume flux condition is automatically guaranteed by the central charge condition induced by the $C_{\pm}$ quantization of the M2-brane. This $U(1)$ worldvolume monopole condition, found in \cite{Restuccia} and associated with a first nontrivial Chern class given by $k_{\pm}+b_2$, contributes to the Hamiltonian with a nontrivial curvature $\widehat{F}$ associated with a nontrivial $U(1)$ connection $\widehat{A}$ under symplectomorphism transformations. As a consequence, the dual Hamiltonian of the CM2-brane on $M_9^{LCG}\times T^2$ can be understood as that of a D2-brane with 2-form fluxes given by (\ref{quantization}).
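The relation between the 11D and 10D flux units follows in one line (a sketch using the dualized background fields of the previous section): integrating $C_{\pm}=C_{\pm}^{(10)}+B_{\pm}$ over $\widetilde{\Sigma}$ and using (\ref{quantization}),
$$ k_{\pm}=\int_{\widetilde{\Sigma}} C_{\pm}=\int_{\widetilde{\Sigma}} C_{\pm}^{(10)}+\int_{\widetilde{\Sigma}} B_{\pm}=k_{\pm}^{(10)}+b_{+}\,, $$
where the $B_-$ contribution drops out once the gauge fixing $B_{-s}=0$ is imposed.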
Note that, in the same way that the flux constraint for the nontrivial M2-brane generates a symplectic gauge field with an associated symplectic field strength $\mathcal{F}$, the same happens in the D2-brane associated with the CM2-brane with $C_{\pm}$ fluxes. As previously stated, because the D2-brane has RR three-forms and NSNS two-forms, both of which generate 2-form fluxes, an open question is whether it can be described in terms of Dp-brane bound states. This 'dual' D-brane theory inherits from the nontrivial M2-brane the same type of $U(1)$ topologically nontrivial gauge field. In fact, this condition was discovered in \cite{Restuccia} and it represents a generalization of the Dirac monopole condition to Riemann surfaces of genus $g\ge 1$ \cite{Restuccia2}. This quantization condition acts as an extra constraint on the Hamiltonian, generating new dynamical fields, as we will see in more detail in section 5. In \cite{mpgm11} the spectral properties of the $SU(N)$ regularized M2-brane with central charge and of the $0+1$ dimensionally reduced D2-D0 bound state were compared. Though at first sight both models seem quite similar and carry an RR charge, they are associated with different monopole conditions and their spectra are completely different. In \cite{mpgm11} it was rigorously shown that the regularized M2-brane theory with central charges has a purely discrete supersymmetric spectrum, while the dimensionally reduced D2-D0 system has a continuous spectrum, bounded from below by the $U(N)$ monopole energy contribution and extending to infinity, as originally conjectured in \cite{Witten4}. On the D-brane side, the CM2 dual (\ref{HCMIM22}) has a DBI curvature $F$ that is not quantized, in contrast to D2-D0 bound states. This analysis, however, does not rule out the possibility of expressing it in terms of more complicated bound state constructions, such as in \cite{mpgm20}.
\section{D-brane description: new features} In this section, we emphasize the physical implications of the nontrivial D2-branes. On the one hand, we have seen that the quantization condition on the M2-branes implies the existence of nontrivial quantization conditions on the constant RR and NSNS background fields of the D2-brane. These particular quantization conditions generate worldvolume 2-form flux fields. According to \cite{Restuccia2}, this automatically implies that the associated D2-branes, which we will refer to as nontrivial D2-branes in the following, must have a monopole charge given by the units of flux turned on over the worldvolume. On the other hand, in the cases where $B_{rs}$ is quantized, the $\mathcal{F}^{U(1)}$ appearing in the Dirac-Born-Infeld action is not topologically trivial. We will see that the nontrivial D2-brane possesses different degrees of freedom, hence dynamical fields, than those associated with a usual toroidally compactified D2-brane. On top of the embedding scalar fields $X^m$ and the standard $U(1)$ DBI gauge field $A_u$, there appears a new single-valued symplectic gauge field $\mathcal{A}_r$, together with new nontrivial symmetries on the worldvolume. \subsection{New gauge Symmetries} The dual description of a D2-brane contains two characteristic symmetries related to the worldvolume $\widetilde{\Sigma}$: the DBI $U(1)$ symmetry related to the Gauss constraint, and the symplectomorphisms. It is easy to see that $A_u$ on $\widetilde{\Sigma}$ transforms under the Gauss constraint as a $U(1)$ gauge field \begin{eqnarray} \delta A_u = \partial_u \Omega, \label{U(1)DBI} \end{eqnarray} where $\Omega=-\Lambda$ is the parameter of the transformation. The dynamical variables $(X^m,\mathcal{A}^r,A_u)$ also transform under the APD constraint.
In fact, concerning the symplectomorphisms connected to the identity, any functional $O$ of the canonical variables transforms locally under ${Symp}_0(T^2)$ as \cite{Restuccia,Restuccia3} \begin{eqnarray} \delta O &=& \left\lbrace O, <\xi \phi>\right\rbrace_{PB} \end{eqnarray} where $\phi$ is the local constraint (\ref{APD1}) and $<\cdot>$ denotes integration over $\widetilde{\Sigma}$; for example, for the first term in (\ref{APD1}) we have \begin{eqnarray} <d\xi\wedge \frac{P_m}{\sqrt{w}}dX^m> \equiv \int_{\widetilde{\Sigma}} d^2 \sigma \sqrt{w} \left(d\xi\wedge \frac{P_m}{\sqrt{w}}dX^m\right) \end{eqnarray} with $\xi=\xi(\sigma,\tau)$ the continuous parameter that contains both the local and the global parameters associated with the $Symp_0(\Sigma)$ transformations \cite{Restuccia4}. Therefore, under symplectomorphisms connected to the identity we have \begin{eqnarray} \delta_{\xi} X^m &=& \left\lbrace \xi, X^m \right\rbrace \\ \delta_{\xi} A_u &=& \xi^v F_{vu} = \left\lbrace \xi, A_u \right\rbrace + \xi^v\partial_u A_v, \end{eqnarray} where $m$ runs over the compact and non-compact indices, $m=(r,\alpha)$ with $r=1,2$ and $\alpha=3,\dots,8$. If we now examine the maps of the compact sector under symplectomorphism transformations, taking into account the worldvolume flux condition and the Hodge decomposition, we can split them in the following manner, similar to the study performed in \cite{mpgm10}, \begin{eqnarray}\label{eqn 4.6} \delta_{\xi} X^r =\left\lbrace \xi, X^r\right\rbrace \,= \delta X_h^r + \delta \mathcal{A}^r . \end{eqnarray} Let us remark that the single-valued function $\mathcal{A}^r$ of the embedding map defines an associated one-form $d\mathcal{A}^r$, with $r=1,2$ the index running over the directions of the $T^2$, which transforms as a symplectic connection, in contrast to the DBI $U(1)$ connection $A_u$, with $u=1,2$, defined over $\widetilde{\Sigma}$.
\subsubsection{Symplectic Gauge Symmetry} Similarly to the nontrivial M2-branes considered, we define a class of maps whose associated one-forms $dX_h$ are expressed in terms of the harmonic basis, such that under infinitesimal transformations of the type $\delta X_h^r=\{\nu, X_h^r\}$, with $\nu$ an infinitesimal parameter, a curvature $\widehat{F}=\epsilon_{rs}dX_h^r\wedge dX_h^s$, with $r,s=1,2$, is preserved. When the symplectomorphism transformations (\ref{eqn 4.6}) are realized as follows, \begin{equation} \delta [X_h^r] = 0 \quad \text{and} \quad \delta \mathcal{A}^r = \mathcal{D}_r\xi \, , \end{equation} the multivalued $X_h^r$ becomes inert under the transformation and all of the transformation is realized by the scalar field $\mathcal{A}^r$. The transformation law for $d\mathcal{A}^r$ corresponds to that of a symplectic one-form connection over $\widetilde{\Sigma}$; the associated one-form transforms as a symplectic gauge field. Its symplectic curvature $\mathcal{F}=\mathcal{D}\mathcal{A}+\{\mathcal{A},\mathcal{A}\}$, equivalent to (\ref{simplectica}), is topologically trivial. It defines a symplectic covariant derivative $\mathcal{D}\bullet=D\bullet+\{\mathcal{A},\bullet\}$ whose action preserves the transformation law of its argument under symplectomorphisms. In the context of toroidally nontrivial M2-branes, a similar symplectic gauge field, its associated covariant derivative, and its curvature had previously been identified. There, as here, their origin is found in the action of the specific constant 2-form fluxes over the worldvolume, which act as an extra constraint on the Hamiltonian, imposing a restriction on the embedding maps. \subsubsection{U(1) Gauge Symmetries} Any D-brane worldvolume action contains a characteristic DBI $U(1)$ connection $A = A_u d\sigma^u$, where the transformation of $A_u$ under the Gauss constraint is given by (\ref{U(1)DBI}).
2-form fluxes, on the other hand, naturally induce a topologically nontrivial $U(1)$ over their worldvolume, though the precise effect on the Hamiltonian depends on the type of flux and its origin. Let us notice that the flux conditions on $C_{\pm}$ and $B_2$ given by (\ref{quantization}) imply the existence of a nontrivial U(1) one-form $\widehat{A}=\frac{1}{2}\epsilon_{rs}X_h^rdX_h^s$, with an associated curvature $\widehat{F}=d\widehat{A}$ defined as in \cite{mpgm10}. In the case of constant 2-form fluxes, the associated $U(1)$ dynamical gauge symmetry is related to a one-form defined in terms of scalar fields that transforms nontrivially under symplectomorphisms. We recall that this symmetry is generated by the APD constraint over the D2-brane worldvolume $\widetilde{\Sigma}$, through the following transformation consistent with (\ref{eqn 4.6}) \begin{equation}\label{transformacion1} \delta [X_h^r] = \left\lbrace \xi, [X_h^r] \right\rbrace \quad \text{and} \quad \delta \mathcal{A}^r = \left\lbrace \xi, \mathcal{A}^r \right\rbrace \,. \end{equation} In fact, we may also define $A_{\pm}=k_{\pm}\widehat{A}$ and $A_B=b_2\widehat{A}$, the nontrivial U(1) connections reminiscent of the quantization conditions on the constant RR three-form and the constant NSNS 2-form, with $F^{(wv)}=k_{\pm}\widehat{F}$ and $B_2=b_2 \widehat{F}$, respectively. They contribute to the total amount of 2-form flux defined on the D2-worldvolume. In fact, it can be checked that $\widehat{A}$ does transform as a U(1) connection under symplectomorphisms \begin{eqnarray} \delta \widehat{A} =d \eta \,, \quad\textrm{with parameter\quad } \eta = -\frac{\epsilon^{uv}}{\sqrt{w}}\partial_v\xi \left(\widehat{A}_u\right) -\xi\left(\star\widehat{F}\right), \end{eqnarray} where \begin{eqnarray} \frac{1}{2\pi}\int_\Sigma \widehat{F} = n. \end{eqnarray} As a result, the nontrivial one-form connection on the worldvolume is $A_{\pm}+A_{B}=(b_2+k_{\pm})\widehat{A}=k\widehat{A}$. 
Due to the constant nature of the NSNS background field $B_2$, its pullback to the worldvolume corresponds to the field strength of a one-form, which also transforms as a nontrivial U(1) connection under symplectomorphisms. Closely following \cite{mpgm10}, it is possible to define a new dynamical, topologically trivial $U(1)$ one-form $\mathcal{A}_G$ in terms of the scalar embedding maps as follows, \begin{eqnarray} \mathcal{A}_G = \frac{1}{2}\epsilon_{rs}(\mathcal{A}^rdX_h^s-\mathcal{A}^sdX_h^r+\mathcal{A}^rd\mathcal{A}^s). \end{eqnarray} It transforms as a $U(1)$ connection under the symplectomorphisms specified by (\ref{transformacion1}) \begin{eqnarray} \delta \mathcal{A}_G &=& d \widetilde \eta \,, \quad \widetilde \eta\equiv \left( -\frac{\epsilon^{uv}}{\sqrt\omega} \partial_{v}\xi (\frac{1}{2}\epsilon_{rs}\mathcal{A}^r\partial_{u}X_h^s) -\xi \left(* \widehat F \right)\right) \label{ACBcursiva6} \end{eqnarray} with an associated curvature $\mathcal{F}_{G}=d\mathcal{A}_G$ which is topologically trivial. A particular property of this curvature is that it coincides with the symplectic curvature (\ref{Fcursiva}); in fact, this was shown in the context of the M2-brane in \cite{mpgm10}. Furthermore, both structures are consistent with the irreducible wrapping condition, and the same analysis remains valid here. Hence, it is possible to define a more general $U(1)$ connection $\widetilde{\mathbb{A}}$ in terms of $\widehat{A}$ and $\mathcal{A}_G$, \begin{eqnarray}\label{Agordita} \widetilde{\mathbb{A}} = \widehat{A} + \beta \mathcal{A}_G \end{eqnarray} with $\beta$ a real scalar. The connection is defined on the same nontrivial principal bundle characterized by the first Chern class $n$. 
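That the Chern class is unchanged can be checked directly: since the curvature of $\mathcal{A}_G$ is topologically trivial while $\widehat{F}$ carries $n$ units of flux, \begin{eqnarray} \frac{1}{2\pi}\int_\Sigma \left(\widehat{F}+\beta\,\mathcal{F}_G\right) = \frac{1}{2\pi}\int_\Sigma \widehat{F} + \frac{\beta}{2\pi}\int_\Sigma \mathcal{F}_G = n + 0 = n \,, \end{eqnarray} for any value of $\beta$. 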
It has an associated curvature $\mathbb{F} = d\widetilde{\mathbb{A}}$ that satisfies \begin{eqnarray} \frac{1}{2\pi}\int_\Sigma \mathbb{F}= n\ne0 \end{eqnarray} with \begin{eqnarray} \mathbb{F}=\widehat{F}+\beta\mathcal{F}. \end{eqnarray} As a result, the D-brane description of the nontrivial M2-brane duals discussed here has two U(1) gauge symmetries, whose curvatures are respectively $F$ and $\mathbb{F}$, in contrast with the usual case. Indeed, $\mathcal{F}^{DBI}$ can be re-expressed in terms of this new dynamical, topologically nontrivial $U(1)$ as \begin{eqnarray} \small \label{Fcursiva2} \mathcal{F}^{DBI} &=& F + \frac{1}{2}b_2(\sqrt{W})^2\left[b_2\left(\mathbb{F}^{rs}\epsilon_{rs}\right)^2 + 2(*F)\mathbb{F}^{rs}\epsilon_{rs}\right]. \end{eqnarray} This naturally reinforces the idea that $H_{dual}$ could be better described by a bound state of D-branes. \subsection{Twisted torus Bundle description} The characteristics of the aforementioned nontrivial D2-branes introduce new aspects into the bundle description, which we now characterize. The topologically nontrivial part of a standard toroidal D2-brane is characterized by two independent fibers: a principal torus bundle defined over its worldvolume $\widetilde{\Sigma}$ and a $U(1)$ fiber associated with the DBI contribution. In the presence of two-form fluxes that are not associated with the DBI gauge field, a nontrivial $U(1)$ fiber is added. As we will see, in the case we are considering there is an extra relationship between the fibers which changes the overall construction. In order to introduce the global description of the nontrivial M2-brane duals, let us recall that the D2-branes are formulated on $M_8\times T^2$ with a foliation of the worldvolume, such that $\widetilde{\Sigma}$ is a Riemann surface of genus one related to the spatial directions of the worldvolume. It carries a topologically trivial $U(1)$ principal bundle associated with the DBI contribution. 
The flux condition induced by the constant quantized background implies, through Weil's theorem, the existence of a U(1) principal bundle over $\widetilde{\Sigma}$ \begin{eqnarray} U(1)\rightarrow E' \rightarrow \widetilde{\Sigma}. \end{eqnarray} Now, because the D2-branes are toroidally compactified and the flux acts as an extra constraint, in close analogy with \cite{mpgm3}, the theory admits a symplectic torus bundle description with monodromy in $SL(2,Z)$ defined as \begin{eqnarray} T^2 \rightarrow E \rightarrow \widetilde{\Sigma} \, ,\hspace{0.5cm} G=Symp(T^2) \end{eqnarray} where $T^2$ is the compact part of the fiber, $\widetilde{\Sigma}$ the base, $E$ the total space and $G$ the structure group of the fiber bundle. It is possible to define a Maurer-Cartan structure between the flux condition and the torus of the fiber, such that they define a twisted three-torus. In fact, we may define three global one-forms \begin{eqnarray} e^1&=&d\widetilde{X}^1 , \\ e^2&=&d\widetilde{X}^2 , \\ e^3&=&dy + k\widetilde{X}^1d\widetilde{X}^2 \end{eqnarray} where $(\widetilde{X}^1,\widetilde{X}^2)\in T^2$ and $y$ is a coordinate on the $S^1$ associated with the nontrivial $U(1)$ fiber, such that the global one-forms $e^1$, $e^2$ and $e^3$ satisfy the structure equation \begin{eqnarray} de^3=f^3_{12}e^1\wedge e^2 \end{eqnarray} with $f^3_{12}=k$ the Maurer-Cartan structure constant. In consequence, as in \cite{mpgm10}, the two fibers combine into a twisted torus bundle \begin{eqnarray} (T^2)^{U(1)} \rightarrow E \rightarrow \Sigma \, ,\hspace{0.5cm} G=Symp(T^2) \end{eqnarray} where $(T^2)^{U(1)}$ denotes a 2-torus with a U(1) monopole connection over it, equivalent to a twisted 3-torus. Nontrivial D2-branes with worldvolume and background fluxes on $M_8\times T^2$ can thus be described geometrically as twisted torus bundles with monodromy in $SL(2,Z)$ and an extra trivial $U(1)$ principal bundle associated with the DBI gauge symmetry. 
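For completeness, note that the structure equation follows directly from exterior differentiation: since $dy$, $d\widetilde{X}^1$ and $d\widetilde{X}^2$ are closed, \begin{eqnarray} de^3 = d\left(dy + k\,\widetilde{X}^1 d\widetilde{X}^2\right) = k\, d\widetilde{X}^1\wedge d\widetilde{X}^2 = k\, e^1\wedge e^2 \,, \end{eqnarray} so the only nonvanishing structure constant is $f^3_{12}=k$, fixed by the number of units of flux. 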
They are duals of nontrivial M2-branes on $M_9\times T^2$ formulated on a twisted torus bundle with monodromy in $SL(2,Z)$. \section{D2-branes on a RR and NSNS background with fluxes.} In this section, we show how, when certain 2-form fluxes are included, the LCG Hamiltonian (\ref{HCMIM22}) can also be obtained directly from the D2-brane. It is important to note, however, that this formulation differs from that of a usual D2-brane with RR and NSNS background fields. To demonstrate this, we illustrate its direct construction, emphasizing the key points. We investigate the LCG D2-brane in the presence of constant RR and NSNS backgrounds subject to quantization conditions, since it will be compared with the LCG formulation of the M2-brane on a general quantized constant supergravity three-form $C_3$, which is relevant for the M2-brane spectral characterization. By means of the dimensional reduction and the scalar/vector duality we have obtained the D-brane related to the nontrivial CM2-brane. The covariant formulation of the Dp-brane action in the presence of RR and NSNS backgrounds was originally found in \cite{Townsend5}. The LCG D2-brane Hamiltonian on a flat background was obtained in \cite{Manvelyan1, Lee}; however, in these works the coupling to the RR and NSNS background fields was not considered. As previously mentioned in relation to the M2-brane \cite{mpgm6}, the difficulty of the formulation lies in the proper handling of the non-physical degrees of freedom. It requires suitable canonical transformations for the D2-brane theory that allow us to eliminate the $X^-$ dependence without introducing nonlocalities. 
Consider the Lagrangian density of a D2-brane on a 10D Minkowski spacetime, coupled to the RR 3-form and NSNS 2-form background fields, \begin{eqnarray} \mathcal{L}_{D2}=-\sqrt{-\Bar{G}} -\frac{1}{3!}\epsilon^{ijk}\partial_iX^\mu \partial_jX^\nu\partial_kX^\rho C^{(10)}_{\mu\nu\rho}, \end{eqnarray} with $X^{\mu}(\sigma^1,\sigma^2,\tau)$ the D2-brane embedding maps from the worldvolume $\tilde{\Sigma}$ into the $10D$ target space $M_{10}$, labelled by $\mu,\nu,\rho=0,\dots,9$, and $i,j,k=0,1,2$ denoting the worldvolume indices. Let us begin by defining the generalized metric, \begin{eqnarray} \Bar{G} &=& \det(\gamma_{ij}+\mathcal{F}_{ij}^{DBI}), \end{eqnarray} in terms of the worldvolume induced metric $\gamma_{ij}=\partial_i X^\mu \partial_j X_\mu$ and the curvature \begin{eqnarray} \mathcal{F}^{DBI}_{ij} &=& F_{ij}+ B_{ij}. \end{eqnarray} The latter is defined in terms of the $U(1)$ Born-Infeld field strength $F_{ij}=\partial_iA_j-\partial_jA_i$ and a B-field which is the pullback of the NSNS two-form background to the worldvolume, $B_{ij}= \partial_iX^\mu \partial_j X^\nu B_{\mu\nu}$. It is straightforward to see that the LCG Lagrangian density is given by \begin{eqnarray} \label{LCG L RR NSNS} \mathcal{L}&=& -\sqrt{\triangle G} + C^{(10)}_{+-} + C^{(10)}_+ + C^{(10)}_M\partial_0X^M + C^{(10)}_-\partial_0X^-, \end{eqnarray} where $\Bar{G}=-\triangle{G}$ with $\triangle=-G_{00}+G_{0u}G^{uv}G_{v0}$, $G=\det G_{uv}$ and $u,v=1,2$ the spatial worldvolume indices. Following the notation of \cite{deWit}, the pullbacks of the three-form components are \begin{eqnarray} C^{(10)}_M &=& \frac{1}{2}\epsilon^{uv}\partial_u X^N\partial_vX^LC^{(10)}_{MNL} -\epsilon^{uv}\partial_u X^-\partial_vX^NC^{(10)}_{-MN}, \\ C^{(10)}_{\pm}&=& \frac{1}{2}\epsilon^{uv}\partial_uX^M\partial_vX^N C^{(10)}_{\pm MN} , \label{Cmasmenos} \\ C^{(10)}_{+-} &=& \epsilon^{uv}\partial_uX^-\partial_v X^N C^{(10)}_{+-N}. 
\end{eqnarray} By performing a Legendre transformation we obtain the canonical Hamiltonian \begin{eqnarray} \mathcal{H}_{CAN} &=& \frac{1}{2(P_--C^{(10)}_- -B_{-})}\left[(P_M-C^{(10)}_M-B_M)^2 + \Pi^u\Pi^v\gamma_{uv} + {G}\right], \nonumber \\ &-& B_+ +\Pi^u\partial_u A_0-C^{(10)}_{+-}-C^{(10)}_+, \end{eqnarray} subject to the primary constraints \begin{eqnarray} \varphi &=& \Pi^0 \approx 0 , \\ \phi_{u} &=& P_M\partial_uX^M + P_-\partial_uX^-+\Pi^vF_{uv} \approx 0, \end{eqnarray} with \begin{eqnarray} B_M&=&\Pi^u(\partial_u X^-B_{M-}+\partial_uX^{N}B_{MN}), \label{BM} \\ B_+ &=& \Pi^u(\partial_u X^-B_{+-}+\partial_uX^N B_{+N}). \label{Bmasmenos} \end{eqnarray} Preservation in time of $\varphi$ yields the Gauss constraint (\ref{Gaussconstraint}) as a secondary constraint. Moreover, $\phi_u$ is also preserved in time and the three constraints are first class. By fixing the residual gauge symmetries related to the transformations of the RR three-form and of the NSNS two-form, one can set $B_{+-}=B_{-M}=C^{(10)}_{+-M}=0$, and $C^{(10)}_{-MN}$ can likewise be fixed to zero. If instead we choose a background where $C_{-MN}$ is nonzero but constant, once a quantization condition is imposed on $C^{(10)}_{-}$ it is no longer possible to take this value continuously to zero. Analogously to the methodology discussed in Section \ref{Sec3}, it is possible to find a suitable canonical transformation in which the D2-brane Hamiltonian may be re-expressed in terms of the physical variables, \begin{eqnarray} \widehat{P}_M&=& P_M-C^{(10)}_M , \\ \widehat{P}_- &=& P_--C^{(10)}_-. \end{eqnarray} This is a consistent canonical transformation that preserves all the Poisson brackets of the theory and the kinematical contribution. 
In order to obtain the physical Hamiltonian, the gauge invariance of the theory has been fixed according to \begin{eqnarray} & &\Pi^0 \approx 0 \rightarrow \psi_1=A_0\approx 0 , \\ & & \phi_u \approx 0 \rightarrow \psi_2= \Sigma_u = \gamma_{0u}+(\partial_0A_v + B_{0v} )\gamma^{vw}\mathcal{F}_{uw} \approx 0. \end{eqnarray} While the first condition leaves no residual symmetry, the second one leaves the expected APD constraint. Let us remark that we have left unfixed the gauge symmetry related to the Gauss constraint. Finally, the LCG D2-brane DBI physical Hamiltonian coupled to RR and NSNS background fields is given by \begin{eqnarray}\label{H DBI RR NSNS} \hspace{-0.5cm}H&=& \int d^2\sigma \left[ \frac{1}{2}\frac{(\widehat{P}_M - B_M)^2}{\sqrt{W}} + \frac{1}{2}\frac{\Pi^u\Pi^v\gamma_{uv}}{\sqrt{W}}+\frac{1}{2}\frac{{G}}{\sqrt{W}}-C^{(10)}_+ - B_{+}\right], \end{eqnarray} subject to the Gauss law and the APD constraint, respectively \begin{eqnarray} \partial_u\Pi^u &\approx& 0, \label{Gauss Constraint}\\ \epsilon^{uv}\partial_u\left[ \frac{P_M}{\sqrt{W}}\partial_vX^M + \frac{\Pi^w}{\sqrt{W}}F_{vw} \right] &\approx& 0 \label{APD Constraint}, \end{eqnarray} where $C^{(10)}_{+}$ is given by (\ref{Cmasmenos}) and $B_M$, $B_{+}$ are given by (\ref{BM}), (\ref{Bmasmenos}), respectively (with $B_{-M}=B_{+-}=0$). Therefore, the two resulting symmetries of the theory are the expected Gauss constraint (\ref{Gauss Constraint}), associated with the U(1) gauge symmetry, and the area preserving diffeomorphisms (\ref{APD Constraint}), both related to the worldvolume of the D2-brane. By using the equations of motion it can be seen that the gauge $\Sigma_u = 0$ fixes the Lagrange multiplier $c^w=0$ and, in consequence, $\widehat{P}_-$ can be written as a scalar density \begin{eqnarray} \widehat{P}_-=\widehat{P}^0_-\sqrt{W}, \end{eqnarray} where, as in the case of the nontrivial M2-brane, $\widehat{P}^0_-$ is a constant and $\sqrt{W}$ is the regular density on $\widetilde{\Sigma}$. 
It is straightforward to see that, turning off all the background fields, one recovers the LCG Hamiltonian of the D2-brane studied in \cite{Manvelyan1}. In order to reproduce the LCG Hamiltonian (\ref{HCMIM22}) found in Section \ref{Sec3}, we consider a toroidal compactification of (\ref{H DBI RR NSNS}) and impose particular independent quantization conditions. The transverse index $M=(\alpha,r)$ can be decomposed into $\alpha = 3,\dots,8$ non-compact directions and $r,s=1,2$ compact directions. From now on, the topology of the compact D2-brane worldvolume will also be assumed to be a 2-torus. In analogy with the M2-brane, the winding condition on the maps of the compact sector is \begin{eqnarray}\label{M-tilde} \oint_{\widetilde{C}_{s}} dX = 2\pi \widetilde{R}(l^{'}_s+m^{'}_s\widetilde{\tau})=\widetilde{M}_s^1 + i \widetilde{M}_s^2 \end{eqnarray} where $\widetilde{R},\widetilde{\tau}$ are the moduli of $T^2$ and $l'_s$, $m'_s$ with $s=1,2$ are the winding numbers. Therefore, the harmonic sector of the map can be written in terms of a normalized basis $d\widehat{X}^s$ as $dX_h=dX_h^1 + idX_h^2= (\widetilde{M}_s^1 + i\widetilde{M}^2_s) d\widehat{X}^s$. 
Since $\mathcal{F}_{uv}^{DBI}$ contains the pullback of the NSNS background field $B$, $\mathcal{F}$ becomes \begin{eqnarray}\label{FDBI} \mathcal{F}^{DBI}=\frac{1}{2}\epsilon^{u\bar{u}}\epsilon^{v\bar{v}}\mathcal{F}_{\bar{v}\bar{u}}^{U(1)}\mathcal{F}_{vu}^{DBI} = \widetilde{\mathcal{F}}^{DBI}+ \epsilon^{\bar{u}u}\epsilon^{\bar{v}v} F_{\bar{v}\bar{u}}A_{vu} +\epsilon^{\bar{u}u}\epsilon^{\bar{v}v} A_{\bar{v}\bar{u}vu}, \end{eqnarray} with \begin{eqnarray} A_{vu} &=& 2\partial_v X^\alpha \partial_u X^r B_{\alpha r} + \partial_v X^r \partial_u X^s B_{rs} , \label{Acurvatura} \\ A_{\bar{v}\bar{u}vu}&=& 2\partial_{\bar{v}} X^\alpha \partial_{\bar{u}} X^\beta B_{\alpha\beta} \partial_{v} X^{\bar{\alpha}} \partial_{u} X^r B_{\bar{\alpha}r} + \partial_{\bar{v}} X^\alpha \partial_{\bar{u}} X^\beta B_{\alpha\beta} \partial_v X^r \partial_u X^s B_{rs}, \nonumber \\ &+&2\partial_{\bar{v}} X^\alpha \partial_{\bar{u}} X^r B_{\alpha r} \partial_v X^{\bar{\alpha}} \partial_u X^r B_{\bar{\alpha}r} + 2\partial_{\bar{v}} X^\alpha \partial_{\bar{u}} X^r B_{\alpha r} \partial_v X^r \partial_u X^r B_{rs} , \nonumber \\ &+& \partial_{\bar{v}} X^r \partial_{\bar{u}} X^s B_{rs} \partial_v X^{\bar{r}} \partial_u X^{\bar{s}} B_{\bar{r}\bar{s}}, \label{Acurvatura2} \end{eqnarray} where $\widetilde{\mathcal{F}}^{U(1)}$ represents the determinant of the $U(1)$ DBI curvature associated with the non-compact sector. We find that the LCG Hamiltonian on $M_8\times T^2$ is given by \begin{eqnarray}\label{compactD22} H&=& \int d^2\sigma \left[ \frac{1}{2}\frac{(\widehat{P}_\alpha - B_\alpha)^2}{\sqrt{W}} +\frac{1}{2}\frac{(\widehat{P}_r - B_r)^2}{\sqrt{W}}+ \frac{1}{2}\frac{\Pi^u\Pi^v \gamma_{uv}}{\sqrt{W}}+\frac{1}{2}\frac{\widetilde{G}}{\sqrt{W}} \right., \nonumber \\ &+& \left. 
\frac{1}{2}\frac{\epsilon^{u\bar{u}}\epsilon^{v\bar{v}}F_{\bar{v}\bar{u}}A_{vu} }{\sqrt{W}} + \frac{1}{2}\frac{\epsilon^{u\bar{u}}\epsilon^{v\bar{v}}A_{\bar{v}\bar{u} vu} }{\sqrt{W}} + \frac{1}{2}\sqrt{W}\left\lbrace X^\alpha,X^r\right\rbrace^2 +\frac{1}{4}\sqrt{W}\left\lbrace X^r,X^s\right\rbrace^2, \right. \nonumber \\ &-&\left. \frac{1}{2}\epsilon^{uv}\partial_uX^r\partial_v X^s C^{(10)}_{+rs} - \Pi^u \partial_u X^\alpha B_{+\alpha} - \Pi^u \partial_u X^r B_{+r} \right]. \end{eqnarray} subject to \begin{eqnarray} \partial_u\Pi^u &\approx& 0, \\ \epsilon^{uv}\partial_u\left[ \frac{P_\alpha \partial_v X^\alpha}{\sqrt{W}} + \frac{P_r \partial_vX^r}{\sqrt{W}} + \frac{\Pi^w F_{vw}}{\sqrt{W}} \right] &\approx& 0,\\ \oint_{C_S}\left[\frac{P_\alpha \partial_v X^\alpha}{\sqrt{W}} + \frac{P_r \partial_vX^r}{\sqrt{W}} + \frac{\Pi^w F_{vw}}{\sqrt{W}}\right] d\sigma^v &\approx& 0, \end{eqnarray} where ${\widetilde{G}}=\det(\widetilde{\gamma}_{uv} + \widetilde{\mathcal{F}}_{uv}^{U(1)})$ represents the generalized metric associated with the noncompact dimensions and with \begin{eqnarray} \widetilde{\mathcal{F}}_{uv}^{U(1)} &=& F_{uv} +\partial_u X^\alpha \partial_v X^\beta B_{\alpha\beta}, \\ \widetilde{\gamma}_{uv} &=&\partial_u X^\alpha \partial_v X_\alpha. \end{eqnarray} In order to make contact with (\ref{HCMIM22}) it is required to assume the RR three-form background to be constant, $C_{\pm rs}^{(10)}=\epsilon_{rs}c_{\pm}$, and quantized. This implies the existence of a well-defined closed two-form $\displaystyle \widetilde{F}_{\pm}=\frac{1}{2}C_{\pm rs}^{(10)}\widetilde{M}^r_p\widetilde{M}^s_q d\widetilde{X}^p\wedge d\widetilde{X}^q$ with $\widetilde{M}^r_p$ defined as in (\ref{M-tilde}) and with $\widetilde{X}^p$, $p=1,2$, the $T^2$ coordinates. 
If a quantization condition on $T^2$ is imposed, this implies the presence of a nontrivial 2-form flux on the D2-brane worldvolume, \begin{eqnarray} \label{FluxT22} \int_{T^2} \widetilde{F}_{\pm}= k_\pm\in \mathbb{Z}\ne 0 \to c_{\pm}\int_{\widetilde{\Sigma}} \widehat{F} = k_\pm\ne 0 \end{eqnarray} where $k_{\pm}=nc_{\pm}$ and $\displaystyle \widehat{F}=\frac{1}{2}\epsilon_{rs}dX_h^r\wedge dX_h^s$. We can also impose a quantization condition on the pullback of the NSNS background two-form $\widetilde{B}_2$ on $T^2$. Since the B-field is also assumed constant, its pullback can likewise be interpreted as a flux condition over the worldvolume, \begin{eqnarray}\label{flux B} \int_{T^2}\widetilde{B}_2=k_B\ne 0 \to \int_{\widetilde{\Sigma}} B_2 = b_2\int_{\widetilde{\Sigma}} \widehat{F}= k_B\ne 0,\quad k_B \in \mathbb{Z}, \end{eqnarray} with $\widetilde{B}_2=\frac{1}{2}B_{rs} \widetilde{M}^r_p \widetilde{M}^s_q d\widetilde{X}^p\wedge d\widetilde{X}^q$, $B_{rs}=b_2\epsilon_{rs}$ and $k_B=nb_2$. Both quantization conditions (\ref{FluxT22}) and (\ref{flux B}) imply the existence of an irreducible wrapping condition on the compact sector. In this context, the quantization conditions over $B_2$ and $C^{(10)}_{\pm}$ are imposed completely independently. However, from the CM2-brane dual description we have shown that they are associated with different components of the 11d three-form $C_3$, hence both of them must hold in order to describe its M2-brane dual origin. The background can be fixed such that the only nontrivial components of the constant background fields are $B_{rs}$, $C^{(10)}_{\pm rs}$ and $B_{+r}$. 
Then, it can be seen from (\ref{Acurvatura}) and (\ref{Acurvatura2}) that \begin{eqnarray} \epsilon^{\bar{u}u}\epsilon^{\bar{v}v} F_{\bar{v}\bar{u}}A_{vu} &=& (\sqrt{W})^2 (\star F)\left[\widehat{F}^{rs} + \mathcal{F}^{rs}\right]B_{rs}, \nonumber \\ \epsilon^{\bar{u}u}\epsilon^{\bar{v}v} A_{\bar{v}\bar{u}vu} &=& (\sqrt{W})^2\frac{1}{2}\left(\widehat{F}^{rs}B_{rs} + \mathcal{F}^{rs}B_{rs}\right)^2, \end{eqnarray} with $F=dA$. Finally, if the moduli, winding numbers and units of flux are those considered in Section 4, the LCG Hamiltonian of a D2-brane on $M_8\times T^2$ subject to the 2-form flux conditions (\ref{FluxT22}) and (\ref{flux B}) exactly becomes \begin{eqnarray}\label{HCMIM222} \mathcal{H}_{dual} &=& \frac{1}{2} \frac{\widehat{P}_\alpha \widehat{P}^\alpha}{\sqrt{W}} + \frac{1}{2}\frac{(\widehat{P}_r - B_r)^2}{\sqrt{W}} + \frac{1}{2}\frac{\Pi^u\Pi^v \widetilde{\gamma}_{uv}}{\sqrt{W}} + \frac{G}{2\sqrt{W}} + \sqrt{W}\frac{1}{4}(\widehat{F}^{rs})^2 \nonumber \\ &+& \frac{1}{4}\sqrt{W}(\mathcal{F}^{rs})^2+ \frac{1}{2}\sqrt{W}(\mathcal{D}_r X^\alpha)^2 - C_{+}^{(10)} - B_+ \end{eqnarray} where $G$, $\mathcal{F}^{rs}$ and $\mathcal{D}_r$ coincide with the definitions used in (\ref{HCMIM22}). The Hamiltonian is also subject to the Gauss constraint (\ref{Gaussconstraint}) and to the worldvolume symplectomorphisms (\ref{APD1}) and (\ref{APD2}). There are several differences between this formulation and that of a general D2-brane with RR and NSNS fluxes. First, the background fields are considered constant and, once the quantization condition is imposed, they are responsible for generating specific 2-form fluxes, a statement that does not necessarily hold in more general backgrounds. These fluxes imply the existence of a monopole contribution over the D2-brane worldvolume, in analogy with \cite{Restuccia}, which acts as an extra constraint on the D2-brane embedding maps, in contrast with any other case considered so far. 
It implies, on top of the flux contribution, the presence of an extra symplectic gauge field that couples to the scalar fields via a symplectic covariant derivative $\mathcal{D}_r X^\alpha$ and a symplectic curvature $\mathcal{F}_{rs}$. There are also cubic and quadratic interactions between the different field strengths defined here, which, to the best of our knowledge, are not present in any of the cases previously studied in the literature. As a result, the Hamiltonian (\ref{HCMIM22}), obtained through a scalar/vector dualization of the CM2-brane, corresponds to a specific D2-brane with constant quantized background fields that induce a 2-form flux acting on the worldvolume as an extra constraint, which generates new fields and couplings in the Hamiltonian. \section{Discussion and Conclusions} We have obtained the LCG Hamiltonian of a toroidally compactified M2-brane with 2-form $C_{\pm}$ fluxes and an explicit contribution of the transverse components of the supergravity three-form, $C_{abc}$, which we denote the CM2-brane. The direct analysis of its spectral properties is rather cumbersome due to the mixing terms between the scalar fields and their canonical conjugate momenta that appear in the kinetic term. In fact, the sufficiency criterion for discreteness of the supersymmetric spectrum found in \cite{Boulton} is not directly applicable. However, we obtain a canonical transformation of the phase space variables that establishes an equivalence with the M2-brane with $C_{\pm}$ fluxes formerly identified in \cite{mpgm6}. Hence, they must share the same spectral discreteness properties and, consequently, the CM2-brane also constitutes a nontrivial M2-brane. The M2-brane with central charge is obtained by performing the canonical transformation (\ref{ct3}) and turning off the $C_+$ components. The toroidally nontrivial M2-branes analyzed here are shown to contain the same type of $U(1)$ monopole quantization condition. 
We now turn to the D-brane description of the CM2-brane Hamiltonian. We find that it corresponds to a D2-brane in the presence of two-form fluxes associated with the quantization of constant RR and NSNS background fields, which appear in the dualization of the quantization condition of the 11D $C_{\pm}$. Due to the new terms in the CM2-brane formulation, the DBI B-field contribution becomes explicit in the D-brane counterpart, and the compactified momentum acquires a nontrivial coupling to the worldvolume B-field. By using the relation (\ref{quantization}), the $C_{\pm}^{(10)}$ flux condition and the quantized NSNS $B_+$ are straightforwardly obtained. Gauge invariance allows one to set $B_-=0$. The constant NSNS $B_2$ field becomes quantized without imposing an extra quantization condition, since it is proportional to the central charge condition. This dual\footnote{By means of a scalar/vector dualization.} D-brane theory inherits from the nontrivial M2-brane the same type of $U(1)$ monopole quantization condition. Since it acts as an extra constraint on the Hamiltonian, it also implies the appearance of a dynamical symplectic gauge field associated with the $d\mathcal{A}^r$ components, with a topologically trivial curvature, in analogy with the M2-brane with central charges. The D-brane theory is subject to the Gauss and APD constraints. The theory contains new symmetries: it is possible to define an extra $U(1)$ gauge symmetry under symplectomorphism transformations in terms of the components of the embedding maps, $\mathcal{A}_G$. $\mathcal{A}$ and $\mathcal{A}_G$ define respectively a symplectic curvature $\mathcal{F}$ and a $U(1)$ curvature $\mathcal{F}_G$. As happens with the CM2-brane, when expressed in terms of the embedding map components they have the unique property of being equal. 
The $\mathcal{A}_G$, together with the topologically nontrivial $U(1)$ gauge field associated with the fluxes, $\widehat{A}$, defines a dynamical $U(1)$ connection $\widetilde{\mathbb{A}}$ on a nontrivial $U(1)$ fiber bundle with the same Chern class as the flux condition. These two fibers are linked together to form a twisted torus. The D2-brane bundle is thus defined by a twisted torus bundle with monodromy in $SL(2,Z)$, inherited from the toroidally nontrivial M2-brane, together with an independent trivial DBI $U(1)$ principal bundle over its worldvolume. The quantization condition of a constant $\widetilde{B}_2$ was formerly discussed in \cite{Connes} in the context of the noncommutative formulation of the matrix model on a torus, and its M-theory origin was qualitatively discussed in terms of the coupling of the supermembrane to a constant $C_{-}$ in \cite{deWit}. The supermembrane on a $C_-$ background was already discussed in \cite{mpgm6}. The $B_2$ contribution does not come from the dualization of the $C_-$ but from the $C_{abc}$. Hence the noncommutative formulation of the matrix model on a torus has an M-theory origin associated with a CM2-brane for a restricted background where $B_+$ and $C_+$ vanish. Both formulations can generate monopole contributions. Because the D2-brane carries 2-form fluxes, an open question is whether it admits a description in terms of Dp bound states. The D2-D0 bound state would have been a natural candidate. In \cite{mpgm11} the spectral properties of the $SU(N)$ regularized M2-brane with central charge and the $0+1$ dimensionally reduced D2-D0 bound state were compared. Though both models seem quite similar and carry an RR charge, they are associated with different monopole conditions and their spectra are completely different. Furthermore, in the CM2 dual (\ref{HCMIM22}) the DBI curvature $F$ does not get quantized, in contrast with the D2-D0 bound state. Hence, the M2-brane dual considered in this work cannot be described as a D2-D0 bound state. 
This analysis, however, does not exclude the possibility of more complicated constructions, as for example in \cite{mpgm20}. A deeper analysis of this aspect is left for future work. \section{Acknowledgements} MPGM and CLH thank A. Restuccia for helpful discussions. CLH is supported by CONICYT PFCHA/DOCTORADO BECAS CHILE/2019-21190263 and by the Project ANT1956 of the U. Antofagasta. The authors also thank the SEM 18-02 funding project of the U. Antofagasta and the international ICTP Network NT08 for kind support.
\section{Introduction} \label{sec:intro} The dependence of physical observables on the topological parameter $\theta$ is one of the most interesting properties of four dimensional $SU(N)$ pure-gauge theories. The parameter is coupled in the action to the topological charge \begin{eqnarray} \label{eq:cont_def_topocharge} Q = \hspace{-2pt} \int d^4 x \, q(x) =\frac{1}{64\pi^2} \varepsilon_{\mu\nu\rho\sigma}\hspace{-2pt} \int d^4x\, F^a_{\mu\nu}(x) F^a_{\rho\sigma}(x) \, ; \end{eqnarray} $\theta$-dependence can be studied to achieve a better understanding of the non-perturbative features of Yang--Mills theories, but also has direct phenomenological implications for hadron physics in the limit of large number of colors~\cite{tHooft:1973alw,tHooft:1976rip,Witten:1978bc,Witten:1979vv,Veneziano:1979ec,Witten:1980sp,DiVecchia:1980yfw}. A particularly interesting quantity, whose dependence on $\theta$ has been thoroughly investigated, is the vacuum energy density $E(\theta)$, which is formally defined by the relation \begin{eqnarray}\label{eq:free_energy_def} E(\theta) \equiv -\frac{1}{V} \log \int [dA] e^{-S_{\mathit{YM}}[A]+i\theta Q[A]}\ , \end{eqnarray} where $V$ is the four-dimensional euclidean space-time volume. The functional form of $E(\theta)$ is known only for very specific theories, like for QCD close to the chiral limit~\cite{DiVecchia:1980yfw}. It is customary to consider a Taylor expansion of $E(\theta)$ around $\theta=0$. Since $Q$ is odd under a $\mathrm{CP}$ transformation, only even powers of $\theta$ appear in the expansion, which can be parametrized in the form \begin{eqnarray}\label{eq:vacuum_energy_theta_dep_parametrization} E(\theta) -E(0) = \frac{1}{2} \chi \theta^2 \left( 1 + \sum_{n=1}^{\infty} b_{2n} \theta^{2n} \right)\ . 
\end{eqnarray} The coefficients of this expansion are related to the cumulants of the topological charge distribution at $\theta=0$, for instance $\chi=\langle Q^2\rangle_{\theta=0}/V$, where $\chi$ is the topological susceptibility. While the exact numerical values of the coefficients $\chi$ and $b_{2n}$ are generically unknown, something is known about their dependence on the number of colors $N$, at least when $N$ is large enough. Indeed, assuming that a non-trivial $\theta$-dependence is present in the large-$N$ limit, it is possible to fix the large $N$ scaling form of the energy density $E(\theta,N)=N^2 \bar{E}(\theta/N)$~\cite{Witten:1979vv,Witten:1980sp,Witten:1998uka}, implying \begin{eqnarray}\label{eq:large_N_scaling} \chi &=& \bar{\chi} + O\left(\frac{1}{N^2}\right),\\ b_{2n} &=& \frac{\bar{b}_{2n}}{N^{2n}} + O\left(\frac{1}{N^{2\left(n+1\right)}}\right)\ . \end{eqnarray} The value of $\bar{\chi}$ is related to the mass of the $\eta^\prime$ meson by the Witten--Veneziano formula~\cite{Witten:1979vv,Veneziano:1979ec}, which provides the estimate $\bar{\chi} \simeq (180\,\text{MeV})^4$. Analytic estimates of $\bar{\chi}$ and of the $\bar{b}_{2n}$ coefficients are available only for two dimensional models~\cite{DAdda:1978vbw, Campostrini:1991kv, DelDebbio:2006yuf, Rossi:2016uce, Bonati:2016tvi}. Given the non-perturbative nature of $\theta$-dependence, the numerical lattice approach is the natural tool to investigate such topics quantitatively, and in particular to test large-$N$ predictions~\cite{Alles:1996nm, Alles:1997qe, DelDebbio:2004ns, DelDebbio:2002xa, DElia:2003zne, DelDebbio:2006yuf, Lucini:2004yh, Giusti:2007tu, Vicari:2008jw, Panagopoulos:2011rb, Ce:2015qha, Ce:2016awn, Bonati:2015sqt, Bonati:2016tvi, Bonati:2018rfg, Bonati:2019kmf}. There are however some non-trivial computational challenges that have to be faced, especially in the large $N$ regime. The first problem is related to the measure of the coefficients $b_{2n}$. 
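As a concrete illustration of the cumulant estimators just mentioned, the following sketch computes $\chi$ and $b_2$ from a sample of topological charges measured at $\theta=0$. It assumes the normalization $b_2 = -\langle Q^4\rangle_c / (12\,\langle Q^2\rangle_c)$ with $\langle Q^4\rangle_c = \langle Q^4\rangle - 3\langle Q^2\rangle^2$, which is the convention commonly adopted in the lattice literature for the parametrization above; the function name and the toy Gaussian data are purely illustrative.

```python
import numpy as np

def theta_expansion_coefficients(Q, volume):
    """Naive estimators for chi and b2 from theta=0 charge samples.

    chi = <Q^2>_c / V ;  b2 = -<Q^4>_c / (12 <Q^2>_c),
    with <Q^4>_c = <Q^4> - 3 <Q^2>^2 (assuming <Q> = 0 by CP symmetry).
    """
    Q = np.asarray(Q, dtype=float)
    q2 = np.mean(Q ** 2)
    q4 = np.mean(Q ** 4)
    chi = q2 / volume
    b2 = -(q4 - 3.0 * q2 ** 2) / (12.0 * q2)
    return chi, b2

# Toy check: for Gaussian-distributed Q the quartic cumulant vanishes,
# so the estimated b2 should be compatible with zero.
rng = np.random.default_rng(0)
Q_samples = rng.normal(0.0, 2.0, size=200_000)
chi, b2 = theta_expansion_coefficients(Q_samples, volume=1000.0)
```

The poor signal-to-noise ratio discussed in the text is visible here: the $b_2$ estimator is a small difference of large, strongly fluctuating moments, which is precisely why imaginary-$\theta$ methods are preferred at large volume.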
The task is challenging by itself since, contrary to what happens for $\chi$, these coefficients approach zero as $N$ is increased. In addition, the simplest estimators available for $\theta = 0$ simulations are based on the cumulants of the topological charge distribution, which are not self-averaging quantities, leading to a bad signal-to-noise ratio for large volumes. Such a problem can be solved by introducing an explicit source term in the action, coupled to the topological charge density, which corresponds in practice to an imaginary $\theta$ term~\cite{Bhanot:1984rx,Azcoiti:2002vk,Alles:2007br,Imachi:2006qq,Aoki:2008gv,Panagopoulos:2011rb, Alles:2014tta,DElia:2012pvq,DElia:2013uaf,DElia:2012ifm}. The method was exploited for the determination of the $b_{2n}$ coefficients first in Ref.~\cite{Panagopoulos:2011rb} and later developed and applied in several works to both $SU(N)$ Yang--Mills theory \cite{Bonati:2015sqt, Bonati:2016tvi, Bonati:2018rfg, Bonati:2019kmf} and two-dimensional $CP^{N-1}$ models~\cite{Bonanno:2018xtd,Berni:2019bch}. The second problem is the so-called ``freezing problem'': it is a well-known issue in a wide range of theories featuring topological excitations~\cite{csd_full_QCD_delia, deForcrand:1997yw,Lucini:2001ej,DelDebbio:2002xa,Leinweber:2003sj, DelDebbio:2004xh, Luscher:2011kk, Laio:2015era, Flynn:2015uma, theta_dep_QCD_N_f_2+1, Bonati:2017woi,Athenodorou:2020ani}, and it will be the main topic of this work. When adopting local update algorithms on lattices with periodic boundary conditions, the topological modes experience a severe critical slowing down when approaching the continuum limit, with the autocorrelation time of the topological charge growing approximately exponentially as a function of $1/a$~\cite{DelDebbio:2002xa,DelDebbio:2004xh}.
As a consequence, gauge configurations stay fixed (frozen) in a given topological sector for an exceedingly large amount of Monte Carlo time, thus preventing a correct sampling of the path-integral distribution. This problem becomes even worse in the large-$N$ limit, since at fixed lattice spacing $a$ the autocorrelation time of the topological charge seems to grow exponentially with the number of colors~\cite{DelDebbio:2002xa,DelDebbio:2004xh,Bonanno:2018xtd}. Although a definitive solution to the topological freezing problem has been obtained only in toy models~\cite{Bonati:2017woi}, several strategies have been proposed to reduce its severity, in particular by reducing the exponential critical slowing down to a polynomial one~\cite{Vicari:1992jy, Luscher:2011kk, Laio:2015era, Bonati:2017woi}, or by extracting information even from completely frozen configurations~\cite{Bietenholz:2015rsa}. A popular method, proposed in Ref.~\cite{Luscher:2011kk}, is to adopt open boundary conditions for the gauge fields in the time direction instead of the usual periodic ones. The presence of an open boundary eliminates barriers among different topological sectors even in the continuum, and one expects the critical slowing down of the topological modes to be essentially diffusive, with an autocorrelation time increasing as $1/a^2$ when approaching the continuum limit. Using this method, however, one also breaks translation invariance and completely loses any notion of global topological charge. Therefore $\chi$ and $b_{2n}$ can only be estimated from the integral of the $(2n+2)$-point connected correlators of the topological charge density on the bulk of the lattice. A different strategy, which keeps the advantages of open boundary conditions without breaking translation invariance, was proposed by M.~Hasenbusch in Ref.~\cite{Hasenbusch:2017unr}, where it was tested for two-dimensional $CP^{N-1}$ models.
The basic idea of this method is to combine periodic and open boundary conditions in a parallel tempering framework, using the copies with open or partially open boundary conditions as sources of topological fluctuations for the copy with periodic boundary conditions, which is the one on which measurements are performed. To reduce the number of copies to be used in the parallel tempering, open boundary conditions are not enforced along the whole temporal boundary, but only in a limited spatial region, which will be referred to as the ``defect'' in the following. In Ref.~\cite{Berni:2019bch} it was shown that in two-dimensional $CP^{N-1}$ models, for large values of $N$, the adoption of the Hasenbusch algorithm in combination with the imaginary-$\theta$ method allows one to achieve impressive improvements compared to previously available results for the $\theta$-dependence of these models. The aim of the present work is to test the same setup in the case of four-dimensional $SU(N)$ Yang--Mills theories at zero temperature, comparing its performance with that of standard simulations. In doing so, we will also refine the state-of-the-art results on the large-$N$ behavior of the $b_2$ coefficient. This paper is organized as follows: in Sec.~\ref{sec:lattice_setup} we discuss our lattice setup, along with the parallel tempering algorithm and the imaginary-$\theta$ method; in Sec.~\ref{sec:results} we show the numerical results obtained with the parallel tempering; finally, in Sec.~\ref{sec:conclusion} we draw our conclusions. \section{Lattice setup} \label{sec:lattice_setup} In this section we introduce the discretizations adopted for the action and the topological charge, we present a summary of the imaginary-$\theta$ method, and we discuss the parallel tempering algorithm employed in our simulations.
\subsection{Lattice action and lattice topological charge} We discretize the Yang--Mills action on a hyper-cubic lattice of size $L$ with periodic boundary conditions in every direction (see Sec.~\ref{sec:defect} for the defect) using the standard Wilson action: \begin{eqnarray}\label{eq:lat_def_action} S_W = -\frac{\beta}{N} \sum_{x,\mu>\nu} \Re \ensuremath{\mathrm{Tr}} \left\{ \Pi_{\mu\nu}(x) \right\}, \end{eqnarray} where $\Pi_{\mu\nu}(x) \equiv U_\mu(x)U_\nu(x+\hat{\mu})U_\mu^\dagger(x+\hat{\nu})U_\nu^\dagger(x)$ is the plaquette operator. For the topological charge~\eqref{eq:cont_def_topocharge}, we adopt the simplest discretization with definite parity, the so-called \emph{clover} discretization: \begin{eqnarray}\label{eq:clover_charge_def} Q_{\mathit{clov}} = \frac{1}{2^9\pi^2} \sum_{x,\mu,\nu,\rho,\sigma} \varepsilon_{\mu\nu\rho\sigma} \ensuremath{\mathrm{Tr}}\left\{C_{\mu\nu}(x)C_{\rho\sigma}(x)\right\}, \end{eqnarray} where $C_{\mu\nu}(x)$ is a discretization of the field strength given by the sum of the 4 plaquettes centered at the site $x$ and lying in the $\mu$--$\nu$ plane. $Q_{\mathit{clov}}$ is generically non-integer, and it is related, configuration by configuration, to the physical charge $Q$ by~\cite{Campostrini:1988ab}: \begin{eqnarray}\label{eq:lat_topo_charge_renormalization} Q_{\mathit{clov}} = Z Q + \eta, \end{eqnarray} where $Z$ is a finite renormalization constant that approaches 1 in the continuum limit, and $\eta$ is a stochastic noise due to ultraviolet (UV) fluctuations at the scale of the lattice spacing.
Using the variance of $Q_{\mathit{clov}}$ to estimate the topological susceptibility would require taking into account both multiplicative and additive renormalizations; this can be avoided by using one of the several smoothing procedures that have been proposed in the literature, such as the gradient flow~\cite{Luscher:2009eq, Luscher:2010iy} or cooling~\cite{Berg:1981nw, Iwasaki:1983bv, Itoh:1984pr, Teper:1985rb, Ilgenfritz:1985dz, Campostrini:1989dh, Alles:2000sc}, which are all known to agree with each other when properly matched~\cite{Bonati:2014tqa, Alexandrou:2015yba}. In this work we use cooling due to its simplicity and numerical effectiveness. We denote by $Q_{\mathit{clov}}^{\mathit{cool}}$ the topological charge obtained by measuring the observable in Eq.~\eqref{eq:clover_charge_def} on a configuration to which a certain number of cooling steps have been applied. To assign an integer topological charge $Q_L$ to each configuration we follow Ref.~\cite{DelDebbio:2002xa}, defining \begin{eqnarray}\label{eq:lat_def_topocharge} Q_L = \ensuremath{\mathrm{round}} \left\{ \alpha \, Q_{\mathit{clov}}^{\mathit{cool}}\right\}, \end{eqnarray} where ``\ensuremath{\mathrm{round}}'' denotes rounding to the closest integer and the value of $\alpha$ is fixed by minimizing \begin{eqnarray}\label{eq:alpha_def} \braket{ \left( \alpha \, Q_{\mathit{clov}}^{\mathit{cool}} - \ensuremath{\mathrm{round}} \left\{ \alpha \, Q_{\mathit{clov}}^{\mathit{cool}}\right\} \right)^2 }\ , \end{eqnarray} so that the maxima of the distribution of $\alpha Q_{\mathit{clov}}^{\mathit{cool}}$ are located approximately at integer values; this fixing is performed at $\theta = 0$ and the resulting $\alpha$ is then adopted also for $\theta \neq 0$.
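The fixing of $\alpha$ by minimizing Eq.~\eqref{eq:alpha_def} is simple to implement in practice; the following is a minimal sketch of such a procedure (the grid search, its range, and the function names are our own illustrative choices, not taken from the simulation code):

```python
import numpy as np

def fix_alpha(q_cool, alpha_grid=np.linspace(0.5, 2.0, 4001)):
    """Return the alpha that minimizes <(alpha*Q - round(alpha*Q))^2>,
    so that the peaks of alpha * Q_clov^cool sit near integer values."""
    q = np.asarray(q_cool, dtype=float)
    cost = [np.mean((a * q - np.round(a * q)) ** 2) for a in alpha_grid]
    return float(alpha_grid[int(np.argmin(cost))])

def integer_charge(q_cool, alpha):
    """Assign the integer charge Q_L = round(alpha * Q_clov^cool)."""
    return np.round(alpha * np.asarray(q_cool, dtype=float)).astype(int)
```

As stated above, $\alpha$ would be fixed once on the $\theta=0$ ensemble and then reused for the $\theta_L \neq 0$ runs.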
The topological susceptibility computed using $Q_L$ becomes stable (i.e.~independent of the number of cooling steps $n_{\mathit{cool}}$) after $n_{\mathit{cool}}\sim 10$; moreover, this threshold turns out to be only weakly dependent on the lattice spacing. We therefore chose $n_{\mathit{cool}}=20$ to define the topological charge in all simulations, also verifying the stability of all continuum extrapolations when a different value of $n_{\mathit{cool}}$ is used. \subsection{Imaginary-$\theta$ method} \label{sec:imtheta} As anticipated in the introduction, the imaginary-$\theta$ method is a technique that is useful to estimate the topological susceptibility and especially the coefficients $b_{2n}$ introduced in Eq.~\eqref{eq:vacuum_energy_theta_dep_parametrization}. In this section we provide a short summary of this computational method, referring to Ref.~\cite{Bonati:2015sqt} for more details. The idea of the method is to introduce an imaginary $\theta$ term, in order to avoid a sign problem, and to extract $\chi$ and $b_{2n}$ from the dependence on $\theta$ of the cumulants of the topological charge distribution: the method improves on the standard computation at $\theta = 0$ since the information on the $b_{2n}$ parameters is now contained in all cumulants, including the lowest-order ones. The procedure is most conveniently explained by working formally in the continuum: the continuum euclidean action can be written in the form \begin{eqnarray}\label{eq:cont_imag_theta_action_def} S(\theta_I) = S_{\mathit{YM}} - \theta_I Q, \end{eqnarray} where $\theta_I = i \theta$. The dependence on $\theta_I$ of the cumulants of the topological charge distribution can be computed from the derivatives of $E(\theta)$ in Eq.~\eqref{eq:free_energy_def}, properly continued to the imaginary axis, and can be expressed in terms of $\chi$ and $b_{2n}$ using Eq.~\eqref{eq:vacuum_energy_theta_dep_parametrization}.
As an example, the explicit expressions of the first few cumulants as a function of $\theta_I$ read: \begin{eqnarray}\label{eq:cumul_dep_imag_theta} \begin{split} \frac{k_1(\theta_I)}{V} &= \chi \left[\theta_I -2b_2 \theta_I^3+3b_4 \theta_I^5+O(\theta_I^6)\right],\\ \frac{k_2(\theta_I)}{V} &= \chi \left[1-6b_2 \theta_I^2+15b_4 \theta_I^4+O(\theta_I^5)\right],\\ \frac{k_3(\theta_I)}{V} &= \chi \left[-12b_2 \theta_I+60b_4 \theta_I^3+O(\theta_I^4)\right],\\ \frac{k_4(\theta_I)}{V} &= \chi \left[-12b_2 +180b_4 \theta_I^2+O(\theta_I^3)\right], \end{split} \end{eqnarray} where $V$ is the space-time volume and the first four cumulants of the topological charge are \begin{eqnarray}\label{eq:first_few_cumuls} \begin{split} k_1 &= \braket{Q},\\ k_2 &= \braket{Q^2} - \braket{Q}^2,\\ k_3 &= \braket{Q^3} - 3 \braket{Q^2}\braket{Q} + 2 \braket{Q}^3,\\ k_4 &= \braket{Q^4} - 4 \braket{Q^3}\braket{Q} -3 \braket{Q^2}^2\\ &\quad \, + 12 \braket{Q^2}\braket{Q}^2 - 6 \braket{Q}^4. \end{split} \end{eqnarray} All these averages are computed by using the weight $e^{-S(\theta_I)}$ in the path-integral. Let us now describe what changes on the lattice: the lattice action is \begin{eqnarray}\label{eq:lat_imag_theta_action_def} S_L(\theta_L) = S_W - \theta_L Q_{\mathit{clov}}, \end{eqnarray} where the $\theta$-term is discretized by using the non-smoothed clover charge $Q_{\mathit{clov}}$ defined in the previous subsection, and $\theta_L$ is the bare imaginary-$\theta$ coupling. The reason for using $Q_{\mathit{clov}}$ is that with this choice standard heat-bath and overrelaxation algorithms can be used in the update. Relations analogous to Eq.~\eqref{eq:cumul_dep_imag_theta} can be obtained, where $\theta_I= Z \theta_L$ and $Z$ is the renormalization constant appearing in Eq.~\eqref{eq:lat_topo_charge_renormalization}. By measuring the cumulants for several values of $\theta_L$ we can thus fit the values of $\chi$, $b_{2n}$ and $Z$.
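For illustration, the cumulants of Eq.~\eqref{eq:first_few_cumuls} can be estimated from a sample of charge measurements through the raw moments $\braket{Q^n}$; a minimal sketch (ignoring estimator bias corrections, which are subleading for large samples) could read:

```python
import numpy as np

def charge_cumulants(Q):
    """Estimate the first four cumulants k1..k4 of the topological
    charge from samples Q, combining the raw moments <Q^n>."""
    Q = np.asarray(Q, dtype=float)
    m1, m2, m3, m4 = (np.mean(Q ** n) for n in range(1, 5))
    k1 = m1
    k2 = m2 - m1 ** 2
    k3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
    k4 = m4 - 4 * m3 * m1 - 3 * m2 ** 2 + 12 * m2 * m1 ** 2 - 6 * m1 ** 4
    return k1, k2, k3, k4
```

Dividing each $k_n$ by the volume and fitting the $\theta_I$ dependence of Eq.~\eqref{eq:cumul_dep_imag_theta} then yields $\chi$, $b_{2n}$ and $Z$.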
We explicitly note that the cumulants are not affected by the renormalization of $Q_{\mathit{clov}}$, since they are evaluated by using the smoothed and rounded charge $Q_L$ introduced in the previous section (see Ref.~\cite{Berni:2019bch} for a more detailed discussion on this point). \subsection{Parallel tempering of volume defect} \label{sec:defect} The lattice action in Eq.~\eqref{eq:lat_imag_theta_action_def}, like the standard Wilson action, is linear in each of the link variables, hence standard heat-bath~\cite{Creutz:1980zw, Kennedy:1985nu} and overrelaxation~\cite{Creutz:1987xi} algorithms can be applied to update the gauge configurations when using the imaginary-$\theta$ method. However, as anticipated in the introduction, the local nature of this updating procedure results in a slowing down of the topological modes which is exponential both in the inverse lattice spacing and in the number $N$ of colors. This makes it very difficult to perform controlled continuum extrapolations for large values of $N$, since even simulations with moderately small values of the lattice spacing become prohibitively expensive as $N$ is increased. To mitigate this problem, in this work we adopt the parallel tempering algorithm proposed for $CP^{N-1}$ models in Ref.~\cite{Hasenbusch:2017unr}, where this algorithm was shown to perform as well as simulations with open boundaries while bypassing their complications related to finite-size effects. Moreover, as shown for $CP^{N-1}$ models in Ref.~\cite{Berni:2019bch}, parallel tempering can be easily applied in combination with the imaginary-$\theta$ method discussed in the previous subsection. In this algorithm we consider $N_r$ identical systems, differing only in the boundary conditions imposed on a cuboid defect located on a given spatial slice.
In particular, the boundary conditions imposed on the links that orthogonally cross the defect are chosen so that the different copies interpolate between periodic boundary conditions (pbc) and open boundary conditions (obc). Each system is evolved independently using standard local algorithms, and different copies are exchanged from time to time with a Metropolis step, so that the strong reduction of the autocorrelation time achieved in the obc copy is transferred to the pbc one, on which the measurement of the cumulants of the topological charge $Q_L$ is performed. Since the injection (or ejection) of topological charge in the system is mainly triggered by the update of the links close to the defect, it is convenient \cite{Hasenbusch:2017unr} to alternate updating sweeps over the whole lattice with hierarchic updates over sub-regions of the lattice centered around the defect. In particular, we updated more frequently small hyper-rectangular regions centered around the defect. In our simulations the location of the defect is the three-dimensional region \begin{eqnarray*} \begin{aligned} D=\Big\{&x_1=L-a, \, 0\le x_2 < L_d^{(2)}, \, \\ &0\le x_3 < L_d^{(3)}, \, 0\le x_4 < L_d^{(4)} \Big\}; \end{aligned} \end{eqnarray*} however, after every hierarchic update, we perform a random translation of the pbc copy by one lattice spacing, thus effectively moving the location of the defect. For the sake of simplicity we use a cubic defect, $L_d^{(2)}=L_d^{(3)}=L_d^{(4)}\equiv L_d$, and it is sufficient to choose $L_d$ equal to a few lattice spacings to obtain satisfactory performance. For a discussion of how the choice of $L_d$ affects the efficiency of the algorithm, see Sec.~\ref{sec:algorithm_comparison}.
In order to specify how the different boundary conditions across the defect are implemented, it is convenient to rescale each link of every replica according to \begin{eqnarray*} U^{(r)}_\mu(x) \to K_\mu^{(r)}(x) U^{(r)}_\mu(x), \end{eqnarray*} where $U_\mu^{(r)}(x)$ indicates a link of the $r^{\text{th}}$ replica and the explicit expression of $K^{(r)}_\mu(x)$ is: \begin{eqnarray*} K_\mu^{(r)}(x) = \begin{cases} c(r), \quad &\mbox{ if} \quad \mu \ne 1 \ \mathrm{and}\ x \in D, \\ 1, \quad &\mbox{ otherwise,} \end{cases} \end{eqnarray*} so that only the links crossing the volume defect are affected by its presence. For the pbc replica (corresponding to $r=0$) we have $c(0)=1$, for the obc replica (corresponding to $r=N_r-1$) we have $c(N_r-1)=0$, and for $0<r<N_r-1$ the value of $c(r)$ interpolates between 0 and 1. With this notation the action of the $r^{\text{th}}$ copy reads \begin{align*} S_L^{(r)}(\theta_L) = & \,\, S_W^{(r)} +S_\theta^{(r)}(\theta_L)\\ = &-\frac{\beta}{N} \sum_{x,\mu>\nu} K^{(r)}_{\mu\nu}(x)\Re \ensuremath{\mathrm{Tr}} \left\{ \Pi^{(r)}_{\mu\nu}(x)\right\}\\ &- \theta_L Q_{\mathit{clov}}\left[U_\mu^{(r)}(x)\right], \end{align*} where $K^{(r)}_{\mu\nu}(x)$ is a short-hand for \begin{eqnarray*} K^{(r)}_{\mu\nu}(x) \equiv K^{(r)}_\mu(x) K^{(r)}_\nu(x+\hat{\mu}) K^{(r)}_\mu(x+\hat{\nu}) K^{(r)}_\nu(x). \end{eqnarray*} Note that we chose to keep the $\theta$ term insensitive to the presence of the defect, which only affects the Wilson part of the action. Exploratory simulations performed by modifying also the $\theta$ term provided evidence that this choice does not significantly affect the performance of the algorithm, analogously to what was found for $CP^{N-1}$ models in Ref.~\cite{Berni:2019bch}. This is not surprising, since the barriers between the topological sectors, responsible for the critical slowing down, stem essentially from the Wilson term.
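The Metropolis exchange of configurations between copies with different boundary factors, anticipated above, can be illustrated with a toy sketch (this is not the lattice implementation: here `energy` is a hypothetical stand-in for the boundary-dependent part of $S_W$, and the configurations are arbitrary Python objects):

```python
import math
import random

def swap_adjacent(energy, c, x, r, rng=random):
    """Propose swapping the configurations of replicas r and r+1.

    energy(c_r, cfg) is a placeholder for the part of the action that
    depends on the boundary factor c_r; only this part enters Delta_S,
    since all other contributions cancel between swap and no-swap."""
    d_s = (energy(c[r], x[r + 1]) + energy(c[r + 1], x[r])
           - energy(c[r], x[r]) - energy(c[r + 1], x[r + 1]))
    accept = d_s <= 0 or rng.random() < math.exp(-d_s)
    if accept:
        x[r], x[r + 1] = x[r + 1], x[r]   # accepted: exchange configurations
    return accept
```

When the boundary factors of the two copies coincide, $\Delta S$ vanishes and the swap is always accepted; the efficiency of the exchange thus depends on how finely the $c(r)$ interpolate between 1 and 0.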
The swap of replicas was proposed after every step of hierarchic update, for every pair of adjacent copies $r$ and $r+1$ (distinguishing the cases of even and odd $r$ in order to avoid synchronization problems), and was accepted with the Metropolis probability \begin{eqnarray} \begin{aligned} p =& \min\left\{1, \exp\left\{-\Delta S \right\}\right\}\\ =& \min\left\{1, \exp\left\{- S_W^{\mathit{swap}} + S_W^{\mathit{no\,swap}} \right\}\right\}, \end{aligned} \end{eqnarray} where the $\theta$-term, which is not affected by the defect, does not enter the acceptance probability. Since the boundary conditions of different replicas differ only on a sub-region of the lattice, to compute $\Delta S$ it is sufficient to sum the contributions to the action of the plaquettes centered on sites lying in a hyper-cuboid region centered around the defect and extending one lattice spacing from it. For the parallel tempering to be effective in decorrelating the topological modes of the pbc copy, there must be no bottleneck preventing a configuration in the obc copy from being swapped toward the pbc one. In order to guarantee a barrier-free random walk of the configurations among the different replicas, it is thus convenient to choose the constants $c(r)$ entering $K^{(r)}_\mu(x)$ in such a way that the acceptance ratio is constant for all the proposed swaps: $p(0, 1)=p(1,2)= \cdots =p(N_r-2,N_r-1)$. \section{Numerical results} \label{sec:results} In order to compare the performance of parallel tempering with that of the standard algorithm, we performed simulations for $N=4$ and $6$ with both algorithms and for several values of $\beta$ at $\theta_L=0$, measuring the autocorrelation time of $Q^2_L$. We then performed simulations at non-zero $\theta_L$ with parallel tempering for each value of $\beta$, to estimate $\chi$, $b_2$ and $b_4$ using the imaginary-$\theta$ method. In Tab.~\ref{tab:simulations_summary} we summarize the parameters of the performed simulations.
Lattice sizes have been chosen to ensure that $L\sqrt{\sigma}\gtrsim 3$, where $\sigma \simeq \left(440 \text{ MeV}\right)^2$ is the string tension, so as to keep finite-size effects under control~\cite{DelDebbio:2001sj,Bonati:2016tvi}. The statistics reported in Tab.~\ref{tab:simulations_summary} refer to the parallel tempering simulations and are given in numbers of parallel tempering steps. A single step of tempering update consists, first of all, of a complete update of each replica, using 5 lattice sweeps of over-relaxation~\cite{Creutz:1987xi} followed by 1 lattice sweep of heat-bath~\cite{Creutz:1980zw,Kennedy:1985nu}, both implemented \emph{\`a la} Cabibbo--Marinari~\cite{Cabibbo:1982zn}, i.e., updating all the $N(N-1)/2$ diagonal $SU(2)$ subgroups of $SU(N)$. After this ``global'' update step we iterate over the sub-lattices entering the hierarchic update (see Sec.~\ref{sec:defect}), each iteration consisting of \begin{itemize} \item a local update sweep of the sub-lattices for every replica, using the same combination of local algorithms adopted for the global update; \item the parallel tempering swap proposal; \item a random translation of the pbc copy by one lattice spacing. \end{itemize} Since each system is updated using the same procedure, and since the time required for hierarchic updates, swaps and translations is negligible with respect to the time of the global update, the total numerical effort of a single parallel tempering step is $\sim N_r$ times larger than that required for a local update in the standard setup.
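The update cycle just described can be summarized in a schematic driver routine; all callbacks below are hypothetical placeholders for the actual lattice routines (the names and argument shapes are our own):

```python
def tempering_step(n_replicas, regions, update_all, update_region,
                   try_swaps, translate_pbc):
    """One parallel-tempering step: a global update of every replica,
    followed by hierarchic updates of the sub-lattices around the
    defect, each accompanied by swap proposals for adjacent copies
    and a random one-spacing translation of the pbc copy."""
    for r in range(n_replicas):          # global update of each replica
        update_all(r)
    for region in regions:               # hierarchic updates
        for r in range(n_replicas):
            update_region(r, region)
        try_swaps()                      # Metropolis swaps of adjacent pairs
        translate_pbc()                  # move the defect location
```

Since every replica receives the same updates and the hierarchic part is cheap, the cost of one such step is roughly $N_r$ times that of a standard local update, as noted above.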
\begin{table}[!htb] \centering \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{ }\\[-1em] \multicolumn{7}{|c|}{$N=4$}\\ \hline &&&&&&\\[-1em] $\beta$ & $L/a$ & $a\sqrt{\sigma}$ & $L\sqrt{\sigma}$ & $\theta_L^{\mathit{max}}$ & \makecell{Stat.\\$\theta=0$} & \makecell{Stat.\\$\theta\neq0$}\\ \hline &&&&&&\\[-1em] 11.104 & 16 & 0.1981(5)* & 3.17 & 15 & 255k & 787k \\ 11.347 & 20 & 0.1590(6) & 3.17 & 15 & 1.39M & 2.44M \\ \hline \multicolumn{7}{c}{ }\\ \hline \multicolumn{7}{|c|}{ }\\[-1em] \multicolumn{7}{|c|}{$N=6$} \\ \hline &&&&&&\\[-1em] $\beta$ & $L/a$ & $a\sqrt{\sigma}$ & $L\sqrt{\sigma}$ & $\theta_L^{\mathit{max}}$ & \makecell{Stat.\\$\theta=0$} & \makecell{Stat.\\$\theta\neq0$}\\ \hline &&&&&&\\[-1em] 24.768 & 12 & 0.2912(11) & 3.57 & 15 & 103k & 257k \\ 24.845 & 12 & 0.2801(13) & 3.41 & 15 & 113k & 166k \\ 25.056 & 12 & 0.2499(10) & 3.04 & 15 & 228k & 280k \\ 25.394 & 14 & 0.2143(8) & 3.00 & 15 & 513k & 553k \\ 25.750 & 16 & 0.1878(18) & 3.00 & 17.5 & 1.12M & 1.84M \\ \hline \end{tabular} \end{center} \caption{Summary of simulation parameters. Simulations at non-zero values of $\theta_L$ were performed in steps of $\Delta \theta_L = 2$ up to $\theta_L=10$ and in steps of $\Delta \theta_L=2.5$ for $\theta_L>10$. The last column refers to the total statistics accumulated for all imaginary-$\theta$ simulations. The defect length was, in all cases, $L_d/a=2$. All simulations for $N=4$ were performed using $N_r=10$, corresponding to a constant swap probability $p$ around $20\%$, while simulations for $N=6$ used $N_r=17$, corresponding to $p\approx 30\%$. 
Lattice spacings are taken from Ref.~\cite{Lucini:2005vg} or from interpolations/extrapolations of data thereof, except for the one marked with *, which comes from Ref.~\cite{DelDebbio:2001sj}.} \label{tab:simulations_summary} \end{table} \subsection{Parallel tempering: results and comparison} \label{sec:algorithm_comparison} \begin{figure} \centering \includegraphics[scale=0.5]{{Q_MC_evolution_N_6_beta_25.75_theta_0}.eps} \caption{Monte Carlo time evolution of the lattice topological charge $Q_L$ for a run with $N=6$, $\beta=25.75$ and $\theta_L=0$. For the comparison to be fair, data for the parallel tempering case are plotted as a function of the total number of global updates performed on all the replicas, i.e. $17$ times the number of parallel tempering updates.} \label{fig:Q_MC_evolution} \end{figure} Just by inspecting the time histories of the topological charge, it is easy to realize that parallel tempering substantially reduces topological freezing, allowing us to perform simulations at values of the lattice spacing that would otherwise have been prohibitive with the standard algorithm. An example of the Monte Carlo time evolution of $Q_L$ obtained with parallel tempering for $N=6$, $\beta=25.75$ and $\theta_L=0$ is shown in Fig.~\ref{fig:Q_MC_evolution}, where we compare it with the evolution obtained with the standard algorithm. In order to quantitatively characterize the gain achieved with parallel tempering and to optimize its efficiency, it is useful to study the autocorrelation time of the topological susceptibility.
We use as the definition of the autocorrelation time of a generic observable $\mathcal{O}$ the expression~\cite{Berg:2004fd} \begin{eqnarray}\label{eq:def_tau} \tau(\mathcal{O}) = \frac{1}{2}\left[ \left( \frac{\Delta_{\mathcal{O}}^{\mathit{binned}}}{\Delta_{\mathcal{O}}^{\mathit{naive}}} \right)^2 - 1\right], \end{eqnarray} where $\Delta_{\mathcal{O}}^{\mathit{binned}}$ is the error assigned to $\langle \mathcal{O}\rangle$ by a self-consistent binning analysis, while $\Delta_{\mathcal{O}}^{\mathit{naive}}$ is the usual standard error of the mean for independent identically distributed samples. The autocorrelation time of the topological susceptibility, however, does not take into account the increased computational effort of the parallel tempering algorithm with respect to standard local algorithms. As a figure of merit for the computational effectiveness of parallel tempering, it is thus convenient to introduce the effective autocorrelation time \begin{eqnarray} \tau_{\mathit{pt}}(\mathcal{O})=\tau(\mathcal{O})N_r\ . \end{eqnarray} As discussed in the previous section, in every parallel tempering simulation we tuned the parameters $c(r)$ in such a way that the acceptance $p(r,r+1)$ of the Metropolis swap move between the replicas $r$ and $r+1$ is approximately independent of $r$. This tuning was performed using test simulations at $\theta_L=0$ (acceptances do not depend on $\theta_L$), and in Fig.~\ref{fig:acceptances} we show an example of the behavior of $c(r)$ for a run with $N=4$ and $\beta=11.347$. Deviations of the optimal $c(r)$ values from a linear behavior appear to be small; however, using these optimal values, $\tau_{\mathit{pt}}(Q_L^2)$ is about half of that obtained with a simple linear interpolation. Once $p(r,r+1)$ is almost independent of $r$, configurations move freely among the different replicas following a random walk, as shown in Fig.~\ref{fig:bound_cond_MC_evolution}.
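A binning estimate of the autocorrelation time can be sketched as follows (a minimal version: in an actual analysis the bin size is increased until the estimate reaches a plateau, which is what makes the procedure self-consistent):

```python
import numpy as np

def tau_binning(data, bin_size):
    """Autocorrelation time from a binning analysis:
    tau = ((Delta_binned / Delta_naive)^2 - 1) / 2."""
    data = np.asarray(data, dtype=float)
    n = (len(data) // bin_size) * bin_size    # drop the incomplete last bin
    bins = data[:n].reshape(-1, bin_size).mean(axis=1)
    err_naive = np.std(data[:n], ddof=1) / np.sqrt(n)
    err_binned = np.std(bins, ddof=1) / np.sqrt(len(bins))
    return 0.5 * ((err_binned / err_naive) ** 2 - 1.0)
```

For uncorrelated data the two error estimates coincide and $\tau$ is compatible with zero, while correlated data inflate the binned error and hence $\tau$.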
The constant value $p$ reached by $p(r,r+1)$ after the tuning of $c(r)$ obviously depends on the number $N_r$ of replicas used. We did not perform a systematic investigation of how the numerical effectiveness of parallel tempering depends on $p$; however, this dependence seems to be quite mild. Indeed, by increasing the number of replicas $N_r$, the constant acceptance probability $p$ grows and the autocorrelation time $\tau(Q_L^2)$ is reduced; however, the computational cost also increases with $N_r$. The net effect is that $\tau_{\mathit{pt}}(Q_L^2)$ is largely insensitive to the specific value of $p$, at least as long as it is not too close to $0$ or $1$, as we verified in some test simulations. For example, simulations performed with $N=4$ and $\beta=11.104$ using $p\simeq 20\%$ (achieved with $N_r=10$) or $p\simeq 30\%$ (corresponding to $N_r=12$) provided consistent values of $\tau_{\mathit{pt}}(Q_L^2)$: $72(10)$ and $78(18)$ respectively. The value of $p$ was kept fixed while approaching the continuum limit, and in all cases we found that this could be achieved by using the same value of $N_r$ for the different lattice spacings (at fixed physical volume). \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{{plot_acc_N_4_beta_11.347_theta_0}.eps} \caption{Behavior of $c(r)$ as a function of the replica index $r$ compared to a simple linear behavior (upper panel), along with the corresponding acceptances ($\sim20\%$) for the swap between copies $r$ and $r+1$ (lower panel). Data refer to a run with $N=4$, $\beta=11.347$ and $\theta_L=0$.} \label{fig:acceptances} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{{c_0_MC_evolution_N_4_beta_11.347_theta_0}.eps} \caption{Random walk of a configuration among different replicas during a parallel tempering run for $N=4$, $\beta=11.347$ and $\theta_L=0$.
Replicas are parametrized by their value of $c(r)$ and the Monte Carlo time is expressed in units of the parallel tempering update step defined in the text.} \label{fig:bound_cond_MC_evolution} \end{figure} Most of the simulations reported in this work used $L_d/a=2$ for the size of the defect, a value that is sufficient to drastically reduce the freezing problem. To investigate the dependence of $\tau_{\mathit{pt}}$ on $L_d$, some additional simulations have been performed with $L_d/a=3$ and $L_d/a=1$ for the case $N=6$, always keeping the swap acceptance probability $p$ fixed to about $30\%$, which requires scaling the number of replicas approximately as $\sim\sqrt{L_d^3}$. \begin{table}[!htb] \centering \begin{center} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \multicolumn{7}{|c|}{ }\\[-1em] \multicolumn{7}{|c|}{$N=4$}\\ \hline &&&&&&\\[-1em] $\beta$ & $a\sqrt{\sigma}$ & $L_d/a$ & $p(\%)$ & $N_r$ & $\tau_{\mathit{pt}}\left(Q_L^2\right)$ & $\tau_{\mathit{std}}\left(Q_L^2\right)$ \\ &&&&&&\\[-1em] \hline &&&&&&\\[-1em] \multirow{2}{*}{$11.104$}& \multirow{2}{*}{$0.1981(5)$} & \multirow{2}{*}{$2$} & 20 & 10 & 72(10) & \multirow{2}{*}{140(10)*} \\ &&& 30 & 12 & 78(18) & \\ \hline &&&&&&\\[-1em] 11.347 & 0.1590(6) & 2 & 20 & 10 & 380(80) & 1000(200)\\ \hline \multicolumn{7}{c}{ }\\ \hline \multicolumn{7}{|c|}{ }\\[-1em] \multicolumn{7}{|c|}{$N=6$}\\ \hline &&&&&&\\[-1em] $\beta$ & $a\sqrt{\sigma}$ & $L_d/a$ & $p(\%)$ & $N_r$ & $\tau_{\mathit{pt}}\left(Q_L^2\right)$ & $\tau_{\mathit{std}}\left(Q_L^2\right)$ \\ &&&&&&\\[-1em] \hline &&&&&&\\[-1em] 24.768 & 0.2912(11) & 2 & 30 & 17 & 16(3) & 110(10)* \\ \hline &&&&&&\\[-1em] \multirow{3}{*}{$24.845$}& \multirow{3}{*}{$0.2801(13)$} & 1 & \multirow{3}{*}{30} & 7 & 120(30) & \multirow{3}{*}{220(30)*} \\ && 2 & & 17 & 22(5) & \\ && 3 & & 29 & 30(6) & \\ \hline &&&&&&\\[-1em] 25.056 & 0.2499(10) & 2 & 30 & 17 & 39(8) & 800(100)* \\ \hline &&&&&&\\[-1em] 25.394 & 0.2143(8) & 2 & 30 & 17 & 110(40) & 5000(1500) \\ \hline
&&&&&&\\[-1em] \multirow{2}{*}{$25.750$}& \multirow{2}{*}{$0.1874(8)$} & 2 & \multirow{2}{*}{30} & 17 & 760(200) & \multirow{2}{*}{$\sim10^5$**} \\ &&3 & & 30 & 43(11) & \\ \hline \end{tabular} \end{center} \caption{Results for the autocorrelation time of $Q^2_L$ obtained by using the standard and the Hasenbusch algorithm. Quantities denoted with $*$ are taken from Ref.~\cite{Bonati:2016tvi}, where the procedure used for the local update was the same as in the present work. The result denoted by ** is just a rough estimate, since it was impossible to obtain a reliable value even after $\sim2.5$M trajectories. } \label{tab:summary_autocorr_times} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{tau_vs_a.eps} \caption{Scaling of $\tau_{\mathit{pt}}\left(Q^2_L\right)$ with the inverse lattice spacing obtained by using the local algorithms or parallel tempering for $N=6$. The scalings of the autocorrelation time obtained with the standard algorithm and with parallel tempering at fixed $L_d/a=2$ are both compatible with an exponential growth in $1/a$ (dashed and solid lines). Best fits performed with the fit function $\log\{\tau_{\mathit{pt}}\}= k_0 + k_1/(a\sqrt{\sigma})$ yield $k_1=3.24(20)$ and $1.95(17)$ for the standard and the parallel tempering updates, respectively. } \label{fig:autocorr_times_N_6} \end{figure} A complete list of the obtained autocorrelation times $\tau_{\mathit{pt}}(Q_L^2)$ is reported in Tab.~\ref{tab:summary_autocorr_times}, where they are also compared with the results obtained in Ref.~\cite{Bonati:2016tvi} using just local algorithms. The scaling of $\tau_{\mathit{pt}}(Q_L^2)$ with $1/(a\sqrt{\sigma})$ for the case $N=6$ is instead shown in Fig.~\ref{fig:autocorr_times_N_6}. Simulations performed for $N=6$ at $\beta=24.845$ (corresponding to the points at $1/(a\sqrt{\sigma})\simeq 3.57$ in Fig.~\ref{fig:autocorr_times_N_6}) using $L_d/a=1,2,3$ show that $L_d/a=2$ is the optimal choice for this value of the coupling.
As can be seen from Fig.~\ref{fig:autocorr_times_N_6}, the autocorrelation times extracted from simulations performed at fixed $L_d/a=2$ are much smaller than the corresponding ones obtained from simulations using local update algorithms, even at the smaller values of the lattice spacing. However, $\tau_{\mathit{pt}}(Q_L^2)$ still seems to scale exponentially with the inverse lattice spacing. This is due to the fact that, by approaching the continuum limit at fixed $L_d/a$, the size of the defect in physical units is reduced, and the mechanism of injection of topological charge through the defect becomes less and less efficient. If instead $L_d$ is kept fixed in physical units while approaching the continuum limit, one generically expects a polynomial critical slowing down in $1/(a\sqrt{\sigma})$. To investigate this point we performed additional simulations at $\beta=25.75$ using $L_d/a=3$, in order to have at this lattice spacing a defect of the same physical size as the one corresponding to $L_d/a=2$ at $\beta=24.845$ (in both cases $L_d\sqrt{\sigma} \sim 0.56$). The outcome of this test is that, despite a $\approx 33\%$ reduction of the lattice spacing, the effective autocorrelation time $\tau_{\mathit{pt}}(Q_L^2)$ is compatible in the two cases, as reported in Tab.~\ref{tab:summary_autocorr_times} and shown in Fig.~\ref{fig:autocorr_times_N_6}. These results do not yet permit a clear assessment of the scaling and of the optimal tuning of the parallel tempering algorithm towards the continuum limit; altogether, however, they give a strong indication that it works exceedingly well, compared to standard algorithms, in reducing topological freezing, and that the best scaling is obtained by keeping $L_d$ in the range $0.2$--$0.3$~fm. All this is consistent with what is observed in two-dimensional $CP^{N-1}$ models~\cite{Hasenbusch:2017unr,Berni:2019bch}, where the continuum limit is performed at fixed $L_d / \xi$, i.e., at fixed physical size of the defect. 
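The exponential-scaling fits discussed here, of the form $\log\tau = k_0 + k_1/(a\sqrt{\sigma})$, amount to a linear regression of $\log\tau$ against $1/(a\sqrt{\sigma})$. A minimal unweighted sketch (the quoted numbers come from error-weighted fits, which this simplification ignores):

```python
import numpy as np

def fit_exponential_slowing(a_sqrt_sigma, tau):
    """Unweighted least-squares fit of log(tau) = k0 + k1/(a*sqrt(sigma)).

    Returns (k0, k1); k1 quantifies the exponential critical slowing down.
    """
    x = 1.0 / np.asarray(a_sqrt_sigma, dtype=float)
    y = np.log(np.asarray(tau, dtype=float))
    k1, k0 = np.polyfit(x, y, 1)  # polyfit returns (slope, intercept)
    return k0, k1
```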
\subsection{Analytic continuation and continuum limit} In Tab.~\ref{tab:summary_results} we summarize the results obtained for the topological observables $\chi$ and $b_2$ at different values of $N$ and of the lattice spacing, obtained by fitting the $\theta$ dependence of the cumulants as described in Sec.~\ref{sec:imtheta}. An example of an imaginary-$\theta$ fit of the cumulants is shown in Fig.~\ref{fig:cumulants_fit_N_6_beta_25.75} for the case $N=6$ and $\beta=25.75$. In all cases we found it sufficient to fit the first three cumulants, as the addition of the fourth one did not change the obtained results. Moreover, in all cases we found the $O\left( \theta_L^6 \right)$ term in the expansion of the vacuum energy to be well compatible with zero, since no signal above zero is observed for $b_4$. In particular, we find $\vert b_4(N=4)\vert \cdot 10^5 \lesssim 15$ and $\vert b_4(N=6) \vert \cdot 10^5 \lesssim 30$. For this reason, the results for $a^4 \chi$ and $b_2$ reported in Tab.~\ref{tab:summary_results} have been obtained by neglecting $b_4$ in Eqs.~\eqref{eq:cumul_dep_imag_theta}. Finally, we note that correlations between the different cumulants are small and do not significantly affect the result of the fit, as we explicitly checked by performing both correlated and uncorrelated fits. 
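Eqs.~\eqref{eq:cumul_dep_imag_theta} are not reproduced in this section. Assuming the standard parametrization of the vacuum energy to $O(\theta^4)$ (neglecting $b_4$, and with the renormalization of $\theta_L$ and volume factors absorbed into the variables, which is a simplification of this sketch), the first three cumulants at imaginary $\theta$ read $k_1 = \chi\theta - 2\chi b_2\theta^3$, $k_2 = \chi - 6\chi b_2\theta^2$, $k_3 = -12\chi b_2\theta$; the joint fit is then linear in $(\chi, \chi b_2)$ and can be sketched as:

```python
import numpy as np

def fit_cumulants_imag_theta(theta, k1, k2, k3):
    """Joint linear least-squares fit of the first three cumulants of Q
    at imaginary theta, with parameters A = chi and B = chi*b2:
        k1 = A*theta - 2*B*theta**3
        k2 = A       - 6*B*theta**2
        k3 =         -12*B*theta
    Returns (chi, b2)."""
    th = np.asarray(theta, dtype=float)
    y = np.concatenate([np.asarray(k1, dtype=float),
                        np.asarray(k2, dtype=float),
                        np.asarray(k3, dtype=float)])
    # stacked design matrix: one block of rows per cumulant
    col_A = np.concatenate([th, np.ones_like(th), np.zeros_like(th)])
    col_B = np.concatenate([-2.0 * th**3, -6.0 * th**2, -12.0 * th])
    design = np.column_stack([col_A, col_B])
    (A, B), *_ = np.linalg.lstsq(design, y, rcond=None)
    return A, B / A
```

A weighted version (dividing rows by the statistical errors) would be needed to reproduce the quoted $\chi^2/\mathrm{dof}$ values.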
\begin{table}[!htb] \centering \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{ }\\[-1em] \multicolumn{5}{|c|}{$N=4$}\\ \hline &&&&\\[-1em] $\beta$ & $a\sqrt{\sigma}$ & $Z$ & $a^4\chi \cdot 10^5$ & $b_2 \cdot 10^4$\\ \hline &&&&\\[-1em] 11.104 & 0.1981(5) & 0.14742(96) & 4.183(28) & -137.0(7.5) \\ 11.347 & 0.1590(6) & 0.1747(23) & 1.691(22) & -128(11) \\ \hline \multicolumn{5}{c}{ }\\ \hline \multicolumn{5}{|c|}{ }\\[-1em] \multicolumn{5}{|c|}{$N=6$} \\ \hline &&&&\\[-1em] $\beta$ & $a\sqrt{\sigma}$ & $Z$ & $a^4\chi \cdot 10^5$ & $b_2 \cdot 10^4$ \\ \hline &&&&\\[-1em] 24.768 & 0.2912(11) & 0.10300(52) & 18.371(93) & -77.2(8.3) \\ 24.845 & 0.2801(13) & 0.10945(68) & 15.205(94) & -73.6(9.1) \\ 25.056 & 0.2499(10) & 0.12053(88) & 9.719(69) & -72.9(8.6) \\ 25.394 & 0.2143(8) & 0.1382(12) & 5.120(44) & -65.1(7.6) \\ 25.750 & 0.1878(13) & 0.1518(15) & 2.816(27) & -58.7(7.4) \\ \hline \end{tabular} \end{center} \caption{Summary of the results obtained using the imaginary-$\theta$ fit for $N=4$ and $6$ by considering up to $O(\theta_L^4)$ terms in the Taylor expansion of the vacuum energy, i.e., neglecting $b_4$ in Eqs.~\eqref{eq:cumul_dep_imag_theta}.} \label{tab:summary_results} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{{fit_N_6_beta_25.75_3_cumuls_thetamax_17.5}.eps} \caption{Best fit of the first 3 cumulants (solid, dashed and dotted line respectively) for $N=6$, $\beta=25.750$ and $\theta_L\in \left[0,17.5\right]$, obtained considering up to $O(\theta_L^4)$ terms in the Taylor expansion of the vacuum energy, i.e., neglecting $b_4$ in Eqs.~\eqref{eq:cumul_dep_imag_theta}. The best fit yields $\chi^2/\mathrm{dof}=14.4/24$.} \label{fig:cumulants_fit_N_6_beta_25.75} \end{figure} We used our data for $N=4$ and $N=6$, as well as data obtained for larger lattice spacings taken from Ref.~\cite{Bonati:2016tvi}, to extrapolate continuum results for $\chi/\sigma^2$ and $b_2$. 
For the topological susceptibility the improvement with respect to previously available results is only marginal, since the dominant source of error comes from the string tension $\sigma$ used to set the scale. For $b_2$, instead, we achieved a substantial improvement of the state of the art, both for $N=4$ and $N=6$. In particular, for $N=6$, parallel tempering allowed us to reach much finer lattice spacings than those used in previous studies. In this way we could perform, for the first time, a controlled continuum extrapolation of $b_2$ in this case, whereas Ref.~\cite{Bonati:2016tvi} reported only a reasonable confidence interval. In Tab.~\ref{tab:cont_limit_summary} we summarize our continuum limits, while in Fig.~\ref{fig:cont_limit} we report our continuum extrapolations. \begin{table}[!htb] \centering \begin{center} \begin{tabular}{|c||c|c|} \hline &&\\[-1em] $N$ & $\chi/\sigma^2$ & $b_2$\\ \hline &&\\[-1em] 3 & 0.0289(13) & -0.0216(15) \\ 4 & 0.02499(54) & -0.01240(96) \\ 6 & 0.02214(69) & -0.0042(10) \\ \hline \end{tabular} \end{center} \caption{Summary of continuum extrapolations for $N=3,4$ and $6$. Values for $N=3$ are taken from Ref.~\cite{Bonati:2015sqt}.} \label{tab:cont_limit_summary} \end{table} \begin{figure} \hspace*{0.16cm} \includegraphics[scale=0.485]{continuum_limit_chi.eps} \hspace*{-0.22cm} \includegraphics[scale=0.5]{continuum_limit_b2.eps} \caption{Continuum extrapolations of $\chi/\sigma^2$ (above) and $b_2$ (below) for $N=4$ and $6$ (solid and dashed line respectively) obtained by fitting linear corrections in $a^2 \sigma$ to the continuum limit. The reported best fits yield, respectively for $N=4$ and $6$, $\chi^2/\rm{dof}=0.8/4$ and $3.7/4$ for $\chi/\sigma^2$ and $\chi^2/\rm{dof}=4.9/4$ and $1.2/4$ for $b_2$. 
The diamond points represent the determinations reported in Ref.~\cite{Bonati:2016tvi} for $N=4$ and 6.} \label{fig:cont_limit} \end{figure} \subsection{Large-$N$ limit} In this section we revisit the large-$N$ extrapolation on the basis of our improved results; in particular, we report our estimates of $\bar{\chi}$ and $\bar{b}_2$ introduced in Eq.~\eqref{eq:large_N_scaling}. Let us start from the topological susceptibility: following large-$N$ expectations, we fitted our data for $N\ge3$ using the functional form: \begin{eqnarray} \frac{\chi}{\sigma^2} = \frac{\bar{\chi}}{\sigma^2} + \frac{k}{N^2} + O\left(\frac{1}{N^4}\right). \end{eqnarray} Our data are in agreement with the expected large-$N$ scaling and we find the result $\bar{\chi}/\sigma^2=0.0199(10)$; the best fit is shown in Fig.~\ref{fig:large_N_chi} together with the numerical results. As already observed, our result does not improve on the previous determination $\bar{\chi}/\sigma^2=0.0209(11)$ of Ref.~\cite{Bonati:2016tvi}, as the main source of error comes from the string tension used to set the scale. Using $\Lambda_{\mathit{large-}N}/\sqrt{\sigma}=0.525(2)$~\cite{GonzalezArroyo:2012bh} and $\Lambda_{\mathit{large-}N}=242(10)$~MeV~\cite{Gockeler:2004ad} to convert to physical units we get $\bar{\chi}^{1/4}=173(8)$~MeV, in agreement with the prediction $\bar{\chi}^{1/4}\simeq 180$~MeV obtained from the Witten--Veneziano formula. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{large_N_chi.eps} \caption{Extrapolation of $\chi/\sigma^2$ towards the large-$N$ limit using the fit function $\chi/\sigma^2=\bar{\chi}/\sigma^2+k/N^2$. The best fit yields $\bar{\chi}/\sigma^2=0.0199(10)$ and $k=0.082(17)$.} \label{fig:large_N_chi} \end{figure} We now turn to the discussion of the large-$N$ behavior of $b_2$. According to standard large-$N$ arguments, we expect a behavior of the type: \begin{eqnarray} b_2 = \frac{\bar{b}_2}{N^2} + \frac{\bar{b}_2^{\left(1\right)}}{N^4} + O\left(\frac{1}{N^6}\right)\ . 
\end{eqnarray} To test this prediction we performed a best fit of our data with $N\ge3$ using the power law $b_2(N)= \bar{b}_2/N^{c}$, obtaining perfect agreement with expectations, since the fitted exponent is $c=2.17(26)$, which improves on the previous result $c=2.0(4)$ reported in Ref.~\cite{Bonati:2016tvi}. The obtained best fit is shown in Fig.~\ref{fig:large_N_b2}. By fixing the exponent $c=2$ and fitting our data with just the leading behavior $b_2=\bar{b}_2/N^2$ in the ranges $N\ge3$ and $N\ge 4$, we obtain the results $\bar{b}_2=-0.1931(98)$ and $\bar{b}_2=-0.192(14)$, respectively. Since the curve profiles obtained in these two cases are practically indistinguishable, we only show the former in Fig.~\ref{fig:large_N_b2}. As our final result, we quote the value $\bar{b}_2=-0.193(10)$, which improves on the previous determination $\bar{b}_2=-0.23(3)$ of Ref.~\cite{Bonati:2016tvi}. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{large_N_b2.eps} \caption{Extrapolation of $b_2$ towards the large-$N$ limit. The solid line represents the best fit obtained using the fit function $b_2=\bar{b}_2/N^c$ in the whole range; the dashed line represents the best fit obtained using the same fit function but with fixed $c=2$ in the whole range; the dotted line represents the best fit obtained in the whole range including a further $\bar{b}_2^{\left(1\right)}/N^4$ term in the fit function. The best fits yield, respectively, $\bar{b}_2=-0.238(79), -0.1931(98)$ and $-0.179(31)$. The fit with a free exponent yields $c=2.17(26)$, while the next-to-leading correction is $\bar{b}_2^{\left(1\right)}=-0.17(35)$.} \label{fig:large_N_b2} \end{figure} \section{Conclusions} \label{sec:conclusion} In this work we investigated the $\theta$-dependence of $SU(N)$ Yang--Mills theories at zero temperature using the parallel tempering algorithm proposed by Hasenbusch in Ref.~\cite{Hasenbusch:2017unr}. 
This algorithm was originally tested in two-dimensional $CP^{N-1}$ models, and a first extension of the original proposal has already been performed in Ref.~\cite{Berni:2019bch} (still for $CP^{N-1}$ models), by extending the parallel tempering approach to simulations at imaginary $\theta$ values. In the present work we implemented the same setup in $SU(N)$ Yang--Mills theory, thus proving the feasibility of the approach also for computationally more demanding models, and improving the state-of-the-art results for the $\theta$ dependence of these models in the large-$N$ limit. The idea of the method is to simulate many independent identical systems differing only in the boundary conditions imposed on a cubic defect $L_d\times L_d \times L_d$, which are chosen to interpolate between open (obc) and periodic (pbc) boundary conditions. Each replica evolves independently, and swaps among them are proposed from time to time in order to transfer configurations from the obc to the pbc replica. In this way a drastic reduction of the autocorrelation time of the topological charge is achieved, while avoiding the complications related to the breaking of translation invariance that comes with the adoption of open boundary conditions, since measurements are performed on the pbc replica. By using the parallel tempering algorithm we obtained an impressive reduction of the autocorrelation times of topological observables, which for the smallest lattice spacing used was of at least two orders of magnitude, even when taking into account the larger computational cost. A nice feature of the algorithm is that this gain was obtained without optimally tuning all the possible parameters entering the update, which proves the robustness of the approach. 
The most relevant parameter to be fixed is clearly the size of the defect, and we verified that for the cases studied in this paper a size in the range $0.2$--$0.3$~fm is sufficiently close to optimal to obtain a huge reduction of the critical slowing down. The possibility of performing simulations at smaller lattice spacings than in previous studies allowed us to achieve a substantial improvement in the determination of the $\theta$-dependence beyond the leading $O(\theta^2)$ order. In particular, we improved the accuracy of the determination of the coefficient $b_2$ for both $N=4$ and $N=6$, in the latter case also performing a controlled continuum-limit extrapolation, which was not possible with previously available results. These data confirm that $b_2$ scales with the number of colors in a way that is consistent with the leading behavior predicted by large-$N$ arguments: data for $N\ge 3$ are perfectly compatible with a scaling of the form $b_2 = \bar{b}_2/N^2$, with $\bar{b}_2=-0.193(10)$, while a best fit according to $b_2=\bar{b}_2/N^c$ returns the value $c = 2.17(26)$ for the free exponent. This shows that the scaling of our data is consistent with the leading expected behavior, and it is thus reasonable to neglect sub-leading corrections. We note explicitly, however, that the accuracy is still not high enough to exclude possible unconventional scenarios like the one put forward in Ref.~\cite{Vonk:2019kwv}, or to better investigate whether critical corrections emerge at small $N$~\cite{Kitano:2020mfk}. A further refinement of the present results, including also new values of $N$, would thus be welcome in the future: the algorithm proposed in Ref.~\cite{Hasenbusch:2017unr}, and extended to $SU(N)$ gauge theories in this study, will permit such a systematic refinement. \acknowledgments Numerical simulations have been performed on the MARCONI machine at CINECA, based on the agreement between INFN and CINECA (under projects INF19\_npqcd and INF20\_npqcd).
\section{Introduction} \label{sec:2} The statistician observes a dataset of $n$ i.i.d. realizations of an outcome random variable $y_i$, a random vector of covariates $d_i$ with support in $\mathbb{R}^{K}$ and a random vector of controls $x_i$ with support in $\mathbb{R}^p$. $K$ is fixed while $p=p_n$ is allowed to go to infinity with the sample size. We assume that the following relationship holds: \begin{equation}\label{reg}y_i=d_i^{\top}\alpha+x_i^\top\beta+\gamma_i +u_i,\ \forall i=1,\dots,n, \end{equation} where $\alpha\in \mathbb{R}^K, \beta \in \mathbb{R}^{p}$, the error term $u_i$ is a real-valued random variable such that $\mathbb{E}[w_iu_i \vert \gamma_i=0]=0$ where $w_i=(d_i^\top, x_i^\top)^\top$ and $\gamma_i$ is a random variable. It also holds that $\{y_i,d_i,x_i,u_i,\gamma_i\}_i$ are i.i.d. and $\mathbb{E}[w_i w_i^\top\vert \gamma_i=0]$ exists and is positive definite. The observation $i$ is called an outlier if $\gamma_i\ne 0$. Let $X=(x_1, \dots, x_n)^\top$, $\gamma = (\gamma_1,\dots,\gamma_n)^\top$ and $u=(u_1,\dots,u_n)^\top$. The goal is to obtain inference results on the parameter $\alpha$ in the presence of outliers. This model embodies many situations of practical interest. In Economics, $d_i$ could correspond to a binary treatment, while $x_i$ is a set of controls which need to be added to the regression to ensure exogeneity of $d_i$. The term $\gamma_i$ may represent measurement errors; in this case, $\alpha$ is the average treatment effect of the whole population. Instead, if outliers arise because for some atypical individuals $d_i$ has a causal effect markedly different from that of the rest of the population, then $\alpha$ is the causal effect of the vast majority of the population. There may be many controls because the data are by nature high-dimensional or because the statistician includes multiple transformations of a low number of variables to account for nonlinearities (see e.g. \cite{belloni2014inference}). 
In Biology, the researcher may guess that a few genes increase the risk of catching a given disease. To test this conjecture, a dataset of genomes of sick and healthy individuals is gathered. The variables in $d_i$ are the suspected genes while $x_i$ corresponds to the rest of the genome. Here, a likely cause for outliers is measurement errors. This paper borrows from an approach to inference in the high-dimensional regression model which is rooted in the econometrics literature (see \citet{belloni2011?1, belloni2012sparse, belloni2014high, belloni2014inference, belloni2016inference, belloni2016post, belloni2017program, belloni2019valid}). We introduce the linear projections of each covariate in $d_i$ on the controls $x_i$. For $k=1,\dots, K$, let \begin{equation}\label{lp}d_{ki}=x_i^\top\beta^k+\gamma_{i}^k +\xi_{i}^k,\ \forall i=1,\dots,n, \end{equation} where $ \beta^k \in \mathbb{R}^{p}$, the error term $\xi_{i}^k$ is a real-valued random variable such that $\mathbb{E}[x_i\xi_i^k \vert \gamma_i^k=0]=0$ and $\gamma_i^k$ is a random variable. All observations are i.i.d. and $\mathbb{E}[x_ix_i^\top\vert \gamma_i^k=0]$ exists and is positive definite. Again, the observation $i$ is called an outlier if $\gamma_i^k\ne 0$. Let $\gamma^k = (\gamma_1^k,\dots,\gamma_n^k)^\top$ and $\xi^k=(\xi_1^k,\dots,\xi_n^k)^\top$. Similarly, the linear projection of $y_i$ on $x_i$ can be written \begin{equation}\label{lpy}y_{i}=x_i^\top\beta^0+\gamma_{i}^0 +\xi_{i}^0,\ \forall i=1,\dots,n, \end{equation} where $\beta^0 = \sum_{k=1}^K\alpha_k\beta^k+\beta$, $\gamma^0 =\sum_{k=1}^K\alpha_k\gamma^k+\gamma$ and $\xi^0 =\sum_{k=1}^K\alpha_k\xi^k+u$. We study a two-step estimation procedure. In the first step, we apply a variant of the square-root lasso estimator of \cite{belloni2011square} to the regressions in \eqref{lp} and \eqref{lpy}. 
The proposed variant has the advantage of being robust to outliers and allows us to obtain estimates $\widehat{\xi}^0$ and $\widehat{\xi}^k$ of $\xi^0$ and $\xi^k$, respectively. We penalize the $\ell_1$-norm of both $\beta^k$ and $\gamma^k$. In the second step, we use the ordinary least squares estimator applied to the regression of $\widehat{\xi}^0$ on $\widehat{\xi}^1,\dots,\widehat{\xi}^K$. The rationale behind this second step lies in a moment condition satisfying the Neyman orthogonality condition (\cite{belloni2017program}). We show that, if the vectors $\beta^0,\dots, \beta^K$ are sparse enough and the proportion of outliers in all first-step regressions and in \eqref{reg} goes sufficiently quickly to $0$, the proposed two-step estimator of $\alpha$ is asymptotically normal, which enables us to build tests and confidence intervals. Strikingly, the asymptotic variance of our estimator is the same as that of the OLS estimator of $\alpha$ in the regression of $y_i$ on $d_i$ and $x_i$ (under the usual conditions of the linear regression model, in particular fixed $p$ and no outliers). In this sense the proposed estimator is efficient. \textbf{Related literature.} This paper draws upon the literature in at least two different research fields. The first is that of inference in the high-dimensional linear regression model, for which different approaches have been suggested. \citet{javanmard2014confidence, van2014asymptotically} and \cite{zhang2014confidence} propose to debias the LASSO estimator. \citet{belloni2014inference} rely rather on a two-step approach similar to ours which attains the semiparametric efficiency bound. None of these approaches, however, has been shown to be robust to outliers. The second related field is that of robust regression. Detailed accounts of this field can be found in \cite{rousseeuw2005robust,hampel2011robust} and \cite{maronna2018robust}. 
In the context of low-dimensional regression, the literature identifies a trade-off between efficiency and robustness, as explained below. $M$-estimators (such as the Ordinary Least Squares (OLS) estimator) are often efficient when data are generated by the standard linear model with Gaussian errors and without outliers. However, this comes at the cost of robustness; $M$-estimators may be asymptotically biased in the presence of outliers. By contrast, high-breakdown estimators such as the Least Median of Squares (LMS) and the Least Trimmed Squares (LTS) are robust under several measures of robustness developed in the literature. They are also asymptotically normal in the model with Gaussian errors and without outliers, but have a larger asymptotic variance than the OLS estimator in the standard linear model. In contrast, our estimation procedure yields an efficient estimator which can be asymptotically normal even in the presence of outliers. Within the robust regression literature, some authors have considered applying $\ell_1$-norm penalization to robust estimation. For low-dimensional linear regression (or nested special cases), see for instance \cite{gannaz2007robust, she2011outlier, lee2012regularization,lambert2011robust,dalalyan2012socp,li2012simultaneous,gao2016penalized, collier2017rate}. These works do not provide inference results. In this setup, \cite{beyhum2020inference} shows that a variant of the square-root lasso estimator is asymptotically normal and efficient. The present paper can be seen as an extension of this result to a high-dimensional context. Recently, some authors have studied the problem of simultaneous estimation of the regression coefficients and the outliers when the number of variables can be larger than the sample size, see for instance \cite{alfons2013sparse, liu2017robust, virouleau2017high, yang2018general, liu2020high}. 
Closely related to our study is \cite{nguyen2012robust}, which proposed the extended lasso estimator, a variant of the lasso estimator. \cite{dalalyan2019outlier} later refined their results. None of those works develop confidence intervals. Our main contribution is therefore to propose an $\ell_1$-norm penalized estimation procedure in the high-dimensional regression model which is robust to outliers, asymptotically normal and efficient. Moreover, in \cite{nguyen2012robust} the proposed theoretical choice of penalty level depends on the variance of the error term. Because our first-step estimator is a variant of the square-root lasso estimator, we avoid this issue and are able to propose a choice of penalty level which is not a function of the variance of the noise. \textbf{Notation.} We use the following notations. For a matrix $M$, $M^{\top}$ is its transpose, $\norm{M}_2$, $\norm{M}_1$ and $\norm{M}_{\infty}$ are the $\ell_2$-norm, $\ell_1$-norm and the sup-norm of the vectorization of $M$, respectively. $\norm{M}_{\text{op}}$ is the operator norm of $M$, $\norm{M}_0$ is the number of non-zero coefficients in $M$, that is, its $\ell_0$-norm, and $\norm{M}_{2,\infty}$ is the maximum of the $\ell_2$-norms of the columns of $M$. Moreover, $U(M)$ is the maximal diagonal element of $M$, and $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ are its smallest and largest eigenvalue, respectively. When $M$ is a $n_1\times n_2$ matrix, $S\subset\{1,\dots,n_1\}$ and $T\subset\{1,\dots,n_2\}$, we denote by $M_{ST}$ the $|S|\times |T|$ submatrix of $M$ corresponding to the rows indexed by $S$ and the columns indexed by $T$. For a real number $x\in \mathbb{R}$, $\text{sign}(x)$ is equal to $1$ if $x\ge 0$ and $-1$ otherwise. For $a,b\in \mathbb{R}$, $a\vee b$ (resp. $a\wedge b$) denotes the maximum (resp. minimum) of $a$ and $b$. Next, for an integer $m$, $I_m$ is the identity matrix of size $m\times m$. Finally, $\mathbbm{1}_{\{\cdot\}}$ denotes the indicator function. 
\section{Estimation} \subsection{Framework} The probabilistic framework consists of a sequence of data generating processes (henceforth, DGPs) that depend on the sample size $n$. We consider an asymptotic setting where $n$ goes to $\infty$ while the number $p=p_n$ of regressors is allowed, but not required, to go to infinity with $n$. The different regression coefficients $\beta,\beta^0,\dots,\beta^K$ and vectors $\gamma,\gamma^1,\dots,\gamma^K$ can vary with $n$, but $\alpha$ remains fixed. The proposed estimation strategy is able to handle models where $\gamma^k$ is sparse for all $k=1,\dots,K$, that is $\left|\left|\gamma ^k \right|\right|_0/n=o_P(1)$ or, in other words, $\epsilon^k\to 0$, where $\epsilon^k$ denotes the average proportion of outliers in the $k$-th regression. In principle, every outcome $y_i,d_i$ can be generated by a distribution that does not follow a linear model, but the difference between the distribution of $y_i,d_i$ and the one yielded by a linear model can only be large for a negligible proportion of individuals. The subsequent theorems will help to quantify these statements. \subsection{Estimator} We propose a two-step estimation strategy. In the first step, we estimate the regressions \eqref{lp} and \eqref{lpy}. For $k\in\{0,\dots,K\}$, $b\in\mathbb{R}^p$ and $c\in\mathbb{R}^n$, let $$Q^k(b,c)= \left\{\begin{array}{cc} \frac{1}{n} \sum_{i=1}^n(y_i-x_i^\top b-c_i)^2& \text{if } k=0\\ \frac{1}{n} \sum_{i=1}^n(d_{ki}-x_i^\top b-c_i)^2& \text{otherwise}. \end{array}\right.$$ We use the following estimators, which have the advantage of being robust to outliers. 
\begin{equation} \label{1step} (\widehat{\beta}^k,\widehat{\gamma}^k)\in \argmin{b\in \mathbb{R}^{p},\ c\in \mathbb{R}^n} (Q^k(b,c))^{1/2} +\frac{\lambda_{\beta}^k}{n}\norm{\widehat{\Psi}b}_1 +\frac{\lambda_{\gamma}^k}{n} \norm{c}_1, \ \forall k=0,\dots,K, \end{equation} where $\{\lambda_\beta^k\}_{k=0}^K, \{\lambda_\gamma^k\}_{k=0}^K$ are sequences of positive penalty levels whose properties will be specified below and $\widehat{\Psi}$ is the diagonal matrix with diagonal coefficients $\widehat{\Psi}_ {jj}=n^{-1/2}\sqrt{\sum_{i=1}^n x_{ji}^2}$. The second-step estimator is the ordinary least squares estimator of the regression of $y_i - x_i^\top\widehat{\beta}^0-\widehat{\gamma}^0_i$ on $d_{1i}-x_i^\top\widehat{\beta}^1-\widehat{\gamma}^1_i, \dots,d_{Ki}-x_i^\top\widehat{\beta}^K-\widehat{\gamma}^K_i$, that is \begin{equation}\label{2step} \widehat{\alpha}\in\argmin{a\in\mathbb{R}^K} \sum_{i=1}^n\left(\widehat{\xi}^0_i- \sum_{k=1}^Ka_k\widehat{\xi}^k_i\right)^2, \end{equation} where $\widehat{\xi}_i^0 = y_i - x_i^\top\widehat{\beta}^0-\widehat{\gamma}^0_i$ and for $k=1,\dots,K$, $\widehat{\xi}^k_i=d_{ki}-x_i^\top\widehat{\beta}^k-\widehat{\gamma}^k_i$. This estimator relies on the moment condition \begin{equation}\label{moment} \mathbb{E}\left[(d_{ki}-x_i^\top\beta^k-\gamma^k_i)\left((y_i-x_i^\top\beta^0-\gamma^0_i)-\sum_{l=1}^K\alpha_l(d_{li}-x_i^\top\beta^l-\gamma^l_i)\right)\right]=0, \end{equation} for all $k=1,\dots,K$. If $\mathbb{E}[x_iu_i]=0$ and $\mathbb{E}[x_i\xi^k_i]=0$ for $k=1,\dots,K$, the partial derivatives of the moment in \eqref{moment} with respect to $\beta^0,\dots,\beta^K$ and $\gamma^0,\dots,\gamma^K$ are $0$. As mentioned in the introduction, this moment therefore satisfies the Neyman orthogonality condition (see \cite{belloni2017program}). This property reduces the effect on the second step of mistakes made in the first step. \subsection{Rate of convergence of the first-step estimator} Let $k\in\{0,\dots,K\}$. 
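As a concrete numerical illustration of the first-step problem \eqref{1step}, the sketch below solves a simplified plain-lasso analogue (squared loss in place of the square-root loss, and no $\widehat{\Psi}$ weighting) by alternating an exact soft-thresholding update in $c$ with coordinate descent in $b$; the solver, data and penalty levels are illustrative only and do not reproduce the estimator analyzed in this paper.

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_lasso(y, X, lam_b, lam_c, n_iter=300):
    """Alternating minimization of
        (1/(2n))||y - Xb - c||_2^2 + (lam_b/n)||b||_1 + (lam_c/n)||c||_1 .
    The c-step is an exact soft-thresholding of the residuals; the b-step
    is one sweep of coordinate descent.  The objective is jointly convex
    with separable nonsmooth terms, so the alternation converges to a
    global minimizer."""
    n, p = X.shape
    b = np.zeros(p)
    c = np.zeros(n)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        c = soft(y - X @ b, lam_c)          # exact minimization in c
        r = y - X @ b - c                   # current residual
        for j in range(p):                  # coordinate descent in b
            r += X[:, j] * b[j]
            b[j] = soft(X[:, j] @ r, lam_b) / col_sq[j]
            r -= X[:, j] * b[j]
    return b, c
```

Note that partially minimizing this simplified objective over $c$ yields a Huber-type loss in $b$, which is one way to see why the $\ell_1$ penalty on $c$ confers robustness to outliers.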
In this subsection we derive the convergence rate of the estimator \eqref{1step} for the regression problem \eqref{lp} ($k\ne 0$) or \eqref{lpy} ($k=0$). For a vector $v\in\mathbb{R}^p$ (resp. $v\in\mathbb{R}^n$), we denote by $v_T$ (resp. $v_S$) the vector such that $(v_T)_i= v_i$ (resp. $(v_S)_i=v_i$) if $\beta^k_i\ne 0$ (resp. $\gamma^k_i\ne0$) and $(v_T)_i= 0$ (resp. $(v_S)_i= 0$) otherwise. Moreover, we write $v_{T^c}= v-v_T$ (resp. $v_{S^c}= v-v_S$). Let $$\kappa^k = \min_{(h,f)\in\mathbb{C}^k} \frac{\frac{1}{\sqrt{n}}\norm{Xh+f}_2}{\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2}, $$ where $\mathbb{C}^k = \{(h,f)\in\mathbb{R}^p\times\mathbb{R}^n\ |\ \lambda_{\beta}^k\norm{\widehat{\Psi}h_{T^c}}_1+\lambda_{\gamma}^k\norm{f_{S^c}}_1 \le 3\lambda_{\beta}^k\norm{\widehat{\Psi}h_{T}}_1+3\lambda_{\gamma}^k\norm{f_{S}}_1\}.$ The constant $\kappa^k$ bears similarities to the extended restricted eigenvalue of \cite{nguyen2012robust} but has a different scaling. It is a generalization of the usual restricted eigenvalue (see \cite{bickel2009simultaneous}). Let $M^k=(\lambda_{\beta}^k\sqrt{\norm{\beta^k}_0})\vee( n\lambda_{\gamma}^k\sqrt{\epsilon^k})$. The following assumption is key to deriving rates of convergence of the estimator \eqref{1step}. \begin{assumption} \label{ascv} The following holds: \begin{enumerate}[\textup{(}i\textup{)}] \item\label{cvii}we have $\lim\limits_{n\to\infty}\Pr\left(\lambda_{\beta}^k\ge 2\sqrt{n}\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\norm{\xi^k}_2\widehat{\Psi}_{jj}}\right)=1$ and $\lim\limits_{n\to\infty}\Pr\left(\lambda_{\gamma}^k\ge 2 \sqrt{n} \frac{\norm{\xi^k}_\infty}{\norm{\xi^k}_2}\right)=1$; \item\label{cvi} there exists $\kappa_*^k>0$ such that $\lim_{n\to \infty}\Pr(\kappa^k\ge\kappa_*^k)=1$; \item\label{cviii} $M^k/n=o(1)$; \item\label{cviv} $\norm{\xi^k}_2=O_{\textrm{P}}(\sqrt{n})$. \end{enumerate} \end{assumption} Assumption \ref{ascv} \eqref{cvii} limits the choice of the penalty level. 
In practice, it is advisable to choose the lowest level of penalty satisfying this condition. \cite{belloni2011square} elaborates on how to choose $\lambda_{\beta}^k$ according to \eqref{cvii}. Lemma 2.3 and Corollary 2.4 in \cite{beyhum2020inference} provide guidance on how to pick $\lambda_{\gamma}^k$ under this constraint. Condition \eqref{cvi} states that the extended restricted eigenvalue is bounded from below with probability approaching $1$. A sufficient condition when $x_i$ is a mean-zero Gaussian vector is given in Lemma 1 in \cite{nguyen2012robust}, but this result does not allow the penalization of $\beta^k$ to depend on the regressors. Condition \eqref{cviii} is a joint sparsity constraint on $\beta^k$ and $\gamma^k$. Condition \eqref{cviv} is standard and holds, by the law of large numbers, if the entries of $\xi^k$ are i.i.d. with finite second moment. Below, we give a sufficient condition for conditions \eqref{cvii} and \eqref{cvi} under a Gaussian design. \begin{lemma} \label{sufficient}Assume that the following holds: \begin{enumerate}[\textup{(}i\textup{)}] \item\label{si} $\{(x_i,\xi_i^k)\}_i$ are i.i.d. and $x_i$ is independent of $\xi_i^k$; \item\label{sii} there exists a positive definite matrix $\Sigma$ such that $\lambda_{\max}(\Sigma)U(\Sigma)=O(1)$, $1/(\lambda_{\min}(\Sigma)\wedge \min_{j=1,\dots,p}\Sigma_{jj})=O(1)$ and $x_i$ are i.i.d. $\mathcal{N}(0,\Sigma)$; \item\label{siii} there exists $\sigma^k>0$ such that $\xi^k_i$ are i.i.d. $\mathcal{N}(0,(\sigma^k)^2)$; \item\label{siv} there exists $c>1$ such that $\lambda_{\beta}^k\ge2c\sqrt{n}\sqrt{\log(p)}$ and $\lambda_{\gamma}^k\ge 2c\sqrt{\log(n)}$; \item\label{sv} $(\norm{\beta^k}_0\vee 1)\log(p)/n=o(1)$ and $\epsilon^k\log(n)=o(1)$. \end{enumerate}Then Assumption \ref{ascv} is satisfied. 
\end{lemma} In the particular setting of Lemma \ref{sufficient}, we see that condition \eqref{cviii} in Assumption \ref{ascv} becomes $\norm{\beta^k}_0=o(\sqrt{n}/\log(p))$ and $\epsilon^k=o(1/\log(n))$. As claimed in the introduction, the average proportion of outliers $\epsilon^k$ goes to $0$ while their number $n\epsilon^k$ can diverge. Let $\mu^k=(\norm{\beta^k}_0\vee \epsilon^k )$. The following theorem characterizes the rates of convergence of the first-step estimator. \begin{theorem}\label{thcv} Under Assumption \ref{ascv}, we have \begin{align*} \norm{\widehat{\beta}^k-\beta^k}_1+\frac{1}{\sqrt{n}}\norm{\widehat{\gamma}^k-\gamma^k}_1&=O_{\textrm{P}}\left(\sqrt{\mu^k}\left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right) \frac{M^k}{n}\right);\\ \norm{\widehat{\beta}^k-\beta^k}_2+\frac{1}{\sqrt{n}}\norm{\widehat{\gamma}^k-\gamma^k}_2&=O_{\textrm{P}}\left( \left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right) \frac{M^k}{n}\right). \end{align*} \end{theorem} The term $\norm{X}_{2,\infty}/\sqrt{n}$ is the maximum of the empirical standard deviations of the variables in $X$. It is equal to $\lambda_{\max}(\widehat{\Psi})$ and is an $O_{\textrm{P}}(1)$ when the variables in $X$ are uniformly bounded over $k$ and $n$ or when the assumptions of Lemma \ref{sufficient} hold (see Lemma \ref{chisquare} in the appendix). Under the assumptions of Lemma \ref{sufficient}, we obtain the same rate of convergence of $\norm{\widehat{\beta}^k-\beta^k}_2+\frac{1}{\sqrt{n}}\norm{\widehat{\gamma}^k-\gamma^k}_2$ as in Corollary 1 of \cite{nguyen2012robust} (but without requiring knowledge of the variance of the error term), that is $$O_{\textrm{P}}\left(\sqrt{\frac{\norm{\beta^k}_0\log(p)}{n}}+\sqrt{\epsilon^k\log(n)}\right).$$ \subsection{Second-step estimator} In this section, we present sufficient assumptions for the asymptotic normality of $\widehat{\alpha}$ and consistent estimation of its asymptotic variance. 
Recall that $\widehat{\alpha}$ is the OLS estimator of the regression of $\widehat{\xi}^0$ on $\widehat{\xi}^1,\dots,\widehat{\xi}^K$. The first set of assumptions concerns the distribution of $(\xi^0,\dots,\xi^K)$, which we estimate in the first step. Let $\xi_i=(\xi_i^1,\dots,\xi_i^K)^\top$ and $\widehat{\xi}_i=(\widehat{\xi}_i^1,\dots,\widehat{\xi}_i^K)^\top$. \begin{assumption} \label{asan} The following holds: \begin{enumerate}[\textup{(}i\textup{)}] \item\label{ani} $\{(d_i,x_i,u_i)\}_i$ are i.i.d. random variables; \item\label{anii} $\mathbb{E}[\xi_iu_i]=\mathbb{E}[u_i]=0$; \item\label{aniii} $\Sigma_\xi = \mathbb{E}[\xi_i\xi_i^\top]$ exists and is positive definite; \item\label{aniv} there exists $\sigma>0$ such that $\mathbb{E}[u_i^2|\xi_i]=\sigma^2<\infty$. The conditional variance $\sigma^2$ does not scale with $n$. \end{enumerate} \end{assumption} These conditions are standard in the linear regression literature and guarantee that the OLS estimator of the regression of $\xi^0$ on $\xi^1,\dots,\xi^K$ is asymptotically normal. Let us now introduce $\bar M = \max_{k=0,\dots,K} M^k$, $\bar\lambda_\beta =\max_{k=0,\dots,K} \lambda_\beta^k$, $\bar\lambda_\gamma =\max_{k=0,\dots,K} \lambda_\gamma^k$ and $\bar \mu=\max_{k=0,\dots,K}\mu^k$. The second set of assumptions ensures that $\xi^0,\dots,\xi^K$ are sufficiently well estimated in the first step. \begin{assumption}\label{asrate} The following holds: \begin{enumerate}[\textup{(}i\textup{)}] \item\label{asratei} $\bar M^2=o(n^{3/2})$; \item\label{asrateii} $\sqrt{\bar \mu} \left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right)\bar M\left(\bar\lambda_\beta(\norm{X}_{2,\infty}/\sqrt{n}) + \bar\lambda_\gamma \sqrt{n}\right)=o_{\text{P}}(n^{3/2})$. \end{enumerate} \end{assumption} Under the assumptions of Lemma \ref{sufficient}, condition \eqref{asratei} in Assumption \ref{asrate} is satisfied if $\norm{\beta^k}_0=o(\sqrt{n}/\log(p))$ (the usual consistency condition of the lasso) and $\epsilon^k=o(1/(\sqrt{n}\log(n)))$.
As already argued, we also have $(\norm{X}_{2,\infty}/\sqrt{n})=O_{\text{P}}(1)$ in this case. This implies that $$\sqrt{\bar \mu}\left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right) \bar M\left(\bar\lambda_\beta(\norm{X}_{2,\infty}/\sqrt{n}) + \bar\lambda_\gamma\sqrt{n}\right)= O_{\text{P}}\left(\sqrt{\bar \mu} \bar M\left(\bar\lambda_\beta + \bar\lambda_\gamma \sqrt{n}\right)\right)=O_{\text{P}}(\bar M^2)$$ and therefore Assumption \ref{asrate}\eqref{asrateii} is implied by Assumption \ref{asrate}\eqref{asratei}. These assumptions allow us to show the asymptotic normality of our two-step estimator. We have the following theorem. \begin{theorem}\label{than} Under Assumptions \ref{ascv}, \ref{asan} and \ref{asrate}, we have $$\sqrt{n}(\widehat{\alpha}-\alpha)\xrightarrow{d}\mathcal{N}(0,\sigma^2\Sigma_\xi^{-1}),\ \widehat{\sigma}\xrightarrow{\Pr}\sigma\ \text{and } \widehat{\Sigma}_\xi\xrightarrow{\Pr}\Sigma_\xi,$$ where $\widehat{\sigma}^2 = n^{-1}\sum_{i=1}^n(\widehat{\xi}^0_i-\sum_{k=1}^K\widehat{\alpha}_k\widehat{\xi}_i^k)^2$ and $\widehat{\Sigma}_\xi = n^{-1}\sum_{i=1}^n\widehat{\xi}_i\widehat{\xi}_i^\top$. \end{theorem} This result allows us to discuss efficiency. Let us consider the alternative problem of estimating $\alpha$, in the regression model \eqref{reg}, when there are no outliers ($\gamma,\gamma^1,\dots,\gamma^K=0$), $p$ is fixed and Assumption \ref{asan} holds. In this model, the OLS estimator of $(\alpha,\beta)^\top$ is asymptotically normal and the asymptotic variance of the OLS estimator of $\alpha$ (the projection on the first $K$ coordinates) is $\sigma^2\Sigma_\xi^{-1}$ by the Frisch-Waugh-Lovell theorem. Therefore, our estimator attains the same asymptotic variance as the OLS estimator in this alternative model. In this sense, our estimator is efficient. This result is remarkable because it is obtained in a framework where there are outliers and $p$ can go to infinity.
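To make the second step concrete, here is a minimal Python/NumPy sketch of the estimators appearing in Theorem \ref{than}, run on synthetic residuals standing in for the first-step estimates $\widehat{\xi}^0,\dots,\widehat{\xi}^K$ (the sample size, $K$, $\alpha$ and $\sigma$ below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20_000, 2
alpha = np.array([1.5, -0.7])          # illustrative true coefficient
sigma = 0.5                            # illustrative error standard deviation

# Synthetic stand-ins for the first-step residual estimates
E = rng.normal(size=(n, K))            # columns play the role of xi^1, ..., xi^K
u = sigma * rng.normal(size=n)         # regression error
xi0 = E @ alpha + u                    # plays the role of xi^0

# OLS of xi^0 on xi^1, ..., xi^K
alpha_hat = np.linalg.solve(E.T @ E, E.T @ xi0)

# Variance estimators of Theorem: sigma^2-hat and Sigma_xi-hat
sigma2_hat = np.mean((xi0 - E @ alpha_hat) ** 2)
Sigma_hat = E.T @ E / n
se = np.sqrt(sigma2_hat * np.diag(np.linalg.inv(Sigma_hat)) / n)  # standard errors

# 95% confidence intervals based on the normal limit
ci = [(a - 1.96 * s, a + 1.96 * s) for a, s in zip(alpha_hat, se)]
```

With exact (rather than estimated) residuals, this reproduces the limiting variance $\sigma^2\Sigma_\xi^{-1}$; the point of the theorem is that plugging in the first-step estimates does not change this limit.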
An important remark concerns the meaning of confidence intervals developed using Theorem \ref{than}. They are obtained under an asymptotic setting with triangular array data in which the number of outliers is allowed to go to infinity while the proportion of outliers and nonzero coefficients of $\beta^0,\dots,\beta^K$ go to $0$. A 95\% confidence interval $I$ built with Theorem \ref{than} should be interpreted as follows: if the proportion of outliers in our data is low enough, the vectors $\beta^0,\dots,\beta^K$ are sparse and the sample size is large enough, then there is approximately a probability of $0.95$ that $\alpha$ belongs to $I$. \section{Computation and simulations} \subsection{Iterative algorithm} \label{Iterative Algorithm} We propose to use an algorithm similar to that of Section 5 in \cite{owen2007robust} to compute the first-step estimators. Let $\widehat{\sigma}^k = Q^k(\widehat{\beta}^k,\widehat{\gamma}^k)^{1/2}$. Because $u=\min_{\sigma>0}\left\{\frac{\sigma}{2}+\frac{1}{2\sigma}u^2\right\}$ for all $u>0$, as long as $Q^k(\widehat{\beta}^k,\widehat{\gamma}^k)>0$, we have \begin{equation}\label{optiglob} (\widehat{\beta}^k,\widehat{\gamma}^k, \widehat{\sigma}^k)\in \argmin{b\in \mathbb{R}^{p},c\in \mathbb{R}^n,s \in \mathbb{R}_+}\frac{s}{2}+ \frac{1}{2s} Q^k(b,c)+\frac{\lambda^k_\beta}{n} \norm{\widehat{\Psi}b}_1+\frac{\lambda^k_\gamma}{n}\left|\left|c\right|\right|_1. \end{equation} This is a convex optimization program and the proposed approach is to iteratively minimize over $b$, $c$ and $s$.
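As an illustration, the alternating scheme described below can be sketched in Python/NumPy. This is a toy implementation, not the code used for the paper's simulations: the $b$-update is approximated by proximal-gradient (ISTA) iterations in place of an exact lasso solver, the $c$-update is elementwise soft-thresholding, and we take $Q^k(b,c)=\norm{y-Xb-c}_2^2$ so that the thresholds match the displayed steps.

```python
import numpy as np

def soft_threshold(v, thr):
    """Elementwise soft-thresholding: proximal operator of thr * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def alternating_minimization(y, X, lam_b, lam_g, n_iter=20, ista_steps=100):
    """Toy alternating minimization of s/2 + Q(b,c)/(2s) + penalties,
    with Q(b, c) = ||y - X b - c||_2^2."""
    n, p = X.shape
    psi = np.linalg.norm(X, axis=0) / np.sqrt(n)   # diagonal of Psi-hat
    b, c = np.zeros(p), np.zeros(n)
    s = np.linalg.norm(y - X @ b - c)              # s = sqrt(Q(b, c))
    L = 2.0 * np.linalg.norm(X, 2) ** 2            # Lipschitz constant of grad_b Q

    def objective(b, c, s):
        Q = np.sum((y - X @ b - c) ** 2)
        return (s / 2 + Q / (2 * s)
                + (lam_b / n) * np.sum(psi * np.abs(b))
                + (lam_g / n) * np.sum(np.abs(c)))

    objs = [objective(b, c, s)]
    for _ in range(n_iter):
        # Step 1: weighted lasso in b (penalty 2*lam_b*s/n on ||Psi b||_1) via ISTA
        for _ in range(ista_steps):
            grad = -2.0 * X.T @ (y - X @ b - c)
            b = soft_threshold(b - grad / L, (2 * lam_b * s / n) * psi / L)
        # Step 2: closed-form soft-thresholding at lam_g * s / n
        c = soft_threshold(y - X @ b, lam_g * s / n)
        # Step 3: s = sqrt(Q(b, c))
        s = np.linalg.norm(y - X @ b - c)
        objs.append(objective(b, c, s))
    return b, c, s, objs
```

Since each step (exactly or monotonically) decreases the joint convex objective, the recorded objective values are non-increasing.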
Let us start from $\left(b^{(0)},c^{(0)},s^{(0)}\right)$ and compute the following sequence for $t\in\mathbb{N}$ until convergence: \begin{enumerate} \item $b^{(t+1)}\in \argmin{b\in \mathbb{R}^{p}} Q^k(b,c^{(t)})+2\frac{\lambda^k_\beta s^{(t)}}{n} \norm{\widehat{\Psi}b}_1;$ \item $c^{(t+1)}\in \argmin{c\in \mathbb{R}^n} \left\vert\left| y-Xb^{(t+1)}-c \right|\right\vert_2^2+\frac{2\lambda^k_{\gamma} s^{(t)}}{n}\left|\left|c\right|\right|_1;$ \item $s^{(t+1)}=\sqrt{Q^k(b^{(t+1)},c^{(t+1)})}$. \end{enumerate} Step 1 corresponds to the minimization of the usual (weighted) lasso optimization program, for which solvers are readily available in statistical software. The following lemma is a direct consequence of Section 4.2.2 in \cite{giraud2014introduction} and explains how to perform step 2. \begin{lemma}\label{alphaiter} For $i=1,\dots,n$, if $\left|y_i\mathbbm{1}_{k=0} + d_{ki}\mathbbm{1}_{k\ne 0} -x_i^\top b^{(t+1)} \right|\le\frac{\lambda^k_{\gamma} s^{(t)}}{n}$ then $c^{(t+1)}_i=0$, otherwise $c^{(t+1)}_i=y_i\mathbbm{1}_{k=0} + d_{ki}\mathbbm{1}_{k\ne 0} -x_i^\top b^{(t+1)}-\text{sign}\left(y_i\mathbbm{1}_{k=0} + d_{ki}\mathbbm{1}_{k\ne 0} -x_i^\top b^{(t+1)}\right)\frac{\lambda^k_{\gamma} s^{(t)}}{n}$. \end{lemma} \subsection{Simulations} We apply our estimation procedure in a small simulation exercise. The vectors of regressors $x_i\in\mathbb{R}^p$ are i.i.d. $\mathcal{N}(0,I_p)$. The variable $d_i$ is unidimensional and generated as $d_i=x_i^\top\beta^1 +\gamma^1_i +\xi_i^1$, where $\beta^1\in\mathbb{R}^p$, $\beta^1_k=10$ for $6\le k\le 10$ and $\beta^1_k=0$ otherwise, $\xi^1_i$ are i.i.d. $\mathcal{N}(0,1)$ and $$\gamma^1_i=\left\{\begin{array}{cc} 0&\text{if $x_{11i}<\Phi^{-1}(1-\epsilon)$}\\ z&\text{if $x_{11i}\ge \Phi^{-1}(1-\epsilon)$}, \end{array}\right.$$ where $\Phi$ is the cumulative distribution function of the standard normal distribution and $z\in \mathbb{R}$.
The outcome is given by $y_i=\alpha d_i + x_i^\top\beta +\gamma_i+\xi_i$, where $\beta\in\mathbb{R}^p$ is such that $\beta_k=10$ for $1\le k\le 5$ and $\beta_k=0$ otherwise, $\xi_i$ are i.i.d. $\mathcal{N}(0,1)$ and $$\gamma_i=\left\{\begin{array}{cc} 0&\text{if $x_{6i}<\Phi^{-1}(1-\epsilon)$}\\ z&\text{if $x_{6i}\ge \Phi^{-1}(1-\epsilon)$.} \end{array}\right.$$ In Tables~\ref{fig:Cov} and~\ref{fig:Cov1}, we present the bias, variance, mean squared error (MSE) and coverage of $95\%$ confidence intervals based on the asymptotic variance of Theorem \ref{than} for our estimator $\widehat{\alpha}$, for various values of $p,n,\epsilon$ and $z$. These quantities are computed as averages over 1,000 replications and in each replication the first-step estimators are computed using 10 iterations of the algorithm in Section \ref{Iterative Algorithm}. The penalty levels are chosen according to Lemma \ref{sufficient}, that is $\lambda^k_\beta=2.02\sqrt{n}\sqrt{2\log(p)}$ and $\lambda^k_\gamma=2.02\sqrt{2\log(n)}$ for $k=0,1$. To compare our estimator to a procedure which is not robust to outliers, we report exactly the same information for the (biased) estimator $\widehat{\alpha}^{b}$, which is similar to $\widehat{\alpha}$ apart from the fact that it sets $\lambda^k_\gamma=0$ for $k=0,1$ ($\lambda^k_\beta$ remains equal to $2.02\sqrt{n}\sqrt{2\log(p)}$). The confidence intervals for $\widehat{\alpha}^{b}$ are also computed using Theorem \ref{than} but they are not asymptotically valid. We observe that our estimator has a very small estimation error and almost nominal coverage.
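The data-generating process above can be reproduced with a short Python/NumPy script (a sketch, not the paper's own simulation code; the value of $\alpha$ is an arbitrary illustrative choice since it is not specified in the text):

```python
import numpy as np
from statistics import NormalDist

def simulate(n=500, p=500, eps=0.005, z=20.0, alpha=2.0, seed=0):
    """Draw one sample from the simulation design (alpha is illustrative)."""
    rng = np.random.default_rng(seed)
    thr = NormalDist().inv_cdf(1 - eps)            # Phi^{-1}(1 - eps)
    X = rng.normal(size=(n, p))                    # x_i i.i.d. N(0, I_p)
    beta1 = np.zeros(p); beta1[5:10] = 10.0        # beta^1_k = 10 for 6 <= k <= 10
    beta = np.zeros(p);  beta[0:5] = 10.0          # beta_k = 10 for 1 <= k <= 5
    gamma1 = np.where(X[:, 10] >= thr, z, 0.0)     # outliers driven by x_{11}
    gamma = np.where(X[:, 5] >= thr, z, 0.0)       # outliers driven by x_{6}
    d = X @ beta1 + gamma1 + rng.normal(size=n)    # treatment equation
    y = alpha * d + X @ beta + gamma + rng.normal(size=n)  # outcome equation
    return y, d, X, gamma, gamma1

y, d, X, gamma, gamma1 = simulate()
```

Note that the outliers are not independent of the regressors: by construction they occur exactly when $x_{11i}$ (respectively $x_{6i}$) exceeds $\Phi^{-1}(1-\epsilon)$, so their expected proportion is $\epsilon$.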
\begin{table}[!ht] \begin{minipage}{0.48\textwidth} \centering \caption{$p=n=500$, $\epsilon=0.005$, $z=20$} \label{fig:Cov} {\small \begin{tabular}{|c|c|c|} \hline &$\widehat{\alpha}$ & $\widehat{\alpha}^{b}$ \\ \hline bias & $10^{-3}$&0.83\\ var & $8\times 10^{-3}$& 0.18\\ MSE &$8\times 10^{-3}$ & 0.86 \\ Coverage & 0.92 & 0.01\\ \hline \end{tabular} } \end{minipage} \begin{minipage}{0.48\textwidth} \centering \caption{$p=n=1{,}000$, $\epsilon=0.0025$, $z=40$} \label{fig:Cov1} {\small \begin{tabular}{|c|c|c|} \hline &$\widehat{\alpha}$ & $\widehat{\alpha}^{b}$ \\ \hline bias & $7\times 10^{-3}$&0.20\\ var & $3\times 10^{-3}$& 0.02\\ MSE &$3\times 10^{-3}$ & 0.06 \\ Coverage & 0.90 & 0.10\\ \hline \end{tabular} } \end{minipage} \end{table} \section{Proofs} \subsection{Proof of Lemma \ref{sufficient}} \subsubsection{Proof that condition \eqref{cvii} in Assumption \ref{ascv} holds} Conditional on $X$, $n^{-1/2}(X^\top\xi^k)_j=n^{-1/2}\sum_{i=1}^nx_{ij}\xi_i^k$ has a $\mathcal{N}(0, \widehat{\Psi}_{jj}^2(\sigma^k)^2)$ distribution. Therefore, by the Gaussian tail bound (see Lemma B.1 in \cite{giraud2014introduction}), we have \begin{equation}\label{gaussian_bound}\Pr\left(\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\ge t\right)=\mathbb{E}\left[\Pr\left(\left.\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\ge t\right| X\right)\right]\le \mathbb{E}\left[e^{-\frac{t^2}{2}}\right]=e^{-\frac{t^2}{2}},\end{equation} for $t\ge 0$.
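As a purely numerical sanity check (not part of the argument), the Gaussian tail bound used in \eqref{gaussian_bound}, namely $\Pr(|Z|\ge t)\le e^{-t^2/2}$ for $Z\sim\mathcal{N}(0,1)$, can be verified by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=500_000)  # standard normal draws

# Monte Carlo estimates of P(|Z| >= t) next to the bound exp(-t^2 / 2)
tail = {t: float(np.mean(np.abs(Z) >= t)) for t in (1.0, 2.0, 3.0)}
bound = {t: float(np.exp(-t * t / 2)) for t in (1.0, 2.0, 3.0)}
```

For every $t$, the empirical tail probability sits below the bound, with a comfortable margin for moderate $t$.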
Next, it holds that \begin{align} \notag &\Pr\left(\lambda_{\beta}^k< 2\sqrt{n}\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\norm{\xi^k}_2\widehat{\Psi}_{jj}} \right)\\ \notag&\le \Pr\left(c\sqrt{2\log(p)}<\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}+ \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k}\right|\sup\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\widehat{\Psi}_{jj}}\right)\\ \notag &\le\Pr\left(\left(c-\frac{c-1}{2}\right)\sqrt{2\log(p)}< \max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\right)\\ \label{pigeonhole} &\quad + \Pr\left(\frac{c-1}{2}\sqrt{2\log(p)}< \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k} \right|\sup\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\widehat{\Psi}_{jj}}\right), \end{align} by the pigeonhole principle. Now, because of \eqref{gaussian_bound}, we have \begin{align}\notag&\Pr\left(\left(c-\frac{c-1}{2}\right)\sqrt{2\log(p)} <\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\right)\\ \notag &\quad=\mathbb{E}\left[\Pr\left(\left.\left(c-\frac{c-1}{2}\right)\sqrt{2\log(p)}< \max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\right| X\right)\right]\\ \notag &\quad\le \mathbb{E}\left[\sum_{j=1}^{p}\Pr\left(\left.\left(c-\frac{c-1}{2}\right)\sqrt{2\log(p)}\le \frac{|(X^\top\xi^k)_j|}{\sqrt{n}\sigma^k\widehat{\Psi}_{jj}}\right| X\right)\right]\\ \label{limzero}&\le pe^{-\left(c-\frac{c-1}{2}\right)^2\log(p)}=e^{-\left(\left(c-\frac{c-1}{2}\right)^2-1\right)\log(p)}\to 0 \end{align} because $c-\frac{c-1}{2}>1$. Remark that by the law of large numbers and the continuous mapping theorem, $\sqrt{n}\norm{\xi^k}_2^{-1}\xrightarrow{\Pr} (\sigma^k)^{-1}$.
Therefore, this and \eqref{limzero} yield \begin{equation*} \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k} \right|\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\widehat{\Psi}_{jj}}=o_{{\textrm{P}}}(1)O_{{\textrm{P}}}(\sqrt{\log(p)}), \end{equation*} which implies \begin{equation} \label{limzero2}\lim\limits_{n\to\infty}\Pr\left(\frac{c-1}{2}\sqrt{2\log(p)}< \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k} \right|\max\limits_{j=1,\dots,p}\frac{|(X^\top\xi^k)_j|}{\sqrt{n}\widehat{\Psi}_{jj}}\right)=0. \end{equation} By \eqref{pigeonhole}, \eqref{limzero} and \eqref{limzero2}, we obtain $\lim\limits_{n\to\infty}\Pr\left(\lambda_{\beta}^k\ge 2\sqrt{n}\max\limits_{j=1,\dots,p}(|(X^\top\xi^k)_j|/(\norm{\xi^k}_2\widehat{\Psi}_{jj}))\right)=1$. Next, we prove that $\lim\limits_{n\to\infty}\Pr\left(\lambda_{\gamma}^k\ge 2\sqrt{n}\norm{\xi^k}_\infty/\norm{\xi^k}_2\right)=1$. The Gaussian bound in Lemma B.1 of \cite{giraud2014introduction} yields \begin{equation}\label{gaussian_bound_2}\Pr\left(\frac{|\xi^k_j|}{\sigma^k}\ge t\right)\le e^{-\frac{t^2}{2}},\end{equation} for $t\ge 0$. Then, we have \begin{align} \notag &\Pr\left(\lambda_{\gamma}^k< 2\sqrt{n}\frac{\norm{\xi^k}_\infty}{\norm{\xi^k}_2}\right)\\ \notag&\le \Pr\left(c\sqrt{2\log(n)}< \frac{\norm{\xi^k}_\infty}{\sigma^k}+ \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k}\right|\norm{\xi^k}_\infty\right)\\ \label{pigeonhole2} &\le\Pr\left(\left(c-\frac{c-1}{2}\right)\sqrt{2\log(n)}< \frac{\norm{\xi^k}_\infty}{\sigma^k}\right) + \Pr\left(\frac{c-1}{2}\sqrt{2\log(n)}< \left|\frac{\sqrt{n}}{\norm{\xi^k}_2} - \frac{1}{\sigma^k}\right|\norm{\xi^k}_\infty\right), \end{align} by the pigeonhole principle.
By \eqref{gaussian_bound_2} and a union bound over the $n$ coordinates of $\xi^k$, we have $$\Pr\left(\left(c-\frac{c-1}{2}\right)\sqrt{2\log(n)}<\norm{\xi^k}_\infty/\sigma^k\right)\le ne^{-\left(c-\frac{c-1}{2}\right)^2\log(n)}=e^{-\left(\left(c-\frac{c-1}{2}\right)^2-1\right)\log(n)}\to 0.$$ This implies that $\norm{\xi^k}_\infty=O_{\textrm{P}}(\sqrt{\log(n)})$, which leads to $|\sqrt{n}\norm{\xi^k}_2^{-1}-(\sigma^k)^{-1}|\norm{\xi^k}_\infty=o_{\textrm{P}}(\sqrt{\log(n)})$ and shows that $\lim\limits_{n\to\infty}\Pr\left(\frac{c-1}{2}\sqrt{2\log(n)}< \left|\sqrt{n}\norm{\xi^k}_2^{-1}-(\sigma^k)^{-1}\right|\norm{\xi^k}_\infty\right)=0$. We conclude using \eqref{pigeonhole2}. \subsubsection{Proof that condition \eqref{cvi} in Assumption \ref{ascv} holds} In the rest of the proof, we denote $\norm{\beta^k}_0$ by $t$, $\norm{\gamma^k}_0$ by $s$ and $\lambda^k_\gamma/\lambda_\beta^k$ by $\lambda$. Let also $\Psi$ be the $p\times p$ diagonal matrix for which $\Psi_{kk}=\sqrt{\Sigma_{kk}}$ for $k=1,\dots,p$ and $C_X=\left(\sqrt{\lambda_{\min}(\Sigma)}/(8\norm{\Psi}_\infty)\right)\wedge 1/2$. The following proof bears similarities with that of Lemma 1 in \cite{nguyen2012robust} but is adapted to the specific penalty of our estimator. Take $(h,f)\in \mathbb{C}^k$. Let us first prove the following lemmas. \begin{lemma}\label{chisquare}Under the assumptions of Lemma \ref{sufficient}, it holds that $\lambda_{\max}(\widehat{\Psi})=O_{\textrm{P}}(1)$, $1/\lambda_{\min}(\widehat{\Psi})=O_{\textrm{P}}(1)$ and \begin{equation} \label{boundmax}\lim\limits_{n\to\infty}\Pr\left(\lambda_{\max}(\widehat{\Psi})^2\ge \frac{3}{4}\norm{\Psi}_\infty^2\right)=1.\end{equation} \end{lemma} \begin{Proof} Take $k\in\{1,\dots,p\}$. Because $x_i\sim\mathcal{N}(0,\Sigma)$, we have $\sum_{i=1}^nX_{ik}^2/\Sigma_{kk}\sim \chi^2_n$. Let us first prove $\lambda_{\max}(\widehat{\Psi})=O_{\textrm{P}}(1).$ Using Lemma 1 in \cite{laurent2000adaptive}, we obtain $$ \Pr\left(\widehat{\Psi}_{kk}^2\ge \Sigma_{kk}+2\Sigma_{kk}\sqrt{\frac{\tau}{n}}+2\Sigma_{kk}\frac{\tau}{n}\right)\le \exp(-\tau).
$$ for $\tau>0$. Choose $\tau=2\log(p)$. Because $\lambda_{\max}(\widehat{\Psi})= \norm{\widehat{\Psi}}_\infty$, this implies \begin{align*} &\Pr\left(\lambda_{\max}(\widehat{\Psi})^2\ge \norm{\Psi}_\infty^2+2 \norm{\Psi}_\infty^2\sqrt{\frac{\tau}{n}}+2 \norm{\Psi}_\infty^2\frac{\tau}{n}\right)\\&\le \sum_{k=1}^p\Pr\left(\widehat{\Psi}_{kk}^2\ge \Sigma_{kk}+2\Sigma_{kk}\sqrt{\frac{\tau}{n}}+2\Sigma_{kk}\frac{\tau}{n}\right)\le p\exp(-\tau)\to 0.\end{align*} Therefore, $\lambda_{\max}(\widehat{\Psi})=O_{\textrm{P}}\left(\left(\norm{\Psi}_\infty^2+2 \norm{\Psi}_\infty^2\sqrt{\tau/n}+2 \norm{\Psi}_\infty^2(\tau/n)\right)^{1/2}\right)$, which is an $O_{\textrm{P}}(1)$ by conditions \eqref{sii} and \eqref{sv} in Lemma \ref{sufficient}. Next, we prove $1/\lambda_{\min}(\widehat{\Psi})=O_{\textrm{P}}(1)$. By Lemma 1 in \cite{laurent2000adaptive}, it holds that \begin{equation} \label{chi2} \Pr\left(\widehat{\Psi}_{kk}^2\le \Sigma_{kk}\left(1-2\sqrt{\frac{\tau}{n}}\right)\right)\le \exp(-\tau). \end{equation} We set $\tau =2\log(p)$. Because $1/\lambda_{\min}(\widehat{\Psi})=\max_{k=1,\dots,p}1/\widehat{\Psi}_{kk}$, we have $$\Pr\left(\frac{1}{\lambda_{\min}(\widehat{\Psi})^2}\ge \frac{1}{(1-2\sqrt{\frac{\tau}{n}})\min\limits_{k=1,\dots,p}\Sigma_{kk}}\right)\le \sum_{k=1}^p\Pr\left(\widehat{\Psi}_{kk}^2\le \Sigma_{kk}\left(1-2\sqrt{\frac{\tau}{n}}\right)\right)\le p\exp(-\tau)\to 0.$$ Remark that conditions \eqref{sii} and \eqref{sv} in Lemma \ref{sufficient} imply $1/((1-2\sqrt{\frac{\tau}{n}})\min\limits_{k=1,\dots,p}\Sigma_{kk})=O(1)$, which shows that $1/\lambda_{\min}(\widehat{\Psi})=O_{\textrm{P}}(1)$. Now, let us show \eqref{boundmax}. By \eqref{chi2}, we have $$\Pr\left(\lambda_{\max}(\widehat{\Psi})^2\le \left(1-2\sqrt{\frac{\tau}{n}}\right)\norm{\Psi}_\infty^2\right)\le p\exp(-\tau).$$ Choosing $\tau=2\log(p)$, we obtain \eqref{boundmax} by condition \eqref{sv}.
\end{Proof} \begin{lemma}\label{square}Under the assumptions of Lemma \ref{sufficient}, it holds that $$\lim\limits_{n\to\infty}\Pr\left(\norm{Xh}_2^2+\norm{f}_2^2 \ge n\frac{C_X^2}{2}\left(\norm{\widehat{\Psi}h}_2 + \frac{1}{\sqrt{n}}\norm{f}_2\right)^2\right)=1.$$ \end{lemma} \begin{Proof} Take $v\in\mathbb{R}^p$; by Theorem 1 in \cite{raskutti2010restricted}, we have \begin{align}\notag \frac{1}{\sqrt{n}}\norm{Xv}_2&\ge \frac{\sqrt{\lambda_{\min}(\Sigma)}}{4} \norm{v}_2-9\sqrt{U(\Sigma)}\sqrt{\frac{\log(p)}{n}}\norm{v}_1\\ \label{raskutti}& \ge \frac{\sqrt{\lambda_{\min}(\Sigma)}}{4\lambda_{\max}(\widehat{\Psi})} \norm{\widehat{\Psi}v}_2-9\frac{\sqrt{U(\Sigma)}}{\lambda_{\min}(\widehat{\Psi})}\sqrt{\frac{\log(p)}{n}}\norm{\widehat{\Psi}v}_1.\end{align} Next, because $(h,f)\in \mathbb{C}^k$, it holds that \begin{align*}\norm{\widehat{\Psi}h}_1&\le 4 \norm{\widehat{\Psi}h_T}_1+3\lambda \norm{f_S}_1\\ &\le 4 \sqrt{t}\norm{\widehat{\Psi}h}_2+3\lambda \sqrt{s}\norm{f}_2. \end{align*} This and \eqref{raskutti} yield \begin{align*}\frac{1}{\sqrt{n}}(\norm{Xh}_2+\norm{f}_2)&\ge \left(\frac{\sqrt{\lambda_{\min}(\Sigma)}}{4\lambda_{\max}(\widehat{\Psi})}-36\frac{\sqrt{U(\Sigma)}}{\lambda_{\min}(\widehat{\Psi})}\sqrt{\frac{t\log(p)}{n}} \right) \norm{\widehat{\Psi}h}_2\\ &\quad+\left(1-27\lambda\frac{\sqrt{U(\Sigma)}}{\lambda_{\min}(\widehat{\Psi})}\sqrt{s\log(p)}\right)\frac{1}{\sqrt{n}}\norm{f}_2. \end{align*} By conditions \eqref{sii}, \eqref{sv} and Lemma \ref{chisquare}, we have $\sqrt{U(\Sigma)t\log(p)}/(\sqrt{n}\lambda_{\min}(\widehat{\Psi}))=o_{\textrm{P}}(1)$ and $\lambda\sqrt{U(\Sigma)s\log(p)}/\lambda_{\min}(\widehat{\Psi})=o_{\textrm{P}}(1)$, which implies $$\lim\limits_{n\to\infty}\Pr\left(\norm{Xh}_2+\norm{f}_2 \ge \sqrt{n}C_X\left(\norm{\widehat{\Psi}h}_2 + \frac{1}{\sqrt{n}}\norm{f}_2\right)\right)=1.$$ We conclude the proof using the inequality $(a+b)^2\le 2(a^2+b^2)$.
\end{Proof} We divide the set $\{1,\dots,p\}$ into subsets $T_1,\dots, T_q$ of size $t$, where $T_1=T$, $T_2$ contains the indices of the $t$ largest entries (in absolute value) of $h$ outside $T_1$, $T_3$ the indices of the next $t$ largest entries, and so on. Let $s'\ge s$. In a similar manner, we split the set $\{1,\dots,n\}$ into $S_1,\dots,S_r$, where $S_1=S$, $S_2$ contains the indices of the $s'$ largest entries (in absolute value) of $f$ outside $S_1$, $S_3$ the indices of the next $s'$ largest entries, and so on. We have \begin{align} \notag\frac{1}{\sqrt{n}}|\left<Xh,f\right>|&\le \sum_{i,j}\frac{1}{\sqrt{n}}|\left<X_{S_iT_j}h_{T_j},f_{S_i}\right>|\\ \label{bounding}&\le \frac{1}{\sqrt{n}}\left(\max_{i,j}\norm{X_{S_iT_j}}_{\text{op}}\right) \sum_{i=1}^q\norm{h_{T_i}}_2 \sum_{i=1}^r\norm{f_{S_i}}_2. \end{align} Let us show the following lemmas. \begin{lemma}\label{wpa}Under the assumptions of Lemma \ref{sufficient}, it holds that $$ \Pr\left(\max_{i,j} \frac{1}{\sqrt{n}}\norm{X_{S_iT_j}}_\text{op}\le\sqrt{\lambda_{\max}(\Sigma)}\left(\sqrt{\frac{t}{n}}+ \sqrt{\frac{s'}{n}}+\tau\right)\right)\to 1$$ for any $\tau >0$. \end{lemma} \begin{Proof} Remark that the rows of $X_{S_iT_j}$ have a $\mathcal{N}(0,\Sigma_{T_jT_j})$ distribution. Therefore, the entries of $X_{S_iT_j}\Sigma_{T_jT_j}^{-1/2}$ are i.i.d. $\mathcal{N}(0,1)$. Let $\tau >0$.
Applying Corollary 5.35 in \cite{vershynin2010introduction} to the matrix $X_{S_iT_j}\Sigma_{T_jT_j}^{-1/2}$, we obtain that, with probability greater than $1-2\exp(-\tau^2n/2)$, $\norm{X_{S_iT_j}\Sigma_{T_jT_j}^{-\frac12}}_\text{op}\le\sqrt{t}+ \sqrt{s'}+\tau\sqrt{n}$, which implies \begin{equation}\label{opbound} \Pr\left(\norm{X_{S_iT_j}}_\text{op}\le\norm{\Sigma_{T_jT_j}^{\frac12}}_{\text{op}}\left(\sqrt{t}+ \sqrt{s'}+\tau\sqrt{n}\right)\right)\ge 1-2\exp(-\tau^2n/2). \end{equation} Taking the union bound over all $i$ and $j$, we have $$ \Pr\left(\max_{i,j} \norm{X_{S_iT_j}}_\text{op}\le\max\limits_{j}\norm{\Sigma_{T_jT_j}^{\frac12}}_{\text{op}}\left(\sqrt{t}+ \sqrt{s'}+\tau\sqrt{n}\right)\right)\ge 1-2\binom{p}{t}\binom{n}{s'}\exp(-\tau^2n/2).$$ By condition \eqref{sv}, we have $\binom{p}{t}\le \left(\frac{ep}{t}\right)^t\le e^{t\left(\log\left(\frac{p}{t}\right)+1\right)}=o(e^n)$ and $\binom{n}{s'}\le \left(\frac{en}{s'}\right)^{s'}\le e^{s'\left(\log\left(\frac{n}{s'}\right)+1\right)}=o(e^n).$ Therefore, it holds that \begin{equation}\label{limitone}\Pr\left(\max_{i,j} \norm{X_{S_iT_j}}_\text{op}\le\max\limits_{j}\norm{\Sigma_{T_jT_j}^{\frac12}}_{\text{op}}\left(\sqrt{t}+ \sqrt{s'}+\tau\sqrt{n}\right)\right)\to 1.\end{equation} Next, note that $\Sigma_{T_jT_j}$ is a principal submatrix of $\Sigma$, so that $$\norm{\Sigma_{T_jT_j}^{\frac12}}_{\text{op}}^2=\max\limits_{v\in\mathbb{R}^{t},\norm{v}_2=1}v^\top\Sigma_{T_jT_j}v \le \lambda_{\max}(\Sigma). $$ This and \eqref{limitone} conclude the proof.
\end{Proof} \begin{lemma}It holds that $$ \sum_{i=1}^q \norm{h_{T_i}}_2\le 5\frac{\norm{\widehat{\Psi}h}_2 } {\lambda_{\min}(\widehat{\Psi})}+3\frac{\lambda}{\lambda_{\min}(\widehat{\Psi})}\sqrt{\frac{s'}{t}} \norm{f}_2\text{ and }\sum_{i=1}^r \norm{f_{S_i}}_2 \le 5\norm{f}_2 +\frac{3}{\lambda}\sqrt{\frac{t}{s'}}\norm{\widehat{\Psi}h}_2.$$ \end{lemma} \begin{Proof} We have $\sum_{i=3}^q \norm{h_{T_i}}_2\le \sum_{i=3}^q \sqrt{t}\norm{h_{T_i}}_\infty \le\sum_{i=3}^q \norm{h_{T_{i-1}}}_1/\sqrt{t} \le \norm{h_{T^c}}_1/\sqrt{t}$. Because $(h,f)\in \mathbb{C}^k$, it holds that $\norm{\widehat{\Psi}h_{T^c}}_1\le 3\sqrt{t}\norm{\widehat{\Psi}h}_2+ 3\lambda\sqrt{s}\norm{f}_2$. This yields \begin{align*} \sum_{i=1}^q \norm{h_{T_i}}_2&\le 2\norm{h}_2 +\sum_{i=3}^q \norm{h_{T_i}}_2\\ &\le 2\norm{h}_2 +\frac{\norm{h_{T^c}}_1}{\sqrt{t}}\\ &\le 2\norm{h}_2 +\frac{\norm{\widehat{\Psi}h_{T^c}}_1}{\lambda_{\min}(\widehat{\Psi})\sqrt{t}}\\ &\le 2\norm{h}_2 +\frac{3}{\lambda_{\min}(\widehat{\Psi})\sqrt{t}}\left(\sqrt{t}\norm{\widehat{\Psi}h}_2 + \lambda \sqrt{s}\norm{f}_2\right)\\ &\le 5\frac{\norm{\widehat{\Psi}h}_2 } {\lambda_{\min}(\widehat{\Psi})}+3\frac{\lambda}{\lambda_{\min}(\widehat{\Psi})}\sqrt{\frac{s'}{t}} \norm{f}_2. \end{align*} Similarly, we have $\sum_{i=3}^r \norm{f_{S_i}}_2\le \norm{f_{S^c}}_1/\sqrt{s'}$ and $\norm{f_{S^c}}_1\le (3/\lambda)\sqrt{t}\norm{\widehat{\Psi}h}_2+ 3\sqrt{s}\norm{f}_2$, which implies \begin{align*} \sum_{i=1}^r \norm{f_{S_i}}_2&\le 2\norm{f}_2 +\sum_{i=3}^r \norm{f_{S_i}}_2\\ &\le 2\norm{f}_2 +\frac{\norm{f_{S^c}}_1}{\sqrt{s'}}\\ &\le 2\norm{f}_2 +\frac{3}{\sqrt{s'}}\left(\frac{\sqrt{t}}{\lambda}\norm{\widehat{\Psi}h}_2+ \sqrt{s}\norm{f}_2\right)\\ &\le 5\norm{f}_2 +\frac{3}{\lambda}\sqrt{\frac{t}{s'}}\norm{\widehat{\Psi}h}_2.
\end{align*} \end{Proof} \begin{lemma}\label{scalar}Under the assumptions of Lemma \ref{sufficient}, it holds that $$ \Pr\left(|\left<Xh,f\right>|\le n\frac{C_X^2}{8}\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2\right)\to 1.$$ \end{lemma} \begin{Proof} For $\tau >0$, let us work on the event $$ \max_{i,j} \frac{1}{\sqrt{n}}\norm{X_{S_iT_j}}_\text{op}\le\sqrt{\lambda_{\max}(\Sigma)}\left(\sqrt{\frac{t}{n}}+ \sqrt{\frac{s'}{n}}+\tau\right),$$ whose probability goes to $1$ with $n$ according to Lemma \ref{wpa}. By \eqref{bounding}, $n^{-1/2}|\left<Xh,f\right>|$ is upper bounded by \begin{align*} &\sqrt{\lambda_{\max}(\Sigma)}\left(\sqrt{\frac{t}{n}}+ \sqrt{\frac{s'}{n}}+\tau\right)\left(5\frac{\norm{\widehat{\Psi}h}_2 } {\lambda_{\min}(\widehat{\Psi})}+3\frac{\lambda}{\lambda_{\min}(\widehat{\Psi})}\sqrt{\frac{s'}{t}} \norm{f}_2 \right)\left(5\norm{f}_2 +\frac{3}{\lambda}\sqrt{\frac{t}{s'}}\norm{\widehat{\Psi}h}_2\right)\\ &\le 25 \sqrt{n}\frac{\sqrt{\lambda_{\max}(\Sigma)}}{\lambda_{\min}(\widehat{\Psi})}\left(\sqrt{\frac{t}{n}}+ \sqrt{\frac{s'}{n}}+\tau\right) \max\left(1, \lambda\sqrt{\frac{s'}{t}},\frac{1}{\sqrt{n}\lambda}\sqrt{\frac{t}{s'}}\right)\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2. \end{align*} Take $s'=\max(s,\log(p) t)$. By condition \eqref{sv} and Lemma \ref{chisquare}, we have $$ \frac{1}{\lambda_{\min}(\widehat{\Psi})}\max\left(\sqrt{\frac{t}{n}},\sqrt{\frac{s'}{n}}, \lambda\sqrt{\frac{s'}{t}},\frac{1}{\sqrt{n}\lambda}\sqrt{\frac{t}{s'}}\right) =o_{\textrm{P}}(1).$$ Therefore, choosing $\tau$ sufficiently small, we obtain the result. \end{Proof} \noindent Let us now conclude the proof of Lemma \ref{sufficient}.
It holds that $$\norm{Xh+f}_2^2= \norm{Xh}_2^2+\norm{f}_2^2+ 2\left<Xh,f\right>.$$ Therefore, by Lemmas \ref{square} and \ref{scalar}, we have $$\lim\limits_{n\to\infty}\Pr\left( \frac{1}{n}\norm{Xh+f}_2^2\ge \frac{C_X^2}{4}\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2\right)=1,$$ which yields the result with $\kappa_*^k=C_X/2$. \subsection{Proof of Theorem \ref{thcv}} We only prove the result for $k=0$, the proof for $k\in\{1,\dots,K\}$ being similar. Throughout this proof, we work on the event $$\mathcal{E}=\left\{\kappa^0>\kappa_*^0\right\}\cap \left\{\lambda_{\beta}^0\ge 2\sqrt{n}\sup\limits_{j=1,\dots,p}\frac{|(X^\top\xi^0)_j|}{\norm{\xi^0}_2\widehat{\Psi}_{jj}}\right\}\cap\left\{\lambda_{\gamma}^0\ge 2\sqrt{n}\frac{\norm{\xi^0}_\infty}{\norm{\xi^0}_2}\right\}\cap \left\{\kappa_*^0>2\frac{M^0}{n}\right\},$$ whose probability goes to $1$ because of Assumption \ref{ascv}. Let us define $h=\widehat{\beta}^0 -\beta^0$ and $f=\widehat{\gamma}^0-\gamma^0$. Now, remark that \begin{align} \notag \norm{\widehat{\Psi}\beta^0}_1- \norm{\widehat{\Psi}\widehat{\beta}^0}_1&= \norm{\widehat{\Psi}\beta^0}_1-\norm{\widehat{\Psi}\beta^0+\widehat{\Psi}h}_1\\ \notag&= \norm{\widehat{\Psi}\beta^0}_1-\norm{\widehat{\Psi}\beta^0+\widehat{\Psi}h_{T}}_1-\norm{\widehat{\Psi}h_{T^c}}_1\\ \label{boundgamma} &\le \norm{\widehat{\Psi}h_{T}}_1-\norm{\widehat{\Psi}h_{T^c}}_1. \end{align} We have an analogous bound for $\gamma^0$: \begin{equation} \norm{\gamma^0}_1- \norm{\widehat{\gamma}^0}_1 \le \left|\left|f_{S}\right|\right|_1-\left|\left|f_{S^c}\right|\right|_1.
\label{boundalpha} \end{equation} The following holds: \begin{align}\notag &(Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} - (Q^0(\beta^0,\gamma^0))^{1/2}\\& \notag \le \frac{\lambda_{\beta}^0}{n}(\norm{\widehat{\Psi}\beta^0}_1-\norm{\widehat{\Psi}\widehat{\beta}^0}_1) + \frac{\lambda_{\gamma}^0}{n}(\norm{\gamma^0}_1-\norm{\widehat{\gamma}^0}_1)\\ \label{lbound} &\le \frac{\lambda_{\beta}^0}{n}( \norm{\widehat{\Psi}h_{T}}_1-\norm{\widehat{\Psi}h_{T^c}}_1) + \frac{\lambda_{\gamma}^0}{n}\left(\left|\left|f_{S}\right|\right|_1-\left|\left|f_{S^c}\right|\right|_1\right). \end{align} By convexity, if $Q^0(\beta^0,\gamma^0)\ne 0$, it holds that \begin{align}\notag(Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} - (Q^0(\beta^0,\gamma^0))^{1/2}&\ge -\frac{1}{nQ^0(\beta^0,\gamma^0)^{1/2}}\left(\left<X^\top\xi^0,h \right>+\left<\xi^0,f \right>\right)\\ \label{rbound}&\ge -\frac{1}{2n}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h}_1+ \lambda_{\gamma}^0 \norm{f}_1\right). \end{align} This last inequality is also straightforwardly true when $Q^0(\beta^0,\gamma^0)=0$. Combining \eqref{lbound} and \eqref{rbound}, we get $$ -\frac{1}{2n}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h}_1+ \lambda_{\gamma}^0 \left|\left|f\right|\right|_1\right)\le \frac{\lambda_{\beta}^0}{n}( \norm{\widehat{\Psi}h_{T}}_1-\norm{\widehat{\Psi}h_{T^c}}_1) + \frac{\lambda_{\gamma}^0}{n}\left(\left|\left|f_{S}\right|\right|_1-\left|\left|f_{S^c}\right|\right|_1\right),$$ which implies that $(h,f) \in \mathbb{C}^0$.
Next, we have \begin{align*}Q^0(\widehat{\beta}^0, \widehat{\gamma}^0)- Q^0(\beta^0,\gamma^0)&=\frac{1}{n}\norm{Xh + f}_2^2-\frac{2}{n}\left<\xi^0,Xh+f\right>\\ &=\frac{1}{n}\norm{Xh + f}_2^2-\frac{2}{n}\left(\left<X^\top\xi^0,h \right>+\left<\xi^0,f \right>\right)\\ &\ge \frac{1}{n}\norm{Xh + f}_2^2-\frac{\norm{\xi^0}_2}{n^{3/2}}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h}_1+ \lambda_{\gamma}^0 \left|\left|f\right|\right|_1\right)\\ &\ge\frac{1}{n}\norm{Xh + f}_2^2-\frac{4\norm{\xi^0}_2}{n^{3/2}}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right).\end{align*} Because $(h,f)\in\mathbb{C}^0$, \eqref{rbound} implies $(Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} - (Q^0(\beta^0,\gamma^0))^{1/2}\ge -2n^{-1}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right)$. Combined with \eqref{lbound}, it leads to $$\left|(Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} - (Q^0(\beta^0,\gamma^0))^{1/2} \right|\le \frac2n \left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right).$$ This yields \begin{align*} &Q^0(\widehat{\beta}^0, \widehat{\gamma}^0)- Q^0(\beta^0,\gamma^0)\\&\le \left((Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} - (Q^0(\beta^0,\gamma^0))^{1/2}\right) \left((Q^0(\widehat{\beta}^0, \widehat{\gamma}^0))^{1/2} + (Q^0(\beta^0,\gamma^0))^{1/2}\right)\\ &\le \frac2n\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right) \left(2(Q^0(\beta^0,\gamma^0))^{1/2} +\frac2n\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right) \right). \end{align*} Then, this implies \begin{align*}&\frac{1}{n}\norm{Xh + f}_2^2\\ &\quad \le Q^0(\widehat{\beta}^0, \widehat{\gamma}^0)- Q^0(\beta^0,\gamma^0)+ \frac{4\norm{\xi^0}_2}{n^{3/2}}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0
\left|\left|f_S\right|\right|_1\right)\\ &\quad \le \frac4n\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right)^2 +\frac{8\norm{\xi^0}_2}{n^{3/2}}\left(\lambda_{\beta}^0 \norm{\widehat{\Psi}h_T}_1+ \lambda_{\gamma}^0 \left|\left|f_S\right|\right|_1\right)\\ &\quad \le \frac4n\left(\lambda_{\beta}^0 \sqrt{\norm{\beta^0}_0}\norm{\widehat{\Psi}h_T}_2+ \lambda_{\gamma}^0n\sqrt{\epsilon^0}\frac{1}{\sqrt{n}} \left|\left|f_S\right|\right|_2\right)^2 +\frac{8\norm{\xi^0}_2}{n^{3/2}}\left(\lambda_{\beta}^0 \sqrt{\norm{\beta^0}_0}\norm{\widehat{\Psi}h_T}_2+ \lambda_{\gamma}^0n\sqrt{\epsilon^0}\frac{1}{\sqrt{n}} \left|\left|f_S\right|\right|_2\right)\\ &\quad \le \left(\frac{2M^0}{n}\right)^2 \left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2 + 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right). \end{align*} Because $(h,f)\in\mathbb{C}^0$, this implies $$(\kappa_*^0)^2\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2\le\left(\frac{2M^0}{n}\right)^2 \left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)^2 + 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right).$$ If $\norm{\widehat{\Psi}h}_2+\norm{f}_2\ne 0$, we obtain $$\left(\norm{\widehat{\Psi}h}_2+\frac{1}{\sqrt{n}}\norm{f}_2\right)\le\left((\kappa_*^0)^2-(2M^0/n)^2\right)^{-1} 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}.$$ Therefore, we have \begin{align*} \norm{h}_1+\frac{\norm{f}_1}{\sqrt{n}}&\le \left( \sqrt{\norm{\beta^0}_0} + \sqrt{\epsilon^0} \right)(\lambda_{\max}(\widehat{\Psi})\vee 1)\left((\kappa_*^0)^2-(2M^0/n)^2\right)^{-1} 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}\\ \norm{h}_2+\frac{\norm{f}_2}{\sqrt{n}}&\le (\lambda_{\max}(\widehat{\Psi})\vee 1)\left((\kappa_*^0)^2-(2M^0/n)^2\right)^{-1} 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}\\ \norm{Xh+f}_2 &\le (\norm{X}_2\vee \sqrt{n}) (\lambda_{\max}(\widehat{\Psi})\vee
1)\left((\kappa_*^0)^2-(2M^0/n)^2\right)^{-1} 8M^0\frac{\norm{\xi^0}_2}{n^{3/2}}. \end{align*} To conclude, note that $\norm{\xi^0}_2= O_{\textrm{P}}(\sqrt{n})$ and $\Pr(\mathcal{E})\to 1$. \subsection{Proof of Theorem \ref{than}} We work on the event $$\mathcal{E} =\left\{\lambda_{\beta}^0\ge 2\sqrt{n}\sup\limits_{j=1,\dots,p}\frac{|(X^\top\xi^0)_j|}{\norm{\xi^0}_2\widehat{\Psi}_{jj}}\right\}\cap\left\{\lambda_{\gamma}^0\ge 2\sqrt{n}\frac{\norm{\xi^0}_\infty}{\norm{\xi^0}_2}\right\}$$ which has probability approaching 1 by \eqref{ascv}\label{cvi}. Hence, every convergence in probability or in distribution statement established on this event also holds unconditionally. We use the notation $E=(\xi_1,\dots,\xi_n)^\top$ and $\widehat{E}=(\widehat{\xi}_1,\dots,\widehat{\xi}_n)^\top$. \subsubsection{Proof of $\boldsymbol{\widehat{\Sigma}_\xi\xrightarrow{\Pr}\Sigma_\xi}$} We have \begin{align} \notag \frac{1}{n}\widehat{E}^\top\widehat{E}&=\frac{1}{n}(\widehat{E}-E+E)^\top(\widehat{E}-E+E)\\ \label{Edecomposed}&= \frac{1}{n}\left[(\widehat{E}-E)^\top E +E^\top(\widehat{E}-E)+ (\widehat{E}-E)^\top(\widehat{E}-E)+E^\top E\right]. \end{align} By Assumption \ref{asrate} and Theorem \ref{thcv}, it holds that \begin{equation} \norm{\widehat{E}-E}_2^2=\sum_{k=1}^K\norm{X(\widehat{\beta}^k-\beta^k)+\widehat{\gamma}^k-\gamma^k}_2^2 =O_\text{P}\left(\sum_{k=1}^K\frac{(M^k)^2}{n}\right)=o_\text{P}\left(\sqrt{n}\right).\label{cvE} \end{equation} Next, we have \begin{align*} \left|( (\widehat{E}-E)^\top E)_{kk'}\right|&= \left|\left( X(\widehat{\beta}^k-\beta^k)+\widehat{\gamma}^k-\gamma^k\right)^\top\xi^{k'}\right|\\ &\le \left|(\widehat{\beta}^k-\beta^k)^\top X^\top\xi^{k'}\right| + \left|(\widehat{\gamma}^k-\gamma^k)^\top\xi^{k'}\right|\\ &\le \norm{X^\top\xi^{k'}}_\infty \norm{\widehat{\beta}^k-\beta^k}_1+ \norm{\xi^{k'}}_\infty \norm{\widehat{\gamma}^k-\gamma^k}_1\\ &\le \lambda^{k'}_\beta \frac{\norm{\xi^{k'}}_2}{2n} \norm{X}_{2,\infty}
\norm{\widehat{\beta}^k-\beta^k}_1+ \lambda^{k'}_\gamma \frac{\norm{\xi^{k'}}_2}{2\sqrt{n}} \norm{\widehat{\gamma}^k-\gamma^k}_1 \end{align*} because we work on the event $\mathcal{E}$. Therefore, by Theorem \ref{thcv}, it holds that $$(\widehat{E}-E)^\top E=O_{\text{P}}\left(\sqrt{\bar \mu} \left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right)\frac{\bar M}{n}\left(\bar\lambda_\beta\frac{\norm{X}_{2,\infty}}{\sqrt{n}} + \bar\lambda_\gamma \sqrt{n}\right)\right)=o_{\text{P}}\left(\sqrt{n}\right),$$ and similarly $E^\top(\widehat{E}-E)=o_{\textrm{P}}(\sqrt{n})$. Using \eqref{Edecomposed}, we get \begin{equation}\label{Eestimate} \frac1n\widehat{E}^\top\widehat{E}- \frac1nE^\top E=o_{\textrm{P}}\left(\frac{1}{\sqrt{n}}\right). \end{equation} By the law of large numbers and Assumption \ref{asan}, we have $n^{-1}E^\top E \xrightarrow{\Pr}\Sigma_\xi$, which implies $\widehat{\Sigma}_\xi\xrightarrow{\Pr}\Sigma_\xi$ by \eqref{Eestimate}. \subsubsection{Proof of asymptotic normality} We have \begin{align} \notag \frac{1}{\sqrt{n}}\widehat{E}^\top\widehat{\xi}^0&=\frac{1}{\sqrt{n}}(\widehat{E}-E+E)^\top(\widehat{\xi}^0-\xi^0+\xi^0)\\ \label{xidecomposed}&= \frac{1}{\sqrt{n}}\left[(\widehat{E}-E)^\top \xi^0 +E^\top(\widehat{\xi}^0-\xi^0)+ (\widehat{E}-E)^\top(\widehat{\xi}^0-\xi^0)+E^\top \xi^0\right]. \end{align} By Assumption \ref{asrate} and Theorem \ref{thcv}, it holds that \begin{equation}\norm{\widehat{\xi}^0-\xi^0}_2= \norm{X(\widehat{\beta}^0-\beta^0)+\widehat{\gamma}^0-\gamma^0}_2=O_\text{P}\left(\frac{M^0}{\sqrt{n}}\right)=o_{\text{P}}(n^{\frac14}),\label{cvxi}\end{equation} which implies $|(\widehat{E}-E)^\top(\widehat{\xi}^0-\xi^0)|=o_{\textrm{P}}(\sqrt{n})$ by \eqref{cvE} and the Cauchy--Schwarz inequality.
Next, we have \begin{align*} \left|( (\widehat{\xi}^0-\xi^0)^\top E)_{k}\right|&= \left|\left( X(\widehat{\beta}^0-\beta^0)+\widehat{\gamma}^0-\gamma^0\right)^\top\xi^{k}\right|\\ &\le \left|(\widehat{\beta}^0-\beta^0)^\top X^\top\xi^{k}\right| + \left|(\widehat{\gamma}^0-\gamma^0)^\top\xi^{k}\right|\\ &\le \norm{X^\top\xi^{k}}_\infty \norm{\widehat{\beta}^0-\beta^0}_1+ \norm{\xi^{k}}_\infty \norm{\widehat{\gamma}^0-\gamma^0}_1\\ &\le \lambda^{k}_\beta \frac{\norm{\xi^{k}}_2}{2n}\norm{X}_{2,\infty} \norm{\widehat{\beta}^0-\beta^0}_1+ \lambda^{k}_\gamma \frac{\norm{\xi^{k}}_2}{2\sqrt{n}} \norm{\widehat{\gamma}^0-\gamma^0}_1. \end{align*} Therefore, by Theorem \ref{thcv}, it holds that $$(\widehat{\xi}^0-\xi^0)^\top E=O_{\text{P}}\left(\sqrt{\bar \mu}\left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right) \frac{\bar M}{n}\left(\bar\lambda_\beta\frac{\norm{X}_{2,\infty}}{\sqrt{n}} + \bar\lambda_\gamma \sqrt{n}\right)\right)=o_{\text{P}}\left(\sqrt{n}\right).$$ Similarly, it holds that $$(\widehat{E}-E)^\top \xi^0=O_{\text{P}}\left(\sqrt{\bar \mu} \left(\frac{\norm{X}_{2,\infty}}{\sqrt{n}}\vee 1\right)\frac{\bar M}{n}\left(\bar\lambda_\beta\frac{\norm{X}_{2,\infty}}{\sqrt{n}} + \bar\lambda_\gamma \sqrt{n}\right)\right)=o_{\text{P}}\left(\sqrt{n}\right).$$ Using \eqref{xidecomposed}, we get \begin{equation}\label{xiestimate} \frac{1}{\sqrt{n}}\widehat{E}^\top\widehat{\xi}^0- \frac{1}{\sqrt{n}}E^\top \xi^0=o_{\textrm{P}}\left(1\right). \end{equation} By \eqref{Eestimate}, the law of large numbers and the continuous mapping theorem, it holds that $n(\widehat{E}^\top\widehat{E})^{-1} \xrightarrow{\Pr} \Sigma_\xi^{-1}$ and $n(E^\top E )^{-1} \xrightarrow{\Pr} \Sigma_\xi^{-1}$, which implies $n(\widehat{E}^\top\widehat{E})^{-1} -n(E^\top E)^{-1}= o_{\textrm{P}}(1)$.
This and \eqref{xiestimate} yield \begin{align*} \sqrt{n}\widehat{\alpha}&= \left(\frac1n\widehat{E}^\top\widehat{E}\right)^{-1} \frac{1}{\sqrt{n}}\widehat{E}^\top\widehat{\xi}^0 \\ &= \left(\frac1n\widehat{E}^\top\widehat{E}\right)^{-1}\left( \frac{1}{\sqrt{n}}\widehat{E}^\top\widehat{\xi}^0-\frac{1}{\sqrt{n}}E^\top\xi^0\right)+\left( \left(\frac1n\widehat{E}^\top\widehat{E}\right)^{-1}-\left(\frac1n E^\top E\right)^{-1} \right)\frac{1}{\sqrt{n}}E^\top \xi^0\\ &\quad+\left(\frac1n E^\top E\right)^{-1} \frac{1}{\sqrt{n}}E^\top \xi^0\\ &=o_{\textrm{P}}(1) + \sqrt{n} \alpha+\left(\frac1n E^\top E\right)^{-1} \frac{1}{\sqrt{n}}E^\top u. \end{align*} We conclude using the central limit theorem and Slutsky's theorem. \subsubsection{Proof of $\boldsymbol{\widehat{\sigma}\xrightarrow{\Pr}\sigma}$} Let $\widehat{u}_i= \widehat{\xi}^0_i- \sum_{k=1}^K\widehat{\alpha}_k\widehat{\xi}_i^k$ and $\widehat{u}=(\widehat{u}_1,\dots, \widehat{u}_n)^\top$. We have \begin{align*} \widehat{u}-u&= \widehat{\xi}^0-\xi^0- \sum_{k=1}^K(\widehat{\alpha}_k\widehat{\xi}^k-\alpha_k\xi^k)\\ &= \widehat{\xi}^0-\xi^0- \sum_{k=1}^K\left[(\widehat{\alpha}_k-\alpha_k)(\widehat{\xi}^k-\xi^k) + (\widehat{\alpha}_k-\alpha_k)\xi^k + \alpha_k (\widehat{\xi}^k-\xi^k)\right]= o_{\textrm{P}}(n^{\frac14}), \end{align*} by \eqref{cvE}, \eqref{cvxi} and the fact that $\widehat{\alpha}-\alpha=o_{\textrm{P}}(1)$. This implies that \begin{align*}\widehat{\sigma}^2&=\frac1n\widehat{u}^\top \widehat{u}\\ &=\frac1n(\widehat{u}-u)^\top u+\frac1n u^\top(\widehat{u}-u)+\frac1n(\widehat{u}-u)^\top(\widehat{u}-u)+ \frac1n u^\top u\\ &= \frac1n u^\top u +o_{\textrm{P}}(1)\xrightarrow{\Pr} \sigma^2\end{align*} by the law of large numbers. \bibliographystyle{plainnat}
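As a numerical sanity check of the two-stage plug-in estimators analyzed above, $\widehat{\alpha}=(\widehat{E}^\top\widehat{E})^{-1}\widehat{E}^\top\widehat{\xi}^0$ and $\widehat{\sigma}^2=n^{-1}\widehat{u}^\top\widehat{u}$ can be computed directly from the first-stage residual estimates. The following minimal sketch (function name and interface are our own, not the authors' code) mirrors the algebra of the proof:

```python
import numpy as np

def second_stage(E_hat, xi0_hat):
    """Plug-in second-stage estimators from first-stage residuals.

    E_hat:   (n, K) matrix whose columns are the estimated residuals
             xi_hat^1, ..., xi_hat^K.
    xi0_hat: (n,) vector of estimated residuals xi_hat^0.
    Returns (alpha_hat, sigma2_hat).
    """
    n = E_hat.shape[0]
    # alpha_hat = (E_hat' E_hat)^{-1} E_hat' xi0_hat, via least squares
    alpha_hat, *_ = np.linalg.lstsq(E_hat, xi0_hat, rcond=None)
    # u_hat = xi0_hat - E_hat alpha_hat and sigma2_hat = ||u_hat||_2^2 / n
    u_hat = xi0_hat - E_hat @ alpha_hat
    return alpha_hat, u_hat @ u_hat / n
```

When $\widehat{\xi}^0$ lies exactly in the column span of $\widehat{E}$, the sketch recovers the coefficients exactly and returns a zero variance estimate, matching the identities used in the proof.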
\section{Introduction} In traditional ad-hoc retrieval, queries and documents are represented by variants of bag-of-words representations. This leads to the so-called vocabulary mismatch problem: when a query contains words that do not exactly match words in a relevant document, the search engine may fail to retrieve this document. Query expansion and document expansion, the methods of adding additional terms to the original query or document, are two popular solutions to alleviate the vocabulary mismatch problem. Document expansion has been shown to be particularly effective for short-text retrieval and language-model based retrieval \citep{DE_for_LM, efron12}. Most of the existing work on document expansion is unsupervised: using information from the corpus to augment document representations, e.g., retrieval based \citep{efron12} and clustering based \citep{croft04, DE_for_LM}, or using external information to augment document representations \citep{DE_with_wordnet, DE_with_external_collection}. Recently, \citet{nogueira2019DE} proposed a new approach to document expansion based on a popular generative sequence-to-sequence (Seq2Seq) model in NLP, the transformer \citep{wolf2020huggingfaces}. It leverages supervision to train the model to predict expansion terms conditional on each document. That work showed significant improvements on passage (short-document) datasets when the model is trained in-domain. In this paper, we follow this line of supervised neural document expansion and explore its performance on a standard IR benchmark dataset. Our main contributions are: 1. Adapting the method to unlabeled datasets by exploring transfer learning and weak-supervision approaches. 2. Adapting the method to traditional IR datasets, where a large number of long documents are present.
\begin{figure}[htb] \begin{minipage}{0.5\textwidth} \begin{tikzpicture} \node (img) {\includegraphics[scale=0.38]{doc_lens_dist.png} }; \node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-0.7cm,font=\color{black}] {document counts}; \end{tikzpicture} \end{minipage}% \begin{minipage}{0.5\textwidth} \begin{tikzpicture} \node (img) {\includegraphics[scale=0.38]{answerPassage_dist.png}}; \end{tikzpicture} \end{minipage}% \caption{ Left: document length distribution in Robust04; Right: distribution of relevant passage (answer-containing passage) indices in MSMarco-documents } \label{fig:histograms} \end{figure} \section{Document retrieval with Seq2Seq models} \label{sec:DE_model} In this section, we discuss our approach to document expansion. \paragraph{\textbf{Train a passage-level document expansion model}} We follow the approach of \citet{nogueira2019DE}: we use data of format $(p, q)$, where $q$ is a question and $p$ is a passage \textit{relevant} to it. A Seq2Seq model is trained so that, conditional on input $p$, it learns to generate the ground-truth question $q$. The dataset we found most effective for training the document expansion model is the MSMarco passage-ranking dataset \citep{bajaj2018ms}. \paragraph{\textbf{Domain adaptation}} Since most IR datasets lack annotated queries, it is important to explore ways to adapt the document expansion model to out-of-domain unlabeled datasets. We explored two simple approaches: zero-shot transfer and weak supervision (retrieval-based pseudo annotation). In zero-shot transfer, we directly apply a model trained in one domain to another; in retrieval-based pseudo annotation, we issue out-of-domain queries to a target corpus and treat the top-$k$ returned documents as ``relevant''. \paragraph{\textbf{Adaptation to long documents}} Due to the memory overhead of transformer-based models, the maximal input length of this family of models is constrained to be short.
We trained our expansion models on passage-level datasets. However, at inference time, a standard IR dataset is usually a mixture of short and long documents, and expanding on an arbitrarily capped portion of a document may yield sub-optimal performance. Figure~\ref{fig:histograms}-Left shows the document length distribution of the Robust04 dataset. We see that most of the documents are around 1000 tokens long, which exceeds the typical input length transformer-based models can take \citep{long_text_retrieval}. To deal with this problem, we explored three strategies (see experiment results in Table \ref{tab:robust04}: ``Long document generation methods'' section): \begin{enumerate} \item \textbf{concatenation of paragraph generation (CONCAT in Table \ref{tab:robust04})}: Split each long document into overlapping passages, respecting natural sentence boundaries (with a sliding-window size of around 60 tokens, since this is roughly the median passage length in the MSMarco dataset used for training), and run expansion inference on each passage. All generated expansions are directly appended to the original documents. \item \textbf{first k sentences (FIRST-K in Table \ref{tab:robust04})}: Run the expansion model on the first $k$ whole sentences of each document. This strategy is based on our analysis of the MSMarco datasets. From the MSMarco-QA dataset, we can obtain $(q, p, a)$ triplets, so we know that passage $p$ contains an answer to question $q$. We treat this as a signal of question-passage relevance. We can then trace these passages back to the documents they belong to using the MSMarco-documents dataset. Comparing the ``Capped passage generation methods'' to the ``Long document generation methods'' in Table \ref{tab:robust04}, we see that the latter, and in particular the CONCAT and FIRST-K generation methods, are more effective.
\item \textbf{passage-importance indicator (PI in Table \ref{tab:robust04})}: Before seeing any questions, we predict which passage will be queried using an unsupervised predictor \citep{doc_homogeneity_measure}, and generate expansions exclusively from this passage. In our experiments, this method does not yield good performance, likely due to the low quality of the passage-importance predictor. \end{enumerate} \section{Experiments} To train the document expansion models, we selected datasets that are typically used for passage-ranking and question-answering tasks, i.e., MSMarco passage-ranking \citep{bajaj2018ms}, Natural Questions \citep{natural_questions}, and SQuAD \citep{squad_v2}. In addition, we also used a TREC short-document collection, microblogs \citep{Lin2014}. To evaluate the performance of document expansion on information retrieval, we additionally use the standard IR benchmark dataset Robust04 \citep{robust_04}. Our baseline retrieval methods, BM25 and Query Likelihood (QL), use the implementations from the open-source Anserini toolkit \citep{anserini}; for Seq2Seq models, we experimented with transformers (OpenNMT implementation \citep{klein-etal-2017-opennmt}) and pre-trained BART (Huggingface PyTorch implementation \citep{wolf2020huggingfaces}). \subsection{Retrieval performance on passage datasets} From Table \ref{tab:in_domain}, we see that retrieval-based weak supervision, while performing well as a training signal for neural retrieval models \citep{NRM_weak_supervision}, does not yield good document expansion models. Instead, the BART-based zero-shot transfer model is competitive with in-domain trained-from-scratch models. Having settled on the zero-shot transfer learning approach, we tried several fine-tuning strategies with the BART model (Table \ref{tab:train_strategy}), drawing inspiration from \citep{Yilmaz2019}.
We found that fine-tuning pre-trained BART with MSMarco-passages dataset and with a mixed MSMarco-passages and microblogs dataset produces the best document expansion models. Although less effective, our experiment suggests that other passage-level datasets such as Natural Questions and SQuAD, can also be used as sources to train document expansion models. \begin{center} \begin{tabular}{l|SSSSSS} \toprule \multirow{4}{*}{DE models} & \multicolumn{2}{c}{MSMarco-passages} & \multicolumn{2}{c}{Natural Questions} & \multicolumn{2}{c}{SQuAD-v2} \\ & {Trec-DL} & {Trec-DL} \\ & {(DEV)} & {(DEV)} \\ & {R@100} & {MAP} & {R@100} & {MAP} & {R@100} & {MAP} \\ \midrule {\small BM25} & {0.4531} & {0.3773} & {0.7619} & {0.3616} & {0.9737} & {0.7997} \\ & {(0.7619)} & {(\textbf{0.3616})} & {} & {} & {} & {} \\ \midrule {\small \textbf{In-domain trained}} & {0.4545} & {0.3872} & {\textbf{0.8671}} & {0.4450} & {0.9754} & {0.7915} \\ {\small \textbf{transformer}} & {(0.7127)} & {(0.2203)} & {} & {} & {} & {} \\ \midrule {\small \textbf{Weakly supervised}} \\ {\small \textbf{transformer}} & {NA} & {NA} & {0.7649} & {0.3608} & {0.9717} & {0.7913} \\ \midrule {\small \textbf{Zero-shot transfer}} \\ {\small (transformer)} & {NA} & {NA} & {0.7773} & {0.3879} & {0.9764} & {0.8056} \\ {\small (fine-tuning} & {\textbf{0.5297}} & {\textbf{0.465}} & {0.8302} & {\textbf{0.4501}} & {\textbf{0.9787}} & {\textbf{0.8121}} \\ {\small BART)} & {(\textbf{0.7949})} & {(0.2674)} & {} & {} & {} & {} \\ \bottomrule \end{tabular} \captionof{table}{In-domain trained and weakly-supervised document expansion model; for MSMarco-passages, we have two test sets: DEV and Trec-DL \citep{trec_dl_2019}} \label{tab:in_domain} \end{center} \begin{center} \begin{tabular}{l|SSSS|SS} \toprule \multirow{2}{*}{DE models} & \multicolumn{2}{c}{Natural Questions} & \multicolumn{2}{c|}{SQuAD-v2} & \multicolumn{2}{c}{Robust04} \\ & {R@100} & {MAP} & {R@100} & {MAP} & {R@100} & {MAP} \\ \midrule {\small BM25} & {0.7619} & 
{0.3616} & {0.9737} & {0.7997} & {0.4152} & {0.2553} \\ {\small MSMarco-passages} & {\textbf{0.8302}} & {\textbf{0.4501}} & {\textbf{0.9787}} & {0.8121} & {\textbf{0.4229}} & {0.2620} \\ {\small MSMarco-passage} \\ {$\rightarrow$ microblogs} & {0.7931} & {0.4} & {0.9757} & {0.7962} & {0.4206} & {0.2533} \\ {\small MSMarco-passages} \\ {$+$ microblogs} & {0.8239} & {0.4437} & {\textbf{0.9787}} & {\textbf{0.8133}} & {0.4212} & {\textbf{0.2630}} \\ \cmidrule{1-5} {\small Natural Questions} & {\textbf{0.9031}} & {\textbf{0.5271}} & {0.9782} & {0.8099} & {0.4190} & {0.2626} \\ {\small SQuAD-v2} & {0.8173} & {0.4228} & {\textbf{0.9798}} & {\textbf{0.8156}} & {0.4218} & {0.2616} \\ \bottomrule \end{tabular} \captionof{table}{Zero-shot transfer of passage-level DE model} \label{tab:train_strategy} \end{center} \subsection{Retrieval performance on Robust04} To test the performance of the passage-level trained document expansion model on a standard IR corpus, we fixed the passage-level model to the ``MSMarco-passage + microblog''-trained one. Then we explored the three expansion generation methods described in Section~\ref{sec:DE_model}. The results of applying the passage-level trained document expansion model to Robust04 can be found in Table \ref{tab:robust04}. \paragraph{\textbf{Traditional IR baselines}} In addition to tf-idf based BM25, we applied document expansion to two popular language-model based retrieval methods: query likelihood with Dirichlet smoothing (QLD) and query likelihood with Jelinek-Mercer smoothing (QLJM). We found that document expansion has good performance with QLD. We speculate that this is because the way our current document expansion model works is similar to the Dirichlet smoothing model, which models the process of adding unseen words to the original documents by drawing from a background distribution based on the entire corpus. Here, the document expansion model additionally samples words from the query distribution (conditional on each document).
Since document expansion does not significantly improve QLJM, we did not include it in the rest of our experiments. \paragraph{\textbf{Capped passage generation vs long document generation}} Comparing the ``Capped passage generation methods'' to the ``Long document generation methods'', we see that for both BM25 and QLD, the two long-document expansion methods, CONCAT and FIRST-K, have better performance. The fact that CONCAT has the best performance suggests that even if most queries target the head portion of a document (Figure \ref{fig:histograms}-Right), using information from the entire document may still benefit expansion generation. \paragraph{\textbf{Performance with pseudo-relevance feedback and BERT re-ranker}} Since document expansion is one of several techniques that can improve document retrieval performance, we also want to understand how it works when combined with other techniques. In our experiments, we first explored combining the best-performing document expansion model with RM3 \citep{rm3}, a popular query expansion method based on pseudo-relevance feedback. While RM3 alone significantly improves the baseline retrieval performance, DE** \footnote{DE** indicates concatenation of paragraph generation with the MSMarco+microblog trained passage model.} can still add an additional boost on top. We want to point out that, compared to RM3, which requires two rounds of retrieval at query time, the document expansion model is applied offline and does not add computational overhead at run time. BM25 with document and query expansion serves as a first-stage ranker in the ranking pipeline. In our last experiment, we test its end-to-end performance when combined with a second-stage neural ranker, BERT \citep{nogueira2019passage}. To evaluate the end-to-end result, we used the metrics $R@k$ and $P@k$ for small $k$, mimicking what the ranking system presents to a user (the top-$k$ ranked documents).
Our experiment results indicate that our document expansion models are complementary to query expansion as a first-stage ranker and can improve the end-to-end ranking performance when combined with a second-stage ranker. \begin{center} \begin{tabular}{l|SSSS} \toprule \multirow{3}{*}{Experiment category} & {Methods} & \multicolumn{2}{c}{Robust04} \\ & {} & {R@100} & {MAP} \\ \midrule {Traditional} & {\small BM25} & {0.4152} &{0.2553} \\ {IR baselines} & {\small QLD} & {0.4157} &{0.2502} \\ & {\small QLJM} & {0.3995} &{0.2287} \\ \midrule {Capped passage} & {\small BM25+passage-DE*} & {0.4212} & {\textbf{0.2630}} \\ {generation methods} & {\small QLD+passage-DE*} & {\textbf{0.4270}} & {0.2620} \\ & {\small QLJM+passage-DE*} & {0.4058} & {0.2350} \\ \midrule {Long document} & {\small BM25+DE* (CONCAT)} & {\textbf{0.4283}} & {\textbf{0.2631}} \\ {generation methods} & {\small BM25+DE* (FIRST-K)} & {0.4226} & {0.2625} \\ & {\small BM25+DE* (PI)} & {0.4212} & {0.2588} \\ \cmidrule{2-4} & {\small QLD+DE* (CONCAT)} & {\textbf{0.4290}} & {0.2615} \\ & {\small QLD+DE* (FIRST-K)} & {0.4272} & {\textbf{0.2625}} \\ & {\small QLD+DE* (PI)} & {0.4259} & {0.2577} \\ \midrule {DE+pseudo-relevance feedback} & {\small BM25+RM3} & {0.4517} & {0.2941} \\ & {\small BM25+RM3+DE**} & {\textbf{0.4641}} & {\textbf{0.3035}} \\ % \midrule \multirow{3}{*}{DE + BERT reranker}\\ & \multicolumn{3}{c}{End-to-end metrics} \\ & {R@10} & {P@10} & {P@5}\\ \cmidrule{1-4} {\small BM25 + BERT} & {0.2771} & {0.3731} & {0.4137}\\ {\small BM25 + RM3 + BERT} & {0.2608} & {0.3803} & {0.4048} \\ {\small BM25 + DE** + BERT} & {\textbf{0.2824}} & { 0.3767} & {0.4161} \\ {\small BM25 + RM3 + DE** + BERT} & {0.2726} & {\textbf{0.3944}} & {\textbf{0.4241}} \\ \bottomrule \end{tabular} \captionof{table}{Robust04 experiments (DE*: MSMarco-passage+microblog trained passage model; DE**: concatenation of paragraph generation, CONCAT, with DE*)} \label{tab:robust04} \end{center} \section{Conclusion} We showed that a 
document expansion model trained on passage-level datasets of (question, relevant passage) pairs can be directly applied to out-of-domain IR datasets to improve retrieval performance. We explored simple approaches to adapt the model to a standard IR dataset (Robust04), where a large number of long documents are present, and we showed that adapting the passage-level trained model to long documents further improves retrieval performance. However, our current simple adaptations to long documents do not significantly improve the model performance (see Table \ref{tab:robust04}, ``Long document generation methods''). We cannot conclude whether this is due to the nature of the relevant-passage distribution over long documents (i.e., they tend to be the first few passages of any document, according to Figure \ref{fig:histograms}-Right). Hence, it may be worth exploring model architectures that allow longer input sequences, for example, by switching to sparse attention layers \citep{beltagy2020longformer}.
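For concreteness, the sentence-respecting sliding-window split behind the CONCAT strategy of Section \ref{sec:DE_model} might be sketched as follows. This is our own illustrative simplification (function name, whitespace tokenization, and the roughly-50\%-overlap rule are assumptions, not the authors' implementation):

```python
def split_into_passages(sentences, window=60):
    """Pack whole sentences into passages of roughly `window` tokens,
    with each passage overlapping the previous one by about half."""
    passages, start = [], 0
    n = len(sentences)
    while start < n:
        # greedily add whole sentences until the token budget is reached
        end, tokens = start, 0
        while end < n and tokens < window:
            tokens += len(sentences[end].split())
            end += 1
        passages.append(" ".join(sentences[start:end]))
        if end >= n:
            break
        # slide forward by about half the sentences, at least one
        start += max(1, (end - start) // 2)
    return passages
```

Each returned passage would then be fed to the expansion model, and all generated expansions appended to the original document, as described for CONCAT.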
\section{Introduction}\label{sec: Intro} Large-scale natural flows, such as atmospheric ones, as well as a wide variety of engineering applications, are among the many systems that are substantially influenced by turbulence. Nonlinearity and stochasticity are two inherent elements of fluid dynamics that, when significantly triggered, drive the flow into the turbulent regime \citep{pope2001turbulent, sagaut2008homogeneous}. Turbulence is characterized by persistent fluctuating field variables that are strongly non-Gaussian and have a multi-scale and ubiquitous influence on the fluid dynamics, with a great impact on the quality of transport and mixing \citep{akhavan2020anomalous, zayernouri2011coherent}. Moreover, the notable emergence of extreme and anomalous events, reflected in the statistical measurements of turbulent fields and quantities, intensifies the level of complexity in turbulent flows \citep{sapsis2020statistics, yeung2015extreme}. Therefore, accounting for the effects of turbulence cannot be compromised in the prediction and design procedures for a fluid system operating in the turbulent regime. Although considerable advancements in modern computational architectures and high-performance computing (HPC) over the past decade have greatly facilitated high-fidelity predictions of turbulent transport through direct numerical simulation (DNS), those efforts have mostly remained in the area of canonical and fundamental turbulent transport. Nevertheless, large-eddy simulation (LES) of turbulence has shown a promising path towards robust, accurate, and computationally affordable predictions of turbulent flow behavior in large-scale and real-world applications \citep{fu2020heat}. In fact, LES is considered a reliable trade-off between DNS and low-fidelity simulations with Reynolds-Averaged Navier-Stokes (RANS) models.
The main idea in LES is that, for sufficiently high-Reynolds-number flows in which the statistics of the turbulent fluctuations associated with small-scale motions are isotropic and hence expected to be universal, one can numerically resolve the large-scale motions while handling the subgrid-scale (SGS) effects through proper closure models that utilize resolved-scale variables. In practice, a spatial filter acts on the conservation equations of transport, which yields the LES equations \citep{leonard1975energy, germano1992turbulence}. Traditionally, SGS modeling is categorized into two main branches: (\textit{i}) functional modeling, and (\textit{ii}) structural modeling \citep{sagaut2006large}. Functional modeling requires prior knowledge of the interactions between the resolved and subgrid scales so that one can represent the LES closure as a mathematical function of resolved transport variables. Functional models usually represent the net transfer of turbulent kinetic energy from the resolved scales to the subgrid scales. The Smagorinsky model, initially conceptualized in \citep{smagorinsky1963general}, and its variations are well-known examples of functional SGS modeling. On the other hand, structural models seek to reconstruct the statistics and structure of the SGS stresses and fluxes from the resolved-scale variables. For instance, scale-similarity models, initially introduced by Bardina \textit{et al.} \citep{bardina1980improved}, are among the well-known examples of structural models. Functional models are usually poorly correlated with the true SGS terms \textit{a priori} and, by construction, are incapable of reproducing the backward transfer of energy (backscattering); however, in an LES setting they have been shown to be dissipative enough for solver stability.
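As an illustrative sketch of the functional-modeling idea (our own minimal example, not taken from any of the cited works), the classical Smagorinsky closure builds an eddy viscosity $\nu_t=(C_s\Delta)^2|\bar{S}|$ from the resolved strain-rate tensor, with $|\bar{S}|=\sqrt{2\bar{S}_{ij}\bar{S}_{ij}}$. In two dimensions on a uniform grid:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a uniform
    2D grid, taking the filter width Delta equal to the grid spacing dx."""
    dudx = np.gradient(u, dx, axis=0)
    dudy = np.gradient(u, dx, axis=1)
    dvdx = np.gradient(v, dx, axis=0)
    dvdy = np.gradient(v, dx, axis=1)
    S12 = 0.5 * (dudy + dvdx)                      # off-diagonal strain rate
    S_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * S12**2))
    return (Cs * dx) ** 2 * S_mag
```

For a pure shear flow $u=\gamma y$, $v=0$, this yields $|\bar{S}|=\gamma$ and hence $\nu_t=(C_s\Delta)^2\gamma$ everywhere, matching the hand computation.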
In contrast, structural models such as scale-similarity type models have been found to be sufficiently correlated with the true SGS terms and fairly capable of following the backscattering phenomenon in an \textit{a priori} sense. Nonetheless, their significant drawback is that in LES they are under-dissipative; hence, stable time integration is intractable. As a practical remedy to these issues, further efforts have been devoted to formulating mixed representations of functional and structural models \citep{zang1993dynamic, liu1994properties}. Recently, the abundance of high-fidelity data for the SGS closures, mainly available through filtered DNS data, together with the advent of modern machine learning (ML) techniques and their application to fluid mechanics and, in particular, turbulence modeling \citep{kutz2017deep, duraisamy2019turbulence, brunton2020machine, beck2020perspective}, has resulted in a wide variety of predictive data-driven SGS models. Among the numerous contributions in ML-based SGS modeling and LES, interested readers are referred to the following notable works \citep{beck2019deep, kurz2020machine,portwood2020interpreting, SIRIGNANO2020}. A vital point in credibly certifying an SGS model for LES is its capability to accurately encode the statistics of turbulent transport and SGS dynamics \citep{meneveau1994statistics, moser2020statistical}. Therefore, in the current work, our main focus is to develop a statistically consistent LES closure model. Through this approach, we aim to ensure that the nonlocal interactions in the turbulent energy dissipation \citep{waleffe1992nature, hamlington2008local}, which are intensified in the SGS effects during LES \citep{samiee2020fractional}, are captured.
Unlike the integer-order (standard) differential operators, fractional-order operators are fundamentally defined based on heavy-tailed stochastic processes; therefore, they are inherently nonlocal operators and are suitable for incorporating long-range interactions into mathematical models \citep{meerschaert2011stochastic, d2020unified}. Among the wide range of applications employing fractional-order operators, the modeling of visco-elasto-plastic materials for structural analysis \citep{suzuki2016fractional}, their nonlinear vibration analysis \citep{suzuki2020anomalous}, memory-dependent modeling of damage mechanics \citep{suzuki2019thermodynamically}, and nonlocal elasticity modeling of solids \citep{jokar2020variable} are among the notable works reported in the literature. Due to the remarkable and diverse applications of fractional-order Partial Differential Equations (PDEs), the development of high-order numerical methods \citep{samiee2017fast, samiee2019I, samiee2019II, lischke2019spectral, samiee2020unified, delia_nonlocal_2020, zhou2020implicit, du2020fast, fang2020fast} and data-driven numerical schemes \citep{suzuki2020self}, as well as numerical studies of stochastic fractional PDEs \citep{kharazmi2019fractional, kharazmi2019operator}, has been an active area of research. In the context of LES for turbulent flows, a recent study by Samiee \textit{et al.} \citep{samiee2020fractional} introduced a nonlocal model for the divergence of the SGS stress tensor in terms of a fractional Laplacian acting on the resolved-scale velocity field. To derive this model, the filtered Boltzmann transport equation was considered, where the filtered equilibrium distribution is approximated by an $\alpha$-stable L\'evy distribution. Moreover, Di Leoni \textit{et al.} \citep{di2020two} proposed a nonlocal eddy-viscosity SGS model that employs a fractional gradient operator.
Their modeling strategy is based upon high-fidelity observations of the nonlocal two-point correlation between the SGS stress and strain-rate tensors (inspired by the derivation of the filtered K\'arm\'an--Howarth equation), and proposes a proper nonlocal convolution kernel that yields the fractional gradient operator. They sufficiently captured the nonlocal SGS effects through proper fractional orders for different turbulent flows, including anisotropy and inhomogeneity effects. These studies demonstrated that fractional-order operators are well-suited candidates for modeling the SGS stresses in the LES of turbulent flows. Of particular interest, we aim to study nonlocal SGS modeling for conserved passive scalars in turbulent flows \citep{warhaft2000passive, shraiman2000scalar, sreenivasan2019turbulent}; thus, we seek to model the SGS scalar flux arising as the closure term in the filtered scalar transport equation. Given the promising potential of the Boltzmann transport framework for investigating the sources of spatial nonlocality appearing in the SGS dynamics \citep{samiee2020fractional}, we study the filtered version of the Boltzmann transport equation for passive scalars in turbulent flow. Using proper statistical assumptions at the kinetic level, we derive a continuum-level closure model in terms of a fractional-order Laplacian of the resolved scalar concentration. Through a statistical data-driven procedure, the model is calibrated to its optimal form so that it captures the nonlocal statistics embedded in the ground-truth data. The structure of the rest of this work is organized as follows: in section \ref{sec: Gov-Eqns}, we state the problem and present the governing equations. In section \ref{sec: nonlocality}, we motivate the necessity of our modeling strategy to address nonlocality using statistical measures obtained from the filtered DNS data.
In section \ref{sec: BT-framework}, the mathematical framework of our SGS modeling, which combines fractional calculus and Boltzmann transport, is described, and the derivation of the SGS model is presented. Afterwards, in section \ref{sec: Calibration}, a two-stage data-driven calibration procedure is introduced to optimize the model performance. Finally, section \ref{sec: Dissipation} delivers an \textit{a priori} test on the SGS dissipation of the resolved-scale scalar variance, followed by the conclusions in section \ref{sec: Conclusion}. \section{Governing Equations}\label{sec: Gov-Eqns} We consider flows governed by the incompressible Navier-Stokes (NS) equations \begin{eqnarray}\label{GE-1-2} \frac{\partial V_i}{\partial t}+\frac{\partial}{\partial x_j}\left( V_i\,V_j \right)=-\frac{1}{\rho}\frac{\partial p}{\partial x_i}+\nu \, \frac{\partial^2 V_i}{\partial x_j \partial x_j}+\mathcal{A} \, V_i, \quad i,j=1,2,3, \end{eqnarray} subject to the continuity constraint, $\nabla \cdot \boldsymbol{V}=0$, where the velocity and pressure fields are denoted by $\boldsymbol{V}(\boldsymbol{x},t)=(V_1,\, V_2,\, V_3)$ and $p(\boldsymbol{x},t)$, respectively, with $\boldsymbol{x}=(x_1,x_2,x_3)$. Here, $\rho$ specifies the density and $\nu$ the kinematic viscosity of a Newtonian fluid. In \eqref{GE-1-2}, $\mathcal{A}$ is a dynamic coefficient associated with the artificial forcing scheme, which enforces a statistically stationary state of the kinetic energy so that the flow reaches a realistic, fully turbulent state. It is worth mentioning that all quantities in \eqref{GE-1-2} are taken to be zero-mean; therefore, $\boldsymbol{V}(\boldsymbol{x},t)$ corresponds to the turbulent fluctuations. In our study, a passive scalar with an imposed mean gradient along the $x_2$ direction is transported by the described turbulent flow.
According to the Reynolds decomposition for the total concentration of the passive scalar, $\Phi(\boldsymbol{x},t)$, one can write $\Phi = \langle \Phi \rangle + \phi$. Here, $\langle \cdot \rangle$ is the ensemble-averaging operator, and $\phi$ denotes the fluctuating part of the passive scalar concentration. More specifically, the imposed mean scalar gradient is taken to be uniform, $\nabla \langle \Phi \rangle = \left( 0,\beta,0\right)$, where $\beta$ is a constant. Therefore, the turbulent scalar concentration obeys an advection-diffusion (AD) equation that simplifies to the following form \begin{eqnarray}\label{GE-1-3} \frac{\partial \phi}{\partial t}+\frac{\partial}{\partial x_i}\left( \phi \, V_i \right) = -\beta \, V_2+\mathcal{D} \, \frac{\partial^2 \phi}{\partial x_i \partial x_i}, \quad i=1,2,3, \end{eqnarray} where $\mathcal{D}$ denotes the molecular diffusion coefficient of the passive scalar. Accordingly, the Schmidt number is defined as $Sc=\nu/\mathcal{D}$. In the LES of turbulent transport, the fluid and passive scalar motions are resolved down to a prescribed length scale, namely the filter width, $\Delta$, which linearly decomposes the velocity and scalar concentration fields into the filtered (resolved) and the residual (unresolved) components. For instance, for the scalar concentration, $\widetilde{\phi}$ and $\phi^R=\phi - \widetilde{\phi}$ represent the filtered and residual fields, respectively. The filtered fields are obtained by a convolution, $\widetilde{\phi}= \boldsymbol{\mathcal{G}} \ast \phi$, where $\boldsymbol{\mathcal{G}} = \boldsymbol{\mathcal{G}}(\boldsymbol{r})$ denotes a generic spatial filtering kernel \citep{pope2001turbulent}. Applying this filtering operation to the governing equations yields the corresponding LES equations.
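As a brief numerical aside, the filtered/residual decomposition $\phi = \widetilde{\phi} + \phi^R$ can be sketched in one dimension. The following minimal illustration (our own construction, not the solver used in this work) convolves a periodic signal with a discrete box kernel; the signal, grid size, and filter width are arbitrary assumptions chosen for clarity.

```python
import numpy as np

# Periodic 1D "scalar field" with a large-scale and a small-scale component
# (a synthetic stand-in for a DNS field; all parameters are illustrative).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
phi = np.sin(x) + 0.2 * np.sin(20.0 * x)

# Discrete box kernel spanning `width` grid points, normalized to sum to one,
# applied with periodic wrap-around so the filtered field stays periodic.
width = 17
kernel = np.ones(width) / width
phi_ext = np.pad(phi, width // 2, mode="wrap")
phi_tilde = np.convolve(phi_ext, kernel, mode="valid")   # filtered (resolved) field
phi_R = phi - phi_tilde                                  # residual (subgrid) field

# The decomposition is exact by construction: phi == phi_tilde + phi_R.
```

As expected of a low-pass filter, the small-scale mode is strongly damped in $\widetilde{\phi}$ while the large-scale mode passes through nearly unchanged.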
For example, the filtered AD equation is formulated as \begin{eqnarray}\label{eqn: Filtered-AD} \frac{\partial \widetilde{\phi}}{\partial t}+\frac{\partial}{\partial x_i}\left( \widetilde{\phi}\,\widetilde{V}_i \right) = -\beta \, \widetilde{V}_2 + \mathcal{D} \, \frac{\partial^2 \widetilde{\phi}}{\partial x_i \partial x_i} - \frac{\partial q^R_i}{\partial x_i}, \quad i=1,2,3, \end{eqnarray} where $q^R_i$ denotes the residual or SGS scalar flux, defined exactly as $q^R_i=\widetilde{\phi \, V_i}-\widetilde{\phi} \, \widetilde{V}_i$. In the LES sense, the SGS scalar flux needs to be closed (modeled) in terms of the resolved-scale (filtered) variables through proper and physically consistent SGS modeling. \section{Why Is the SGS Dynamics Statistically Nonlocal?}\label{sec: nonlocality} \begin{figure}[t!] \begin{minipage}[b]{.48\linewidth} \centering \includegraphics[width=1\textwidth]{Dissipation_FDNS} \subcaption{\footnotesize} \end{minipage} \begin{minipage}[b]{.01\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{two-point_FDNS} \subcaption{\footnotesize} \end{minipage} \caption{\footnotesize Statistics of the true subgrid-scale contribution to the filtered scalar variance rate. (a) PDF of the normalized SGS dissipation of filtered scalar variance, $-\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}$, computed over a sample space of $10 \, T_{LE}$ of statistically stationary turbulence. (b) Time-averaged two-point correlation function \eqref{eqn: TPC1} between $q^R_\parallel$ and $\widetilde{G}_\parallel$ with $r=\vert \boldsymbol{r}_\perp \vert$.}\label{fig: Nonlocality} \end{figure} In an ideal LES, one of the main elements reflecting the dynamics of turbulent transport is capturing the true filtered (resolved-scale) turbulent intensity through robust SGS modeling that is physically and mathematically consistent.
In fact, such a transport equation includes closure terms that directly link the correct time evolution of the turbulent intensity to the nature of the SGS closure and its modeling. In the LES of scalar turbulence, multiplying both sides of the filtered AD equation \eqref{eqn: Filtered-AD} by $\widetilde{\phi}$ yields the time evolution of the filtered turbulent \textit{intensity} as \begin{equation}\label{eqn: scalar_var1} \frac{1}{2} \frac{\partial}{\partial t}\left( \widetilde{\phi} \, \widetilde{\phi} \right) + \widetilde{\phi} \, \frac{\partial}{\partial x_i} \left( \widetilde{\phi} \, \widetilde{V}_i \right)= -\beta \, \widetilde{\phi} \, \widetilde{V}_2 + \mathcal{D} \, \widetilde{\phi} \, \frac{\partial^2 \, \widetilde{\phi}}{\partial x_i \partial x_i} - \widetilde{\phi} \, \frac{\partial \, q^R_i}{\partial x_i}. \end{equation} Using the continuity equation and the chain rule for differentiation, \begin{equation}\label{eqn: scalar_var2} \frac{1}{2} \frac{\partial}{\partial t}\left( \widetilde{\phi} \, \widetilde{\phi} \right) + \widetilde{\phi} \, \widetilde{V}_i \, \frac{\partial \widetilde{\phi}}{\partial x_i} = -\beta \, \widetilde{\phi} \, \widetilde{V}_2 + \mathcal{D} \, \frac{\partial}{\partial x_i}\left( \widetilde{\phi} \, \frac{\partial \widetilde{\phi}}{\partial x_i} \right) - \mathcal{D} \, \frac{\partial \, \widetilde{\phi}}{\partial x_i} \, \frac{\partial \, \widetilde{\phi}}{\partial x_i} - \frac{\partial}{\partial x_i}\left( \widetilde{\phi} \, q^R_i \right) + q^R_i \, \frac{\partial \widetilde{\phi}}{\partial x_i}. \end{equation} Applying the ensemble-averaging operator, $\langle \cdot \rangle$, to \eqref{eqn: scalar_var2} yields the transport equation for the \textit{filtered scalar variance}, $\left\langle \widetilde{\phi} \, \widetilde{\phi} \right\rangle$.
In this study, we consider the case of homogeneous turbulent velocity and scalar fields; therefore, $\left\langle \frac{\partial}{\partial x_i}\left(\cdot \right) \right\rangle = \frac{\partial}{\partial x_i}\langle (\cdot) \rangle = 0$. Defining the filtered scalar gradient as $\widetilde{\boldsymbol{G}}(\boldsymbol{x}) = \nabla \widetilde{\phi}(\boldsymbol{x})$, the time evolution of the filtered scalar variance takes the following form \begin{align}\label{eqn: scalar_var3} \frac{1}{2} \frac{d}{d t}\left\langle \widetilde{\phi} \, \widetilde{\phi} \right\rangle &= -\widetilde{\mathcal{T}} + \widetilde{\mathcal{P}} - \widetilde{\chi} + \Pi, \\ \widetilde{\mathcal{T}} = \left\langle \widetilde{\phi} \, \widetilde{V}_i \, \widetilde{G}_i \right \rangle, \quad \widetilde{\mathcal{P}} = -\beta \left\langle \widetilde{\phi} \, \widetilde{V}_2 \right \rangle&, \quad \widetilde{\chi} = \mathcal{D} \, \left\langle \widetilde{G}_i \, \widetilde{G}_i \right\rangle, \quad \Pi = \left\langle q^R_i \, \widetilde{G}_i \right\rangle. \nonumber \end{align} In \eqref{eqn: scalar_var3}, $\widetilde{\mathcal{T}}$ denotes the \textit{turbulent transport} of filtered scalar variance, $\widetilde{\mathcal{P}}$ represents the \textit{production} of resolved scalar variance by the uniform mean scalar gradient, and $\widetilde{\chi}$ is the resolved scalar variance \textit{dissipation} due to molecular diffusion. Unlike these three terms, $\Pi$ (representing the \textit{SGS production} of resolved scalar variance) is the only term in \eqref{eqn: scalar_var3} that contains the effects of the SGS scalar flux. Therefore, as pointed out earlier, understanding the true statistical nature of $\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}$ is essential for SGS modeling and for the precise evaluation of the resolved scalar variance in LES.
This examination of $\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}$ may be viewed from both single-point and two-point statistics, as discussed in \citep{meneveau1994statistics} in the context of LES for homogeneous isotropic turbulent flows. In a recent comprehensive study by Di Leoni \textit{et al.}, the effects of the SGS contribution on the evolution of the two-point velocity correlation were explored for the incompressible Navier-Stokes equations using filtered DNS data for HIT and turbulent channel flows at high Reynolds numbers, revealing the importance of nonlocal effects in the SGS dynamics \citep{di2020two}. In the present study, we also focus on the two-point statistics of the SGS production of resolved scalar variance. This quantity is well represented by the following normalized two-point correlation function \begin{align}\label{eqn: TPC1} \mathcal{C}(q^R_i \, , \, \widetilde{G}_i) = \frac{\left \langle q^R_i(\boldsymbol{x}) \, \widetilde{G}_i(\boldsymbol{x}+\boldsymbol{r}) \right \rangle}{\left \langle q^R_i(\boldsymbol{x}) \, \widetilde{G}_i(\boldsymbol{x}) \right \rangle}, \end{align} where $\boldsymbol{r}=(r_1,r_2,r_3)$ denotes the spatial shift from the location $\boldsymbol{x}$. Moreover, the probability density function (PDF) of the SGS production of scalar variance normalized by its $L_2$-norm, \textit{i.e.}, $\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}/\Vert \boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}} \Vert$, is another measure for learning about the statistical behavior of $\Pi$ and gaining a more comprehensive insight into the SGS modeling. \subsection{High-Fidelity Database of the SGS Scalar Flux}\label{sec: Filtered-DNS} In order to study the statistics of $\Pi$, we compute the true values of the SGS scalar flux using the box filtering kernel with isotropic filter width $\Delta$, \begin{align}\label{eqn: Box-filtering} \mathcal{G}(r) = \begin{cases} \frac{1}{\Delta}, & r \leq \Delta/2 \\ 0, & r > \Delta/2.
\end{cases} \end{align} We apply this convolution kernel to a well-resolved DNS database of a passive scalar with an imposed mean gradient in synthetic (forced) homogeneous isotropic turbulence. To perform the simulation, we employ an open-source parallel statistical-computational platform for turbulent transport equipped with a Fourier pseudo-spectral spatial discretization of the NS and AD equations, a fourth-order Runge-Kutta (RK4) time-integration scheme, and an artificial forcing method (to keep the turbulent kinetic energy at low wavenumbers constant) \citep{PSc_HIT3D2020}. Our computational domain is a triply periodic cube, $\boldsymbol{\Omega}=[0,2\pi]^3$, discretized on a uniform Cartesian grid with $N=520^3$ Fourier collocation points, while a constant $\Delta t = 5 \times 10^{-4}$ is utilized for stable time integration. In constructing this DNS database, the imposed mean scalar gradient is taken as $\beta=1$, and $Sc=1$, consistent with section \ref{sec: Gov-Eqns}. Letting $k_{max}$ be the maximum resolved wavenumber in our simulation and $\eta=(\nu^3/\varepsilon)^{1/4}$ the Kolmogorov length scale, where $\varepsilon$ denotes the turbulent dissipation rate, we measure $k_{max} \, \eta \approx 1.5$; therefore, one can ensure that the small scales in the velocity and scalar fields are well resolved \citep{PSc_HIT3D2020}. Moreover, our records indicate that the Taylor-scale Reynolds number is $Re_\lambda=240$ (averaged over 25 large-eddy turnover times, $T_{LE}$, of resolving the passive scalar field). \subsection{Statistical Analysis of the SGS Effects in Filtered Scalar Intensity}\label{sec: Stats-FDNS} Taking a large sample space over $10 \, T_{LE}$ of this stationary process (after resolving the passive scalar field for $15 \, T_{LE}$), we compute the PDF of the normalized SGS production of filtered scalar variance for four different filter widths, $\Delta/\eta=8, \, 20, \, 41, \, 53$.
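Given any discrete filtering routine, the exact definition $q^R_i=\widetilde{\phi \, V_i}-\widetilde{\phi} \, \widetilde{V}_i$ can be evaluated directly from data. The sketch below is a schematic 1D numpy version (the actual database is 3D and pseudo-spectral; the synthetic fields and filter width are placeholders of our own choosing).

```python
import numpy as np

def box_filter(field, width):
    """Periodic (circular) box filter over `width` grid points."""
    kernel = np.ones(width) / width
    ext = np.pad(field, width // 2, mode="wrap")
    return np.convolve(ext, kernel, mode="valid")

# Synthetic periodic scalar and velocity samples standing in for DNS slices.
N = 512
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
phi = np.sin(x) + 0.3 * np.sin(15.0 * x + 1.0)
u = np.cos(x) + 0.3 * np.cos(11.0 * x)

# Exact (unclosed) SGS scalar flux: q^R = filter(phi*u) - filter(phi)*filter(u).
width = 33
qR = box_filter(phi * u, width) - box_filter(phi, width) * box_filter(u, width)
```

Note that $q^R$ computed this way is invariant under adding a constant to the velocity, since the added mean passes through the linear filter and cancels in the difference.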
As a result, we observe that as $\Delta$ becomes larger, the PDF exhibits broader tails, as shown in Figure \ref{fig: Nonlocality}(a). The emergence of these heavy PDF tails implies that as we increase the filter width, long-range spatial interactions become stronger and more pronounced \citep{akhavan2020anomalous}. Motivated by this observation, a two-point diagnosis of the SGS scalar production of the filtered variance, as defined in equation \eqref{eqn: TPC1}, provides another statistical measure shedding light on the long-range interactions in addition to the filter-width effects. Denoting by $\parallel$ the direction along the imposed mean scalar gradient and by $\perp$ the directions perpendicular to it, we are interested in evaluating $\mathcal{C}(q^R_\parallel \, , \, \widetilde{G}_\parallel)$. Here, we take $\boldsymbol{r}=(r_1,0,0)$ and $\boldsymbol{r}=(0,0,r_3)$ and average the resulting two-point correlation functions. Owing to the statistically stationary turbulence, we perform this procedure for 20 data snapshots uniformly spaced over $10 \, T_{LE}$ (on the same spatio-temporal data used to compute the PDFs); hence, we obtain the time-averaged value of $\mathcal{C}(q^R_\parallel \, , \, \widetilde{G}_\parallel)$. Figure \ref{fig: Nonlocality}(b) illustrates this two-point correlation function extending over a wide range of spatial shift, $r=\vert \boldsymbol{r} \vert$, evaluated at the four filter widths used in Figure \ref{fig: Nonlocality}(a). This plot reveals, both quantitatively and qualitatively, that as we increase $\Delta$, greater correlation values between the SGS scalar flux, $q^R_\parallel(\boldsymbol{x})$, and the filtered scalar gradient, $\widetilde{G}_\parallel(\boldsymbol{x}+\boldsymbol{r})$, are observed at a fixed $r$. These spatial correlations are significant in both the \textit{dissipation} and \textit{inertial} subranges.
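For periodic data, the normalized two-point correlation in \eqref{eqn: TPC1} can be evaluated for all shifts at once via the FFT cross-correlation theorem. The following is a minimal 1D sketch (our own illustration; the fields and sizes are arbitrary stand-ins for $q^R_\parallel$ and $\widetilde{G}_\parallel$).

```python
import numpy as np

def two_point_corr(a, b):
    """C(r) = <a(x) b(x+r)> / <a(x) b(x)> for periodic 1D samples,
    evaluated for every shift r at once via the cross-correlation theorem."""
    N = a.size
    cross = np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b))) / N
    return cross / cross[0]

# Example pair of periodic signals (stand-ins for the flux and gradient fields).
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
a = np.sin(x) + 0.5 * np.sin(3.0 * x)
b = np.sin(x - 0.2)

C = two_point_corr(a, b)  # C[0] == 1 by normalization
```

For an autocorrelation ($a = b$), the result is symmetric in the shift and peaks at $r=0$, which provides a convenient consistency check on the implementation.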
This confirms the substantial nonlocal effects in the true SGS dynamics, which need to be carefully addressed in SGS modeling for LES. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth]{two-point_DNS-EDM} \caption{\footnotesize Comparison between the true values of the two-point correlation function given in \eqref{eqn: TPC1} and the ones obtained from the local eddy-diffusivity modeling of the SGS scalar flux given in \eqref{eqn: EDM}. The evaluations are performed at two filter widths, $\Delta/\eta = 8, \, 53$.}\label{fig: EDM_TPC} \end{figure} A popular and fairly simple approach for modeling the SGS scalar flux is Eddy-Diffusivity Modeling (EDM). In EDM, the main assumption is that the SGS scalar flux is proportional to the resolved-scale scalar gradient as \begin{align}\label{eqn: EDM} \boldsymbol{q}^R(\boldsymbol{x}) \approx -\mathcal{D}_{ED} \, \widetilde{\boldsymbol{G}}(\boldsymbol{x}), \end{align} where $\mathcal{D}_{ED}$ is the proportionality coefficient. Obviously, EDM is a \textit{local} modeling approach by construction. Computing $\mathcal{C}(q^R_\parallel \, , \, \widetilde{G}_\parallel)$ with $q^R_\parallel$ approximated by the EDM, one can compare it with its true value shown in Figure \ref{fig: Nonlocality}(b). Figure \ref{fig: EDM_TPC} illustrates this comparison for two filter widths, $\Delta/\eta=8, \, 53$, and reveals that in both cases the local EDM substantially fails to predict the conspicuous long-range spatial correlations observed in the true two-point correlation values. This observation closely mirrors the results reported by Di Leoni \textit{et al.} \citep{di2020two}, who showed that the local eddy-viscosity model is structurally incapable of reproducing the two-point SGS dissipation for HIT and turbulent channel flows.
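For reference, a single eddy-diffusivity coefficient in \eqref{eqn: EDM} can be fit in an \textit{a priori} sense by least squares over sampled $q^R$ and $\widetilde{G}$ values. The sketch below uses synthetic samples; the data, noise level, and one-coefficient fitting choice are our own assumptions, not the calibration procedure of this work.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
G = rng.standard_normal(n)                      # samples of the filtered scalar gradient
qR = -0.05 * G + 0.3 * rng.standard_normal(n)   # synthetic "true" SGS flux samples

# Least-squares fit of q^R ~ -D_ED * G:  minimize ||qR + D_ED * G||^2,
# which gives D_ED = -<qR G> / <G G>.
D_ED = -np.dot(qR, G) / np.dot(G, G)
```

Such a single-coefficient fit can match single-point statistics reasonably well, but, being local, it cannot by construction reproduce the long-range two-point correlations discussed above.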
This concrete evidence urges us to go beyond the conventional means of SGS modeling for the scalar flux and to address the matter of nonlocality with more sophisticated mathematical modeling tools. Thus, a nonlocal construction of the EDM would be a natural remedy to this problem. \section{Boltzmann Transport Framework}\label{sec: BT-framework} In studying turbulent transport and mixing, kinetic Boltzmann theory has provided a rich and promising ground based upon the principles of statistical mechanics, which by construction is well-suited for the stochastic description of turbulence at the microscopic level \citep{harris2004introduction}. In the following, the fundamental sources of nonlocal closure and the SGS modeling for the residual passive scalar flux are studied within the kinetic Boltzmann transport framework. Our objective is to derive a nonlocal eddy-diffusivity SGS model at the continuum level. \subsection{BGK Model and Double Distribution Function}\label{sec: BGK-closure} In the classical kinetic theory of gases, we are concerned with the evolution of a single-particle distribution function, $f$, governed by the Boltzmann Transport Equation (BTE), \begin{align}\label{eqn: BTE-fluid} \frac{\partial f}{\partial t} + \boldsymbol{u} \cdot \nabla f = C(f). \end{align} In \eqref{eqn: BTE-fluid}, the probability distribution $f=f(t,\boldsymbol{x},\boldsymbol{u})$ is defined such that $f \, d\boldsymbol{x} \, d\boldsymbol{u}$ gives the mass of fluid particles that, at time $t$, are located inside the infinitesimal volume element $d\boldsymbol{x}$ centered at $\boldsymbol{x}$ with velocities inside the element $d\boldsymbol{u}$ centered at $\boldsymbol{u}$. In the phase space of a particle, $\boldsymbol{x}$, $\boldsymbol{u}$, and $t$ are considered independent variables. The left-hand side of \eqref{eqn: BTE-fluid} represents the streaming of the non-reacting particles, which is balanced by the \textit{collision} operator, $C(f)$, on the right-hand side.
As a widely used model for the collision operator, the Bhatnagar–Gross–Krook (BGK) approximation considers the scattering of a fluid particle due to collision with another particle, and characterizes $C(f)=C_{\mathrm{BGK}}(f)$ with a single parameter called the \textit{relaxation time}, $\tau$ \citep{BGK1954}. The collision operator is then written as \begin{align}\label{eqn: BGK-coll} C_{\mathrm{BGK}}(f) = -\frac{f-f^{eq}}{\tau}, \end{align} where the local equilibrium distribution function, $f^{eq}=f^{eq}(t,\boldsymbol{x},\boldsymbol{u})$, is given by the Maxwell distribution \citep{sone2012kinetic} and is parameterized by the locally conserved quantities (density $\rho$, velocity $\boldsymbol{V}$, and temperature $T$) as \begin{align}\label{eqn: Maxwell-fluid} f^{eq} = \frac{\rho}{(2\pi \, c_T^2)^{d/2}} \exp\left( \frac{-(\boldsymbol{u}-\boldsymbol{V})^2}{2 \, c_T^2} \right). \end{align} In \eqref{eqn: Maxwell-fluid}, $c_T=\sqrt{k_B \, T/m}$ is the thermal speed at temperature $T$, in which $k_B$ is the Boltzmann constant and $m$ represents the molecular mass, while $d$ denotes the number of spatial dimensions \citep{huang1987statistical}. In order to study passive scalar transport in this context, the Double Distribution Function (DDF) method has been a successful approach \citep{sharma2020current}. In the DDF, one distribution function addresses the conservation of mass and momentum, while another distribution function represents the conservation of energy. In the case of passive scalar transport, the compressive work and heat dissipation are considered negligible in the incompressible limit \citep{bartoloni1993lbe, eggels1995numerical, shan1997simulation}.
Therefore, the additional BTE that governs the energy distribution function, $g=g(t,\boldsymbol{x},\boldsymbol{u})$, with the BGK collision model is expressed as \begin{align}\label{eqn: BTE-scalar} \frac{\partial g}{\partial t} + \boldsymbol{u} \cdot \nabla g = C_{\mathrm{BGK}}(g) = -\frac{g-g^{eq}}{\tau_g}. \end{align} In \eqref{eqn: BTE-scalar}, $\tau_g$ represents the relaxation time, i.e., the time scale associated with the collisional relaxation to the local energy equilibrium denoted by the Maxwell energy distribution, \begin{align}\label{eqn: Maxwell-scalar} g^{eq} = \frac{\Phi}{(2\pi \, c_T^2)^{d/2}} \exp\left( \frac{-(\boldsymbol{u}-\boldsymbol{V})^2}{2 \, c_T^2} \right). \end{align} Defining $\mathcal{L}=(\boldsymbol{u} - \boldsymbol{V})^2/c_T^2$ and $F(\mathcal{L})=\exp (-\mathcal{L}/2)$, the Maxwell distribution in \eqref{eqn: Maxwell-scalar} (for the most general case, $d=3$) is reformulated as $g^{eq} = \frac{\Phi}{(2\pi)^{3/2} \, c_T^3} \, F(\mathcal{L})$. Subsequently, continuum averaging yields the macroscopic flow variables for the incompressible flow as follows: \begin{eqnarray} \label{eqn: continuum-ave1} \rho(t,\boldsymbol{x}) &=& \int_{\mathbb{R}^d} f(t,\boldsymbol{x},\boldsymbol{u}) \, d\boldsymbol{u}, \\ \label{eqn: continuum-ave2} \rho \, \boldsymbol{V}(t,\boldsymbol{x}) &=& \int_{\mathbb{R}^d} \, \boldsymbol{u} \, f(t,\boldsymbol{x},\boldsymbol{u}) \, d\boldsymbol{u}, \\ \label{eqn: continuum-ave3} \Phi(t,\boldsymbol{x}) &=& \int_{\mathbb{R}^d} g(t,\boldsymbol{x},\boldsymbol{u}) \, d\boldsymbol{u}, \end{eqnarray} where $\Phi(t,\boldsymbol{x})$ is the total passive scalar concentration field appearing in the AD equation. Let us define $L$ as the macroscopic characteristic length, $l_s$ as the microscopic characteristic length associated with the smallest length scale of the passive scalar, and $l_m$ as the mean free path (the average distance traveled by a particle between successive collisions).
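The continuum averages \eqref{eqn: continuum-ave1} and \eqref{eqn: continuum-ave2} can be verified on the equilibrium distribution itself: quadrature of the Maxwell distribution over velocity space must return $\rho$ and $\rho\boldsymbol{V}$. Below is a simple numerical check in one dimension ($d=1$); the parameter values are arbitrary.

```python
import numpy as np

rho, V, cT = 1.2, 0.7, 1.0  # density, mean velocity, thermal speed (illustrative)

# 1D Maxwell equilibrium distribution (the d/2 normalization with d = 1).
u = np.linspace(V - 10.0 * cT, V + 10.0 * cT, 20_001)
du = u[1] - u[0]
feq = rho / np.sqrt(2.0 * np.pi * cT**2) * np.exp(-((u - V) ** 2) / (2.0 * cT**2))

mass = np.sum(feq) * du          # zeroth moment: should recover rho
momentum = np.sum(u * feq) * du  # first moment: should recover rho * V
```

Truncating the velocity integral at $\pm 10\,c_T$ introduces only an exponentially small error, so both moments are recovered to high accuracy.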
Considering $\boldsymbol{x}^{\prime}$ to be the location of particles before scattering, while their current location is denoted by $\boldsymbol{x}$, one can write $\boldsymbol{x}^{\prime}=\boldsymbol{x}-\delta \boldsymbol{x}$, where $\delta \boldsymbol{x} = (t-t^{\prime}) \, \boldsymbol{u}$. Here, we assume that during the time interval $t-t^{\prime}$, $\boldsymbol{u}$ remains approximately constant \citep{samiee2020fractional}. According to Chen \textit{et al.} \citep{chen2007macroscopic, chen2010macroscopic}, the Boltzmann BGK kinetics with ``constant'' relaxation time, equations \eqref{eqn: BTE-fluid} and \eqref{eqn: BTE-scalar}, admit analytical solutions for $f(t,\boldsymbol{x},\boldsymbol{u})$ and $g(t,\boldsymbol{x},\boldsymbol{u})$ based upon their local equilibrium distributions, valid in a general flow where the distance from the wall is large compared to $l_m$. Focusing on equation \eqref{eqn: BTE-scalar} and defining $s = (t-t^\prime)/\tau_g$, the exact solution for $g(t,\boldsymbol{x},\boldsymbol{u})$ is \begin{align}\label{eqn: analytic-sol-g} g(t,\boldsymbol{x},\boldsymbol{u}) = \int_{0}^{\infty} e^{-s} \, g^{eq}(t-s\tau_g, \, \boldsymbol{x}- \boldsymbol{u} \, s\tau_g, \, \boldsymbol{u}) \, ds =\int_{0}^{\infty} e^{-s} \, g^{eq}_{s,s}(\mathcal{L}) \, ds, \end{align} where $g^{eq}_{s,s}(\mathcal{L})=g^{eq}(t-s\tau_g, \, \boldsymbol{x}-\boldsymbol{u} \, s\tau_g, \, \boldsymbol{u})$. \subsection{Filtered BTE, Closure Problem, and Kinetic-Boltzmann Modeling}\label{sec: FBTE} The statistical description of LES is well represented by incorporating a filtering procedure into the kinetic Boltzmann transport.
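Equation \eqref{eqn: analytic-sol-g} can be sanity-checked in a case with a closed form: for an equilibrium varying linearly in space, $g^{eq}(x)=a+bx$, and a fixed particle velocity $u$, the integral gives $\int_0^\infty e^{-s}\left(a+b(x-us\tau_g)\right)ds = a+bx-bu\tau_g$, i.e., the distribution lags the local equilibrium by one relaxation length $u\tau_g$. A quadrature sketch of this toy case (the setup and parameter values are our own):

```python
import numpy as np

a, b = 1.0, 0.5    # linear-in-x equilibrium: g_eq(x) = a + b*x
u, tau = 2.0, 0.1  # particle velocity and relaxation time (illustrative)
x = 3.0

# Numerical quadrature of g(x) = int_0^inf exp(-s) * g_eq(x - u*s*tau) ds;
# the exponential weight makes truncation at s = 40 negligible.
s = np.linspace(0.0, 40.0, 400_001)
ds = s[1] - s[0]
g = np.sum(np.exp(-s) * (a + b * (x - u * s * tau))) * ds

# Closed form, using int_0^inf e^{-s} ds = int_0^inf s e^{-s} ds = 1.
g_exact = a + b * x - b * u * tau
```

The agreement between the quadrature and the closed form confirms the path-integral representation of the BGK solution in this simple setting.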
For the purpose of passive scalar transport, applying a spatially and temporally invariant filtering kernel, $\boldsymbol{\mathcal{G}} = \boldsymbol{\mathcal{G}}(\boldsymbol{r})$, to the distribution function $g(t,\boldsymbol{x},\boldsymbol{u})$ linearly decomposes it into the filtered, $\widetilde{g}=\boldsymbol{\mathcal{G}} \ast g$, and the residual, $g^{\prime}=g-\widetilde{g}$, components. Therefore, filtering equation \eqref{eqn: BTE-scalar} results in the following filtered BTE (FBTE) for the passive scalar: \begin{equation}\label{eqn: FBTE} \frac{\partial \widetilde{g}}{\partial t} + \boldsymbol{u}\cdot \nabla \, \widetilde{g} = -\frac{\widetilde{g}-\widetilde{g^{eq}(\mathcal{L})}}{\tau_g}. \end{equation} As elaborated by Girimaji \citep{girimaji2007boltzmann}, the nonlinear nature of the collision operator, $C_{\mathrm{BGK}}(g)$, prevents the filtering kernel from commuting with $C_{\mathrm{BGK}}(g)$; thus, it gives rise to a closure problem at the kinetic level in the FBTE \eqref{eqn: FBTE}. Defining $\widetilde{\mathcal{L}}:=(\boldsymbol{u}-\widetilde{\boldsymbol{V}})^2/c_T^2$, this closure problem is manifested in the following inequality, \begin{align}\label{eqn: kinetic-closure} \widetilde{g^{eq}(\mathcal{L})} = \frac{\reallywidetilde{\Phi \, \exp(-\mathcal{L}/2)}}{(2\pi)^{3/2} \, c_T^3} \neq \frac{\widetilde{\Phi} \, \exp(-\widetilde{\mathcal{L}}/2)}{ (2\pi)^{3/2} \, c_T^3} = g^{eq}(\widetilde{\mathcal{L}}). \end{align} The identified closure requires proper modeling so that the FBTE \eqref{eqn: FBTE} can be solved numerically. A common practice is to approximate this closure with a modified relaxation-time approach, described in detail in \citep{sagaut2010toward}. Despite the success of this approach in some applications, it is not physically consistent with the filtered turbulent transport dynamics \citep{girimaji2007boltzmann}.
Nevertheless, here we address this inconsistency by examining the nonlocal effects arising from filtering the Maxwell distribution function, $g^{eq}(\mathcal{L})$, and modeling them with proper mathematical tools. Consider the spatial filtering kernel $\boldsymbol{\mathcal{G}}(\boldsymbol{r})$ with filter width $\Delta$, applied to the Maxwell equilibrium distribution as \begin{equation}\label{eqn: F-Maxwell} \widetilde{g^{eq}(\mathcal{L})} = \boldsymbol{\mathcal{G}} \ast g^{eq}\big (\mathcal{L}(t,\boldsymbol{u},\boldsymbol{x})\big ) = \int_{R_f}^{} \boldsymbol{\mathcal{G}}(\boldsymbol{r}) \, g^{eq}\big (\mathcal{L}(t,\boldsymbol{u},\boldsymbol{x}-\boldsymbol{r})\big ) \, d\boldsymbol{r}, \end{equation} where $R_f=[-\Delta/2 \, , \Delta/2]^3$. \begin{remb} The integral form of the convolution \eqref{eqn: F-Maxwell} implies that $\widetilde{g^{eq}(\mathcal{L})}$ consists of a summation of exponential functions. Thus, filtering encodes a multi-exponential behavior into the filtered equilibrium distribution that gets intensified as the filter width enlarges. Moreover, this multi-exponential structure of the filtered Maxwell distribution induces a heavy-tailed form for the filtered distribution, which essentially entails non-Gaussian behavior and justifies the spatial nonlocality \citep{samiee2020fractional}. This statistical rationale strongly indicates that modeling this closure problem with a Gaussian-type distribution is fundamentally insufficient. On the other hand, it is well known that the statistical behavior of a multi-exponential distribution can be well approximated by a power-law distribution \citep{chu2010power, samiee2020fractional}.
\end{remb} Subsequently, rewriting the right-hand side of the passive scalar FBTE \eqref{eqn: FBTE} in the following form, \begin{align}\label{eqn: FBTE-RHS} -\frac{1}{\tau_g} \left(\widetilde{g} - \widetilde{g^{eq}(\mathcal{L})} \right) = \underbrace{-\frac{1}{\tau_g} \left(\widetilde{g} - g^{eq}(\widetilde{\mathcal{L}}) \right)}_{\text{closed}} + \underbrace{\frac{1}{\tau_g} \left(\widetilde{g^{eq}(\mathcal{L})} - g^{eq}(\widetilde{\mathcal{L}}) \right)}_{\text{unclosed}}, \end{align} the unclosed part is structurally multi-exponentially distributed and may be approximated by the power-law distribution model we propose, \begin{align}\label{eqn: Levy-model} \widetilde{g^{eq}(\mathcal{L})} - g^{eq}(\widetilde{\mathcal{L}}) \approx g^\alpha(\widetilde{\mathcal{L}}) = \frac{\widetilde{\Phi}}{c_T^3} \, F^{\alpha}(\widetilde{\mathcal{L}}), \end{align} where $F^{\alpha}(\widetilde{\mathcal{L}})$ denotes an $\alpha$-stable L\'evy distribution, which is mathematically designed based on heavy-tailed stochastic processes and replicates the power-law behavior \citep{applebaum2009levy, meerschaert2011stochastic}. Regarding the decomposition given in \eqref{eqn: FBTE-RHS}, and applying the filtering kernel to the analytical solution for $g(t,\boldsymbol{x}, \boldsymbol{u})$ given in \eqref{eqn: analytic-sol-g}, we obtain \begin{align}\label{eqn: filtered-analytic-g1} \widetilde{g} (t,\boldsymbol{x}, \boldsymbol{u}) = \int_{0}^{\infty} e^{-s} \, \widetilde{g^{eq}_{s,s}(\mathcal{L})}\, ds = \int_{0}^{\infty} e^{-s} \, g^{eq}_{s,s}(\widetilde{\mathcal{L}}) \, ds + \int_{0}^{\infty} e^{-s} \, \left(\widetilde{g^{eq}_{s,s}(\mathcal{L})} - g^{eq}_{s,s}(\widetilde{\mathcal{L}})\right) \, ds, \end{align} where $\widetilde{g^{eq}_{s,s}(\mathcal{L})}=\reallywidetilde{g^{eq}\big (\mathcal{L}(t-s\tau_g,\boldsymbol{x}- \boldsymbol{u} \, s\tau_g,\boldsymbol{u}) \big )}$, and the second integral represents the closure source.
Therefore, employing the power-law distribution model in \eqref{eqn: Levy-model} returns the following analytic form for $\widetilde{g}(t,\boldsymbol{x}, \boldsymbol{u})$ \begin{align}\label{eqn: filtered-analytic-g2} \widetilde{g} (t,\boldsymbol{x}, \boldsymbol{u}) = \int_{0}^{\infty} e^{-s} \, g^{eq}_{s,s}(\widetilde{\mathcal{L}}) \, ds + \int_{0}^{\infty} e^{-s} \, g^{\alpha}_{s,s}(\widetilde{\mathcal{L}}) \, ds, \end{align} wherein $g^{\alpha}_{s,s}(\widetilde{\mathcal{L}}):=g^{\alpha}\big (\widetilde{\mathcal{L}}(t-s\tau_g,\boldsymbol{x}- \boldsymbol{u} \, s\tau_g,\boldsymbol{u}) \big )$. \subsection{Fractional-Order Model for the SGS Scalar Flux}\label{sec: Derivation} Similar to the continuum averaging shown in \eqref{eqn: continuum-ave1} to \eqref{eqn: continuum-ave3}, the macroscopic continuum variables associated with \eqref{eqn: Filtered-AD} are obtained in terms of the filtered distribution functions, $\widetilde{f}$ and $\widetilde{g}$, as \begin{eqnarray}\label{eqn: continuum-ave-fPhi} \widetilde{\Phi} &=& \int_{\mathbb{R}^d} \widetilde{g}(t,\boldsymbol{x},\boldsymbol{u}) \, d\boldsymbol{u}, \\ \label{eqn: continuum-ave-fVPhi} \widetilde{V}_i &=& \frac{1}{\rho}\int_{\mathbb{R}^d} u_i \, \widetilde{f}(t,\boldsymbol{x},\boldsymbol{u}) \, d\boldsymbol{u}, \quad i=1,2,3. \end{eqnarray} Multiplying both sides of the passive scalar FBTE by a collisional invariant, $\mathcal{X}=\mathcal{X}(\boldsymbol{u})$, and integrating over the kinetic momentum yields \begin{equation}\label{eqn: FBTE-ave-general} \int_{\mathbb{R}^d} \mathcal{X} \left( \frac{\partial \widetilde{g}}{\partial t} + \boldsymbol{u}\cdot \nabla \, \widetilde{g}\right) d\boldsymbol{u} = \int_{\mathbb{R}^d} \mathcal{X} \left( -\frac{\widetilde{g}-\widetilde{g^{eq}(\mathcal{L})}}{\tau_g} \right) d\boldsymbol{u}. \end{equation} Here, choosing $\mathcal{X} = 1$ recovers the filtered AD equation \eqref{eqn: Filtered-AD}.
According to the microscopic reversibility of the particles, which assumes that collisions occur \textit{elastically}, the right-hand side of \eqref{eqn: FBTE-ave-general} equals zero \citep{saint2009hydrodynamic}. Therefore, \eqref{eqn: FBTE-ave-general} reads \begin{align}\label{GE-14} \frac{\partial \widetilde{\Phi}}{\partial t} + \nabla\cdot \int_{\mathbb{R}^d} \boldsymbol{u} \, \widetilde{g} \, d\boldsymbol{u} = 0. \end{align} Since we are working with spatial filtering kernels, $\boldsymbol{\mathcal{G}}=\boldsymbol{\mathcal{G}}(\boldsymbol{r})$, \begin{align}\label{GE-16} \int_{\mathbb{R}^d} \boldsymbol{u} \, \widetilde{g} \, d\boldsymbol{u} = \int_{\mathbb{R}^d} (\boldsymbol{u}-\widetilde{\boldsymbol{V}}) \, \widetilde{g} \, d\boldsymbol{u}+ \int_{\mathbb{R}^d} \widetilde{\boldsymbol{V}} \, \widetilde{g}\, d\boldsymbol{u}. \end{align} Plugging \eqref{GE-16} into \eqref{GE-14}, we obtain \begin{equation}\label{GE-17} \frac{\partial \widetilde{\Phi}}{\partial t} + \nabla \cdot \left(\widetilde{\Phi} \, \widetilde{\boldsymbol{V}}\right) = -\nabla \cdot \boldsymbol{q}, \end{equation} where \begin{equation}\label{GE_17_2} q_i=\int_{\mathbb{R}^d} \left(u_i-\widetilde{V}_i\right) \, \widetilde{g} \, d\boldsymbol{u}. \end{equation} Using \eqref{eqn: filtered-analytic-g2}, we formulate $q_i$ as \begin{align}\label{GE-19-1} q_i = \int_{\mathbb{R}^d}\int_{0}^{\infty} e^{-s} (u_i-\widetilde{V}_i) \, g^{eq}_{s,s}(\widetilde{\mathcal{L}}) \, ds \, d\boldsymbol{u} + \int_{\mathbb{R}^d}\int_{0}^{\infty} e^{-s} (u_i-\widetilde{V}_i) \, g^{\alpha}_{s,s}(\widetilde{\mathcal{L}}) \, ds \, d\boldsymbol{u}. \end{align} It is straightforward to show that the temporal shift can be removed from \eqref{GE-19-1}.
Moreover, since $(u_i-\widetilde{V}_i) \, g^{eq}(\widetilde{\mathcal{L}})$ and $(u_i-\widetilde{V}_i) \, g^{\alpha}(\widetilde{\mathcal{L}})$ are both odd functions of $u_i-\widetilde{V}_i$, \begin{align} \int_{\mathbb{R}^d}(u_i-\widetilde{V}_i) \, g^{eq}(\widetilde{\mathcal{L}}) \, d\boldsymbol{u} = \int_{\mathbb{R}^d}(u_i-\widetilde{V}_i) \, g^{\alpha}(\widetilde{\mathcal{L}}) \, d\boldsymbol{u} = 0. \end{align} As a result, $q_i$ in \eqref{GE-19-1} can be rewritten as \begin{align}\label{GE-20} q_i &= \int_{\mathbb{R}^d}\int_{0}^{\infty} e^{-s} (u_i-\widetilde{V}_i) \left(g^{eq}_{s,s}(\widetilde{\mathcal{L}}) - g^{eq}(\widetilde{\mathcal{L}})\right) ds \, d\boldsymbol{u} + \int_{\mathbb{R}^d}\int_{0}^{\infty} e^{-s} (u_i-\widetilde{V}_i) \left(g^{\alpha}_{s,s}(\widetilde{\mathcal{L}}) - g^{\alpha}(\widetilde{\mathcal{L}})\right) ds \, d\boldsymbol{u}. \end{align} In an LES setting, the first integral on the right-hand side of \eqref{GE-20} represents the \textit{filtered} scalar flux, $\widetilde{\boldsymbol{q}}$, while the second integral aims to model the \textit{residual} scalar flux, $\boldsymbol{q}^R$, associated with unresolved small scales of turbulent transport. In other words, by assigning the Gaussian distribution $g^{eq}(\widetilde{\mathcal{L}})$ to $\widetilde{q_i}$ and the isotropic $\alpha$-stable L\'evy distribution, $g^{\alpha}(\widetilde{\mathcal{L}})$, to $q_i^R$, the total passive scalar flux, $\boldsymbol{q}=\widetilde{\boldsymbol{q}}+\boldsymbol{q}^R$, in \eqref{GE-20} may be decomposed as \begin{eqnarray}\label{GE-25} \widetilde{q_i} &=& \int_{0}^{\infty} \int_{\mathbb{R}^d} (u_i-\widetilde{V}_i) \left(g^{eq}_{s,s}(\widetilde{\mathcal{L}})-g^{eq}(\widetilde{\mathcal{L}})\right) e^{-s} d\boldsymbol{u} \, ds, \\ \label{GE-26} q_i^R &=& \int_{0}^{\infty} \int_{\mathbb{R}^d} (u_i-\widetilde{V}_i) \left(g^{\alpha}_{s,s}(\widetilde{\mathcal{L}})-g^{\alpha}(\widetilde{\mathcal{L}})\right) e^{-s} d\boldsymbol{u} \, ds.
\end{eqnarray} In \ref{sec: Appendix1}, the details of the derivation of $\widetilde{\boldsymbol{q}}$ and $\boldsymbol{q}^R$ in terms of macroscopic transport variables, including $\widetilde{\Phi}$ and $\widetilde{\boldsymbol{V}}$, are presented. As a result, the filtered passive scalar flux is obtained as \begin{align}\label{eqn: flt-flux} \widetilde{\boldsymbol{q}} = -\mathcal{D} \, \nabla \widetilde{\Phi}, \end{align} and the divergence of the residual scalar flux is derived as the fractional Laplacian of the filtered total scalar concentration, \begin{align}\label{eqn: res-flux} \nabla \cdot \boldsymbol{q}^R = -\mathcal{D}_\alpha \, (-\Delta)^{\alpha} \, \widetilde{\Phi}, \quad \alpha \in (0,1], \end{align} where $\mathcal{D}_\alpha := \frac{C_\alpha (c_T \, \tau_g)^{2\alpha}}{\tau_g} \, (2\alpha+2) \, \Gamma(2\alpha)$ is a model coefficient with the unit [$L^{2\alpha}/T$]. The filtered AD equation for the total passive scalar concentration, developed from the filtered kinetic BTE with an $\alpha$-stable L\'evy distribution model, thus yields a fractional-order SGS scalar flux model at the continuum level. This filtered AD equation reads \begin{align} \label{eqn: Flt-AD-total} \frac{\partial \widetilde{\Phi}}{\partial t}+\frac{\partial}{\partial x_i}\left( \widetilde{\Phi} \, \widetilde{V}_i\right) = \mathcal{D} \, \Delta \widetilde{\Phi} +\mathcal{D}_{\alpha} (-\Delta)^{\alpha} \, \widetilde{\Phi}. \end{align} With a proper choice of the fractional Laplacian order $\alpha$, the developed model is well suited to an LES setting. Applying the Reynolds decomposition and considering a passive scalar with an imposed uniform mean gradient, equation \eqref{eqn: Flt-AD-total} fully recovers the filtered transport equation \eqref{eqn: Filtered-AD} for the transport of the filtered scalar fluctuations, $\widetilde{\phi}$.
In order to explicitly derive the modeled residual scalar flux in terms of the filtered transport fields, from the Fourier definitions of the fractional Laplacian and the Riesz transform given in \ref{sec: Fractional-Calc}, one can verify that \begin{eqnarray} \mathcal{F} \Big {\{} (-\Delta)^{\alpha} \, \widetilde{\phi} \Big {\}} = \mathfrak{i} \, \xi_j \Big ( -\mathfrak{i} \, \xi_j / \vert \boldsymbol{\xi} \vert \Big) \, (\vert \boldsymbol{\xi} \vert^2 )^{\alpha-\frac{1}{2}} \, \mathcal{F} \Big {\{} \widetilde{\phi} \Big {\}}, \end{eqnarray} which leads to \begin{equation}\label{Flx-1} (-\Delta)^{\alpha} \widetilde{\phi} = \nabla_j \left(\mathcal{R}_j (-\Delta)^{\alpha-\frac{1}{2}} \, \widetilde{\phi}\right). \end{equation} Therefore, using \eqref{eqn: res-flux} we may write \begin{equation}\label{Flx-1-2} \nabla \cdot \boldsymbol{q}^{R} = \nabla \cdot \left(-\mathcal{D}_\alpha \, \mathcal{R} (-\Delta)^{\alpha-\frac{1}{2}} \, \widetilde{\phi}\right). \end{equation} Finally, from \eqref{Flx-1-2} one can find the \textit{explicit} form of the modeled SGS flux as \begin{equation}\label{Flx-2} q^{R}_{i} = -\mathcal{D}_\alpha \, \mathcal{R}_i (-\Delta)^{\alpha-\frac{1}{2}} \, \widetilde{\phi} + c, \end{equation} where $c$ is a real-valued constant. \section{Data-driven Nonlocal SGS Modeling}\label{sec: Calibration} Having derived the structure of the residual scalar flux as a nonlocal SGS model, two levels of model calibration are required before this SGS model can be employed in an LES. This model calibration problem can be viewed as a two-stage procedure, whose first stage estimates the fractional order, $\alpha$, and whose second stage infers the proportionality coefficient of the model, $\mathcal{D}_\alpha$.
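A compact way to see \eqref{Flx-1}--\eqref{Flx-2} in action is a one-dimensional periodic sketch, where the Riesz transform reduces to the Hilbert transform with Fourier symbol $-\mathfrak{i}\,\mathrm{sgn}(\xi)$. The helper names below are choices of this illustration, and this is a spectral toy implementation under a periodic-domain assumption, not the solver used in this work.

```python
import numpy as np

def frac_laplacian(phi, L, alpha):
    """(-Laplacian)^alpha of a 1-D periodic field, via the symbol |xi|^(2*alpha)."""
    xi = 2.0 * np.pi * np.fft.fftfreq(phi.size, d=L / phi.size)
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * alpha) * np.fft.fft(phi)))

def riesz_frac_flux(phi, L, alpha):
    """R (-Laplacian)^(alpha-1/2) phi in 1-D, symbol (-i sgn xi) |xi|^(2*alpha-1)."""
    xi = 2.0 * np.pi * np.fft.fftfreq(phi.size, d=L / phi.size)
    sym = np.zeros(phi.size, dtype=complex)
    nz = xi != 0  # leave the mean mode untouched (symbol is singular at xi = 0)
    sym[nz] = -1j * np.sign(xi[nz]) * np.abs(xi[nz]) ** (2 * alpha - 1)
    return np.real(np.fft.ifft(sym * np.fft.fft(phi)))
```

Differentiating `riesz_frac_flux` recovers `frac_laplacian`, mirroring \eqref{Flx-1}: for $\phi=\sin x$ on $[0,2\pi)$, one finds $\mathcal{R}(-\Delta)^{\alpha-\frac{1}{2}}\sin x=-\cos x$ and $(-\Delta)^{\alpha}\sin x=\sin x$ for any $\alpha$, since only the $|\xi|=1$ mode is present.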
Subsequently, we propose a two-stage \textit{a priori} parameter identification strategy based upon spatio-temporal data for the true $\boldsymbol{q}^R$, obtained from filtering well-resolved DNS of scalar turbulence as described in section \ref{sec: nonlocality}. \subsection{Capturing Nonlocality with Fractional Modeling of the SGS Scalar Flux}\label{sec: TPCs} The first stage of this data-driven model identification targets finding an optimal fractional order, $\alpha_{opt}$. Our ground-truth data comes from the exact evaluation of the two-point correlation function, $\mathcal{C}(q^R_\parallel \, , \widetilde{G}_\parallel)$, as described in section \ref{sec: nonlocality}. In fact, we aim to capture the spatial nonlocality we showed in the statistics of SGS production of filtered scalar variance (see Figure \ref{fig: Nonlocality}b). Since we employ the fluctuating part of $q^R_\parallel$ in computing the two-point correlation function, and since by definition $\mathcal{C}(q^R_\parallel \, , \widetilde{G}_\parallel)$ is normalized by $\left\langle q^R_\parallel(\boldsymbol{x}) \, \widetilde{G}_\parallel(\boldsymbol{x}) \right\rangle$, finding $\alpha_{opt}$ is essentially independent of the other model parameters appearing in \eqref{Flx-2}. Using the exact values of $q^R_\parallel$ from the filtered DNS database described in section \ref{sec: nonlocality}, \eqref{eqn: TPC1} returns the ground-truth two-point correlation function, $\mathcal{C}^\mathrm{True}$, while using the fractional model for the SGS scalar flux returns $\mathcal{C}^\mathrm{Model}$, both as functions of the spatial shift, $r$. In our study, for a fixed filter width, the fractional order that minimizes the mismatch function $\Vert \mathcal{C}^\mathrm{True} - \mathcal{C}^\mathrm{Model}\Vert$ determines $\alpha_{opt}$, capturing the entire range of spatial nonlocality.
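The $\alpha$-selection just described amounts to a one-dimensional grid search over the mismatch norm. A minimal sketch follows; the correlation shape `toy_C_model` is a synthetic placeholder, not the paper's $\mathcal{C}^{\mathrm{Model}}$, and the function names are hypothetical.

```python
import numpy as np

def optimal_alpha(r, C_true, C_model, alphas):
    """Grid search for the fractional order minimizing the L2 mismatch
    || C_true - C_model(.; alpha) || over the spatial-shift grid r."""
    errors = [np.linalg.norm(C_true - C_model(r, a)) for a in alphas]
    return float(alphas[int(np.argmin(errors))])

def toy_C_model(r, a):
    # Placeholder decaying-correlation shape; stands in for C^Model only.
    return np.exp(-(r ** (2.0 * a)) / 25.0)
```

With a ground-truth curve generated at $\alpha=0.35$, the grid search recovers that value exactly, since the mismatch vanishes there.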
By varying $0 < \alpha \leq 1$, we evaluate $\mathcal{C}^\mathrm{Model}$ for four different filter widths, $\Delta/\eta = 8, \, 20, \, 41, \, 53$. Figure \ref{fig: optimal_alpha} shows $\mathcal{C}^\mathrm{True}$ in addition to the variations of $\mathcal{C}^\mathrm{Model}$ with $r/\eta$ as we change $\alpha$. We observe that as $\alpha$ decreases, the nonlocal correlations in $\mathcal{C}^\mathrm{True}$ are better approximated over $r$ with the fractional SGS model. According to the minimization of the mismatch function we introduced, $\alpha_{opt}$ for the four values of filter width is reported in Table \ref{tab: one-point}. Moreover, given $\alpha_{opt}$ for each filter width, the single-point correlation coefficient between the true and modeled values of the SGS scalar flux, $\varrho \left(q_{\parallel}^{\mathrm{True}}, \, q_{\parallel}^{\mathrm{Model}}\right)$, is computed, and acceptably good correlation values (in an \textit{a priori} sense) are reported in Table \ref{tab: one-point}. We emphasize that the passive scalar transport occurs in a statistically homogeneous medium with a direction of large-scale anisotropy. This source of anisotropy significantly impacts the intensity of nonlocal effects in the SGS dynamics, so that the identified fractional order in the SGS model is found to be less than 0.5. A similar observation was made in the study by Di Leoni \textit{et al.}, where the presence of anisotropy effects in the turbulent channel flow (due to the non-zero mean velocity gradient along the stream-wise direction) increases the nonlocality in the SGS dynamics, such that $\alpha < 0.5$ is required to properly capture it with the fractional gradient SGS model \citep{di2020two}. \begin{figure}[t!]
\begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{two-point_08eta_r} \end{minipage} \begin{minipage}[b]{.02\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{two-point_20eta_r} \end{minipage} \begin{minipage}[b]{1\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{two-point_41eta_r} \end{minipage} \begin{minipage}[b]{.02\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{two-point_53eta_r} \end{minipage} \caption{\footnotesize Variations of the two-point correlation function given in \eqref{eqn: TPC1}, obtained for the modeled SGS flux, $\mathcal{C}^{\mathrm{Model}}$, as $\alpha$ is changed from 0 to 1, in addition to the exact evaluation of $\mathcal{C}(q^R_\parallel \, , \, \widetilde{G}_\parallel)$ via exact values of the SGS scalar flux from DNS, illustrated for four filter widths (a) $\Delta/\eta =8$, (b) $\Delta/\eta =20$, (c) $\Delta/\eta =41$, and (d) $\Delta/\eta =53$. The arrows indicate the increase of $\alpha$. Insets depict the two-point correlation function values on smaller regions of the spatial shift, $r/\eta < 150$, in logarithmic scale. These plots show that the true values of the two-point correlation function over the entire range of spatial shift are well approximated by finding $\alpha_{opt}$ in the fractional-order SGS model.}\label{fig: optimal_alpha} \end{figure} \begin{table}[t!]
\caption{\footnotesize Optimal fractional orders, and their corresponding single-point correlation coefficients between true and modeled SGS scalar fluxes.}\label{tab: one-point} \centering \begin{tabular}{ccccc} \toprule \toprule $\Delta/\eta$ & $\quad$ & $\alpha_{opt}$ & $\quad$ & $\varrho \left(q_{\parallel}^{\mathrm{True}}, \, q_{\parallel}^{\mathrm{Model}}\right)$ \\ \midrule 8 & $\quad$ & 0.40 & $\quad$ & 0.35 \\ 20 & $\quad$ & 0.35 & $\quad$ & 0.40\\ 41 & $\quad$ & 0.36 & $\quad$ & 0.44\\ 53 & $\quad$ & 0.37 & $\quad$ & 0.45\\ \bottomrule \bottomrule \end{tabular} \end{table} \subsection{Sparse Regression on the Fractional-Order Model}\label{sec: regression} After obtaining $\alpha_{opt}$ for a choice of filter width, we can compute the explicit term $\boldsymbol{X}=\mathcal{R}(-\Delta)^{\alpha_{opt}-\frac{1}{2}}\, \widetilde{\phi}$, noting the linear mapping $\boldsymbol{q}^R = - \mathcal{D}_\alpha \, \boldsymbol{X}+c$ in \eqref{Flx-2}. Having access to the true values of the SGS scalar flux on an extensive spatio-temporal database (described in section \ref{sec: nonlocality}) turns the second stage of our model calibration into a \textit{sparse linear regression} procedure. This procedure amounts to learning $\mathcal{D}_\alpha$, which appears in the filtered AD equation \eqref{eqn: Flt-AD-total}. Similar to Beetham and Capecelatro's work on sparse regression \citep{beetham2020formulating}, we employ a regularized linear regression method known as the \textit{elastic net}, which combines the $L_1$ and $L_2$ penalties as its regularizer \citep{zou2005regularization}. Using the implementation of the elastic net method in \texttt{scikit-learn} \citep{scikit-learn} and assigning equal weights to the $L_1$ and $L_2$ regularizers, we perform the regression, and its quality is examined through scatter plots.
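Since \eqref{Flx-2} is linear in a single regressor, the elastic-net fit used here admits a closed form, which we sketch below without external dependencies (the paper itself uses \texttt{scikit-learn}'s implementation). The function name and the $(\lambda,\rho)$ parametrization of the penalty are assumptions of this sketch, following the standard elastic-net objective.

```python
import numpy as np

def elastic_net_1d(x, y, lam=1e-3, rho=0.5):
    """Closed-form elastic net for a single predictor, y ~ w*x + c.

    Minimizes (1/2n)||y - w*x - c||^2 + lam*(rho*|w| + 0.5*(1-rho)*w^2);
    rho = 0.5 weights the L1 and L2 penalties equally."""
    n = x.size
    xm, ym = x.mean(), y.mean()
    xc, yc = x - xm, y - ym
    z = xc @ yc / n                                 # covariance term
    w = np.sign(z) * max(abs(z) - lam * rho, 0.0)   # L1 soft-thresholding
    w /= xc @ xc / n + lam * (1.0 - rho)            # L2 (ridge) shrinkage
    c = ym - w * xm                                 # intercept
    return w, c
```

With zero regularization this reduces to ordinary least squares; turning the penalty on shrinks the slope toward zero, which is the behavior the `scikit-learn` `ElasticNet` generalizes to many features.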
As a common practice, and in order to choose a proper training data size, we perform cross-validation tests over our spatio-temporal dataset \citep{kutz2017deep}. Figure \ref{fig: regression} shows the resulting scatter plots after the regression for two cases with $\Delta/\eta=41, \, 53$. \begin{figure}[t!] \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{Regplot_41eta} \end{minipage} \begin{minipage}[b]{.02\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{Regplot_53eta} \end{minipage} \caption{\footnotesize Regression plots between $q^{\mathrm{True}}_{\parallel}$ and $q^{\mathrm{Model}}_{\parallel}$ for the filter widths, (a) $\Delta/\eta=41$, and (b) $\Delta/\eta=53$. The corresponding optimal fractional-orders are reported in Table \ref{tab: one-point}.}\label{fig: regression} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth]{D_a_rev} \caption{Variation of the proportionality coefficient, $\mathcal{D}_\alpha$, for the fractional-order SGS model with filter width, and the scale-invariance study.}\label{fig: D-alpha} \end{figure} Using the described procedure, the proportionality coefficient for each filter width is obtained. Figure \ref{fig: D-alpha} illustrates the predicted $\mathcal{D}_\alpha$ from this regression procedure as a function of the chosen filter width; notably, the predicted $\mathcal{D}_\alpha$ decreases as smaller filter widths are chosen. This numerical observation is consistent with our theoretical interpretation of $\mathcal{D}_\alpha$, as we pointed out in section \ref{sec: Derivation}. A vital consideration in developing an SGS model is scale invariance of the closure model, especially within the inertial-convective subrange \citep{meneveau2000scale}. As indicated in section \ref{sec: Derivation}, $\mathcal{D}_\alpha$ takes the unit of [$L^{2\alpha}/T$].
Therefore, to study the scale-invariance property, choosing the filter width as the length scale, one can compare the variations of $\mathcal{D}_\alpha$ obtained from the sparse regression against $\Delta^{2\alpha_{opt}}$. Figure \ref{fig: D-alpha} shows that the developed fractional-order SGS model is scale-invariant. \section{\textit{A Priori} Testing via SGS Dissipation of the Resolved Scalar Variance}\label{sec: Dissipation} We subsequently examine the capability of the optimal fractional SGS model in reproducing the PDF of SGS dissipation of scalar variance, $\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}$, by addressing: \begin{itemize} \item The ability of the SGS model to capture heavy tails in the true PDF, and \item Whether the SGS model is capable of representing the backward scattering of the scalar variance cascade, \textit{i.e.}, reproducing the negative values in the PDF. \end{itemize} Considering two filter widths of $\Delta/\eta=41, \, 53$, Figure \ref{fig: Diss_PDFs} shows the PDF of normalized SGS dissipation of filtered scalar variance for the optimal fractional-order model, the local EDM, and the true SGS flux. The sample space to compute the PDFs is identical to the one we utilized to obtain the PDFs illustrated in Figure \ref{fig: Nonlocality}a, as fully described in section \ref{sec: nonlocality}. Here, one can see that for both filter widths the fractional-order SGS model successfully captures the broad tail of the PDF in the positive-value region of the SGS dissipation, whereas the local eddy-diffusivity model fails to do so completely. The positive side of the PDF is associated with the cascade of scalar variance from the resolved scales to the unresolved ones, \textit{i.e.}, forward scattering of the scalar variance.
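The dissipation PDFs compared here are empirical densities estimated from samples of $\boldsymbol{q}^R \cdot \widetilde{\boldsymbol{G}}$. A minimal numpy sketch of such an estimate follows; the helper names are hypothetical, and the synthetic samples in the usage below merely stand in for the DNS data.

```python
import numpy as np

def normalized_pdf(samples, bins=100):
    """Unit-area histogram estimate of a PDF; returns bin densities and edges."""
    dens, edges = np.histogram(samples, bins=bins, density=True)
    return dens, edges

def negative_fraction(samples):
    """Fraction of samples with negative dissipation (backscatter-type events)."""
    return float(np.mean(samples < 0.0))
```

With `density=True` the histogram integrates to one over the sampled range, so the modeled and true PDFs can be overlaid directly, and `negative_fraction` quantifies how much probability mass lies on the backscatter side.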
On the other hand, this figure remarkably demonstrates that, unlike the local EDM, the fractional-order SGS model is able to predict the events with negative SGS dissipation values, as observed in the true SGS dissipation PDFs. In fact, our resulting PDFs show that the nonlocal modeling of the SGS scalar flux through a fractional-order operator makes it possible to include the backward scattering in the LES of turbulent scalar transport. A similar observation in the context of fractional-order SGS modeling was reported by Di Leoni \textit{et al.}, where their fractional SGS model was shown to be able to reproduce the back-scattering of the filtered turbulent kinetic energy \citep{di2020two}. \begin{figure}[t!] \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{Dissipation_41eta} \end{minipage} \begin{minipage}[b]{.02\linewidth} ~ \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=1\textwidth]{Dissipation_53eta} \end{minipage} \caption{\footnotesize Probability distribution functions of the SGS dissipation of scalar variance, for the exact values from filtered DNS, the local eddy-diffusivity model, and the fractional-order SGS model at filter widths $\Delta/\eta=41, \, 53$.}\label{fig: Diss_PDFs} \end{figure} \section{Conclusions and Remarks}\label{sec: Conclusion} We developed a new data-driven nonlocal/fractional SGS model for the LES of passive scalars transported in homogeneous isotropic turbulent flow. The main focus of our work was on obtaining an SGS model that is structurally designed based on the nonlocal nature of the SGS scalar flux. We first presented a thorough statistical interpretation of nonlocality in the SGS dynamics using the single- and two-point statistics of the SGS scalar dissipation.
Using a rich dataset of high-fidelity data for the SGS flux obtained from direct filtering of DNS results, we illustrated the statistical nonlocality embedded in the SGS dynamics and showed that it amplifies as the filter width increases. Moreover, we showed that the conventional means of SGS modeling originate from a local statistical representation of the SGS dynamics and are intrinsically incapable of predicting the statistical nonlocality. As a robust starting point for our mathematical modeling, we adopted the Boltzmann--BGK kinetics as the microscopic transport framework for passive scalars in homogeneous turbulence and considered the closure problem manifested in filtering the transport equations. By revisiting the kinetic-level strategy for LES modeling, taking into account the consistency of the model for the filtered equilibrium distribution with its macroscopic representation at the continuum level, we proposed to proceed with closure modeling using an $\alpha$-stable L\'evy distribution to address the nonlocal and non-Gaussian behavior of the closure at the kinetic level. In order to derive a macroscopic representation of this model to employ in the filtered AD equation, we used continuum averaging and obtained the filtered and residual (modeled) passive scalar flux components, which together yield the filtered AD equation. Throughout this procedure, the up-scaled model for the divergence of the residual flux takes the form of a fractional Laplacian acting on the filtered scalar concentration with a model-specific proportionality coefficient. Next, we calibrated the fractional-order model in two separate data-driven stages. First, we targeted the identification of the optimal fractional order using two-point statistics of the normalized SGS dissipation function obtained from the DNS and minimizing the mismatch function with its counterpart in the fractional-order SGS model.
This procedure returned the optimal fractional order that minimizes the mismatch between the two-point correlations of the modeled and true SGS scalar flux. Afterwards, following a sparse regression strategy over the spatio-temporal data for the SGS scalar flux in a statistically stationary turbulent scalar field, we obtained the proportionality coefficient of the model. Moreover, we showed the consistency of the derived model in terms of the relationship between the obtained proportionality coefficient and the filter width. Finally, in an \textit{a priori} test, we showed that the identified model is capable of capturing the PDF tail associated with the forward scattering of the filtered scalar variance and illustrated that our model has the capability to partially reproduce the backward scattering phenomenon. \section*{Acknowledgement} This work was financially supported by the MURI/ARO grant (W911NF-15-1-0562), the ARO Young Investigator Program (YIP) award (W911NF-19-1-0444), and partially by the National Science Foundation award (DMS-1923201). The high-performance computing resources and services were provided by the Institute for Cyber-Enabled Research (ICER) at Michigan State University.
\section{Introduction} \label{thintro} Shot noise of nonequilibrium quantized charge current fluctuations carries rich information beyond the average current \cite{Bla001,Imr02,Bee0337,Naz03}. The study of current noise in transport through mesoscopic devices has become a field of intensive theoretical and experimental research. The noise spectrum is the Fourier transform of the two--time current--current correlation function. The zero-frequency noise describes the steady--state fluctuations of the effective carrier charge, which can be either a fraction \cite{Pic97162,Rez99238,Bid09236802} or an integer \cite{Koz003398,Lef03067002}. The noise spectrum in the full frequency domain contains both static and dynamic information. It is a powerful probe of the energetics, interactions and dynamics of strongly correlated systems \cite{Ent07193308,Li05066803,Bar06017405,Gab08026601,Wab09016802,Liu131866, Jin13025044,Jin11053704,Eng04136602,Rot09075307,Yan14115411}. Thanks to the recent advancement in on-chip detection techniques, high-precision measurement of nonequilibrium current fluctuations in a Kondo quantum dot (QD) is now available at finite frequency \cite{Bas10166801,Bas12046802,Del18041412}. In the finite-frequency noise spectrum, the Kondo feature is predicted to show a logarithmic singularity at $\omega =\pm eV$ \cite{Moc11201303,Mul13245115}. One can also observe the Kondo peaks in the derivative of the noise with respect to the bias voltage $V$ \cite{Bas12046802,Del18041412}. In particular, the emissive spectrum, which is largely uncontaminated, has been studied experimentally \cite{Del18041412} and theoretically \cite{Cre18107702}, for QDs asymmetrically coupled to reservoirs. In this work, we explore the nonequilibrium Kondo mechanism in the noise spectrum of the current tunneling through an Anderson impurity quantum dot.
It is well-known that the equilibrium Kondo effect gives rise to a resonance peak in the impurity density of states (DOS), $A(\omega)$, which splits into two peaks under an applied bias voltage. The observed Kondo resonance at the Fermi level is due to the formation of a singlet on the QD, screened by itinerant electrons from the reservoirs. Away from the Kondo regime, the DOS also contains the two Hubbard resonance peaks at the single-occupation and double-occupation transport resonances. That is, the DOS $A(\omega)$ reflects the structural information of the impurity system. On the other hand, the nonequilibrium noise spectrum, $S(\w)$, the Fourier transform of the current--current correlation function, involves not only the structure but also the transport dynamics. We will systematically investigate the nonequilibrium Kondo characteristics in both $S(\omega)$ and $\d S(\omega)/\d\omega$. The underlying mechanisms are identified, against the possible competing processes. Some details are as follows. (\emph{i}) We illustrate the Kondo characteristic in both $S(\omega)$ and $\d S(\omega)/\d\omega$, with a close comparison between the equilibrium and nonequilibrium cases. The observed Kondo characteristic is related only to the Fermi energy difference between the two electrodes. Namely, the applied bias voltage splits the Kondo characteristics, from the single inflection point in $S(\omega)$ and the peak of $\d S(\omega)/\d\w$ at $\omega=0$, into two asymmetric upturns and peaks, respectively, at around $\omega=\pm|\mu_{\rm L}-\mu_{\rm R}|=\pm eV$; (\emph{ii}) By comparing the non-Kondo and Kondo regimes, we identify the nonequilibrium Kondo features in the current noise spectrum, appearing in the region of $\w\in [-eV, eV]$. This is a type of Kondo-Fano interference, engaging both Kondo characteristics at $-eV$ and $+eV$.
In contrast, the non-Kondo cotunneling process is of anti-Stokes nature, occurring at $-eV$ only, which rules out the interference in $\w\in [-eV, eV]$; (\emph{iii}) We establish a bridge between the nonequilibrium Kondo noise and the transient current. The observed Kondo profile around $\omega=\pm eV$ reflects the Rabi interference of the transport current dynamics and contains the information of the Kondo oscillation frequency $|eV|$; (\emph{iv}) We illustrate that the emission-noise Kondo feature ($\w<0$) is often cleaner than the absorption one ($\w>0$), as the latter would be contaminated by the sequential tunneling signals. The present study is based on the well-established dissipaton equation of motion (DEOM) approach \cite{Jin15234108,Yan14054105,Yan16110306,Zha18780,Wan20041102}. This is a nonperturbative and accurate method, having been extensively explored in the study of quantum impurity problems \cite{Jin15234108,Zha16237,Zha16204109,Che20297811,Wan20164113,Gon20154111,Jin20235144}. These include the recent noise spectrum evaluations, with the identification of Coulomb blockade assisted Rabi interference in a double-dot Aharonov-Bohm interferometer \cite{Jin20235144}. The remainder of this paper is organized as follows. In \Sec{thmet}, we give a brief introduction to the DEOM theory and demonstrate how it evaluates the two-time current-current correlation function. In \Sec{thnum}, we present and elaborate on the numerical results of the circuit current noise spectrum and the related transient circuit current. Finally, we conclude this work in \Sec{thsum}. \section{Methodology} \label{thmet} In this section, we present a brief account of the DEOM theory and the current-current correlation function. For details see References \cite{Yan14054105,Jin15234108,Yan16110306}.
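As a minimal numerical illustration of the spectrum--correlation relation, $S(\omega)=\int dt\, e^{i\omega t}\,\langle \delta\hat I(t)\delta\hat I(0)\rangle$, the sketch below Fourier-transforms a toy exponentially decaying correlation function, for which the spectrum is a Lorentzian $2\gamma/(\gamma^2+\omega^2)$. This classical, symmetric toy is only an illustration of the transform itself; the DEOM evaluation of the actual quantum correlation function is far richer and generally asymmetric in $\omega$.

```python
import numpy as np

def noise_spectrum(C, t, omega):
    """Discretized S(w) = integral dt e^{i w t} C(t) on a uniform time grid t."""
    dt = t[1] - t[0]
    # Riemann sum; accurate when C has decayed at both ends of the grid.
    return np.real(np.array([np.sum(np.exp(1j * w * t) * C) * dt for w in omega]))
```

For $C(t)=e^{-\gamma|t|}$ on a sufficiently long grid, the numerical transform reproduces the Lorentzian $2\gamma/(\gamma^{2}+\omega^{2})$ to quadrature accuracy.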
Consider an electron transport setup in which an impurity system $H_{\tS}$ is sandwiched between electrode baths $H_{\B}$, under an electric bias potential ($eV=\mu_{{\rm L}}-\mu_{{\rm R}}$) applied across the leads, $\alpha = {\rm L}$ and R. The total Hamiltonian reads $H_{\rm tot}=H_{\tS}+H_{\B}+H_{\SB}$. The system Hamiltonian $H_{\tS}$ is arbitrary, including electron-electron interaction, given in terms of local electron creation $\hat a^{\dg}_{u}$ (annihilation $\hat a_{u}$) operators. For instance, in the present study we consider a QD represented by the spin-$1/2$ single Anderson impurity model (SAIM) described by \be H_{\tS} =\sum_{u=\up,\down} \varepsilon_{u} \hat a^\dg_u \hat a_u +\frac{U}{2}\sum_u\hat n_{u}\hat n_{\bar u}, \ee where the single level in the QD is characterized by a spin-degenerate energy level $\varepsilon_{\up}= \varepsilon_{\down}=\varepsilon$ and $\hat n_u=\hat a^\dg_{u} \hat a_{u}$, with $\bar u$ denoting the spin opposite to $u$. The electrode bath is modeled as noninteracting electron reservoirs, $ H_{\B}= \sum_{\alpha k}(\varepsilon_{\alpha k} +\mu_{\alpha}) c^{\dg}_{\alpha k} c_{\alpha k}$. Its coupling to the system assumes the standard tunneling form of \be\label{Hsb1} H_{\SB}\!=\!\sum_{\alpha u }\left(\hat a^{+}_{u} \hat F^-_{\alpha u} + \hat F^+_{\alpha u} \hat a^{-}_{u} \right)\!=\!\sum_{\alpha u \sigma}\hat a^{\bar\sigma}_{u} \wti F^{\sigma}_{\alpha u}, \ee with $ \hat F^-_{\alpha u} =\sum_k t_{\alpha u k} c_{\alpha k}=(\hat F^+_{\alpha u})^\dg$. Note that $\hat F^{\sigma}_{\alpha u}\hat a^{\bar\sigma}_{u} =-\hat a^{\bar\sigma}_{u}\hat F^{\sigma}_{\alpha u}$. For convenience of description, we denote $\wti F^{\sigma}_{\alpha u} \equiv \bar\sigma \hat F^{\sigma}_{\alpha u}$, with $\sigma =+,-$ ($\bar\sigma$ is the opposite sign) identifying the creation and annihilation operators.
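For concreteness, $H_{\tS}$ above acts on a four-dimensional Fock space, and the interaction term satisfies $\frac{U}{2}\sum_u\hat n_{u}\hat n_{\bar u}=U\,\hat n_{\uparrow}\hat n_{\downarrow}$. A minimal matrix sketch follows; the basis ordering and the helper name are choices of this illustration.

```python
import numpy as np

def anderson_hamiltonian(eps, U):
    """SAIM Hamiltonian on the occupation basis |00>, |10>, |01>, |11>,
    where |n_up, n_dn> and |n_up, n_dn> = (a_up^+)^n_up (a_dn^+)^n_dn |0>."""
    a_up = np.zeros((4, 4))      # annihilation, spin up
    a_dn = np.zeros((4, 4))      # annihilation, spin down
    a_up[0, 1] = 1.0             # |10> -> |00>
    a_up[2, 3] = 1.0             # |11> -> |01>
    a_dn[0, 2] = 1.0             # |01> -> |00>
    a_dn[1, 3] = -1.0            # |11> -> -|10>: fermionic sign from ordering
    n_up = a_up.T @ a_up         # number operators
    n_dn = a_dn.T @ a_dn
    return eps * (n_up + n_dn) + U * n_up @ n_dn
```

The result is diagonal in the occupation basis, with eigenvalues $\{0,\varepsilon,\varepsilon,2\varepsilon+U\}$, i.e., the empty, the two singly occupied, and the doubly occupied (Hubbard-shifted) states.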
For the dissipaton description of bath interaction \cite{Jin15234108,Yan14054105}, we consider the bare-bath correlation function in the exponential decomposition form \cite{Jin08234703,Zhe121129, Zhe13086601,Li12266403,Hou15104112,Ye16608}, \be\label{FF_corr} \big\la \hat F^{\sigma}_{\alpha u}(t) \hat F^{\bar\sigma}_{\alpha v}(0)\big\ra_{\B} = \sum_{m=1}^{M} \eta^{\sigma}_{\alpha uv m}e^{-\gamma^{\sigma}_{\alpha m}t}. \ee This is realized via a sum-over-poles decomposition for the Fourier integrand of the relation $\la \hat F^{\sigma}_{\alpha u}(t)\hat F^{\bar\sigma}_{\alpha v}(0)\ra_{\B} =\frac{1}{\pi}\int_{-\infty}^{\infty}\!\!d\omega\,e^{\sigma i(\omega+\mu_{\alpha}\!)t} \frac{J^{\sigma}_{\alpha uv}(\omega)} {1+e^{\sigma\beta\omega}}$. Here, $J^+_{\alpha vu}(\w) = J^-_{\alpha uv}(\w) = J_{\alpha uv}(\w)$; the reservoir hybridization spectral function is given by $J_{\alpha u v}(\omega) \equiv\pi\sum_k t_{\alpha u k}t^\ast_{\alpha v k}\delta(\omega-\varepsilon_{\alpha k}) =\frac{\Gamma_{\alpha u v}W^2}{\omega^2+W^2}$. The exponents $\{\gamma^{\sigma}_{\alpha m}\}$ in \Eq{FF_corr} arise from both the Fermi function and the hybridization function. For an optimal dissipaton description, we adopt the Pad\'{e} spectrum decomposition for the Fermi function \cite{Hu10101106,Hu11244106}. The DEOM theory starts with the statistical quasi--particle (dissipaton) decomposition on the hybridizing bath operators $\{\hat F^{\sigma}_{\alpha u}\}$. It reproduces the bath correlation functions, \Eq{FF_corr}, and the time--reversal counterparts, $\big\la \hat F^{\bar\sigma}_{\alpha v}(0) \hat F^{\sigma}_{\alpha u}(t)\big\ra_{\B} =\La\hat F^{\bar\sigma}_{\alpha u}(t) \hat F^{\sigma}_{\alpha v}(0)\Ra^{\ast}_{\B}. $ To that end, we set \cite{Yan14054105,Jin15234108,Yan16110306} \be\label{wtiF_f} \wti F^{\sigma}_{\alpha u} \equiv -\sigma \hat F^{\sigma}_{\alpha u} \equiv \sum_{m=1}^{M} \hat f^{\sigma}_{\alpha u m} .
\ee The involved dissipatons $\{\hat f^{\sigma}_{\alpha u m}\}$ satisfy \be\label{ff_corr} \begin{split} \big\la\hat f^{\sigma}_{\alpha u m}(t)\hat f^{\sigma'}_{\alpha' v m'}(0)\big\ra_{\B} =\big\la\hat f^{\sigma}_{\alpha u m}\hat f^{\sigma'}_{\alpha' v m'}\big\ra^{\greater}_{\B}\, e^{-\gamma^{\sigma}_{\alpha m} t}, \\ \big\la\hat f^{\sigma'}_{\alpha' v m'}(0)\hat f^{\sigma}_{\alpha u m}(t)\big\ra_{\B} =\big\la\hat f^{\sigma'}_{\alpha' v m'}\hat f^{\sigma}_{\alpha u m}\big\ra^{\lesser}_{\B}\, e^{-\gamma^{\sigma}_{\alpha m} t}, \end{split} \ee where $\gamma^{\bar\sigma\,\ast}_{\alpha m}=\gamma^{\sigma}_{\alpha m}$ and \be\label{ff_corr0} \begin{split} \big\la\hat f^{\sigma}_{\alpha u m}\hat f^{\sigma'}_{\alpha' v m'}\big\ra^{\greater}_{\B} =-\delta_{\sigma\bar\sigma'}\delta_{\alpha\alpha'}\delta_{mm'}\, \eta^{\sigma}_{\alpha u v m}, \\ \big\la\hat f^{\sigma'}_{\alpha' v m'}\hat f^{\sigma}_{\alpha u m}\big\ra^{\lesser}_{\B} =-\delta_{\sigma\bar\sigma'}\delta_{\alpha\alpha'}\delta_{mm'}\, \eta^{\bar\sigma\,\ast}_{\alpha u v m}. \end{split} \ee For bookkeeping, we adopt the abbreviations, $j\equiv(\sigma\alpha u m)$ and $\bar j\equiv(\bar\sigma\alpha u m)$, for the collective indexes in fermionic dissipatons, such that $\hat f_j\equiv \hat f^{\sigma}_{\alpha u m}$ and so on. Dynamical variables in DEOM are the reduced dissipaton density operators (DDOs), \be\label{DDO_def} \rho^{(n)}_{\bf j}(t)\equiv \rho^{(n)}_{j_1\cdots j_n}(t)\equiv {\rm tr}_{\B}\Big[\big(\hat f_{j_n}\cdots\hat f_{j_1}\big)^{\circ} \rho_{\rm tot}(t)\Big]\, . \ee The product of dissipatons inside the circled parentheses, $(\,\cdot\cdot\,)^{\circ}$, is \emph{irreducible}. A swap of any two irreducible fermionic dissipatons causes a minus sign, such that $\big(\hat f_{j}\hat f_{j'}\big)^{\circ}=-\big(\hat f_{j'}\hat f_{j}\big)^{\circ}$.
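The sign bookkeeping implied by $\big(\hat f_{j}\hat f_{j'}\big)^{\circ}=-\big(\hat f_{j'}\hat f_{j}\big)^{\circ}$ is conveniently automated by canonically ordering the collective indexes and tracking the permutation parity. The small helper below is hypothetical, not part of any DEOM code this paper describes.

```python
def sort_with_parity(indexes):
    """Sort a tuple of comparable dissipaton indexes by adjacent swaps;
    each swap contributes a factor (-1), as for irreducible fermionic
    dissipaton products."""
    js, sign = list(indexes), 1
    for i in range(len(js)):
        for k in range(len(js) - 1 - i):   # bubble sort, counting swaps
            if js[k] > js[k + 1]:
                js[k], js[k + 1] = js[k + 1], js[k]
                sign = -sign
    return tuple(js), sign
```

For example, moving one index through $n$ others costs $(-1)^{n}$, which is exactly the relation $\rho^{(n+1)}_{j{\bf j}}=(-)^{n}\rho^{(n+1)}_{{\bf j}j}$ used below.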
While $\rho_{\tS}(t) \equiv \rho^{(0)}_{\bf 0}(t)$ is the reduced system density operator, $\rho^{(n)}_{\bf j}(t)\equiv \rho^{(n)}_{j_1\cdots j_n}(t)$, as specified in \Eq{DDO_def}, engages an \emph{ordered} set of $n$ \emph{irreducible} dissipatons. Evidently, $\rho^{(n+1)}_{j{\bf j} }\equiv \rho^{(n+1)}_{jj_1\cdots j_n} =(-)^n\rho^{(n+1)}_{{\bf j}j}$. Denote also $\rho^{(n-1)}_{{\bf j}^-_r} \equiv \rho^{(n-1)}_{j_1\cdots j_{r-1}j_{r+1}\cdots j_n}$. The irreducible notation enables the generalized Wick's theorem expression \cite{Yan14054105,Jin15234108,Yan16110306}, \begin{align}\label{Wick} &\quad\, \text{tr}_{\B}\left[\big(\hat f_{j_n}\!\cdots\!\hat f_{j_1}\big)^{\circ} \hat f_j\rho_{\rm tot}(t)\right] \nl &=\rho^{(n+1)}_{j{\bf j}} + \sum_{r=1}^n (-)^{r-1} \La\hat f_{j_r}\hat f_j\Ra^{\greater}_{\B} \rho^{(n-1)}_{{\bf j}^{-}_r}. \end{align} Evidently, this can be used in evaluating the effect of the $H_{\SB}$--action on the specified DDO, $\rho^{(n)}_{{\bf j}}(t)$. The dissipaton algebra includes also the generalized diffusion equation that treats the effect of the $H_{\B}$--action. Now, with the Liouville--von Neumann equation, $\dot{\rho}_{\rm tot}(t)=-i[H_{\tS}+H_{\B}+H_{\SB},{\rho} _{\rm tot}(t)]$, for the total density operator in \Eq{DDO_def}, the aforementioned dissipaton algebra readily leads to \cite{Yan14054105,Jin15234108,Yan16110306} \begin{align}\label{DEOM} \dot\rho^{(n)}_{\bf j}(t)&=-\bigg(i{\cal L}_{\tS} +\sum_{r=1}^n \gamma_{j_r}\bigg)\rho^{(n)}_{\bf j}(t) -i\sum_{j} {\cal A}_{\bar j}\rho^{(n+1)}_{{\bf j}j}(t) \nl &\quad -i \sum_{r=1}^n (-)^{n-r}{\cal C}_{j_r}\rho^{(n-1)}_{{\bf j}^-_r}(t).
\end{align} While ${\cal L}_{\tS}\,(\cdot)=[H_{\tS},\,(\cdot)]$, the Grassmannian superoperators, ${\cal A}_{\bar j}\equiv {\cal A}^{\bar\sigma}_{\alpha u\kappa} = {\cal A}^{\bar\sigma}_{u}$ and ${\cal C}_{j}\equiv {\cal C}^{\sigma}_{\alpha u\kappa}$, are defined via \be\label{calAC} \begin{split} {\cal A}^{\sigma}_{u} \Opm &\equiv \hat a^{\sigma}_{u}\Opm \pm \Opm \hat a^{\sigma}_{u} \equiv \big[\hat a^{\sigma}_{u},\Opm\big]_\pm \, , \\ {\cal C}^{\sigma}_{\alpha u\kappa} \Opm &\equiv \sum_{v} \big(\eta^{\sigma}_{\alpha uv\kappa}\hat a^{\sigma}_{v}\Opm \mp \eta^{\bar \sigma\,{\ast}}_{\alpha uv\kappa}\Opm \hat a^{\sigma}_{v}\big). \end{split} \ee Here, $\Opm$ is an arbitrary operator, with even ($+$) or odd ($-$) fermionic parity, such as $\rho^{(2m)}$ or $\rho^{(2m+1)}$, respectively. Throughout this work, we adopt units of $e=\hbar=1$ for the electron charge and the Planck constant. The DEOM theory, \Eqs{wtiF_f}--(\ref{DEOM}), describes both the reduced system and the hybrid bath dynamics. The underlying DEOM--space quantum mechanics \cite{Yan16110306} is a mathematical isomorphism of the conventional Hilbert/Liouville--space formulations. It supports accurate evaluations of the expectation values and correlation functions of $\hat A=\hat Q_{\tS}\hat F_{\B}$--type operators, including the cases of $\hat A=\hat Q_{\tS}$ and $\hat A=\hat F_{\B}$. More specifically, the system operator $\hat Q_{\tS}$ is arbitrary, such as a combination of the creation and annihilation operators $\hat a^{\sigma}_u$. The bath operator belongs to the hybridized set, i.e., $\hat F_{\B}\in \{\hat F^{\pm}_{\alpha u}\}$. In particular, the $\hat A=\hat Q_{\tS}\hat F_{\B}$ type includes the lead--specified transport current operator, \be\label{hatI_alpha} \hat I_{\alpha} = -\frac{\partial\hat N_{\alpha}}{\partial t} =-i\sum_u \big(\hat a^{+}_u \hat F^-_{\alpha u} -\hat F^{+}_{\alpha u}\hat a^-_u \big).
\ee It is noticed that, in general, correlation functions can be expressed in the form of augmented expectation values; see \Eq{corr_hilbert} below. Let us start with time--dependent expectation values. For a system dynamical operator, the expectation value is directly given by ${\rm Tr}[\hat Q_{\tS}\rho_{\rm tot}(t)] ={\rm tr}_{\tS}[\hat Q_{\tS}\rho_{\tS}(t)]$, with $\rho_{\tS}(t)\equiv {\rm tr}_{\B} \rho_{\rm tot}(t)$ and ${\rm tr}_{\tS}$ being the trace over the system subspace. The average transient transport current, $I_{\alpha}(t)\equiv {\rm Tr}[\hat I_{\alpha}\rho_{\rm tot}(t)]$, for \Eq{hatI_alpha}, is evaluated by using the generalized Wick's theorem, \Eq{Wick}. We obtain \be\label{curr} I_{\alpha}(t) = {\rm Tr}\big[\hat I_{\alpha}\rho_{\rm tot}(t)\big] = -i\! \sum_{j_{\alpha}\in j} {\rm tr}_{\tS}\!\big[\ti a_{\bar j}\rho^{(1)}_{j}(t)\big], \ee where $\ti a_{\bar j}\equiv \ti a^{\bar \sigma}_{\alpha u k} =\bar \sigma\hat a^{\bar \sigma}_{u}$ and $j_{\alpha}\equiv \{ \sigma u k\}\in j\equiv\{\sigma\alpha u k\}$. On the other hand, the steady--state correlation functions can generally be expressed in the form of expectation values as \be\label{corr_hilbert} \la \hat A(t) \hat B(0)\ra={\rm Tr}\big[\hat A\rho_{\rm tot}(t;\hat B )\big]. \ee Here, $\hat O(t)\equiv e^{iH_{\rm tot}t}\hat O e^{-iH_{\rm tot}t} =\hat O e^{-i{\cal L}_{\rm tot}t}$ is in the Heisenberg picture, whereas $\rho_{\rm tot}(t;\hat B )\equiv e^{-i{\cal L}_{\rm tot}t}(\hat B\rho_{\rm tot}^{\rm st})$ is in the Schr\"{o}dinger picture. In the DEOM--space evaluation, the steady--state total system--and--bath composite $\rho^{\rm st}_{\rm tot}$ maps to the steady--state DDOs, $\{\rho^{(n);{\rm st}}_{\bf j}\}$, via \Eq{DDO_def}. These are the steady--state solutions to \Eq{DEOM}, which can readily be evaluated via, for instance, the self-consistent iteration approach \cite{Zha17044105}.
Next, $\rho_{\rm tot}(t=0;\hat B ) =\hat B\rho_{\rm tot}^{\rm st}$ maps to $\{\rho^{(n)}_{\bf j}(t=0;\hat B )\}$, which can be identified by using \Eqs{DDO_def} and (\ref{Wick}). We then evaluate $\rho_{\rm tot}(t;\hat B)\rightarrow \{\rho^{(n)}_{\bf j}(t;\hat B )\}$ via \Eq{DEOM}, and the correlation function via \Eq{corr_hilbert}. The above mapping algorithm applies whenever $\hat A$ and $\hat B$ belong to the aforementioned $(\hat Q_{\tS}\hat F_{\B})$--type dynamical operators. These include the DOS of the impurity, \be A_u(\omega)=\frac{1}{2\pi}\!\int^\infty_{-\infty} \!\d t\, e^{i\omega t}\la \{\hat a_u(t),\hat a^\dg_u(0)\}\ra, \ee and the nonsymmetrized current noise spectrum, \be\label{Sw_It} S_{\alpha\alpha'}(\omega)= \int_{-\infty}^{\infty}\!\!\d t\, e^{i\omega t} \La \delta{\hat I}_\alpha(t)\delta{\hat I}_{\alpha'}(0)\Ra. \ee Here, $\delta{\hat I}_\alpha\equiv{\hat I}_\alpha-I^{\rm st}_{\alpha}$, with $I^{\rm st}_{\alpha}\equiv \la\hat I_{\alpha}\ra$ being the stationary current. In contrast to the symmetrized one, $S^{{\rm sym}}_{\alpha\alpha'}(\omega) =S_{\alpha\alpha'}(\omega) +S_{\alpha'\alpha}(-\omega)$, the asymmetric $S_{\alpha\alpha'}(\omega)$ is directly related to experiments, with $\omega>0$ and $\omega<0$ corresponding to energy absorption and emission processes, respectively \cite{Eng04136602,Rot09075307,Yan14115411, Bas10166801,Bas12046802,Del18041412,Moc11201303,Mul13245115,Cre18107702,Jin15234108}. Further details of the DEOM--space quantum mechanics can be found in Ref.\,\onlinecite{Yan16110306}. Apparently, the DEOM--space evaluations of expectation values and correlation functions cover also that of the net circuit current, \be\label{It} \hat I = a\hat I_{\rm L}-b\hat I_{\rm R}. \ee Its noise spectrum is $S(\omega)= \int_{-\infty}^{\infty} \!dt\, e^{i\omega t} \La \delta{\hat I}(t)\delta{\hat I}(0)\Ra$, with $\delta{\hat I}\equiv{\hat I}-I^{\rm st}$.
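As a minimal numerical sketch of such a nonsymmetrized spectrum (with a hypothetical damped-oscillation model for the stationary current correlation, not a DEOM result), stationarity of a Hermitian $\delta\hat I$ gives $C(-t)=C^{\ast}(t)$, which reduces the two-sided Fourier integral to the half axis:

```python
import numpy as np

# Hypothetical model correlation C(t) = <dI(t) dI(0)> for t >= 0,
# a damped complex oscillation standing in for the steady-state result.
gamma_c, Omega_c = 0.2, 1.0
t = np.linspace(0.0, 200.0, 200001)
dt = t[1] - t[0]
C_pos = np.exp(-(gamma_c + 1j * Omega_c) * t)

def S(w):
    # Stationarity C(-t) = conj(C(t)) implies
    # S(w) = Int e^{iwt} C(t) dt = 2 Re Int_0^inf e^{iwt} C(t) dt,
    # so the nonsymmetrized spectrum is real (but not even in w).
    return 2.0 * np.real(np.sum(np.exp(1j * w * t) * C_pos) * dt)

S_abs, S_emi = S(Omega_c), S(-Omega_c)   # absorption vs emission side
S_sym = S_abs + S_emi                    # symmetrized spectrum at Omega_c
print(S_abs, S_emi, S_sym)
```

The strong asymmetry between $S(\Omega)$ and $S(-\Omega)$ here mirrors the absorption/emission asymmetry discussed above.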
The junction capacitance parameters are $a=\Gamma_{\rm R}/\Gamma$ and $b=\Gamma_{\rm L}/\Gamma$, with $\Gamma=\Gamma_{\rm L}+\Gamma_{\rm R}$ being the total reservoir coupling strength \cite{Bla001,Eng04136602,Del18041412}. It is worth noting that the DEOM theory is a quasiparticle extension to the well--established hierarchical equations of motion formalism \cite{Jin08234703}. The latter consists only of \Eq{DEOM}, which has been demonstrated to be an efficient and universal method for strongly correlated quantum impurity systems \cite{Zhe121129,Li12266403,Zhe13086601,Hou15104112,Ye16608}. The reduced system density operator is just $\rho_{\tS}(t) \equiv {\rm tr}_{\B}\rho_{\rm tot}(t)= \rho^{(0)}(t)$. All $\big\{\rho^{(n\geq1)}_{\bf j}\big\}$ are also physically well--defined DDOs, \Eq{DDO_def}, for the entangled system--bath dynamics. DEOM is naturally a nonperturbative many-particle theory and is formally exact when $n_{\rm max}=2N_{\sigma}N_u$ \cite{Han18234108}, with $N_{u}$ being the number of spin--orbital states, and $N_{\sigma}=2$ being the two signs of $\sigma=+$ and $-$. As an efficient and universal numerical method \cite{Jin08234703,Zhe121129, Zhe13086601,Li12266403,Hou15104112,Ye16608}, DEOM converges rapidly and uniformly with increasing truncation tier level, $L=n_{\rm trun}$, implemented by setting all $\rho^{(n>L)}_{\bf j}=0$ at a sufficiently large $L$, which is often much smaller than the maximum tier, $n_{\rm max}$. The minimal truncation tier $L$ required to achieve convergence depends closely on the configurations of both the system and the bath, and especially on the bath temperature. In practice, the convergence with respect to $L$ is tested case by case. For the parameters exemplified in the present study of the Kondo problems, fully converged evaluations would probably require $L>5$, which is computationally expensive. The numerical calculations here are thus carried out up to the $L=4$ tier level.
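The cost of raising the truncation tier can be sketched by a simple count of the DDOs kept. Since the irreducible product is antisymmetric, each dissipaton index $j=(\sigma\alpha u m)$ appears at most once, so tier $n$ holds $\binom{K}{n}$ DDOs for $K$ distinct indices. The index sizes below are hypothetical, chosen only to illustrate the growth:

```python
from math import comb

# Combinatorial sketch of the hierarchy size under truncation.
# Assumed hypothetical sizes: 2 leads, 2 signs of sigma, 2 spin-orbitals,
# 8 Pade poles.
N_alpha, N_sigma, N_u, M = 2, 2, 2, 8
K = N_alpha * N_sigma * N_u * M       # total number of dissipaton indices

def hierarchy_size(L):
    """Total number of DDOs kept when all rho^{(n>L)} are set to zero."""
    return sum(comb(K, n) for n in range(L + 1))

for L in (2, 4, 6):
    print(L, hierarchy_size(L))
# The steep growth with L is why converged Kondo calculations
# beyond L = 4 become expensive.
```

This is only bookkeeping for the truncated hierarchy; the formally exact closure at $n_{\rm max}=2N_{\sigma}N_u$ quoted above is a separate, stronger statement \cite{Han18234108}.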
The convergence calculations for $L>4$ would correct the quantities of the transient current and its noise spectrum, but would not affect their main characteristics that we will discuss in the present work. \section{Current noise spectrum and transient current} \label{thnum} \subsection{Equilibrium Kondo regime} For the illustrations below, we set the parameters in the Kondo regime (in units of meV): $\varepsilon=-0.6$, $U=1.6$, $\Gamma=0.2$ and $k_{\rm B} T=0.005$, for the impurity dot energy level, Coulomb interaction, reservoir coupling strength and temperature, respectively. We adopt a wide bandwidth, $W=50\,\Gamma$, for both electrodes. All parameters are within reach of current experiments \cite{Hol01256802,Del18041412}. We also set $\Gamma_{\rm L}=\Gamma_{\rm R}=\Gamma/2$, so that $a=b=1/2$ for \Eq{It}. We focus on the symmetrical bias voltage $\mu_{\rm L}=-\mu_{\rm R}=V/2$ throughout the paper, unless otherwise stated. It is worth noting that for the SAIM QD under study, both the single--occupation ($\varepsilon$) and the double--occupation ($\varepsilon+U$) transport channels are relevant. We focus on the Kondo tunneling regime where the former lies below the Fermi energy ($\varepsilon<0$) and the latter is above ($\varepsilon+U>0$). These two are called the Hubbard resonances. Moreover, we set a low temperature for the formation of the Kondo singlet state(s), located at the Fermi level(s), in either the equilibrium ($V\equiv\mu_{\rm L}-\mu_{\rm R}=0$) or the nonequilibrium ($V\neq 0$) scenario. We focus on the total circuit noise spectrum, $S(\omega)$, which would be readily accessible in experiments \cite{Bas10166801,Bas12046802,Del18041412}. As far as the Kondo characteristics are concerned, we have verified that the individual noise spectra, $S_{\rm L\rm L}(\omega)$, $S_{\rm R\rm R}(\omega)$ and ${\rm Re}[S_{\rm L\rm R}(\omega)]$ (not shown below), are similar to $S(\omega)$ for symmetrical coupling \cite{Jin15234108,Cre18107702}.
To elaborate on the underlying picture, we present also other closely related properties. These include $\d S(\omega)/\d\omega$, the dot DOS $A(\omega)=A_\up(\omega)=A_\down(\omega)$, and the transient current spectrum ${\cal J}(\w)$ that will be specified later. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig1.eps} \caption{ The Kondo characteristics on (a) circuit noise spectrum $S(\omega)$ (in $10^{-8}$ eA) and (b) ${\rm d}S(\omega)/{\rm d}\omega$ (in $10^{-5}$A/V) at equilibrium ($V=0$ with $\mu^{\rm eq}_\alpha=0$). The inset in (a) is the dot DOS $A(\omega)$. } \label{fig1} \end{figure} Consider first the equilibrium ($V=0$) case, i.e., \Fig{fig1}, with the evaluated $S(\w)$ and $\d S(\w)/\d\w$ being depicted in the panels (a) and (b), respectively. The DOS $A(\w)$ is given in the inset of (a) for comparison. The observations and elaborations are as follows. Let us start with the Kondo characteristics here. Evidently, in $S(\w)$, $\d S(\w)/\d\w$ and $A(\w)$, the equilibrium Kondo characteristics all appear at $\w=0$. In particular, the noise spectrum $S(\w)$ exhibits an \emph{inflection} point (at $\w=0$), which turns out to be a remarkable \emph{Fano--type resonant peak} in $\d S(\omega)/\d\omega$. In contrast, the equilibrium Kondo resonance peak in the DOS, $A(\w)$, is rather symmetric. It is well--known that a perfectly symmetric $A(\w)$ goes with particle--hole symmetry \cite{Mei932601}. On the other hand, the resulting $S(\w)$ and $\d S(\w)/\d\w$ remain asymmetric, as above. \subsection{Hubbard resonances and absorption mechanism versus anti-Stokes co-tunneling resonance} We now turn to the two Hubbard resonances, the single--occupation and the double--occupation transport resonances with energies of $\varepsilon<0$ and $\varepsilon+U>0$, respectively.
While they are rather directly reflected in the DOS $A(\w)$, these two states are manifested in the noise spectrum, $S(\w)$, via the \emph{absorption mechanism}, with the characteristic frequencies at \be\label{nmks} \Delta^{\alpha}_\text{\tiny{S}} = \mu_{\alpha}-E_\text{\tiny{S}},~~~ \Delta^{\alpha}_\text{\tiny{D}}= E_\text{\tiny{D}}-\mu_{\alpha}. \ee Here, $E_\text{\tiny{S}}$ and $E_\text{\tiny{D}}$ are the two Hubbard resonance energies of $A(\omega)$, as shown by the two arrows in the inset of Fig.\,\ref{fig1}(a). Note that $\mu^{\rm eq}_{\alpha}=0$ at equilibrium, resulting in $\Delta^{\text{\tiny{L}}}_\text{\tiny{S}} =\Delta^{\text{\tiny{R}}}_\text{\tiny{S}}=\Delta_\text{\tiny{S}}$ and $\Delta^{\text{\tiny{L}}}_\text{\tiny{D}} =\Delta^{\text{\tiny{R}}}_\text{\tiny{D}}=\Delta_\text{\tiny{D}}$, the dashed arrows in Fig.\,\ref{fig1}. Also note that $E_\text{\tiny{S}}$ and $E_\text{\tiny{D}}$ are not identical to the isolated energies in the dot, but only approximately so, i.e., $E_\text{\tiny{S}}\approx\varepsilon$ and $E_\text{\tiny{D}}\approx\varepsilon+U$, due to the renormalization \cite{Hau08}. \begin{figure} \includegraphics[width=1.04\columnwidth]{fig2.eps} \caption{ The non-Kondo characteristics on (a) circuit noise spectrum $S(\omega)$ (in $10^{-8}$ eA); (b) ${\rm d}S(\omega)/{\rm d}\omega$ (in $10^{-5}$A/V); (c) The real-time dynamics of the transport current $I(t)$ (in pA), with $I(t=0)=0$ being the equilibrium value, before the bias voltage turns on; (d) The sine transform of transient current, ${\cal J}(\omega)=\int^\infty_0\!{\rm d}t\,\sin(\omega t)\delta I(t)$, with $\delta I(t)=I(t)-I^{\rm st}$. The inset in (a) is the DOS $A(\omega)$. The nonequilibrium bias voltage is $V=0.6$ with $\mu_{\rm L}=-\mu_{\rm R}=V/2$. The other parameters are: $\varepsilon=-0.6$, $U=2.6$, $\Gamma_{\rm L}=\Gamma_{\rm R}=\Gamma/2=0.05$, and $k_{\rm B}T=0.02$.
} \label{fig2} \end{figure} To highlight the absorption mechanism, we demonstrate the nonequilibrium ($V\neq 0$) noise spectrum in the non-Kondo regime at an increased temperature. To make the absorptive feature more visible, we also reduce the coupling strength ($\Gamma$) and enhance the Coulomb interaction ($U$); see the caption of Fig.\,\ref{fig2} for the parameters. The equilibrium characteristic at each $\w=\Delta_\text{\tiny{S/D}}$ in \Fig{fig1} now splits into $\Delta^{\rm L}_\text{\tiny{S/D}}$ and $\Delta^{\rm R}_\text{\tiny{S/D}}$ in \Fig{fig2}. The underlying mechanism is rather evident, as the energy absorption involves two sequential transport channels. One goes by the tunneling of the electron in the single--occupied state, with energy $E_\text{\tiny{S}}$, to the $\alpha$-lead by absorbing the energy $\Delta^{\alpha}_\text{\tiny{S}}$. Another channel engages the electron in the $\alpha$-lead, passing through the double--occupation channel of $E_\text{\tiny{D}}$ by absorbing the energy $\Delta^{\alpha}_\text{\tiny{D}}$. The opposite sequential processes, accompanied by energy emission, do not happen. Consequently, the absorption noise spectrum $S(\omega)$ displays rising steps around $\omega=\Delta^{\alpha}_\text{\tiny{S}}$ and $\Delta^{\alpha}_\text{\tiny{D}}$; see \Fig{fig1}(a) and \Fig{fig2}(a). These rising steps in $S(\omega)$ are the sequential non-Markovian quasi-steps \cite{Jin11053704,Eng04136602,Rot09075307,Jin15234108}. They turn into Lorentzian-like peaks in $\d S(\omega)/\d\omega$, as plotted in Fig.\,\ref{fig1}(b) and Fig.\,\ref{fig2}(b). Also observed is the \emph{anti-Stokes cotunneling resonance} at the frequency $\w=\Delta^{\text{\tiny{L}}}_\text{\tiny{D}} -\Delta^{\text{\tiny{R}}}_\text{\tiny{D}}=-eV$. As inferred from \Eq{nmks}, the double--occupation ($E_\text{\tiny{D}}$) transport channel serves as the intermediate for the coherent two--electron processes here.
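The characteristic frequencies above follow from simple arithmetic. A sketch with the Fig.~2 parameters, assuming $E_\text{\tiny{S}}\approx\varepsilon$ and $E_\text{\tiny{D}}\approx\varepsilon+U$ (i.e., neglecting the level renormalization):

```python
# Characteristic absorption frequencies of Eq. (nmks), Fig. 2 parameters,
# approximating the Hubbard resonance energies by the bare dot levels.
eps, U, V = -0.6, 2.6, 0.6             # meV
mu = {"L": +V / 2, "R": -V / 2}        # symmetric bias, mu_L = -mu_R = V/2
E_S, E_D = eps, eps + U                # E_S ~ eps, E_D ~ eps + U (assumed)

delta_S = {a: mu[a] - E_S for a in mu}     # single-occupation channel
delta_D = {a: E_D - mu[a] for a in mu}     # double-occupation channel
anti_stokes = delta_D["L"] - delta_D["R"]  # = mu_R - mu_L = -eV
print(delta_S, delta_D, anti_stokes)
```

This reproduces the split step positions $\Delta^{\rm L}_\text{\tiny{S/D}}\neq\Delta^{\rm R}_\text{\tiny{S/D}}$ at finite bias, and the anti-Stokes frequency $-eV$.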
This anti-Stokes cotunneling mechanism had been thoroughly analysed in our previous work \cite{Jin15234108}, with the lead--specific current noise spectrum $S_{\alpha\alpha'}(\omega)$. The observed anti-Stokes characteristic is particularly dominant in $S_{\rm L\rm R}(\omega)$. This highlights the L-to-R (source-to-drain) nature of the underlying cotunneling. The single--occupation ($E_\text{\tiny{S}}$) channel below the Fermi surfaces would not contribute to the coherent two--electron processes. Moreover, the directionality of the bias voltage suppresses the inverse R-to-L Stokes cotunneling at $\Delta^{\text{\tiny{R}}}_\text{\tiny{D}} -\Delta^{\text{\tiny{L}}}_\text{\tiny{D}}=+eV$. It is well--known that the DOS $A(\omega)$ does not involve any cotunneling resonance. On the other hand, the nonequilibrium Kondo resonance emerges as distinct peaks at the Fermi energies in $A(\omega)$. It is also noticed that the nonequilibrium Kondo feature in the current noise spectrum $S(\omega)$ appears at $\omega=\pm eV$ \cite{Moc11201303,Mul13245115,Cre18107702,Jin15234108}. The main objective of this paper is to elucidate the nonequilibrium Kondo mechanism in the current noise spectrum $S(\omega)$. For later comparison, we report the non-Kondo transient current $I(t)$ and its sine transform ${\cal J}(\omega)$ in Figs.\,\ref{fig2}(c) and (d), respectively. Their Kondo counterparts, Figs.\,\ref{fig3}(c) and (d), are remarkably different. \subsection{Nonequilibrium Kondo regime} \begin{figure} \includegraphics[width=1.0\columnwidth]{fig3.eps} \caption{(Color online) The numerical results of the Kondo characteristics on (a) circuit noise spectrum $S(\omega)$ (in $10^{-8}$ eA); (b) ${\rm d}S(\omega)/{\rm d}\omega$ (in $10^{-5}$A/V); (c) The real-time dynamics of the transport current $I(t)$ (in pA); (d) The sine transform of transient current, ${\cal J}(\omega)=\int^\infty_0\!{\rm d}t\,\sin(\omega t)\delta I(t)$. The inset in (a) is the DOS $A(\omega)$.
We adopt the values of bias voltage (in meV): $V=0.1$ (black) and $0.2$ (red). The red-dashed curve is for the asymmetrical bias voltage, denoted by $V_a=0.2$, with $\mu_{\rm L}=0.2$\,meV and $\mu_{\rm R}=0$. } \label{fig3} \end{figure} Figure \ref{fig3}(a) reports the evaluated current noise spectrum $S(\omega)$ at different values of the applied bias voltage. Compared to its equilibrium counterpart, \Fig{fig1}(a), the applied bias voltage splits the Kondo characteristic, from the single inflection point at $\omega=0$, into two asymmetric upturns around $\omega=\pm|\mu_{\rm L}-\mu_{\rm R}|=\pm eV$. These differ also from the nonequilibrium Kondo characteristic in the DOS $A(\w)$, the inset of \Fig{fig3}(a), with the peaks at the individual $\mu_{\rm L}$ and $\mu_{\rm R}$ \cite{Mei932601,Leb01035308}. The two asymmetric upturns in $S(\w)$ turn into two remarkable peaks in $\d S(\omega)/\d\omega$, at $\omega=\pm eV$, as plotted in \Fig{fig3}(b). To highlight the fact that the Kondo characteristic in $S(\omega)$ is concerned only with the difference between the two Fermi energies, we consider also the case of the asymmetrical bias voltage. The red-dashed curves in Fig.\,\ref{fig3} report the case of $\mu_{\rm L}=0.2$\,meV and $\mu_{\rm R}=0$. The Kondo characteristics appear at the same frequencies as in the symmetrical bias voltage case (red-solid) with the same difference of Fermi energies. The scenario in $S(\omega)$ differs from that in $A(\omega)$. The latter is depicted in the inset of \Fig{fig3}(a), where the Kondo resonance follows the individual $\mu_{\rm L}$ and $\mu_{\rm R}$. This is related to the formation of the Kondo singlet at the Fermi surfaces \cite{Mei932601,Leb01035308}. While the DOS $A(\omega)$ reflects the structural information, the current noise spectrum is related not only to the structure but also to the transport current dynamics.
The transient current $I(t)$ reported in \Fig{fig3}(c) displays Kondo oscillation dynamics, consistent with previous work \cite{Che15033009}. This Kondo oscillation comes from the Rabi interference between the two Kondo resonance transport channels of $\mu_{\rm L}$ and $\mu_{\rm R}$. This type of interference does not exist in the non-Kondo regime, as exemplified in \Fig{fig2}(c). The sine transform of the transient current, ${\cal J}(\omega)$, depicted in Fig.\,\ref{fig3}(d), exhibits a dip and a peak at $\omega=-eV$ and $\omega=eV$, respectively. This feature is also remarkably different from that of the non-Kondo regime, as shown in Fig.\,\ref{fig2}(d). The Kondo oscillation frequency of the transient current is $|eV|=|\mu_{\rm L}-\mu_{\rm R}|$ and independent of the specific Fermi energies (see the red-solid versus the red-dashed curves). Note that the appearance of the dip/peak at $\omega=\pm eV$ comes from the nature of the sine transformation, i.e., ${\cal J}(\omega)=-{\cal J}(-\omega)$. We now conclude that the Kondo characteristic in the noise spectrum reflects the Rabi interference of the transport current dynamics. The Kondo feature located at $\omega=\pm eV$ in $S(\omega)$ thus contains the information of the Kondo oscillation frequency $|eV|$. Evidently, there is a bridge between the nonequilibrium Kondo noise and the transient current. Compared with the non-Kondo regime, \Fig{fig2}(b) and (d), the Kondo regime, \Fig{fig3}(b) and (d), also exhibits minor but distinct inflections near zero frequency. Comparing further to the equilibrium Kondo counterpart, \Fig{fig1}, we could conclude that the observed inflection characteristic in $\w\in [-eV, +eV]$ is a sort of \emph{Kondo--Fano interference}. This engages both Kondo characteristics at $-eV$ and $+eV$, which differs from the anti-Stokes cotunneling feature at $-eV$ only, as seen in \Fig{fig2}(a) and (b).
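The dip/peak structure of the sine transform can be illustrated with a toy damped oscillation in place of the DEOM transient current (the model $\delta I(t)$ and its decay rate below are hypothetical; only the generic transform property is demonstrated):

```python
import numpy as np

# Sine transform J(w) = Int_0^inf dt sin(w t) dI(t) for a hypothetical
# damped Kondo-type oscillation dI(t) = e^{-gamma t} cos(Omega t).
gamma, Omega = 0.1, 1.0
t = np.linspace(0.0, 200.0, 200001)
dt = t[1] - t[0]
delta_I = np.exp(-gamma * t) * np.cos(Omega * t)

def sine_transform(w):
    return np.sum(np.sin(w * t) * delta_I) * dt

# peak near w = +Omega, dip near w = -Omega, odd overall: J(-w) = -J(w)
print(sine_transform(Omega), sine_transform(-Omega))
```

An oscillation at frequency $\Omega$ thus shows up as a peak at $+\Omega$ and a mirror dip at $-\Omega$, exactly the pattern at $\omega=\pm eV$ in Fig.\,\ref{fig3}(d).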
As far as the Kondo feature is concerned, $S(\w)$ and $\d S(\w)/\d\w$ are more distinguishable in the emission ($\w<0$) region than in the absorption ($\w>0$) region; see Fig.\,\ref{fig3}. The spectroscopic information in $\w>0$ is complicated by the sequential tunneling resonances at $\w=\Delta^{\alpha}_{\tS}$ and $\Delta^{\alpha}_\text{\tiny D}$ of \Eq{nmks}. This observation is consistent with the common practice of reporting experimental Kondo noise spectra in the $\w<0$ region \cite{Bas12046802,Del18041412}. The visualization in the $\w>0$ region would be possible when there is a clear separation between the Kondo and the sequential resonances \cite{Bas10166801}. \section{Summary} \label{thsum} In summary, we have investigated the circuit current noise spectrum through an Anderson impurity quantum dot and the underlying transient dynamics in the Kondo regime. Based on the DEOM evaluations, we first demonstrate the equilibrium case, where the Kondo resonance peak in the DOS, $A(\w)$, is rather symmetric around $\w=0$. The corresponding Kondo characteristic in the noise spectrum $S(\w)$ exhibits an \emph{inflection} point at $\w=0$, which turns out to be a remarkable \emph{Fano--type resonant peak} in $\d S(\omega)/\d\omega$. It is well--known that the noise spectrum can be tuned by the bias voltage, via the related absorption mechanism with the electron sequential tunneling resonances at $\Delta^{\alpha}_\text{\tiny{S,D}}$ of \Eq{nmks}. On the other hand, $A(\w)$ directly reflects the two Hubbard resonances at $E_\text{\tiny{S,D}}$ that are independent of the bias voltage. The tunneling resonances at $\Delta^{\alpha}_\text{\tiny{S,D}}$ appear as the non-Markovian quasi-steps in $S(\w)$ \cite{Jin11053704,Eng04136602,Rot09075307} and as Lorentzian-like peaks in $\d S(\omega)/\d\omega$. We then study the nonequilibrium Kondo characteristics.
The applied bias voltage splits the characteristic in $S(\w)$, from the single inflection point at $\omega=0$, into two asymmetric upturns around $\omega=\pm|\mu_{\rm L}-\mu_{\rm R}|=\pm eV$. Meanwhile, in $\d S(\omega)/\d\omega$, the Kondo features appear as two remarkable peaks. We demonstrate that the observed absorptive/emissive Kondo resonance is concerned only with $eV=\mu_{\rm L}-\mu_{\rm R}$. This differs from the DOS $A(\omega)$, where the Kondo resonance peaks reflect rather the structural information at the shifted Fermi energies of $\mu_{\rm L}$ and $\mu_{\rm R}$. In other words, the DOS describes the formation of Kondo resonance singlet states on the individual nonequilibrium Fermi surfaces. The current noise spectrum is related not only to the structure but also to the transport current dynamics. Further demonstrations include also the transient circuit current. Evidently, the nonequilibrium Kondo features at $\omega=\pm eV$ originate from the Kondo oscillation of the transport current. Moreover, we compare the noise spectra between the non-Kondo and Kondo regimes and between the equilibrium and nonequilibrium cases. The observed overall inflection characteristics within $\w\in [-eV, eV]$ indicate a sort of \emph{Kondo--Fano interference}. This engages both Kondo characteristics at $-eV$ and $+eV$, which differs from the anti-Stokes cotunneling feature at $-eV$ only. The emission noise Kondo feature is often more distinguishable than the absorption one, as the latter would be contaminated by the sequential tunneling signals. This work is closely related to the experiments \cite{Bas10166801,Bas12046802,Del18041412}. It could be anticipated that the present results can be readily tested in current experiments. \acknowledgments Support from the Natural Science Foundation of China (Nos.\ 11675048, 21633006 \& 11447006) is gratefully acknowledged.
\section{Introduction} Black hole thermodynamics has been developed from the understanding of black holes and quantum physics \cite{Bekenstein:1973ur,Bekenstein:1974ax,Hawking:1974sw,Hawking:1976de,Grumiller:2014qma}. In the extended phase space, the cosmological constant can be considered as a thermodynamic variable \cite{Kastor:2009wy,Wang:2006eb,Sekiwa:2006qj}. Specifically, one can treat the cosmological constant $\Lambda$ as a thermodynamic pressure $P=-\frac{\Lambda}{8\pi G}$, whose conjugate quantity is then interpreted as a thermodynamic volume. For a negative cosmological constant $\Lambda$, the pressure $P$ is positive, which yields a well-defined equilibrium thermodynamic framework. It is worth noting that this volume is usually not equal to the geometric volume of black holes except in some simple cases, such as AdS Schwarzschild black holes. Although the definition of thermodynamic volume has been given, its physical interpretation is still a puzzle. An early attempt to answer this question leads to the conjecture that the volume satisfies the reverse isoperimetric inequality \cite{Cvetic:2010jb,Dolan:2013ft}. The conjecture was motivated in the progress of studying Kerr-AdS black holes and then generalized to other black holes. The reverse isoperimetric inequality is saturated for Schwarzschild-AdS black holes, which indicates that for a black hole of a given thermodynamic volume $V$, the entropy is maximized for Schwarzschild-AdS black holes \cite{Cvetic:2010jb}. However, further investigations discovered that this inequality does not apply to all kinds of black holes. Those black holes that exceed the maximum entropy bound are called super-entropic black holes \cite{Hennigar:2014cfa,Hennigar:2015cja,Klemm:2014rda,Noorbakhsh:2017tbp,Noorbakhsh:2017nde}, which are rotating black holes with non-compact event horizons of finite surface area \cite{Hennigar:2014cfa}. More relevant discussions can be found in refs.
\cite{Appels:2019vow,Johnson:2019wcq,Imseis:2020vsw,Sinamuli:2015drn,Wu:2019xse,Wu:2020cgf,Frassino:2015oca,Johnson:2019mdp,Cong:2019bud,Feng:2017jub,Noda:2020vcn,Boudet:2020eyr,Xu:2020gzm,Wu:2020mby}. A black hole reduces its mass-energy via the Hawking radiation. If the specific heat is negative, this shrinking would lead to a higher temperature, increased radiation, and hence more mass loss. Therefore, the system accelerates through this downward spiral instead of settling into an equilibrium state. For charged BTZ black holes, which are the simplest super-entropic black holes, it has been shown that there is a connection between the violation of the reverse isoperimetric inequality and the thermodynamical instability with the specific heat at constant volume $C_{V}<0$ \cite{Johnson:2019mdp}. This result consequently leads to a natural conjecture that super-entropic black holes always have $C_{V}<0$, making them unstable in the extended thermodynamics. Later, it was found that this conjecture is violated for generalized exotic BTZ black holes in some parameter region \cite{Cong:2019bud}. However, in this case, the specific heat at constant pressure $C_{P}$ was exhibited to be negative whenever $C_{V}>0$. Thus, a broader version of the instability conjecture was proposed \cite{Cong:2019bud}, which states that all super-entropic black holes are in general thermodynamically unstable with either negative $C_{V}$ or negative $C_{P}$. The instability conjecture was analytically verified for $3$D charged BTZ black holes \cite{Banados:1992wn,Banados:1992gq,Johnson:2019mdp}. Since it is difficult to obtain analytical expressions for $C_{V}$ and $C_{P}$, using methods of ref. \cite{Johnson:2019vqf}, the instability conjecture was numerically tested for ultra-spinning $d$-dimensional Kerr black holes \cite{Hennigar:2015cja}, generalized exotic BTZ black holes \cite{Cong:2019bud} and super-entropic black holes with the Immirzi parameter in ref. \cite{Boudet:2020eyr}.
Here, we test the instability conjecture on EBI AdS black holes in $(2+1)$-dimensional space-time. An EBI AdS black hole is the charged black hole solution in the EBI theory based on the non-linear electrodynamics proposed by Born and Infeld in 1934 \cite{Born:1934gh}, and is an extension of an RN black hole in the Einstein-Maxwell theory. Since it was found that the non-linear electrodynamics, in particular the Born-Infeld electrodynamics, can come from the low-energy limit of string theory and encodes the low-energy dynamics of D-branes (i.e., the low-energy effective action for a constant electromagnetic field is precisely the Born-Infeld action) \cite{Gibbons:2001gy,Fradkin:1985qd,Tseytlin:1986ti,Metsaev:1987qp}, it has attracted considerable attention in recent years. After BI black hole solutions in anti-de Sitter space were obtained \cite{Dey:2004yt,Cai:2004eh}, their properties have been extensively investigated \cite{Cataldo:1999wr,Myung:2008kd,Fernando:2003tz,Hendi:2015hoa,Banerjee:2010da,Li:2016nll,Dehyadegari:2017hvd,Aiello:2004rz,Fernando:2006gh,Gunasekaran:2012dq,Zou:2013owa,Zeng:2016sei,Wang:2018xdz,Wang:2019kxp,Tao:2017fsy,Banerjee:2012vk,Ma:2020qkd,Bi:2020vcg}. Although various aspects of 3$\text{D}$ EBI AdS black holes have also been studied, their specific heat and the instability conjecture remain to be explored. The organization of the rest of this work is as follows. In section \ref{sec: thermodynamics of EBI black-hole}, we discuss thermodynamic quantities of 3$\text{D}$ EBI AdS black holes. In section \ref{sec:New-instability-conjecture}, we first show that 3$\text{D}$ EBI AdS black holes violate the reverse isoperimetric inequality, and hence are super-entropic. Then, the instability conjecture is considered by calculating $C_{V}$ and $C_{P}$. We find that when non-linear electrodynamics effects are strong enough, there exists some parameter region where $C_{V}$ and $C_{P}$ are both positive.
This observation provides a counter example to the instability conjecture. The conclusion is given in section \ref{sec:Conclusion}. In this paper, we use geometrical units where $G$, $c$, $\hbar$, and $k_{B}$ have been set to unity. \section{Thermodynamics of $3\text{D}$ EBI AdS Black Holes} \label{sec: thermodynamics of EBI black-hole} In this section, the thermodynamics of $3\text{D}$ EBI AdS black holes is discussed. The action of $3\text{D}$ Einstein gravity coupled with the Born-Infeld electrodynamics is \begin{align} \label{eq:action}I & =\int d^{3}x\sqrt{-g}\left[ \frac{R-2\varLambda}{16\pi}+L(F)\right] ,\\ L(F) & =\frac{b^{2}}{4\pi}\left( 1-\sqrt{1+\frac{2F}{b^{2}}}\right) . \end{align} Here, the constant $b$ is the Born-Infeld parameter, $g$ is the determinant of the metric tensor, $\Lambda=-1/l^{2}$ is the cosmological constant, $l$ is the AdS radius, and $L(F)$ is the Lagrangian of the Born-Infeld electrodynamics. The metric and gauge potential are \cite{Myung:2008kd,Cataldo:1999wr} \begin{align} ds^{2} & =-f\left( r\right) dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\theta^{2},\\ f\left( r\right) & =-8M+\frac{r^{2}}{l^{2}}+2b^{2}r\left( r-\sqrt{r^{2}+\frac{Q^{2}}{4b^{2}}}\right) -\frac{1}{2}Q^{2}\ln\left[ r+\sqrt{r^{2}+\frac{Q^{2}}{4b^{2}}}\right] \nonumber\\ & +\frac{1}{2}Q^{2}\ln\left[ l+\sqrt{l^{2}+\frac{Q^{2}}{4b^{2}}}\right] -2b^{2}l\left[ l-\sqrt{l^{2}+\frac{Q^{2}}{4b^{2}}}\right] , \end{align} where $Q$ and $M$ stand for the charge and mass of EBI black holes, respectively. In the limit of $b\rightarrow\infty$, it reduces to the charged BTZ black hole solution \cite{Johnson:2019mdp}, \[ f^{\text{BTZ}}\left( r\right) =-8M-\frac{Q^{2}}{2}\log\left( \frac{r}{l}\right) +\frac{r^{2}}{l^{2}}. 
\] The horizon is located at $r=r_{+}$ with $f\left( r_{+}\right) =0$, from which the mass of $3\text{D}$ EBI AdS black holes is obtained \cite{Myung:2008kd}, \begin{align} M & =\frac{r_{+}^{2}}{8l^{2}}+\frac{1}{4}b^{2}r_{+}\left( r_{+}-\sqrt{r_{+}^{2}+\frac{Q^{2}}{4b^{2}}}\right) -\frac{1}{16}Q^{2}\ln\left[ r_{+}+\sqrt{r_{+}^{2}+\frac{Q^{2}}{4b^{2}}}\right] \nonumber\\ & +\frac{1}{16}Q^{2}\ln\left[ l+\sqrt{l^{2}+\frac{Q^{2}}{4b^{2}}}\right] -\frac{1}{4}b^{2}l\left[ l-\sqrt{l^{2}+\frac{Q^{2}}{4b^{2}}}\right] . \label{mass} \end{align} In the extended thermodynamics, one identifies the enthalpy $H$ \cite{Kastor:2009wy} with the mass of the black hole, and the pressure is $P=-\Lambda/8\pi=1/8\pi l^{2}$. Moreover, the entropy $S$ is \begin{equation} S=\frac{A}{4}=\frac{1}{2}\pi r_{+}. \label{s} \end{equation} The first law of thermodynamics, $dM=TdS+VdP+\Phi dQ$, gives the temperature and the thermodynamic volume of $3\text{D}$ EBI AdS black holes \begin{align} \label{volume-1}T & =\left. \frac{\partial M}{\partial S}\right\vert _{P}=\frac{r_{+}}{2\pi l^{2}}+\frac{b^{2}r_{+}}{\pi}\left( 1-\sqrt{1+\frac{Q^{2}}{4b^{2}r_{+}^{2}}}\right) ,\\ V & =\left. \frac{\partial M}{\partial P}\right\vert _{S}=\pi r_{+}^{2}+2\pi l^{4}b^{2}\left( 1-\sqrt{1+\frac{Q^{2}}{4b^{2}l^{2}}}\right) , \end{align} respectively. It is observed that the thermodynamic volume differs from the geometric volume $\pi r_{+}^{2}$. \section{Instability Conjecture of $3\text{D}$ EBI AdS Black Holes} \label{sec:New-instability-conjecture} For an asymptotically AdS black hole in the extended phase space, it was conjectured in ref. \cite{Cvetic:2010jb} that a reverse isoperimetric inequality holds, \begin{equation} \label{eq:RII}R\equiv\left( \frac{\left( d-1\right) V}{\omega_{d-2}}\right) ^{\frac{1}{d-1}}\left( \frac{\omega_{d-2}}{A}\right) ^{\frac{1}{d-2}}\geq1, \end{equation} where $R$ is the isoperimetric ratio. 
Here, $V$ is the thermodynamic volume, $A$ is the horizon area, and $\omega_{d}$ stands for the volume of a $d$-dimensional unit sphere, \begin{equation} \omega_{d}=\frac{2\pi^{\frac{d+1}{2}}}{\varGamma\left( \frac{d+1}{2}\right) }, \end{equation} where $\omega_{1}=2\pi$ and $\omega_{2}=4\pi$. The reverse isoperimetric inequality is saturated for a Schwarzschild AdS black hole since its thermodynamic volume simply equals its naive geometric volume. For some more complicated black holes, e.g., Kerr \cite{Cvetic:2010jb}, STU \cite{Caceres:2015vsa} and Taub-NUT/Bolt black holes \cite{Johnson:2014xza}, thermodynamic volumes are larger than naive geometric volumes, hence resulting in $R>1$. Moreover, unlike a Schwarzschild AdS black hole, these black holes have nonzero $C_{V}$. However, several black hole solutions were later found to violate the reverse isoperimetric inequality \cite{Hennigar:2014cfa,Klemm:2014rda,Hennigar:2015cja,Brenna:2015pqa,Noorbakhsh:2016faj,Noorbakhsh:2017nde}. A black hole that violates the inequality is dubbed a \textquotedblleft super-entropic black hole\textquotedblright\ since its entropy is larger than the maximum entropy allowed by the reverse isoperimetric inequality. As argued in ref. \cite{Hennigar:2014cfa}, the violation can be attributed to the finite-area but noncompact event horizon. It was further shown in refs. \cite{Johnson:2019mdp,Cong:2019bud} that a large family of super-entropic black holes has $C_{V}<0$, or $C_{P}<0$ whenever $C_{V}>0$, showing that they are unstable in the extended thermodynamics. In this section, we first show that $3\text{D}$ EBI AdS black holes are super-entropic, which means that they violate the reverse isoperimetric inequality $\left( \ref{eq:RII}\right) $. In fact, according to Eqs. 
$\left( \ref{s}\right) $, $\left( \ref{volume-1}\right) $ and $\left( \ref{eq:RII}\right) $, the isoperimetric ratio $R$ for $3\text{D}$ EBI AdS black holes can be readily computed to be \begin{equation} R=\sqrt{1-\frac{l^{2}Q^{2}}{2r_{+}^{2}}\left( 1+\sqrt{1+\frac{Q^{2}}{4l^{2}b^{2}}}\right) ^{-1}}. \label{R} \end{equation} It is obvious from Eq. $\left( \ref{R}\right) $ that $R<1$, which means that $3\text{D}$ EBI AdS black holes violate the reverse isoperimetric inequality as long as $Q\neq0$. Note that when $Q=0$, EBI AdS black holes reduce to Schwarzschild AdS black holes, which have $R=1$. Consequently, $3\text{D}$ EBI AdS black holes are super-entropic. In the remainder of this section, we discuss the behavior of $C_{V}$ and $C_{P}$ of $3\text{D}$ EBI AdS black holes and provide a further test of the instability conjecture. Using Eq. $\left( \ref{s}\right) $, we can write thermodynamic quantities in terms of $S$ and $P$, \begin{align} T & =\frac{8PS}{\pi}+\frac{2Sb^{2}}{\pi^{2}}\left( 1-\sqrt{1+\frac{\pi^{2}Q^{2}}{16b^{2}S^{2}}}\right) ,\label{T}\\ V & =\frac{4S^{2}}{\pi}+\frac{b^{2}}{32\pi P^{2}}\left( 1-\sqrt{1+\frac{2\pi PQ^{2}}{b^{2}}}\right) . \label{V} \end{align} From Eq. $\left( \ref{s}\right) $, we observe that the entropy $S$ is geometrical and depends only on the horizon radius $r_{+}$. Hence the entropy $S$ and the thermodynamic volume $V$ are independent functions, which consequently gives a nonzero $C_{V}$. To obtain the specific heat at constant volume $C_{V}$, it is easier to start with $C_{P}$. Using Eq. $\left( \ref{T}\right) $, we can express $S$ in terms of $T$ and $P$, \begin{equation} S=\frac{\pi T}{16P}\left[ 1+\left( 1+\frac{2\pi P}{b^{2}}\right) ^{-1}\left( \frac{2\pi P}{b^{2}}+\sqrt{1+\frac{4P^{2}Q^{2}}{T^{2}b^{2}}+\frac{2PQ^{2}}{\pi T^{2}}}\right) \right] . \label{entropy} \end{equation} Then $C_{P}\left( T\right) $ is given by \begin{equation} C_{P}(T)=\left. 
T\frac{\partial S}{\partial T}\right\vert _{P}=\frac{\pi T}{16P}\left[ 1+\left( 1+\frac{2\pi P}{b^{2}}\right) ^{-1}\left( \frac{2\pi P}{b^{2}}+\frac{1}{\sqrt{1+\frac{4P^{2}Q^{2}}{b^{2}T^{2}}+\frac{2PQ^{2}}{\pi T^{2}}}}\right) \right] , \label{CP} \end{equation} which is manifestly positive. For large $T$, one has $C_{P}(T)=\frac{\pi T}{8P}+\cdots$. When $b\rightarrow\infty$, Eq. $\left( \ref{CP}\right) $ reduces to $C_{P}^{\text{BTZ}}$ of charged BTZ black holes (see Eq. $\left( 7\right) $ in ref. \cite{Johnson:2019mdp}) \begin{equation} C_{P}^{\text{BTZ}}(T)=\frac{\pi T}{16P}\left[ 1+\frac{1}{\sqrt{1+\frac{2PQ^{2}}{\pi T^{2}}}}\right] . \end{equation} One can calculate $C_{V}(T)$ from $C_{P}(T)$ via the well-known relation, \begin{equation} \frac{C_{P}}{C_{V}}=\frac{1}{\kappa_{T}\beta_{S}}, \label{meyer formula} \end{equation} where $\kappa_{T}\equiv-V\partial P/\left. \partial V\right\vert _{T}$ is the isothermal bulk modulus, and $\beta_{S}\equiv-V^{-1}\partial V/\left. \partial P\right\vert _{S}$ is the adiabatic compressibility. To be self-contained, a derivation of Eq. $\left( \ref{meyer formula}\right) $ is given in the appendix. Substituting Eq. $\left( \ref{entropy}\right) $ into Eq. $\left( \ref{V}\right) $ yields \begin{align} V\left( T,P\right) & =\frac{\pi T^{2}}{64P^{2}}\left[ 1+\left( \frac{2\pi P}{b^{2}}+1\right) ^{-1}\left( \frac{2\pi P}{b^{2}}+\sqrt{1+\frac{4P^{2}Q^{2}}{T^{2}b^{2}}+\frac{2PQ^{2}}{\pi T^{2}}}\right) \right] ^{2}\nonumber\\ & -\frac{Q^{2}}{16P}\left( \sqrt{1+\frac{2\pi PQ^{2}}{b^{2}}}+1\right) ^{-1}. \label{volume} \end{align} From Eqs. $\left( \ref{V}\right) $ and $\left( \ref{volume}\right) $, $\kappa_{T}$ and $\beta_{S}$ can be readily computed, \begin{align} \kappa_{T} & \equiv-V\partial P/\left. \partial V\right\vert _{T}=-V\left[ \frac{-8S^{2}}{\pi P}+\frac{TS}{2P}\gamma+\frac{1}{16P^{2}}\frac{Q^{2}}{\delta+1}+\frac{\pi Q^{4}}{16Pb^{2}}\frac{1}{\delta(\delta+1)^{2}}\right] ^{-1},\\ \beta_{S} & \equiv-V^{-1}\partial V/\left. 
\partial P\right\vert _{S}=-V^{-1}\left[ \frac{Q^{2}}{8P^{2}\left( \sqrt{1+\frac{2\pi PQ^{2}}{b^{2}}}+1\right) }-\frac{Q^{2}}{32P^{2}\sqrt{1+\frac{2\pi PQ^{2}}{b^{2}}}}\right] , \end{align} where \begin{align} \gamma & =\frac{\frac{2\pi}{b^{2}}+\frac{\eta^{2}-1}{P\eta}-\frac{Q^{2}}{\eta\pi T^{2}}}{\left( \frac{2\pi P}{b^{2}}+1\right) }-\frac{\left( \frac{2\pi P}{b^{2}}+\eta\right) \left( \frac{2\pi}{b^{2}}\right) }{\left( \frac{2\pi P}{b^{2}}+1\right) ^{2}},\nonumber\\ \eta & =\sqrt{1+\frac{4P^{2}Q^{2}}{T^{2}b^{2}}+\frac{2PQ^{2}}{\pi T^{2}}},\\ \delta & =\sqrt{1+\frac{2\pi PQ^{2}}{b^{2}}}.\nonumber \end{align} With the above results for $\kappa_{T}$, $\beta_{S}$ and $C_{P}(T)$, one can use Eq. $\left( \ref{meyer formula}\right) $ to obtain the specific heat at constant volume $C_{V}$. As a check, in the limit of $b\rightarrow\infty$, we find that $C_{V}(T)$ becomes $C_{V}^{\text{BTZ}}(T)$ of charged BTZ black holes (see Eq. $\left( 10\right) $ in ref. \cite{Johnson:2019mdp}), where \begin{equation} C_{V}^{\text{BTZ}}(T)=-\frac{Q^{2}}{32T}\left[ \frac{1+\sqrt{1+\frac{2PQ^{2}}{\pi T^{2}}}}{1+\sqrt{1+\frac{2PQ^{2}}{\pi T^{2}}}+\frac{3PQ^{2}}{2\pi T^{2}}}\right] . \end{equation} \begin{figure}[ptb] \centering \subfigure[$C_{V}$ \& $C_{P}$ vs. $r_+$ and $M$ \& $T$ vs. $r_+$ for $b=100$.]{ \includegraphics[width=0.45\linewidth]{VPT-infty.eps}\hspace{2mm} \includegraphics[width=0.45\linewidth]{MT-infty.eps}\label{Fig1:a} } \subfigure[$C_{V}$ \& $C_{P}$ vs. $r_+$ and $M$ \& $T$ vs. $r_+$ for $b=1.69$.]{ \includegraphics[width=0.45\linewidth]{VPT-169.eps}\hspace{2mm} \includegraphics[width=0.45\linewidth]{MT-169.eps}\label{Fig1:b} } \subfigure[$C_{V}$ \& $C_{P}$ vs. $r_+$ and $M$ \& $T$ vs. $r_+$ for $b=0.5$.]{ \includegraphics[width=0.45\linewidth]{VPT-05.eps}\hspace{2mm} \includegraphics[width=0.45\linewidth]{MT-05.eps}\label{Fig1:c} } \subfigure[$C_{V}$ \& $C_{P}$ vs. $r_+$ and $M$ \& $T$ vs. 
$r_+$ for $b=0.1$.]{ \includegraphics[width=0.45\linewidth]{VPT-01.eps}\hspace{2mm} \includegraphics[width=0.45\linewidth]{MT-01.eps}\label{Fig1:d} } \caption{{\footnotesize Plots of the heat capacity at constant volume $C_{V}$, the heat capacity at constant pressure $C_{P}$, the black hole mass $M$ and the black hole temperature $T$ against the black hole horizon radius $r_{+}$ for $3\text{D}$ EBI AdS black holes with $Q=1=l$ and various values of $b$. The yellow regions denote the regions of interest, where $C_{V}$ and $C_{P}$ are both positive, and hence black holes can be free of thermodynamic instability.} \label{Fig1}} \end{figure} In FIG. \ref{Fig1}, we plot the specific heat at constant volume $C_{V}$, the specific heat at constant pressure $C_{P}$, the black hole mass $M$ and the black hole temperature $T$ as functions of the black hole horizon radius $r_{+}$ for $3\text{D}$ EBI AdS black holes with fixed AdS radius $l=1.0$ (and hence fixed pressure), fixed charge $Q=1.0$ and various $b=0.1,0.5,1.69,100$. When $b=100$, non-linear electrodynamics effects are negligible, and hence the behavior of $3\text{D}$ EBI AdS black holes closely resembles that of charged BTZ black holes. As shown in FIG. \ref{Fig1:a}, $C_{P}$ is always positive whereas $C_{V}$ is always negative, which recovers the results for BTZ black holes \cite{Johnson:2019mdp}. As $b$ decreases to $b\simeq1.69$, FIG. \ref{Fig1:b} shows that $C_{V}$ stays negative and becomes increasingly negative as $T$ goes to zero. Interestingly, for small enough values of $b$ (i.e., $b\lesssim 1.69$), our numerical results show that $C_{V}$ and $C_{P}$ can both be positive in some parameter region. In fact, when $b=0.5$ and $0.1$, the regions where $C_{V}>0$ and $C_{P}>0$ are shown as yellow regions in FIGs. \ref{Fig1:c} and \ref{Fig1:d}. Note that the black hole temperature $T$ and mass $M$ are both positive in the yellow regions of FIGs. 
\ref{Fig1:c} and \ref{Fig1:d}, which means that the $3\text{D}$ EBI AdS black hole solutions with $C_{V}>0$ and $C_{P}>0$ are physical. Moreover, FIGs. \ref{Fig1:c} and \ref{Fig1:d} suggest that the conjecture-violating region increases in size with decreasing parameter $b$. In short, we find that $3\text{D}$ EBI AdS black holes can violate the instability conjecture. \begin{figure}[ptb] \centering \subfigure{ \includegraphics[width=0.45\linewidth]{Tr-fixV.eps}\hspace{2mm} \includegraphics[width=0.45\linewidth]{FT-fixV.eps} }\caption{\footnotesize Plots of the horizon radius $r_{+}$ and the Helmholtz free energy $F$ against the black hole temperature $T$ for $3\text{D}$ EBI AdS black holes with fixed volume $V$. Here, we take $Q=1$ and $b=0.1$. The blue and red lines represent Small BH and Large BH, respectively. The specific heat at constant volume $C_{V}$ of Small/Large BH is positive/negative. As a result, $C_{V}$ is discontinuous at the maximum value of $T$. \label{Fig2}} \end{figure} Interestingly, FIG. \ref{Fig1} shows that $C_{V}$ has a discontinuity for a small enough $b$. To investigate the nature of the discontinuity of $C_{V}$, we plot the horizon radius $r_{+}$ and the Helmholtz free energy $F$ as functions of the black hole temperature $T$ with fixed volume $V$ in FIG. \ref{Fig2}, where $Q=1$ and $b=0.1$. The left panel of FIG. \ref{Fig2} shows that, for a given $T$, there are two black hole solutions of different sizes, namely Large BH (red line) and Small BH (blue line). Moreover, the black hole temperature $T$ has a maximum $T_{\max}$, which corresponds to $\left. \partial r_{+}/\partial T\right\vert _{V}=0$. Note that $C_{V}$ can be rewritten as \begin{equation} C_{V}=\frac{\pi}{2}\left. \frac{\partial r_{+}}{\partial T}\right\vert _{V}, \end{equation} where we use Eq. $\left( \ref{s}\right) $ for the entropy $S$. Therefore, Large/Small BH has a negative/positive $C_{V}$, which goes to negative/positive infinity as $T$ approaches $T_{\max}$. 
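As an aside, the sign pattern of $C_{V}$ described above can be reproduced directly from the closed-form expressions for $T$ and $V$ in Eq. $\left( \ref{volume-1}\right) $. Parametrizing states by $(r_{+},l)$ and using $C_{V}=T\left. \partial S/\partial T\right\vert _{V}$ with $S=\pi r_{+}/2$, a Jacobian identity expresses $C_{V}$ through first derivatives of $T$ and $V$. The following sympy sketch is our own shortcut (it bypasses the $\kappa_{T}\beta_{S}$ route used in the text), and the sample points $Q=l=1$ with $b=100$ and $b=0.1$ are illustrative choices mirroring FIG. \ref{Fig1}:

```python
import sympy as sp

r, l, Q, b = sp.symbols("r l Q b", positive=True)

# Closed-form temperature and thermodynamic volume, Eq. (volume-1).
T = r/(2*sp.pi*l**2) + b**2*r/sp.pi*(1 - sp.sqrt(1 + Q**2/(4*b**2*r**2)))
V = sp.pi*r**2 + 2*sp.pi*l**4*b**2*(1 - sp.sqrt(1 + Q**2/(4*b**2*l**2)))

# C_V = T (dS/dT)_V with S = pi*r/2.  In the (r, l) chart a Jacobian
# identity gives (dS/dT)_V = (pi/2) V_l / (T_r V_l - T_l V_r).
Tr, Tl, Vr, Vl = sp.diff(T, r), sp.diff(T, l), sp.diff(V, r), sp.diff(V, l)
CV = sp.pi/2 * T * Vl / (Tr*Vl - Tl*Vr)

def cv(rv, bv, Qv=1, lv=1):
    """Numerical C_V at horizon radius rv and Born-Infeld parameter bv."""
    return float(CV.subs({r: rv, l: lv, Q: Qv, b: bv}))

# Near the BTZ limit (b = 100): C_V < 0, close to the value -pi/18 that
# the quoted C_V^BTZ expression gives at Q = l = r_+ = 1.
print(cv(1, 100))          # approx -0.1745

# Strong non-linear effects (b = 0.1): C_V changes sign between the
# small-r_+ and large-r_+ branches.
print(cv(0.1, 0.1), cv(1.0, 0.1))
```

The Jacobian reduction gives the same $C_{V}$ as the $\kappa_{T}\beta_{S}$ computation in the text, since both evaluate $T\left. \partial S/\partial T\right\vert _{V}$.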
In short, the discontinuity of $C_{V}$ corresponds to the maximum value of the black hole temperature, where the two black hole phases (i.e., Large BH and Small BH) merge. The right panel of FIG. \ref{Fig2} shows that the free energy of Small BH is always smaller than that of Large BH, which indicates that there is no phase transition. Our results suggest that, at constant volume, $3\text{D}$ EBI AdS black holes with positive $C_{V}$ are globally stable. \section{Conclusion and discussion} \label{sec:Conclusion} In this paper, considering $3\text{D}$ EBI AdS black holes, we tested the instability conjecture: super-entropic black holes always have $C_{V}<0$, or $C_{P}<0$ whenever $C_{V}>0$, making them unstable in extended gravitational thermodynamics. This conjecture had been tested and found to hold for a large class of super-entropic solutions \cite{Johnson:2019mdp,Hennigar:2014cfa,Cong:2019bud}. After showing that $3\text{D}$ EBI AdS black holes are super-entropic, we found that these black holes satisfy the instability conjecture when $b$ is large enough (i.e., when non-linear electrodynamics effects are inessential). However, when non-linear electrodynamics effects play an important role, our numerical results (see FIG. \ref{Fig1}) showed that there exists some parameter region where both $C_{V}>0$ and $C_{P}>0$, which provides a counter example to the instability conjecture. In addition, our results suggest that the violation region grows as non-linear electrodynamics effects become stronger. It is worthwhile pointing out that in $d\geq4$ dimensions, the thermodynamic volume of EBI AdS black holes is just the naive geometric volume $V=\left( d-1\right) ^{-1}\omega_{d-2}r_{+}^{d-1}$, which means that $R=1$, and hence higher dimensional EBI AdS black holes are not super-entropic. 
On the other hand, for higher dimensional EBI AdS black holes, the entropy $S$ and the volume $V$ are not independent, which leads to a vanishing constant-volume specific heat $C_{V}=0$ \cite{Dolan:2010ha}. \begin{acknowledgments} We are grateful to Wei Hong and Yucheng Huang for useful discussions and numerical analysis. This work is supported in part by NSFC (Grants No. 11005016, 11875196 and 11375121). \end{acknowledgments}
\section{Introduction} The main theme of this paper and its companion \cite{oim_copain} is the Landau levels of compact manifolds. For a physicist, the Landau quantization refers to a charged particle confined to two dimensions and exposed to a magnetic field. It has discrete energy levels connected by ladder operators. Besides the planar geometry considered by Landau \cite{La30}, the case of Riemann surfaces has been investigated in the context of the quantum Hall effect, see \cite{IeLi94} and references therein. From a mathematical point of view, a natural generalisation is the Bochner Laplacian acting on the sections of a Hermitian line bundle $L$ on a compact manifold. This Laplacian is defined from two data: a Riemannian metric on the base and a connection on the line bundle. The idea underlying this work is that when the curvature of the connection is non-degenerate and large with respect to the metric, the spectrum of the Laplacian exhibits a structure similar to the Landau quantization. More specifically, let us assume that the curvature is related to the Riemannian metric by a complex structure, and consider the spectrum of the Laplacian of $L^{k}$ in the large $k$ limit. In this setting, Faure-Tsujii \cite{FaTs15} have shown that the eigenvalues are grouped in clusters, each of them representing a generalised Landau level. The first level was previously identified by Guillemin-Uribe \cite{GuUr88} and studied further by Borthwick-Uribe \cite{BoUr96} as a generalization of K\"ahler quantization. In particular, its dimension is given by a Riemann-Roch number and it comes with an algebra of Toeplitz operators quantizing the classical Poisson algebra. Our goal in this paper is to extend these results to the higher Landau levels. 
Our main results are: \begin{enumerate} \item the dimension of the $m$-th Landau level is the Riemann-Roch number of $L^k \otimes F_m$ when $k$ is sufficiently large, where $F_m$ is the $m$-th symmetric power of the complex tangent bundle of the base; \item there is an algebra of Berezin-Toeplitz operators associated to the $m$-th Landau level, the symbols of these operators being sections of the endomorphism bundle $\operatorname{End} F_m$; \item the $m$-th Landau level is isomorphic with the first Landau level twisted by $F_m$ through a ladder operator; these isomorphisms are compatible with the Berezin-Toeplitz operators. \end{enumerate} The main ingredient to establish these results is an asymptotic expansion of the Schwartz kernel of the spectral projector of each level. For the first level, when the complex structure is integrable, the K\"ahler case, this kernel is the Szeg\"o kernel. Its asymptotics have been well understood since the seminal work by Boutet de Monvel and Sj\"ostrand \cite{BoSj} and have been used in numerous papers starting from \cite{BoGu}, \cite{BoMeSc}, \cite{Ze}. In the non-K\"ahler case, the asymptotics of the first level projector kernel have been obtained by Borthwick-Uribe \cite{BoUr07} and Ma-Marinescu \cite{MaMa08}. For the higher Landau levels, this asymptotic expansion will be the main result of our second paper \cite{oim_copain}. In the current paper, we will rely on this asymptotic expansion; more generally, we will show that the previous results hold for higher Landau levels defined as the image of any projector whose Schwartz kernel has the convenient asymptotics. Here the inspiration is the generalised Toeplitz structure of Boutet de Monvel-Guillemin \cite{BoGu} and our previous work \cite{oim}, the idea being that the only important feature of the Landau levels is this asymptotic expansion. 
The main tool we will use for the proofs is a particular class of operators containing the Landau level projectors, the associated Toeplitz operators and also the generalised ladder operators. The operators in this class are controlled at first order by their symbols, which are defined as sections of a bundle of non-commutative algebras. Each of these algebras is generated by the spectral projectors and ladder operators of a Landau Hamiltonian. By this mechanism, the basic properties of the Landau quantization are transferred to the Bochner Laplacian. To finish this general introduction, let us mention the two contemporaneous papers \cite{Yuri_1}, \cite{Yuri_2} by Yuri Kordyukov on the same subject. Let us mention as well that in a related but different context, belonging to homogeneous microlocal analysis instead of semi-classical analysis, Boutet de Monvel-Guillemin \cite[Chapter 15]{BoGu} and Epstein-Melrose \cite[Chapter 6]{EM} have considered generalised Szegö projections at higher level with associated Toeplitz algebras, which are similar to our constructions. \subsection{Magnetic Laplacian} \subsubsection*{Constant magnetic intensity} Consider a Riemannian manifold $(M,g)$ with a Hermitian line bundle $L $ equipped with a connection $\nabla$. Associated to these data is a Laplacian $\frac{1}{2} \nabla^* \nabla$ acting on $\Ci ( M, L)$, which from the physical point of view is a Schr\"odinger operator with a magnetic field $\Omega = i \operatorname{curv} ( \nabla) \in \Omega^2 (M, {\mathbb{R}})$. We will assume that $\Omega$ is non-degenerate at each point and has a constant magnetic intensity with respect to $g$ in the following sense. In the case where $M$ is a surface, the magnetic intensity is the positive function defined by $ |\Omega| = B \operatorname{vol}_g$, where $\operatorname{vol}_g $ is the Riemannian volume, and we merely assume that $B$ is constant. In higher dimension, $\Omega$ being non-degenerate, the dimension of $M$ is even, say $2n$. 
At any $p \in M$, there exists a skew-symmetric endomorphism $j_B (p)$ of $(T_pM, g_p)$ such that $\Omega_p (X,Y) = g_p ( j_B(p) X, Y)$. The eigenvalues of $j_B(p)$ are $ \pm i B_{\ell} (p)$ with $0 < B_1(p) \leqslant \ldots \leqslant B_n (p)$. We assume that these eigenvalues are all equal, $B_1 = \ldots = B_{n}$, and do not depend on $p$. Equivalently $j_B(p) = B j(p)$ with $B$ a positive constant and $j$ an almost complex structure of $M$ compatible with $g$, cf. Proposition 2.5.6. of \cite{McSa}. So we have that $\Omega = B {\omega}$ where $B >0$ is constant and ${\omega}$ is a symplectic form of $M$ defined by ${\omega} ( X,Y) = g (jX, Y)$. We will consider the large $B$ limit. To do this, we will replace $L$ by $L^k$, $k \in {\mathbb{N}}$, so that the curvature of $\nabla^{L^k}$ is $k B {\omega}$, and let $k$ tend to infinity. We will also normalise the metric so that $B=1$, and our magnetic intensity is simply $k$. Alternatively, we can introduce our data as follows. Consider a compact symplectic manifold $(M^{2n}, {\omega})$ with a compatible almost complex structure $j$ and a Hermitian line bundle $L \rightarrow M$ with a connection $\nabla$ having curvature $\frac{1}{i} {\omega}$. Such a bundle is called a {\em prequantum} bundle in the Kostant-Souriau theory, where it is used to define the geometric quantization of $M$. For any positive integer $k$, we consider the Laplacian \begin{gather} \Delta_k = \tfrac{1}{2}( \nabla^{L^k}) ^* \nabla^{L^k} : \Ci ( M , L^k)\rightarrow \Ci ( M , L^k) \end{gather} with $\nabla^{L^k} : \Ci ( M, L^k) \rightarrow \Omega^1 (M, L^k)$ the covariant derivative induced by $\nabla$, and the Riemannian metric $g(X,Y) = {\omega} (jX, Y)$ independent of $k$. \subsubsection*{Earlier results} It is known that the spectrum ${\sigma} ( \Delta_k)$ of $\Delta_k$ is partitioned into clusters around each point of $k ( \frac{n}{2} + {\mathbb{N}})$ in the large $k$ limit. 
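The cluster centers $k ( \tfrac{n}{2} + {\mathbb{N}})$ are those of the model Landau Hamiltonian built from ladder operators. As an illustrative aside (the finite matrix truncation and its size are our own choices, not part of the analysis), one can exhibit this spectrum and the ladder relation numerically for $n=1$:

```python
import numpy as np

N = 8  # truncation dimension, chosen arbitrarily for illustration
# Annihilation operator on span{|0>, ..., |N-1>}: a|m> = sqrt(m) |m-1>.
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
adag = a.T

# Model Landau Hamiltonian for n = 1: H = a^dag a + 1/2.
H = adag @ a + 0.5 * np.eye(N)

# Its spectrum sits on 1/2 + {0, 1, 2, ...}, the cluster centers.
print(np.diag(H))                        # [0.5 1.5 2.5 ... 7.5]

# Ladder relation H a^dag = a^dag (H + 1): a^dag shifts one level up.
err = H @ adag - adag @ (H + np.eye(N))
print(np.abs(err).max())                 # ~0 (machine precision)
```

The same oscillator algebra reappears fiberwise in the symbol spaces introduced below.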
More precisely, for any $m \in {\mathbb{N}}$, define the interval $I_m$ by $$ I_0 = [0, \tfrac{n}{2} + \tfrac{1}{2}], \qquad I_m = (\tfrac{n}{2} +m) + [-\tfrac{1}{2} , \tfrac{1}{2} [ \text{ if } m \geqslant 1 ,$$ so that we have a partition $[0,\infty [ = \bigcup _{m\in {\mathbb{N}}} I_m$. Then we set \begin{gather} \label{eq:def_Landau} {\Sigma}_{m,k} := \bigl( k^{-1} {\sigma} (\Delta_k) \bigr) \cap I_m , \qquad \mathcal{H}_{m,k} := \bigoplus_{{\lambda} \in {\Sigma}_{m,k}} \ker ( k^{-1} \Delta_k - {\lambda}). \end{gather} It was proved by Faure-Tsujii \cite{FaTs15} that \begin{gather} \label{eq:Si_m:Faure_Tsujii} {\Sigma}_{m,k} \subset \Bigl( \tfrac{n}{2}+ m + C_m k^{-\frac{1}{4}} [-1,1] \Bigr) \end{gather} and by Demailly \cite{Dem} that \begin{gather} \label{eq:asymptot_Demailly} \operatorname{dim} \mathcal{H}_{m,k} = \Bigl( \frac{k}{2 \pi} \Bigr)^n { {m+n-1} \choose {n-1} } \operatorname{vol} (M) + \operatorname{o} (k^n) . \end{gather} For a surface $(n=1)$ with a constant Gauss curvature $S$, more precise results have been obtained by Iengo-Li \cite{IeLi94}: if $k + m S >0$, then \begin{gather} \label{eq:surface_eignvalue_dim} \begin{split} {\Sigma}_{m,k} = \bigl\{ \tfrac{1}{2} + m + k^{-1} S \tfrac{m(m+1)}{2} \bigr\} , \\ \operatorname{dim} \mathcal{H}_{m,k} = \tfrac{k}{2 \pi} \operatorname{vol} (M) + (\tfrac{1}{2} + m ) \chi (M). \end{split} \end{gather} So in this case, when $k$ is sufficiently large, the $m$-th cluster reduces to a single eigenvalue, degenerate with multiplicity equal to $\operatorname{dim} \mathcal{H}_{m,k}$. The first cluster has been further studied. In the K\"ahler case, that is when the complex structure $j$ is integrable, $L$ has itself a natural holomorphic structure such that $\overline{\partial}_L = \nabla^{0,1}$ and by Kodaira identities, we have when $k$ is sufficiently large that \begin{gather} {\Sigma}_{0,k} = \{ \tfrac{n}{2} \} , \qquad \mathcal{H}_{0,k} = H^0 ( M, L^k). 
\end{gather} The dimension of $\mathcal{H}_{0,k}$ is then given by the Riemann-Roch-Hirzebruch Theorem \begin{gather} \label{eq:RRH} \dim \mathcal{H}_{0,k} = \int_M \exp \Bigl( \frac{ k {\omega} }{2\pi } \Bigr) \; \operatorname{Todd} M . \end{gather} Here $\operatorname{Todd} M$ is the Todd class of $(M,j)$. More generally, when $j$ is not necessarily integrable, it was proved by Guillemin-Uribe \cite{GuUr88} that \begin{gather} \label{eq:si_0-k_Guillemin_Uribe} {\Sigma}_{0,k} \subset \Bigl( \tfrac{n}{2} + C_0 k^{-1} [-1,1] \Bigr) \end{gather} and by Borthwick-Uribe \cite{BoUr96} that the dimension of $\mathcal{H}_{0,k}$ is given by \eqref{eq:RRH} when $k$ is sufficiently large. \subsection{Main results} In the sequel $m \in {\mathbb{N}}$ is a fixed non-negative integer and all the results hold in the large $k$ limit, with estimates and bounds depending on $m$. \subsubsection*{Dimension} Our first result is the computation of the dimension of $\mathcal{H}_{m,k}$ as the Riemann-Roch number of $L^k \otimes \mathcal{D}_m (TM)$, where $\mathcal{D}_m (TM)$ is the $m$-th symmetric power of $(T^{0,1} M)^*$. Here, $T^{0,1}M = \ker (j + i )$ and $j$ is the almost complex structure introduced previously. The reason why we prefer to work with $(T^{0,1}M)^*$ instead of the isomorphic bundle $T^{1,0}M$ will become clear later. \begin{theo} \label{theo:dim_landau} If $k$ is sufficiently large, then $$ \operatorname{dim} \mathcal{H}_{m,k} = \int_M \exp \Bigl( \frac{ k {\omega} }{2\pi } \Bigr) \; \operatorname{ch} ( \mathcal{D}_m (TM) ) \operatorname{Todd} M $$ with $\operatorname{ch}$ the Chern character and $\operatorname{Todd} M$ the Todd class of $(M,j)$. \end{theo} As far as we know, Theorem \ref{theo:dim_landau} is a new result, except in the cases already mentioned ($n=1$ with constant curvature or $m=0$). 
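As a consistency check of Theorem \ref{theo:dim_landau} in the surface case $n=1$, where $\mathcal{D}_m(TM)$ is isomorphic with $K^{-m}$ so that $\operatorname{ch}(\mathcal{D}_m(TM)) = 1 + m\, c_1(TM)$ and $\operatorname{Todd} M = 1 + \tfrac{1}{2} c_1(TM)$, one can expand the integrand, keep the degree-two part, and use $\int_M c_1(TM)=\chi(M)$. The following sympy sketch, a symbolic bookkeeping device of our own (with `vol` standing for $\operatorname{vol}(M)$ and `chi` for $\chi(M)$), recovers the dimension formula of \eqref{eq:surface_eignvalue_dim}:

```python
import sympy as sp

k, m = sp.symbols("k m", positive=True)
vol, chi = sp.symbols("vol chi")  # vol(M) and Euler characteristic chi(M)
w, c1 = sp.symbols("w c1")        # degree-2 classes: [omega] and c_1(TM)

# Riemann-Roch integrand for n = 1 with D_m(TM) = K^{-m}:
# exp(k w / 2 pi) * ch(K^{-m}) * Todd(M).  On a surface the product of
# two degree-2 classes vanishes, so only degree one in (w, c1) survives.
integrand = sp.expand((1 + k*w/(2*sp.pi)) * (1 + m*c1) * (1 + c1/2))
top = sum(t for t in integrand.as_ordered_terms()
          if sp.total_degree(t, w, c1) == 1)

# "Integration" over M replaces the top-degree classes by their integrals.
dim = top.subs({w: vol, c1: chi})
print(sp.simplify(dim))   # equals k*vol/(2*pi) + (m + 1/2)*chi
```

The output matches the second line of \eqref{eq:surface_eignvalue_dim}, as the remark below also notes.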
\begin{rem} \begin{itemize} \item[-] When $n=1$, $\mathcal{D}_m ( TM)$ is isomorphic with $K^{-m}$, $K$ being the canonical bundle, and it is easy to see that we recover the second equation of \eqref{eq:surface_eignvalue_dim}. However, even for $n=1$, Theorem \ref{theo:dim_landau} goes further since we don't assume that the Gauss curvature is constant. In this generality, it is not likely that ${\Sigma}_{m,k}$ consists of a single degenerate eigenvalue, but the dimension of the $m$-th cluster is given by the same formula. \item[-] For a general dimension $n$, $\mathcal{D}_m (TM)$ has rank ${ {m+n-1} \choose {n-1} }$ and we recover the asymptotics \eqref{eq:asymptot_Demailly}. \end{itemize} \end{rem} \subsubsection*{Symbol spaces} In the sequel, we will use $\mathcal{D}_m (TM)$ as a bosonic space, with associated creation and annihilation operators defined as follows. For any $x \in M$, let us view $ \mathcal{D}_{m} ( T_xM)$ as the space of homogeneous polynomial maps $T_x^{0,1} M \rightarrow {\mathbb{C}}$ of degree $m$. Set \begin{gather} \label{eq:dtm_intro} \mathcal{D} (T_xM) := \bigoplus_{m \in {\mathbb{N}}} \mathcal{D}_m (T_xM) \end{gather} and let $\pi_m(x)$ be the corresponding projector of $\mathcal{D} (T_x M)$ onto $\mathcal{D}_m (T_x M)$. For any $ Y \in T_xM \otimes {\mathbb{C}}$, let $\rho (Y)$ be the endomorphism of $\mathcal{D} (T_xM)$ defined as follows. Write $Y = U + \overline V$ with $U, V \in T_x ^{1,0} M$. Then \begin{gather} \label{eq:rho} \rho ( Y)= \rho ( U) + \rho ( \overline V) \; \text{ with } \; \begin{cases} \rho (U) = \text{ multiplication by } i {\omega} ( U, \cdot) \\ \rho ( \overline {V}) = \text{ derivation with respect to } \overline V \end{cases} \end{gather} More concretely, let $(U_i)$ be a basis of $T^{1,0}_xM$ such that $\frac{1}{i} {\omega} ( U_i , \overline {U}_j) = \delta_{ij}$. 
Let $(z_i)$ be the basis of $(T^{1,0}_xM)^*$ dual to $(U_i)$, so $z_i = \frac{1}{i} {\omega} ( \cdot, \overline{U}_i)$ and $ \mathcal{D} (T_x M) = {\mathbb{C}} [\overline{z}_1 , \ldots, \overline{z}_n]$. Then for any polynomial $P$ in the variables $\overline{z}_1$, $\ldots$, $\overline{z}_n$ \begin{gather} \rho ( U_i) P = - \overline{z}_i P, \qquad \rho ( \overline{U}_i) P = \frac{\partial P}{\partial \overline{z}_i} . \end{gather} So $-\rho ( U_i)$ and $\rho (\overline{U}_i)$ are respectively the creation and annihilation operators. \subsubsection*{Berezin-Toeplitz operators} Our second result is about Berezin-Toeplitz operators. By \cite{BoUr96}, the spaces $\mathcal{H}_{0,k}$ can be considered as quantizations of $M$, replacing the standard K\"ahler quantization $H^0 ( M,L^k)$, for symplectic manifolds not necessarily having an integrable complex structure. An important feature is that there is a natural way to pass from classical to quantum Hamiltonians, provided by the Berezin-Toeplitz quantization. In the semi-classical limit, defined here as the large $k$ limit, the product and commutator of quantum observables correspond to the product and Poisson bracket of classical observables, up to some error terms. More precisely, let $\Pi_{m,k}$ be the orthogonal projector of $\Ci ( M , L^k)$ onto $\mathcal{H}_{m,k}$ and for any $f \in \Ci (M)$, let $T_{m,k} (f)$ be the endomorphism of $\mathcal{H}_{m,k}$ defined by \begin{gather} \label{eq:toeplitz} T_{m,k} (f) \psi = \Pi_{m,k} ( f \psi) \qquad \forall \; \psi \in \mathcal{H}_{m,k} . 
\end{gather} For the first Landau level, it is known \cite{BoMeSc}, \cite{oim_op}, \cite{MaMa} that for any $N$ \begin{gather} \label{eq:t_0_k_produit_tout_ordre} T_{0,k} ( f) T_{0,k} (g) = \sum_{\ell = 0 } ^N k^{-\ell} T_{0,k} ( B_\ell ( f,g)) + \mathcal{O} (k^{-(N+1)}) \end{gather} for some bidifferential operators $B_{\ell} : \Ci ( M) \times \Ci ( M) \rightarrow \Ci ( M)$, where \begin{gather} \label{eq:b_0_b_1} B_0 ( f,g) = fg, \qquad B_1(f,g) = - \tfrac{1}{2} g(X,Y) + \tfrac{1}{2i} {\omega} ( X, Y), \end{gather} $X$ and $Y$ being the Hamiltonian vector fields of $f$ and $g$ respectively. For the generalisation to higher Landau levels, we will use in addition to the $T_{m,k} ( f)$'s the following operators: let $p \in {\mathbb{N}}$ and $X_1$, \ldots, $X_{2p}$ be vector fields of $M$. Define $ T_{m,k} (X_1, \ldots , X_{2p}): \mathcal{H}_{m,k} \rightarrow \mathcal{H}_{m,k}$ by \begin{gather} \label{eq:toep_der} T_{m,k} (X_1, \ldots , X_{2p}) (\Psi) = k^{ -p} \Pi_{m,k} ( \nabla^{L^k}_{X_1} \ldots \nabla_{X_{2p}}^{L^k} \Psi) . \end{gather} Let $\mathcal{T}_m$ be the vector space of families $ (P_k : \mathcal{H}_{m,k} \rightarrow \mathcal{H}_{m,k}, \; k \in {\mathbb{N}})$ spanned by the $(T_{m,k} (f), k \in {\mathbb{N}})$'s and the $(T_{m,k} ( X_1, \ldots , X_{2p}), \; k \in {\mathbb{N}})$'s. Here the functions $f$ or vector fields $X_1, \ldots, X_{2p}$ do not depend on $k$. As we will see, for any $(P_k)$ in $\mathcal{T}_m$, the operator norm $\| P_k\|$ is bounded independently of $k$. Define the semiclassical completion $\mathcal{T}_m ^{\operatorname{sc}}$ as the vector space of families $(P_k : \mathcal{H}_{m,k} \rightarrow \mathcal{H}_{m,k} , \; k \in {\mathbb{N}})$ such that for any $N$, $$ P_k = \sum_{\ell= 0 } ^N k^{-\ell} P_{\ell,k} + \mathcal{O} ( k ^{-N-1}) $$ where the coefficients $(P_{\ell, k })_k$, $\ell \in {\mathbb{N}}$ all belong to $\mathcal{T}_m$, and the $\mathcal{O} $ is for the operator norm. 
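Let us note a consequence of \eqref{eq:t_0_k_produit_tout_ordre} and \eqref{eq:b_0_b_1}, a sanity check we add here, assuming the sign convention $\{ f, g \} = {\omega} (X,Y)$ for the Poisson bracket: the metric term in $B_1$ is symmetric in $f$ and $g$ while ${\omega}$ is antisymmetric, so that \begin{gather*} B_1 ( f,g) - B_1 (g,f) = \tfrac{1}{2i} \bigl( {\omega} (X,Y) - {\omega} ( Y,X) \bigr) = \tfrac{1}{i} {\omega} ( X, Y) \end{gather*} and consequently $ik [ T_{0,k} (f), T_{0,k} (g)] = T_{0,k} ( \{ f,g \}) + \mathcal{O} ( k^{-1})$. The theorem below extends this correspondence principle to every Landau level. 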
\begin{theo} \label{theo:Toeplitz_Landau} For any $m \in {\mathbb{N}}$, we have: \begin{enumerate} \item $ \mathcal{T}_m^{\operatorname{sc}}$ is closed under product. \item There exists a linear map $ \tau : \mathcal{T}_m^{\operatorname{sc}} \rightarrow \Ci ( M , \operatorname{End} ( \mathcal{D}_m (TM))) $ defined on the generators \eqref{eq:toeplitz} and \eqref{eq:toep_der} by \begin{xalignat}{2} \label{eq:symbolmap} \begin{split} \tau ( T_{m,k} ( f) )_x & = f(x) \operatorname{id}_{\mathcal{D}_m (T_x M)}, \\ \tau ( T_{m,k} ( X_1, \ldots , X_{2p}) )_x & = \pi_m(x) \rho(X_1(x)) \ldots \rho (X_{2p}(x)) \end{split} \end{xalignat} The map $\tau$ is onto and its kernel is $k^{-1} \mathcal{T}_{m}^{\operatorname{sc}}$. \item For any $P,Q \in \mathcal{T}_{m}^{\operatorname{sc}}$, \begin{xalignat}{2} \label{eq:prod} & \tau ( PQ) = \tau ( P) \tau (Q) , \\ \label{eq:norm} & \| P_k \| = \sup \{ \| \tau ( P)_x \| , \; x \in M \} + \mathcal{O} ( k^{-1}), \\ \label{eq:trace} & P_k(x,x) = \Bigl( \frac{k}{2\pi} \Bigr)^n ( \operatorname{tr} (\tau (P)_x) + \mathcal{O} (k^{-1})). \end{xalignat} \item For any $f$, $g$ in $\Ci ( M)$, we have \begin{gather} \label{eq:produit} T_{m,k} (f) T_{m,k} (g) = T_{m,k} ( fg) + k^{-1} T_{m,k} (X,Y) + \mathcal{O} ( k^{-2}) \end{gather} where $X$, $Y$ are the Hamiltonian vector fields of $f$ and $g$. In particular, \begin{gather} \label{eq:com_poisson} ik [ T_{m,k} (f), T_{m,k} (g) ] = T_{m,k} ( \{ f,g \}) + \mathcal{O} ( k^{-1}) \end{gather} with $\{ \cdot, \cdot \}$ the Poisson bracket of $(M, {\omega})$. \end{enumerate} \end{theo} We call $\tau$ the symbol map. The symbol of the generators \eqref{eq:symbolmap} is defined in terms of the endomorphisms \eqref{eq:rho}. The product of symbols in the right-hand side of \eqref{eq:prod} is the pointwise composition. In the norm estimate \eqref{eq:norm}, the norm of $\tau ( P)_x$ is defined in terms of the hermitian structure of $\mathcal{D}_m (T_xM)$. 
In \eqref{eq:trace}, $P_k (x,x)$ is the value of the Schwartz kernel of $P_k$ at $(x,x)$. Integrating \eqref{eq:trace}, we obtain the following estimate for the trace of $P_k$: \begin{gather} \operatorname{tr} P_k = \Bigl( \frac{k}{2\pi} \Bigr)^n \int_M \operatorname{tr} (\tau (P)_x) \mu_M(x) + \mathcal{O} ( k^{n-1}) \end{gather} where $\mu_M = {\omega}^n /n!$. Applying this to $P_k = T_{m,k} (1)$, the identity of $\mathcal{H}_{m,k}$, we recover the estimate \eqref{eq:asymptot_Demailly}. The main estimates for the Toeplitz operators associated to functions, that is, $T_{m,k} (f) T_{m,k} (g) = T_{m,k} ( fg) + \mathcal{O} ( k^{-1})$ and $ik [T_{m,k} (f), T_{m,k} (g) ]= T_{m,k} ( \{ f, g \}) + \mathcal{O} ( k^{-1})$, have been proved independently in \cite{Yuri_2}. An important difference between the first and the higher Landau levels, which does not appear in \cite{Yuri_2}, is that we use the Toeplitz operators $T_{m,k} ( X_1, \ldots , X_{2p})$ in addition to the $T_{m,k} (f)$'s and the related fact that the symbol is a section of $\operatorname{End} ( \mathcal{D}_m(TM))$ instead of a scalar-valued function. In the surface case, $n=1$, $\mathcal{D}_m(TM)$ is a line bundle, so any endomorphism of $\mathcal{D}_m(T_xM)$ is scalar, and as a consequence of Theorem \ref{theo:Toeplitz_Landau}, $\mathcal{T}_{m}^{\operatorname{sc}}$ consists of the families $( P_k = T_{m,k} ( f( \cdot, k)) + \mathcal{O} ( k^{-\infty}) , \; k \in {\mathbb{N}})$ where the multiplicator $f( \cdot, k)$ depends on $k$ in such a way that it admits an expansion $f( \cdot, k ) = f_0 + k^{-1} f_1 + \ldots$. We could as well prove that the $T_{m,k} (f)$ satisfy \eqref{eq:t_0_k_produit_tout_ordre} for some bidifferential operators $B_\ell^m$ depending on $m$. In the general case, we may ask if there exists a subalgebra of $\mathcal{T}_{m}^{\operatorname{sc}}$ consisting of Toeplitz operators associated to scalar multiplicators. 
This is not the case: as a short computation shows, if $X$ and $Y$ are any real vector fields of $M$, then by \eqref{eq:symbolmap}, the symbol of $(T_{m,k}(X,Y))$ at $x \in M$ is the endomorphism of $\mathcal{D}_m(T_x M)$ given in terms of a basis $(U_i)$ of $T_x^{1,0}M$ as above by \begin{gather} \label{eq:symbolTXY} \tau ( T_{m} (X,Y) )(x) = - \sum_{i,j =1}^n \pi_m (x) ( {\alpha}_i \overline{{\beta}}_j \overline{z}_i \partial_{\overline{z}_j} + \overline{{\alpha}}_i {\beta}_j \partial_{\overline{z}_i} \overline{z}_j ) \pi_m(x) \end{gather} where $X(x) = \sum {\alpha}_i U_i + \overline{{\alpha}}_i \overline{U}_i$ and $Y(x) = \sum {\beta}_i U_i + \overline{{\beta}}_i \overline{U}_i$. Assume that $n \geqslant 2$ and $m \geqslant 1$. Then it is not difficult to see from \eqref{eq:symbolTXY} that $\tau ( T_m (X,X) ) (x)$ is a scalar multiple of $\pi_m(x)$ only when $X(x) =0$. So by the second assertion of Theorem \ref{theo:Toeplitz_Landau} and \eqref{eq:produit}, for any $f \in \Ci ( M , {\mathbb{R}})$ not locally constant, there is no function $g$ such that $T_{m,k}(f) T_{m,k}(f) = T_{m,k} (f^2) + k^{-1}T_{m,k} (g) + \mathcal{O} ( k^{-2})$. For $m=0$, we recover from \eqref{eq:symbolTXY} and \eqref{eq:produit} the formula for $B_1$ in \eqref{eq:b_0_b_1}. For $n=1$, we have similarly that $T_{m,k} ( f) T_{m,k} ( g) = T_{m,k} ( fg) + k^{-1} T_{m,k} (B_1(f,g) ) + \mathcal{O} ( k^{-2})$ with $$ B_1 (f,g) = - ( \tfrac{1}{2} +m ) g(X,Y) + \tfrac{1}{2i} {\omega} (X,Y)$$ where $X,Y$ are the Hamiltonian vector fields of $f$ and $g$. \subsubsection*{Ladder operators} The last result we would like to emphasize in this introduction is the construction of some ladder operators for the spaces $\mathcal{H}_{m,k}$. 
In the surface case with constant Gauss curvature, $\mathcal{H}_{m,k}$ is naturally isomorphic with the space of holomorphic sections of $L^k \otimes K^{-m}$ where $K$ is the canonical bundle \cite{Te06}, the isomorphism being the {\it ladder} operator $\overline{\partial}_{L^k \otimes K^{-m+1}} \circ \ldots \circ \overline{\partial}_{L^k}$, cf. Appendix \ref{sec:appendix}. Here we will show that the family $(\mathcal{H}_{m,k}, \; k \in {\mathbb{N}})$ is isomorphic to a quantization of $M$ twisted by the vector bundle $\mathcal{D}_m (TM) $. Recall that for any Hermitian vector bundle $F \rightarrow M $, we can define a family of finite dimensional subspaces $\mathcal{H}_{F,k} \subset \Ci ( M , L^k \otimes F)$, $k \in {\mathbb{N}}$, having the following properties: \begin{enumerate} \item $\operatorname{dim} \mathcal{H}_{F,k} = \int_M \operatorname{ch} ( L^k \otimes F) \; \operatorname{Todd} M$, when $k$ is sufficiently large. \item the space $\mathcal{T}_F^{\operatorname{sc}}$, consisting of the families $(T_k \in \operatorname{End} (\mathcal{H}_{F,k}) , \; k \in {\mathbb{N}})$ having an expansion of the form $$ T_k = \sum_{\ell = 0 }^N k^{-\ell} T_{F,k} ( f_{\ell} ) + \mathcal{O} ( k^{-N-1}) , \qquad \forall N \in {\mathbb{N}} $$ for a sequence $(f_\ell)$ in $\Ci ( M , \operatorname{End} F)$, is closed under product. Here, $T_{F,k} (f_{\ell}) (\psi) = \Pi_{F,k} ( f_{\ell} \psi)$ for any $\psi \in \mathcal{H}_{F,k} $ where $\Pi_{F,k}$ is the orthogonal projector of $\Ci ( M, L^k \otimes F)$ onto $\mathcal{H}_{F,k}$. \item At first order, the product is given by the pointwise product, that is, $ T_{F,k} ( f) T_{F,k} (g) = T_{F,k} ( fg) + \mathcal{O} (k^{-1})$ for any $f,g \in \Ci (M, \operatorname{End} F)$. \end{enumerate} The algebra $\mathcal{T}_F^{\operatorname{sc}}$ is called the Toeplitz algebra. 
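For orientation, let us record the leading term of the dimension in the first property; this is a sketch we add, under the prequantum normalization $c_1 (L) = [ {\omega} / 2\pi ]$: since $\operatorname{ch} ( L^k \otimes F) = e^{k c_1(L)} \operatorname{ch} (F)$, the top-degree term of the integrand gives \begin{gather*} \operatorname{dim} \mathcal{H}_{F,k} = \operatorname{rank} (F) \Bigl( \frac{k}{2\pi} \Bigr)^n \int_M \frac{{\omega}^n}{n!} + \mathcal{O} ( k^{n-1}) , \end{gather*} consistent with the asymptotic \eqref{eq:asymptot_Demailly} when $F = \mathcal{D}_m (TM)$ has rank ${ {m+n-1} \choose {n-1} }$. 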
In the K\"ahler case, that is when $(M, {\omega})$ is K\"ahler, $L$ holomorphic with $\nabla$ the Chern connection, and $F$ holomorphic as well, the space $\mathcal{H}_{F,k}$ can be defined as the space $ H^0 (L^k \otimes F)$ of holomorphic sections. In the non-K\"ahler case, various constructions have been developed \cite{BoUr96}, \cite{MaMa}: Spin-c quantization, first Landau level of a Laplacian acting on $\Ci ( M , L^k \otimes F)$, or more generally the image of any projector of $\Ci ( M , L^k \otimes F)$ having a specific Schwartz kernel \cite{oim}. Let us call such a family $(\mathcal{H}_{F,k}, \; k \in {\mathbb{N}})$ a {\em quantization of $(M,L)$ twisted by $F$}. These twisted quantizations sometimes have better properties than the untwisted one (corresponding to $F={\mathbb{C}}$), typically when $F$ is a half-form bundle \cite{oim_eq}. The general case where the rank of $F$ is $\geqslant 2$ may seem a gratuitous generalization without applications but, interestingly, this is exactly what we need here. Assume $F$ is equipped with a connection $\nabla^F : \Ci ( M , F) \rightarrow \Omega^1 (M, F)$. Let $G = (T^{0,1}M)^*$ and $D_{F,k} : \Ci ( M , L^k \otimes F) \rightarrow \Ci ( M , L^k \otimes F \otimes G) $ be the $(0,1)$-part of the connection $\nabla^{F \otimes L^k}$ induced by $\nabla^F$ and $\nabla^{L^k}$. Endow $G$ with a connection and define the differential operators \begin{gather} \label{eq:def_w_k} \begin{split} W_k : \Ci ( M , L^k ) \rightarrow \Ci ( M, L^k \otimes \mathcal{D}_m (TM) )\\ W_k = R_m D_{G^{\otimes (m-1)},k} \circ D_{G^{\otimes (m-2)},k} \circ \ldots \circ D_{G, k} \circ D_{{\mathbb{C}},k} \end{split} \end{gather} where $R_m$ is the projection from $G^{\otimes m }$ onto $\mathcal{D}_m (TM) = \operatorname{Sym}^m G$. 
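For instance, in the lowest nontrivial case $m = 1$ (a remark we add for orientation): $\mathcal{D}_1 (TM) = G$ and $R_1$ is the identity, so \eqref{eq:def_w_k} reduces to $W_k = D_{{\mathbb{C}}, k}$, the $(0,1)$-part of $\nabla^{L^k}$. When $j$ is integrable and $\nabla$ is the Chern connection, $D_{{\mathbb{C}},k} = \overline{\partial}_{L^k}$, which is consistent with the first step of the surface ladder operator recalled at the beginning of this paragraph. 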
\begin{theo} \label{theo:ladder} For any quantization $( \mathcal{H}_{F,k}, \; k \in {\mathbb{N}})$ of $(M,L)$ twisted by $F= \mathcal{D}_m ( TM)$, the linear maps $$V_k = \tfrac{1}{m!} k^{-\frac{m}{2}} \Pi_{F,k} W_k : \mathcal{H}_{m,k} \rightarrow \mathcal{H}_{F,k}, \qquad k \in {\mathbb{N}} $$ satisfy: \begin{enumerate} \item $V_kV_k^* = \operatorname{id}_{\mathcal{H}_{F,k}} + \mathcal{O} ( k^{-1})$ and $V_k^* V_k =\operatorname{id}_{\mathcal{H}_{m,k}} + \mathcal{O} ( k^{-1})$. In particular, $V_k$ is an isomorphism when $k$ is sufficiently large. \item the conjugation by $V=(V_k)$ is an isomorphism between the Toeplitz algebra $ \mathcal{T}^{\operatorname{sc}}_m$ and $\mathcal{T}_F^{\operatorname{sc}}$ modulo $\mathcal{O} ( k^{-\infty})$. In particular, for any $(P_k) \in \mathcal{T}^{\operatorname{sc}}_m$, $(V_k P_k V_k^*)_k$ belongs to $\mathcal{T}_F^{\operatorname{sc}}$ and if $f \in \Ci ( M , \operatorname{End} F)$ is the symbol of $(P_k)$, then $V_k P_k V_k^* = T_{F,k} (f) + \mathcal{O} ( k^{-1})$. \end{enumerate} \end{theo} The first assertion of Theorem \ref{theo:ladder} tells us that $V_k$ is almost unitary. This can be improved by setting $U_k := A_k V_k$ with $A_k$ the endomorphism of $\mathcal{H}_{F,k}$ equal to $(V_kV_k^*)^{-1/2}$ when $k$ is sufficiently large and to $0$ for the first values of $k$. Then $U_k U_k^* = \operatorname{id}_{\mathcal{H}_{F,k}}$ and $U_k^* U_k = \operatorname{id}_{\mathcal{H}_{m,k}}$ when $k$ is sufficiently large. Furthermore, the second assertion of Theorem \ref{theo:ladder} holds with $(U_k)$ instead of $(V_k)$. 
\subsection{Generalised Landau level and Schwartz kernel expansion} \subsubsection*{Generalised Landau level} In the previous results, the $m$-th Landau level $\mathcal{H}_{m,k}$, $k\in {\mathbb{N}}$ can be replaced by any family $(\mathcal{H}_{m,k} \subset \Ci ( M , L^k ) , \; k \in {\mathbb{N}} ) $ of finite dimensional subspaces such that the Schwartz kernel of the orthogonal projector $\Pi_{m,k}$ of $\Ci ( M, L^k)$ onto $\mathcal{H}_{m,k}$ has a specific form in the large $k$ limit. We require first that \begin{gather} \label{eq:first_order} \Pi_{m,k} (x,y) = \Bigl( \frac{k}{ 2\pi} \Bigr)^n E^k (x,y) Q^{(n-1)}_m ( k \delta (x,y) ) + \mathcal{O} (k^{n-1}) \end{gather} where \begin{itemize} \item[-] $E$ is a section of $L \boxtimes \overline{L}$ such that $|E(x,y)| < 1$ when $x \neq y$, and its second order Taylor expansion along the diagonal has a specific form, cf. Equation \eqref{eq:hypotheseE}. In particular, $\ln |E(x+ \xi, x)|= - \frac{1}{4} |\xi|^2 + \mathcal{O} ( |\xi|^3)$. \item[-] $\delta \in \Ci (M^2)$ is any function vanishing to second order along the diagonal and satisfying $\delta (x+ \xi , x) = |\xi|^2 + \mathcal{O} (|\xi|^3)$, $\xi \in T_xM$. \item[-] $Q_m^{(p)} $ is the generalised Laguerre polynomial $Q^{(p)}_m (x) = \frac{x^{-p}}{m!} \bigl ( \frac{d}{dx} -1 \bigr) ^m x ^{m+p}$. \end{itemize} In addition to \eqref{eq:first_order}, we require a full expansion of the form \begin{gather} \label{eq:full} \Pi_{m,k} (x,y) = \Bigl( \frac{k}{ 2\pi} \Bigr)^n E^k (x,y) \sum _{\ell \in {\mathbb{Z}}} k^{-\ell} a_{\ell} (x,y) + \mathcal{O} (k^{-\infty}) \end{gather} with coefficients $a_{\ell} \in \Ci ( M^2)$ such that for $\ell<0$, $a_{\ell}$ vanishes to order $m(\ell) \geqslant -2 \ell$ along the diagonal and $m(\ell) + 2\ell \rightarrow \infty $ as $\ell \rightarrow - \infty$. The meaning of this expansion is not obvious because the negative $\ell$'s give positive powers of $k$. 
Actually, the condition satisfied by $|E|$ implies that $|E^k (x,y) b(x,y) | = \mathcal{O} ( k^{-m/2})$ when $b$ vanishes to order $m$ along the diagonal. So the $\ell$-th summand in \eqref{eq:full} is in $\mathcal{O} ( k^{n - \frac{1}{2} ( m ( \ell) + 2\ell)})$, and the expansion is meaningful because of the conditions satisfied by $m (\ell)$. We will prove that for any family $(\mathcal{H}_{m,k})$ whose associated projector $\Pi_{m,k}$ satisfies \eqref{eq:first_order} and \eqref{eq:full}, Theorems \ref{theo:dim_landau}, \ref{theo:Toeplitz_Landau} and \ref{theo:ladder} hold. On the other hand, in the second part of this work \cite{oim_copain}, cf. also \cite{Yuri_1}, it is proved that the Schwartz kernel of the orthogonal projector $\Pi_{m,k}$ onto the Landau levels $\mathcal{H}_{m,k}$ defined in \eqref{eq:def_Landau} from the Laplacian $\Delta_k$, satisfies \eqref{eq:first_order} and \eqref{eq:full}. The assumption that the magnetic field is constant with respect to the metric can be relaxed. It is actually possible to define some Landau levels and describe the asymptotic expansion of the associated projector as soon as a particular gap condition is satisfied, cf. \cite{oim_copain}. \subsubsection*{The class $\lag (A,B)$} To establish our results, we will introduce a specific class of operators, containing the projector $\Pi_{m,k}$, the Berezin-Toeplitz operators $T_{m,k} (f)$ and $T_{m,k} ( X_1, \ldots, X_{2p})$ and also the projector $\Pi_{F,k}$ of any twisted quantization, the corresponding Toeplitz operators $T_{F,k} (g)$, the isomorphisms $V_k$ of Theorem \ref{theo:ladder} and their unitarizations $(U_k)$. This operator class has a natural filtration, with associated symbol spaces, which allows us to prove most of the results by successive approximations, as is often done in microlocal analysis. 
Interestingly, the eigenprojectors of the Laplacian of ${\mathbb{C}}^n$ appear in the symbolic calculus, providing another link between the Landau Laplacian and our geometric Landau levels. Introduce two auxiliary Hermitian vector bundles $A$, $B$ over $ M$. Then $\lag (A,B)$ consists of families $(P_k : \Ci( M , L^k \otimes A ) \rightarrow \Ci(M , L^k \otimes B), \; k \in {\mathbb{N}})$ of operators having a smooth Schwartz kernel satisfying \begin{gather} \label{eq:exp_lag} P_k (x,y) = \Bigl( \frac{k}{2\pi} \Bigr)^n E^{k}(x,y) \sum_{\ell \in {\mathbb{Z}} } k^{-\frac{\ell}{2}} b_{\ell}(x,y) + \mathcal{O} ( k^{- \infty}) \end{gather} where $E$ is defined as in \eqref{eq:first_order}; the coefficients $b_{\ell} $ are in $\Ci ( M^2, B \boxtimes \overline{A})$; for $\ell <0$, $b_{\ell}$ vanishes to order $m (\ell) \geqslant -\ell $ along the diagonal; $m( \ell ) + \ell \rightarrow \infty $ as $\ell \rightarrow - \infty$, and the meaning of this expansion is the same as in \eqref{eq:full}. We have a decomposition into even/odd elements: $(P_k) \in \lag^{+} (A,B)$ (resp. $\lag^{-} (A,B)$) if the expansion \eqref{eq:exp_lag} holds with a sum over even (resp. odd) $\ell$'s. The main property is that this class of operators is closed under composition: \begin{gather} \label{eq:comp} \lag^{{\beta}} ( B, C) \cdot \lag^{{\alpha}} (A,B) \subset \lag^{{\alpha} {\beta}} (A,C), \qquad {\alpha} , {\beta} \in \{ \pm 1\}. \end{gather} In particular, $\lag^+ (A) := \lag^{+} (A,A)$ is an algebra. 
We also have a filtration $ \lag_q(A,B) := \lag (A,B) \cap \mathcal{O} (k^{-q/2})$, $q \in {\mathbb{N}}$, and the corresponding graded quotients are described by symbol maps ${\sigma}_q$: $$ 0 \rightarrow \lag_{q+1} (A,B) \rightarrow \lag_q (A,B) \xrightarrow{{\sigma}_q} \Ci (M , \mathcal{S} (M) \otimes \operatorname{Hom} (A,B) ) \rightarrow 0 . $$ Here, $\mathcal{S} (M)$ is an infinite rank vector bundle over $M$; each fiber $\mathcal{S}_x(M)$ is a subalgebra of the algebra of endomorphisms of the space $\mathcal{D} (T_xM)$ defined in \eqref{eq:dtm_intro}. This is compatible with the composition \eqref{eq:comp} in the sense that $\lag_p (B,C) \cdot \lag_q (A,B) \subset \lag_{p+q} (A,C)$ and the corresponding product of symbols is the pointwise product of $\mathcal{S}(M)$ tensored by $\operatorname{Hom} (B,C) \otimes \operatorname{Hom} (A,B) \rightarrow \operatorname{Hom} (A,C)$. The projector $(\Pi_{m,k})_k$ is an idempotent of $\lag^+ ({\mathbb{C}})$ with symbol ${\sigma}_0 ( \Pi_{m} )$ equal at $x$ to the projector $\pi_m (x)$ onto the $m$-th summand in \eqref{eq:dtm_intro}. The Toeplitz algebra introduced previously is \begin{gather} \label{eq:toep_land} \mathcal{T}_m^{\operatorname{sc}} = \{ P \in \lag^{+} ({\mathbb{C}}) : \; \Pi_m P \Pi_m = P \}. \end{gather} The isomorphism $V$ of Theorem \ref{theo:ladder} belongs to $\lag ( {\mathbb{C}}, F)$ and has the same parity as $m$. Interestingly, $\mathcal{S}_x(M)$ has a representation as operators of $L^2 ({\mathbb{C}}^n)$, and in this representation, $ \pi_m(x) = {\sigma}_0 ( \Pi_m)(x)$ is the projector onto the $m$-th Landau level of a magnetic Laplacian of ${\mathbb{C}}^n$. \subsubsection*{Outline of the paper} In Section \ref{sec:class-lag-pres}, we introduce the class $\lag (A,B)$ and state its main properties. 
In Section \ref{sec:projectors-Toeplitz}, we prove variations of the theorems stated before, where the $\mathcal{H}_{m,k}$ are subspaces of $\Ci ( M , L^k \otimes A)$ such that the corresponding family $(\Pi_{m,k})$ of orthogonal projectors belongs to $\lag^+ (A)$ with a convenient symbol. Sections \ref{sec:landau-levels-cn}, \ref{sec:schw-kern-oper} and \ref{sec:derivatives} are devoted to the proof of the properties of $\lag (A,B)$. The proofs of Theorems \ref{theo:dim_landau}, \ref{theo:Toeplitz_Landau} and \ref{theo:ladder} are given in the last subsection \ref{sec:ledernier}. In Appendix \ref{sec:appendix}, we prove formulas \eqref{eq:surface_eignvalue_dim} on constant curvature surfaces. \subsubsection*{Acknowledgment} I would like to thank Yuri Kordyukov for his collaboration at an early stage of this work. \section{The class \texorpdfstring{$\lag (A,B)$}{L(A,B)}} \label{sec:class-lag-pres} We start the discussion with the algebra in which the symbols of the operators of $\lag (A, B)$ take their values. The class $\lag (A,B)$ is defined in Subsection \ref{sec:operators}. \subsection{Symbol spaces} \label{sec:symbol-spaces} \subsubsection*{The algebra $\mathcal{S} ( {\mathbb{C}}^n)$} Let $n$ be a positive integer and denote by $z_1, \ldots, z_n$ the linear coordinates of ${\mathbb{C}}^n$. Let $\mathcal{D} ({\mathbb{C}}^n)= {\mathbb{C}} [ \overline{z}_1, \ldots, \overline{z}_n]$ be the space of antiholomorphic polynomial maps from ${\mathbb{C}}^n$ to ${\mathbb{C}}$. Introduce the scalar product \begin{gather} \label{eq:scal_product_bargm} \langle f, g \rangle = (2\pi)^{-n}\int_{{\mathbb{C}}^n} e^{-|z|^2} f (z)\, \overline{g (z)} \; d \mu_n (z) , \qquad f,\; g \in \mathcal{D} ( {\mathbb{C}}^n) \end{gather} where $|z|^2 = \sum_{i=1}^n |z_i|^2$ and $ \mu_n $ is the measure $\prod_{i=1}^n dz_i d\overline{z}_i$. The family $(({\alpha} !)^{-\frac{1}{2}} \overline{z}^{\alpha} , \, {\alpha} \in {\mathbb{N}}^n)$ is an orthonormal basis of $\mathcal{D} ( {\mathbb{C}}^n)$. 
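To make the normalization in \eqref{eq:scal_product_bargm} concrete, here is the one-variable computation behind this orthonormality, interpreting $dz \, d\overline{z}$ as twice the Lebesgue measure of ${\mathbb{C}} \simeq {\mathbb{R}}^2$ (a convention we assume). In polar coordinates $z = r e^{i \theta}$, \begin{gather*} \langle \overline{z}^a , \overline{z}^b \rangle = \frac{1}{2\pi} \int_{{\mathbb{C}}} e^{-|z|^2} \overline{z}^a z^b \; 2 \, dx \, dy = \frac{1}{\pi} \int_0^{2\pi} e^{i(b-a)\theta} \, d\theta \int_0^{\infty} e^{-r^2} r^{a+b+1} \, dr = \delta_{ab} \, a! \end{gather*} since $\int_0^{\infty} e^{-r^2} r^{2a+1} dr = a!/2$. Taking products over the $n$ variables gives $\| \overline{z}^{{\alpha}} \|^2 = {\alpha} !$, as claimed. 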
We will also need the decomposition into even and odd functions \begin{gather} \label{eq:paire_impaire} \mathcal{D} ( {\mathbb{C}}^n) = \mathcal{D}^+ ({\mathbb{C}}^n) \oplus \mathcal{D} ^{-} ( {\mathbb{C}}^n) \end{gather} where $ \mathcal{D}^+ ({\mathbb{C}}^n) $ is spanned by the $\overline{z}^{\alpha}$ with $|{\alpha}| = \sum {\alpha}(i)$ even and $\mathcal{D} ^{-} ( {\mathbb{C}}^n) $ by the $\overline{z}^{\alpha}$ with $|{\alpha}|$ odd. Let $\mathcal{S} ({\mathbb{C}}^n)$ be the space of endomorphisms $s $ of $\mathcal{D}( {\mathbb{C}}^n)$ such that $s ( \overline{z}^{\alpha}) = 0$ except for a finite number of ${\alpha} \in {\mathbb{N}}^{n}$. We claim that $\mathcal{S} ( {\mathbb{C}}^n)$ is closed under product and taking adjoint. To see this, simply observe that $\mathcal{S} ( {\mathbb{C}}^n)$ is the space of endomorphisms whose matrix in the basis $(\overline{z}^{\alpha})$ has only finitely many nonzero entries. Notice as well that the family $(\rho_{{\alpha}, {\beta}}, \; {\alpha}, {\beta} \in {\mathbb{N}}^n)$ of elements of $\mathcal{S} ({\mathbb{C}}^n)$ defined by \begin{gather} \label{eq:def_Ualphabeta} \begin{split} \rho_{{\alpha} {\beta} } \bigl( ({\beta} !)^{-\frac{1}{2}} \overline{z}^{\beta} \bigr) = ({\alpha} !)^{-\frac{1}{2}} \overline{z}^{{\alpha}}, \\ \rho_{{\alpha} {\beta}} ( \overline{z}^{{\gamma}} ) = 0 , \quad \forall {\gamma} \in {\mathbb{N}}^n \setminus \{ {\beta} \} \end{split} \end{gather} is a vector space basis of $\mathcal{S} ( {\mathbb{C}}^{n})$, and we have \begin{gather} \label{eq:rel_U} \rho_{{\alpha} {\beta}} \circ \rho_{\tilde {\alpha} \tilde {\beta} } = \delta_{{\beta} \tilde{{\alpha}}} \rho_{{\alpha} \tilde {\beta}} , \qquad \rho_{{\alpha} {\beta}}^* = \rho_{{\beta} {\alpha}} \end{gather} for all ${\alpha}$, ${\beta}$, $\tilde {\alpha}$, $\tilde {\beta}$ in ${\mathbb{N}}^n$. 
Each element $s \in \mathcal{S} ( {\mathbb{C}}^n)$ can be written as a block matrix $s = \begin{pmatrix} s_{++} & s_{-+} \\ s _{+-} & s_{--} \end{pmatrix} $ with respect to the decomposition \eqref{eq:paire_impaire}, which leads to a decomposition into even/odd endomorphisms \begin{gather} \label{eq:symb+_-} \mathcal{S} ( {\mathbb{C}}^n) = \mathcal{S}^{+} ( {\mathbb{C}}^n) \oplus \mathcal{S}^{-} ( {\mathbb{C}}^n) \end{gather} where $s \in \mathcal{S}^{+} ( {\mathbb{C}}^n)$ iff $s_{-+} = s_{+-} =0$, and $s \in \mathcal{S}^{-} ({\mathbb{C}}^n)$ iff $s_{++} = s_{--} = 0 $. Observe that $\rho_{{\alpha}{\beta}}$ has the same parity as $|{\alpha} | + | {\beta}|$. Furthermore \begin{gather} \mathcal{S}^{\ep} ( {\mathbb{C}}^n)\, \cdot \, \mathcal{S}^{\ep'} ( {\mathbb{C}}^n) \subset \mathcal{S}^{\ep \ep'} ( {\mathbb{C}}^n) \end{gather} for any $\ep, \ep' \in \{ 1, -1 \}$. \subsubsection*{Extension to vector bundles} In the previous definitions, we can replace ${\mathbb{C}}^n$ by any $n$-dimensional Hermitian vector space $\mathbf{E}$. We denote by $\mathcal{D} (\mathbf{E})$ the space of antiholomorphic polynomial maps $\mathbf{E} \rightarrow {\mathbb{C}}$. Choosing an orthonormal basis $(e_i)$ of $\mathbf{E}$, we can identify $\mathbf{E}$ with ${\mathbb{C}}^n$ and then define the scalar product of $\mathcal{D} (\mathbf{E})$ by the formula \eqref{eq:scal_product_bargm}. Since the weight $|z|^2$ and the measure $d \mu_n$ are invariant by unitary change of coordinates, the resulting scalar product of $\mathcal{D} (\mathbf{E})$ is independent of $(e_i)$. Similarly, we define the subspace $\mathcal{S} (\mathbf{E})$ of the space of endomorphisms of $\mathcal{D} (\mathbf{E})$, and, associated to the basis $(e_i)$ of $\mathbf{E}$, we have a basis $(\rho_{{\alpha},{\beta}} ,\; {\alpha}, {\beta} \in {\mathbb{N}}^n )$ of $\mathcal{S} (\mathbf{E})$. 
The decompositions into even/odd elements are defined and denoted as for ${\mathbb{C}}^n$ by \begin{gather} \mathcal{D} (\mathbf{E}) = \mathcal{D}^+ (\mathbf{E}) \oplus \mathcal{D}^{-} (\mathbf{E}), \qquad \mathcal{S} (\mathbf{E}) = \mathcal{S} ^{+} (\mathbf{E}) \oplus \mathcal{S}^{-} (\mathbf{E}). \end{gather} We can extend all these constructions to vector bundles. Let $\mathbf{E} \rightarrow M$ be a Hermitian vector bundle with rank $n$. Define the infinite-dimensional vector bundles $\mathcal{D} (\mathbf{E})$ and $\mathcal{S} (\mathbf{E})$ over $M$ with fibers $\mathcal{D} (\mathbf{E})_x = \mathcal{D}(\mathbf{E}_x)$ and $\mathcal{S} (\mathbf{E})_x = \mathcal{S} (\mathbf{E}_x)$. Later, we will choose for $\mathbf{E}$ the complex tangent bundle of an almost-complex manifold, and we will construct operators whose symbols are smooth sections of $\mathcal{S} (\mathbf{E}) \otimes A$, where $A$ is an auxiliary vector bundle. Since the bundle $\mathcal{S}(\mathbf{E})$ has infinite rank, let us make precise the definition of its smooth sections: a section $s \in \Ci ( M , \mathcal{S} (\mathbf{E}) \otimes A)$ is a family $(s(x) \in \mathcal{S} (\mathbf{E}_x) \otimes A_x, \; x \in M)$ such that for any orthonormal frame $(e_i)$ of $\mathbf{E}$ and $(a_j)$ of $A$ over the same open set $U$ of $M$, if $(\rho_{{\alpha}, {\beta}}(x))$ is the basis of $\mathcal{S} (\mathbf{E}_x)$ associated to the basis $(e_i(x))$, then $$ s (x) = \sum {\lambda}_{{\alpha},{\beta},j} (x) \, \rho_{{\alpha},{\beta}} (x) \otimes a_j(x), \qquad x \in U $$ where the ${\lambda}_{{\alpha}, {\beta}, j} $ are smooth functions on $U$, all but finitely many of them being zero. \subsection{Operators} \label{sec:operators} \subsubsection*{Schwartz kernel} Consider a compact symplectic manifold $(M, {\omega})$ with a compatible almost-complex structure $j$ and a prequantum bundle $L \rightarrow M$. Assume we have two auxiliary Hermitian vector bundles $A$ and $B$ over $M$. The dimension of $M$ is $2n$ with $n \in {\mathbb{N}}$. 
We will define a space $\lag ( A, B)$ consisting of families of operators \begin{gather} \label{eq:P_k_family} \bigl( P_k : \Ci (M, L^k \otimes A ) \rightarrow \Ci ( M , L^k \otimes B ), \; k \in {\mathbb{N}} \bigr) \end{gather} having smooth Schwartz kernels satisfying some conditions. Let us first recall some standard definitions and notations. The Schwartz kernel of $P_k$ is the section $K_k$ of $(L^k \otimes B) \boxtimes (\overline{L}^k \otimes \overline{A})$ such that $$ (P_k f )(x) = \int_M K_k (x,y)\cdot f(y) \; \mu_M ( y), \qquad \forall \; f \in \Ci ( M , L^k \otimes A) $$ where the $\cdot$ stands for the scalar product $(\overline{L}_y^k \otimes \overline{A}_y ) \times (L_y^k \otimes A_y) \rightarrow {\mathbb{C}}$, and $\mu_M = {\omega}^n/n!$. We will denote the operator and its Schwartz kernel by the same letter, hoping it is not too confusing. Since $L$, $A$ and $B$ are Hermitian bundles, the bundle $(L^k \otimes B) \boxtimes (\overline{L}^k \otimes \overline{A})$ has a natural metric, so the pointwise norm $|P_k (x,y)|$ is well-defined. For any $N \in {\mathbb{N}}$, we will say that $(P_k)$ is in $\mathcal{O} ( k^{-N})$ on an open set $U$ of $M^2$ if $|P_k (x,y)| = \mathcal{O} ( k^{-N})$ for $(x,y) \in U$ with a $\mathcal{O}$ uniform on compact subsets of $U$. We say that $(P_k)$ is in $\mathcal{O} ( k^{-\infty})$ on $U$ if $(P_k)$ is in $\mathcal{O} ( k^{-N})$ on $U$ for any $N$. We will also use the uniform norm $\| P_k \| = \sup \|P_k (f) \| / \| f\|$ with respect to the usual $L^2$ norms of sections: $\|f\|^2 = \int_M |f(x)|^2 \; d\mu_M (x)$. \subsubsection*{Definition of $\lag (A,B)$} By definition, a family $(P_k)$ as in \eqref{eq:P_k_family} belongs to $\lag ( A , B)$ if each $P_k$ has smooth Schwartz kernel, the Schwartz kernel family is in $\mathcal{O} ( k^{-\infty})$ on $M^2 \setminus \operatorname{diag} M$ and we have the following expansion on a neighborhood of the diagonal. 
Assume first that $A$ and $B$ are the trivial line bundle ${\mathbb{C}}_M := M \times {\mathbb{C}}$. Choose a coordinate chart $U \subset M$ and a unitary frame $t : U \rightarrow L$ so that we can identify $U$ with an open set of ${\mathbb{R}}^{2n}$, the sections of $L^k$ over $U$ with functions on $U$, and our Schwartz kernels on $U^2$ become functions as well. Then for any $N \in {\mathbb{N}}$, we require that over $U^2$ \begin{gather} \label{eq:expansion_kernel} P_k ( x + \xi , x) = \Bigl( \frac{k}{2\pi} \Bigr)^n e^{- k\varphi (x, \xi) } \sum_{p =0 }^{N} k^{- \frac{p}{2}} a_{p} \bigl( x, k^{\frac{1}{2}} \xi \bigr) + \mathcal{O} \bigl( k^{n - \frac{N+1}{2}} \bigr) \end{gather} where \begin{itemize} \item[-] $\varphi( x, \xi ) = - i \bigl (\sum_{i=1}^{2n} {\alpha}_i (x)\xi_i + \frac{1}{2}\sum_{i,j=1}^{2n} (\partial_{x_i}{\alpha}_j)(x) \xi_i \xi_j \bigr) + \frac{1}{4} |\xi|_x^2 $, with ${\alpha} = \sum {\alpha}_i dx_i \in \Omega^1 (U, {\mathbb{R}})$ the connection one-form defined by $\nabla t = \frac{1}{i} {\alpha} \otimes t$. \item[-] $a_{p} (x,\xi) \in {\mathbb{C}}$ depends polynomially on $\xi$, meaning that for some $d(p) \in {\mathbb{N}}$, $a_p (x,\xi) = \sum_{|{\alpha} |\leqslant d(p)} a_{p,{\alpha}} (x) \xi^{\alpha}$ with smooth coefficients $a_{p, {\alpha}}$. \end{itemize} Since the real part of $\varphi (x,\xi)$ is $\frac{1}{4} |\xi|_x^2$, we have for any $p$ $$ e^{-k \varphi (x, \xi)} a_p \bigl( x, k^\frac{1}{2} \xi \bigr) = \mathcal{O} (1).$$ So the $p$-th summand in \eqref{eq:expansion_kernel} is in $\mathcal{O} (k^{n -\frac{p}{2}})$ and the expansion is meaningful. We require an expansion \eqref{eq:expansion_kernel} for any coordinate chart $U$ and unitary frame $t$ on $U$. 
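The boundedness claim above can be made quantitative, a remark we add for the reader: if $a_p (x, \xi) = a_{p, {\gamma}} (x) \xi^{{\gamma}}$ is a single monomial with $|{\gamma}| = d$, then setting $u = k^{\frac{1}{2}} |\xi|_x$, \begin{gather*} \bigl| e^{-k \varphi (x,\xi)} a_p \bigl( x, k^{\frac{1}{2}} \xi \bigr) \bigr| \leqslant |a_{p,{\gamma}} (x)| \, u^d e^{-u^2/4} \leqslant |a_{p,{\gamma}} (x)| \, (2d)^{d/2} e^{-d/2} \end{gather*} uniformly in $k$ and $\xi$ (up to the constant comparing the coordinate norm with $| \cdot |_x$), because the real part of $k \varphi (x,\xi)$ is $\frac{k}{4} |\xi|_x^2 = \frac{u^2}{4}$ and $u \mapsto u^d e^{-u^2/4}$ attains its maximum at $u = \sqrt{2d}$. 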
In the case where $A$ and $B$ are general vector bundles, we introduce frames of $A$ and $B$ on $U$, so that the Schwartz kernel $P_k$ on $U^2$ becomes a ${\mathbb{C}}^r$-valued function with $r =(\operatorname{rank} A)( \operatorname{rank} B)$, and we can adopt the same definition with ${\mathbb{C}}^r$-valued coefficients $a_p$. It is not obvious that this definition is equivalent to the one given in the introduction, cf. \eqref{eq:exp_lag}. This will be proved in Proposition \ref{prop:global_local_expansion}. The advantage of the expansion \eqref{eq:expansion_kernel} is that its analytical meaning is more transparent; the drawback is that it depends on local choices (coordinates, frames, rescaling $k^{\frac{1}{2}} \xi$) whereas the expansion \eqref{eq:exp_lag} is global. \subsubsection*{Properties of $\lag (A,B)$} $\lag (A , B)$ has a natural filtration defined as follows. For any $q \in {\mathbb{N}}$, $\lag _q ( A , B)$ is the subspace of $\lag (A,B)$ consisting of the operators such that the local expansions \eqref{eq:expansion_kernel} hold with a sum starting at $p = q$, that is, the coefficients $a_0, \ldots, a_{q-1}$ are zero. \begin{prop} \label{prop:lag} $ $ \begin{enumerate} \item $\lag _q ( A, B) = k^{-\frac{q}{2}} \lag (A, B)$ and if $q \geqslant q'$, then $\lag_q( A,B) \subset \lag_{q'} (A,B)$. \item For any $(P_k)$ in $\lag (A, B)$, \begin{xalignat*}{2} & (P_k) \in \lag_q(A,B) \Leftrightarrow \| P_k \| = \mathcal{O} (k^{-\frac{q}{2}}) \\ & \Leftrightarrow \text{the Schwartz kernel family of $(P_k)$ belongs to } \mathcal{O} ( k^{n-\frac{q}{2}}) \end{xalignat*} \item $\lag_{\infty} (A,B) := \bigcap_q \lag_q( A,B)$ consists of the families \eqref{eq:P_k_family} with a smooth Schwartz kernel in $ \mathcal{O} ( k^{-\infty})$. \item for any sequence $(P_q)_{q \in {\mathbb{N}}}$ of $\lag (A,B)$ such that $P_q \in \lag_q(A, B)$ for any $q$, there exists $P \in \lag (A,B)$ satisfying $P = \sum_{p=0}^q P_p$ modulo $\lag_{q+1} (A,B)$ for any $q$. 
\end{enumerate} \end{prop} We will now describe the quotients $\lag_q ( A,B)/ \lag_{q+1} (A,B)$ by using the material introduced in Section \ref{sec:symbol-spaces}. Since $M$ has an almost complex structure $j$ compatible with ${\omega}$, the tangent bundle $TM$ is a complex vector bundle with a Hermitian metric, which defines our bundle $\mathcal{S} (M) := \mathcal{S} (TM)$. \begin{theo} \label{theo:lag} For any $q \in {\mathbb{N}}$, there exists a linear map $${\sigma}_q: \lag_q (A,B) \rightarrow \Ci ( M, \mathcal{S} (M) \otimes \operatorname{Hom} (A,B))$$ which is onto and has kernel $\lag_{q+1} (A,B)$. Furthermore, the following holds for any $ P \in \lag_q ( A,B)$: \begin{enumerate} \item ${\sigma} _q ( P) = {\sigma}_0 ( k^{\frac{q}{2}} P)$. \item For any $f \in \Ci ( M , \operatorname{Hom} (B, C))$, $( f \circ P_k )$ belongs to $\lag_q ( A,C)$ and ${\sigma}_q( f \circ P) = f \circ {\sigma}_q (P)$. For any $g \in \Ci ( M , \operatorname{Hom} ( C, A))$, $( P_k \circ g)$ belongs to $\lag_q ( C,B)$ and ${\sigma}_q ( P \circ g) = {\sigma}_q ( P) \circ g$. \item $P^* $ belongs to $\lag_q ( B,A)$ and ${\sigma}_q (P^*) = {\sigma}_q (P)^*$. \item For any $P' \in \lag_{q'}(B,C)$, $P' \circ P$ belongs to $\lag_{q'+ q }( A,C)$ and $$ {\sigma}_{q'+ q} ( P' \circ P) = {\sigma}_{q'} (P') \circ {\sigma}_q (P).$$ \item The Schwartz kernel of $P_k$ on the diagonal satisfies $$ P_k (x,x) = \frac{k^{n-\frac{q}{2}}}{ (2\pi)^n} \Bigl[ \operatorname{tr} ( {\sigma}_q( P)(x)) + \mathcal{O} ( k^{-1/2}) \Bigr] $$ where $\operatorname{tr}$ is the map $\mathcal{S} (T_xM) \otimes \operatorname{Hom} (A_x,B_x) \rightarrow (L_x \otimes \overline{L}_x)^k \otimes B_x \otimes \overline{A}_x \simeq \operatorname{Hom} (A_x,B_x)$ sending $s \otimes f$ to $(\operatorname{tr} s) f$. \end{enumerate} \end{theo} Let us explain how the symbol map ${\sigma}_0$ is defined for $A= B = {\mathbb{C}}_M$. Consider $P \in \lag (A,B)$ and the local expansion \eqref{eq:expansion_kernel}.
We view $(x,\xi)$ as a tangent vector of $M$, that is $ \xi \in T_xM$, so we consider $a_0 (x, \cdot)$ as a polynomial on $T_xM$. Then it is not obvious but nevertheless true that this polynomial does not depend on the choice of the coordinate chart $U$ and the unitary frame $t$. By contrast, the coefficients $a_p$ in \eqref{eq:expansion_kernel} with $p \geqslant 1$ do depend on the choice of the coordinates and the frame of $L$. To pass from $a_0(x,\cdot)$ to the symbol of $P$ at $x$, we first choose a unitary frame $(e_i)$ of $T_xM$. So $T_xM \simeq {\mathbb{C}}^n$ by sending $\xi = \sum z_i e_i(x) $ to $z(\xi)= (z_i)$. We also have a basis $\rho_{{\alpha}, {\beta}}(x)$ of $\mathcal{S} (T_xM) $ defined in \eqref{eq:def_Ualphabeta}. Then $$ {\sigma}_0(P) (x) = \sum f_{{\alpha}, {\beta}} (x) \rho_{{\alpha},{\beta}} (x) \Leftrightarrow a_0 (x,\xi) = \sum f_{{\alpha}, {\beta}} (x) p_{{\alpha}, {\beta}} (z(\xi))$$ where we use the polynomials $ p_{{\alpha}, {\beta}} (z) = \bigl( \frac{1}{{\alpha}! {\beta} !} \bigr) ^{1/2} \bigl( \partial_z - \overline{z}\bigr)^{{\alpha}} z^{\beta} .$ These polynomials form a basis of ${\mathbb{C}} [z, \overline{z}]$, cf. proof of Proposition \ref{prop:tilde_symb_and_op}. \begin{exe} \label{exemple:projecteur} Choose a connection on $A$ and let $\Delta_k$ be the Laplacian \begin{gather} \label{eq:laplacien_fibre_auxiliaire} \Delta_k = \tfrac{1}{2} (\nabla^{L^k \otimes A})^* \nabla^{L^k \otimes A} : \Ci ( L^k \otimes A) \rightarrow \Ci ( L^k \otimes A). \end{gather} For any $m \in {\mathbb{N}}$, let $\Pi_{m,k}$ be the spectral projector \begin{gather} \label{eq:projecteur_fibre_auxiliaire} \Pi_{m,k} := 1_{[m - \frac{1}{2}, m + \frac{1}{2}]} ( k^{-1} \Delta_k).
\end{gather} By \cite{oim_copain}, the family $(\Pi_{m,k})$ belongs to $\lag(A,A )$, its ${\sigma}_0$-symbol at $x$ is $\pi_m(x) \otimes \operatorname{id}_{A_x}$ where $\pi_m(x)$ is the projector of $\mathcal{D} (T_xM)$ onto the subspace $\mathcal{D}_m(T_xM)$ of homogeneous polynomials of degree $m$. Since $ \pi_m (x) = \sum_{|{\alpha}|=m} \rho_{{\alpha}, {\alpha}}(x)$, if the auxiliary bundle $A$ is trivial, the corresponding function $a_0$ is \begin{gather} \label{eq:a_0laguerre} a_0(x, \xi) = \sum_{|{\alpha}| =m} p_{{\alpha},{\alpha}}(z(\xi)) = Q_m^{(n-1)} ( |z(\xi)|^2) \end{gather} where $Q_m^{(p)} $ is the Laguerre polynomial $Q^{(p)}_m (x) = \frac{x^{-p}}{m!} \bigl ( \frac{d}{dx} -1 \bigr) ^m x ^{m+p}$. The second equality in \eqref{eq:a_0laguerre} follows from $p_{m,m} ( z) = Q_m^{(0)} (|z|^2)$ and the identity $$Q_m^{(n-1)} (x_1+ \ldots + x_n) = \sum_{|{\alpha}| = m } Q_{{\alpha}(1)}^{(0)} (x_1) \ldots Q_{{\alpha}(n)}^{(0)} (x_n);$$ for instance, for $m=1$ both sides are equal to $n - (x_1 + \ldots + x_n)$. We will actually not use the expression in terms of Laguerre polynomials; what really matters is the fact that ${\sigma}_0 ( \Pi_m) (x)$ is the orthogonal projector onto $\mathcal{D} _m(T_xM)$. \qed \end{exe} The definition of the symbol map ${\sigma}_0$ is motivated by Theorem \ref{theo:lag} and its efficiency in the proofs of Section \ref{sec:projectors-Toeplitz}. But this definition does not explain why it is natural to associate to $P \in \lag (A,B)$ an endomorphism of $\mathcal{D} (T_xM)$. A first explanation is provided by the following construction of peaked sections. A deeper reason will be provided later in Section \ref{sec:landau-levels-cn}. We still assume that $A = B = {\mathbb{C}}_M$ to simplify the exposition. Let $x \in M$ be a base point, with a coordinate chart $U$ at $x$ and a unitary frame $t: U \rightarrow L$. Let $\psi \in \Ci_0(U)$ be equal to $1$ on a neighborhood of $x$.
To any $f \in \mathcal{D} ( T_xM )$, we associate a family $\Phi_k^f \in \Ci ( M , L^k)$ defined by $$ \Phi_k^f (x+\xi ) = \Bigl( \frac{k}{2\pi} \Bigr)^{\frac{n}{2}} e^{-k \varphi(x,\xi)} \, f(k^{\frac{1}{2}} \xi) \, \psi (x+\xi) \, t ^k(x+\xi) , \qquad k \in {\mathbb{N}} $$ where $\varphi$ is the same function as in \eqref{eq:expansion_kernel}. \begin{prop} \label{prop:peaked-sections} For any $f \in \mathcal{D} (T_xM)$, $ \| \Phi^f_k \| = \| f \| + \mathcal{O} ( k^{-1/2})$ and for any $P \in \lag ( {\mathbb{C}}_M, {\mathbb{C}}_M)$, $$ P_k \Phi^f_k = \Phi^g_k + \mathcal{O} ( k^{-1/2})$$ where $g = {\sigma}_0(P)(x) \cdot f$. \end{prop} For a more general result with auxiliary bundles $A$, $B$ and estimates of the scalar products of peaked sections, cf. Proposition \ref{prop:peaked-sections_++}. We say that an element $P$ of $\lag (A,B)$ is even (resp. odd) if, in the local expansions \eqref{eq:expansion_kernel}, every polynomial $a_p(x,\cdot)$ has the same (resp. the opposite) parity as $p$. Denote by $\lag^+ ( A, B)$ and $\lag^- ( A, B)$ the subspaces of even and odd elements respectively. \begin{theo} \label{theo:parity} We have \begin{enumerate} \item $ \lag ( A, B) = \lag^+ ( A, B) + \lag^- ( A, B)$, $\lag^+ (A,B) \cap \lag ^{-} (A,B) = \lag_{\infty} (A,B)$. \item $\lag ^{\ep} (A,B) \cdot \lag^{\ep'} (B,C) \subset \lag ^{\ep \ep'} (A, C) $ for any choice of signs $\ep$, $\ep'$. \item ${\sigma}_q( \lag_q(A,B) \cap \lag^{\ep} (A,B) ) = \Ci ( M, \mathcal{S}^{\ep (-1)^q}(M) \otimes \operatorname{Hom} (A,B))$. \end{enumerate} \end{theo} The proofs of Proposition \ref{prop:lag}, Theorem \ref{theo:lag} and Theorem \ref{theo:parity} are postponed to Section \ref{sec:proofs-results}.
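Proposition \ref{prop:peaked-sections} can be illustrated with the spectral projectors of Example \ref{exemple:projecteur}, in the case where the auxiliary bundle is trivial. Since ${\sigma}_0 ( \Pi_m) (x)$ is the orthogonal projector of $\mathcal{D} (T_xM)$ onto $\mathcal{D}_m (T_xM)$, we have $$ \Pi_{m,k} \Phi^f_k = \Phi^f_k + \mathcal{O} ( k^{-1/2}) \quad \text{if } f \in \mathcal{D}_m (T_xM), \qquad \Pi_{m,k} \Phi^f_k = \mathcal{O} ( k^{-1/2}) \quad \text{if } f \in \mathcal{D}_{m'} (T_xM), \; m' \neq m. $$ So the peaked sections associated with homogeneous polynomials of degree $m$ are approximate elements of the $m$-th Landau level.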
\subsection{Comparison with earlier works} The expansions \eqref{eq:exp_lag}, \eqref{eq:expansion_kernel} or similar versions appeared in the literature \cite{ShZe}, \cite{oim_op}, \cite{MaMa} to describe Bergman kernels of ample line bundles and their symplectic generalizations as well as the associated Toeplitz operators. In a more general context, the Boutet de Monvel-Guillemin theory \cite{BoGu} is built on two classes of operators: Hermite operators and Fourier integral operators. The spaces $\lag^{+} (A,B)$ may be viewed as an intermediate choice in the semi-classical setting. In \cite{oim}, we considered a subalgebra of $\lag^{+} ( A,A)$, denoted by $\mathcal{A}(A)$, consisting of operators having an expansion \eqref{eq:expansion_kernel} in which each $a_p(x,\cdot)$ has degree $\leqslant \frac{3}{2} p$. For our applications in this paper, it is necessary to consider the larger spaces $\lag (A,B)$, because our generalized projectors $\Pi_m$ and unitary equivalences do not belong to $\mathcal{A} (A)$. More precisely, only the projector corresponding to the first Landau level belongs to $\mathcal{A} (A)$. Theorem \ref{theo:lag} is a generalization of similar results for $\mathcal{A}(A)$ established in \cite{oim}, and surprisingly, the proofs are somewhat easier in this new generality. However, a crucial difference with \cite{oim} lies in the symbols. Roughly, the symbols of the elements of $\mathcal{A}(A)$ were defined directly as the polynomials $a_0 (x,\cdot)$. This had the advantage that it is easier to pass from the Schwartz kernel of the operator to the symbol.
The drawback of this definition is that the product for these symbols is given by the mysterious formula \begin{gather} \label{eq:old_prod} (u \star v) ( x, z, \overline{z}) =\bigl[ \exp ( \Box ) ( u (x, - \zeta, \overline{z} - \overline {\zeta} ) v (x, z+ \zeta , \overline{\zeta} ) )\bigr]_{\zeta = \overline{\zeta} =0 } \end{gather} where $\Box = \sum \partial^2 / \partial \zeta^i \partial \overline{\zeta}^i$. This was actually tractable for what we did in \cite{oim} because our main interest was the projector $\Pi$ for the first Landau level, with symbol ${\sigma}_0(\Pi)= \rho_{00}$, so its old symbol was the function $u(x,\xi) =1$. But for our new projectors whose symbols are typically a sum of the $\rho_{{\alpha}\al}$'s, it is essential to work with the symbol in $\mathcal{S} (M)$. For instance, it is not even obvious how to recover the relations \eqref{eq:rel_U} from the product \eqref{eq:old_prod}. \section{Projectors of \texorpdfstring{$\lag (A)$}{L(A)} and Toeplitz operators} \label{sec:projectors-Toeplitz} In this section, we consider an auxiliary Hermitian vector bundle $A$ with arbitrary rank. We denote by $\lag (A): = \lag (A,A)$ the associated algebra and by $\lag^+(A)$ the subalgebra consisting of even elements. The symbols of operators of $\lag (A)$ are sections of $\mathcal{S} (M) \otimes \operatorname{End} A$. We will view $\mathcal{S} (M)_x \otimes \operatorname{End} A_x$ as a subspace of $\operatorname{End} ( \mathcal{D} (T_xM) \otimes A_x)$. Let $F$ be a subbundle of $\mathcal{D}_{\leqslant m}(TM) \otimes A$ for some $m \in {\mathbb{N}}$, where $\mathcal{D}_{\leqslant m} (T_xM)$ is the subspace of $\mathcal{D}(T_xM)$ of polynomials with degree $\leqslant m$. We assume that $F$ has a definite parity, that is $F \subset \mathcal{D} ^\ep (TM) \otimes A$ with $\ep \in \{\pm 1\}$.
Associated to $F$ is the section $\pi$ of $\mathcal{S} (M) \otimes \operatorname{End} A$ such that $\pi(x)$ is the orthogonal projector of $\mathcal{D}(T_xM) \otimes A_x$ onto $F_x$ at each point $x \in M$. The content of the following subsections is: \begin{itemize} \item[-] \ref{sec:constr-proj}: we construct a self-adjoint projector $\Pi \in \lag ^+(A)$ with symbol $\pi$. \item[-] \ref{sec:toeplitz-algebra}: we study the Toeplitz algebra $ \mathcal{T} = \{ \Pi P \Pi , \; P \in \lag^{+} (A) \} $. \item[-] \ref{sec:unitary-equivalence}: we prove $( \operatorname{Im} \Pi_k)$ is isomorphic to any quantization of $(L,M)$ twisted by $F$, and deduce that the dimension of $ \operatorname{Im} \Pi_k$ is the Riemann-Roch number of $L^k \otimes F$ when $k$ is sufficiently large. \end{itemize} A possible choice for $F$ is $F = \mathcal{D}_m (TM)\otimes A $ where $m\in {\mathbb{N}}$ and $\mathcal{D}_m (T_xM)$ is the subspace of $\mathcal{D} (T_xM)$ consisting of homogeneous polynomials with degree $m$. As explained in Example \ref{exemple:projecteur}, the projector $\Pi_{m,k}$ onto the $m$-th Landau level $\mathcal{H}_{m,k}$ belongs to $\lag^+ (A)$ and its symbol is the projector onto $F$, so it can be used as the projector $\Pi$. Theorems \ref{theo:dim_landau}, \ref{theo:Toeplitz_Landau} and \ref{theo:ladder} will mainly follow from the results in Sections \ref{sec:toeplitz-algebra} and \ref{sec:unitary-equivalence}. By \cite{oim_copain}, the spectral projectors of Laplacians with a magnetic field, not necessarily constant but still satisfying some convenient assumptions, give other instances of projectors in $\lag ^+(A)$. Another choice for $F$ is $F = \mathcal{D}_0 (TM) \otimes A$ where $\mathcal{D}_0 (T_xM) ={\mathbb{C}}$ is the subspace of $\mathcal{D} ( T_xM)$ of constant polynomials. The corresponding quantum space and Toeplitz algebra are the quantization of $(M,L)$ twisted by $A$. A final example is the Spin-c Dirac quantization twisted by an auxiliary bundle $B$.
In this case, $A = S \otimes B$ where $S$ is the spinor bundle $\bigoplus \bigwedge^k (T^*M)^{0,1} $ and $F = \mathcal{D}_0(TM) \otimes \bigwedge^0 (T^*M)^{0,1} \otimes B$. This example will be used to compute the dimension of our quantum spaces from the Atiyah-Singer Theorem. \subsection{Construction of the projector}\label{sec:constr-proj} Let $\chi : {\mathbb{R}} \rightarrow {\mathbb{R}}$ be defined by $\chi (x) =1$ if $x \geqslant \frac{1}{2}$ and $\chi (x) =0$ otherwise. If $P$ is a bounded self-adjoint operator on a Hilbert space $\mathcal{H}$, then using the functional calculus for Borel bounded functions, we define a new bounded operator $\chi (P)$ on $\mathcal{H}$, cf. for instance \cite[Theorem VII.2]{ReSi}. Since $\chi$ is real valued and $\chi^2 = \chi$, $\chi (P)$ is a self-adjoint projector. \begin{theo} \label{theo:constr-proj} Let $P \in \lag ( A)$ be self-adjoint with symbol ${\sigma}_0 (P) = \pi$. Then $\chi(P)$ belongs to $\lag (A)$ and ${\sigma}_0(\chi (P)) = \pi$. If furthermore $P \in \lag ^+ (A)$, then $\chi (P)$ is in $\lag ^+ (A)$. \end{theo} Theorem \ref{theo:constr-proj} holds without the assumption that $F$ has a definite parity. When $F$ does have a definite parity, $\pi$ is even, so we can choose $P \in \lag^{+}(A)$ with symbol $\pi$. \begin{proof} To prove that the $\chi (P_k)$'s have smooth kernels, we will use the following basic fact: let $Q$, $Q'$ be two operators with smooth kernels acting on $\Ci (M,A)$ and $Q''$ be a bounded operator on $L^2(M, A)$. Then $Q Q''Q'$ has a smooth kernel. This follows from the Schwartz kernel theorem, saying that the operators with smooth kernel are the operators which can be continuously extended $\mathcal{C}^{-\infty} \rightarrow \Ci $. We will also need the following pointwise norm estimates: consider families of operators $Q_k, Q'_k : \Ci (M,L^k \otimes A) \rightarrow \Ci (M, L^k\otimes A)$ and $Q''_k : L^2 (M, L^k \otimes A) \rightarrow L^2 (M, L^k \otimes A)$.
Then by \cite[Section 4.3]{oim}, if the Schwartz kernel families of $(Q_k)$ and $(Q'_k)$ are respectively in $\mathcal{O} (k^{-N})$ and $\mathcal{O} (k^{-N'})$, and the operator norms of $Q_k^{''}$ are in $\mathcal{O} (1)$, then the Schwartz kernel family of $Q_k Q''_k Q_k'$ is in $\mathcal{O} (k^{-(N +N')})$. Back to our problem, we can write $\chi(P_k) = P_k \tilde{\chi} (P_k) P_k$ with $\tilde{\chi} (x) = \chi(x) / x^2$. Since $ \tilde{\chi}(P_k)$ is bounded, this shows that $\chi (P_k)$ has a smooth kernel. This also shows that the Schwartz kernel family of $\chi (P)$ is in $\mathcal{O} (k^{2n})$. To improve this, observe that $Q = P^2 - P$ is in $\lag (A)$ and ${\sigma} _0 (Q) = \pi^2 - \pi =0$, so $\| Q_k \| = \mathcal{O} (k^{-1/2})$, which easily implies that $\frac{1}{2}$ is not in the spectrum of $P_k$ when $k$ is sufficiently large, cf. \cite[Proposition 4.2]{oim}. Now for $x \in {\mathbb{R}} \setminus \{ \frac{1}{2}\}$, $y = x^2 -x > -1/4 $ and we have $$ \chi (x) = x + (1-2x) f ( x^2 -x ) \quad \text{with } \quad f (y) = \tfrac{1}{2} ( 1- (1+ 4y)^{-1/2} ),$$ as can be checked by distinguishing the cases $x > \frac{1}{2}$ and $x < \frac{1}{2}$, using $1 + 4(x^2 - x) = (2x-1)^2$. Write the Taylor expansion of $f$ at $0$ as $f (y) = \sum_{\ell =0}^m a_\ell y^\ell + y^{m+1} f_m (y)$ with $f_m \in \mathcal{C}^0(]-\frac{1}{4}, \infty[, {\mathbb{R}})$. Then \begin{gather} \label{eq:toutestla} \chi (P) = P + \sum_{\ell = 0 }^m a_\ell (1-2P) Q^\ell + (1 - 2 P ) Q^{m+1} f_m (Q). \end{gather} Now ${\sigma}_0(Q)=0$ implies that $Q^\ell$ and $PQ^{\ell}$ both belong to $\lag_{\ell} ( A)$. Furthermore, $\|f_m(Q_k) \| = \mathcal{O} (1)$. Since $Q^{m+1} f_m (Q) = Q^m f_m(Q) Q$ and similarly $PQ^{m+1} f_m (Q) = PQ^m f_m(Q) Q$, it follows from the preliminary observation that the Schwartz kernel family of $(1 - 2 P ) Q^{m+1} f_m (Q)$ is in $\mathcal{O}_{\infty} ( k^{2n -m} )$.
We can now easily conclude the proof from \eqref{eq:toutestla} by choosing at each step a sufficiently large value of $m$: first, the Schwartz kernel family of $\chi (P)$ is in $\mathcal{O} (k^{-\infty})$ outside the diagonal, and second, the local expansions \eqref{eq:expansion_kernel} hold. \end{proof} \subsection{Toeplitz algebra} \label{sec:toeplitz-algebra} Choose a self-adjoint projector $\Pi \in \lag ^+ (A)$ with symbol $\pi$, which exists by Theorem \ref{theo:constr-proj}. For any $k\in {\mathbb{N}}$, let $\mathcal{H}_k = \operatorname{Im} \Pi_k \subset \Ci ( M , L^k \otimes A)$. Computing the trace of $\Pi_k$ by integrating its Schwartz kernel over the diagonal, we deduce from the last assertion of Theorem \ref{theo:lag} that $\mathcal{H}_k$ is finite dimensional and \begin{gather} \label{eq:estim_dim} \operatorname{dim} \mathcal{H}_k = \Bigl( \frac{k}{2 \pi} \Bigr)^n (\operatorname{rank} F) \; \operatorname{vol} (M, {\omega}) \; \bigl( 1 + \mathcal{O}( k^{-1}) \bigr) . \end{gather} We will now work with families of operators $(T_k \in \operatorname{End} \mathcal{H}_k, \; k \in {\mathbb{N}})$. Equivalently, we can consider that each $T_k$ acts on the larger space $\Ci ( M , L^k \otimes A)$ and satisfies $\Pi_k T_k \Pi_k = T_k$. Define the space \begin{gather} \label{eq:Toeplitz_Pi} \mathcal{T} = \{ T \in \lag ^{+} (A) / \, \Pi T \Pi = T \} . \end{gather} For any $q \in {\mathbb{N}}$, set $\mathcal{T}_q := k^{-q} \mathcal{T} = \lag_{2q} (A) \cap \mathcal{T} $ and $\mathcal{T}_{\infty} = \bigcap_q \mathcal{T}_q $. Clearly, $$ \mathcal{T}_\infty \subset \mathcal{T}_q \subset \mathcal{T}_p \subset \mathcal{T} $$ when $q \geqslant p$. \begin{theo} \label{theo:toeplitz} \hspace{1em} \begin{enumerate} \item For any $T \in \mathcal{T} $, $T$ belongs to $\mathcal{T}_q $ iff $\| T_k \| = \mathcal{O} ( k^{-q})$.
Furthermore, $\mathcal{T}_{\infty}$ consists of the families $(T_k \in \operatorname{End} (\mathcal{H}_k), \, k \in {\mathbb{N}}^*)$ such that $\| T_k \| = \mathcal{O}(k^{-N})$ for any $N$. \item $ \mathcal{T} $ is closed under composition and taking adjoint: $ \bigl( \mathcal{T}_q \bigr)^* = \mathcal{T}_q$ and $\mathcal{T}_q \cdot \mathcal{T}_p \subset \mathcal{T}_{q+p} $ for any $q$ and $p$. \item For any $q$, there exists a linear map $\tau_q : \mathcal{T}_q \rightarrow \Ci ( M , \operatorname{End} F ) $, which is onto, has kernel $\mathcal{T}_{q+1} $ and is determined by ${\sigma}_{2q} ( T) = \tau_q( T) \pi$. Furthermore, if $P \in \mathcal{T}_q $ and $Q \in \mathcal{T}_p $, then \begin{xalignat}{2} \begin{split} & \tau_q( P) = \tau_0(k^{q}P) , \qquad \tau_q( P ^*) = \tau_q( P)^* , \\ & \tau_q ( P ) \tau_p( Q) = \tau_{q+p} ( PQ) \end{split} \end{xalignat} and the restriction to the diagonal of the Schwartz kernel of $P_k$ satisfies $$ P_k (x,x) = \frac{k^{n-q}}{(2\pi)^n} \Bigl[ \operatorname{tr} (\tau_q ( P) (x)) + \mathcal{O} (k^{-1}) \Bigr].$$ \end{enumerate} \end{theo} Let us give more details on the equation ${\sigma}_{2q} ( T) = \tau_q( T) \pi$ defining the symbol map $\tau_q$. Recall that $\pi(x)$ is the orthogonal projector of $\mathcal{D} (T_xM) \otimes A_x$ onto $F_x$. Then for an endomorphism $s$ of $F_x$, we define $s \pi(x) \in \mathcal{S} (T_xM) \otimes \operatorname{End} A_x $ as the endomorphism of $\mathcal{D} (T_xM) \otimes A_x$ sending $\psi $ to $s ( \pi(x) \psi)$. \begin{proof} 1. The first assertion follows from Part 2 of Proposition \ref{prop:lag}. To establish the second assertion, we deduce from the first part of the proof of Theorem \ref{theo:constr-proj} that if a family $(P_k \in \operatorname{End} (\mathcal{H}_k))$ satisfies $\| P_k \| = \mathcal{O} ( k^{-\infty})$, then its Schwartz kernel is in $\mathcal{O} ( k^{-\infty})$ because $ \Pi_k P_k \Pi_k = P_k$ and the Schwartz kernel of $\Pi_k$ is in $\mathcal{O} (k^{n})$.
Parts 2 and 3 follow from Theorem \ref{theo:lag} and the fact that $\lag ^{+} (A)$ is a subalgebra of $\lag (A)$ by Theorem \ref{theo:parity}. To define $\tau_q (P)$, simply observe that $\Pi P \Pi = P$ implies that $\pi {\sigma}_{2q} (P) \pi = {\sigma}_{2q}(P)$, so we can write ${\sigma}_{2q} (P) = \tau_q(P) \pi$ with $\tau_q (P)$ a section of $ \operatorname{End} F$. The map $\tau_q$ is onto because for any section $s$ of $\operatorname{End} F$, there exists $P$ in $ \lag_{2q} (A)$ with ${\sigma}_{2q} (P ) = s \pi$. Since $\pi (s \pi) \pi = s \pi$, we have ${\sigma}_{2q} ( \Pi P \Pi) = s \pi$ and clearly $\Pi P \Pi \in \mathcal{T}_q$. The kernel of $\tau_q$ is $\mathcal{T}_{q+1}$ because $\tau_q (P) =0$ implies that ${\sigma}_{2q} (P) =0$, so $P \in \lag^{+}_{2q+1} (A)$ and ${\sigma}_{2q+1} (P)$ is odd by Theorem \ref{theo:parity}. But $\Pi P \Pi = P$ implies that ${\sigma}_{2q+1} ( P ) = \pi {\sigma}_{2 q+1} (P) \pi$. This implies that ${\sigma}_{2q+1} (P)$ is even because $F$ has a definite parity. Indeed, if for instance $F \subset \mathcal{D}^+(TM) \otimes A$, and $f = \pi g \pi$ with $g \in \mathcal{S} (T_xM) \otimes \operatorname{End} A_x$, then in the decomposition $$\mathcal{D} (T_xM) \otimes A_x = (\mathcal{D}^+ (T_xM) \otimes A_x) \oplus ( \mathcal{D}^{-} (T_xM) \otimes A_x),$$ $f$ has the form $\begin{pmatrix} f_{++} & 0 \\ 0 & 0 \end{pmatrix}$, so $f$ is even. Consequently, ${\sigma}_{2q+1} (P) = 0$ so $P \in \lag_{2q+2} (A)$. The formulas giving the symbol of products, adjoints and the Schwartz kernel on the diagonal follow directly from Theorem \ref{theo:lag}. \end{proof} For any $f \in \Ci(M)$ and $k \in {\mathbb{N}}$, define the endomorphism $T_k(f)$ of $\mathcal{H}_k$ such that $$ \langle T_k (f) \psi, \psi' \rangle = \langle f \psi , \psi'\rangle , \qquad \forall \psi, \psi' \in \mathcal{H}_k .$$ Viewed as an operator of $\Ci ( M, L^k \otimes A)$, $T_k(f)$ is merely $\Pi_k f \Pi_k$.
It follows from part 2 of Theorem \ref{theo:lag} that the family $(T_k(f))$ belongs to $\mathcal{T}$ and has symbol $\tau_0 ( T_k (f))= f \operatorname{id}_{F}$. By part 3 of Theorem \ref{theo:toeplitz}, we deduce that $$ T_k (f) T_k (g) = T_k (fg) + \mathcal{O} (k^{-1}).$$ A consequence of Theorem \ref{theo:unitary_equivalence} will be that $$ [ T_k (f) , T_k (g) ] = i k^{-1} T_k ( \{ f,g \} ) + \mathcal{O} ( k^{-2})$$ with $\{ f, g \}$ the Poisson bracket of $f$ and $g$ with respect to ${\omega}$. This equality does not follow from Theorem \ref{theo:toeplitz}. However, the following characterization of the Toeplitz operators having a scalar symbol follows from Theorem \ref{theo:toeplitz}: for any $P \in \mathcal{T} $, $$ P = T (f) +\mathcal{O} (k^{-1}) \text{ for some } f \in \Ci (M) \, \Leftrightarrow \; \forall Q \in \mathcal{T}, \, [P,Q ] \in \mathcal{T}_{1} .$$ By the Jacobi identity, this proves that $[ T_k (f), T_k (g) ] = k^{-1} T_k (h) + \mathcal{O} ( k^{-2})$ for some function $h \in \Ci(M)$. \subsection{A unitary equivalence}\label{sec:unitary-equivalence} Consider an auxiliary vector bundle $B$ with an arbitrary rank. Set $F' = \mathcal{D}_0 (T M) \otimes B $ with $\mathcal{D}_0 (T_xM) \subset \mathcal{D} (T_xM)$ the subspace of constant polynomials. Then by Theorem \ref{theo:constr-proj}, there exists a projector $\Pi'$ in $\lag^{+}(B)$ having symbol $${\sigma}_0(\Pi') = \rho_{00} \otimes \operatorname{id}_{B} \in \Ci ( M,\mathcal{S}(M) \otimes \operatorname{End} B),$$ where $\rho_{00} (x) \in \mathcal{S} (T_xM)$ is the orthogonal projector of $\mathcal{D} (T_xM)$ onto $\mathcal{D}_0 (T_xM)$, the notation being the same as in \eqref{eq:def_Ualphabeta}. Starting from $\Pi'$, we define $\mathcal{H}_k' := \operatorname{Im} \Pi'_k$ and the corresponding Toeplitz space $\mathcal{T}' := \{ P \in \lag^{+} (B)/ \; \Pi' P \Pi' = P \}$.
Since $\mathcal{D}_0(TM) = {\mathbb{C}}$, $F' \simeq B$, so the symbols of the Toeplitz operators of $\mathcal{T}'$ are sections of $\operatorname{End} B$: $$ 0 \rightarrow \mathcal{T}'_{q+1} \rightarrow \mathcal{T}'_q \xrightarrow{\tau'_q} \Ci ( M , \operatorname{End} B) \rightarrow 0 .$$ Our goal now is to establish an equivalence between $(\mathcal{H}_k, \mathcal{T})$ and $(\mathcal{H}_k', \mathcal{T}')$ when the bundle $B$ is $F$. The critical point is the existence of a convenient symbol. Recall our assumption that $F \subset \mathcal{D} ^{\ep} (TM) \otimes A$ with $\ep \in \{ \pm 1 \}$. \begin{lemme} \label{lem:unitary-symbole} If $B = F$, then there is a canonical symbol $\rho \in \Ci (M , \mathcal{S}^{\ep} (M) \otimes \operatorname{Hom} (A, B))$ such that $ \rho^* \rho = \pi $ and $\rho \rho ^* = \rho_{00} \otimes \operatorname{id}_{B} .$ \end{lemme} \begin{proof} On the one hand, $\pi (x) $ is the orthogonal projector of $\mathcal{D} (T_xM) \otimes A_x$ onto $F_x$. On the other hand, $\pi'(x) := \rho_{00} (x) \otimes \operatorname{id}_{B_x}$ is the orthogonal projector of $\mathcal{D} (T_xM) \otimes B_x$ onto ${\mathbb{C}} \otimes B_x$. Since $B = F$, the images of $\pi (x)$ and $\pi'(x)$ are isomorphic by the map $ \xi (x) : F_x \rightarrow \operatorname{Im} \pi '(x)$ sending $f$ to $1 \otimes f$. We define $\rho (x)$ as the extension of $\xi(x)$ \begin{gather} \label{eq:def_rho} \rho (x) : F_x \oplus F_x^{\perp} \rightarrow (\operatorname{Im} \pi'(x) ) \oplus ( \operatorname{Im} \pi'(x) )^{\perp} \end{gather} having the block decomposition $\begin{pmatrix} \xi (x) & 0 \\ 0 & 0 \end{pmatrix} $. So $\rho (x)$ is canonically defined. The equalities $ \rho(x)^* \rho(x) = \pi (x) $ and $\rho(x) \rho(x) ^* = \pi '(x)$ are easily verified by using that $\xi (x)$ is unitary. Writing $\rho $ in terms of a local frame of $F$, we see that $\rho (x)$ depends smoothly on $x$.
Finally, $F_x \subset \mathcal{D}^{\ep} (T_xM) \otimes A_x$ and $\operatorname{Im} \pi'(x) \subset \mathcal{D}^{+} (T_x M) \otimes B_x$, so $\rho (x) \in \mathcal{S}^{\ep} (M)_x \otimes \operatorname{Hom} (A_x, B_x)$. \end{proof} \begin{theo} \label{theo:unitary_equivalence} Assume that $B = F$ and $\rho$ is the symbol defined above. Then there exists $U \in \lag ^{\ep}(A,B)$ with symbol ${\sigma}_0(U) = \rho$ and such that \begin{gather} \label{eq:U} U^*_k U_k = \Pi_k, \qquad U_k U_k^* = \Pi'_k \end{gather} when $k$ is sufficiently large. Modifying $\Pi_k'$ for a finite number of $k$, we can choose $U$ so that \eqref{eq:U} holds for any $k$. In this case, the Toeplitz algebras $ \mathcal{T}$ and $\mathcal{T}'$ are isomorphic by the map sending $P$ to $U P U^* $. Furthermore, $P \in \mathcal{T}_q$ if and only if $UPU^* \in \mathcal{T}'_q$ and when this is satisfied \begin{gather} \label{eq:symbol_iso} \tau'_q (U P U^* ) = \tau_q (P) . \end{gather} \end{theo} \begin{proof} Choose $W \in \lag^{\ep} (A,B)$ with symbol $\rho$ and set $V := \Pi' W \Pi$. Then $V \in \lag^{\ep} (A,B)$ with $ {\sigma}_0(V) = \pi' \rho \pi = \rho$ and since $\rho^* \rho = \pi$ and $\rho \rho^* = \pi'$, we have $$ V^*_k V_k = \Pi_k + \mathcal{O} (k^{-\frac{1}{2}}), \qquad V_k V^*_k = \Pi_k' + \mathcal{O} ( k^{-\frac{1}{2}} ). $$ So $V_k$, viewed as an operator from $\mathcal{H}_k$ to $\mathcal{H}_k'$, is invertible when $k$ is sufficiently large. Observe also that $V^*V$ is a Toeplitz operator of $\mathcal{T}$ with symbol $\operatorname{id}_{F}$. So $V^*V = \Pi + Q$ with $Q \in \mathcal{T}_1$. Since $\| Q_k\| = \mathcal{O} (k^{-1})$, the spectrum of $Q_k$ is contained in $[-\frac{1}{2}, \frac{1}{2} ]$ when $k$ is sufficiently large. Modifying $Q_k$ for a finite number of $k$, we can assume this holds for any $k$, and when $k$ is sufficiently large, we still have $V_k^*V_k = \Pi_k + Q_k$.
Let $P_k$ be the endomorphism of $\Ci ( M , L^k \otimes A)$ which is zero on $\mathcal{H}_k^{\perp}$ and equal to $(\operatorname{Id}_{\mathcal{H}_k} + Q_k ) ^{-1/2}$ on $\mathcal{H}_k$. We claim that $(P_k)$ belongs to $\mathcal{T}$ and has symbol $\operatorname{id}_{F}$. Assuming this temporarily, it follows that $U_k := V_k P_k $ belongs to $\lag ^{\ep} (A,B)$, has symbol $\rho$ and satisfies, when $k$ is sufficiently large, $U_k^* U_k = P_k V_k^* V_k P_k = \Pi_k$. Since $U_k$, viewed as an operator from $\mathcal{H}_k$ to $\mathcal{H}'_k$, is invertible, this also implies that $U_kU_k^* = \Pi'_k$. To prove the claim above, we write the Taylor expansion $(1+x)^{-1/2} = 1 + \sum_{\ell = 1 }^m a_\ell x^\ell + x^{m+1} f_m (x)$ with $f_m$ a continuous function $[-\frac{1}{2},\frac{1}{2}] \rightarrow {\mathbb{R}}$. Then \begin{gather} \label{eq:racine} P_k = \Pi_k + \sum_{\ell = 1 }^m a_\ell Q_k^{\ell} + Q_k^{m+1} f_m (Q_k) . \end{gather} Then we show that $P$ belongs to $\lag ^+ (A)$ by arguing as in the proof of Theorem \ref{theo:constr-proj}: $Q^\ell \in \mathcal{T}_\ell $ and $\| f_m ( Q_k) \| = \mathcal{O} (1)$, so the Schwartz kernel family of $Q_k^{m+1} f_m (Q_k) = Q_k^m f_m (Q_k) Q_k$ is in $\mathcal{O} ( k^{2n - m-1 })$. Choosing $m$ sufficiently large at each step, we then deduce from \eqref{eq:racine} that the Schwartz kernel family of $P_k $ is in $\mathcal{O} ( k^{-\infty})$ outside the diagonal and that the local expansions \eqref{eq:expansion_kernel} hold. So we have proved the existence of $U \in \lag^{\ep}(A,B)$ with ${\sigma}_0(U) = \rho$ and satisfying \eqref{eq:U} for all $k$ outside a finite set. For the missing $k$'s, we modify $\Pi_k'$ by choosing any subspace $\mathcal{H}'_k$ of $\Ci ( M , L^k \otimes B)$ having the same dimension as $\mathcal{H}_k$, define $\Pi_k'$ as the orthogonal projector onto $\mathcal{H}_k'$ and $U_k$ as any isometry $\mathcal{H}_k \rightarrow \mathcal{H}'_k$ extended by zero on $\mathcal{H}_k^{\perp}$.
Then $\Pi_k'$ and $U_k$ have a smooth Schwartz kernel, so the new families $\Pi '$ and $U$ are still in $\lag ^{+} (B)$ and $\lag ^{\ep} (A,B)$ respectively. It is now easy to prove the last assertion: if $P \in \lag ^{+} (A)$, then $U P U^* \in \lag ^{+}(B)$ because $U \in \lag ^{\ep} (A,B)$ and $U^* \in \lag^{\ep} (B,A)$. If $ \Pi P \Pi = P$, then $\Pi' (U P U^* ) \Pi' = U P U^*$ by \eqref{eq:U}. So $P \in \mathcal{T}$ implies that $U P U^* \in \mathcal{T}'$, which defines an isomorphism from $\mathcal{T}$ onto $\mathcal{T}'$ because we can invert it by sending $Q$ to $U^* Q U$. Furthermore, ${\sigma}_0 ( UPU^* ) = \rho \, {\sigma}_0 ( P) \, \rho^*$ which leads to \eqref{eq:symbol_iso}. \end{proof} A first corollary is the computation of the symbols of commutators in terms of the Poisson bracket. Recall the Toeplitz operators $T_k (f) : \mathcal{H}_k \rightarrow \mathcal{H}_k$ associated to $f \in \Ci ( M)$. Define similarly $T'_k (f) : \mathcal{H}'_k \rightarrow \mathcal{H}_k'$. \begin{cor} $ [ T_k (f) , T_k (g) ] = i k^{-1} T_k ( \{ f,g \} ) + \mathcal{O} ( k^{-2})$ for any $f , g \in \Ci (M)$. \end{cor} \begin{proof} This amounts to showing that for any two Toeplitz operators $T$, $S$ of $\mathcal{T}$ with symbols $\tau_0(T) = f \operatorname{id}_F$, $\tau_0(S) = g \operatorname{id}_F$, we have $\tau_1 ( [T,S] ) = i \{ f, g\} \operatorname{id}_F$. By Theorem \ref{theo:unitary_equivalence}, this holds for $\mathcal{T}$ if and only if this holds for $\mathcal{T}'$. The result for $\mathcal{T}'$ has been proved in \cite{oim}, when the projector is chosen in a specific subalgebra of $\lag ^{+} (B)$, which is always possible. \end{proof} The operators $T'_k(f)$ are defined not only for $f \in \Ci (M)$ but also for $f \in \Ci ( M, \operatorname{End} B)$.
Since $\tau_q'( k^{-q} T_k' (f)) = f$, it follows that we can define the Toeplitz operators of $\mathcal{T}'$ as the families $(T_k)$ such that for any $N$, $$ T_k = \sum_{\ell =0 }^{N} k^{-\ell} T'_{k} (f_\ell ) + \mathcal{O} ( k^{-(N+1)})$$ for a sequence $(f_\ell)$ of $\Ci ( M, \operatorname{End} B)$. This provides a definition of $\mathcal{T}'$ without any reference to the algebra $\lag ^{+} (B)$. Observe also that the coefficients $f_{\ell}$ are uniquely determined by $T$ and the map $\mathcal{T}' \rightarrow \Ci ( M , \operatorname{End} B) [[\hbar]]$ sending $T$ to $\sum \hbar^\ell f_{\ell}$ is a full symbol map, meaning that it is onto and its kernel is $\mathcal{T}' \cap \mathcal{O} ( k^{-\infty})$. This full symbol map can also be used to get uniform control of the product of Toeplitz operators, cf. \cite{oim}. But unfortunately, this does not hold for $\mathcal{T}$, except in the particular case where $F$ has rank one, so that $\operatorname{End} F \simeq {\mathbb{C}}$. This happens in particular for the higher Landau levels in dimension $n=1$. A second consequence of Theorem \ref{theo:unitary_equivalence} is the computation of the dimension of our quantum spaces. \begin{theo} \label{theo:dim_general} If $\Pi \in \lag^{+} (A)$ is any self-adjoint projector with symbol $\pi$, then the dimension of $\mathcal{H}_k= \operatorname{Im}( \Pi_k)$ is $$ \operatorname{dim} \mathcal{H}_k = \int_M \operatorname{ch} ( L^k \otimes F ) \; \operatorname{Td} (M)$$ when $k$ is sufficiently large. \end{theo} \begin{proof} We introduce a new family $(\mathcal{H}''_k:= \operatorname{Ker} D_k)$ where $D_k$ is the spin-c Dirac operator acting on $\Ci ( M , L^k \otimes B \otimes S)$ with $S :=\bigwedge (T^*M)^{0,1} $ the spinor bundle. By the Atiyah-Singer Theorem and a vanishing theorem \cite{BoUr96}, \cite{MaMa02}, the dimension of $\mathcal{H}''_k$ is given by the Riemann-Roch number of $L^k \otimes B$ when $k$ is sufficiently large.
We claim that the projector $\Pi''_k$ of $\Ci ( M , L^k \otimes B \otimes S)$ onto $\mathcal{H}_k''$ belongs to $\lag ^+ ( B \otimes S)$ and has symbol $\rho_{00} \otimes p_B$ where $p_B $ is the section of $\operatorname{End} ( B \otimes S)$ equal at each $x\in M$ to the projector of $B_x \otimes S_x$ onto $B_x \otimes {\mathbb{C}}$. This is actually a reformulation of results by Ma and Marinescu \cite{MaMa}, as is explained in \cite[Appendix A]{oim}. Alternatively, this follows from the companion paper \cite{oim_copain}. Now the image of the symbol $\rho_{00} \otimes p_B$ is isomorphic to $B$, so by Theorem \ref{theo:unitary_equivalence}, when $k$ is sufficiently large, $\mathcal{H}''_k$ has the same dimension as $\mathcal{H}_k' = \operatorname{Im} \Pi'_k$, where $\Pi'_k$ is any self-adjoint projector of $\lag^{+}(B)$ with symbol $ \rho_{00} \otimes \operatorname{id}_B$. And by another application of Theorem \ref{theo:unitary_equivalence}, for $B=F$, $\mathcal{H}'_k$ and $\mathcal{H}_k$ have the same dimension when $k$ is sufficiently large. \end{proof} \section{Landau Hamiltonian algebra} \label{sec:landau-levels-cn} In this section, we come back to the algebra $\mathcal{S} ({\mathbb{C}}^n)$ introduced in Section \ref{sec:symbol-spaces}. We extend the action of the elements of $\mathcal{S} ({\mathbb{C}}^n)$ on $\mathcal{D} ( {\mathbb{C}}^n)$ to the full polynomial space and we compute the corresponding Schwartz kernel. This will be used in the sequel to give an intrinsic definition of the symbol maps ${\sigma}_q$, cf. Definition \ref{def:symbol}, and to understand the composition properties of the class $\lag (A,B)$. Let $\mathcal{P} ( {\mathbb{C}}^n)$ be the space of polynomial maps from ${\mathbb{C}}^n$ to ${\mathbb{C}}$, so any $f \in \mathcal{P} ( {\mathbb{C}}^n)$ has the form $ f = \sum a_{{\alpha} {\beta}} z^ {\alpha} \overline{z}^{\beta}$ where the sum is finite and the $a_{{\alpha} {\beta}}$ are complex numbers.
The space $\mathcal{D} ( {\mathbb{C}}^n)$ introduced in Section \ref{sec:symbol-spaces} is the subspace of $\mathcal{P} ( {\mathbb{C}}^n)$ of antiholomorphic maps. We endow $\mathcal{P} ( {\mathbb{C}}^n)$ with the same scalar product \begin{gather} \label{eq:scal_prod_2} \langle f, g \rangle = (2\pi)^{-n} \int_{{\mathbb{C}}^n} e^{-|z|^2} f (z) \overline{g (z)} \; d \mu_n (z) \end{gather} as in \eqref{eq:scal_product_bargm} for $\mathcal{D} ( {\mathbb{C}}^n)$. The family $(( ({\alpha} + {\beta} )! )^{-\frac{1}{2}} z^{{\alpha}} \overline{z}^{\beta}, \; {\alpha} , {\beta} \in {\mathbb{N}}^n )$ is a basis of $\mathcal{P} ( {\mathbb{C}}^n)$ consisting of unit vectors. For any $i =1, \ldots, n $, introduce the endomorphism $a_i = \partial_{\overline{z}_i}$ and its adjoint $a_i^{*} = \overline{z}_i - \partial_{z_i}$. They satisfy the bosonic commutation relations $$ [ a_i , a_j ] = [a_i^{*}, a_j^{*} ] = 0 , \qquad [a_i , a_j^{*}] = \delta_{ij}. $$ So the $a_i^{*} a_i$'s are mutually commuting Hermitian endomorphisms. Their joint eigenspaces are the Landau levels of ${\mathbb{C}}^n$. In the sequel, we use the notation $a^{\alpha} := a_1^{{\alpha}(1)} \ldots a_n^{{\alpha} (n)}$ and $(a^{*})^{{\alpha}} := (a_1^*)^{{\alpha}(1)} \ldots (a_n^* )^{{\alpha} (n)}$. \begin{prop} \label{prop:linearlandau} $ $ \begin{enumerate} \item For $i=1, \ldots, n$, $a_i^* a_i$ is diagonalisable with spectrum ${\mathbb{N}}$. So we have a decomposition into mutually orthogonal joint eigenspaces $ \mathcal{P} ( {\mathbb{C}}^n) = \bigoplus_{{\alpha} \in {\mathbb{N}}^n} \mathcal{L}_{\alpha}$ with $\mathcal{L}_{{\alpha}} = \bigcap_{i=1}^{n} \operatorname{ker} ( a_i^* a_i - {\alpha} (i) ).$ \item $\mathcal{L}_0 = {\mathbb{C}} [z_1, \ldots ,z_n]$ and for any ${\alpha} \in {\mathbb{N}}^n$, $ \mathcal{L}_{{\alpha}} = (a^*)^{\alpha} \mathcal{L}_0 $. \item For any ${\alpha} , {\beta} \in {\mathbb{N}}^n$, let $\tilde \rho_{{\alpha} {\beta}} := ( {\alpha} !
{\beta} !)^{-\frac{1}{2}} (a^{*})^{{\alpha}} \tilde \rho_{00} a ^{{\beta}}$ with $\tilde \rho_{00} $ the orthogonal projector of $\mathcal{P} ( {\mathbb{C}}^n)$ onto $\mathcal{L}_0$. Then \begin{enumerate} \item $\tilde \rho_{{\alpha}{\beta}}$ is zero on the $\mathcal{L}_{{\gamma}}$'s with ${\gamma} \neq {\beta}$ and restricts to a unitary isomorphism from $\mathcal{L}_{{\beta}}$ to $\mathcal{L}_{{\alpha}}$, \item $\tilde \rho_{{\alpha} {\alpha}}$ is the orthogonal projector onto $\mathcal{L}_{\alpha}$, \item $ \tilde \rho_{{\alpha} {\beta}} \circ \tilde \rho_{\tilde{{\alpha}} \tilde{{\beta}}} = \delta_{{\beta} \tilde{{\alpha}}} \tilde \rho_{{\alpha} \tilde {\beta}} $ and $\tilde \rho_{{\alpha} {\beta}} ^{*} = \tilde \rho_{{\beta} {\alpha}} $. \end{enumerate} \end{enumerate} \end{prop} \begin{proof} The result is certainly standard in condensed matter theory. For the convenience of the reader, we briefly explain the proof for $n=1$. The extension to higher dimensions is straightforward. We write $a:= a_1$, recall the commutation relation $ [a , a^{*} ]= 1 $ and set $\mathcal{L}_m := (a^{*})^m ({\mathbb{C}}[z])$ for any $m \in {\mathbb{N}}$. We check by induction that $\mathcal{L}_m = \ker ( a^* a - m)$. First, writing $\langle a^{*} a f, f \rangle = \| a f \|^2 $, it follows that $\ker a^* a = \ker a = \mathcal{L}_0$. Assume now that $\mathcal{L}_m = \ker ( a^* a - m)$. By the commutation relation, $f \in \mathcal{L}_m$ implies that $a^{*} a a ^{*} f = (m+1) a^* f$, so $\mathcal{L}_{m+1} \subset \ker ( a^* a - (m+1))$. Conversely, by the commutation relation again, $a^* a f = ( m+1) f $ implies that $ (a^* a ) a f = m a f$ so $ af \in \mathcal{L}_m$ and $f = (m+1)^{-1} a^* (af) \in \mathcal{L}_{m+1}$. To conclude that $a^* a $ is diagonalizable with eigenvalues in ${\mathbb{N}}$, it suffices to prove that $\mathcal{P} ( {\mathbb{C}})$ is spanned by the $\mathcal{L}_m$.
Introduce the filtration $\mathcal{F}_m := \oplus _{\ell=0}^{m} \overline{z}^\ell {\mathbb{C}}[z]$, $m \in {\mathbb{N}}$. If $f \in {\mathbb{C}} [z]$, then $\overline{z}^m f = (a^{*})^m f \mod \mathcal{F}_{m-1}$. So $\mathcal{F}_m = \mathcal{L}_m + \mathcal{F}_{m-1} = \ldots = \mathcal{L}_m + \mathcal{L}_{m-1} + \ldots + \mathcal{L}_0$ by iterating. So we have proved that $\mathcal{P} ( {\mathbb{C}}) = \bigoplus \mathcal{L}_m$ with $\mathcal{L}_m = \ker ( a^* a - m)$, which shows the first and second assertions of the proposition. By the commutation relation, $a a^{*} = (m+1)$ on $\mathcal{L}_m$. So $a^{*} : \mathcal{L}_m \rightarrow \mathcal{L}_{m+1}$ is invertible with inverse $(m+1)^{-1} a: \mathcal{L}_{m+1} \rightarrow \mathcal{L}_{m}$. We conclude that \begin{itemize} \item if $p \leqslant m$, then $a^p$ restricts to an isomorphism from $\mathcal{L}_m$ to $\mathcal{L}_{m-p}$, whose inverse is the restriction of $\frac{(m-p)!}{m!} (a^{*})^p$ to $\mathcal{L}_{m-p}$, \item if $p > m$, then $a ^p ( \mathcal{L}_m) = \{0\}$. \end{itemize} With these two facts, we easily check the third assertion. \end{proof} By the last assertion of Proposition \ref{prop:linearlandau}, the space $\widetilde{\mathcal{S}} ({\mathbb{C}}^n)$ of endomorphisms of $\mathcal{P} ( {\mathbb{C}}^n)$ generated by the $\tilde \rho_{{\alpha} {\beta}}$'s is closed under composition, so it is an algebra. By the following proposition, $\widetilde{\mathcal{S}} ({\mathbb{C}}^n)$ is isomorphic to the algebra $\mathcal{S} ({\mathbb{C}}^n)$ introduced in Section \ref{sec:symbol-spaces}, through the map sending $\tilde \rho_{{\alpha}{\beta}}$ to $\rho_{{\alpha}{\beta}}$. \begin{prop} The elements of $\widetilde{\mathcal{S}} ({\mathbb{C}}^n)$ preserve the subspace $\mathcal{D} ( {\mathbb{C}}^n)$ of $\mathcal{P} ( {\mathbb{C}}^{n})$.
Furthermore, the restriction map $\operatorname{res} : \widetilde{\mathcal{S}} ({\mathbb{C}}^n) \rightarrow \operatorname{End} ( \mathcal{D} ( {\mathbb{C}}^{n}) )$ is injective, with image $\mathcal{S} ( {\mathbb{C}}^n)$ and $\operatorname{res} ( \tilde \rho_{{\alpha} {\beta}} ) = \rho_{{\alpha} {\beta}}$. \end{prop} Recall the decomposition \eqref{eq:symb+_-} of $\mathcal{S} ({\mathbb{C}}^n)$ into the subspaces of even and odd elements. Since $\widetilde{\mathcal{S}} ({\mathbb{C}}^n) \simeq \mathcal{S} ( {\mathbb{C}}^n)$, this gives us a new decomposition $$\widetilde{\mathcal{S}} ({\mathbb{C}}^n) = \widetilde{\mathcal{S}}^+ ({\mathbb{C}}^n) \oplus \widetilde{\mathcal{S}}^- ({\mathbb{C}}^n).$$ \begin{proof} Observe first that the operators $a_i$, $a_i^{*}$ and the projector $\tilde \rho_{00}$ preserve $\mathcal{D} ( {\mathbb{C}}^n)$. Furthermore, for any $f \in \mathcal{D} ({\mathbb{C}}^n)$, $a_i f = \partial_{\overline{z}_i} f$, $a_i^{*} f = \overline{z}_i f$ and $\tilde \rho_{00} f = f (0)$. Consequently, the operators $\tilde \rho_{{\alpha} {\beta}}$ preserve $\mathcal{D} ( {\mathbb{C}}^n)$ and an easy computation shows that $$ \tilde \rho_{{\alpha}{\beta}} \bigl( ({\beta} !)^{-\frac{1}{2}} \overline{z}^{\beta} \bigr) = ({\alpha} !)^{-\frac{1}{2}} \overline{z}^{{\alpha}}, \qquad \tilde \rho_{{\alpha} {\beta}} ( \overline{z}^{{\gamma}} ) = 0 , \quad \forall {\gamma} \in {\mathbb{N}}^n \setminus \{ {\beta} \}. $$ This means that the restriction of $\tilde \rho_{{\alpha} {\beta}}$ to $\mathcal{D} ({\mathbb{C}}^n)$ is exactly the endomorphism $\rho_{{\alpha} {\beta}}$ introduced in Section \ref{sec:symbol-spaces}, cf. Equation \eqref{eq:def_Ualphabeta}. So the restriction map $\operatorname{res}$ is well-defined, its image is $\mathcal{S} ( {\mathbb{C}}^n)$, and, the $\rho_{{\alpha}{\beta}}$'s being linearly independent, it is injective. \end{proof} Let us compute the Schwartz kernel of each $\tilde \rho_{{\alpha}{\beta}}$.
\begin{lemme} \label{lem_kenrel_u_alpha_beta} For any $f \in \mathcal{P} ( {\mathbb{C}}^n)$, we have $$ (\tilde \rho_{{\alpha}{\beta}} f)(u ) = (2 \pi)^{-n} \int_{{\mathbb{C}}^{n} } e^{ u\cdot \overline{v} - |v|^2 } p_{{\alpha}{\beta}} (u-v) \, f(v ) \; d\mu_n (v) $$ where $u \cdot \overline v = \sum u_i \overline{v}_i$ and $p_{{\alpha}{\beta}} (z) = \bigl( {\alpha}! {\beta} ! \bigr) ^{-\frac{1}{2}} \bigl( \partial_z - \overline{z}\bigr)^{{\alpha}} z^{\beta} $. \end{lemme} \begin{proof} For ${\alpha} = {\beta} =0$, this is the well-known formula for the Schwartz kernel $K ( u,v) = (2\pi)^{-n} e^{u \cdot \overline v - |v|^2} $ of the projector onto the Bargmann space, which in our setting is the $L^2$-completion of $\mathcal{L}_0$. So the Schwartz kernel of $(a^{*})^{{\alpha}} \tilde \rho_{00} a^{{\beta}}$ is $ K_{{\alpha}{\beta}} (u,v) = (\overline{u} -\partial_u)^{{\alpha}} (-\partial_{\overline{v}})^{{\beta}} K(u,v)$. To compute this, we use that for a polynomial $g (z,\overline{z})$ \begin{xalignat*}{2} (\overline{u}_i - \partial_{u_i}) (K(u,v) g(u-v ) ) & = K(u,v) (a_i^{*} g)( u-v ) , \\ ( \partial_{\overline{v}_i}) (K(u,v) g(u-v ) ) & = K(u,v) (b_i^{*} g)( u-v ) \end{xalignat*} where $b_i^{*} := z_i - \partial_{\overline{z}_i}$. So $K_{{\alpha}{\beta}} (u,v) = K(u,v) p(u-v) $ with $p (z,\overline z) = ( a^{*})^{{\alpha}} (b^{*})^\beta 1= \bigl( \partial_z - \overline{z}\bigr)^{{\alpha}} z^{\beta} $, which ends the proof. \end{proof} So, on the one hand, the elements of $\widetilde{\mathcal{S}} ( {\mathbb{C}}^n)$ act on $\mathcal{P} ( {\mathbb{C}}^n)$; on the other hand, the Schwartz kernel of $\tilde \rho_{{\alpha}{\beta}}$ is given by a polynomial $p_{{\alpha}{\beta}} \in \mathcal{P} ( {\mathbb{C}}^n)$.
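The commutation relation $[a, a^*] = 1$, the identity $a^* a \, \overline{z}^m = m \overline{z}^m$, the action of $\tilde \rho_{\alpha\beta}$ on $\mathcal{D}({\mathbb{C}})$ and the normalization $p_{\alpha\beta}(0) = \delta_{\alpha\beta}$ can all be checked symbolically in small cases. A minimal sketch in Python with sympy for $n=1$, treating $z$ and $\bar z$ as independent polynomial generators (the helper names are ours, not notation from the text):

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

# Ladder operators on P(C): a = d/d(zbar), a* = zbar - d/dz
a     = lambda f: sp.diff(f, zb)
astar = lambda f: zb*f - sp.diff(f, z)

# Bosonic commutation relation [a, a*] = 1, checked on a few monomials
for f in (z**2, zb**3, z*zb, z**2*zb**2):
    assert sp.expand(a(astar(f)) - astar(a(f)) - f) == 0

# zbar^m lies in the Landau level L_m: (a* a) zbar^m = m zbar^m
for m in range(6):
    assert sp.expand(astar(a(zb**m)) - m*zb**m) == 0

# On D(C), rho_{al,be} = (al! be!)^{-1/2} (a*)^al rho_00 a^be,
# where a f = f', a* f = zbar f and rho_00 f = f(0)
def rho(al, be, f):
    g = sp.diff(f, zb, be).subs(zb, 0)
    return zb**al * g / sp.sqrt(sp.factorial(al)*sp.factorial(be))

# rho_{al,be} maps the normalized zbar^be to the normalized zbar^al
assert sp.simplify(rho(2, 3, zb**3/sp.sqrt(sp.factorial(3)))
                   - zb**2/sp.sqrt(sp.factorial(2))) == 0

# p_{al,be}(z) = (al! be!)^{-1/2} (d/dz - zbar)^al z^be satisfies p(0) = delta
def p(al, be):
    g = z**be
    for _ in range(al):
        g = sp.diff(g, z) - zb*g
    return g / sp.sqrt(sp.factorial(al)*sp.factorial(be))

delta = sp.Matrix(3, 3, lambda i, j: p(i, j).subs({z: 0, zb: 0}))
assert delta == sp.eye(3)
```

The last assertion is the identity $\operatorname{tr} \rho_{\alpha\beta} = \delta_{\alpha\beta} = p_{\alpha\beta}(0)$ used in the next proof.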
\begin{prop} \label{prop:tilde_symb_and_op} $\widetilde{\mathcal{S}} ( {\mathbb{C}}^n)$ consists of the endomorphisms $V$ having the form \begin{gather} \label{eq:sch_ker} (V f)(u) = (2\pi)^{-n} \int_{{\mathbb{C}}^n} e^{u\cdot \overline v - |v|^2} q (u-v ) f(v) \; d\mu_n (v) \end{gather} with $q \in \mathcal{P} ( {\mathbb{C}}^n)$. Furthermore, the map $\operatorname{Op} : \mathcal{P} ({\mathbb{C}}^n) \rightarrow \widetilde{\mathcal{S}} ( {\mathbb{C}}^n)$, sending $q$ to $V$, is an isomorphism which preserves the parity and \begin{alignat}{2} \label{eq:rel_bizare} \begin{split} & \operatorname{tr} \bigl( \operatorname{Op} (f)|_{\mathcal{D} ( {\mathbb{C}}^n)} \bigr) = f (0) \\ & \operatorname{Op} (f) \circ \operatorname{Op} (g) = \operatorname{Op} ( \operatorname{Op} (f) g ) \end{split} \end{alignat} for any $f,g \in \mathcal{P} ( {\mathbb{C}}^n)$. \end{prop} \begin{proof} Lemma \ref{lem_kenrel_u_alpha_beta} says that $\operatorname{Op} ( p_{{\alpha}{\beta}} ) = \tilde \rho_{{\alpha} {\beta}}$. The family $(p_{{\alpha}{\beta}})$ is a basis of $\mathcal{P} ( {\mathbb{C}}^n)$ because $p_{{\alpha}{\beta}} (z ) = ({\alpha} ! {\beta} ! ) ^{-\frac{1}{2}}(- \overline{z})^{{\alpha}} z^{\beta} + $ a linear combination of $\overline{z}^{{\alpha}'} z^{{\beta}'}$ with ${\alpha}' < {\alpha}$ and ${\beta}' < {\beta}$. Since $(\rho_{{\alpha}{\beta}})$ is a basis of $\mathcal{S} ( {\mathbb{C}}^n)$, $(\tilde \rho_{{\alpha} {\beta}})$ is a basis of $\widetilde { \mathcal{S} } ( {\mathbb{C}}^n)$ and it follows that $\operatorname{Op}$ is an isomorphism. This isomorphism preserves the parity because $\tilde \rho_{{\alpha} {\beta}} $ and $p_{{\alpha}{\beta}}$ both have the same parity as $|{\alpha}| + |{\beta}|$. For the first equation of \eqref{eq:rel_bizare}, it suffices to prove it for $q = p_{{\alpha}{\beta}}$, and in this case, it follows from $\operatorname{tr} \rho_{{\alpha}{\beta}} =\delta_{{\alpha}{\beta}} = p_{{\alpha}{\beta}} (0)$.
To prove the second equation of \eqref{eq:rel_bizare}, observe that we recover $q$ from the Schwartz kernel of $\operatorname{Op} (q)$ by setting $v = 0 $, that is $$ e^{u\cdot \overline v - |v|^2} q (u-v ) \Bigr|_{v =0} = q (u) .$$ Applying this to the composition formula for Schwartz kernels leads to the second equation of \eqref{eq:rel_bizare}. \end{proof} As a last remark, we can replace ${\mathbb{C}}^n$ in the previous definitions with any $n$-dimensional Hermitian space $\mathbf{E}$, as we did in Section \ref{sec:symbol-spaces}. So we denote by $\mathcal{P} ( \mathbf{E})$ the space of polynomial maps $\mathbf{E} \rightarrow {\mathbb{C}}$ and by $\widetilde{\mathcal{S}} ( \mathbf{E})$ the space of endomorphisms of $\mathcal{P} ( \mathbf{E})$ having the form \eqref{eq:sch_ker}, where we interpret $u \cdot \overline{v}$ as the scalar product of the vectors $u$, $v$ of $\mathbf{E}$ and $|v|$ as the norm of $v$. Observe as well that the map $\operatorname{Op} : \mathcal{P} ( \mathbf{E} ) \rightarrow \widetilde{\mathcal{S}} ( \mathbf{E} )$ is well-defined. Furthermore, the restriction from $\mathcal{P} ( \mathbf{E})$ to its subspace $\mathcal{D} ( \mathbf{E})$ induces an isomorphism $\widetilde{\mathcal{S}} ( \mathbf{E} ) \simeq \mathcal{S} (\mathbf{E})$. \section{The Schwartz kernels of operators of \texorpdfstring{$\lag (A, B)$}{L(A,B)} } \label{sec:schw-kern-oper} \subsection{The section \texorpdfstring{$E$}{E}} \label{sec:section-ee} An important ingredient in the global Schwartz kernel description of operators of $\lag (A,B)$ is a section $E$ of $L \boxtimes \overline{L}$ satisfying the following conditions. For any $y\in M$, denote by $E_y $ the section of $L \otimes \overline{L}_y$ given by $E_y(x) = E(x,y)$ for any $x \in M$.
Then we will assume that for any $y\in M$ \begin{xalignat}{2} \label{eq:hypotheseE} \begin{split} & E_y(y) = u \otimes \overline{u},\qquad \forall u \in L_y \text{ with } |u| =1, \\ & (\nabla E_y)(y) =0 , \\ & (\nabla_{\xi} \nabla_{\eta} E_y)(y) = - \bigl( \tfrac{i}{2} {\omega} (\xi, \eta) + \tfrac{1}{2} {\omega} ( \xi, j \eta) \bigr) E_y (y) , \qquad \forall \; \xi, \eta \in T_yM. \end{split} \end{xalignat} Such a section already appeared in the expansion \eqref{eq:expansion_kernel} as follows. Choose a unitary frame $t$ of $L$ and a coordinate system on the same open set; then the section \begin{gather} \label{eq:exempleE} E ( y+ \xi, y) := e^{-\varphi ( y, \xi)} t( y+ \xi) \otimes \overline{t}(y), \end{gather} with $\varphi$ defined as in \eqref{eq:expansion_kernel}, satisfies \eqref{eq:hypotheseE}. From this local construction, we easily obtain a global section $E$ by using a partition of unity. The conditions \eqref{eq:hypotheseE} determine the second-order Taylor expansion of $E$ at $(y,y)$ in the directions tangent to the first factor of $M^2$. Since any tangent vector of $M^2$ at $(y,y)$ is the sum of a vector tangent to the diagonal and a vector tangent to the first factor, we deduce that $E$ is uniquely determined modulo a section vanishing to third order along the diagonal. The function $\psi_y (x)= - 2 \ln |E_y (x)|$ vanishes to second order at $y$ and for any $\xi, \eta \in T_yM$, $(\xi.\eta. \psi_y )(y) = {\omega} (\xi, j \eta)$, so $\psi_y(x)>0$ when $x\neq y$ is sufficiently close to $y$. So, modifying $E$ outside the diagonal, we can assume that it also satisfies \begin{gather} \label{eq:normE} |E (x,y) | <1, \qquad \forall (x,y) \in M^2 \text{ such that } x \neq y. \end{gather} Another important property of $E$ is the symmetry: \begin{gather} \label{eq:symE} \overline{E (x,y)} = E(y,x) + \mathcal{O} ( |x-y|^3). \end{gather} For a longer discussion, the reader is referred to \cite{oim}.
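In the flat model described below, where $E_{{\mathbb{C}}^n}(u,v) = e^{u\cdot\overline{v} - \frac{1}{2}(|u|^2 + |v|^2)}\, t(u) \otimes \overline{t(v)}$, the conditions \eqref{eq:normE} and \eqref{eq:symE} hold exactly, with $-2\ln|E(u,v)| = |u-v|^2$. A quick symbolic check of the exponent for $n = 1$ (a sketch of ours in Python with sympy, not part of the text):

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)
u = x1 + sp.I*y1
v = x2 + sp.I*y2

# Exponent of the flat-model section E(u,v) = exp(u*conj(v) - (|u|^2 + |v|^2)/2)
expo = u*sp.conjugate(v) - (u*sp.conjugate(u) + v*sp.conjugate(v))/2

# psi = -2 log|E| = -2 Re(exponent) equals |u - v|^2, so |E| < 1 off the diagonal
psi = sp.expand(-2*sp.re(sp.expand(expo)))
assert sp.simplify(psi - ((x1 - x2)**2 + (y1 - y2)**2)) == 0

# The symmetry conj(E(u,v)) = E(v,u) holds exactly in this model
swapped = expo.subs({x1: x2, x2: x1, y1: y2, y2: y1}, simultaneous=True)
assert sp.simplify(sp.expand(sp.conjugate(expo) - swapped)) == 0
```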
In the sequel we will need the following expression of $E$ in terms of complex coordinates and a frame of $L$, both normal at a point $p_0 \in M$. We say that a function or a section is in $\mathcal{O}_{p_0} (m)$ if it vanishes to order $m$ at $p_0$. Let $(\partial_i )_{i =1}^{n}$ be an orthonormal basis of $T^{1,0}_{p_0} M$, i.e. $\frac{1}{i}{\omega}_{p_0} ( \partial_i, \overline{\partial}_j) = \delta_{ij}$. Choose complex valued functions $z_i$ on a neighborhood of $p_0$ such that \begin{gather} \label{eq:coordonnees_z} z_i (p_0) =0, \qquad dz_i ( \partial_j) =\delta_{ij}, \quad dz_i ( \overline \partial_j) = 0 \quad \text{ at }p_0. \end{gather} Then $(\operatorname{Re} z_i , \operatorname{Im}z_i )_{i=1}^n$ is a coordinate system on a neighborhood of $p_0$ and ${\omega}|_{p_0} = i \sum dz_i \wedge d\overline{z}_i$. The curvature of $L$ being $\frac{1}{i} {\omega}$, there exists a unitary frame $t$ of $L$ at $p_0$ such that $\nabla t = \frac{1}{2} \textstyle{\sum} ( z_i d \overline{z}_i - \overline{z}_i d z_i ) \otimes t + \mathcal{O}_{p_0} ( 2)$. Then \begin{gather} \label{eq:E_linearise} E(x,y) = e^{ z(x) \cdot \overline{z} (y) -\frac{1}{2} ( |z(x) |^2 + |z (y)|^2) } t(x) \otimes \overline{t(y)} + \mathcal{O}_{(p_0, p_0)} (3). \end{gather} A similar expression already appeared in the description of the Schwartz kernels of the operators of $\widetilde{\mathcal{S}} ({\mathbb{C}}^n)$. Indeed, let $L_{{\mathbb{C}}^n}= {\mathbb{C}}^n \times {\mathbb{C}}$ be the trivial holomorphic line bundle equipped with the metric such that the frame $s( z) = (z,1)$ has pointwise norm $|s(z)|^2 = e^{-|z|^2}$. Then it is natural to interpret the elements of $\mathcal{P} ( {\mathbb{C}}^n)$ as sections of $L_{{\mathbb{C}}^n}$, because the scalar product \eqref{eq:scal_prod_2} is the integral of the pointwise scalar product $\bigl( f (z) s(z), g(z) s(z) \bigr) = f(z) \overline{g} (z) e^{-|z|^2}$.
Furthermore, in the integral \eqref{eq:sch_ker}, we can interpret $e^{-|v|^2} f(v)$ as the pointwise scalar product $( f(v) s(v) , s(v))$. In other words, the Schwartz kernel of $V$ is $$(2\pi)^n E_{{\mathbb{C}}^n} (u,v) q(u-v ) \quad \text{ with } \quad E_{{\mathbb{C}}^n}(u,v) =e^{u \cdot \overline{v}} s(u) \otimes \overline{s}(v).$$ Now equip $L_{{\mathbb{C}}^n}$ with its Chern connection, that is, the unique connection compatible with both the holomorphic and Hermitian structures. Then $\nabla s = - \sum \overline{z}_i dz_i \otimes s$. So the curvature is $\frac{1}{i} {\omega}_{{\mathbb{C}}^n} $ with ${\omega}_{{\mathbb{C}}^n} = i \sum dz_i \wedge d \overline{z}_i$. And if $t $ is the unitary frame $t(z) = e^{|z|^2/2} s (z)$, we have $\nabla t = \frac{1}{2} \sum ( z_i d \overline{z}_i - \overline{z}_i d z_i ) \otimes t$ and $$E_{{\mathbb{C}}^n}(u,v) = e^{ u \cdot \overline v - \frac{1}{2} ( |u|^2+ |v|^2)} t(u) \otimes \overline{t(v)},$$ the same formula as \eqref{eq:E_linearise}. \subsection{A global expansion} \label{sec:global-expansion} We consider an operator family $(P_k : \Ci( M , L^k \otimes A ) \rightarrow \Ci(M , L^k \otimes B), \; k \in {\mathbb{N}})$ having a smooth Schwartz kernel. Recall the notations introduced at the beginning of Section \ref{sec:operators}. In particular, $\| P_k \|$ is the operator norm whereas $|P_k|$ is the function on $M^2$ sending $(x,y)$ to $|P_k(x,y)|$. Let $E$ be a section of $L \boxtimes \overline L $ satisfying \eqref{eq:hypotheseE} and \eqref{eq:normE} and $b \in \Ci ( M^2 , B \boxtimes \overline A)$. Then, viewing $(L^k \otimes B) \boxtimes (\overline{L}^k \otimes \overline{A})$ as $(L \boxtimes \overline L )^k \otimes ( B \boxtimes \overline{A})$, we introduce the operator family $(P_k)$ with Schwartz kernels \begin{gather} \label{eq:prel} P_k (x,y) = \Bigl( \frac{k}{2\pi} \Bigr)^n E^k(x,y) b (x,y). \end{gather} The norms of $P_k$ depend in an essential way on the vanishing order of $b$ along the diagonal.
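As a numerical illustration of this dependence (ours, not from the text): in the flat one-dimensional model $|E(x,y)| = e^{-|x-y|^2/2}$ with $b = |x-y|^m$, the sup of $|P_k|$ scales like $k^{\,1-m/2}$, so each additional vanishing order of $b$ along the diagonal gains a factor $k^{-1/2}$:

```python
import numpy as np

def sup_Pk(k, m, n=1):
    """Sup over the distance t = |x - y| of (k/2pi)^n * |E|^k * |b|,
    with |E| = exp(-t^2/2) and b = t^m in the flat model."""
    t = np.linspace(0.0, 5.0, 200001)
    return ((k/(2*np.pi))**n * np.exp(-k*t**2/2) * t**m).max()

ks = np.array([200.0, 400.0, 800.0, 1600.0])
slopes = {}
for m in range(5):
    vals = np.array([sup_Pk(k, m) for k in ks])
    # exponent of k in sup|P_k|, estimated by a log-log fit; expect 1 - m/2
    slopes[m] = np.polyfit(np.log(ks), np.log(vals), 1)[0]
```

For $m = 0, 1, \ldots, 4$ the fitted exponents are close to $1, 1/2, 0, -1/2, -1$, matching the power $k^{n - m/2}$ of the estimates below with $n = 1$.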
If $m \in {\mathbb{N}}$, we write $b = \mathcal{O} ( m)$ to say that all the derivatives of $b$ of order $\leqslant m-1$ are zero at each point of the diagonal. Recall that $$\psi (x,y) = - 2 \ln |E(x,y)| $$ is positive outside the diagonal, and vanishes to second order along the diagonal with a Hessian non-degenerate in the transverse direction. \begin{lemme} \label{lem:estimate} If $P_k$ is given by \eqref{eq:prel} with $b = \mathcal{O} ( m)$, then $|P_k | = \mathcal{O} \bigl( k^{n-\frac{m}{2}} e^{-k \frac{\psi}{4}} \bigr)$ and $\| P_k \| = \mathcal{O} ( k^{- \frac{m}{2}})$. \end{lemme} \begin{proof} Since $b = \mathcal{O} (m)$, we have $ | b | = \mathcal{O} ( \psi ^{\frac{m}{2}} )$, whereas $|E^k| = e^{-k \frac{\psi}{2}}$ by the definition of $\psi$. So $$|P_k | \leqslant C k^n e^{-k \frac{\psi}{2}} \psi ^{\frac{m}{2}} = C k^{n- \frac{m}{2}} e^{-k \frac{\psi}{4}} e^{-k \frac{\psi}{4}} (k\psi) ^{\frac{m}{2}} \leqslant C' k^{n- \frac{m}{2}} e^{-k \frac{\psi}{4}} $$ because $t \mapsto e^{-\frac{t^2}{4} } t^m$ is bounded on ${\mathbb{R}}_{\geqslant 0}$. This proves the first estimate, which can be written locally in a coordinate system as $$|P_k (x,y) | \leqslant C k^{n - \frac{m}{2}} e^{-k|x-y|^2 /C}.$$ So $\int_M |P_k(x,y)| \, d\mu_M (y)$ and $\int_M |P_k (x,y)|\, d\mu_M (x)$ are both $\leqslant C k^{-\frac{m}{2}}$ and the operator norm estimate follows from the Schur test. \end{proof} More generally, instead of $P_k$ given by \eqref{eq:prel}, assume that for any $N \in {\mathbb{N}}$, \begin{gather} \label{eq:expansion} P_k (x,y) = \Bigl(\frac{k}{2\pi} \Bigr)^n E^k(x,y) \sum_{\substack{\ell \in {\mathbb{Z}}, \\ \ell + m ( \ell ) \leqslant N }} k^{- \frac{\ell}{2}} b_{\ell} (x,y) + R_{N,k} (x,y) \end{gather} where \begin{enumerate}[i)] \item $m : {\mathbb{Z}} \rightarrow {\mathbb{N}} \cup \{\infty \} $ is such that for any $N$, $\{ \ell \mid \ell + m(\ell) \leqslant N \}$ is finite, and $\ell + m ( \ell) \geqslant 0$ for any $\ell$.
\item $(b_{\ell})_{\ell \in {\mathbb{Z}}}$ is a family of $\Ci ( M^2, B \boxtimes \overline{A} )$ such that $b _{\ell} = \mathcal{O} ( m( \ell))$ for any $\ell$. \item $|R_{N,k} (x,y) | = \mathcal{O} ( k^{n - \frac{N+1}{2}} )$ uniformly on $M^2$. \end{enumerate} By Lemma \ref{lem:estimate}, $| E^k(x,y) k^{-\frac{\ell}{ 2}} b_{\ell}(x,y) |$ is $\mathcal{O} ( k^{ - \frac{1}{2} (\ell + m ( \ell))})$. So the expansion \eqref{eq:expansion} is consistent in the sense that, passing from $N$ to $N+1$, we add new terms $k^{-\frac{\ell}{2}} b_\ell$ such that $\ell + m ( \ell) = N+1$, which contribute to $P_k (x,y)$ with a $\mathcal{O} (k^{n -\frac{1}{2}(N+1)})$. \begin{lemme} \label{lem:estim_expansion} If the expansion \eqref{eq:expansion} holds, then for any $q>0$ $$ |P_k | = \mathcal{O} \bigl( k^{n} e^{- k \frac{\psi }{4}} \bigr) + \mathcal{O} ( k^{-q} ) , \qquad \| P_k \| = \mathcal{O} (1).$$ Similarly, the remainders $R_{N,k}$ satisfy for any $q>0$, $$ |R_{N,k} | = \mathcal{O} \bigl( k^{n- \frac{N+1}{2}} e^{- k \frac{\psi }{4}} \bigr) + \mathcal{O}( k^{-q}) , \qquad \| R_{N,k} \| = \mathcal{O} (k^{-\frac{N+1}{2}}) . $$ \end{lemme} \begin{proof} To prove the first estimate, we use \eqref{eq:expansion} with $N$ sufficiently large so that $| R_{N,k} | = \mathcal{O} ( k^{-q})$, and the result follows from Lemma \ref{lem:estimate} because $\ell + m (\ell) \geqslant 0$. The operator norm estimate is proved similarly by choosing $N$ so that $|R_{N,k} | =\mathcal{O} (1)$, which implies that $\| R_{N,k} \| = \mathcal{O} (1)$. The proof for the $R_{N,k}$ is essentially the same. \end{proof} We next show that in the expansion \eqref{eq:expansion}, we can choose any section $E$ satisfying the assumptions given in Section \ref{sec:section-ee}. \begin{lemme} \label{lem:changement_E} Assume \eqref{eq:expansion} holds and let $E'$ be a section satisfying \eqref{eq:hypotheseE} and \eqref{eq:normE}.
Then there exists a family $(b'_{\ell})$ of $\Ci ( M^2 , B \boxtimes \overline A )$ such that \eqref{eq:expansion} holds with $E'$ and $b'_{\ell}$ instead of $E$ and $b_{\ell}$. \end{lemme} \begin{proof} Observe first that \eqref{eq:expansion} holds outside the diagonal if and only if $|P_k (x,y)|$ is in $\mathcal{O} ( k^{-\infty})$ outside the diagonal, and this condition is clearly independent of the choice of $E$ and the $b_{\ell}$'s. On a neighborhood of the diagonal, we have $E = e^{g} E'$. Since $g \in \mathcal{O} ( 3)$, we can assume that $|g| \leqslant \psi/4$. Let us write $$ P_k = \Bigl(\frac{k}{2 \pi}\Bigr)^n E^k b_{N,k} + \mathcal{O} ( k^{n- \frac{N+1}{2}}) \quad \text{with } \quad b_{N,k} = \sum_{\ell + m ( \ell) \leqslant N} k^{-\frac{\ell}{2}}b_{\ell} .$$ By Lemma \ref{lem:estimate}, $|E|^k b_{N,k} = \mathcal{O} ( e^{-k \frac{\psi}{2}})$. Using that $$ \exp z = \sum_{p = 0 }^N \frac{z^p }{p!} + r_N (z) \quad \text{ with } \quad |r_{N} (z) | \leqslant \frac{|z|^{N+1} }{ (N+1)!} e^{|\operatorname{Re} z |} ,$$ we deduce from $E^k = e^{kg } (E')^k$ that \begin{gather} \label{eq:fin} E^k b_{N,k} = (E')^k b_{N,k} \sum_{p = 0 }^N k^p \frac{g^p}{p!} + R_{N,k} \end{gather} where $$| R_{N,k} | \leqslant C_N e^{-k \frac{\psi}{2} } |kg|^{N+1} e^{ k |\operatorname{Re} g |} \leqslant C_N e^{-k \frac{\psi}{4} } |kg|^{N+1} = \mathcal{O} ( k^{- \frac{N+1}{2}})$$ because $g = \mathcal{O} (3)$. To conclude now, it suffices to define the $b'_{\ell}$ so that \begin{gather} \label{eq:last_goal} (E')^k \Bigl[ \sum_{\ell + m ( \ell) \leqslant N} k^{-\frac{\ell}{2}}b_{\ell} \Bigr] \Bigl[ \sum_{p = 0 }^N k^p \frac{g^p}{p!} \Bigr]= (E')^k \sum_{\ell + m ' ( \ell) \leqslant N } k^{- \frac{\ell}{2}} b'_{\ell} + \mathcal{O} ( k^{-\frac{N+1}{2}}) \end{gather} holds for any $N$.
This suggests that each $b'_{\ell}$ should be equal to the infinite sum $$b_{\ell} + b_{\ell +2 } \, g + b_{\ell +4 } \frac{g^2}{2} + b_{\ell+ 6} \frac{g^3}{6} +\ldots$$ But by Lemma \ref{lem:estimate}, the equality \eqref{eq:last_goal} depends only on the class of $b'_{\ell}$ modulo $\mathcal{O} ( N- \ell)$, so we can interpret these infinite sums as sums of Taylor expansions along the diagonal. Since $b_{\ell +2p} \, g^{p} = \mathcal{O} ( m(\ell +2 p ) + 3p ) = \mathcal{O} ( 3p)$, by the Borel lemma, there exists $b'_{\ell}$ such that for any $M$ \begin{gather} \label{eq:changement_E} b'_{\ell} = \sum_{p=0}^M b_{\ell+ 2p } \frac{g^p}{p!} + \mathcal{O} (3(M+1)). \end{gather} So $b'_{\ell} = \mathcal{O} ( m '(\ell) )$ with $ m' (\ell ) := \operatorname{min} \{ m( \ell + 2p ) + 3p \mid p \in {\mathbb{N}} \} .$ We easily check that $m'$ satisfies the same condition as $m$. We finally deduce \eqref{eq:last_goal} by using Lemma \ref{lem:estimate} to discard all the coefficients contributing a $\mathcal{O} ( k^{-\frac{N+1}{2}})$. \end{proof} Suppose now that we have an open set $U$ of $M$ and functions $u_i \in \Ci (U^2)$, $i =1, \ldots , 2n$, vanishing along the diagonal and such that for any $y \in U$, $(u_i( \cdot , y))$ is a coordinate system on a neighborhood of $y$. Then we can write the Taylor expansions along the diagonal as follows: any $ f \in \Ci (U^2)$ has a decomposition \begin{gather} \label{eq:taylor_expansion} f(x,y ) = \sum_{m=0}^M f_m (y, u(x,y)) + \mathcal{O} ( M+1 ) \end{gather} where each $f_m (y,\xi)$ is a homogeneous polynomial of degree $m$ in $\xi$. This can also be done for sections of $B \boxtimes \overline{A}$ by introducing frames of $A$ and $B$ on $U$, so that $\Ci (U^2 , B \boxtimes \overline{A}) \simeq \Ci (U^2, {\mathbb{C}}^r)$.
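For a function that is polynomial in its first argument, the decomposition \eqref{eq:taylor_expansion} with $u(x,y) = x - y$ is exact and can be computed directly; a small sketch of ours in Python with sympy, in one variable:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')

# Decompose f(x, y) = sum_m f_m(y, u) with u = x - y and f_m homogeneous
# of degree m in u, as in the Taylor expansion along the diagonal
f = x**2*y + sp.exp(y)*(x - y)            # arbitrary example, polynomial in x
g = sp.expand(f.subs(x, y + u))           # substitute x = y + u
deg = int(sp.degree(g, u))
comps = {m: sp.expand(g.coeff(u, m)*u**m) for m in range(deg + 1)}

# The homogeneous pieces reassemble f exactly (no remainder here)
assert sp.expand(sum(comps.values()) - g) == 0
# The degree-0 piece is the restriction of f to the diagonal x = y
assert sp.simplify(comps[0] - f.subs(x, y)) == 0
```

For a general smooth $f$, the same computation applied to the Taylor polynomial of $f$ gives the decomposition up to the $\mathcal{O}(M+1)$ remainder.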
\begin{lemme} \label{lem:new_expansion} The expansion \eqref{eq:expansion} holds on $U^2$ if and only if there exists a sequence $(a_p )$ of $\Ci ( U \times {\mathbb{R}}^{2n}, {\mathbb{C}}^r)$, each $a_p (x,\xi)$ being polynomial in $\xi$, such that for any $N$ \begin{gather} \label{eq:new_expansion} P_k (x,y) = \Bigl( \frac{k}{2 \pi} \Bigr)^n E^k (x,y) \sum_{p=0}^{N} k^{-\frac{p}{2}} a_p ( y, k^{\frac{1}{2}} u (x,y)) + \mathcal{O} ( k^{n- \frac{N+1}{2}} ). \end{gather} \end{lemme} The remainders in \eqref{eq:new_expansion} satisfy the same pointwise estimates as the $R_{N,k}$ given in Lemma \ref{lem:estim_expansion}; the proof is identical. \begin{proof} If \eqref{eq:expansion} holds on $U^2$, writing the Taylor expansion of each $b_{\ell}$ as in \eqref{eq:taylor_expansion}, we have by Lemma \ref{lem:estimate} \begin{gather*} E^k (x,y) k^{-\frac{\ell}{2}} b_{\ell} (x,y) = E^k (x,y) k^{- \frac{\ell}{2}} \sum_{m = m ( \ell) }^{N- \ell } b_{\ell,m} (y, u(x,y)) + \mathcal{O}( k^{-\frac{N+1}{2}})\\ =E^k (x,y) \sum_{m = m ( \ell) }^{N- \ell } k^{-\frac{\ell +m }{2}} b_{\ell,m} (y,k^{\frac{1}{2}} u(x,y)) + \mathcal{O}( k^{-\frac{N+1}{2}}), \end{gather*} using that $b_{\ell,m}$ is homogeneous of degree $m$ in its second argument. So we obtain \eqref{eq:new_expansion} with $ a_p = \sum _{\ell + m ( \ell) \leqslant p } b_{\ell , p - \ell} $, this sum being finite because of the assumption satisfied by $m(\ell) $. Conversely, starting from the $a_{p}$'s, for each $\ell \in {\mathbb{Z}}$, we construct by Borel summation a function $b_{\ell}$ such that $ b_{\ell} ( x, y)= \sum _{m =0 }^{M} a _{ \ell+m , m}(y,u (x,y)) + \mathcal{O} (M+1)$ for all $M$, where by convention $a_{p}=0$ for $p<0$, and $a_{m+\ell, m}$ is the degree $m$ homogeneous component of $a_{m+\ell}$. We readily deduce the expansion \eqref{eq:expansion} from \eqref{eq:new_expansion} by using Lemma \ref{lem:estimate} again. Observe that $b_{\ell} = \mathcal{O}(m(\ell))$ with $m(\ell)$ the smallest $m$ such that $a_{\ell + m , m } \neq 0$.
Since $a_{p} = 0$ for $p<0$, we have $\ell + m(\ell) \geqslant 0$. Furthermore, $\ell + m ( \ell) \leqslant N$ happens only if there exists $m \leqslant N - \ell$ such that $a_{\ell + m, m } \neq 0$, that is, if there exists $p \leqslant N$ such that $ a _{p, p-\ell } \neq 0$, so necessarily $ p - \ell \leqslant d (p)$ where $d(p)$ is the degree of $a_p$. So $\ell + m (\ell) \leqslant N$ implies that $\ell \geqslant \min \{ p - d(p) \mid p =0, \ldots, N \}$. So $\ell + m (\ell) \leqslant N$ holds for only finitely many $\ell$. \end{proof} We can now explain the relation with the space $\lag (A,B)$ introduced in Section \ref{sec:operators}. Identify $U$ with an open convex subset of ${\mathbb{R}}^{2n}$; then the functions $u_i(x,y)= x_i -y_i$ satisfy the above conditions. And for $(x,y) = (x' + \xi' , x')$, we have $a_p ( y, k^{\frac{1}{2}} u(x,y)) = a_p ( x', k^{\frac{1}{2}} \xi')$, so the expansions \eqref{eq:new_expansion} and \eqref{eq:expansion_kernel} are the same when $E= e^{-\varphi}$. \begin{prop} \label{prop:global_local_expansion} Let $(P_k : \Ci (M, L^k \otimes A) \rightarrow \Ci (M, L^k \otimes B), \; k\in {\mathbb{N}})$ be a family of operators with smooth Schwartz kernels. Then $(P_k)$ belongs to $\lag (A,B)$ if and only if the Schwartz kernel family of the $P_k$'s has the expansion \eqref{eq:expansion} for some family $(b_\ell)$ of $\Ci ( M^2, B \boxtimes \overline{A} )$. \end{prop} \begin{proof} This follows from Lemma \ref{lem:new_expansion}, the local version of Lemma \ref{lem:changement_E} and the fact that $E=e^{-\varphi}$ in \eqref{eq:expansion_kernel} satisfies the conditions \eqref{eq:hypotheseE}. \end{proof} The parity of an element of $\lag (A,B)$ is particularly easy to understand with the global expansion \eqref{eq:expansion}. \begin{prop} \label{prop:parity} For any $(P_k)$ in $\lag (A,B)$, $(P_k)$ is even (resp.
odd) if and only if its Schwartz kernel has the global expansion \eqref{eq:expansion} with $b_{\ell } = 0$ for any odd $\ell \in {\mathbb{Z}}$ (resp. even). \end{prop} \begin{proof} This follows from the relation between the coefficients $a_{p}$ and the coefficients $b_{\ell}$ given in the proof of Lemma \ref{lem:new_expansion}. For instance, if $b_{\ell} =0$ for any odd integer $\ell$, then $b_{\ell, p-\ell} \neq 0$ only for even $\ell$, and in this case its degree $p - \ell$ has the same parity as $p$, so $ a_p = \sum _{\ell + m ( \ell) \leqslant p } b_{\ell , p - \ell} $ has the same parity as $p$. Conversely, if $a_{p}$ has the same parity as $p$ for any $p$, then $a_{ \ell+m, m } =0$ for odd $\ell$, so $b_{\ell} $ vanishes to infinite order on the diagonal for odd $\ell$, so we can assume that $b_{\ell} =0$. The proof for odd elements is the same. \end{proof} \subsection{Filtration and symbol} For an operator $(P_k ) \in \lag (A,B)$, we have two different ways of writing the expansion of its Schwartz kernel: a global one \eqref{eq:expansion} with coefficients $b_\ell$ and a local one \eqref{eq:new_expansion} with coefficients $a_p$. We now discuss the uniqueness of these coefficients. Recall that $(P_k) \in \lag_q (A,B)$ if in all the local expansions \eqref{eq:new_expansion}, the coefficients $a_{p} $ are zero for $p < q$. \begin{prop} \label{prop:filtration-symbol} \hspace{1em} \begin{enumerate} \item In the local expansions \eqref{eq:new_expansion}, the coefficients $a_{p}$ are uniquely determined by the section $E$, the functions $(u_i)$ and the frames of $A$ and $B$. \item In the global expansion \eqref{eq:expansion}, the Taylor expansions of the coefficients $b_{\ell}$ along the diagonal are uniquely determined by the section $E$. \item $ (P_k) \in \lag_q (A,B)$ iff $\bigl( \forall \, \ell \in{\mathbb{Z}}, \; b_{\ell} = \mathcal{O} ( q - \ell) \bigr)$ iff $| P_k | = \mathcal{O} ( k^{n-\frac{q}{2}}) $.
\item If $(P_k) \in \lag_q (A,B)$, then the coefficient $a_q$ of the local expansion \eqref{eq:new_expansion}, viewed as a section of $ \mathcal{P} ( TM ) \otimes B \otimes \overline{A} \rightarrow U$, depends neither on $E$ nor on the functions $(u_i)$. Furthermore, \begin{gather} \label{eq:a_n_fonction_b_ell} a_q = \sum_{\ell + m ( \ell) = q} b_{\ell, q - \ell} \end{gather} where the $b_{\ell}$ are the coefficients of the global expansion \eqref{eq:expansion} and $b_{\ell, q-\ell}$ is defined as in \eqref{eq:taylor_expansion}. \end{enumerate} \end{prop} \begin{proof} Assertions 1, 2 and 3 follow from the following facts. Let $f_0$, \ldots, $f_q$ be in $\Ci (M^2)$ and let $\psi = -2 \ln |E|$. Then \begin{gather} \label{eq:esti_iff} e^{-k \psi} \sum_{\ell =0}^q k^{-\frac{\ell}{2}} f_{\ell} = \mathcal{O} ( k^{-\frac{q}{2}}) \quad \Leftrightarrow \quad f_0 \in \mathcal{O} (q),\, \ldots ,\; f_q \in \mathcal{O} (0) . \end{gather} Indeed, recall that $\psi \geqslant 0 $, that $\psi$ is in $\mathcal{O} ( 2 )$ and that its Hessian is non-degenerate in the directions transverse to the diagonal. The converse implication of \eqref{eq:esti_iff} follows from the same proof as Lemma \ref{lem:estimate}. The direct implication of \eqref{eq:esti_iff} follows from \cite[Proposition 2.4 and Remark 2.5]{oim}. From this, we deduce that $|P_k | = \mathcal{O} ( k^{n -\frac{q}{2}})$ iff $b_{\ell} = \mathcal{O} (q-\ell)$ for any $\ell$. Since $a_{p} = \sum_{\ell + m ( \ell) \leqslant p} b_{\ell , p -\ell}$ by the proof of Lemma \ref{lem:new_expansion}, $(b_\ell = \mathcal{O} (q-\ell)$ for any $\ell$) iff $(a_p = 0$ for any $p < q$). This last condition is the definition of $\lag _q ( A,B)$. We have just proved Assertion 3. This implies that $|P_k | =\mathcal{O} (k^{-\infty})$ iff ($a_{p} = 0$ for any $p$) iff $(b_{\ell} = \mathcal{O}( \infty)$ for any $\ell$), which proves Assertions 1 and 2.
For the fourth assertion, since $P \in \lag_q(A,B)$, we have $b_{\ell} = \mathcal{O} ( q-\ell)$ for any $\ell$, so we can assume that $\ell + m ( \ell) \geqslant q$, hence $$a_q = \sum_{\ell + m ( \ell ) \leqslant q } b_{\ell, q- \ell} =\sum_{\ell + m ( \ell ) = q } b_{\ell, q- \ell} .$$ Since $b_{\ell} = \mathcal{O} ( q-\ell)$, the $(q-\ell)$-th order term in the Taylor expansion of $b_{\ell}$ is intrinsically defined as a function $$ \xi \in T_xM \rightarrow b_{\ell, q- \ell} (x,\xi) \in B_x \otimes \overline{A}_x,$$ so we can view $a_q(x,\cdot)$ as an element of $\mathcal{P} (T_xM) \otimes B_x \otimes \overline{A}_x$. It remains to prove that for $\ell + m ( \ell) =q$, $b_{\ell, q-\ell}$ does not depend on the choice of $E$. With the notation of the proof of Lemma \ref{lem:changement_E}, this amounts to proving that $b'_{\ell} = b_{\ell} + \mathcal{O} ( q-\ell +1)$. This follows from \eqref{eq:changement_E}, because $b_{\ell + 2p } g^p = \mathcal{O} ( m (\ell+ 2 p) +3 p)$ and $m(\ell +2p ) + 3p \geqslant q - (\ell+ 2p ) + 3p = q + p - \ell \geqslant q + 1 -\ell$ when $p \geqslant 1$. \end{proof} We are now ready to define the symbol map $${\sigma}_q : \lag_q (A,B) \rightarrow \Ci ( M, \mathcal{S}(M) \otimes \operatorname{Hom} (A,B)).$$ First, for any $x \in M$, $T_xM$ is a Hermitian space, so it has an associated algebra $\widetilde{\mathcal{S}} ( T_xM)$ with a map $\operatorname{Op}:\mathcal{P} ( T_xM) \rightarrow \widetilde{\mathcal{S}} ( T_xM)$ as in Section \ref{sec:landau-levels-cn}. For any $(P_k)$ in $\lag_q (A,B)$, by the fourth assertion of Proposition \ref{prop:filtration-symbol}, $a_q( x, \cdot) \in \mathcal{P} ( T_xM) \otimes B_x \otimes \overline{A}_x$.
Identifying $B_x \otimes \overline{A}_x$ with $ \operatorname{Hom} (A_x , B_x)$, we set \begin{gather} \label{eq:def_si_tilde} \widetilde{ {\sigma}}_q (P) (x) := \operatorname{Op} ( a_q(x, \cdot)) \in \widetilde{\mathcal{S}} ( T_xM) \otimes \operatorname{Hom}(A_x, B_x ). \end{gather} Recall that we have an isomorphism $\widetilde{\mathcal{S}} ( T_xM) \simeq \mathcal{S} (T_xM)$ defined by restriction from $\mathcal{P} ( T_xM)$ to $\mathcal{D} ( T_xM)$. \begin{defin} \label{def:symbol} ${\sigma} _q (P) (x) \in \mathcal{S} ( T_xM) \otimes \operatorname{Hom}(A_x, B_x )$ is defined as the restriction of $\widetilde{{\sigma}}_q (P) (x)$. \end{defin} \subsection{Proofs of the results of Section \ref{sec:operators}} \label{sec:proofs-results} We now give the proofs of Proposition \ref{prop:lag}, Theorem \ref{theo:lag} and Theorem \ref{theo:parity}. \begin{proof}[Proof of Proposition \ref{prop:lag}] The first assertion is an easy consequence of the definition of $\lag_q (A,B)$ by the local expansions. In the second assertion, the characterisation in terms of pointwise norm is the third assertion of Proposition \ref{prop:filtration-symbol}. By Lemma \ref{lem:estimate} or Lemma \ref{lem:estim_expansion}, every $(P_k) \in \lag_q(A,B)$ satisfies $\| P_k \| = \mathcal{O} ( k^{-\frac{q}{2}})$. For the converse, it suffices to show that if ${\sigma}_0 ( P ) \neq 0$, then $\| P _k \| \geqslant c >0$. This is a consequence of Corollary \ref{cor:norm_estim}. The third assertion is straightforward. The fourth assertion is a variation on the Borel lemma, cf. for instance \cite[Proposition 2.1]{oim}. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:lag}] In Definition \ref{def:symbol}, we have defined a map $$ {\sigma}_q : \lag_q(A,B) \rightarrow \Ci ( M , \mathcal{S} (M) \otimes \operatorname{Hom} (A,B)) $$ whose kernel is $\lag_{q+1} (A,B)$ by Proposition \ref{prop:filtration-symbol}.
To prove that it is surjective, we show that for any $c \in \Ci ( M , \mathcal{P} (TM) \otimes B \otimes \overline{A})$, there exists $P \in \lag_q (A,B)$ such that in the local expansions \eqref{eq:new_expansion}, $a_q =c$. To do this, let $d(q) \in {\mathbb{N}}$ be an upper bound for the degree of $c(x, \cdot)$ for any $x\in M$. For any $m =0, \ldots , d(q)$, let $c_m(x,\cdot)$ be the homogeneous component of degree $m$ of $c(x, \cdot)$. Choose a section $b_{q-m}$ of $B \boxtimes \overline{A}$ vanishing to order $m$ along the diagonal and satisfying $b_{q-m} (x+ \xi, x) = c_m (x,\xi) + \mathcal{O} ( m+1)$. Then we set $$ P_k (x,y ) := \Bigl(\frac{k}{2\pi} \Bigr)^n E^k(x,y) \sum_{\ell = q- d(q)}^q k^{- \frac{\ell}{2}} b_{\ell} (x,y) .$$ Since $b_\ell = \mathcal{O} ( q-\ell)$ for any $\ell$, $(P_k) \in \lag_q (A,B)$. By \eqref{eq:a_n_fonction_b_ell}, we have $a_q = \sum_{\ell = q-d(q)}^q b_{\ell, q- \ell} = \sum_{m=0}^{d(q)} c_{m} = c$, as was to be proved. Let us prove the remaining assertions of Theorem \ref{theo:lag}. Let $P \in \lag_q (A,B)$. Assertion 1, that is ${\sigma} _q ( P) = {\sigma}_0 ( k^{q/2} P)$, follows directly from the local expansions \eqref{eq:new_expansion}. Let us prove Assertion 2. If $f \in \Ci ( M , \operatorname{Hom} (B,C))$, then the Schwartz kernel of $ P_k'= f \circ P_k$ is $P_k' (x,y) = f(x) (P_k (x,y))$, so $P_k'$ has the same expansion \eqref{eq:expansion} as $P_k$ with $b'_{\ell} (x,y) = f(x)(b_{\ell}(x,y))$ instead of $b_{\ell}$, which implies that $P'_k$ belongs to $\lag_q (A,C)$ with the same function $\ell \mapsto m(\ell) $. Furthermore, with the notation \eqref{eq:taylor_expansion}, $b'_{\ell, m(\ell)} (x, \cdot ) = f(x) b_{\ell, m(\ell)} (x, \cdot)$, which implies by \eqref{eq:a_n_fonction_b_ell} that ${\sigma}_q( P')(x) = f(x) \circ {\sigma}_q(P)(x)$. Let us prove Assertion 3.
Since $P_k^* (x,y) = \overline{P_k (y,x)}$, the Schwartz kernel of $P_k^*$ has the expansion \eqref{eq:expansion} with $E' (x,y) = \overline{E (y,x)}$ instead of $E$ and $b'_{\ell} (x,y) = \overline{b_{\ell} (y,x)}$ instead of $b_{\ell}$. By \eqref{eq:symE}, we deduce that $(P_k^*) \in \lag_q (B,A)$. Furthermore, $b'_{\ell, m(\ell)} (x, \xi) = \overline{b}_{\ell,m(\ell)}(x, - \xi)$ so $a_q' (x,\xi) = \overline{a}_q (x, -\xi)$. By \eqref{eq:sch_ker}, $\operatorname{Op} (q )^* = \operatorname{Op} (r)$ with $r( \xi ) = \overline{q} ( - \xi)$, so ${\sigma}_q(P^*) = {\sigma}_q(P)^*$. Let us prove Assertion 5. By \eqref{eq:new_expansion}, $$ P_k(x,x) = \frac{k^{n-q/2}}{ (2\pi)^n} \bigl( a_q(x,0 ) + \mathcal{O} ( k^{-\frac{1}{2}} ) \bigr) $$ and by the first equation of \eqref{eq:rel_bizare}, $a_q(x,0) = \operatorname{tr} ({\sigma}_q(P) (x))$. It remains to prove Assertion 4. Let $(P'_k) \in \lag_{q'} (B,C)$. We will prove that $Q_k = P'_k \circ P_k$ belongs to $\lag _{q'' } ( A,C)$ with $q'' = q + q'$ and compute its symbol. Since the composition of operators with kernels in $\mathcal{O} ( k^{-p})$ and $\mathcal{O} ( k^{-\ell})$ respectively has a kernel in $\mathcal{O} ( k^{-(p+\ell)})$, we can consider each summand of the expansions \eqref{eq:expansion} for $P_k$ and $P'_k$ separately. In other words, we can assume that $P_k = \bigl( \frac{k}{2\pi} \bigr)^n E^k f$ and $P'_k = \bigl( \frac{k}{2\pi} \bigr)^n E^k f'$ with $f = \mathcal{O} ( q)$ and $f' = \mathcal{O} (q')$. So \begin{gather} \label{eq:R_kernel} Q_k (x,z) = \Bigl( \frac{k}{2\pi} \Bigr)^{2n} \int_M (E(x,y) \cdot E(y,z) )^k g(x,y,z) d \mu_M (y) \end{gather} with $g(x,y,z ) = f(x,y) f' (y,z)$. Observe that $g$ vanishes to order $q''$ along ${\Sigma} = \{ (x,y,z) \in M^3 , \; x= y = z \}$. By \eqref{eq:normE}, $|E(x,y) \cdot E(y,z) | < 1 $ if $(x,y,z) \notin {\Sigma}$. This implies first that the Schwartz kernel of $(Q_k)$ is in $\mathcal{O} ( k^{-\infty})$ outside the diagonal.
Furthermore, to compute $Q_k$ on a neighborhood of $(p,p)$ up to a $\mathcal{O} ( k^{-\infty})$, we can reduce the integral \eqref{eq:R_kernel} to a neighborhood of $p$. So we can work locally. Introduce a local orthonormal frame $(\partial_i, i =1, \ldots , n)$ of $T^{1,0}M$ on an open neighborhood $U$ of $p$ in $M$. Let ${\sigma}_i \in \Ci (U^2)$, $i =1,\ldots, n$ be such that \begin{gather} \label{eq:dsi_i--partial_j} d{\sigma}_i ( \partial_j, 0 ) = \delta_{ij} + \mathcal{O} (1) , \qquad d{\sigma}_i ( \overline{\partial}_j, 0) = \mathcal{O} (1) \end{gather} for any $i$ and $j$. Observe that if the $z_i$ are coordinates as in \eqref{eq:coordonnees_z}, then \begin{gather} \label{eq:si_z} {\sigma}_i (x,y) = z_i(x) - z_i (y) + \mathcal{O}_{(p_0, p_0)} (2). \end{gather} So we can use the functions $u_i = \operatorname{Re} {\sigma} _i$ and $u_{i+n} = \operatorname{Im} {\sigma}_i$ when we write the Taylor expansion \eqref{eq:taylor_expansion} and the local expansion \eqref{eq:new_expansion}. Restricting $U$ if necessary, we can assume that for any $z$, the map $ y\in U \rightarrow ({\sigma}_i (y,z)) \in {\mathbb{C}}^n$ is a diffeomorphism onto its image. Let $\mu_z$ be the pull-back of the volume $\mu_n $ by this map. By \eqref{eq:si_z}, we have $\mu_M (y) = \rho (y,z) \mu_z (y)$ with $\rho \in \Ci (U^2)$ satisfying $\rho (y,y) = 1$. Now using the expressions \eqref{eq:E_linearise} and \eqref{eq:si_z}, we readily prove that $$ E(x,y) \cdot E(y,z) = e^{ \varphi (x,y,z) + r(x,y,z) } E (x,z), $$ where $r(x,y,z) = \mathcal{O}_{{\Sigma}} (3)$ and $ \varphi(x,y,z) = ({\sigma} (x,z) - {\sigma} (y,z))\cdot \overline{{\sigma}} (y,z) $.
Arguing as in the proof of Lemma \ref{lem:changement_E}, it follows that $$ ( E(x,y) \cdot E(y,z) )^k = E^k(x,z) e^{k \varphi ( x,y,z)} \sum_{\ell =0 }^{N} \frac{k^{\ell}}{\ell !} (r (x,y,z))^{\ell} + \mathcal{O} \bigl( k^{-\frac{1}{2} (N+1) } \bigr),$$ so the integrand of \eqref{eq:R_kernel} is equal to $$ E^k(x,z) e^{k \varphi ( x,y,z)} \sum_{\ell =0 }^{N} k^{\ell} g_{\ell} (x,y,z) \; d\mu_z (y) + \mathcal{O} \bigl( k^{-\frac{1}{2} ( q'' + N+1)} \bigr) $$ with $g_{\ell} (x,y,z) = \rho(y,z) g (x,y,z) (r (x,y,z)) ^\ell / (\ell !) = \mathcal{O}_{{\Sigma}} ( q''+ 3\ell)$. For any $z \in U$, we write the Taylor expansion of $(x,y) \rightarrow g_{\ell}(x, y,z)$ at $(z,z)$ with the coordinate system $$(x,y) \rightarrow \operatorname{Re} {\sigma}_i (x,z), \operatorname{Im} {\sigma}_i (x,z) , \operatorname{Re} {\sigma}_i (y,z) , \operatorname{Im} {\sigma}_i (y,z).$$ We obtain $$ g_\ell(x,y,z) = \sum_{m= q'' + 3 \ell}^{p} h_{\ell,m} (z, {\sigma} (x,z), {\sigma} (y,z)) + \mathcal{O}_{{\Sigma}} ( p+1) $$ with $h_{\ell,m} ( z, \xi , \eta)$ a homogeneous polynomial in $\xi, \eta$ of degree $m$. Arguing as in Lemma \ref{lem:new_expansion}, we obtain \begin{gather} \label{eq:dev_loc_R} Q_k (x,z) = \Bigl(\frac{k}{2\pi} \Bigr)^n E^k(x,z) \sum_{\ell= 0}^{N} \sum_{m= q''+ 3 \ell}^{q''+ 2 \ell +N} k^{\ell - \frac{m}{2}} I_{\ell,m} (x,z) + \mathcal{O} \bigl( k^{-\frac{1}{2} ( q'' + N+1)} \bigr) \end{gather} with $$ I_{\ell,m} (x,z) = \Bigl(\frac{k}{2\pi} \Bigr)^n \int_U e^{k \varphi (x,y,z) } h_{\ell,m} (z, k^{\frac{1}{2}} {\sigma} (x,z) , k^{\frac{1}{2}} {\sigma} (y,z) ) \; d\mu_z (y) .$$ Set $u_i = {\sigma}_i (x,z)$ and let us use the coordinates $v_i = {\sigma}_i (y,z)$ for the integration so that $\varphi (x,y,z) = u \cdot \overline v - |v|^2$ and $d\mu_z (y) = |dv d \overline{v}|$.
It follows that $ I_{\ell, m} (x,z) = J_{\ell, m} ( z, k^{\frac{1}{2}} {\sigma} ( x,z))$ with \begin{gather} \label{eq:j_ell_m} J_{\ell , m } (z, u ) = \Bigl(\frac{k}{2\pi} \Bigr)^n \int e^{ k^{\frac{1}{2}} u \cdot \overline v - k |v|^2} h_{\ell,m} ( z, u , k^{\frac{1}{2} } v ) \; d \mu_n (v) \end{gather} where we integrate on a neighborhood of the origin in ${\mathbb{C}}^n$. We can actually integrate on ${\mathbb{C}}^n$ because this will modify $E^k(x,z) I_{\ell,m} (x,z)$ by a $\mathcal{O} ( e^{-k/C})$. Indeed, $| E(x,z) | = e^{-\frac{1}{2} |u|^2} + \mathcal{O} (|u|^3)$ so $|E(x,z) | = \mathcal{O} ( e^{-\frac{1}{3} |u|^2})$, hence $$|E(x,z) e^{u \cdot \overline v - |v|^2} | = \mathcal{O} ( e^{ - \frac{1}{3} |u|^2 + |uv| - |v|^2 } ) = \mathcal{O} ( e^{ -\frac{1}{4} |v| ^2} ) $$ and we conclude by using that $ \int_{|v|\geqslant \epsilon} e^{- \frac{k}{4} |v|^2} |v|^m \; |dv d \overline{v}| = \mathcal{O} ( e^{-k/C})$ for any $\epsilon >0$ and $m \in {\mathbb{N}}$. Taking the integral \eqref{eq:j_ell_m} over ${\mathbb{C}}^n$, we obtain \begin{gather} \label{eq:j_ell-m} J_{\ell,m} ( z,u) = (2 \pi)^{-n} \int_{{\mathbb{C}}^n} e^{u \cdot \overline v - | v| ^2 } h_{\ell, m } ( z, u, v) \; d\mu_n (v) \end{gather} so $J_{\ell,m}$ does not depend on $k$. Furthermore, it is polynomial in $u$. To see this, it suffices to view $h_{\ell, m } ( z,u, v)$ as a polynomial in the variables $u-v$, $v$ and to compare with the formula \eqref{eq:sch_ker}. So $Q_k (x,z)$ has the local expansion \eqref{eq:new_expansion}, hence $(Q_k)$ belongs to $\lag _{q''} ( A, C)$.
Its symbol is given by the leading order term in \eqref{eq:dev_loc_R}, which corresponds to $\ell =0$ and $m = q''$, that is, $$\widetilde{{\sigma}}_{q''} ( Q ) (x) = \operatorname{Op} ( J_{0, q''} (x, \cdot)).$$ We can compute it in terms of the symbols of $P$ and $P'$ as follows: by \eqref{eq:a_n_fonction_b_ell}, $\widetilde {\sigma}_{q} ( P)(x) = \operatorname{Op} ( a_q(x, \cdot))$ where $\xi \rightarrow a_q(x,\xi)$ is the homogeneous polynomial of degree $q$ such that $f(x,y) = a_q (y, {\sigma} ( x,y) ) + \mathcal{O} ( q+1)$. Similarly, $\widetilde {\sigma}_{q'} ( P')(x) = \operatorname{Op} ( a'_{q'}(x, \cdot))$ with $f'(x,y) = a'_{q'} (y, {\sigma} ( x,y) ) + \mathcal{O} ( q'+1)$. Now by \eqref{eq:si_z}, ${\sigma} (x,y) = {\sigma} (x,z) - {\sigma} ( y,z) + \mathcal{O} _{{\Sigma}} (2)$, and it follows that \begin{xalignat*}{2} g(x,y,z) & = a_q ( y, {\sigma} (x,z) - {\sigma} ( y,z)) a'_{q'} ( z, {\sigma} (y,z)) + \mathcal{O}_{{\Sigma}} (q''+1) \\ & = a_q ( z, {\sigma} (x,z) - {\sigma} ( y,z)) a'_{q'} ( z, {\sigma} (y,z)) + \mathcal{O}_{{\Sigma}} (q''+1) \end{xalignat*} leading to $ h_{0, q''} (z, u,v) = a_q ( z, u - v) a'_{q'} (z,v) $ and using first \eqref{eq:sch_ker} and then \eqref{eq:rel_bizare}, we have that \begin{xalignat*}{2} \widetilde{{\sigma}}_{q''} ( Q )(x) & = \operatorname{Op} ( J_{0, q''} (x, \cdot)) = \operatorname{Op} \bigl( \operatorname{Op} (a_q ( x,\cdot)) a'_{q'} (x,\cdot) \bigr) \\ & = \operatorname{Op} (a_q ( x,\cdot)) \circ \operatorname{Op} (a'_{q'} (x,\cdot)) = \widetilde {\sigma}_{q} ( P) (x) \circ \widetilde {\sigma}_{q'} ( P' )(x) \end{xalignat*} as was to be proved. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:parity}] The first assertion follows from Proposition \ref{prop:parity}. For the composition, it suffices to consider the case treated in the previous proof: we start from $(P_k)$ and $(P'_k)$ both even.
Since $h_{\ell,m}(x,\cdot) $ has degree $m$, $J_{\ell,m}(x, \cdot)$ given by \eqref{eq:j_ell-m} has the same parity as $m$, so by \eqref{eq:dev_loc_R}, $(P'_k \circ P_k)$ is even. The last assertion is simply the fact that $\operatorname{Op} (a_q(x,\cdot) )$ has the same parity as $a_q (x,\cdot)$ by Proposition \ref{prop:tilde_symb_and_op}. \end{proof} \subsection{Peaked sections} In this section, we state and prove a generalisation of Proposition \ref{prop:peaked-sections}. Consider an auxiliary bundle $A$. Let us choose a base point $x \in M$, with a coordinate chart $U$ centered at $x$, and a trivialisation $A|_U \simeq U \times A_x$. To any $f \in \mathcal{P} (T_xM ) \otimes A_x$, we associate the section of $L^k \otimes A$ \begin{gather} \label{eq:phi_peaked_section_+} \Phi_k^f (x+\xi ) = \Bigl( \frac{k}{2\pi} \Bigr)^{\frac{n}{2}} E^k (x+ \xi, x ) \, f(k^{\frac{1}{2}} \xi) \, \psi (x+ \xi) \end{gather} where $E$ is chosen as in Section \ref{sec:section-ee}, and $\psi \in \Ci_0(U)$ is equal to $1$ on a neighborhood of $x$. \begin{prop} \label{prop:peaked-sections_++} $ $ \begin{enumerate} \item For any $f$, $g \in \mathcal{P} ( T_x M ) \otimes A_x$, $\bigl\langle \Phi_k^f, \Phi_k^g \bigr\rangle = \langle f , g \rangle + \mathcal{O} (k^{-\frac{1}{2}} )$. \item For any $f \in \mathcal{P} ( T_x M ) \otimes A_x$ and $Q \in \lag ( A,B)$, $ Q_k \Phi_k^f = \Phi_k^h + \mathcal{O} ( k^{-\frac{1}{2}})$ where $h = \tilde{{\sigma}}_0 ( Q)(x) \cdot f \in \mathcal{P} ( T_xM) \otimes B_x$. \end{enumerate} \end{prop} In the second part, we used the symbol $\tilde{{\sigma}}_0 (P)$ defined in \eqref{eq:def_si_tilde}, and $\Phi_k^{h}$ is defined as $\Phi_k^f$ with a trivialisation of $B$. \begin{proof} Consider the operator $P^f \in \lag ({\mathbb{C}}, A)$ with Schwartz kernel $$P^f_k (x+ \xi, x) = \Bigl ( \frac{k}{2\pi} \Bigr)^n E^k (x+ \xi, x) \, f ( k^{\frac{1}{2}} \xi)\, \psi (x+ \xi) \, \psi (x).$$ On one hand, $( \frac{k}{2\pi})^{\frac{n}{2}} \Phi^f_k = P_k^f (\cdot ,x)$.
On the other hand, $\tilde{{\sigma}}_0(P^f)(x) = \operatorname{Op} ( f)$. So we can compute the scalar product of $\Phi^f_k$ and $\Phi^g_k$ as a composition of Schwartz kernels \begin{xalignat*}{2} \Bigl ( \frac{k}{2\pi} \Bigr)^n \bigl\langle \Phi_k^f, \Phi_k^g \bigr\rangle & = ((P_k^g)^* P_k^f)(x,x) \\ & = \Bigl(\frac{k}{2\pi} \Bigr)^n \operatorname{tr}_{\mathcal{D} ( T_xM)} ( \operatorname{Op} (g)^* \operatorname{Op} (f)) + \mathcal{O} ( k^{n-\frac{1}{2}}) \end{xalignat*} by the last part of Theorem \ref{theo:lag}. To conclude, we have by \eqref{eq:rel_bizare} $$ \operatorname{tr}_{\mathcal{D} ( T_xM)} ( \operatorname{Op} (g)^* \operatorname{Op} (f)) = (\operatorname{Op} (g)^* f )(0) = \langle f , g \rangle .$$ The proof of the second part is similar: we have $$ \Bigl ( \frac{k}{2\pi} \Bigr)^{\frac{n}{2}} Q_k \Phi^f_k = (Q_k P^f_k)(\cdot, x) .$$ By Theorem \ref{theo:lag}, $Q_k P^f_k \in \lag ( {\mathbb{C}},B)$ with symbol at $x$ equal to $$\tilde{{\sigma}}_0 (Q) (x) \circ \operatorname{Op} ( f) = \operatorname{Op} (\tilde{{\sigma}}_0 (Q) (x) f ) = \operatorname{Op} (h)$$ by \eqref{eq:rel_bizare}. \end{proof} We deduce the following lower bound for the operator norm of operators of $\lag (A,B)$. If $\rho \in \mathcal{S}_x (M) \otimes \operatorname{Hom} (A_x, B_x)$, then we denote by $\| \rho \|$ the norm $$ \| \rho \| = \sup \bigl\{ \| \rho f \| / \| f \| ,\; f \in \mathcal{D} (T_xM) \otimes A_x, \; f \neq 0 \bigr\}.$$ \begin{cor} \label{cor:norm_estim} For any $P \in \lag (A,B)$, we have $$ \liminf_{k \rightarrow \infty} \| P_k \| \geqslant \sup_{x \in M} \| {\sigma}_0 (P)(x) \| .$$ \end{cor} \begin{proof} By Proposition \ref{prop:peaked-sections_++}, for any nonzero $f \in \mathcal{D} ( T_xM) \otimes A_x$, $$ \frac{\| P_k \Phi_k^f \|}{\| \Phi_k^f \| } = \frac{ \| {\sigma}_0 (P) (x) f \|}{\| f \| } + \mathcal{O} ( k^{-\frac{1}{2}}) .$$ So $ \liminf_{k \rightarrow \infty} \| P_k \| \geqslant \| {\sigma}_0 (P) (x) f \| / \| f \|$, and the corollary follows by taking the supremum over $f$ and $x$.
\end{proof} \section{Derivatives} \label{sec:derivatives} The class $\lag (A, B)$ has been defined without any control on the derivatives of the Schwartz kernels. The reason was merely to simplify the exposition, but in applications it is natural and necessary to understand the composition of operators of $\lag(A, B)$ with covariant derivatives. We start with general considerations, then we define a subclass $\lag ^{\infty}(A,B)$ where the asymptotic expansion of the Schwartz kernels holds with respect to a convenient $C^{\infty}$ topology. Finally we apply this to complete the proofs of the theorems stated in the introduction. \subsection{The class \texorpdfstring{$\mathcal{O}_{\infty} (k^{-N})$}{O-infinity}} Consider as before a Hermitian line bundle $L \rightarrow M$ and an auxiliary Hermitian vector bundle $A \rightarrow M$. Let $\mathcal{F}$ be the space of families $$s= (s_k \in \Ci (M , L^k \otimes A) ,\, k \in {\mathbb{N}} ).$$ Recall that $s \in \mathcal{O} ( k^{-N})$ if for any $x \in M$, $|s_k (x)| = \mathcal{O} ( k^{-N})$ with a $\mathcal{O} $ uniform on any compact subset of $M$. Here we do not assume that $M$ is compact. The definition of $\mathcal{O}_{\infty}$ involves the derivatives. If $L$ and $A$ are trivial bundles so that $(s_k)$ is a sequence of $\Ci ( M, {\mathbb{C}}^r)$ with $r$ the rank of $A$, then we say that $(s_k) \in \mathcal{O}_{\infty} ( k^{-N})$ if for any $m \in {\mathbb{N}}$, the derivatives of order $m$ of $(s_k)$ are in $\mathcal{O} ( k^{-N+m})$. More precisely, for any vector fields $X_1$, \ldots, $X_m$ of $M$, we require that \begin{gather} \label{eq:derk} X_1 \ldots X_m s_k = \mathcal{O} ( k^{-N+m}) . \end{gather} So we lose one power of $k$ for each derivative. Because of this, the class $\mathcal{O}_{\infty} ( k^{-N})$ is invariant under multiplication by $e^{ik h}$, where $h$ is any real-valued function of $M$.
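This invariance can be checked directly: for a single vector field $X$, the Leibniz rule gives \begin{gather*} X ( e^{ikh} s_k ) = e^{ikh} \bigl( ik \, (Xh) \, s_k + X s_k \bigr) = \mathcal{O} ( k^{-N+1}) \end{gather*} since $|e^{ikh}| = 1$, and each further derivative falling on the factor $e^{ikh}$ contributes at most one power of $k$, so the estimates \eqref{eq:derk} are preserved.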
For actual vector bundles $L$ and $A$, we introduce unitary frames $u$ and $(v_j)_{j=1}^{r}$ of $L$ and $A$ over the same open set $U$ of $M$ and write $s_k = \sum f_{k,j} u^k \otimes v_j$ with $f_{k} \in \Ci ( U, {\mathbb{C}}^r)$. Then we say that $(s_k)$ belongs to $\mathcal{O}_{\infty} ( k^{-N})$ if for all choices of unitary frames of $L$ and $A$, the corresponding local representative sequence $(f_k)$ is in $\mathcal{O}_{\infty} ( k^{-N})$. Observe that changing the frame $u$ of $L$ amounts to multiplying $f_k$ by $e^{ikh}$, so the condition that $f_k \in \mathcal{O}_{\infty} ( k^{-N})$ does not depend on the frame choice when these frames are defined on the same open set. The typical example of a family in $\mathcal{O}_{\infty} ( k^{-N})$ is an oscillating sequence $$s_k (x) = k^{-N} e^{-k \varphi (x) } a (x) $$ with $ \varphi \in \Ci ( M)$ having a nonnegative real part and $a \in \Ci ( M, {\mathbb{C}}^r)$. More generally, for actual bundles, we can set $$ s_k (x) = k^{-N} E^k(x) a(x) $$ where $E \in \Ci ( M, L)$ is such that $|E|\leqslant 1$ and $a \in \Ci ( M, A)$. Obviously, if $N' \geqslant N$, $\mathcal{O}_{\infty} ( k^{-N'} ) \subset \mathcal{O}_{\infty} ( k^{-N})$. Define $\mathcal{O}_{\infty} (k^{-\infty} ) := \cap_N \mathcal{O}_{\infty} ( k^{-N})$. We will need the following result. \begin{lemme} \label{lem:dev_der_bor} Let $(s_\ell)$ be a sequence of $\mathcal{F}$ such that for any $\ell$, $s_{\ell} \in \mathcal{O}_{\infty} ( k^{-p(\ell)} ) $ where $(p(\ell))$ is an increasing real sequence, and $p(\ell) \rightarrow \infty$ as $\ell \rightarrow \infty$.
Then \begin{enumerate} \item There exists $s \in \mathcal{F} \cap \mathcal{O}_{\infty} ( k^{-p(0)})$, unique modulo $\mathcal{O}_{\infty} ( k^{-\infty} )$, such that \begin{gather} \label{eq:dev_Der} s_k = \sum_{\ell =0}^{N-1} s_{\ell,k} + \mathcal{O}_{\infty} ( k^{-p(N)}) , \qquad \forall N. \end{gather} \item Let $s \in \mathcal{F}$ be such that $s \in \mathcal{O}_{\infty} (k^{p})$ for some $p$ and $s_k = \sum_{\ell =0}^{N-1} s_{\ell,k} + \mathcal{O} ( k^{-p(N)})$ for any $N$. Then $s \in \mathcal{O}_{\infty} (k^{-p(0)})$ and \eqref{eq:dev_Der} holds. \end{enumerate} \end{lemme} The first part is a variation of the Borel lemma; the second part follows from interpolation inequalities, cf. for instance \cite[Lemma 32]{Shubin}. \subsection{Application to \texorpdfstring{$\lag (A,B)$}{L(A,B)}} Choose a section $E$ as in Section \ref{sec:section-ee} and let $b \in \Ci ( M^2 , B \boxtimes \overline{A})$ vanish to order $m$ along the diagonal. Then by the same proof as Lemma \ref{lem:estimate}, the family $ (E^k b)$ is in $\mathcal{O}_{\infty} (k^{-\frac{m}{2}})$. Actually, we even have a better result if instead of using any derivatives, we only consider covariant derivatives for the connection of $(L \boxtimes \overline{L})^k \otimes (B \boxtimes \overline{A})$ induced by the connection of $L$ and any connections of $A$ and $B$. \begin{lemme} \label{lem:comp_der_lag} For any $\ell \in {\mathbb{N}}$ and any vector fields $X_1$, \ldots, $X_{\ell}$ of $M^2$, the family $ \nabla _{X_1} \ldots \nabla_{X_\ell} (E^k b) $ is in $\mathcal{O} ( k^{-\frac{1}{2} ( m - \ell)})$. \end{lemme} The improvement is that we lose only half a power of $k$ for each derivative. \begin{proof} The main observation is that $\nabla E $ vanishes on the diagonal. Indeed, $\nabla_{X} E =0$ on the diagonal when $X$ is tangent to the first factor because of the second equation in \eqref{eq:hypotheseE}, but also when $X$ is tangent to the diagonal by the first equation in \eqref{eq:hypotheseE}.
So on a neighborhood of the diagonal, we have $\nabla_X E = f E$ with $f \in \mathcal{O} (1)$. By the Leibniz rule, $ \nabla_X ( E^k b) = E^k ( k f b + \nabla_X b ) $. Using this repeatedly, we obtain $$ \nabla _{X_1} \ldots \nabla_{X_\ell} (E^k b) = E^k ( k^{\ell} b_{\ell} + k^{\ell -1} b_{\ell-1} + \ldots + b_0 )$$ where $ b_{\ell} = \mathcal{O} ( m+ \ell )$, $b_{\ell-1 } \in \mathcal{O} ( m+ \ell - 2)$, \ldots, $b_0 \in \mathcal{O} ( m - \ell)$. We conclude as in the proof of Lemma \ref{lem:estimate}. \end{proof} Recall that the Schwartz kernel family of an operator $P \in \lag (A,B)$ has by definition an expansion of the form \begin{gather} \label{eq:exp_der_part} P_k (x,y) = \Bigl(\frac{k}{2\pi} \Bigr)^n E^k(x,y) \sum_{ \ell + m ( \ell ) \leqslant N } k^{- \frac{\ell}{2}} b_{\ell} (x,y) + R_{N,k} (x,y) \end{gather} with $R_{N,k} \in \mathcal{O} ( k^{n- \frac{N+1}{2}})$. Define the classes $$\lag^{\infty} (A,B) := \lag (A,B) \cap \mathcal{O}_{\infty} (k^n), \quad \lag_q^{\infty} (A,B) := \lag_q (A,B) \cap \mathcal{O}_{\infty} (k^n).$$ By the following proposition, these new classes have the same properties as the classes $\lag (A,B)$; this follows directly from Lemma \ref{lem:dev_der_bor}. \begin{prop} \label{prop:versionlinfty}$ $ \begin{enumerate} \item \label{item:1} If $P \in \lag^{\infty} (A,B)$ and the expansion \eqref{eq:exp_der_part} holds with $R_{N,k} \in \mathcal{O} ( k^{n- \frac{N+1}{2}})$ for any $N$, then $R_{N,k} \in \mathcal{O}_{\infty} ( k^{n- \frac{N+1}{2}})$ for any $N$. \item \label{item:2} For any $P \in \lag (A,B)$ there exists $Q \in \lag^{\infty} (A,B)$, unique modulo $\mathcal{O}_{\infty} (k^{-\infty})$, such that $Q = P + \mathcal{O} ( k^{-\infty})$.
\item \label{item:3} For any $ P \in \lag^{\infty}(A,B )$, \begin{enumerate} \item \label{item:a} the adjoint of $P$ belongs to $\lag^{\infty} (B,A)$, \item \label{item:b} $(Q_k P_k) \in \lag ^{\infty} ( A,C)$ for any $Q \in \lag ^{\infty} ( B,C)$, \item \label{item:c} $(f \circ P_k)$ belongs to $\lag^{\infty} ( A,C)$ for any $f \in \Ci (M , \operatorname{Hom} (B,C))$; $(P_k \circ g) $ belongs to $\lag ^{\infty} ( C,B)$ for any $g \in \Ci ( M , \operatorname{Hom} (C,A))$, \item \label{item:d} for any vector field $X$ of $M$ and connections on $A$ and $B$, $(k^{-\frac{1}{2}} P_k \circ \nabla^{L^k \otimes A}_X) $ and $(k^{-\frac{1}{2}} \nabla^{L^k \otimes B}_X \circ P_k)$ belong to $\lag^{\infty} ( A,B)$. Furthermore, if $(P_k)$ is even (resp. odd), these two operators are odd (resp. even). \end{enumerate} \item \label{item:4} $\lag_q^{\infty} (A,B) = \lag (A,B) \cap \mathcal{O}_{\infty} ( k^{n - \frac{q}{2}})$, the restriction of ${\sigma}_q$ to $\lag_q^{\infty} ( A,B)$ is onto and has kernel $\lag_{q+1} ^{\infty} ( A,B)$. \end{enumerate} \end{prop} \begin{proof} Assertion \ref{item:1} follows from the preliminary observation on $E^k b$ and the second part of Lemma \ref{lem:dev_der_bor}. Assertion \ref{item:2} follows from the first part of Lemma \ref{lem:dev_der_bor}. Claim \ref{item:b} follows from the fact that the composition of two kernels in $\mathcal{O}_{\infty} ( k^{n})$ is in $\mathcal{O}_{\infty} (k^{2n})$, and $\mathcal{O}_{\infty} ( k^{2n}) \cap \lag (A,C) = \lag^{\infty} ( A,C)$ by the second part of Lemma \ref{lem:dev_der_bor}. Claims \ref{item:a} and \ref{item:c} are straightforward. One proves Claim \ref{item:d} by arguing as in the proof of Lemma \ref{lem:comp_der_lag}. Part \ref{item:4} follows from the second part of Lemma \ref{lem:dev_der_bor}.
\end{proof} \begin{rem} \label{sec:rem_linfty} We can adapt Theorems \ref{theo:constr-proj} and \ref{theo:unitary_equivalence} to the spaces $\mathcal{L}^{\infty}$: \begin{enumerate} \item in Theorem \ref{theo:constr-proj}, if we start with $P \in \lag^{\infty} ( A)$, then $\chi ( P) \in \lag ^{\infty} (A)$. \item in Theorem \ref{theo:unitary_equivalence}, if $\Pi$ and $\Pi'$ are in $\lag^{\infty} (A)$ and $\lag^{\infty} (B)$ respectively, then we can choose $U \in \lag^{\infty} (A,B)$. \end{enumerate} In both cases, the only change in the proof is the following fact: for any families of operators $Q_k, Q'_k : \Ci (M,L^k \otimes A) \rightarrow \Ci (M, L^k\otimes A)$ and $Q''_k : L^2 (M, L^k \otimes A) \rightarrow L^2 (M, L^k \otimes A)$, if the Schwartz kernel families of $(Q_k)$ and $(Q'_k)$ are respectively in $\mathcal{O}_{\infty} (k^{-N})$ and $\mathcal{O}_{\infty} (k^{-N'})$, and the operator norms of $Q_k^{''}$ are in $\mathcal{O} (1)$, then by \cite[Section 4.3]{oim} the Schwartz kernel family of $Q_k Q''_k Q_k'$ is in $\mathcal{O}_{\infty} (k^{-(N +N')})$. \qed \end{rem} By Theorem \ref{theo:lag}, we already know how to compute the symbols of $P^*$, $PQ$, $f P$ or $P g$ in terms of the symbols of $P$ and $Q$. To complete this, we compute the symbol of the compositions of $P$ with the covariant derivatives $\nabla_X^{L^k \otimes A}$ and $\nabla_X^{L^k \otimes B}$. Recall that for any $Y \in T_xM$, we defined in the introduction some endomorphisms $\rho (Y) \in \operatorname{End} ( \mathcal{D} ( T_xM))$ in \eqref{eq:rho}. \begin{lemme} \label{lem:symb_der} For any $P \in \mathcal{L}^{\infty}( A,B)$ and vector field $X$ of $M$, we have \begin{gather*} {\sigma}_0 (k^{-\frac{1}{2}} P_k \circ \nabla^{L^k \otimes A}_X) (x) = {\sigma}_0 ( P_k) (x) \circ \rho ( X(x) ) \\ {\sigma}_0 (k^{-\frac{1}{2}} \nabla^{L^k \otimes B}_X \circ P_k) (x) = \rho ( X(x) ) \circ {\sigma}_0 ( P_k) (x) .
\end{gather*} \end{lemme} \begin{proof} We deduce one formula from the other by taking adjoints. To prove the first one, it suffices by Proposition \ref{prop:peaked-sections_++} to show that if $(\Phi_k^f )$ is the peaked section associated to $ f\in \mathcal{D}(T_xM) \otimes A_x$, then $$k^{-\frac{1}{2}} \nabla_X \Phi_k^f = \Phi_k^g + \mathcal{O} ( k^{-\frac{1}{2}}) $$ where $g = \rho ( X(x)) f $. This is easily checked if we use the normal coordinates as in \eqref{eq:coordonnees_z} centered at $p_0 = x$. We have to differentiate \eqref{eq:phi_peaked_section_+}. First, we have $k^{-\frac{1}{2}} \nabla_X ( E^k) = k^{\frac{1}{2}} E^k (\nabla_X E)E^{-1}$ and by \eqref{eq:E_linearise}, $$(\nabla_{\partial_i} E)E^{-1} = - \overline{z}_i + \mathcal{O} (2), \qquad (\nabla_{\overline{\partial}_i} E)E^{-1} = \mathcal{O} (2).$$ Second, we have $ k^{-\frac{1}{2}} \partial_i f ( k^{\frac{1}{2}} \xi ) = 0$ since $f \in {\mathbb{C}} [ \overline{z}_1, \ldots, \overline{z}_n]$ and $ k^{-\frac{1}{2}} \overline{\partial}_i f ( k^{\frac{1}{2}} \xi ) = ( \partial f / \partial \overline{z}_i) (k^{\frac{1}{2}} \xi )$. To conclude, recall that $\rho ( \partial_i) $ is the multiplication by $-\overline{z}_i$ whereas $\rho ( \overline{\partial}_i)$ is the derivation with respect to $\overline{z}_i$. \end{proof} \subsection{Proofs of Theorems \ref{theo:dim_landau}, \ref{theo:Toeplitz_Landau} and \ref{theo:ladder}} \label{sec:ledernier} In this last section, we complete the proofs of the theorems stated in the introduction; in fact, we prove a generalization, since we consider more general projectors. Let $\Pi \in \mathcal{L}^{\infty}(A) \cap \mathcal{L}^{+} (A)$ be a self-adjoint projector with symbol $\pi= \pi_m \otimes \operatorname{id}_A$, where $\pi_m$ is the projector of $\mathcal{D} (TM)$ onto $\mathcal{D}_m (TM)$. Such an operator exists by Theorem \ref{theo:constr-proj} and Remark \ref{sec:rem_linfty}.
Alternatively, the projector $\Pi = ( \Pi_{m,k})$ onto the $m$-th Landau level defined in \eqref{eq:projecteur_fibre_auxiliaire} has the expected properties \cite[Theorems 5.2, 5.3]{oim_copain}. By Theorem \ref{theo:dim_general}, the dimension of $\mathcal{H}_k= \operatorname{Im}( \Pi_k)$ is $$ \operatorname{dim} \mathcal{H}_k = \int_M \operatorname{ch} ( L^k \otimes A \otimes \mathcal{D}_m(TM) ) \; \operatorname{Td} (M)$$ when $k$ is sufficiently large, which implies Theorem \ref{theo:dim_landau}. Define the Toeplitz algebra $$ \mathcal{T}^{\infty} = \{ P \in \lag^{\infty}(A) \cap \lag^{+} (A) \mid \Pi P \Pi = P \}. $$ Clearly $\mathcal{T}^{\infty}$ is contained in the Toeplitz algebra $\mathcal{T}$ defined in \eqref{eq:Toeplitz_Pi}, and by assertion 2 of Proposition \ref{prop:versionlinfty}, the difference is rather small: for every $ P \in \mathcal{T}$, there exists $P' \in \mathcal{T}^{\infty}$, unique modulo $\mathcal{O}_{\infty} ( k^{-\infty})$, such that $P' = P + \mathcal{O} ( k^{-\infty})$. Recall the symbol map $\tau_0$ introduced in Theorem \ref{theo:toeplitz} and denote by $\tau$ its restriction to $\mathcal{T}^{\infty}$: $$\tau : \mathcal{T}^{\infty} \rightarrow \Ci ( M , \operatorname{End} ( \mathcal{D}_m (TM) \otimes A)) .$$ We could as well consider the maps $\tau_q$ with $q \geqslant 1$, but we will limit ourselves to $\tau_0$. By Theorem \ref{theo:toeplitz} and Assertions 2, 4 of Proposition \ref{prop:versionlinfty}, $\tau$ is onto and its kernel is $k^{-1} \mathcal{T}^{\infty}$. It follows as well from Theorem \ref{theo:toeplitz} that for any $P,Q \in \mathcal{T}^{\infty}$, \begin{gather*} \tau (PQ) = \tau ( P) \tau (Q), \quad \| P_k \| = \sup_{x \in M} \| \tau(P)_x\| + \mathcal{O} (k^{-1}) \\ P_k (x,x) = \Bigl( \frac{k}{2\pi} \Bigr)^n \operatorname{tr} ( \tau ( P)_x) + \mathcal{O} ( k^{-1}) . \end{gather*} Choose any connection on $A$.
By Proposition \ref{prop:versionlinfty}, for any $ f \in \Ci ( M , \operatorname{End} A)$, $p \in {\mathbb{N}}$ and vector fields $X_1$, \ldots, $X_{2p}$ of $M$, the operator \begin{gather} \label{eq:toeplitz_mult_der} T_k ( f, X_1, \ldots, X_{2p}) = k^{-p} \Pi_k f \nabla_{X_1}^{L^k \otimes A} \ldots \nabla_{X_{2p}}^{L^k \otimes A} \Pi_k \end{gather} belongs to $\mathcal{T}^{\infty}$ and its $\tau$-symbol is $ ( \pi_m \rho(X_1) \ldots \rho(X_{2p})\pi_m) \otimes f $. Since these symbols generate $\Ci ( M , \operatorname{End} ( \mathcal{D}_m (TM) \otimes A))$ as a vector space, we deduce that any $P \in \mathcal{T}^{\infty}$ is of the form $$ P_k = \sum_{\ell =0 }^N k^{-\ell} P_{\ell,k} + \mathcal{O} ( k^{-(N+1)}) , \qquad \forall N \in {\mathbb{N}}$$ where for any $\ell$, $(P_{\ell,k})_k$ is a finite sum of operators of the form \eqref{eq:toeplitz_mult_der}. So in the case where the auxiliary bundle $A$ is trivial, $\mathcal{T}^{\infty}$ is the space $\mathcal{T}^{\operatorname{sc}}_m$ defined in the introduction. To end the proof of Theorem \ref{theo:Toeplitz_Landau}, we will show that for any $f,g \in \Ci (M)$, \begin{gather} \label{eq:lastproof} T_{k} (f) T_k (g) = T_k (fg) + k^{-1} T_k ( 1, X,Y) + \mathcal{O} ( k^{-2}) \end{gather} where $X$ and $Y$ are the Hamiltonian vector fields of $f$ and $g$ respectively. Our sign convention is ${\omega} (X, \cdot ) + df =0$. The proof is based on the following result, interesting in its own right, which involves the Kostant-Souriau operator. \begin{lemme} \label{lem:ledernier} For any $f \in \Ci ( M, {\mathbb{R}})$ with Hamiltonian vector field $X$, for any $P \in \lag^{\infty}_0 ( A)$ we have \begin{itemize} \item $ [f, P]$ belongs to $\lag_{1}^{\infty} (A)$, \item $[f,P] \equiv (ik)^{-1} [\nabla_X^{L^k \otimes A} , P]$ modulo $\lag_{2}^{\infty} (A)$. \end{itemize} So the commutator $[f + \frac{i}{k} \nabla_X^{L^k \otimes A} , P]$ belongs to $\lag_{2}^{\infty}(A)$.
\end{lemme} \begin{proof} The Schwartz kernel of $[f, P]$ is the product of $g(x,y)= f(x) - f(y)$ by the Schwartz kernel of $P$. Since $g$ vanishes along the diagonal, this implies that $[f, P] \in \lag_1 ( A)$. Furthermore, it follows from the definition \eqref{eq:def_si_tilde} of the symbol that if $\widetilde{{\sigma}}_0 ( P)(x) = \operatorname{Op} ( b)$ with $b \in \mathcal{P} (T_xM)$, then $\widetilde{{\sigma}}_1 ( [f,P] )(x) = \operatorname{Op} ( \ell b )$ where $\ell = d_x f \in T_x^*M$. Now a computation from \eqref{eq:sch_ker} shows that $$ [ \operatorname{Op} ( b ) , a_i ] = \operatorname{Op} ( z_i b) , \qquad [ \operatorname{Op} (b) , a_i^* ] = \operatorname{Op} ( \overline{z}_i b ) $$ where as in Section \ref{sec:landau-levels-cn} we use the annihilation and creation operators $a_i = \partial_{\overline{z}_i}$, $a_i^* = \overline{z}_i - \partial_{z_i}$. Since $ \rho ( \overline{\partial}_i)$ and $\rho ( \partial_i)$ are the restrictions of $a_i$ and $-a_i^*$ to ${\mathbb{C}}[\overline{z}_1, \ldots, \overline{z}_n]$ respectively, we deduce from Lemma \ref{lem:symb_der} that $$ {\sigma}_1 ( [f, P] ) = \tfrac{1}{i} {\sigma}_0 ( \bigl[ k^{-\frac{1}{2}} \nabla_X, P \bigr] ) $$ and the result follows. \end{proof} \begin{proof}[Proof of \eqref{eq:lastproof}] By a straightforward computation, we have $$ \Pi f \Pi g \Pi = \Pi fg \Pi + \Pi [f, \Pi ] [g, \Pi ] \Pi . $$ By Lemma \ref{lem:ledernier}, $\Pi [f, \Pi] [g, \Pi] \Pi$ belongs to $\lag_2 (A)$ and its symbol is \begin{xalignat*}{2} {\sigma}_2 ( \Pi [f, \Pi] [g, \Pi] \Pi ) & = - \pi_m [\rho (X) , \pi_m] [ \rho (Y), \pi_m] \pi_m \\ & = \pi_m \rho (X) \rho (Y) \pi_m \qquad \intertext{ since $\pi_m \rho(X) \pi_m =0$ and $\pi_m \rho(Y) \pi_m=0$} & = {\sigma}_2 ( k^{-1} \Pi \nabla_X^{L^k \otimes A} \nabla_Y^{L^k \otimes A} \Pi ) \end{xalignat*} and the result follows. \end{proof} Let us now prove Theorem \ref{theo:ladder}.
Introduce a quantization $( \mathcal{H}_{F,k})$ of $(M,L)$ twisted by $F = \mathcal{D}_m (TM) \otimes A$. We can adapt the definition \eqref{eq:def_w_k} of $W_k$ to the auxiliary bundle $A$ by setting \begin{gather} \label{eq:def_w_k_A} \begin{split} W_k : \Ci ( M , L^k \otimes A ) \rightarrow \Ci ( M, L^k \otimes F ), \qquad k \in {\mathbb{N}}\\ W_k = R_m D_{G^{\otimes (m-1)} \otimes A ,k} \circ D_{G^{\otimes (m-2)}\otimes A,k} \circ \ldots \circ D_{G \otimes A, k} \circ D_{A,k} \end{split} \end{gather} \begin{lemme} The family of operators $V_k = \frac{1}{m!} k^{-\frac{m}{2}} \Pi_{F,k} W_k \Pi_{k}$, $k \in {\mathbb{N}}$, belongs to $ \lag ^{\infty} ( A, F)$, has the same parity as $m$, and its symbol ${\sigma}_0(V)$ viewed as a morphism from $\mathcal{D}(TM) \otimes A$ to $\mathcal{D}(TM) \otimes F$ is given by $$ \forall \, f \in \mathcal{D}_p(TM), \; \forall \, a \in A, \; {\sigma}_0(V) ( f \otimes a) = \begin{cases} 1 \otimes f \otimes a \text{ if $p = m $} \\ 0 \text{ otherwise.} \end{cases}$$ \end{lemme} \begin{proof} By Proposition \ref{prop:versionlinfty}, for any even (resp. odd) operator $P \in \lag^{\infty} (B, A) $, $(k^{-\frac{1}{2}}D_{A,k} \circ P )$ belongs to $\lag^{\infty} ( B, A \otimes G) $, is odd (resp. even) and its symbol is $\varphi_A \circ {\sigma}_0 ( P)$ where $$\varphi_A = \sum_ i a_i \otimes \overline{z}_i \otimes \operatorname{id}_A \in \mathcal{S} (TM) \otimes G \otimes \operatorname{End} A .$$ Here we have introduced an orthonormal frame $( \partial_i)$ of $T^{1,0}M$, $(z_i)$ is the dual frame of $(T^{1,0}M)^*$ and $a_i =\partial_{\overline{z}_i}$ is the annihilation operator.
Consequently, $k^{-\frac{m}{2}}R_m D_{G^{\otimes (m-1)} \otimes A, k} \circ \ldots \circ D_{A,k} \circ \Pi_k $ belongs to $\lag ( A, F)$ with symbol $\varphi_{A}^m \circ \pi_m$, where $\varphi_A^m $ is the morphism from $\mathcal{D} ( TM) \otimes A$ to $\mathcal{D} ( TM) \otimes \mathcal{D}_m (TM) \otimes A$ given by $$ \varphi_A^m = \sum_{i_1, \ldots, i_m =1}^{n} a_{i_1} \ldots a_{i_m} \otimes (\overline{z}_{i_1} \ldots \overline{z}_{i_m}) \otimes \operatorname{id}_A . $$ Computing $\varphi_A^m ( \overline{z}^{{\beta}} )$ for any multi-index ${\beta}$, we show that for any $f \in \mathcal{D}_p(TM)$ and $ a\in A$, $ \varphi_A^m ( f \otimes a) = m ! (1 \otimes f \otimes a )$ when $p=m$ and $0$ otherwise. \end{proof} The symbol $\sigma_0 (V)$ is exactly the symbol $\rho$ introduced in Lemma \ref{lem:unitary-symbole}. Now Theorem \ref{theo:ladder} and the remark on the unitarization of $V_k$ follow by the same proof as Theorem \ref{theo:unitary_equivalence}. \section{Appendix} \label{sec:appendix} In this appendix we discuss the known results for surfaces with constant curvature. Two important features appear for the negatively curved surfaces: only the lower part of the spectrum consists of Landau levels, and moreover, there is an isomorphism between the $m$-th Landau level and the first level of a Laplacian twisted by the $m$-th power of the complex determinant bundle. These results appeared in the physics literature, cf. in particular \cite{IeLi94} for the case of surfaces of genus $\geqslant 2$. A more recent mathematical reference is \cite{Te06}. \subsubsection*{The plane \cite{La30}} Consider a quantum particle confined to a two-dimensional plane $(x,y)$ and subject to a constant magnetic field perpendicular to this plane.
Its Hamiltonian is the operator \begin{gather} \label{eq:laplacian_landau} H = -\tfrac{1}{2} (\nabla_x^2 + \nabla_y^2) \quad \text{ with } \quad \nabla_x = \tfrac{\partial}{ \partial x} + \tfrac{i}{2} B y,\; \nabla_y = \tfrac{\partial}{\partial y} - \tfrac{i}{2} Bx . \end{gather} Here $B$ is a positive constant representing the strength of the magnetic field. The spectrum of $H$ is $B(\frac{1}{2} + {\mathbb{N}} )$ and the Landau levels $\mathcal{H}_m = \operatorname{ker} ( H - B( \frac{1}{2} + m))$ are given in terms of the ladder operators $\nabla_z = \nabla_x - i \nabla_y$, $\nabla_{\overline{z}} = \nabla_x + i \nabla_y$ by \begin{gather} \label{eq:landau_level} \mathcal{H}_0 = \operatorname{ker} ( \nabla_{\overline z}), \qquad \mathcal{H}_m = ( \nabla_z)^m \mathcal{H}_0 , \quad m \geqslant 1 . \end{gather} \subsubsection*{Surfaces with constant curvature \cite{IeLi94}, \cite{Te06}} Let $M$ be a compact orientable surface with a Riemannian metric having a constant Gauss curvature $S$. Introduce a Hermitian line bundle $ L \rightarrow M$ with a connection $\nabla : \Ci (M, L ) \rightarrow \Omega^1 (M,L)$. Assume that the curvature satisfies \begin{gather} \label{eq:constant_magnetic} i \operatorname{curv} ( \nabla) = B \operatorname{vol}_g \end{gather} where $B$ is a non-zero constant and $\operatorname{vol}_g$ is the Riemannian volume. Choosing a convenient orientation of $M$, we can assume that $B$ is positive. The quantum Hamiltonian is the Laplacian $ \Delta := \frac{1}{2} \nabla^* \nabla $ acting on sections of $L$. Then, denoting its eigenvalues by $0 \leqslant {\lambda}_0 < {\lambda}_1 < \ldots $, it is known that \begin{gather} \label{eq:spec_magentic_surface} {\lambda}_m = B (\tfrac{1}{2} + m ) + S \tfrac{ m (m+1)}{2} \qquad \text{ if } B + m S >0 . \end{gather} For a sphere or a torus, $S \geqslant 0$, and these formulas describe the whole spectrum.
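The ladder structure \eqref{eq:landau_level} rests on the commutation relation $[H, \nabla_z] = B \nabla_z$, which follows from $[\nabla_x, \nabla_y] = -iB$ and shifts the energy by $B$ from one level to the next. As a sanity check, this relation can be verified exactly by applying the operators of \eqref{eq:laplacian_landau} to polynomial test functions; the following standalone sketch (not part of the paper; the value $B = 3$ and the test polynomial are arbitrary) encodes a polynomial in $x, y$ as a dictionary of coefficients:

```python
# Check [H, grad_z] = B * grad_z for the Landau Hamiltonian
#   grad_x = d/dx + (i B / 2) y,   grad_y = d/dy - (i B / 2) x,
#   H = -(grad_x^2 + grad_y^2) / 2,   grad_z = grad_x - i grad_y,
# acting on polynomials in x, y stored as dicts {(i, j): c} for c * x^i * y^j.

B = 3.0  # arbitrary positive field strength

def add(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return r

def scale(p, c):
    return {k: c * v for k, v in p.items()}

def d_x(p):
    return {(i - 1, j): i * c for (i, j), c in p.items() if i > 0}

def d_y(p):
    return {(i, j - 1): j * c for (i, j), c in p.items() if j > 0}

def mul_x(p):
    return {(i + 1, j): c for (i, j), c in p.items()}

def mul_y(p):
    return {(i, j + 1): c for (i, j), c in p.items()}

def grad_x(p):  # covariant derivative in x
    return add(d_x(p), scale(mul_y(p), 0.5j * B))

def grad_y(p):  # covariant derivative in y
    return add(d_y(p), scale(mul_x(p), -0.5j * B))

def H(p):       # Landau Hamiltonian
    return scale(add(grad_x(grad_x(p)), grad_y(grad_y(p))), -0.5)

def grad_z(p):  # ladder (raising) operator
    return add(grad_x(p), scale(grad_y(p), -1j))

# arbitrary polynomial test function
poly = {(2, 1): 1.0, (0, 3): 2.0 - 1.0j, (1, 0): 0.5j}

lhs = add(H(grad_z(poly)), scale(grad_z(H(poly)), -1))  # [H, grad_z] poly
rhs = scale(grad_z(poly), B)                            # B grad_z poly
err = max(abs(lhs.get(k, 0) - rhs.get(k, 0)) for k in set(lhs) | set(rhs))
print(err)  # vanishes up to float rounding
```

On $\mathcal{H}_0 = \operatorname{ker} \nabla_{\overline z}$ one has $H = B/2$, so raising $m$ times reproduces the eigenvalues $B(\tfrac{1}{2} + m)$.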
If the genus of $M$ is at least $2$, then $S<0$ and the condition $B + m S >0$ is satisfied only for a finite number of $m$. In this case, it is not reasonable to expect an explicit formula for the other eigenvalues. Indeed, if $S=-1$ and $L = K^r$ with $K$ the canonical bundle of $M$ and $r$ a positive integer, then \eqref{eq:spec_magentic_surface} gives the first $(r+1)$ eigenvalues, and for any $n \in {\mathbb{N}}$, ${\lambda}_{r+n} = {\lambda}_r + \tfrac{1}{2} \mu_n$, where $\{\mu_n, \; n \in {\mathbb{N}}\}$ is the spectrum of the Laplace-Beltrami operator of $M$. This latter spectrum depends in an essential way on the metric: by Huber's theorem \cite[Theorem 9.2.9]{Buser}, $\{ \mu_n\}$ determines the length spectrum of $M$. The multiplicity of the first eigenvalues is equal to \begin{gather} \label{eq:mult} \operatorname{mult} ( {\lambda}_m) = B \frac{ \operatorname{vol}_g (M)}{2\pi} + (\tfrac{1}{2} + m) \chi (M) \qquad \text{ if } B + (m+1)S >0. \end{gather} Here $\chi (M)$ is the Euler characteristic of $M$; observe that $B \operatorname{vol}_g (M) / (2\pi)$ is the degree of $L$, so it is an integer. Formula \eqref{eq:mult} follows from a description of the corresponding eigenspace $\mathcal{H}_m = \ker ( \Delta - {\lambda}_m)$ similar to \eqref{eq:landau_level}, which we explain below. \subsubsection*{Proof of formulas \eqref{eq:spec_magentic_surface} and \eqref{eq:mult}} To start with, we do not assume that the Gauss curvature $S$ and the function $B$ defined in \eqref{eq:constant_magnetic} are constant. We choose any orientation on $M$. Let $j$ be the complex structure of $M$ compatible with $g$, i.e. $g(jX, jY) = g(X,Y)$ for any tangent vectors $X$, $Y \in T_pM$, and $ X \wedge jX >0$ if $X \neq 0$. Since we are in real dimension 2, $j$ is integrable. Furthermore, the associated volume form $\operatorname{vol}_g$ is the symplectic form ${\omega} (X, Y) = g(jX, Y)$.
$L$ has a natural holomorphic structure such that its $\overline{\partial}$-operator is $\nabla^{0,1}$. We denote it by $\overline{\partial}_L: \Ci ( L) \rightarrow \Ci ( L \otimes \overline{K})$ with $K = (T^*M)^{1,0}$ the canonical bundle. The curvature of $\nabla$ has the form $\frac{1}{i} B {\omega}$ with $B \in \Ci ( M , {\mathbb{R}})$. The canonical bundle has a natural metric induced by $g$. Its Chern connection, that is, its connection compatible with both its metric and holomorphic structure, has curvature $i S {\omega}$ where $S$ is the Gauss curvature. \begin{theo} The following identities hold: \begin{enumerate} \item Weitzenb\"ock formula: $\Delta_L = \overline{\partial}^*_L \overline{\partial}_L + \frac{1}{2} B$. \item Bosonic commutation relation: $ \overline{\partial}_L \overline{\partial}_L^* = \overline{\partial}_{L\otimes K^{-1}} ^* \overline{\partial}_{L\otimes K^{-1}} + (B + S)$. \end{enumerate} \end{theo} The Weitzenb\"ock formula is a classical relation; it holds more generally on K\"ahler manifolds. We call the second formula the bosonic commutation relation because it replaces the canonical commutation relation $[a, a^{*} ] =1$ satisfied by the creation/annihilation operators. In this formula, we identify $\overline{K}$ with $K^{-1}$ through the metric so that the operators $ \overline{\partial}_L \overline{\partial}_L^*$ and $\overline{\partial}_{L\otimes K^{-1}} ^* \overline{\partial}_{L\otimes K^{-1}}$ act on the same space $\Ci ( L\otimes \overline{K}) = \Ci ( L \otimes K^{-1})$. A similar formula was obtained in \cite[Proposition 9]{Te06} for the same purpose of computing the spectrum of $\Delta_L$. \begin{proof}[Proof of the bosonic identity] Introduce a local holomorphic frame $s$ of $L$ and a complex coordinate $z$ on $M$. We have first $\overline{\partial}_L ( f s) = f_{\overline{z}} \; s \otimes d \overline{z}$.
To compute the adjoint, recall that the scalar products of $\Ci (L)$ and $\Ci( L \otimes \overline{K})$ are defined by integrating the pointwise scalar products against the volume form. Write ${\omega} = i h dz \wedge d \overline{z}$ and $|s|^2 = e^{- \varphi}$ with $h$ and $\varphi$ real valued functions. Then $|d \overline z |^2 = h^{-1}$ and a direct computation leads to $$\overline{\partial}_L^* ( f s \otimes d\overline{z} ) = h^{-1} ( - f_z + f \varphi_z ) s.$$ With the identification $\overline{K} \simeq K^{-1}$, we have $ d\overline{z} = h^{-1} (dz)^{-1}$. We deduce that $$ \overline{\partial}_L \overline {\partial}_L^* ( f s \otimes (dz)^{-1}) = h^{-1} \bigl( - f_{z \overline{z} } + f_{\overline{z}} ( \varphi_z - \tfrac{h_z}{h}) + f ( \varphi_{z \overline{z}} - \partial_{\overline{z}} ( \tfrac{h_z}{h})) \bigr) s \otimes (dz )^{-1}.$$ A similar computation using $|s \otimes (dz)^{-1} |^2 = h e^{-\varphi}$ leads to $$ \overline{\partial}_{L\otimes K^{-1}} ^* \overline{\partial}_{L\otimes K^{-1}} ( f s \otimes (dz)^{-1}) = h^{-1} \bigl( - f_{z \overline{z}} + f_{\overline{z}} ( \varphi_z - \tfrac{h_z}{h} ) \bigr) s \otimes (dz)^{-1} .$$ To conclude, observe that $B = h^{-1} \varphi_{z \overline z}$ and $S = - h^{-1} \partial_z \partial_{\overline z} \, \ln h $. \end{proof} From now on, we assume that $B$ and $S$ are constant. With the Weitzenb\"ock formula, we pass directly from the spectrum of $\Delta _L$ to that of $\overline{\partial}^*_L \overline{\partial}_L$. We can use the bosonic relation exactly as is usually done with the Landau Hamiltonian, cf. the proof of Proposition \ref{prop:linearlandau}. We deduce that for any ${\lambda} \neq 0$, ${\lambda} $ is an eigenvalue of $\overline{\partial}^*_L \overline{\partial}_L$ if and only if ${\lambda} - (B+S)$ is an eigenvalue of $\overline{\partial}^*_{L\otimes K^{-1}} \overline{\partial}_{L\otimes K^{-1}}$. Moreover, the eigenspaces have the same dimension.
Indeed $\overline{\partial}_L$ restricts to an isomorphism $$ \operatorname{Ker} ( {\lambda} - \overline{\partial}^*_L \overline{\partial}_L ) \rightarrow \operatorname{ker} ( {\lambda} - (B+S) - \overline{\partial}^*_{L\otimes K^{-1}} \overline{\partial}_{L\otimes K^{-1}} ) $$ with inverse the restriction of ${\lambda}^{-1} \overline {\partial}_L^*$. Besides this, $\ker (\overline{\partial}^*_L \overline{\partial}_L)$ is the space $H^0 ( L)$ of holomorphic sections of $L$. By the Riemann-Roch theorem, $H^0 (L)$ has dimension $d + \frac{1}{2} \chi (M)$ if the degree $d = B \operatorname{Vol}(M)/(2\pi)$ of $L$ is larger than $- \chi (M)$. To summarize, when $B$ is sufficiently large, $0 $ is an eigenvalue of $\overline{\partial}^*_L \overline{\partial}_L$ with multiplicity equal to $ B \operatorname{Vol}(M) / (2 \pi) + \frac{1}{2} \chi(M)$, and the remainder of the spectrum is identical to the spectrum of $(B+S) + \overline{\partial}^*_{L\otimes K^{-1}} \overline{\partial}_{L\otimes K^{-1}}$, multiplicities included. We can iterate this argument and deduce by induction the formulas \eqref{eq:spec_magentic_surface}, \eqref{eq:mult} giving the first eigenvalues of $\Delta_{L}$ with their multiplicities. Since $\operatorname{deg} ( L \otimes K^{-1}) = \operatorname{deg} (L) + \chi (M)$, we can repeat this argument ad infinitum when $\chi (M) \geqslant 0$ and obtain the whole spectrum of $\Delta_L$; whereas for $\chi (M) <0$, only a finite number of iterations is possible. \bibliographystyle{plain}
\section{Introduction} \IEEEPARstart{I}{n} recent years, thanks to the increasingly ubiquitous deployment of WiFi infrastructure and the open-source software \cite{Tool}, human sensing based on WiFi Channel State Information (CSI) has gained significant attention \cite{track} \cite{wirol} \cite{fall}. To achieve more fine-grained human sensing, WiFi-based human pose estimation has recently become a focus of research. Pioneering works \cite{rf} \cite{rf3D} estimate human poses using radars operating at WiFi frequencies, i.e., 5.46--7.24~GHz. \par Further, for easy real-world deployment and cost savings, \cite{letters} \cite{mobicom} use commodity WiFi devices to obtain fine-grained human poses. \cite{letters} enables commodity WiFi devices to capture 2D human skeleton images, but it can only obtain human poses from one perspective and, restricted by its annotations, performs unsatisfactorily from some viewpoints.\par Different from \cite{letters}, \cite{mobicom} presents WiPose, the only 3D human pose estimation system using commodity WiFi. It utilizes the amplitude component of CSI and achieves high performance in the experimental environment. However, during the experiments, the subjects are required to perform movements at a fixed point, and the system requires 9 distributed antennas. WiPose is therefore limited to specific applications and not convenient enough for daily uses such as smart home and health monitoring.\par In summary, due to changes in the target position and the difficulty of data collection, WiFi-based human pose estimation still has limitations that hinder wide adoption in daily life.\par To address the above limitations, we propose Wi-Mose, the first system that can capture fine-grained 3D moving human poses with commodity WiFi devices in both Line of Sight (LoS) and Non-Line of Sight (NLoS) scenarios. To reconstruct 3D moving human poses, we mainly design the system from two aspects: data processing and network design.
\par On the one hand, we convert the processed amplitude and phase, which contain the pose and position information respectively, into a sensitive CSI tensor, called \emph{CSI images}, and feed these CSI images into the network rather than only amplitude or phase information. On the other hand, we design a deep feature extraction network to extract pose-related features from the amplitude channel and weaken the influence of position changes by leveraging the information in the phase channel. Specifically, we use the position information contained in the phase channel as prior knowledge to add constraints to the pose estimation. We also design a pose regression network to convert the features into key-point coordinates. Therefore, the subject can \emph{move continuously and freely without space constraints}. The main contributions of our work are listed as follows:\par 1. We propose a method to convert the raw CSI data into CSI images so that the neural network can extract features which contain more pose information but less position component. \par 2. We design a neural network which is suitable for extracting moving human pose features from CSI images and converting WiFi signals into 3D human poses.\par 3. We build a 3D human pose estimation prototype system for experiment and evaluation. Results show that the system can estimate 3D human poses with 29.7~mm (37.8~mm) Procrustes analysis Mean Per Joint Position Error (P-MPJPE) in the LoS (NLoS) scenarios, achieving 21\% (10\%) improvements in accuracy over the state-of-the-art method. \par 4. Because of the usage of CSI images and the specialized network, Wi-Mose utilizes only 6 antennas to capture information, which is lightweight and low-cost compared with the state-of-the-art method \cite{mobicom}.\par The rest of this paper is organized as follows. Section II is the system overview. Section III discusses data collection and processing. Section IV introduces the neural network. Section V is the baseline.
Section VI describes experiments and performance, followed by the conclusion in Section VII. \begin{figure*}[!t] \vspace{-0.9em} \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{0.cm} \centerline{\includegraphics[scale=0.58]{system_noversion}} \caption{The system overview. The upper pipeline provides ground truth for supervised training, while the bottom pipeline learns to extract human poses using only WiFi signals from commodity devices.} \label{system} \vspace{-1.8em} \end{figure*} \section{System Overview} \vspace{-0.1cm} The system consists of three parts: data collection, data processing, and pose estimation, as shown in Fig.~\ref{system}. The data collection part contains two receivers, a transmitter, and a monocular camera, which are used to collect synchronous CSI and video frames. The data processing part converts the raw CSI data into CSI images and transforms the video frames into human key-point coordinates which are used for supervised learning. The pose estimation part extracts features from CSI images and converts the features into key-point coordinates which are utilized to reconstruct 3D human pose skeletons. \vspace{-0.3cm} \section{Data Collection and Processing} \vspace{-0.2cm} In this paper, to reconstruct 3D moving human pose skeletons using commodity WiFi devices, we need to collect synchronous raw CSI and video frames and then process these data so that they can be fed into the neural network.\\ \vspace{-1cm} \subsection{Data Collection} \vspace{-0.1cm} According to the Fresnel zone model \cite{fresnel}, to capture human poses in the whole space, we need at least 2 pairs of transceivers. Hence, in this paper, we utilize 3 commodity WiFi devices (one transmitter and two receivers) to capture the raw CSI data which contains human pose information. To collect more information, we use 1 transmitting antenna at the transmitter and 3 receiving antennas at each receiver.
In order to improve resolution and capture pose-related features efficiently, the 2 pairs of transceivers are mutually perpendicular and the transmitter is placed at the intersection. While collecting CSI data, we also record synchronous video frames using a monocular camera, which are utilized to extract 3D key-point coordinates as the ground truth to train the proposed neural network. \vspace{-0.5cm} \subsection{Data Processing} \vspace{-0.1cm} \subsubsection{Link Selection}We observe that there is always one antenna whose received CSI has a larger variance than the others, which means this antenna captures larger dynamic responses. We therefore choose it as the reference and make use of its amplitude information. \subsubsection{Denoising} In reality, there are multiple paths between a pair of transceivers. In the ideal case, the response of the wireless channel at time $t$ and frequency $f$ can be expressed as: \begin{footnotesize} \vspace{-0.1cm} \begin{equation} H(f,t)=\sum_{i=1}^{N}\alpha _{i}(t)e^{-j2\pi f\tau_{i}(t)} \end{equation} \vspace{-0.3cm} \end{footnotesize}\par \noindent where $N$ is the number of paths, and $\alpha _{i}(t)$ and $\tau_{i}(t)$ are the complex attenuation and time of flight of the $i$-th path, respectively.\par According to whether the length of the path changes, CSI can be divided into two parts, the static and dynamic path components, which can be expressed as: \begin{footnotesize} \vspace{-0.1cm} \begin{equation} H(f,t)=H_{s}(f,t)+\sum_{i\in P_{d}}\alpha _{i}(t)e^{-j2\pi f\tau_{i}(t)} \end{equation} \vspace{-0.3cm} \end{footnotesize}\par \noindent where $H_{s}(f,t)$ is the sum of the responses of all static paths, including the LoS path and other static reflection paths, and $P_{d}$ is the collection of dynamic paths, which are not constant over time. Our purpose is to extract the dynamic path component.\par We cannot directly utilize raw CSI to capture human poses.
This is because, compared with the LoS and other static-path signals in raw CSI measurements, pose-related signals are too weak and are easily influenced by unpredictable interference. To improve the accuracy of pose estimation, we eliminate the interference and extract the dynamic paths corresponding to the human body.\par For more denoising details, please refer to our previous work \cite{letters}. \subsubsection{Segmentation}In order to capture continuous human poses, we segment the processed CSI according to the synchronous video frames and reconstruct CSI images, as shown in Fig.~\ref{system}. The CSI images contain an amplitude channel and a phase channel. The amplitude mainly reflects the person's movements, while the phase mainly reflects changes in position. To extract pose information and weaken the influence of position changes, we combine them to provide both pose and position information for the neural network. \par \vspace{-0.1cm} \begin{figure}[!t] \vspace{-0.5cm} \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{0.cm} \centerline{\includegraphics[scale=0.8]{block}} \caption{The structure of a residual block. The 1$\times$1 and 3$\times$3 refer to the kernel size. In and out refer to the output channels of the layer.} \label{block} \vspace{-1em} \end{figure} \section{Neural Network Design} \vspace{-0.1cm} In this paper, we design a neural network to extract features and convert them into key-point coordinates. In order to reduce the influence of position changes on pose estimation, different from \cite{mobicom}, we choose to directly regress the key-points, similar to common practice in computer vision. \vspace{-0.3cm} \subsection{Data and Annotations} \vspace{-0.1cm} To accurately and intuitively associate CSI data with human poses, we use a camera synchronized with a receiver to capture video frames. Then, we apply AlphaPose \cite{alpha} and VideoPose3D \cite{videopose} to get the 3D key-point coordinates from the video frames.
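To make the static/dynamic decomposition of Section III-B concrete, the following minimal simulation (all path parameters are hypothetical, not taken from our measurements) synthesizes one static and one dynamic path, and removes the static component by subtracting the temporal mean of the CSI, a simplified stand-in for the actual denoising of \cite{letters}:

```python
import numpy as np

f = 5.32e9                      # carrier frequency in Hz (hypothetical)
t = np.arange(0.0, 1.0, 1e-3)   # 1 s of CSI samples at 1 kHz (hypothetical)

# Static component H_s: LoS path plus one static reflection
H_s = 1.0 * np.exp(-2j * np.pi * f * 20e-9) + 0.3 * np.exp(-2j * np.pi * f * 35e-9)

# One dynamic path whose delay oscillates as a limb moves at 1.5 Hz
tau_d = 40e-9 + 0.2e-9 * np.sin(2 * np.pi * 1.5 * t)
H = H_s + 0.1 * np.exp(-2j * np.pi * f * tau_d)

# Crude static-path removal: subtract the temporal mean
H_dyn = H - H.mean()

# The mean recovers H_s up to the (small) dynamic amplitude ...
print(abs(H.mean() - H_s) < 0.1)
# ... while the residual still carries the motion-induced variation
print(np.abs(H_dyn).max() > 0.05)
```

Both checks print \texttt{True}: the temporal mean is dominated by the static paths, so the residual isolates the motion-induced component that the CSI images are built from.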
Since our goal is to reconstruct 3D moving human pose skeletons, we choose key-point coordinates as the annotation rather than the whole skeleton: since the limbs are rigid, locating key-points is more accurate and prevents overfitting. \vspace{-0.4cm} \subsection{Network Framework} \vspace{-0.1cm} The design of our neural network must consider the temporal correlation of human poses and the spatial position of the human body. In addition, the spatial resolution of WiFi signals is low, which makes it difficult to capture complete human poses from just a single CSI sample. To solve these problems, we make the network learn to aggregate information from multiple CSI samples instead of taking a single CSI sample as input. \par We design a neural network to convert CSI data into 3D human key-point coordinates, consisting of a feature extraction network and a key-point regression network. Because the input CSI images contain a phase channel, which increases the amount of data and introduces linear constraints, the feature extraction network should be able to extract sufficiently deep features.\par \begin{figure*}[!t] \vspace{-1em} \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.0cm} \centerline{\includegraphics[scale=0.356]{show_v4}} \caption{Test examples showing the reconstructed skeletons of a person in the LoS and NLoS scenarios. The first line: video frames captured by the camera, presented here for visual reference. The second line: human poses extracted from video frames, presented here as ground truth. The third line: human poses captured by WiPose. The fourth line: human poses captured by Wi-Mose. The poses in red boxes have obvious errors.} \label{show} \vspace{-1.5em} \end{figure*} Therefore, in the feature extraction network, we use a residual network, which contains 13 residual blocks as shown in Fig.~\ref{block}, to extract features related to human poses.
Because of its special structure, the residual network avoids the exploding and vanishing gradients caused by deepening the network. In the key-point regression network, we utilize two fully connected layers to integrate the feature information and finally convert these features into key-point coordinates. The details of the network are shown in Table~\ref{network}.\par Take a set of synchronized CSI data and key-point coordinates $(C_{1},C_{2},K)$ as an example, where $(C_{1},C_{2})$ denotes the CSI images from two pairs of transceivers and $K$ denotes the corresponding key-point coordinates from the video frames.\par In the training stage, we feed $(C_{1},C_{2})$ into the proposed neural network and get the predicted 3D human key-point coordinates $P$. Then we use $K$ as an annotation and compare $P$ with it to optimize the entire network. \par We define the training process as minimizing the average Euclidean distance error between the predicted joints and the ground truth, so we first define the position loss $L_{P}$ as the $L_{2}$ norm between the predicted joints and the ground truth: \begin{footnotesize} \vspace{-0.1cm} \begin{equation} L_{P} =\frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\left \| \tilde{p}_{t}^{i}-p_{t}^{i} \right \|_{2} , \end{equation} \vspace{-0.3cm} \end{footnotesize} \noindent where $\tilde{p}_{t}^{i}$ and $p_{t}^{i}$ are the predicted and real coordinates of joint $i$ in time slot $t$, $N$ is the number of joints in the model we use, and $T$ is the number of data samples.\par \begin{table}[t] \centering \setlength{\abovecaptionskip}{-0.1cm} \vspace{-0.5cm} \setlength{\belowcaptionskip}{0.cm} \caption{The Neural Network Implementation} \label{network} \renewcommand{\arraystretch}{1.3} \renewcommand\tabcolsep{16.0pt} \fontsize{7}{7}\selectfont \begin{tabular}{c|c|c|c} \toprule Network &Input Size & Output Size & Stride \\ \hline BLOCK1& 30$\times$20$\times$4& 30$\times$20$\times$4& 1$\times$1 \\ BLOCK2 &30$\times$20$\times$4&
30$\times$20$\times$8& 1$\times$1 \\ BLOCK3 &30$\times$20$\times$8& 15$\times$10$\times$8& 2$\times$2 \\ BLOCK4 &15$\times$10$\times$8& 15$\times$10$\times$16& 1$\times$1 \\ BLOCK5 &15$\times$10$\times$16& 8$\times$5$\times$16& 2$\times$2 \\ BLOCK6 &8$\times$5$\times$16& 8$\times$5$\times$64& 1$\times$1 \\ BLOCK7 &8$\times$5$\times$64& 4$\times$3$\times$64& 2$\times$2 \\ BLOCK8 &4$\times$3$\times$64& 4$\times$3$\times$256& 1$\times$1 \\ BLOCK9 &4$\times$3$\times$256& 2$\times$2$\times$256& 2$\times$2 \\ BLOCK10 &2$\times$2$\times$256& 2$\times$2$\times$1024& 1$\times$1 \\ BLOCK11 &2$\times$2$\times$1024& 1$\times$1$\times$1024& 2$\times$2 \\ BLOCK12 &1$\times$1$\times$1024& 1$\times$1$\times$2048& 1$\times$1 \\ BLOCK13 &1$\times$1$\times$2048& 1$\times$1$\times$2048& 1$\times$1 \\ FC1 &1$\times$1$\times$2048& 1$\times$512& - \\ FC2 &1$\times$512& 1$\times$51& - \\ \bottomrule \end{tabular} \begin{tablenotes} \scriptsize \item[1] Stride applies only to the two layers inside the red box in Fig.~\ref{block}. \item[2] The stride of all other layers is $1\times1$. \end{tablenotes} \vspace{-3em} \end{table} Since our network regresses the key-points directly, we introduce the Huber Loss into the loss function. The Huber Loss is a parameterized loss function for regression problems. It complements $L_{P}$ and reduces the interference of outliers. The Huber Loss $L_{H} $ in our loss function is expressed as: \begin{footnotesize} \vspace{-0.1cm} \begin{equation} L_{H} =\frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\left \| \tilde{p}_{t}^{i}-p_{t}^{i} \right \|_{H} , \end{equation} \vspace{-0.1cm} \end{footnotesize} \noindent where $\left\|\cdot\right \|_{H}$ denotes the Huber norm.
It is defined as: \vspace{-0.1cm} \begin{footnotesize} \begin{equation} \left\|x\right \|_{H}=\frac{1}{n}\sum_{i=1}^{n}huber(x_{i}), \end{equation} \vspace{-0.1cm} \end{footnotesize}\par \noindent where: \vspace{-0.1cm} \begin{footnotesize} \begin{equation} huber(x_{i})= \begin{cases} 0.5x_{i}^{2} & {\rm if}\;|x_{i}|<\delta \\ \delta\left(|x_{i}|-0.5\delta\right) & {\rm otherwise.} \end{cases} \end{equation} \end{footnotesize}\par \vspace{-0.1cm} The parameter $\delta$ of the Huber Loss is set to 0.75 in our experiments.\par Finally, the loss function is defined as: \begin{footnotesize} \begin{equation} L=L_{P}+L_{H} \end{equation} \end{footnotesize}\par \vspace{-0.1cm} We use the Adam \cite{Adam} optimizer to minimize the loss function in our network. \vspace{-0.5cm} \subsection{Network Settings and Training} We collect CSI data at 150Hz and video frames at 30Hz, which means every 5 CSI samples at each receiver are synchronized with one video frame by timestamps. Considering the continuity of movement and the efficiency of training, we use the 5 strictly corresponding CSI samples and the preceding 15 samples for each video frame. Therefore, 20 unique CSI samples correspond to one video frame, i.e., poses are generated at 7.5 Hz. \par The structure of the neural network is shown in Table~\ref{network}. The feature extraction part consists of 13 residual blocks shown in Fig.~\ref{block}. We add a batch normalization layer after each convolution. In order to add non-linearity to the model, we use Rectified Linear Unit (ReLU) activation functions after each batch normalization layer. To improve training efficiency, we set the stride of the two layers inside the red box in Fig.~\ref{block} to 2$\times$2 in the 3rd, 5th, 7th, 9th, and 11th residual blocks, and all other strides to 1$\times$1.
In addition, in these layers, we perform a convolution operation (kernel size 1$\times$1, stride 2$\times$2) on the input to make it match the size of the output. After the residual network, two fully connected layers convert the high-dimensional features into the key-point coordinates. \par The network is implemented with TensorFlow \cite{tensorflow}. It is trained for 6 epochs using the Adam \cite{Adam} optimizer with a learning rate of 0.0001 and a batch size of 4. Moreover, we adopt a learning-rate decay schedule in which the learning rate is multiplied by 0.9 after every epoch. \vspace{-0.4cm} \section{Baseline} \vspace{-0.1cm} WiPose, proposed in \cite{mobicom}, is currently the state-of-the-art 3D human pose estimation system based on commodity WiFi. In this paper, we apply the deep learning model proposed in WiPose as our baseline. The model contains a four-layer Convolutional Neural Network (CNN) and a three-layer Long Short-Term Memory (LSTM) network on top of the CNN. The system applies the output of the LSTM to the initial skeleton model to obtain the current poses according to forward kinematics. In \cite{mobicom}, the authors propose two input data formats. Because our system uses 2D CSI data, we only use 2D CSI data to train the model in \cite{mobicom}. \par Note that although our work is carried out on the basis of \cite{letters}, we do not choose the model in \cite{letters} as our baseline, because that model estimates 2D human skeleton images while our goal is to estimate 3D human poses. \vspace{-0.3cm} \section{Experiment and Evaluation} \vspace{-0.2cm} \subsection{Setup} Our experimental site is a 7m$\times$8m basement. We use 3 transceivers working in the 5 GHz band with 20 MHz bandwidth, and we install CSI-Tool \cite{Tool} on all the transceivers. The two receivers are synchronized using the Network Time Protocol (NTP). We connect a monocular camera to a receiver to record video frames. Throughout the experiment, other wireless devices in the environment remain in operation.
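Before turning to the dataset, the loss function $L = L_{P} + L_{H}$ and the epoch-wise learning-rate decay described above can be sketched as follows; this is a framework-agnostic NumPy sketch using the standard $\delta$-parameterized Huber penalty with $\delta = 0.75$, not the exact TensorFlow implementation.

```python
import numpy as np

def huber(x, delta=0.75):
    """Elementwise Huber penalty; quadratic below delta, linear above."""
    x = np.abs(x)
    return np.where(x < delta, 0.5 * x ** 2, delta * (x - 0.5 * delta))

def loss(pred, truth, delta=0.75):
    """L = L_P + L_H: mean per-joint Euclidean distance plus the Huber term.
    `pred`, `truth`: arrays of shape (T, N, 3) (time slots, joints, xyz)."""
    diff = pred - truth
    l_p = np.linalg.norm(diff, axis=-1).mean()  # L2 distance per joint
    l_h = huber(diff, delta).mean()             # Huber-norm term
    return l_p + l_h

# learning rate 0.0001, multiplied by 0.9 after every epoch, for 6 epochs
lrs = [1e-4 * 0.9 ** epoch for epoch in range(6)]
```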
\vspace{-0.4cm} \subsection{Dataset} \vspace{-0.1cm} The dataset contains 10 hours of data for 5 people with different heights, weights, clothing, and genders, which means that we finally collected 5,400,000 CSI samples at each receiver. We ask each volunteer to perform continuous poses in the room. Meanwhile, the camera records their poses. Then we obtain the joints synchronized with the CSI samples through automatic annotation (AlphaPose and VideoPose3D). For the data of the four persons who are chosen as the training subjects, 75\% is used as the training set to train the network and the remaining 25\% is used as the test set to test the model. The data of the last person is used for testing the generalization of our model. \vspace{-0.2cm} \begin{table*}[!htbp] \setlength{\abovecaptionskip}{-0.1cm} \renewcommand{\arraystretch}{1.1} \renewcommand\tabcolsep{2.5pt} \caption{P-MPJPE (unit: $mm$) for The Basic Scenario} \label{basic} \fontsize{8}{10}\selectfont \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \toprule Joints & MidHip & LHip & LKnee & LAnkle & RHip & RKnee & RAnkle & Back & Neck & Nose & Head & LShoulder & LElbow & LWrist & RShoulder & RElbow & RWrist & Overall\\ \hline WiPose &17 & 30 & 36 & 57 & 34 & 40 & 60 &31 & 13 & 25 & 38 & 26 & 45 & 53 & 24 & 43 & 67 & 37.6\\ Wi-Mose &16 & 22 & 32 & 49 & 23 & 31 & 50 &18 & 14 & 24 & 36 &19 & 31 & 46 &18 &24 &52&29.7\\ \bottomrule \end{tabular} \vspace{-2em} \end{table*} \vspace{-0.3cm} \subsection{Performance} \vspace{-0.1cm} To measure the performance, we introduce P-MPJPE, which performs a Procrustes analysis before calculating the Mean Per Joint Position Error (MPJPE). We observe that, compared with the ground truth, some constructed 3D human pose coordinates have a slight global offset. The reason is that we directly regress key-point coordinates, which introduces an error independent of poses.
Since P-MPJPE is more suitable for moving human pose estimation, we utilize it to weaken the effect of this pose-independent error. \subsubsection{Basic Scenario}We first evaluate the performance in the basic scenario, which is the LoS scenario. The left part of Fig.~\ref{show} shows a test example of the constructed skeletons of a person who continuously walks in different poses and directions. \par Table~\ref{basic} reports the P-MPJPE for the basic scenario. We not only calculate the errors for each joint but also measure the overall performance by averaging them. The results show that the proposed system performs much better than the baseline. The overall P-MPJPE of our system is 29.7mm, while that of the baseline is 37.6mm. Note that the error of the root joint in WiPose, which is set to the Neck in our model, is smaller than that in Wi-Mose because of the introduction of forward kinematics. The positioning accuracy of all other joints in Wi-Mose is higher than in the baseline. For both the baseline and the presented system, it is more difficult to locate the joints farther from the trunk, since the reflected signals from these parts are weaker than those from closer parts. Another reason is that these parts of the human body have smaller reflection areas and always have a larger moving range in our dataset, which makes them much harder to locate.\par The upper part of Fig.~\ref{multi} shows a test example from different perspectives. Compared with 2D pose estimation, we can show the poses from all perspectives, even if the limbs are obscured. The results show that Wi-Mose is better suited to capturing 3D human poses throughout the space.
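The P-MPJPE metric used above can be sketched as follows: a similarity (Procrustes) alignment of the predicted joints to the ground truth, followed by the mean per-joint Euclidean distance. This is a standard Procrustes implementation for illustration, not the paper's evaluation code.

```python
import numpy as np

def p_mpjpe(pred, truth):
    """MPJPE after Procrustes alignment (rotation, scale, translation)
    of `pred` to `truth`; both have shape (N_joints, 3)."""
    mu_p, mu_t = pred.mean(0), truth.mean(0)
    p, t = pred - mu_p, truth - mu_t
    # optimal rotation and scale from the SVD of the cross-covariance
    u, s, vt = np.linalg.svd(t.T @ p)
    r = u @ vt
    if np.linalg.det(r) < 0:  # keep a proper rotation (no reflection)
        u[:, -1] *= -1
        s[-1] *= -1
        r = u @ vt
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ r.T + mu_t
    return np.linalg.norm(aligned - truth, axis=-1).mean()
```

A prediction that differs from the ground truth only by a global rotation, scale, and offset therefore scores (near-)zero, which is what removes the pose-independent error discussed above.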
\begin{figure}[t] \setlength{\abovecaptionskip}{0.0cm} \setlength{\belowcaptionskip}{0.0cm} \centerline{\includegraphics[scale=0.32]{nlos_show}} \caption{The examples of estimated poses in multiple perspectives for the LoS (upper) and NLoS (bottom) scenarios.} \label{multi} \vspace{-0.8em} \end{figure} \subsubsection{Occluded Scenario} In order to verify the performance of Wi-Mose in the occluded environment (the NLoS scenario), we add a wooden screen between the subject and the receiver. The distribution of training and test data is the same as in the basic scenario. A test example is shown in the right part of Fig.~\ref{show}. The overall P-MPJPE of Wi-Mose is 37.8mm, while that of the baseline is 42.0mm, indicating that Wi-Mose outperforms the baseline. Thanks to the penetration ability of WiFi, we can still receive signals even when there are obstructions. Because the signals attenuate when passing through the wooden screen, the final error is larger than in the basic scenario. In the NLoS scenario, some poses estimated by the baseline are obviously wrong. The reason is that the baseline is more susceptible to position changes than Wi-Mose, especially when there is less information. \par The bottom part of Fig.~\ref{multi} shows a test example from different perspectives. The results indicate that Wi-Mose can capture high-precision 3D moving human poses in the occluded scenario. \subsubsection{Cross-subject Scenario} Wi-Mose performs well in the cross-subject scenario. We collect 5 people's data in both the basic and occluded scenarios. When we train the network in each scenario, we feed the data of the first four people into the network and then use the last person's data for testing. It should be noted that there are obvious differences in the heights and weights of the five people. The results are shown in Table~\ref{cross}. As we can see, the performance of the proposed Wi-Mose framework is slightly better than that of the baseline.
Compared with the basic and occluded scenarios, the performance of our system degrades somewhat more than that of the baseline. The reason is that we use only the network loss to constrain the pose estimation, which is a weak constraint, while the baseline applies the preset skeleton model in addition to the network loss, which is a strong constraint. This makes our model more flexible in dealing with full-space scenarios, but less satisfactory in cross-subject pose estimation. \section{Conclusion} In this paper, we present Wi-Mose, the first high-precision 3D moving human pose estimation system using commodity WiFi devices. We construct CSI images that contain both pose and position information so that the neural network can extract features that are related to poses but independent of position. Moreover, we design a neural network to extract features from CSI images and convert them into key-point coordinates. The experimental results show that Wi-Mose achieves 29.7mm and 37.8mm P-MPJPE in the LoS and NLoS scenarios, a 21\% and 10\% improvement in accuracy over the baseline, respectively. In the future, we plan to verify that Wi-Mose can also construct high-precision 3D moving human pose skeletons in different environments. \begin{table}[t] \setlength{\abovecaptionskip}{-0.1cm} \renewcommand{\arraystretch}{1.1} \renewcommand\tabcolsep{18.0pt} \caption{P-MPJPE (unit: $mm$) for The Cross-subject Scenario} \label{cross} \fontsize{8}{10}\selectfont \centering \begin{tabular}{p{1.5cm}<{\centering}|p{1.5cm}<{\centering}|p{1.5cm}<{\centering}} \toprule Scenarios & Model & Overall \\ \hline \multirow{2}{*}{ Basic} & WiPose &43.7 \cr & Wi-Mose &42.6 \cr\hline \multirow{2}{*}{ Occluded} & WiPose &50.7 \cr & Wi-Mose &46.8 \\ \bottomrule \end{tabular} \vspace{-0.8em} \end{table} \ifCLASSOPTIONcaptionsoff \newpage \fi \normalem \begin{spacing}{1} \bibliographystyle{IEEEtran}
\section{Introduction} Two hundred and sixty-five years ago, Euler introduced the equations for an inviscid, incompressible, three-dimensional (3D) fluid in \textit{Principes g\'en\'eraux du mouvement des fluides}~\cite{euler1755,euler2008,aussois}. The incompressible Euler partial differential equation (PDE) and its descendant, the incompressible Navier-Stokes PDE~\cite{navier1822,stokes1880}, govern, respectively, ideal and viscous fluid flows at low Mach numbers. They are, therefore, among the most prominent equations in physics; and their solutions are of importance in a variety of physical settings. Furthermore, these equations pose challenges for mathematicians: It is well known that the solutions of the two-dimensional (2D) Euler equation, with analytic initial data, do not exhibit a finite-time singularity \cite{2006pauls}; however, it is still not known whether any solutions of the 3D Euler equations develop a singularity in finite time when we start with analytic initial data (for non-analytic initial data, see Ref.~\cite{elgindi2019Euler3D}). The answer to this grand-challenge, finite-time-singularity problem also has important implications for turbulence in fluids, even if we use the 3D Euler PDE, as conjectured by Onsager~\cite{onsager,constantin1994onsager}; for a detailed discussion of these issues, see, e.g., Refs.~\cite{eyink,eyink2006onsager}, and, for recent advances, Ref.~\cite{buckmaster2021convex}. The possible relation between finite-time singularities in the 3D Euler PDE and finite-dissipation weak solutions of the 3D Euler equations, and their potential relevance to solutions of the 3D Navier-Stokes equation in the limit of vanishing viscosity, are discussed in Refs.~\cite{eyink,eyink2006onsager,de2010admissibility,de2012h,de2013dissipative,de2014dissipative}.
In this paper, we do not address the regularity problem for the 3D Navier-Stokes PDE, which is one of the Clay Mathematics problems; for a discussion of this problem we refer the reader to Ref.~\cite{clay}. Here, we investigate a potentially singular solution, first studied by Luo and Hou~\cite{houluo}, of a 3D axisymmetric Euler flow. Explorations of finite-time-singularity problems (for the Euler case see, e.g., Refs.~\cite{aussois,gibbonrev}) often use direct numerical simulations (DNSs), which have not yielded unambiguous results for or against a finite-time singularity in the 3D Euler PDE. Luo and Hou~\cite{houluo} have explored a \textit{potentially singular solution of the radially bounded, 3D, axisymmetric Euler equations} via a hybrid Galerkin and finite-difference method. Given the importance of this problem, it behooves us to study this potentially singular solution by a completely different numerical scheme and another singularity-detection criterion, in addition to the one based on the well-known Beale-Kato-Majda theorem~\cite{houluo,beale1984remarks,bkmas}. In particular, we use the singularity-detection criterion based on the movement of singularities in the complex space that was first discussed in the work of Sulem \textit{et al.}~\cite{sulem,kida1986study,ootb,bkmas,cickptg}. This method, referred to as the analyticity-strip method, calls for a pseudospectral simulation of the governing PDEs. Therefore, we have developed a pseudospectral, Fourier-Chebyshev scheme to study this problem; in any numerical implementation, we can only use a \textit{finite number} of Fourier-Chebyshev modes, i.e., we have a spectrally truncated system. Our method leads to new insights that include the formation of localized, oscillatory structures, called \textit{tygers}, at points of positive strain in the velocity fields.
Tygers were first introduced in the context of the one-dimensional (1D) Burgers and two-dimensional (2D) Euler equations~\cite{tyger1,tyger2,di2018dynamics,banerjee2014transition,pramana}, \textit{en route} to \textit{thermalization}, in spectrally truncated pseudospectral DNSs; note that the appearance of tygers does not necessarily imply the formation of a finite-time singularity, which occurs in the inviscid 1D Burgers equation but not for the 2D Euler PDE. Lee~\cite{lee1952some} and Hopf~\cite{hopf1952statistical} had proposed~\cite{kraichnan1955statistical,cartes2021galerkin} that such spectrally truncated systems, with a finite number of modes, \textit{must thermalize, at sufficiently long times}, because the total energy is conserved; the thermalized state displays equipartition of the energy between all wavenumber ($k$) modes. Such thermalization has been observed in various spectrally truncated hydrodynamical equations including the 3D Euler~\cite{cichowlas2005effective} and the 3D and 2D Gross-Pitaevskii~\cite{krstulovic2011energy,shukla2013turbulence} equations. The high-$k$ modes thermalize faster than the low-$k$ ones in, e.g., the spectrally truncated 3D Euler equation; these high-$k$ thermalized modes act effectively as a dissipation range for the low-$k$ modes and, over intermediate time scales, before complete thermalization occurs, the fluid energy spectrum shows a power law $\sim k^p$ form with the exponent $p \simeq -5/3$ as in the Kolmogorov 1941 phenomenology for inertial-range scaling in 3D Navier-Stokes (NS) turbulence~\cite{cichowlas2005effective}. We note, in passing, that high-order hyperviscosity in the 3D NS equation can emulate these effects of Galerkin truncation in the 3D Euler PDE as discussed in Ref.~\cite{frisch2008hyperviscosity}. 
A discussion of hyperviscosity is out of place here because we are concentrating on the 3D axisymmetric Euler PDE; a full discussion of Galerkin truncation via very-high-order hyperviscosity would require a separate study. We concentrate on the Galerkin-truncated axisymmetric 3D Euler PDE. We find that, before the appearance of tygers, our method yields spectral convergence to the 3D Euler PDE we consider, and the truncated solution is the true solution; soon after the birth of tygers, our spectrally truncated system moves towards thermalization and it does not provide a good representation of this PDE. Nevertheless, we show how to generalize the analyticity-strip method to uncover signatures of the potential singularity discussed above. The remainder of this paper is organised as follows: In Sec.~\ref{sec:Model} we define the model we study. Section~\ref{sec:NM} contains the numerical methods we use. In Sec.~\ref{sec:Results} we present the results of our study. Section~\ref{sec:Conclusions} contains a discussion of our results in the light of earlier studies. Some details of our calculations are given in Appendices~\ref{app:rescomp}--\ref{app:stat}. \section{Model} \label{sec:Model} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{supp_1.png} \caption{(Color online) A section of our cylindrical simulation domain with the heat-map of $\omega^1$ at a representative time $t=0.003094$ for a resolution of $N_r= 512$ and $N_z= 1024$.
Chebyshev collocation points are shown schematically in the $r-z$ plane for a constant value of $\theta$; these are spaced more closely near $r=0$ and $r=1$ than in the middle of the domain.} \label{fig:cyl} \end{figure} The 3D Euler PDE, for an incompressible, inviscid fluid is \begin{eqnarray} \bom_t + \bu \cdot \nabla \bom &=& \bom \cdot \nabla \bu ; \nonumber \\ \bom = \nabla \times \bu ; \; \bu &=& \nabla \times \bpsi ; \label{eq:3DEuler} \end{eqnarray} here, $\bom$ is the vorticity, $\bu$ the velocity field, and $\bpsi$ the vector-valued stream function that is related to the vorticity by the Poisson equation $\bom = - \nabla^2 \bpsi$; and $\bom_t \equiv \partial \bom /\partial t$. For axisymmetric flows, we use $ \bu(r,z) = u^r (r,z) \ \mathbi{\hat{e}_r} + u^{\theta}(r,z) \ \mathbi{\hat{e}_{\theta}} + u^z(r,z) \ \mathbi{\hat{e}_z} $, where $ \ \mathbi{\hat{e}_r} , \ \mathbi{\hat{e}_{\theta}}$, and $ \ \mathbi{\hat{e}_z}$ are unit vectors in the cylindrical coordinate system. Then, Eq.\eqref{eq:3DEuler} can be reduced to a system of equations for \begin{equation} u^1 = u^{\theta}/r, \qquad \omega^1 = \omega^{\theta}/r, \qquad \psi^1=\psi^{\theta}/r, \label{eq:u1etc} \end{equation} where $u^{\theta}$, $\omega^{\theta}$, and $\psi^{\theta}$ are angular components: \begin{subequations} \label{eq:set} \begin{align} u^1_t + u^ru^1_r + u^zu^1_z &= 2u^1\psi_z^1 ,\label{eq:main1}\\ \omega_t^1 + u^{r}\omega_r^1 + u^z\omega_z^1 &= ((u^1)^2)_z ,\label{eq:main2}\\ -\Big( \partial_r^2 + \frac{3}{r}\partial_r + \partial_z^2 \Big) \psi^{1} &= \omega^{1} , \label{eq:main3} \end{align} with $u^{r} = - r\psi^{1}_{z}$ and $u^{z} = 2\psi^{1}+r\psi^{1}_{r}$; and the subscripts $r, \, t,$ and $z$ on the functions indicate $\partial_r$, $\partial_t$, and $\partial_z$, respectively. 
\label{eq:AxisymmetricEuler} \end{subequations} The variables $u^1, \omega^1$, and $\psi^1$ are well defined, so long as the solutions to Eq.\eqref{eq:AxisymmetricEuler} are smooth {($C^{\infty}(\mathbf{R} \times \bar{\mathbf{R}}^{+})$} with $\mathbf{R}$, the set of real numbers and $\bar{\mathbf{R}}^{+}$, the set of affinely extended positive real numbers); $u^{\theta}, \omega^{\theta}$, and $\psi^{\theta}$ must all vanish at $r=0$ for these solutions to remain smooth~\cite{liuwang}. We solve Eq.\eqref{eq:AxisymmetricEuler} in the domain $ D(1,L) = \{ (r,z) : 0 \leq r \leq 1 , 0 \leq z \leq L \};$ we use $L$-periodic boundary conditions in $z$, the no-flow condition at $r=1$ \eqref{eq:noflow}, and the pole condition at $r=0$ \eqref{eq:polecond}: \begin{eqnarray} \psi^{1}(r=1,z,t) &=& 0; \label{eq:noflow} \\ u^{1}_r(r=0,z,t)= \omega^{1}_r(r=0,z,t) &=&\psi^{1}_r(r=0,z,t) \nonumber \\ &=& 0; \label{eq:polecond} \end{eqnarray} and the initial data \cite{houluo}: \begin{subequations} \begin{align} u^{1}(r,z,t=0)& = 100e^{-30(1-r^{2})^{4}}\sin\Big({\frac{2\pi z}{L}\Big)}; \\ \omega^{1}(r,z,t=0)& = \psi^{1}(r,z) = 0. \label{eq:initial3D} \end{align} \end{subequations} To compare our results with those of Luo and Hou~\cite{houluo}, it is \textit{imperative} that we use their initial condition. (See Appendix \ref{app:stat} for other types of initial conditions.) \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{main_1.png} \caption{(Color online){ Plots versus $t$ of $(a)$ log (base 10) of the percentage change, in our DNS, of the energy $ (\delta E \%)$ (red full line), $(b)$ log (base 10) of the absolute value $|H|$ of the helicity (Eq.~\ref{eq:eh_exp}), and $(c)$ $\log_{10}(\log_{10}(||\omega||_{\infty}))$ (dark blue full line) for $N_z = 4096$ and $N_r = 512$. Here, $||\omega||_{\infty}$, the $L_\infty$ norm of the vorticity, is well approximated by the maximum value of $|\omega|$ on our grid. 
The red (blue) dashed line indicates the time of the birth of a tyger (see text) in $u^1$ ($\omega^1$); the black dashed line denotes the estimate for the time of the (potential) singularity, from Ref.~\cite{houluo}. In Fig. \ref{fig:2res} of Appendix \ref{app:rescomp}, we give similar plots for other values of $N_z$ and $N_r$; the higher the values of $N_z$ and $N_r$ (especially $N_z$), the better our scheme captures the rapid growth of $\log_{10}(\log_{10}(||\omega||_{\infty}))$.}} \label{fig:main_1} \end{figure} \section{Numerical methods} \label{sec:NM} \subsection{Fourier-Chebyshev spectral methods} \begin{figure*} \centering \includegraphics[width=\linewidth]{main_2.png} \caption{(Color online) $(a)$ Plots versus $k$ of $\ln(\mathcal{S}_1(r=1,k,t))$, at different times $t$ (the full temporal evolution is given in the video S1 in the Supplemental Material \cite{supp}); here, the modes with $k > k_G$, the dealiasing-cutoff wavenumber, have zero energy. $N_r = 512$, $N_z=1024$, and the dealiasing cutoff is $k_G=341$. $(b)$ Plots versus $m$ of $\ln(\mathcal{S}_2(m,z=0,t))$ at different times $t$; there is an exponentially decaying tail in the spectrum $\mathcal{S}_2(m,z=0,t)$, at large $m$, whose decay rate decreases with $t$. (The full temporal evolution is given in the video S2 in the Supplemental Material \cite{supp}.) } \label{fig:main_2} \end{figure*} We use the Fourier-Chebyshev representation, in which a function $f(r,z)$ is approximated by \begin{equation} f(r,z) = \sum_{k} \sum_{m}\hat{f}(k,m) e^{ikz} \ T_m(2r-1), \end{equation} where $T_m$ is the Chebyshev polynomial (of the first kind) of order $m$. { In the schematic diagram in Fig.~\ref{fig:cyl}, we display the collocation points in our Fourier-Chebyshev DNS; these points are distributed uniformly in the periodic (axial) direction $z$; in the radial direction $r$, these points coincide with the roots of the highest-order Chebyshev polynomial in our basis.
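The collocation grids just described can be sketched as follows; toy resolutions are used for illustration, and the map $T_m(2r-1)$ means the Chebyshev nodes on $[-1,1]$ are rescaled to $r \in [0,1]$.

```python
import numpy as np

N_r, N_z, L = 16, 32, 2 * np.pi  # toy resolutions; the DNS uses e.g. 512 x 1024

# uniform Fourier collocation points in the periodic axial direction z
z = np.arange(N_z) * L / N_z

# Chebyshev-Gauss points: roots of T_{N_r}, mapped from [-1, 1] to r in [0, 1]
x = np.cos((2 * np.arange(N_r) + 1) * np.pi / (2 * N_r))
r = (x + 1) / 2

# spacing shrinks near the ends r = 0 and r = 1, as in the schematic figure
dr = np.abs(np.diff(np.sort(r)))
print(dr[0] < dr[N_r // 2])  # True: finer near the boundary than mid-domain
```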
We use a finer resolution in the $z$ direction than in the $r$ direction because, for a given number of collocation points, the Chebyshev nodes are spaced more closely near the boundary at $r=1$ than the Fourier nodes. This prevents excessive elongation, in physical space, of the cells of our simulation grid near this boundary. If these cells are very elongated and narrow in the radial direction, it becomes difficult to satisfy the Courant-Friedrichs-Lewy (CFL) condition at every time-integration step. We use a CFL number $C =0.2$ and adjust the time step $dt$ to ensure that the CFL condition is satisfied. For the temporal evolution of Eqs.~\eqref{eq:main1} and \eqref{eq:main2}, we use the explicit fourth-order Runge-Kutta scheme in physical space; we evaluate the derivatives in Fourier-Chebyshev space and, subsequently, compute the nonlinear terms in physical space. We solve the Poisson equation Eq.~\eqref{eq:main3} in the domain $D(1,L) = \{ (r,z) : 0 \leq r \leq 1 , 0 \leq z \leq L \}$ with the boundary conditions Eqs.~\eqref{eq:noflow} and \eqref{eq:polecond}. We use the $2/3$ truncation method for dealiasing both Fourier and Chebyshev modes. Reference~\cite{houluo} utilizes the symmetry properties of this initial condition to study the Euler PDEs in the domain $D(1,L/4)$; in our Fourier-Chebyshev method we use the full length $L$ of the domain. } \subsection{Conserved Quantities and Spectra} The total energy and helicity are, respectively, \begin{subequations} \begin{align} E &= \frac{1}{2} \int^1_0 \int^L_0 (|u^r|^2 +|u^z|^2 + |u^{\theta}|^2) \ r dr dz ; \\ H &=\int^1_0 \int^L_0 \bu \cdot \bom \ r dr dz . \label{eq:eh_exp} \end{align} \end{subequations} We calculate these by using the Fourier-Chebyshev coefficients of $\bu$ and $\bom$ {(see Figs. \ref{fig:main_1}(a) and (b))}.
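The energy-conservation diagnostic of Fig.~\ref{fig:main_1}(a) amounts to evaluating $E$ and tracking its relative drift; the sketch below uses a simple trapezoid rule on a uniform radial grid as a stand-in for the spectral quadrature actually used in the DNS.

```python
import numpy as np

def energy(ur, utheta, uz, r, dz):
    """Kinetic energy (1/2) * integral of |u|^2 r dr dz on a grid;
    trapezoid rule in r (ascending), rectangle rule in periodic z.
    Fields have shape (n_r, n_z)."""
    integrand = 0.5 * (ur**2 + utheta**2 + uz**2) * r[:, None]
    radial = 0.5 * ((integrand[1:] + integrand[:-1]) * np.diff(r)[:, None]).sum(axis=0)
    return radial.sum() * dz

def delta_E_percent(E_t, E_0):
    """Percentage change of the energy relative to its initial value."""
    return 100.0 * abs(E_t - E_0) / E_0

# sanity check: u^theta = 1, u^r = u^z = 0, r in [0, 1], L = 1 gives E = 1/4
r = np.linspace(0.0, 1.0, 101)
one = np.ones((101, 64)); zero = np.zeros((101, 64))
E0 = energy(zero, one, zero, r, 1.0 / 64)
```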
Fourier and Chebyshev transforms, over $z$ and $r$, respectively, yield the fixed-$r$ and fixed-$z$ spectra \begin{subequations} \begin{align} \mathcal{S}_1(r,k,t) &:= \frac{g(k)}{2 \ N_z} \Big( |\hat{u}^{\theta}(r,k,t)|^2 + \nonumber \\ & |\hat{u}^{r}(r,k,t)|^2 + |\hat{u}^{z}(r,k,t)|^2 \Big), \label{eq:S1} \\ \mathcal{S}_2(m,z,t) &:= \frac{N_r}{2 \ g(m)} \Big( |\hat{u}^{\theta}(m,z,t)|^2 + \nonumber \\ & |\hat{u}^{r}(m,z,t)|^2 + |\hat{u}^{z}(m,z,t)|^2 \Big), \label{eq:S2} \end{align} \end{subequations} where $g(i=0) = 1$ and $g(i>0)=2$ ($i$ is $k$ or $m$). We give the spatiotemporal evolution of $\mathcal{S}_1(r,k,t)$ and $\mathcal{S}_2(m,z,t)$ in videos S1 and S2, respectively, in the Supplemental Material \cite{supp}. Similarly, simultaneous Fourier-Chebyshev transforms give us the following spectra \begin{subequations} \begin{align} \mathcal{S}_3(m,k,t) := \Big(& |\hat{u}^{\theta}(m,k,t)|^2 + \nonumber \\ & |\hat{u}^{r}(m,k,t)|^2 + |\hat{u}^{z}(m,k,t)|^2 \Big), \label{eq:S3} \\ \mathcal{S}_4(m,k,t) := \Big(& |\hat{u}^{\theta}(m,k,t) \ \hat{\omega}^{\theta}(m,k,t)| + \nonumber \\ & |\hat{u}^{r}(m,k,t) \ \hat{\omega}^{r}(m,k,t)| + \nonumber \\ & |\hat{u}^{z}(m,k,t) \ \hat{\omega}^{z}(m,k,t)| \Big). \label{eq:S4} \end{align} \end{subequations} \subsection{Methods to track singularity} \subsubsection{Beale-Kato-Majda criterion: Growth of $||\omega||_\infty$} The detection of a singularity based on the BKM theorem~\cite{beale1984remarks,bkmas} uses a plot of $\log_{10}(\log_{10}(||\omega||_{\infty}))$ versus $t$. We show such a plot (blue full line) in Fig.~\ref{fig:main_1}$(c)$, from our DNS; the red (blue) dashed line indicates the time of the birth of a tyger (see below) in $u^1$ ($\omega^1$); the black dashed line denotes the estimate for the time of the (potential) singularity, from Ref.~\cite{houluo}. 
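The fixed-$r$ spectrum of Eq.~\eqref{eq:S1}, which underlies the analyticity-strip analysis below, can be sketched as follows for a single radial location; the rfft normalization and the handling of the Nyquist mode are simplified relative to the DNS.

```python
import numpy as np

def spectrum_S1(ur, utheta, uz):
    """Fixed-r spectrum of Eq. (S1): Fourier transform of the three velocity
    components along the periodic z direction; g(k) = 1 for k = 0 and
    g(k) = 2 otherwise (half-spectrum convention). Inputs: 1D arrays."""
    N_z = ur.size
    g = np.full(N_z // 2 + 1, 2.0)
    g[0] = 1.0
    s1 = sum(np.abs(np.fft.rfft(u)) ** 2 for u in (utheta, ur, uz))
    return g * s1 / (2.0 * N_z)

# a single-mode field concentrates all the energy in the k = 1 bin
z = 2 * np.pi * np.arange(32) / 32
s1 = spectrum_S1(np.zeros(32), np.cos(z), np.zeros(32))
```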
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{main_3.png} \caption{(Color online) Plots versus $z$ of $(a),(b)$ $u^{1}(r=1)$ and $(d),(e)$ $\omega^{1}(r=1)$ at various times $t$ listed in panel $(b)$ for $N_r=512$ and $N_z=1024$; as we go from columns one to two, we zoom in to the region with localized oscillatory structures called \textit{tygers}~\cite{tyger1,tyger2}. Plots of the tyger-birth time $t_{b}$ versus $\lambda_G = 2 \pi/k_G$ in $(c)$ $u^1$ and $(f)$ $\omega^1$, respectively, where $k_G$ is the dealiasing cutoff wavenumber. To determine the tyger-birth times $t_b^{(u^1)}$ and $t_b^{(\omega^1)}$, we examine plots of $u^1$ and $\omega^1$ as a function of $t$.} \label{fig:main_3} \end{figure*} \subsubsection{Analyticity Strip method} For a DNS in a domain with periodic boundary conditions in all spatial directions, the analyticity-strip method~\cite{sulem,brachet1983small,kida1986study,brachet1992numerical,ootb,bkmas,cickptg} proposes that the solution of the PDE can be continued analytically to complex space variables $ {\bf z} = {\bf x} + i{\bf y}$, inside the \textit{analyticity strip} $\mid {\bf y} \mid < \delta(t)$, where $t$ is real and $\delta(t)$, the width of this strip, follows from the spatial Fourier transform of the solution, which decays, at large wavenumbers $k$, as $\exp(-k \delta (t))$ (this has an algebraic prefactor). We obtain $\delta(t)$ and estimate if $\delta(t) \to 0$ at a finite time $t^*$; at this time the solution shows a finite-time singularity because singularities, in the complex plane for $t < t^*$, hit the real axis. Our determination of $\delta(t)$ is accurate up until times at which $\delta(t)$ remains larger than a few mesh widths. For such times, we have spectral convergence of the Fourier expansion.
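The extraction of $\delta(t)$ can be sketched as a least-squares fit of the large-$k$ form $C k^{-n} e^{-2\delta k}$ to the energy spectrum (the factor 2 arises because the spectrum involves $|\hat{u}|^2$); the fitting band and the synthetic spectrum below are illustrative.

```python
import numpy as np

def fit_delta(k, spec):
    """Least-squares fit of ln spec(k) = c - n ln k - 2*delta*k over a band
    of wavenumbers; returns the analyticity-strip width delta."""
    A = np.column_stack([np.ones_like(k, dtype=float), np.log(k), k])
    coef, *_ = np.linalg.lstsq(A, np.log(spec), rcond=None)
    return -coef[2] / 2.0

# synthetic spectrum with known width delta = 0.05 and prefactor k^{-4}
k = np.arange(1, 200, dtype=float)
spec = k ** -4 * np.exp(-2 * 0.05 * k)
print(round(fit_delta(k, spec), 6))  # recovers approximately 0.05
```

In practice the fit is restricted to wavenumbers where the envelope is still resolved, which is why the determination of $\delta(t)$ is only trusted while $\delta(t)$ exceeds a few mesh widths.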
We now extend the analyticity-strip method: (a) We first work with a fixed value of $r$; we evaluate the Fourier transform (in the $z$ direction) of the components of the velocity; the wavenumber dependence of this transform yields the width of this analyticity strip. (b) Next, we work with a fixed value of $z$; we evaluate the Chebyshev transform (in the $r$ direction) of the components of the velocity; we then examine the dependence of the Chebyshev-expansion coefficients~\cite{gargano2009singularity,takeshi,takeshi1,takeshi2,takeshi3,takeshi4} on the order $m$; if these coefficients decrease as $\exp(-m \alpha)$, for large $m$, then the velocity field is analytic in the Bernstein ellipse $\mathcal{E}_{\rho_{*}} =\{z\in \mathbb{C} \mid z= (\rho_{*}e^{\imath \theta} + \rho_{*}^{-1}e^{-\imath \theta})/2, 0 \leq \theta \leq 2 \pi \}$, with \begin{equation} \rho_* = e^{\alpha} \quad \text{and} \quad \delta_r=( \rho_{*} - \rho_{*}^{-1})/2, \label{eq:alpha_dr} \end{equation} the width of this analyticity strip. Before the birth of tygers, we have spectral convergence of our Fourier-Chebyshev expansions. This allows us to employ the analyticity-strip method. We concentrate on $\mathcal{S}_1(r,k,t)$ and $\mathcal{S}_2(m,z,t)$. In Fig.~\ref{fig:main_2}$(a)$ we plot $\ln (\mathcal{S}_1(r=1,k,t))$ (see Eq.~\ref{eq:S1}) versus $k$, at different times $t$ (see the video S1 in the Supplemental Material \cite{supp}); here, the modes with $k > k_G$, the dealiasing-cutoff wavenumber, have zero energy. The symmetries of our initial condition lead to even-odd $k$ oscillations in, e.g., $\mathcal{S}_1(r=1,k,t)$ (black, brown, and orange curves in Fig.~\ref{fig:main_2}$(a)$). At small and intermediate values of $t$, these oscillations have exponentially decaying envelopes at large $k$. The envelope for odd $k$ lies above its even-$k$ counterpart and the separation between these envelopes increases with $t$.
The natural logarithmic decrements of these envelopes, $\delta_{\text{odd}}(t)$ and $\delta_\text{even}(t)$, respectively, decrease as $t$ increases. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{main_4.png} \caption{(Color online) Surface plots of $(a)$ $\delta_{\text{even}}(r,t)$ (this falls fastest at the wall at $r=1$) and $(b)$ $\delta_{r}(z,t)$ (this falls fastest at $z=0$).} \label{fig:main_4} \end{figure} At sufficiently large $t$, $\mathcal{S}_1(r=1,k,t)$ does not have exponentially decaying envelopes (e.g., the orange curve in Fig.~\ref{fig:main_2}) because of the formation of tygers; our spectrally truncated system then proceeds towards thermalization, and we lose spectral convergence of the Fourier expansion. Similarly, we obtain Chebyshev spectra, at fixed values of $z$ (see panel $(b)$ in Fig.~\ref{fig:main_2} and Eq.~\ref{eq:S2}). At small and intermediate values of $t$, these spectra decay exponentially at large values of $m$, with a decay rate that decreases with increasing $t$. At sufficiently large $t$, $\mathcal{S}_2(m,z=0,t)$ does not decay at large $m$, because of the formation of tygers and the consequent loss of spectral convergence of the Chebyshev expansion. \section{Results} \label{sec:Results} \subsection{Tygers and the onset of thermalization} Given the finite resolution of any practical spectral or pseudospectral DNS, we integrate not the full hydrodynamical PDE, but its Galerkin-truncated modification. Tygers appear when complex-space singularities come within one Galerkin wavelength $\lambda_G = 2 \pi / k_G$~\cite{tyger1,tyger2,ootb,cichowlas2005effective} of the real domain. As we increase the resolution of our DNS, $\lambda_G$ decreases; hence there is an increase in the time taken by the pole, nearest to the real domain, to cross into this region. Therefore, the time $t_b$ at which tygers first appear increases with the spatial resolution of our DNS.
In the first two columns of Fig.~\ref{fig:main_3}, we present plots, versus $z$, of $u^{1}(r=1,z,t)$ [top row] and $\omega^{1}(r=1,z,t)$ [bottom row] at various times~\cite{tyger1,tyger2}; in the last column we plot $t_{b}$, the time of the birth of tygers, versus $\lambda_G$. Tygers appear clearly in $\omega^{1}(r=1,z,t)$ before they become visible in $u^{1}(r=1,z,t)$. { We define tyger-birth times as the time at which oscillations, with the wavelength $\lambda_G$, are first detected by the \texttt{find\_peaks} module of MATLAB. } Both tyger-birth times, for the vorticity ($t_b^{(\omega^1)}$) and the velocity ($t_b^{(u^1)}$), precede (Fig.~\ref{fig:main_1}) the estimate for the singularity time given in Ref.~\cite{houluo}. The plots in Fig.~\ref{fig:main_3} are the clearest examples of tygers in a 3D hydrodynamical PDE. \begin{figure}[th!] \centering \includegraphics[width=\linewidth]{main_5.png} \caption{(Color online) $(a)$ Plots versus $t$ of the widths $\delta_{\text{odd}}$ and $\delta_{\text{even}}$, {which we obtain from} the odd- and even-$k$ envelopes, respectively, of $\mathcal{S}_1(r=1,k,t)$. $(b)$ Log-log (base 10) plots of $\delta_{\text{even}}$ versus $|t-t^*|$, where $t^*=0.0035056$ is the estimate of the time of the (potential) singularity in Ref.~\cite{houluo} along with the power-law fit(black full line) $\delta_{\text{even}} = a|t-t^*|^b$. [see text $(c)$ Plot of $\delta_{r}$ versus $t$ with a linear fit(black full line). [see text $N_r=512$ and $N_z=4096$.} \label{fig:main_5} \end{figure} As in the 1D Burgers equation~\cite{tyger1,tyger2}, tygers do not appear at the point where the singularity develops, as a step in $u^{\theta}(r=1,z,t)$ at $z=0$, but some distance away from it, where a resonant interaction occurs between the fluid particle and the truncation waves~\cite{tyger1}. 
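The tyger-birth criterion above (first appearance of oscillations at the Galerkin wavelength $\lambda_G$) can be sketched as follows. This is an illustrative NumPy reimplementation, not our exact MATLAB criterion; the relative tolerance and the minimum run of $\lambda_G$-spaced peaks are assumptions.

```python
import numpy as np

def local_maxima(u):
    """Indices of strict local maxima (a stand-in for MATLAB's findpeaks)."""
    return np.where((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]))[0] + 1

def has_tyger(u, dz, lambda_g, tol=0.25, min_count=3):
    """Flag tyger-like oscillations in a 1-D profile u(z): look for several
    consecutive peaks whose spacing is within a relative tolerance `tol` of
    the Galerkin wavelength lambda_G = 2*pi/k_G.  The tolerance and minimum
    run length are illustrative choices, not the published thresholds."""
    spacings = np.diff(local_maxima(u)) * dz
    near = np.abs(spacings - lambda_g) < tol * lambda_g
    run = best = 0
    for flag in near:           # longest run of lambda_G-spaced peaks
        run = run + 1 if flag else 0
        best = max(best, run)
    return best >= min_count
```

Scanning such a detector over successive snapshots of $u^1(r=1,z,t)$ or $\omega^1(r=1,z,t)$ gives an automated estimate of $t_b$.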
The tygers appear most prominently in $u^{\theta}$, which is the component of the velocity that is perpendicular to the direction in which the fastest variation in $u^{\theta}$ is seen (i.e., $\hat{z}$). The tygers grow as they initiate the process of thermalization and spread through the whole domain; this spreading is the real-space manifestation of thermalization. The development of the (potential) singularity leads to numerical errors as our DNS nears the singularity-time estimate of Ref.~\cite{houluo}; eventually, energy and helicity conservation become poor, and this prevents us from proceeding, in our DNS, to complete thermalization. The plots versus $z$ in Fig.~\ref{fig:main_3} provide a natural motivation for studying a 1D model formulated by Luo and Hou \cite{houluo}; this model displays a finite-time singularity, which we study via the analyticity-strip method and for which we show that tygers are formed before the time at which the singularity occurs (see Appendix \ref{app:1D}). \begin{figure*}[th] \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{supp_9b.png} \caption{$(a)$ Log-log (base 10) plots of $\delta_{\text{even}}$ versus $|t-t^*|$, where $t^*=0.0035056$, along with the power-law fit (black full line) $\delta_{\text{even}} = a|t-t^*|^b$; we find $\log a= 7 \pm 1$ and $b=2.6 \pm 0.5$, in the region between the dashed grey lines. $(b)$ The fit range is based on the local-slope (blue full line) and local-intercept (red full line) analysis shown in the inset panel. } \label{fig:supp_9a} \end{minipage} \hspace{1mm} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{supp_9a.png} \caption{$(a)$ Plot of $\delta_{r}$ versus $t$, along with a linear fit (black full line) $\delta_{r} = ct+d$ with $c = -140 \pm 20 $ and $d=0.47 \pm 0.05$, in the region between the dashed grey lines. We obtain $t^* = 0.0033 \pm 0.0002$ for the x-intercept of the fit.
The potential singularity time reported in Ref.~\cite{houluo} lies in this range. $(b)$ In the inset panel, we show the local-slope (blue full line) and local-intercept (red full line) analysis.} \label{fig:supp_9b} \end{minipage} \end{figure*} \subsection{Analysis of analyticity-strip widths} The spectrum $\mathcal{S}_3(m,k,t)$, at $t=0$, has significant weight at low values of $m$ and $k$; with the passage of time, we see that this weight cascades to large values of $m$ and $k$. This allows us to use the analyticity-strip method for times when $\mathcal{S}_3(m,k,t)$ decays at large values of $m$ and $k$. We extract $\delta_\text{even}(r,t)$ (similarly $\delta_\text{odd}(r,t)$) by using a least-squares fit for the envelopes of $\mathcal{S}_1(r,k,t)$ at even ($k_e$) and odd ($k_o$) wavenumbers \begin{subequations} \begin{align} \ln(\mathcal{S}_1(r,k_{e},t)) =& C_e -n_e \ln(k_{e}) - \nonumber \\ & 2 \ \delta_\text{even}(r,t) \ k_{e}, \\ \ln(\mathcal{S}_1(r,k_{o},t)) =& C_o -n_o \ln(k_{o}) - \nonumber \\ & 2 \ \delta_\text{odd}(r,t) \ k_{o}. \label{eq:fit_z} \end{align} \end{subequations} In Fig.~\ref{fig:main_4}$(a)$, we give a surface plot of $\delta_{\text{even}}(r,t)$ to show that it decays fastest at $r=1$. Similarly, we obtain the rate at which the tail of $\mathcal{S}_2(m,z,t)$ decays exponentially, for intermediate times $t$, and thence the width $\delta_r(z,t)$ of the analyticity strip shown in Fig.~\ref{fig:main_4}$(b)$; it decays fastest at $z=0,L/2,L$. Concurrently, we see that the fastest variation in $\bom^1(r,z)$ and $\bu^1(r,z)$ occurs at the set of points corresponding to $r=1$ and $z=0,L/2,L$, where the fastest decay of the analyticity-strip widths has been reported above. At sufficiently large $t$, there is no exponential decay (e.g., for the top plot in orange) because of the onset of thermalization in our spectrally truncated system.
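The three-parameter envelope model of Eq.~(\ref{eq:fit_z}) is linear in its unknowns $(C, n, \delta)$, so it can be fit, for one parity class at a time, by ordinary linear least squares. The sketch below is a minimal NumPy version; the function name and the synthetic data in the usage are illustrative.

```python
import numpy as np

def fit_envelope(k, s1):
    """Least-squares fit of ln S1 = C - n ln k - 2 delta k [Eq. (fit_z)]
    for one parity class (even or odd k); returns (C, n, delta)."""
    k = np.asarray(k, dtype=float)
    # design matrix for the linear model in (C, n, delta)
    design = np.column_stack([np.ones_like(k), -np.log(k), -2.0 * k])
    coeffs, *_ = np.linalg.lstsq(design, np.log(s1), rcond=None)
    return tuple(coeffs)
```

Applying this separately to the even-$k$ and odd-$k$ bins of $\mathcal{S}_1(r,k,t)$, restricted to the exponentially decaying range $k < k_G$, yields $\delta_{\text{even}}(r,t)$ and $\delta_{\text{odd}}(r,t)$.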
We use the least-squares fit \begin{equation} \ln(\mathcal{S}_2(m,z,t)) = C_2 - 2 m \alpha \label{eq:fit_dr} \end{equation} and relate $\alpha$ to $\delta_r(z,t)$ via Eq.~(\ref{eq:alpha_dr}). The tygers in Fig.~\ref{fig:main_3} appear as soon as the pole, in which we are interested, enters the analyticity strip. In Fig.~\ref{fig:main_4} we portray the time dependences of the widths of the analyticity strips. We now summarize our results for analyticity-strip widths: In panel $(a)$ of Fig.~\ref{fig:main_5}, we plot, versus $t$, the widths $\delta_{\text{odd}}$ and $\delta_{\text{even}}$ associated with the odd- and even-$k$ envelopes, respectively, of $\mathcal{S}_1(r=1,k,t)$. In panel $(b)$ of Fig.~\ref{fig:main_5}, we present a log-log (base 10) plot of $\delta_{\text{even}}$ versus $|t-t^*|$, where $t^*=0.0035056$ is the estimate of the time of the (potential) singularity in Ref.~\cite{houluo}, along with the power-law fit $\delta_{\text{even}} = a|t-t^*|^b$; we find $ \log_{10} a = 7 \pm 1 $ and $b = 2.6 \pm 0.5 $ in the region between the dashed grey lines. In Fig.~\ref{fig:supp_9a}, we show the local-slope analysis for $\log_{10}(\delta_\text{even})$ versus $\log_{10} |t-t^*|$ (of Fig.~\ref{fig:main_5}$(b)$). We find that the slope increases linearly with time, because of the finite spatial resolution of our DNS. By using grey lines, we have indicated the region of almost constant slope, which we then use to obtain the fit in Fig.~\ref{fig:main_5}$(b)$. The video S3, which shows the evolution of this fit with $t$, can be found in the Supplemental Material \cite{supp}. In panel $(c)$ of Fig.~\ref{fig:main_5}, we plot, versus $t$, the width $\delta_r$, which we obtain from the natural logarithmic decrements of $\mathcal{S}_2(m,z=0,t)$. This is very nearly linear until just before the estimate of the time of the (potential) singularity given in Ref.~\cite{houluo}.
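The pipeline from the Chebyshev spectrum to the singularity-time estimate, i.e., Eq.~(\ref{eq:fit_dr}), the conversion of Eq.~(\ref{eq:alpha_dr}), the local-slope analysis, and the $x$-intercept of the linear fit, can be sketched as follows. This is an illustrative NumPy outline; the function names, the fit windows, and the synthetic usage are assumptions, not our production code.

```python
import numpy as np

def delta_r_from_chebyshev_spectrum(m, s2):
    """Fit ln S2 = C2 - 2*m*alpha over an exponentially decaying tail
    [Eq. (fit_dr)], then convert alpha to the radial strip width via
    rho_* = exp(alpha) and delta_r = (rho_* - 1/rho_*)/2 [Eq. (alpha_dr)]."""
    slope, _ = np.polyfit(m, np.log(s2), 1)
    alpha = -0.5 * slope
    rho = np.exp(alpha)
    return 0.5 * (rho - 1.0 / rho)

def local_slopes(t, y, window=7):
    """Sliding-window linear fits of y versus t; a plateau in the returned
    slopes marks the range where a single linear (or, on log-log axes,
    power-law) fit is justified, i.e., the region between the grey lines."""
    half = window // 2
    out = np.full(len(t), np.nan)
    for i in range(half, len(t) - half):
        sel = slice(i - half, i + half + 1)
        out[i] = np.polyfit(t[sel], y[sel], 1)[0]
    return out

def intercept_time(t, delta_r):
    """x-intercept t* = -d/c of the linear fit delta_r = c*t + d."""
    c, d = np.polyfit(t, delta_r, 1)
    return -d / c
```

Restricting `intercept_time` to the plateau identified by `local_slopes` mimics the fit-window selection used for the estimates quoted below.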
From a linear fit, in the region between the dashed grey lines, we find an intercept, on the horizontal axis, at $t = 0.0033 \pm 0.0002$, which is slightly less than the estimate for the time of the (potential) singularity in Ref.~\cite{houluo}. The linear fit $\delta_r = ct+d$ gives the following values for the parameters: $c = -140 \pm 20 $ and $d=0.47 \pm 0.05$. In Fig.~\ref{fig:supp_9b}, we show the local-slope analysis for $\delta_r$ versus $t$ (of Fig.~\ref{fig:main_5}$(c)$). We indicate, by using grey lines, the region that is used for the fit in Fig.~\ref{fig:main_5}$(c)$. The video S4, which shows the evolution of this fit with $t$, can be found in the Supplemental Material \cite{supp}. \section{Conclusions} \label{sec:Conclusions} We have examined the potentially singular solution of the 3D, axisymmetric, and radially bounded Euler equation~\cite{houluo} by developing a pseudospectral, Fourier-Chebyshev scheme. Our method leads to new insights, for it shows that, in this scheme, the formation of tygers precedes the development of the (potential) singularity and leads eventually to the thermalization of our system. We then show how to generalize the analyticity-strip method~\cite{sulem,kida1986study,ootb,bkmas,cickptg} to track this (potential) singularity. Our results are consistent with a finite-time singularity. A recent paper by Barkley~\cite{barkley} has also used a Fourier-Chebyshev method to study this initial condition; it concentrates on the physical mechanism for the singularity and not on the issues we discuss.
Recent work by Hertel, Besse, and Frisch~\cite{hertel} has examined this singularity by a Cauchy-Lagrange (CL) method, which requires the computation of Lagrangian trajectories and high-order Taylor expansions based on the Cauchy-invariants formula; the advantage of this method is that the time step is not restricted by a Courant-Friedrichs-Lewy (CFL) criterion; however, this method is computationally expensive because it requires interpolations to map the Lagrangian grid onto the Eulerian one. This CL study also uses the BKM criterion to investigate the growth of the vorticity. Reference~\cite{houluo} uses a hybrid $6^{th}$-order Galerkin and $6^{th}$-order finite-difference method on a mesh that adapts itself in time to resolve the maximum of the vorticity (for the BKM criterion); this adaptive mesh is computationally involved and expensive. The smallest scale in the mesh of Ref.~\cite{houluo} is $\simeq 10^{-15}$. In our DNSs the highest resolution is $10^{-5}$ near $r=1$, which suffices for our application of the analyticity-strip methods. Our pseudospectral method allows us to use a completely different method to track the (potential) singularity, namely, the analyticity-strip method; and, given the calculations we carry out, the CFL criterion is not a significant constraint. This singularity-detection method gives us a complementary perspective on the development of the potential singularity that we have discussed above. \begin{acknowledgments} We thank SERB, CSIR, NSM, and UGC (India) and the Indo-French Centre for Applied Mathematics (IFCAM) for their support and J.K. Alageshan, N. Besse, M.E. Brachet, U. Frisch, A. Gupta, T. Hertel, K. Kolluru, T. Matsumoto, P. Perlekar, S.S. Ray, and A.K. Verma for very useful discussions. We thank, especially, N. Besse, U. Frisch, and T. Hertel for sharing the results of their Cauchy-Lagrange study with us.
For our high-resolution computations we have used the SahasraT CRAY computer at the Indian Institute of Science; we thank the CRAY team here for their support. \end{acknowledgments} \bibliographystyle{apsrev4-2} \nocite{*} \section{Errors due to operator subroutines} We list the maximal relative errors of derivatives computed using our Fourier and Chebyshev derivative subroutines for the function $f(r,z)= e^{1- \cos(2 \pi r)}e^{\sin(2 \pi z/L)}$ (shown in panel $(b)$ of Fig.~\ref{fig:SI_ders}) and the maximal relative errors of solutions obtained from the Tau and Shen-Galerkin schemes for the Poisson problem with a source term given by $g(r,z) = \cos(3 \pi r/2) e^{\sin(2 \pi z/L)}$ (shown in panel $(c)$ of Fig.~\ref{fig:SI_ders}) at $N_r =512$ and $N_z = 4096$. We plot the maximal relative errors of Fourier and Chebyshev derivatives $f_x$ versus the number of collocation points $N_x$ in Fig.~\ref{fig:SI_ders} for $x=r,z$. \begin{itemize} \item $\max \left| \frac{\partial_z f }{ \partial_z f|_\text{analytical}} -1 \right|= 2.5257883428768696 \times 10^{-10}$ \item $\max\left| \frac{\partial_r f }{ \partial_r f|_\text{analytical}} -1 \right|= 1.3673073130793951 \times 10^{-6}$ \item $\max\left| \frac{\nabla^{-2}(\nabla^2 g)}{ f|_\text{analytical}} -1 \right|$ for the Tau scheme $= 6.3672986585600666 \times 10^{-12}$ \item $\max\left| \frac{\nabla^{-2}(\nabla^2 g)}{ f|_\text{analytical}} -1 \right|$ for the Shen-Galerkin scheme $= 1.0352409365986301 \times 10^{-11}$ \end{itemize} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{supp_4.png} \caption{(Color online) $(a)$ Plot versus $N_x$ (number of collocation points) of $\log_{10} \left( \max \left| \frac{f_x }{ f_x|_\text{analytical}} -1 \right| \right)$ in the computation of the Fourier $(x=z)$ and Chebyshev $(x=r)$ derivatives for the function $f(r,z)$. Panels $(b)$ and $(c)$ show the surface plots of the test functions $f(r,z)$ and $g(r,z)$, respectively.
} \label{fig:SI_ders} \end{figure} \section{Videos} \begin{itemize} \item[S1] Time evolution of $\ln(\mathcal{S}_1(k,r,t))$ for $(N_r,N_z)=(256,512)$. \item[S2] Time evolution of $\ln(\mathcal{S}_2(m,z,t))$ for $(N_r,N_z)=(256,512)$. \item[S3] Fitting of $\ln(\mathcal{S}_1(k,r=1,t))$ with time $t$ for $(N_r,N_z)=(256,512)$. \item[S4] Fitting of $\ln(\mathcal{S}_2(m,z=0,t))$ with time $t$ for $(N_r,N_z)=(256,512)$. \item[S5] Composite figure (in order from left to right) of the time evolution of $\omega(z)$, $\ln(E(k))$, and the time series of $\delta$ of the 1D model for $N_z =2048$. \item[S6] Composite figure (in order from left to right) of the time series of the helicity, the time evolution of the helicity isosurface in real space $H(r,z)$ and spectral space $\ln(\mathcal{S}_4(m,k,t))$, followed by the separate contributions of even and odd modes to the spectra of $\ln(\mathcal{S}_4(m,k,t))$ for $(N_r,N_z)=(256,512)$. \item[S7] Composite figure (in order from left to right) of the time series of the energy and the time evolution of $u^1(r,z)$, $\omega^1(r,z)$, and $\ln(\mathcal{S}_3(m,k,t))$ for $(N_r,N_z)=(256,512)$. \end{itemize}
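The Fourier-derivative accuracy tabulated in the preceding section can be checked in outline with a short script. The test function and grid size here are simplified stand-ins for the $f(r,z)$ of our table, and the error is normalized by the sup norm of the analytical derivative (rather than pointwise) to avoid dividing at zeros of the derivative; the observed error level is therefore only indicative.

```python
import numpy as np

def fourier_derivative(f, length):
    """Spectral derivative of a periodic sample f(z_j), z_j = j*length/N:
    multiply the Fourier coefficients by i*k and transform back."""
    n = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Spectral-accuracy check on an analytic periodic test function.
z = 2.0 * np.pi * np.arange(256) / 256
f = np.exp(np.sin(z))
df_exact = np.cos(z) * f
err = np.max(np.abs(fourier_derivative(f, 2.0 * np.pi) - df_exact)) \
      / np.max(np.abs(df_exact))
```

For an analytic periodic function such as this, the error saturates at the roundoff floor once the Fourier coefficients have decayed below machine precision, which is the spectral convergence we rely on throughout.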
\section{Code book for Thematic Analysis} \input{treeMap} \newpage \section{An Integrated Experiential Learning Approach for Software Engineering Courses} \label{sec:approach} In previous instances of our project management component in a Software Engineering course, the students worked in a fully-fledged, traditional design-implement task as the sole \textit{``experiential''} teaching-learning activity. This task was reported by students in course retrospectives and course evaluations as too technical and too difficult, as most of them lacked deep development experience. Thus, students sometimes did not achieve the intended learning outcomes through real design-implement tasks, as they were too focused on solving the technical challenges. Therefore, our main objective is to propose an integrated experiential learning approach for SE courses that first simulates the challenges of a real-life software development process. After that, the integrated approach comprises a design-implement task that allows students to experience the challenges we simulated, but this time in a real, still small-scale, setting. Students are expected to handle requirements elicitation, project planning, effort estimation, development, and testing while managing unexpected situations and uncertainty. From a pedagogical perspective, we expect the students to demonstrate the knowledge acquired with this experiential learning approach by the end of the course. The remainder of the section is structured as follows: in Sub-section~\ref{context} we describe the details of the course before implementing the intervention; in Sub-section~\ref{approach} we present the experiential learning approach; and finally, in Sub-section~\ref{execution} we present the different executions of the approach.
\subsection{Context: The Course Before the Intervention}\label{context} The experiential learning approach has been integrated as part of a Software Engineering course comprising 7.5~ECTS\footnote{European Credit Transfer System}. The course's overall goal is to give the student basic knowledge of software engineering, the software development processes, and their main phases. The course contents also include the different software development models and practices and their impact at the product, process, and organizational levels. The course provides both theoretical knowledge and its application in practical situations. Theoretical knowledge is provided in a series of lectures covering themes such as requirements elicitation and management, testing, architecture design, project planning, and project follow-up. The practical component requires the student to participate in the planning and execution of a small project. This course was an interesting opportunity for our integrated experiential learning approach that combines a game activity \citep{molleri2018legacy} with a real design-implement task of a certain but controlled size. Although the SE course shares certain similarities with software project management courses, it is a core-area course that uses a small-scale, controlled-environment TLA for the students to experience a software project's challenges. Figure~\ref{fig:constructiveAlignment} illustrates the main components of the course comprising Intended Learning Objectives (ILOs), Teaching-Learning Activities (TLAs), and Assessment Tasks (ATs), according to the constructive alignment framework~\citep{biggs2014constructive}. \begin{figure}[!ht] \centering \caption{Visual representation of the course components according to constructive alignment.
ILOs are connected to the related TLAs and ATs via solid, dashed, and dotted lines, each line corresponding to an ILO.} \includegraphics[width=0.8\textwidth]{constructiveAlignment.pdf} \label{fig:constructiveAlignment} \end{figure} \subsubsection{Intended Learning Objectives} \label{sec:ilos} The course goals are twofold: on the one hand, there is a focus on theoretical knowledge acquisition; on the other hand, on its application in practical, small-scale, controlled examples. The course objectives, as described in the course syllabus, are: \begin{itemize} \item \textbf{Knowledge and understanding.} The course aims to provide the student with knowledge about how software systems are developed. On completion of the course, the students will gain an understanding of the main phases of software-intensive product development and of the different processes, their strengths and weaknesses, practices, and methodologies in software development. \item \textbf{Skills and abilities.} The course provides knowledge about the development processes, requirements management, testing, architectural design, project planning, and project management, equipping the student with basic knowledge for participating in the planning and follow-up of a project in practice, according to the selected development process. \item \textbf{Judgment and approach.} The students should be able to discuss the pros and cons of each process and method, but also their potential impact on the software product, its users, and the development organization. The students are also expected to be able to present and argue for ethical aspects regarding current trends and products in society. \end{itemize} \subsubsection{Teaching-Learning Activities} \label{sec:tlas} The course is organized around traditional lectures, seminars, debates, and group assignments. Students are expected to work individually as well as in teams, especially in the seminars and workshops.
\begin{itemize} \item \textbf{Theoretical lectures and seminars.} The lectures introduce the Software Engineering disciplines, the different stages of software-intensive product development, the development processes, and the historical perspective and limitations of each of them. Besides, ethical aspects are also introduced and discussed in the context of software-intensive product development. The primary language of instruction is Swedish, but seminars, some lectures, material, and reports can be in English as well. \item \textbf{Project meetings and workshops.} The project planning and management activities, as well as the software development project, require interaction between the teachers and teams of students. This interaction is organized around project meetings and workshops, in which the teams discuss the project planning, management, and execution activities with regards to the software product being developed. \end{itemize} \subsubsection{Assessment Tasks}\label{assesment_task} \label{sec:ats} Both individual and group assignments are included in the course. The examination of the course was graded based on three different tasks: \begin{itemize} \item \textbf{Project planning, management and execution activities (2.5 ECTS).} Software project activities, in which the students, organized in teams of 3-4 participants, have to elicit, document, and prioritize the requirements of a software system, document its design, and plan and manage its execution, including a design-implement task. All activities are graded together as a single pass/fail activity. \item \textbf{Development models report (1.5 ECTS).} At the end of the project planning, management, and execution activities, the students submit a summative reflective report summarizing the experiences with both the traditional and the agile models, their pros and cons, when to use each one, and personal experiences.
The report should include a discussion about ethical and professional practices in software development. This report is handled, submitted, and graded as a group (A-F). \item \textbf{Written exam (2 ECTS).} An individual on-line exam to sum up all the contents of the course, with the goal of having a way to assess the individual learning of each student. Although we hope that students' performance was positively impacted by the integrated experiential learning approach, the written exam and its outcomes were not part of our study. \end{itemize} \subsection{Integrated Experiential Learning Approach}\label{approach} The proposed learning approach integrates two instances of a simulated project planning activity and one software design-implement task (see Project planning, management, and execution activities in Section~\ref{assesment_task}) as illustrated in Figure~\ref{fig:datacollection}. For the simulated project planning activity, we used our serious game~\citep{molleri2018legacy} twice: once with the students following a plan-driven development process, and once with the students following a self-tailored agile development process. We ran each simulated project planning activity once the corresponding theoretical contents had been covered. In this way, the game challenges match the knowledge the students were exposed to. \begin{figure}[!ht] \centering \caption{Course schedule and evaluation timeline. The top part describes the specific course units (i.e., TLAs, ATs), while the bottom part relates the data collection instruments of our empirical validation.} \includegraphics[width=1\textwidth,trim={0 7.5cm 0 0},clip]{course-schedule.pdf} \label{fig:datacollection} \end{figure} The first instance of the game activity focused on a traditional, plan-driven development process. The students worked on a project plan following a more formal plan-driven (a.k.a. waterfall) development process model.
This traditional planning incorporates more extensive documentation, detailed plans, and risk management that leads to process monitoring. In the second instance of the game, students planned another project employing a self-tailored agile process model. The agile project incorporates strategies such as team interaction, customer collaboration, and response to changes. In this case, the students also need to find arguments for choosing the different agile practices and agile project management techniques they opted for. After the two instances of the game activity, the students are exposed to actual software development, i.e., the design-implement task. This design-implement task involves students working with a controlled-scale, real-world-like project case using an agile approach. The project case is in the same domain as the one students planned in the second instance of the game (i.e., the one following the agile process). In this way, the design-implement task matches both the theoretical knowledge the students have gained and the domain knowledge from the game activity's simulated project planning. The actual performance on the simulated projects (i.e., whether they deviated from the original planning, or whether the planning was accurate) and on the design-implement task (i.e., the adherence to the requirements and the quality of the developed solution) is not part of the grading of the course. The main reason is that the main learning outcome is not about the programming skills of the students, but their ability to plan and follow a development process. The pass or fail is based on their participation in the different activities (i.e., project meetings and workshops), and the degree to which the students were following the selected development process, which was judged and assessed by the teaching staff. \subsubsection{Game Activity}\label{gamerules} Students worked in teams of 3-4 participants that shared a common goal during the game activities.
Our serious game can be classified as collaborative according to the classification of~\citep{zagal2006collaborative}, since all participants pursue the same goal (i.e., there is no competition or competing goals). The serious game comprises a previously-defined number of turns based on the course schedule. We carried out a workshop with students as a kick-off session at the beginning of the game activity. In the kick-off session, teachers presented the project description, the set-up of the game, and the number of turns the game would comprise. At the end of the kick-off session, there was a requirements elicitation session, in which a designated member of the teaching staff acted as a customer/product owner for each team. The different teams were able to ask for clarifications. Each turn, which corresponds to a one- to two-week cycle, includes five steps. The first two steps took place outside the classroom, starting right after the kick-off session. The remaining steps were carried out during the next workshop session at the end of a game turn. The game steps were as follows~\citep{molleri2018legacy}: \begin{enumerate}[1)] \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item At the beginning of each game turn, each team worked on and submitted a \textbf{project plan}. The project plan should meet the project description provided by the teacher, also detailing resources (e.g., budget, workforce) and constraints (e.g., time to deliver, business rules). \item Then, the teachers \textbf{assessed the submissions} with regards to the intended learning objectives. If the team's project plan was sound, adhered to good practices, and matched the project's needs, the team was awarded bonuses for the current game turn. On the contrary, the team was penalized if their project plan omitted certain required development activities or practices or included well-known bad practices.
We have designed a set of rubrics to assess both bonuses and penalties\footnote{The rubrics are available as an appendix to the game rules in \url{https://goo.gl/1oUvvB}}. \item During the workshop session, the game mainly consisted of \textbf{rolling dice that represented the uncertainties} of a software development project. Teachers guided the students during the whole step, relating these uncertainties to real examples, e.g., a penalty in the implementation could be caused by a non-updated design or a lack of requirements traceability. The uncertainty values were then added to the bonuses/penalties scored by the team's project (see step 2 above). This final result represents how much the actual process deviates from the original plan. \item Further, teachers \textbf{presented project changes and new challenges} to the teams, representing events that are likely to occur in a software process, such as new requirements or resource limitations. Teachers played the role of customers or product owners, encouraging students to negotiate the inclusion of some of the new requirements or resource allocation. \item Finally, the teams were asked to \textbf{update the project plans} according to the deviation and new events resulting from steps 3 and 4. They were also encouraged to discuss the challenges and make any improvements they thought were needed for the project's success. As an example, one team reflected on their updated plan as follows: \textit{``The decision we had to take was, deliver the minimum viable product in 5 sprints or be late. We chose to push for the minimum viable product and did not deal with technical debt or the bugs''}. \end{enumerate} In each game workshop session, the teaching staff provided feedback to students, expecting them to reflect on their choices and learn from their mistakes.
At the end of each game, the teams were asked to provide their reflections regarding the process, the outcome, and perceptions about the gaming experience and lessons learned. \subsubsection{Design-Implement Task} The design-implement task started after the end of the serious game. The goal of this task was to develop the same software system as planned in the agile instance of the serious game. Students were expected to reuse parts of their project plans when carrying out the design-implement task. This setup allowed students to start the design-implement task with certain domain knowledge and to experience how different it is to only plan something compared to developing the solution themselves. The students had to develop different software components that together integrated a self-driving system for LEGO Mindstorms\textsuperscript{TM} vehicles (trucks, cranes, forklifts) controlled through EV3 Blocks using ev3dev/JAVA. The software components responded to different requirements expressed through user stories, with pre- and post-conditions. The complexity ranged from simpler features, like controlling the motor's speed, to more complex stories, like \textit{drive-following-a-line}, or epic stories. One example of an epic story was self-driving the vehicle until reaching a specific location where it had to interact with another vehicle. In the \textit{drive-following-a-line} story, teams had to develop routines to detect a line on the floor and drive the vehicle following that line, keeping the line centered under the vehicle until the sensors read a stop signal (i.e., a transversal line in a different color/pattern). The design-implement task was planned to comprise three Scrum development sprints, with a two-week sprint lead time. As part of the design-implement task, four workshops were conducted with students: the first as a kick-off session and one after each of the three sprints.
In the kick-off session, the teacher presented the design-implement task description and the task's goal to the teams. In this session, the teaching staff presented the tools to be used, the code provided as scaffolding, and the technical details and constraints (e.g., hardware, software) needed to carry out the development work. The teaching staff also acted as customers/product owners, negotiating with each team the Minimal Viable Product (MVP) and prioritizing the user stories. In the remaining three workshops, the teams discussed and planned releases. Teams did estimation based on story points and user story prioritization together with teachers acting as product owners. The second and third workshops were organized as sprint reviews. In these sprint reviews, each team showed the progress made on the stories implemented during the sprint, adapted the backlog, calculated the velocity, and planned the next sprint. The fourth and last workshop was organized as a release demo, in which the students did a live demo of the user stories they had implemented, with the teaching staff acting as customers and product owners. During the design-implement task, the students were able to book time in the lab, which provided them with access to hardware as well as supervision and mentoring by the teaching staff. Teams could book access to the lab for two hours per week upon registration, although this limit was only strictly enforced when conflicts arose on specific dates. Students were encouraged to work offline and use the time in the lab to test their solutions. Every team had their Scrum board visible in the lab so that the teaching staff could monitor the progress of the different teams. \subsection{Executing the Integrated Experiential Approach}\label{execution} We executed the experiential approach in a Software Engineering course at BTH for two consecutive years, i.e., 2018 and 2019.
Each year, we conducted two instances of the game activity for two different software development processes and one instance of the design-implement task, as illustrated in Figure~\ref{fig:datacollection}. The assignments and rubrics were slightly different in each case. We informed the students beforehand that the game's outcome was not part of the course evaluations, so the students were not penalized in their grades if their project did not get particular bonuses. One could expect that some students would use that as an excuse for not working hard. However, we soon realized that the gamification of the exercise produced a motivational effect. In general, students were trying their best to attain the best possible results, sometimes showing frustration when they were unlucky with the dice rolls. \subsubsection{2018} Our integrated experiential learning approach was first put in place during Spring 2018. Fifty-eight students divided into fourteen teams took part in the game and the design-implement task. They were registered in a five-year Master of Science in Engineering program (300 ECTS) in Industrial Economy\footnote{Civilingenjor i Industriell Ekonomi}. Forty students were in their second year, while eighteen students were in their third year pursuing a specialization in Software Engineering and Information Technology. For both the second- and third-year students, the course was their introduction to Software Engineering. The second-year students had taken their first course in programming. In contrast, the third-year students had taken the introductory course in programming in Java, a course in object-oriented programming, and a course in Database Engineering. This mixture was due to a change in the program's structure. The course was moved from the third to the second year, and 2018 was the year we met both cohorts at the same time.
There were no mixed groups, i.e., there were no groups with students in their second year working together with students in their third year. Although initially we thought having mixed teams might be a good idea, in practice that meant that students did not have overlapping free slots in their schedules to work together, and therefore they chose to have homogeneous teams. Both the game activity and the design-implement task were supervised by at least two persons in each session. During the game activity, we further divided the 14 teams into two different groups with seven teams each. In other words, we conducted all workshops twice, once for each group of seven teams. This strategy allowed the teaching staff to provide closer support for students without overloading the staff. The lab time was supervised by at least one member of the teaching staff, although often two members were present providing support to students. \subsubsection{2019} The second application of the integrated approach took place in Spring 2019. Twenty-four students were initially divided into eight teams and participated in the integrated approach. After the plan-driven workshops, one of the teams dissolved, and we continued with seven teams. All students were registered in a five-year Master of Science in Engineering program (300 ECTS) in Industrial Economy. All students were in their second year, where the course became their introduction to Software Engineering after taking their first course in programming. In this execution, we managed all the groups simultaneously. However, we ran the workshops in two adjoining lab rooms since the serious game activities became too noisy, as some of the students pointed out in their feedback. The students again were provided with access to the lab, where the teaching staff supported and mentored them when they experienced technical difficulties.
It is important to mention that a change was introduced in the teaching staff, with one of the members taking a more leading role, since another team member was not part of the staff in 2019. \section{Conclusions}\label{sec:conclusions} This paper reported the execution and validation of an integrated approach to teaching SE courses using a problem-based, experiential learning approach. The approach integrates i) a serious game activity simulating the planning and execution of a software development project and ii) a design-implement task representing a realistic, small-scale software development project experience. The experiential approach builds on the software project's challenges simulated in the game and later represented in the design-implement task. We executed the approach with students of a SE course in two consecutive years. To empirically validate the approach, we coded and analyzed, using thematic analysis and frequency counts: \begin{inparaenum} [i)] \item the adherence of the teams' project plans to a set of rubrics that drive the serious game; \item the teams' reflection reports submitted at the end of the course; and \item the students' perceptions collected through an online survey instrument. \end{inparaenum} The results suggest that the experiential approach enables students' progression, is well aligned with the SE course's intended learning objectives, and supports the acquisition of a wide range of theoretical concepts and knowledge related to software projects and software engineering process models. The main contribution is provided by the integration between the game activity and the design-implement task.
Although the students perceived them as distinct activities, they realized that some key contents of the course are reinforced by their integration, particularly the uncertainties of software development, the need for an appropriate plan, or how to deal with troublesome concepts such as relative estimation with story points. Besides, students acknowledged that the approach helped them infer the value of contextual solutions in software projects. We invite teachers of similar introductory courses to software engineering to replicate the setting with the serious games (rules and rubrics can be found online) and a controlled-scale design-implement task, and to gather evidence of the results of the intervention on the course. \section{Discussion}\label{sec:discussion} The results provided in the previous section summarize the students' perception through three distinct data collection sources. In order to discuss the findings, we detail them with regard to our three research questions: \renewcommand{\thesubsection}{RQ\arabic{subsection}.} \subsection{Intended Learning Objectives} We can summarize students' perception regarding the ILOs of our approach by the three main codes associated with RQ1: \textbf{learning the real aspects of software projects}. The students perceived that the game approach is a simulation of the software process in which they could try different strategies to steer their projects. The simulation is grounded in the theoretical aspects they were taught during the lectures. One student summarizes the objective as follows: \textit{``To simulate how our theoretical project would work out if applied to reality where there are uncertainties that can not be foreseen''}. The students' perception is, in our opinion, well aligned with the course's intended learning objectives (see Section~\ref{sec:ilos}). Students also reported that the design-implement task provided them with real software development challenges, such as uncertainties and troubleshooting.
The findings are supported by both the survey responses and the written report. From Bloom's taxonomy perspective, the game activity is associated with understanding how the software process works. In contrast, the design-implement task is associated with applying or experiencing it in a practical context. Finally, the written report also provides evidence about higher cognitive tasks, such as determining relationships and making inferences based on the students' own experiences. These three levels of objectives from Bloom's taxonomy relate to codes l-2, l-3, and l-4. They are the same levels described by the course's ILOs. In addition to this, students appreciated the experiential learning approach. We particularly noted their enthusiasm when rolling the dice for the game outcomes and when trying out their code with the LEGO Mindstorms\textsuperscript{TM} vehicles. Several students explicitly reported their appreciation in the formative survey and in the written report. Some even adopted roleplay to describe their experiences, such as a team discussing their product release: \textit{``Should we delay the delivery of our product or go ahead and launch our buggy product anyway? Since we don’t really know the weight and complexity of the bugs, it’s hard for us to evaluate how much they actually will affect the final product''}. \subsection{Theoretical Knowledge Reinforced} Our findings show that students could recall and describe a wide range of theoretical knowledge. The sunburst diagrams (Figures~\ref{fig:rq2sunburst} and~\ref{fig:totalsunburstreport}) illustrate the comprehensiveness of the contents (RQ2.1) reported in the survey and written report, respectively. The written report provides a more detailed description of reinforced knowledge, as students were asked to reflect on and describe their experience.
Although some aspects are better reinforced than others (e.g., agile is more frequently mentioned than the plan-driven methodology), one can note that leaves from all main codes have been coded. That means that students not only mentioned the main themes but also discussed different aspects of them. For example, a team provides the following reasoning for adopting the agile methodology: \textit{``Agile is requirement oriented, so it is flexible for the requirements changes that come during the processes''}. The team successfully understood some of the theoretical characteristics of the methodology and related them to challenges (i.e., requirement changes) they experienced. One of the primary outcomes of the integrated approach is that, at the end of the design-implement task and in the reflection reports, the students express their awareness of uncertainty in software projects. In their reports, one can notice that they have ``connected the dots'': during the implementation task, they realized how the concept of uncertainty materializes in concrete events, like the troubleshooting they had to carry out during the design-implement tasks. Regarding the assessment of teams' project plans, we used a set of pre-defined criteria to trace the teams' progression (RQ2.2). Students showed increasing compliance with good practices and a reduced presence of bad practices in their project plans between two assessment rounds. The evidence also points to improvements regarding the estimation and prioritization tasks. Our insight is that the game design helped the students develop a better understanding of such practices by experience, i.e., learning by doing, and therefore we can conclude that the approach itself supports students' progression. \subsection{Challenges of Software Development} The written report also provided a good understanding of the challenges students faced during the execution of our integrated approach (RQ3.2).
Students reported that the rules and consequences of their decisions were not always clear to some of them during the game activity. Some students noted that this was the teaching staff's intention: to provide good examples of uncertainties they could not account for. During the design-implement task, the main challenges reported by students were troubleshooting and unexpected technical issues. Interestingly, students were able to relate those issues to the uncertainties experienced during the game. The formative survey also supports those findings, as students reported that the uncertainty and troubleshooting in the task are similar to real projects. Students could also relate challenges to other aspects of the software process. As an example, estimation and prioritization are challenges related to project planning. These two tasks showed significant improvement in the assessment of the teams' project plans, indicating that the integrated approach supports reinforcing knowledge through problem-solving. The experiential approach also helped students to realize the importance of proper planning. In a plan-driven project, it is essential to detail a project plan that accounts for potential deviations. In an agile project, one should be ready to adapt and change according to new situations. An important finding relates to contextual solutions, which is one of the central ``hidden'' learning outcomes of the course, i.e., no single solution will solve all challenges. Examples of students' reflections are \textit{``Depending on what type of project it is then the way of doing it is different''}, and \textit{``it is important to understand how to plan a software in the right context, taking into account the characteristics and limitations of the project''}.
\section{Threats to Validity} \renewcommand{\thesubsection}{\arabic{section}.\arabic{subsection}} Following the interpretivist paradigm, we describe the threats to validity of our research according to the categories described by~\cite{lincoln2007naturalistic}: \subsection{Credibility} Our study aimed at evaluating an experiential learning approach from the perspective of the students. To achieve such a goal, we designed an empirical evaluation grounded on the Action Research method, i.e., our experiential learning approach acted as an intervention, as described in Figure~\ref{fig:researchpath}. We employed a mixed data collection approach to gather a more holistic and rich picture of the phenomenon. To ensure that the participants could express their perceptions openly, we used an anonymous online questionnaire with open questions. There is a potential bias well known in participatory research, namely that the researcher could affect the data collection and analysis due to their proximity to the phenomenon. We tried to mitigate such a threat by continuously engaging in joint discussions between the researchers acting as observers and others not directly involved in data collection. This multi-perspective reflection was crucial to challenge the view of the observers and make sense of the findings. Another limitation relates to the number of instances in which we collected data for validating the approach. Although we ran the evaluation twice in successive years, the participants were drawn out of convenience, and thus we could not ensure a wide variety of cultural characteristics and cognitive abilities.
It is also important to mention that during the spring of 2020, we ran the course again, with a larger number of students from three different cohorts: the five-year Master of Science in Engineering programs (300 ECTS) in Industrial Economy, in AI and Machine Learning, and in Software Development. However, while we were running the first instance of the serious game, we switched to distance learning due to the COVID-19 outbreak. The change to distance learning imposed many challenges on both students and teaching staff. On the one hand, we had to change several things in how we ran the games (e.g., the workshops' structure, effective supervision time for each team), which might have added noise to the data gathered. On the other hand, and even more critical, the data gathering activities might have represented additional work for both the students and the teaching staff. Therefore, we decided to skip the data gathering and analysis in 2020. At the time of writing this paper, we are planning the course again for Spring 2021, but with much uncertainty regarding the extent to which we will meet students in the serious game instances and the design-implement task. \subsection{Confirmability} The opinions of participants were coded in relation to the context of the course, according to a pre-conceived codebook. This codebook might have limited our initial analysis to the course's theoretical background, but we later expanded it using themes that emerged from participants' responses. We also acknowledge limitations in the researchers' ability to reflect upon the data due to their experiences. However, we trust that researchers acting as teaching staff could interpret the students' opinions more appropriately. We also employed multiple coders, independent coding, and joint discussions to corroborate the findings among researchers.
Another limitation refers to our ability to present a chain of evidence from observation to findings. Due to concerns about anonymity, we are not able to make our complete data set available. Part of the qualitative data, in particular the written reports, could be traced to teams and/or individuals that participated in the course. Aiming to provide a transparent and accountable view of our data analysis process, we provide access to summaries of the assessment rubrics and the coding of the survey responses and the reflection reports, as complementary material, available in~\citep{molleri2020dataset}\footnote{The information is attached as appendixes to the manuscript for the double-blind review process, since the dataset contains traceable authorship information.}. \subsection{Transferability} Our experiential approach tries to bridge the gap between theory and practice in the classroom. We acknowledge that some of the challenges we propose are artificial and fictional, yet still based on reality. The solutions in a professional environment might differ radically from the simulation, and students were aware of this constraint. One of the teams reflected on this limitation as follows: \textit{``we believe that there is a difference between professional planning and simulated planning in a scholarly environment. This was also the case for our project since we did not start to solve new problems until the ones started on was finished.''} In a professional environment, such a sequential approach to solving problems is unrealistic, as priorities change due to factors not covered in our approach, e.g., time to market, customer satisfaction, etc. Although our validation results are meaningful in the context of the pedagogical application, they do not provide evidence supporting the use of this approach compared to other interventions, either other innovative learning approaches or traditional lectures.
We provide evidence of how the integrated experiential approach allows students to experience some of the challenges of software development projects and of the aspects that the experiential approach reinforces. However, we can only make claims about the potential benefits of the approach in the studied context. Therefore, we need to gather additional evidence to reinforce the results presented in this paper. We invite teachers of similar introductory courses to software engineering to replicate the setting with the serious games and the design-implement tasks and gather evidence of their interventions' results. \section{Introduction} Introducing students to Software Engineering (SE) is a challenging task~\citep{Broman2010,Malik2012} that needs to integrate more than just classroom-based teaching and learning activities~\citep{Malik2012,Yadav2010}. Project management activities are intended to provide students with a dynamic environment that mirrors real-world challenges~\citep{bruegge1991software}. This problem-based learning approach is supported by the pedagogical theories of knowledge acquisition through ``learning by doing''~\citep{dewey1897my} and experiential learning~\citep{kolb2014experiential}, as well as a student-focused approach to teaching~\citep{hung2011theory,flener2006realism}. A major issue associated with project management courses or with \textit{design-implement} tasks in the software engineering curriculum is how to find a balance between the level of realism and the relevance of the contents students will learn \citep{flener2006realism, bruegge1991software}. On the one hand, toy projects fail to connect the theory taught in the classroom to real-world problems. On the other hand, real-world project conditions are often too demanding to fit a course schedule. Simulations have been used to achieve this balance without incurring these issues \citep{peterson2011teaching}.
From our experiences with \textit{design-implement} tasks in Software Engineering courses, we would also add another risk: that students are more focused on the technical challenges of the project tasks \citep{bruegge1991software}. Thus, they are sometimes willing to ``hack'' the solution instead of focusing on the software development practices and models they are supposed to follow. This focus on hacking the solution might prevent them from experiencing and reflecting on the intended learning outcomes (ILOs). Therefore, at Blekinge Institute of Technology (BTH), we are currently developing an integrated approach using a game activity and a development task to expose students to a software development project. The approach simulates some of the challenges faced when planning and managing a real-life software project (but at a controlled scale). We implemented and empirically evaluated our approach in the context of a software engineering course for two consecutive years. This paper presents the results of this evaluation, in which we analyzed the adherence of the teams' project plans to a set of rubrics that drive the serious game, analyzed the teams' reports submitted at the end of the course, as well as students' perceptions regarding the integrated learning approach, gathered through an online survey. The paper is structured as follows: Section \ref{sec:background} presents the related work. Section \ref{sec:methods} describes the research method. Section \ref{sec:results} provides the results, which are further discussed in Section \ref{sec:discussion}. Finally, Section \ref{sec:conclusions} concludes the paper. \section{Empirical Evaluation of the Approach}\label{sec:methods} We conducted an empirical study to gain evidence regarding the extent to which our integrated approach is an effective means to enable students' learning concerning the course objectives and contents.
\subsection{Research Objectives} Our empirical evaluation aims at providing answers to the following research questions: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[\textbf{RQ1}] How do students perceive that the integrated approach matches the learning objectives of the course? \item[\textbf{RQ2}] How does the integrated approach reinforce theoretical knowledge? \begin{enumerate} \item[\textbf{RQ2.1}] What are the contents of the course that students perceive as reinforced by the integrated approach? \item[\textbf{RQ2.2}] What are the contents of the course in which we can perceive progression? \end{enumerate} \item[\textbf{RQ3}] How does the integrated approach allow students to experience the challenges of a real software development project? \begin{enumerate} \item[\textbf{RQ3.1}] What do students perceive as challenges faced? \item[\textbf{RQ3.2}] What do students perceive as challenges that are a good representation of a software development project? \end{enumerate} \end{enumerate} \subsection{Research Design}\label{researchdesign} Our study is described according to the framework for selecting a research design in empirical software engineering by~\cite{wohlin2015towards}. We designed a research path (illustrated in Figure~\ref{fig:researchpath}) comprising a series of decision points that characterize our research. \begin{figure}[htb!] 
\centering \caption{Research path according to~\cite{wohlin2015towards}.} \includegraphics[width=1\textwidth]{researchpath.pdf} \label{fig:researchpath} \end{figure} On the strategic level (steps 1 to 4 in Figure~\ref{fig:researchpath}), our research is defined as (i) \textbf{applied}, because it aims to provide a solution to a specific problem; (ii) \textbf{inductive}, as we aim to identify patterns from observed data; (iii) with the purpose to \textbf{evaluate} the applicability of the integrated approach in the context; and (iv) \textbf{interpretivist}, as it aims to understand learning from the students' perspective. The central unit of our work is qualitative action research (steps 5 and 6 in Figure~\ref{fig:researchpath}) that investigates a problem introduced by the pedagogical practice. We employed a set of data collection methods (step 7), including assessment rubrics, a survey inquiry with students, and the analysis of their written reports. The data was analyzed by open coding and thematic analysis (step 8). \subsection{Data Collection}\label{datacollection} We collected our data from the different TLAs and assessment tasks planned in the course: \begin{inparaenum}[1)] \item from the successive submissions of the project plan written by the teams as part of the serious game; \item from the formative assessment of a survey created to gather students' opinions about the integrated approach; as well as \item from the assessment of the reflection report teams wrote at the end of the course. \end{inparaenum} The multiple-source data collection approach aims at triangulating data from different sources and viewpoints. Through triangulation, we expected to identify different dimensions of the same phenomenon and to validate the data gathered from diverse sources. In the following subsections, we detail the different data sources we collected to assess the effectiveness of the approach in enabling students' learning.
\subsubsection{Assessment of Project Plan.} \label{data_collection_1} The quantitative data was gathered during the game activity. Teams were requested to submit their project plans to be assessed before the next workshop. Then, teachers reviewed the submissions and assessed them according to checklists (see rubrics for Plan-Driven and Agile projects in Figure~\ref{fig:rubrics}). The rubrics account for the number of good practices \textbf{(+)} and missing details \textbf{(-)} in the project plan. For each good practice demonstrated by the team, they scored +1 point, while for each bad practice, the team scored -1 point. A few checklist criteria also had partial scoring points, i.e., plus or minus 0.5 points, for partial achievement. \begin{figure}[htb!] \centering \includegraphics[trim={2.3cm 3.3cm 2cm 3cm },clip,width=1\textwidth]{rubrics-only.pdf} \caption{Rubrics for Plan-Driven (left) and Agile (right) game activities. The checklist items are based on good and bad practices for software projects covered by the course material.} \label{fig:rubrics} \end{figure} The resulting scores from assessing the teams' projects were mainly used to calculate the game modifiers the teams applied to their dice rolls, conditioning the students' progress in the game. We informed the teams which checklist items they met and which ones they missed, so that they had a guideline to improve their next submission. Therefore, we expected teams to improve their project plans and, consequently, their performance after each round. The quantitative data collected from the project plans' assessment helps us answer RQ2, and more particularly RQ2.2. \subsubsection{Formative Assessment.} \label{data_collection_2} After executing the integrated experiential learning approach, we conducted a retrospective meeting with the students to gather their opinions about the course and the approach.
We created a survey questionnaire to collect data in a written format that we then used as data for assessing the effectiveness of the approach. The retrospective survey comprises ten open questions aligned to our research questions plus six questions aiming to improve the course, as detailed in Table~\ref{tab:survey}. \begin{table}[htb!] \centering \scriptsize \caption{Structure of the survey questionnaire and related research questions. The rightmost column maps the survey questions to their related research questions.} \label{tab:survey} \begin{tabular}{lc} \toprule \textbf{Survey Questions} & \textbf{RQs} \\ \midrule \textbf{About the Project Game} & \\ Q1. In your opinion, what is the objective of the games? & RQ1 \\ Q2. Which contents of the SE project course are reinforced by the games? & RQ2 \\ Q3. Which contents of the SE project course should be better represented by the games? & \\ Q4. Briefly describe the (three) most important difficulties / challenges you faced during the games. & RQ3 \\ Q5. In your opinion, what aspects (e.g. challenges) of a real project are represented in the games? & RQ3 \\ \midrule \textbf{About the Development Task} & \\ Q6. In your opinion, what is the objective of the task? & RQ1 \\ Q7. Which contents of the SE project course are reinforced by the task? & RQ2\\ Q8. Which contents of the SE project course should be better represented by the task? & \\ Q9. Briefly describe the (three) most important difficulties / challenges you faced during the task. & RQ3\\ Q10. In your opinion, what aspects (e.g. challenges) of a real project are represented in the task? & RQ3 \\ \midrule \textbf{Approaches Combined} & \\ Q11. In your opinion, what is the objective of combining the two approaches? & RQ1 \\ Q12. What have you learned from the approaches combined? & RQ2\\ \midrule \textbf{Course evaluation} & \\ Q13. Please describe positive / negative aspects of the game. & \\ Q14.
Please describe positive / negative aspects of the development environment. & \\
Q15. Please describe positive / negative aspects of the course schedule. & \\
Q16. What is your suggestion to improve the course activities? & \\ \bottomrule
\end{tabular} \end{table}
The results from six survey questions (Q3, Q8, Q13-Q16) were used solely as feedback to the teaching staff to improve the course. The results from the 2018 survey helped plan the next year's application and thus impacted how the integrated activity was conducted in 2019. The remaining questions contributed to our research by providing the students' perceptions of (RQ1) the learning objectives of our integrated approach, (RQ2) the theoretical knowledge reinforced by it, and (RQ3) the challenges of a real software project.
\subsubsection{Written Reflection Report.} \label{data_collection_3} The last assessment task of the course was the submission of a reflection report. Students were asked to reflect on what was done during each step and to keep notes about personal experiences during the integrated experiential learning approach. We encouraged the teams to critically discuss within the groups the causes of eventual deviations from their original plan. By the end of the course, we invited students to compile the notes into a group reflection report and to write a post-mortem analysis of both teaching-learning approaches (game activity and development task), describing their experiences with the two different process models used, i.e., plan-driven and agile, plus their insights after the design-implement activity. The written report should also include a meta-reflection about the pros and cons of each development model. The open-text reflection contributed to answering our research questions RQ1, RQ2, and RQ3, similar to the formative assessment. We provided the students with instructions for the written report, but we did not provide them with a framework to follow.
In that way, students were not limited by a formal structure and were encouraged to express their thoughts more freely. The written reports were also used as an assessment task, and were therefore part of the course grading. In our research, the assessment task was treated as an independent entity: the researchers conducting the data collection and data analysis did not participate in the grading of that particular assessment task.
\subsection{Data Analysis}
The three data sources provided us with both qualitative and quantitative data. We employed different methods to analyze each of them, as follows:
\subsubsection{Quantitative Data.} For the quantitative data, we assessed the teams' adherence to the good and the bad practices according to the checklists (see Section~\ref{data_collection_1}). We computed an overall score for each submission of a project plan, i.e., the difference between the frequencies with which a team demonstrated good and bad practices. This calculation resulted in a score between -10 and 10: if the good practices were more frequent, the resulting score was positive, while more frequent bad practices resulted in a negative score. By comparing the scores from two successive submissions of the project plans, we could assess the teams' progression and identify the criteria in which they progressed. This follow-up allowed us to identify how many negative aspects were addressed and how many positive aspects were reinforced with our game approach. The resulting data also helped teachers to identify theoretical content that students did not understand well or could not express in their project plans. Such information is essential for our research, but it is also valuable for curriculum development and continuous improvement of the course.
\subsubsection{Qualitative Data.} The formative assessment and the written report provided mostly qualitative data, which we analyzed through coding (see Sections~\ref{data_collection_2} and \ref{data_collection_3}).
We employed thematic analysis for the qualitative data, following the guidelines by \cite{Braun2006}. The thematic analysis comprises the coding and categorization of the textual information, data triangulation, and interpretation from different viewpoints~\citep{cruzes2011recommended}. The coding process was carried out as follows:
\begin{enumerate}
\item One of the researchers (i.e., the second author) created the first iteration of the codebook by using theoretical concepts of software project management presented in the course lecture materials. This codebook was organized as a hierarchical structure of themes, mapped to the course contents.
\item The same researcher used this codebook to do the first-level coding~\citep{Saldana2015} of the survey responses, adding complementary codes that emerged from the responses. A list of the codes identified by the thematic analysis is provided in Appendix~\ref{appendixA}.
\item Another researcher (i.e., the third or the fourth author) used this updated codebook to do independent and blind coding of the same documents.
\item After this, we aggregated the codes from both researchers, and where there was a coding disagreement for a survey response, the first author did an additional round of blind coding.
\item We reached an agreement in cases in which two out of three researchers agreed on the same code. In three-way disagreement cases, all involved researchers participated in a joint discussion until reaching an agreement on the final code.
\end{enumerate}
Although the primary goal of qualitative thematic analysis is not to analyze the frequency with which codes appear, we computed code frequencies for data synthesis. With this analysis, we can also assess how frequently two given codes co-occurred, i.e., occurred in the same piece of text.
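As an illustration of this synthesis step, frequency and co-occurrence counts of this kind can be computed with a few lines of Python; the coded segments and code labels below are fabricated examples for exposition, not our study data:

```python
from collections import Counter
from itertools import combinations

# Each coded segment of text is represented by the set of codes assigned to it.
# These segments and code labels are illustrative, not actual study data.
segments = [
    {"lrn learning", "spj software project"},
    {"thr theory", "unc uncertainty", "spj software project"},
    {"lrn learning", "thr theory"},
]

# Frequency: how often each code was assigned across all segments.
frequency = Counter(code for seg in segments for code in seg)

# Co-occurrence: how often a pair of codes appears in the same segment.
cooccurrence = Counter(
    pair for seg in segments for pair in combinations(sorted(seg), 2)
)

print(frequency["lrn learning"])                             # 2
print(cooccurrence[("spj software project", "thr theory")])  # 1
```

Sorting each segment before forming pairs makes the pair key order-independent, so a given pair of codes is counted once per segment regardless of the order in which the codes were assigned.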
By doing so, we identified the most frequent codes related to learning aspects (RQ1), course contents (RQ2), and students' experience (RQ3), thus helping us answer these three research questions.
\subsection{Execution of the Data Collection and Analysis}
We conducted our empirical study alongside the two applications of the experiential approach, i.e., in 2018 and 2019. Each year, we collected data from the project plans, the formative assessment, and the written reports. The data collection instruments and the data analysis followed the same procedure in both years. Only the number of participants (students and teams) and one member of the teaching staff varied from the first year to the second.
\subsubsection{2018} Students enrolled in the course took part in our study as follows:
\textbf{Assessment of project plan.} 14 teams participated in the game activity. During three rounds, they submitted a project plan document. For the first and second instances, we assessed their project plans and provided feedback for improvement. Unfortunately, data for the second instance of the plan-driven approach is missing; thus, our analysis is limited to improvements during the agile project gameplay.
\textbf{Formative assessment.} We provided the students with an explanation of each question and of the survey procedure. We further asked for their voluntary collaboration and ensured that no personal data was collected. The students took around 30 minutes to complete the survey. 45 out of 58 students were present and consented to respond to the electronic survey. Their answers provided textual feedback on their experience with the game, the development task, and the integration between the two approaches.
\textbf{Written reflection report.} All 14 groups provided a post-mortem analysis of the integrated learning approach, compiling notes from the two game activities and the development task.
Initially, two of the teams provided only a partial reflection (information about one of the exercises was missing). We contacted them and asked them to complete the report, and they promptly did so.
\subsubsection{2019} Students enrolled in the course took part in our study as follows:
\textbf{Assessment of project plan.} There were eight teams at the beginning of the course, but one team dissolved and its members joined other teams. Eventually, seven teams participated in the game activity. During four rounds (two for plan-driven and two for agile), they submitted a project plan document. We assessed their project plans and provided feedback in all rounds, as in the previous year.
\textbf{Formative assessment.} As in the previous year, we provided the students with an explanation of each question and of the survey procedure. We further asked for their voluntary collaboration and ensured that no personal data was collected. The students took around 30 minutes to complete the survey. 15 out of 24 students were present and consented to respond to the electronic survey. Their answers provided textual feedback on their experience with the game, the development task, and the integration between the two approaches.
\textbf{Written reflection report.} All seven groups provided a post-mortem analysis of the integrated learning approach, compiling notes from the two game activities and the development task.
\section{Background \& Related Work}\label{sec:background}
In this section, we introduce some of the techniques used for teaching Software Engineering and Software Project Management, the use of games in Software Engineering education, and the use of design-implement tasks in Software Engineering courses.
\subsection{Teaching Software Engineering and Software Project Management}
Software Engineering is defined in ISO/IEC/IEEE 24765:2010~\citep{ISO24765} as \textit{``The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.''} The ACM/IEEE Curriculum~\citep{ACM2014} provides guidelines for the definition of Software Engineering education at the undergraduate level and identifies the main expected student outcomes, the Software Engineering Education Knowledge (SEEK). The guidelines are explicit when recommending that \textit{``The curriculum should have significant real-world basis''}~\citep{ACM2014}, and suggest incorporating at least some of the real-world activities they identify, such as case studies, project-based activities, practical exercises, student work experiences, or capstone projects. Nevertheless, teaching Software Engineering is a challenging task \citep{Malik2012,Broman2010}. Among other challenges, several authors point out that classroom-based teaching and learning activities alone are not enough and might be an ineffective approach to SE teaching \citep{Malik2012,Yadav2010}. In research on Software Engineering Education, several Systematic Mapping Studies, e.g., the ones presented in \citep{Marques2015,Malik2012}, identify Problem-Based Learning as a crucial activity. Problem-Based Learning (PBL) promotes active learning and knowledge acquisition through group work \citep{hung2011theory, schmidt1994problem}. PBL is a teaching practice rooted in constructivist theories, in which the student is directly responsible for the knowledge construction \citep{hendry1999constructivism,elmgren2014academic}, a.k.a. a student-focused approach. It aims to produce knowledge by connecting the students' prior knowledge to new facts and understanding.
PBL extensively uses reflection, critical thinking, and experimentation as learning facilitators. In this practice, students are presented with a situation that requires a solution, whereas the teachers act as supervisors (and sometimes simulated customers), steering the group toward a potential solution. PBL takes several meetings, and in the time between meetings, students are expected to deepen their knowledge of the problem. \cite{kolb2014experiential} describes those stages in an experiential learning cycle (illustrated in Figure~\ref{fig:learningcyclekolb}). In the first stage, i.e., doing, the student is faced with a new experience, herein the game challenge. In a group, the student is fostered to reflect on the challenge (stage 2) and to make sense of a candidate solution (stage 3). Finally, the student applies the solution and gathers its results. The cycle then starts again, as the student progresses toward a deeper understanding of the topic.
\begin{figure}[!ht] \centering
\caption{Learning cycle by \cite{kolb2014experiential}.}
\includegraphics[width=0.70\textwidth]{learningcyclekolb.pdf}
\label{fig:learningcyclekolb} \end{figure}
Both PBL and Experiential Learning are connected to the ``learn by doing'' philosophy introduced by John~\cite{dewey1897my}. These views favor relevant and practical learning and are particularly useful for skill-oriented engineering courses/programs. One suitable solution to enable this ``learn by doing'' philosophy in Software Engineering education is to have dedicated Software Project Management Courses (PMCs) that expose the students to the specificities of Software Project Management. PMCs have become increasingly popular in Software Engineering education, particularly at the end of undergraduate and in subsequent graduate programs \citep{ralph2018re, broman2012company}. Ideally, PMCs should be aligned with PBL and related pedagogical philosophies, but this is not always the case~\citep{ralph2018re, flener2006realism}.
When students are exposed to Problem-Based Learning in basic PMCs, the problems tend to be adapted and simplified, and often linked to ``pre-fabricated'' solutions~\citep{Oliveira2013,Souza2018}, or to extensive real-client projects that are likely to overload students with technical issues~\citep{koolmanojwong2009using, bruegge1991software}, and students perceive that they are not exposed to ``real'' project management~\citep{Kruchten2011}. Besides, our experience using PBL through design-implement tasks in introductory Software Engineering courses is that students tend to focus on the technical problem itself, trying to ``hack'' the solution, rather than on the underlying process and methodology, which are the main focus of such courses. In software projects, formal and familiar technical aspects can clash with human and sociological factors~\citep{Caulfield2011}. Monitoring the extent to which students follow the processes and methods is challenging, and might require additional monitoring by instructors, e-learning portfolios, or additional reporting activities by the students~\citep{Marques2018}. This is why simulations and serious games might help the students to experience some challenges of Software Project Management without the complexity of developing the solution.
\subsection{Games in Software Engineering Education}
Serious Games and Simulations have lately become popular as tools for experiencing some of the challenges of software development projects~\citep{Marques2015,Souza2018,Kosa2016}, and Game-Based Learning~\citep{Wangenheim7} has even been defined in the context of Software Engineering Education. Recently, its usage has received greater attention and has even been the focus of Systematic Literature Reviews and Systematic Mapping Studies, e.g., in~\citep{Kosa2016,Souza2018,Marques2015}.
The term ``Game-Based Learning'' (GBL)~\citep{Wangenheim7} refers to any approach using games for learning purposes. \cite{Wangenheim7} define the term as the use of game applications with defined learning outcomes. Games are any contest (play) among adversaries (players) operating under constraints (rules) for an objective (winning, victory, or pay-off) \citep{Wangenheim7}. While in competitive games players are opposed to each other, cooperative games encourage players to work together for mutual benefit. Collaborative games go a step further, and participants work together as a team towards a common goal~\citep{zagal2006collaborative}. One of the rationales behind the relatively wide usage of Simulated or Game-Based Learning is the complex nature of the topic and the time restrictions imposed by the schedule of the courses~\citep{Souza2018}. Serious Games have been used to illustrate software project management~\citep{raabe2013serious, petri2017quality} and software development processes~\citep{hainey2011evaluation,navarro2004simse,benitti2008utilizaccao,baker2003problems,baker2005experimental,Souza2017,Souza2018}, to teach Risk Management in Software Engineering projects, e.g.,~\citep{Taran2007,Oliveira2013}, to introduce the usage of Kanban or Scrum in Software Engineering projects, e.g.,~\citep{Heikkila2016,Paasivaara2014,Fernandes2010}, to introduce Requirements Engineering, e.g.,~\citep{Knauss2008,hainey2011evaluation}, or to illustrate the particularities of Global Software Engineering, e.g.,~\citep{VanSolingen2011,Sablis2019}. However, the majority of these serious games are intended to be played in a single class session. Thus, this setup is not entirely aligned with the PBL approach, which requires the students: \begin{inparaenum}[i)] \item to search for candidate solutions to a problem, \item to critically analyze the candidates in order to make a decision, \item and finally to reflect on the impacts of such a potential solution.
\end{inparaenum} These are indeed skills required in Engineering education~\citep{Crawley2014}. Moreover, the need to fit the whole game into a single session tends to oversimplify the tasks that can be carried out and the level of involvement of the students. As an alternative, in our previous work, we have prototyped and piloted a Legacy Game in the context of a SE-PMC course~\citep{molleri2018legacy}. A Legacy Game is designed to dynamically change throughout a series of sessions~\citep{daviau2017legacy}. New game rules and contents can be introduced during the execution of the game, and similarly, old content may get overridden or removed~\citep{bbg2018glossary}. Our legacy game incorporates the concepts of PBL into a project management course. It is designed to illustrate two different development models (i.e., plan-driven and agile). However, some authors argue that, although games are useful pedagogical tools and well received by students, they might not be enough, and other learning activities might be needed to reinforce the learning~\citep{Caulfield2011,Souza2018,Heikkila2016}. This is the reason why we have defined an integrated approach that combines traditional lectures and serious games with a design-implement task. In the design-implement task, the students can experience some of the concepts and challenges introduced in the serious game. These concepts and challenges are later framed in the final report, in which students discuss their experiences in the course, with the goal of increasing the effects on students' learning, as suggested by~\cite{Hult2000} and~\cite{Enstrom2014}.
\subsection{Design-Implement Tasks}
Providing students with real-world experiences in an academic setting might help them to understand the problems they will face once they move to industry, as well as to experience the challenges of working in software development teams and the difficulties and complexities of real-world software development environments~\citep{bavota2012teaching}. However, it is challenging to provide these experiences in an academic setting, where students can deal with issues that arise in typical projects, such as realistic requirements management, coping with time pressure, and dealing with software and hardware constraints~\citep{Marques2018}. Typically, PMCs in academic settings range from weeks-long assignments to full-semester activities, and from solving problems that students are free to choose to working on problems that companies chose to assign to them. However, these approaches typically allow simulating only part of the real-world experience. Providing students with real-world experiences involving realistic requirements management, working with a customer, and dealing with deadlines is a complex challenge. Projects with full scope and a full end-to-end process suitable for a single team with limited time are hard to find in real-world settings. Finally, PMCs in academic settings that offer more realistic environments tend to become complicated and require a lot of coordination~\citep{johns2013simulating}. In our programs, more realistic settings are achieved through capstone projects in big teams working on industrial projects, but this setup is not reasonable in introductory courses. Another feasible approach is to design courses with \textit{design-implement} tasks, following the terminology of the CDIO (Conceive Design Implement Operate) Framework~\citep{Crawley2014}.
Design-implement experiences are, in the CDIO context, a ``range of engineering activities central to the process of developing new products or systems''~\citep{Crawley2014}. CDIO's design-implement experiences aim at providing students with practical experiences that consolidate theoretical knowledge and support their learning about the engineering process. The experiences are implemented by means of teaching-learning activities (TLAs), which allow the students to go through the planning and implementation stages of a project~\citep{westphal2018course}. Such experiences can also integrate disciplinary knowledge with the development of soft skills, such as personal, social, and communication skills~\citep{Crawley2014, westphal2018course}. These real-world experiences also offer means for teachers to monitor student progress and provide a development environment with realistic and complex technical challenges, yet scalable in terms of student skills and knowledge in software development. Thus, we have embedded in our proposed approach a design-implement task that uses LEGO Mindstorms\textsuperscript{TM} to simulate a real-world environment with regard to the technical challenges of software development. LEGO Mindstorms\textsuperscript{TM} has been used previously in higher education to teach students in advanced software engineering project courses~\citep{Lew2010,Weissberger2014} or embedded systems~\citep{kim2009introduction}. Our approach aims at providing students with a \textit{design-implement} task that resembles a real-world project case without part of its complexity. This \textit{design-implement} task is designed to let students experience software project management in a simulated real-world setting following agile software development practices.
\section{Results}\label{sec:results}
This section presents the empirical study results in relation to our three data collection instruments (see Section \ref{datacollection}).
A summary of the data collected for each instrument is available online as complementary material~\citep{molleri2020dataset}.
\subsection{Assessment of Project Plan}
The results of our assessment of the teams' project plans using the checklists provided us with an indication of the students' progression in using good practices for software projects and avoiding bad ones. Overall, in all instances of the game, there was a certain degree of improvement between the first and second submissions (see Figure~\ref{fig:balanceBonuses}). Each plot shows an aggregate balance of positive and negative scores for each instance of the experiential approach.
\begin{figure}[!ht] \centering
\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{balancePD2.pdf} \caption{} \label{fig:balancePD} \end{subfigure}%
\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{balanceAgile2.pdf} \caption{} \label{fig:balanceAgile} \end{subfigure}
\caption{Average scores for the first and second project submissions (left and right dots, respectively) in 2018 and 2019. A solid line shows the average improvement between the two assessments, while the dotted lines above and below represent the maximum and minimum scores obtained by a team.}
\label{fig:balanceBonuses} \end{figure}
For the \textbf{Plan-Driven} game (left plot), both 2018 and 2019 had a rough start; the average scores for the first submission were -2.41 and -7.5, respectively. That means that the teams included a high number of bad practices. It is worth mentioning that one team in 2018 performed positively in the first submission, with a score of +1. In the second submission, we could notice that all the teams in 2019 improved considerably, although the average score was still slightly negative, i.e., -0.125. Missing data from the Plan-Driven project in 2018 limits our analysis of the improvement to 2019.
The teams started much better in the \textbf{Agile} game (right plot), with average scores of 1.55 (2018) and -1.25 (2019). In 2018, the teams showed a slight improvement in the second submission, with an average score of 2.69. In 2019, the teams performed much better, raising their average score to 5.31. None of the 2019 teams had an overall negative score for the second submission, the lowest one being 0. The results point out that a higher number of good practices were included in the Agile project plans than in the Plan-Driven project plans. A detailed presentation of the data for each criterion is provided in Table~\ref{tab:resultsRubrics}. Each row presents a criterion and points out the number of teams whose project plan fulfilled it in the first and second assessment submissions. Each rubric criterion is graded with a mark of 0 or 1, in the majority of cases 0 for absence and 1 for presence, except for the criteria marked with a star (*), which could receive a partial value (0.5), e.g., RP14: lack of (-1) or unclear (-0.5) testing strategy.
\begin{table}[!ht] \centering \small
\caption{Results of the rubric assessment for the Plan-Driven game (left) and the Agile game (right) in 2018 and 2019. Each row represents a checklist criterion and whether it relates to a good (+) or a bad (-) practice.
(*) refers to criteria in which students can receive partial point values, i.e., $0.5$.}
\label{tab:resultsRubrics}
\begin{tabular}{lc|ccr}
\multicolumn{5}{c}{\textbf{Plan-Driven}} \\ \hline
& & \multicolumn{3}{c}{\textbf{2019} (n = 8)} \\
\textbf{ID} & +/- & 1st & 2nd & $Prog$ \\ \hline
RP1 & - & 1 & 1 & 0\% \\
RP2 & + & 2 & 5 & +37.5\% \\
RP3 & - & 8 & 3 & +62.5\% \\
RP4 & - & 8 & 2 & +75\% \\
RP5 & + & 0 & 3 & +37.5\% \\
RP6 & + & 0 & 7 & +87.5\% \\
RP7 & - & 4 & 0 & +50\% \\
RP8 & + & 0 & 1 & +12.5\% \\
RP9 & + & 0 & 0 & 0\% \\
RP10 & + & 1 & 0 & \textcolor{gray}{-12.5\%} \\
RP11 & - & 8 & 6 & +25\% \\
RP12 & + & 0 & 1 & +12.5\% \\
RP13 & + & 0 & 6 & +75\% \\
RP14* & - & 7 & 1.5 & +34.4\% \\
RP15 & + & 0 & 3 & +37.5\% \\
RP16 & - & 8 & 6 & +25\% \\
RP17 & - & 7 & 2 & +62.5\% \\
RP18* & - & 6 & 3.5 & +15.6\% \\
RP19 & - & 7 & 5 & +25\% \\
RP20 & + & 1 & 3 & +25\% \\ \hline
\textbf{Average} & & & & \textbf{+34.4\%} \\
\end{tabular} \quad
\begin{tabular}{lc|ccr|ccr}
\multicolumn{8}{c}{\textbf{Agile}} \\ \hline
& & \multicolumn{3}{c}{\textbf{2018} (n = 14)} & \multicolumn{3}{c}{\textbf{2019} (n = 7)} \\
\textbf{ID} & +/- & 1st & 2nd & $Prog$ & 1st & 2nd & $Prog$ \\ \hline
RA1 & + & 9 & 11 & +14.3\% & 1 & 3 & +28.6\% \\
RA2 & - & 3 & 4 & \textcolor{gray}{-7.1\%} & 2 & 0 & +28.6\% \\
RA3 & - & 8 & 6 & +14.3\% & 1 & 0 & +14.3\% \\
RA4 & + & 10 & 11 & +7.1\% & 1 & 3 & +28.6\% \\
RA5* & - & 0.5 & 0.5 & 0\% & 3 & 0 & +28.6\% \\
RA6 & - & 3 & 3 & 0\% & 7 & 0 & +100\% \\
RA7 & - & 1 & 1 & 0\% & 2 & 0 & +28.6\% \\
RA8 & - & 2 & 4 & \textcolor{gray}{-14.3\%} & 6 & 0 & +85.7\% \\
RA9 & - & 3 & 2 & +7.1\% & 2 & 0 & +28.6\% \\
RA10 & + & 3 & 3 & 0\% & 4 & 4 & 0\% \\
RA11 & + & 5 & 6 & +7.1\% & 3 & 6 & +42.9\% \\
RA12 & + & 4 & 4 & 0\% & 3 & 4 & +14.3\% \\
RA13 & + & 1 & 1 & 0\% & 2 & 3 & +14.3\% \\
RA14 & + & 3 & 4 & +7.1\% & 1 & 5 & +57.1\% \\
RA15 & + & 0 & 2 & +14.3\% & 2 & 5 & +42.9\% \\
RA16* & - & 2 & 2 & 0\% & 4 & 0.5 & +25\% \\
RA17 & + & 3 & 7 & +28.6\% & 0 & 6 & +85.7\%
\\ RA18 & + & 2 & 7 & +35.7\% & 0 & 4 & +57.1\% \\ \hline
\textbf{Average} & & & & \textbf{+6.3\%} & & & \textbf{+39.1\%}\\
\end{tabular} \end{table}
We also computed a progress variable $Prog$ to illustrate the teams' progress based on the feedback provided between the two submissions. The progress was computed according to equation (\ref{eq:prog_equation}).
\begin{equation} \label{eq:prog_equation} Prog_{(criteria)} = (+/-) \frac{marks_{(2nd)} - marks_{(1st)}}{n} \end{equation}
where:
\begin{itemize}
\item $\textbf{+/-}$ corresponds to the positive or negative aspect of the given rubric criterion;
\item $\textbf{marks}$ corresponds to the number of teams that demonstrated the criterion; and
\item $\textbf{n}$ is the total number of teams in the corresponding year.
\end{itemize}
As an example, in 2019, two teams included the good practice RP2: \textit{requirements traceability} in their first submission of the project plan, and five teams in the second submission; therefore, three out of eight teams improved their project plans, as shown in equation (\ref{eq:example1}). If we now focus on a negative criterion, in 2019, eight teams included the bad practice RP3: \textit{uncovered dependencies} in their first submission of the project plan. Only three teams included the bad practice in their second submission; therefore, five out of eight teams improved their project plans by removing the bad practice, as shown in equation (\ref{eq:example2}).
\begin{multicols}{2} \noindent
\begin{equation} Prog_{(RP2,2019)} = + \frac{5 - 2}{8} = +37.5\% \label{eq:example1} \end{equation}
\begin{equation} Prog_{(RP3,2019)} = - \frac{3 - 8}{8} = +62.5\% \label{eq:example2} \end{equation}
\end{multicols}
Negative values in Table~\ref{tab:resultsRubrics} for good practices, i.e., positive criteria, mean that fewer teams included the good practice in their second submission of the project plan than in their first.
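The progress measure in equation (\ref{eq:prog_equation}) is straightforward to reproduce; the Python sketch below is ours (the function name \texttt{prog} is illustrative, not part of the study tooling), with the hard-coded counts taken from the worked examples for RP2 and RP3 in 2019:

```python
# Sketch of the progress measure: for each rubric criterion, Prog is the
# signed, normalized change in the number of teams demonstrating it.
def prog(first: float, second: float, n: int, good_practice: bool) -> float:
    """Share of teams that improved on one criterion between two submissions."""
    sign = 1 if good_practice else -1
    return sign * (second - first) / n

# RP2 (good practice, 2019): 2 teams in the 1st submission, 5 in the 2nd.
print(f"{prog(2, 5, 8, good_practice=True):+.1%}")   # +37.5%
# RP3 (bad practice, 2019): 8 teams in the 1st submission, 3 in the 2nd.
print(f"{prog(8, 3, 8, good_practice=False):+.1%}")  # +62.5%
```

Flipping the sign for negative criteria means that a positive $Prog$ always reads as improvement, whether it comes from teams adopting a good practice or from teams dropping a bad one.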
Conversely, a negative value for a bad practice, i.e., a negative criterion, means that more teams included the bad practice in their second submission of the project plan than in their first.
\textbf{Plan-Driven.} As shown in Table \ref{tab:resultsRubrics}, most aspects presented some degree of improvement. Just one aspect (RP10: \textit{design to change}) showed a slight decline: one group included this practice in the first submission but removed the description in the next submission. The highest improvements concern RP6 \textit{using a glossary of terms}, RP4 \textit{detailing the minimum viable product}, and RP13 \textit{describing a risk plan}. It is important to notice that some good practices, e.g., RP4 and RP6, had not been included by any team in the first submission. After the students received feedback from the teaching staff, they improved their plans by considering such practices. None of the groups included RP9 \textit{design traceable to the analysis} in either submission. That points to a potential gap in how we introduce that topic during the lectures or in the game workshops.
\textbf{Agile.} We observed that the majority of the criteria presented some degree of improvement during the agile game. Exceptions are RA2 \textit{wrong granularity} and RA8 \textit{lack of minimum product}, which showed a decline during the 2018 instance. Overall, improvements were greater in 2019, but the teams in 2018 had a better start, i.e., they covered more good practices and avoided more bad ones already in the first submission. We observe a significant improvement in both years regarding criterion RA17 (describe sprint review strategy), and in 2019 also regarding RA8 (detailing the minimum viable product). In 2019, the most substantial improvement concerns RA6 (uncovered dependencies of tasks), with $Prog = +100\%$, i.e., all seven teams improved their project plans to address this criterion in their second submission.
There is only one criterion for which we observed no difference between the first and second submissions in both years: RA10, \textit{a visualization tool for sprint backlog}. The teams that employed this practice since the first submission kept using it, and the others did not adopt it in their second submission.
\subsection{Formative Assessment}
For the formative assessment, we collected students' individual responses to the survey questions (Table~\ref{tab:survey}) and coded them according to our codebook (Appendix~\ref{appendixA}). Figures~\ref{fig:total2018} and \ref{fig:total2019} illustrate the frequency of the codes identified in 2018 and 2019, respectively.
\begin{figure}[!ht] \centering
\begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{total2018.png} \caption{2018 (n = 45)} \label{fig:total2018} \end{subfigure}%
\begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{total2019.png} \caption{2019 (n = 15)} \label{fig:total2019} \end{subfigure}
\caption{\textbf{Sunburst plots for the relative frequency of codes in the formative assessment.} The size of each segment represents the number of times that code appears in relation to the total. Subcodes, a.k.a. leaves, also contribute to the size of their root code's segment, e.g., \textit{plf project lifecycle} adds its frequency to \textit{spj software project}.}
\label{fig:totalsunburst} \end{figure}
For both years, \textit{spj software project} and its leaves are the themes most frequently mentioned by the respondents. The relative size of the segments shows that it is more frequently mentioned in 2018 than in 2019. The second most frequently cited is \textit{thr theory}, which comprises flexibility, velocity, and uncertainty. In 2018, the third and fourth places are \textit{agl agile}, which comprises understand, analyse, and experience, and \textit{lrn learning}, respectively.
In 2019, the order of these two was reversed. Table~\ref{tab:surveyResults} summarizes the top-3 most frequent codes related to each relevant survey question. At the bottom of the table, we summarize the top-5 codes from questions mapped to research questions RQ1 to RQ3. A full list of frequency values for each survey question is provided as complementary material \citep{molleri2020dataset}. \begin{table}[!ht] \centering \small \caption{Top-3 most frequent codes for each survey question and top-5 most frequent codes for the related research questions. Q3, Q8, Q13-Q16 are not presented in the table, as they are not intended to answer the research questions. Values in parentheses show the frequency of the code with regard to that question.} \label{tab:surveyResults} \begin{tabular}{c|p{.4\textwidth}p{.4\textwidth}} \toprule & \multicolumn{2}{c}{\textbf{Survey Questions}} \\ \midrule \textbf{} & \textbf{2018 (n = 45)} & \textbf{2019 (n = 15)} \\ \midrule Q1 & lrn learning (35), spj software project (32), l-2 understand (16) & lrn learning (8), thr theory (8), unc uncertainty (7) \\ Q2 & spj software project (18), agl agile (18), pdr plan driven (16) & thr theory (7), unc uncertainty (7), spj software project (3) \\ Q4 & agl agile (13), thr theory (12), spj software project (9) & spj software project (6), agl agile (5), pld plan driven (4) \\ Q5 & thr theory (12), spj software project (11), unc uncertainty (8) & thr theory (8), unc uncertainty (8), spj software project (3) \\ Q6 & spj software project (27), rea realism (24), lrn learning (23) & lrn learning (8), spj software project (8), other codes (3) \\ Q7 & spj software project (21), plf project lifecycle (15), agl agile (13) & agl agile (4), spj software project (3), other codes (2) \\ Q9 & spj software project (17), dvt development task (13), trb troubleshooting (13) & dvt development task (6), trb troubleshooting (6), other codes (3) \\ Q10 & spj software project (12), plf project lifecycle (8), thr theory (8) & 
thr theory (6), unc uncertainty (6), spj software project (4) \\ Q11 & lrn learning (26), l-3 experience (20), other codes (16) & lrn learning (6), rea realism (5), spj software project (4) \\ Q12 & spj software project (16), thr theory (16), unc uncertainty (12) & rea realism (6), spj software project (5), com complexity (4) \\ \midrule & \multicolumn{2}{c}{\textbf{Research Questions}} \\ \midrule \textbf{RQ1} & lrn learning (84), spj software project (70), rea realism (55), l-3 experience (51), thr theory (44) & lrn learning (22), spj software project (15), rea realism (14), thr theory (13), unc uncertainty (12) \\ \textbf{RQ2} & spj software project (50), thr theory (35), agl agile (31), plf project lifecycle (28), rea realism (23) & spj software project (10), thr theory (10), unc uncertainty (10), rea realism (7), agl agile (6) \\ \textbf{RQ3} & spj software project (49), thr theory (38), plf project lifecycle (28), agl agile (27), other codes (20) & thr theory (17), unc uncertainty (16), spj software project (14), agl agile (9), other codes (8) \\ \bottomrule \end{tabular} \end{table} \subsubsection*{RQ1. Intended Learning Objectives} Combining the results from both years, \textit{lrn learning}, \textit{spj software project}, and \textit{rea realism} are the codes most frequently related to RQ1. One student summarizes the objective of the learning approach as follows: \textit{``(To) get an experience on how projects work irl (in real life) and to get a feel for it''}. Regarding the learning aspects, the game activity is more frequently related to \textit{l-2 understand}. In contrast, \textit{l-3 experience} is more often related to the design-implement task (Q6) and to a combination of the two approaches (Q11). This shows an increasing level of complexity of the learning process, according to Bloom's taxonomy~\citep{krathwohl2009taxonomy}. \subsubsection*{RQ2. 
Theoretical Knowledge Reinforced} A wide range of codes related to the theoretical content of the course was mentioned by the participants. The sunburst plots in Figures~\ref{fig:rq22018} and \ref{fig:rq22019} cover most of the elements in our code book. Among them, \textit{spj software project}, \textit{agl agile}, and \textit{thr theory} are in the top five for both years. \begin{figure}[!ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{rq22018.png} \caption{2018 (n = 45)} \label{fig:rq22018} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{rq22019.png} \caption{2019 (n = 15)} \label{fig:rq22019} \end{subfigure} \caption{Distribution of code frequencies for theoretical knowledge reinforced, i.e., the sum of the results of Q2, Q7, and Q12.} \label{fig:rq2sunburst} \end{figure} \begin{itemize} \item \textit{\textbf{spj software project:}} In our codebook, this code has the longest tree structure (see Appendix~\ref{appendixA}), so it is not surprising it was mentioned several times. The sub-codes that contributed most to this are \begin{inparaenum}[i)] \item \textit{pln project planning} and \textit{ppl proper planning}; \item \textit{plf project lifecycle}, its sub-codes \textit{dev development}, and \textit{tst testing}; and \item \textit{pou project outcomes}, including \textit{cst costs} and \textit{dlt delivery time}. \end{inparaenum} Also, not surprisingly, project planning is more often mentioned in relation to the game activity (Q2), and development in relation to the design-implement task (Q7). Overall, the students' perceptions pointed out aspects the integrated approach is designed to reinforce. \item \textit{\textbf{agl agile:}} \textit{ust user stories}, \textit{spr sprint}, and \textit{apl agile planning} are the sub-codes most often mentioned. 
These three have been actively addressed by the second game instance and the design-implement task (Q2 and Q7). \item \textit{\textbf{pdr plan-driven:}} It is mostly mentioned in comparison to the agile method. For example, one student wrote \textit{``The things that are reinforced with the games are specially visualized with the agile model I would say. You've conducted the waterfall model in other projects, but the agile praxis was new and very educating.''}. \item \textit{\textbf{thr theory:}} Theoretical concepts such as \textit{flx flexibility}, \textit{vel velocity}, and \textit{tdb technical debt} have been mentioned, but \textit{unc uncertainty} is coded about three times as often overall. Uncertainty has been perceived in all steps of our integrated approach. Regarding the combination of the game and design-implement tasks, one student answered \textit{``the uncertainties in the game actually exist in real life projects''}. \end{itemize} \subsubsection*{RQ3. Challenges of a Real Project} Among the five most frequent codes, we noted a few similarities in both years: \textit{thr theory}, \textit{spj software project}, and \textit{agl agile}. We identified that different challenges relate to different steps of our approach, as follows: \begin{itemize} \item \textbf{Game activity:} \textit{spe story point estimation} and \textit{spl sprint planning} got particular attention during the game activity (reported by Q4). The students mentioned challenges such as \textit{``make small but relevant stories''}, \textit{``planning the sprint based on velocity''} and \textit{``prioritize what should be done''}. Challenges related to \textit{est estimation} are also mentioned for the plan-driven game, in particular as this was the first experience students had with such a task. \item \textbf{Design-implement task:} \textit{cod coding} (a sub-code of \textit{spj software project}) was a particularly tough challenge for the students during the design-implement task (Q9). 
In most cases, it is mentioned alongside \textit{trb troubleshooting}. For example, one student wrote \textit{``Challenges that were faced were things like trying to understand the task and how to implement and develop so it could go as smoothly as possible''}. \item \textbf{Integrated Experiential Learning Approach:} According to the students, \textit{unc uncertainty} (a sub-code of \textit{thr theory}) is the challenge best represented by both the game and design-implement tasks (Q5 and Q10). Some students wrote: \textit{``the uncertainty levels were especially well represented by the ever changing requirements''} and \textit{``in real life the project do not goes as planed''}. \end{itemize} \subsection{Reflection Report} The written reflection report provides a comprehensive overview of all the aspects the students considered important during the course activity. The notes, taken during the activity, describe the students' perceptions while carrying out the learning tasks, and their postmortem reflections show insights and lessons learned. They also provide more detail than the formative assessment, as the teams could elaborate freely on their reasoning. Through coding, we identified a wide range of aspects covered, as seen in Figure~\ref{fig:totalsunburstreport}. The results are consistent between 2018 and 2019 regarding the order and relative size of segments in the sunburst plots. Some codes were mentioned by all the teams, regardless of the year: \textit{spj software project}, \textit{plf project lifecycle}, \textit{agl agile}, \textit{ust user stories}, \textit{pld plan driven}, and \textit{rea realism}. Other codes were mentioned less often, as shown in the dataset (see complementary material \citep{molleri2020dataset}). 
\begin{figure}[!ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{e2018.png} \caption{2018 (n = 14)} \label{fig:e2018} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={5cm 3cm 3cm 0},clip,width=1\textwidth]{e2019.png} \caption{2019 (n = 7)} \label{fig:e2019} \end{subfigure} \caption{Summary of the codes in the reflection reports across years. The size of the blocks corresponds to the number of reports in which the code appears. We do not present here the frequency or the extent to which the codes are discussed.} \label{fig:totalsunburstreport} \end{figure} With regard to the different steps we instructed the students to reflect upon, results are as follows: \begin{enumerate}[1)] \item \textbf{Plan-driven game:} The aspects most often mentioned by students are \textit{spj software project}, \textit{plf project lifecycle}, \textit{req requirements}, \textit{tst testing}, and \textit{sch schedule plan}. Much of the reflection is dedicated to requirements specification and to planning the workflow as a schedule. Project management tools such as Gantt charts were frequently reported. From a learning perspective, students relate their learning to \textit{l-1 remember} and \textit{l-2 understand}. \item \textbf{Agile game:} Covers a range of topics such as \textit{apl agile planning}, \textit{spr sprints}, and \textit{ust user stories}, as well as \textit{spj software project} and \textit{plf project lifecycle}. The challenges students faced when estimating user stories and prioritizing them for sprint planning became clear. Theoretical aspects such as \textit{vel velocity}, \textit{mvp minimum viable product}, and \textit{tdb technical debt} are also frequently discussed. On the pedagogical side, levels \textit{l-1 remember} and \textit{l-2 understand} are most often evidenced. 
\item \textbf{Comparing the Game Instances:} It is important to notice a few similarities between the two game instances. Firstly, \textit{spj software project} and \textit{plf project lifecycle} are among the codes mentioned in all reports. Also, codes that reflect corresponding aspects of both methodologies appear, such as the pairs \textit{req requirements}/\textit{ust user stories} and \textit{sch schedule}/\textit{spr sprints}. Finally, the same learning levels, \textit{l-1 remember} and \textit{l-2 understand}, are identified in both cases, thus demonstrating the organization of ideas provided by the course's theoretical background. \item \textbf{Design-Implement task:} Codes that are mentioned most often include \textit{dvt development}, \textit{trb troubleshooting}, \textit{rea realism}, \textit{fle flexibility}, \textit{ust user stories}, and \textit{spj software project}. Three of these (i.e., \textit{rea realism}, \textit{ust user stories}, and \textit{spj software project}) are also frequently mentioned in the agile game, thus pointing out the similarities between the two activities. Among the theoretical aspects, \textit{unc uncertainty} is the most cited. From a pedagogical perspective, level \textit{l-3 apply} is more frequently mentioned, and some reports also showed \textit{l-4 analysis} elements. This points to increasing complexity in the cognitive domain of learning. \item \textbf{Meta-reflection:} In this step, teams discussed their interpretation of the integrated learning approach and the main aspects they have learned. Both the \textit{agl agile} and \textit{pld plan-driven} approaches are often discussed, as well as \textit{spj software project} and \textit{plf project lifecycle}. In addition, \textit{cso contextual solutions} has been extensively discussed, as students reflected on which method best fits the scenarios they faced during the integrated approach. 
For example, one team summarized: \textit{``there is no one-way approach to solve a problem. Different projects would need different ways to be handled based on the demands of the project''}. Other theoretical aspects such as \textit{fle flexibility} and \textit{unc uncertainty} are also frequently cited. From the pedagogical stance, \textit{l-4 analysis} is the most covered, which is a good signal, since it shows a progression in the way concepts are aggregated and discussed. \end{enumerate}
\section*{Introduction} \label{sec:intro} Let $X$ be a smooth projective toric variety over an algebraically closed field $k$ with Cox ring $S$ and irrelevant ideal $B$ (see \cite[\S5.2]{cox-little-schenck}). The Cox ring $S$ is a positively $\operatorname{Pic}(X)$-graded polynomial ring, i.e., $\operatorname{Pic}(X) \cong \mathbb{Z}^d$, and the multidegrees of the variables of $S$ lie in a single open half-space of $\mathbb{Z}^d$. There is a correspondence between $\operatorname{Pic}(X)$-graded $B$-saturated modules $M$ over $S$ and sheaves $\widetilde{M}$ on $X$~\cite{audin,musson,cox:homog} (see \cite{mustata-toric} when $X$ is not smooth). Unfortunately, the numerics of the minimal $\operatorname{Pic}(X)$-graded free resolutions for such $S$-modules do not obviously provide many geometric insights for $M$ when $X$ is not projective space. For example, a minimal $\operatorname{Pic}(X)$-graded free resolution of $M$ may be significantly longer than the dimension of $X$. However, this failure appears to be a consequence of imposing too much algebraic structure on the resolution. Approaching the problem from the geometric perspective, vector bundle resolutions of $\widetilde{M}$ are bounded in length by the dimension of $X$, but vector bundles on $X$ are significantly more complicated than line bundles on $X$. A proposed solution comes from \cite{virtual-original}, in which the authors introduce a type of resolution of $M$ by free $S$-modules, which they call a \emph{virtual resolution}, that better captures geometrically meaningful properties of $\operatorname{Pic}(X)$-graded $S$-modules, such as unmixedness, well-behavedness of deformation theory, and regularity of tensor products. Because virtual resolutions are defined up to the sheafification of $M$, the object of study is intrinsically geometric. Because the resolutions themselves are in the category of $S$-modules, they are naturally amenable to algebraic techniques. 
Although virtual resolutions are desirable for their ability to encode geometric information, we do not yet have the wealth of tools for studying them that we do for studying graded free resolutions. In particular, there are not yet many methods for constructing short virtual resolutions or for establishing the minimum possible length among the virtual resolutions of a chosen $\operatorname{Pic}(X)$-graded $S$-module $M$. We provide some of each in this paper. Our broad goal in this article is to work towards a rich understanding of \emph{virtually Cohen--Macaulay} modules (or virtually Cohen--Macaulay coherent sheaves) as an analogue to Cohen--Macaulay modules over the coordinate rings of affine or projective space. We provide two methods to construct short virtual resolutions either from longer virtual resolutions or from short resolutions of closely-related modules. These methods can be helpful in establishing that modules are virtually Cohen--Macaulay (see Propositions~\ref{prop:mapping-cone} and~\ref{prop:vreg-elt-quotient}). We also obtain homological obstructions to being virtually Cohen--Macaulay (see Section~\ref{sec:virtual-ext}). Guiding these structural developments is our production of a large class of virtually Cohen--Macaulay Stanley--Reisner rings in Section~\ref{sec:triangles}. The results on this class are hard won through the careful application of Hochster's formula, interpreted in a virtual setting, together with an analysis of the spectral sequence associated to a certain nerve complex. This not only provides us with a new source of examples of virtually Cohen--Macaulay modules as we work to develop the theory, but also, given the difficulty of studying even Stanley--Reisner rings in this context, highlights the need for the advent of more virtual homological tools. \subsection*{Acknowledgements} \label{subsec:acknowledgements} We would like to thank Daniel Erman and Gregory G. Smith for helpful conversations related to this work. 
We are also grateful to the anonymous referee for valuable feedback on a previous version of this paper. \section{Background and Statements of Main Results} \label{sec:prelim} Throughout this article, let $X$ be a smooth projective toric variety over the algebraically closed field $k$, and let $S = \Cox(X)$ with irrelevant ideal $B$. All $S$-modules are assumed to be finitely generated and $\Pic(X)$-graded, and all sheaves are assumed to be coherent. Let $M$ be an $S$-module. As in~\cite[Definition~1.1]{virtual-original}, a free graded $S$-complex $F_\bullet=[F_0\gets F_1\gets\dotsb]$ is a \emph{virtual resolution} of $M$ (or of $\widetilde{M}$) if the corresponding complex $\widetilde{F}_\bullet$ of vector bundles is a locally free resolution of the sheaf $\widetilde{M}$. Next, define the \emph{virtual dimension} of $M$, denoted $\vdim M$, to be the minimal length of a virtual resolution of $M$. For products of projective spaces, there is an inequality $\vdim M\ge\codim M$ (\cite[Proposition~2.5]{virtual-original}); in light of this and an analogue to the affine case, we say that $M$ is \emph{virtually Cohen--Macaulay} if $\widetilde{M} \neq 0$ and $\vdim M = \codim M$, the minimum possible. We say that a subscheme $V\subset X$ is \emph{virtually Cohen--Macaulay} if its Cox ring is virtually Cohen--Macaulay as an $S$-module. Although there is a precise description in the literature for when complexes are virtual resolutions (see \cite{Lop}), little is known about how to assess the virtual dimension of a module or how to construct virtual resolutions of minimal length, even when that minimal length is known. In this paper, we construct virtual resolutions of minimal length for a family of Stanley--Reisner rings in order to show that they are virtually Cohen--Macaulay. Before stating that result, we review the \emph{Stanley--Reisner correspondence} between simplicial complexes and squarefree monomial ideals. 
For a detailed introduction, we refer the reader to \cite{MS04}. \begin{definition*} Let $\Delta$ be a simplicial complex on $\{1,2,\dots,n\}$ and $R = k[x_1,x_2, \ldots, x_n]$. Define the \emph{Stanley--Reisner ideal} of $\Delta$ to be \[ I_\Delta = \left\< x_{i_1}x_{i_2} \cdots x_{i_k} \mid \{i_1, \ldots, i_k\} \notin \Delta \right\> \] and the \emph{Stanley--Reisner ring} of $\Delta$ to be $R/I_\Delta$. \end{definition*} We now state our main result on the existence of a new family of virtually Cohen--Macaulay rings (see Theorem~\ref{thm:monomial-vcm}). \begin{thm} Let $S$ be the Cox ring of $X = \PP^{n_1} \times \PP^{n_2}\times \cdots \times \PP^{n_r}$. If $\Delta$ is an $r$-dimensional simplicial complex and the variety $V(I_\Delta)\subseteq X$ is equidimensional, then $S/I_{\Delta}$ is virtually Cohen--Macaulay. \end{thm} Relationships between $\vdim M$ and $\dim X$ have been of interest since the introduction of virtual resolutions. In~\cite[Proposition~1.2, Theorem 5.1]{virtual-original} a Hilbert Syzygy Theorem-type bound, $\vdim M \le \dim X$, was given for an arbitrary $\Pic(X)$-graded $S$-module $M$ when $X$ is a product of projective spaces and for an arbitrary punctual scheme in any smooth projective toric variety $X$. Further, \cite{yang-monomial} shows that $\vdim S/I \le \dim X$ when $I$ is a relevant (i.e., $B^t \not\subseteq I$ for all $t \geq 1$) monomial ideal of $S$ and $X$ is a smooth projective toric variety. Our new result most directly compares with a similar theorem in the case of pure and balanced simplicial complexes, which are necessarily of dimension $r-1$ (see \cite[Theorem 1.3]{reu2019}). Our proof is constructive, and we illustrate its use in building explicit resolutions in Examples~\ref{ex:disjoint-lines} and~\ref{ex:monomial-nonCM-components}. Our second construction of short virtual resolutions for the purpose of realizing the virtual Cohen--Macaulay property comes by way of a mapping cone. 
It is precisely stated and proved as Proposition~\ref{prop:mapping-cone} and summarized below. There is not a directly analogous strategy for shortening locally free resolutions over $\mathbb{P}^n$. As such, this result is an example of a tool that is new to the virtual setting, rather than being a modification of a tool from the projective setting. \begin{Propo} Let $F_\bullet$ be a virtual resolution of an $S$-module $M$ of length $t$ such that $\Ext^t(M,S)^\sim = 0$. If $\Ext^t(M,S)$ admits a free resolution of length at most $t+1$, then we can construct a virtual resolution of $M$ of length $t-1$. \end{Propo} Additionally, we propose a notion of a \emph{virtually regular element} (see Definition~\ref{def:vreg-element}) and show (as Proposition~\ref{prop:vreg-elt-quotient}) that it can be used to produce virtual resolutions. \begin{Propo} If $M$ is a virtually Cohen--Macaulay $S$-module and $f$ is a virtually regular element on $M$, then $M/fM$ is virtually Cohen--Macaulay. \end{Propo} In Example~\ref{ex:vreg-element}, we use Proposition~\ref{prop:mapping-cone} to show that a particular squarefree monomial ideal defines a virtually Cohen--Macaulay quotient ring. We then quotient by a sequence of virtually regular elements to arrive at a virtually Cohen--Macaulay quotient ring outside of the squarefree monomial setting. This virtually Cohen--Macaulay quotient ring has an embedded associated prime, which we notice is irrelevant. One may note that if $M$ is an arithmetically Cohen--Macaulay $S$-module, then $M$ is virtually Cohen--Macaulay and that, if $M$ is virtually Cohen--Macaulay, then $M$ is \emph{geometrically Cohen--Macaulay} (Definition~\ref{def:geomCM}, Proposition~\ref{prop:aCM=>vCM=>gCM}). Additionally, if $M$ is virtually Cohen--Macaulay, then, if it has any associated primes of height other than $\codim M$, they must be irrelevant (Proposition~\ref{prop:equidim}), i.e., a virtually Cohen--Macaulay module is virtually unmixed. 
Through Examples~\ref{ex:disjoint-lines} and~\ref{ex:tangentbundle}, we show that all of these implications are strict. In addition to the tools above, we also provide some homological tools that provide exclusionary criteria for a module to have the virtually Cohen--Macaulay property. We also note that because the corresponding complex of sheaves is a locally free resolution, any virtual resolution can be used to compute $\Ext$ and $\Tor$ modules up to sheafification (as Propositions~\ref{prop:vExt} and~\ref{prop:vTor}), which implies the following (as Corollaries~\ref{cor:vanishingExt} and~\ref{cor:vanishingTor}). \begin{Propo} If the $S$-module $M$ has a virtual resolution of length $\ell$, then $\Ext^i_S(M,N)^\sim = 0 = \Tor_i^S(M,N)^\sim$ for all $S$-modules $N$ and all $i>\ell$. \end{Propo} We show how this fact can be used to give an example of a module that cannot be virtually Cohen--Macaulay (see Example~\ref{ex:disjoint-lines-P4}). We also show that the converses to Proposition~\ref{prop:vreg-elt-quotient} and Corollaries~\ref{cor:vanishingExt} and~\ref{cor:vanishingTor} are false. In particular, we see that the virtual dimension of a module is bounded below by the homological dimension of the associated coherent sheaf. Further, the homological dimension may be strictly greater than the virtual dimension, as demonstrated in Example~\ref{ex:tangentbundle}. \section{Virtually Cohen--Macaulay Stanley--Reisner rings} \label{sec:triangles} The purpose of this section is to prove the following theorem. \begin{theorem} \label{thm:monomial-vcm} Let $S$ be the Cox ring of $X = \PP^{n_1} \times \PP^{n_2}\times \cdots \times \PP^{n_r}$. If $\Delta$ is an $r$-dimensional simplicial complex and its associated variety $V(I_\Delta)\subseteq X$ is equidimensional, then $S/I_{\Delta}$ is virtually Cohen--Macaulay. 
\end{theorem} With the dimension constraints in Theorem~\ref{thm:monomial-vcm}, the variety $V(I_\Delta)\subseteq X$ must satisfy $\dim (V(I_\Delta))= 1$, and so this is a theorem about one-dimensional subvarieties of $X$ determined by monomial ideals. As such, it is natural to ask whether such a statement holds in a more general setting, for example by taking $X$ to be any smooth projective toric variety. The techniques outlined in this section rely heavily on the structure of the Cox ring of a product of projective spaces. In particular, we rely on our ability to separate the variables into groups based on the product structure of $X$. Thus, though the statement may hold with fewer hypotheses on $X$, the proof would have to be meaningfully different from the one given here. Let $S = k[x_{i,j}\mid {1\le i\le r, 0\le j\le n_i}]$ be the Cox ring of $X$ (of Theorem~\ref{thm:monomial-vcm}) and $B$ the irrelevant ideal of $S$. Throughout this section, we will consider simplicial complexes on the vertex set $\ensuremath{\mathcal{X}}$ corresponding to the variables $(x_{i,j})_{1\le i\le r, 0\le j\le n_i}$ of $S$. The vertices in $\ensuremath{\mathcal{X}}$ corresponding to $x_{i,\bullet}$ are said to have color $i$. Let $\Delta$ be a simplicial complex with vertices in $\ensuremath{\mathcal{X}}$. Define the \emph{color set of a face} $\sigma\in\Delta$ to be the set of the colors of the vertices of $\sigma$, denoted by $\colo(\sigma)$. We say that a face $\sigma\in\Delta$ is \emph{relevant} if $\colo(\sigma)=[r] = \{1,2,\dots,r\}$ and \emph{irrelevant} otherwise. A simplicial complex $\Delta$ is \emph{relevant} if it contains at least one relevant face, and it is \emph{irrelevant} otherwise. Note that if $\Delta$ is an irrelevant simplicial complex on $\ensuremath{\mathcal{X}}$, then $S/I_{\Delta}$ is irrelevant, i.e., the support of $S/I_{\Delta}$ is contained in $V(B) = \{P \in \Spec(S) \mid B \subseteq P\}$. 
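To make this terminology concrete, consider the following small example; it is a toy illustration of our own, included only for orientation, and is not one of the examples analyzed later. Take $X = \PP^{1} \times \PP^{1}$, so that $r = 2$ and $S = k[x_{1,0}, x_{1,1}, x_{2,0}, x_{2,1}]$, and let $\Delta$ be the simplicial complex on $\ensuremath{\mathcal{X}}$ with facets $\{x_{1,0}, x_{2,0}, x_{2,1}\}$ and $\{x_{1,1}, x_{2,0}, x_{2,1}\}$. The face $\{x_{1,0}, x_{2,0}\}$ has color set $\{1,2\} = [2]$ and is relevant, while $\{x_{2,0}, x_{2,1}\}$ has color set $\{2\}$ and is irrelevant; in particular, $\Delta$ is a relevant simplicial complex. The only minimal non-face of $\Delta$ is $\{x_{1,0}, x_{1,1}\}$, so
\[
I_\Delta = \left\langle x_{1,0} x_{1,1} \right\rangle
\quad \text{and} \quad
V(I_\Delta) = \left( \{[0:1]\} \times \PP^{1} \right) \cup \left( \{[1:0]\} \times \PP^{1} \right),
\]
a disjoint union of two lines. Here $\Delta$ is $2$-dimensional, i.e., $r$-dimensional, and $V(I_\Delta)$ is equidimensional of dimension $1$, so this example satisfies the hypotheses of Theorem~\ref{thm:monomial-vcm}.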
If $\Delta$ is a relevant simplicial complex on $\ensuremath{\mathcal{X}}$, then $\Delta$ is said to be \emph{virtually Cohen--Macaulay} if $S/I_\Delta$ is virtually Cohen--Macaulay. Our proof of Theorem~\ref{thm:monomial-vcm} begins with a lemma treating irrelevant faces of a fixed dimension. We aim to understand reduced simplicial homology of complexes associated with Stanley--Reisner rings in order to apply Reisner's criterion, which we will use to detect the virtually Cohen--Macaulay property. We will need to recall two pieces of standard terminology and introduce one new piece of notation. Recall that if $\sigma$ is a face of the simplicial complex $\Delta$, then we define the \emph{link of $\sigma$ in $\Delta$} to be \[ \link_{\sigma}(\Delta) = \{\sigma' \in \Delta \mid \sigma \cup \sigma' \in \Delta, \sigma \cap \sigma' = \varnothing \}. \] Recall also that $\widetilde{H}_{i}(\Delta;k)$ denotes the $i^{th}$ reduced simplicial homology of the simplicial complex $\Delta$ with coefficients in $k$. Finally, let $\ensuremath{\mathcal{B}}_r = \left\{\sigma \subseteq\ensuremath{\mathcal{X}} \mid \dim \sigma \le r, \sigma \text{ is irrelevant}\right\}$, the simplicial complex of all at most $r$-dimensional irrelevant simplices. \begin{lemma} \label{lem:irrelevant-almost-cm} The ring $S/I_{\ensuremath{\mathcal{B}}_r}$ is Cohen--Macaulay on the punctured spectrum. Also, $\widetilde{H}_{r-2}(\ensuremath{\mathcal{B}}_r;k)=k$ and $\widetilde{H}_{i}(\ensuremath{\mathcal{B}}_r;k)=0$ for $i<r$ with $i\neq r-2$. 
\end{lemma} \begin{proof} We will show that for all $\sigma\in\ensuremath{\mathcal{B}}_r\setminus\varnothing$, \[ \widetilde{H}_i(\link_\sigma(\ensuremath{\mathcal{B}}_r);k)=0 \quad \text{ for } i<\dim(\link_\sigma(\ensuremath{\mathcal{B}}_r)) = r-1-\dim(\sigma), \] $\widetilde{H}_{r-2}(\ensuremath{\mathcal{B}}_r;k)=k$, and $\widetilde{H}_{i}(\ensuremath{\mathcal{B}}_r;k)=0$ for $i\neq r-2$ and $i<r$. Let $\sigma\in \ensuremath{\mathcal{B}}_r$ be arbitrary. Let $\Delta = \link_{\sigma}(\ensuremath{\mathcal{B}}_r)$. Now for $C\subset [r]$ with $C^{c}\cup \colo(\sigma)\neq [r]$, consider the subcomplex $\Delta_C$ given by the faces of $\Delta$ that do not include the colors in $C$. Note in particular that $\Delta_C\cap \Delta_D =\Delta_{C\cup D}$. For every face $\gamma\in \Delta$, since $\gamma\cup\sigma\in\ensuremath{\mathcal{B}}_r$, it is irrelevant. Thus, there exists an $i$ such that $i\notin \colo(\gamma\cup \sigma)$. In particular, $i$ satisfies both $i\notin\colo(\sigma)$ and $\gamma\in \Delta_{\left\{i\right\}}$. Putting these together, $\left\{\Delta_{\left\{i\right\}}\right\}_{i\notin \colo(\sigma)}$ provides a covering of $\Delta$, which induces the Mayer--Vietoris spectral sequence: \[ E_{p,q}^1 = \bigoplus_{\substack{|C|=p+1>0\\C^{c}\cup \colo(\sigma)\neq [r]}} H_q(\Delta_C;k) \quad \Rightarrow \quad H_{p+q}(\Delta;k). \] We claim that $\Delta_C$ is the $\left(r-1-\dim(\sigma)\right)$-skeleton of the simplex on all vertices with color in $C^{c}$, excluding those vertices in $\sigma$. To see this, recall that we are restricting to $C$ with $C^{c}\cup \colo(\sigma)\neq [r]$. 
Thus for every simplex $\gamma$ on $\ensuremath{\mathcal{X}}$ with $\dim\gamma\le \dim(\link_{\sigma}(\ensuremath{\mathcal{B}}_r))=r-1-\dim(\sigma)$ and $\colo(\gamma)\subset C^{c}$, it must be that $\gamma\cup \sigma$ is irrelevant and belongs to $\ensuremath{\mathcal{B}}_r$, so $\gamma\in\Delta_C$. Now for $\sigma\neq \varnothing$, we must show that $\widetilde{H}_i(\Delta;k)=0$ for $i<\dim(\Delta) = r-1-\dim(\sigma)$. Since $\Delta_C$ is the $(r-1-\dim(\sigma))$-skeleton of a simplex, it cannot have reduced homology in degrees lower than $r-1-\dim(\sigma)$, and therefore $H_{q}(\Delta_C;k)=0$ for $0 < q < r-1-\dim(\sigma)$. Thus for $p+q<r-1-\dim(\sigma)$ with $q\neq 0$, we have that $E_{p,q}^1=0$, as can be seen in the $E^1$ page in Figure~\ref{fig:spectral-sequence}. In light of this, it suffices to show that the maps $E_{p+1,0}^1\rightarrow E_{p,0}^1$ give $0$ homology. But this can be observed in the total complex: since $H_0(\Delta_C;k)=k$ for $C\neq [r]$, and $\Delta_C= \varnothing$ if and only if $C=[r]$, the complex given by these maps is simply the simplicial chain complex of the nerve of the covering of $\Delta$ by $\left\{\Delta_{\left\{i\right\}}\right\}_{i\notin \colo(\sigma)}$, where the nerve is the simplicial complex given by \[ N\left(\left\{\Delta_{\left\{i\right\}}\right\}_{i\notin\colo(\sigma)}\right)=\left\{F\subset [r]\setminus \colo(\sigma) \ \bigg\vert \ \bigcap_{i\in F} \Delta_{\left\{i\right\}}\neq \varnothing\right\}. \] In this case, this nerve is the simplex on $[r]\setminus\colo(\sigma)$. Thus $E_{0,0}^2=k$ and $E_{p,q}^2=0$ for $0<p+q<r-1-\dim(\sigma)$. Therefore $\widetilde{H}_{i}(\Delta;k)=0$ for $i<\dim(\Delta)$. If instead $\sigma=\varnothing$, then the nerve is the boundary of the simplex on $[r]$, which is a sphere. 
In light of this, the result holds by direct computation if $r\le 2$. If $r>2$, then $E_{0,0}^2=k$ and $E_{p,q}^2=0$ for $0<p+q<r$ except $(p,q)=(r-2,0)$, where $E_{r-2,0}^2=k$. Thus $\widetilde{H}_{r-2}(\Delta;k)=k$, and $\widetilde{H}_{i}(\Delta;k)=0$ for $i<r$ with $i\neq r-2$. \begin{figure} \caption{\label{fig:spectral-sequence} An illustration of the $E^1$ page and the resulting vanishing on the $E^2$ page in the case of $\sigma\neq\varnothing$. The dots represent the potentially non-zero entries on any particular page. The vanishing of the entries on the bottom row of the $E^2$ page is due to the entries on the bottom row of the $E^1$ page forming a chain complex isomorphic to the nerve complex of the covering.} \begin{tikzpicture} \begin{scope} \draw[->] (0,0) -- (5,0) node [below] {$p$}; \draw[->] (0,0) -- (0,5) node [left] {$q$}; \draw[dashed,red] (0,4)--(4,0) node [midway,above,sloped] {\tiny$p+q=r-1-\dim(\sigma)$}; \node [left,black] at (0,4) {\tiny $r-1-\dim(\sigma)$} ; \node[blue] at (0,0) {$\bullet$}; \node[blue] at (1,0) {$\bullet$}; \node[blue] at (2,0) {$\cdots$}; \node[blue] at (3,0) {$\bullet$}; \node[blue] at (4,0) {$\bullet$}; \node[blue] at (0,4) {$\bullet$}; \node[blue] at (1,4) {$\bullet$}; \node[blue] at (2,4) {$\cdots$}; \node[blue] at (3,4) {$\bullet$}; \node[blue] at (4,4) {$\bullet$}; \node at (2.5,-1) {$E^1$}; \end{scope} \begin{scope}[shift={(8,0)}] \draw[->] (0,0) -- (5,0) node [below] {$p$}; \draw[->] (0,0) -- (0,5) node [left] {$q$}; \node [left,black] at (0,4) {\tiny $r-1-\dim(\sigma)$} ; \draw[dashed,red] (0,4)--(4,0) node [midway,above,sloped] {\tiny$p+q=r-1-\dim(\sigma)$}; \node[blue] at (0,0) {$\bullet$}; \node[blue] at (0,4) {$\bullet$}; \node[blue] at (1,4) {$\bullet$}; \node[blue] at (2,4) {$\cdots$}; \node[blue] at (3,4) {$\bullet$}; \node[blue] at (4,4) {$\bullet$}; \node at (2.5,-1) {$E^2$}; \end{scope} \end{tikzpicture} \end{figure} \end{proof} In Lemma~\ref{lem:irrelevant-almost-cm}, the $r$ used need not be the same as
$r$ in $X=\PP^{n_1}\times\cdots \times \PP^{n_r}$; however, in practice we will only need the case where the two $r$'s agree. Our next goal is to use Lemma~\ref{lem:irrelevant-almost-cm} to prove Theorem~\ref{thm:relevant-connected-ideal-VCM}, stated below, which is a special case of Theorem~\ref{thm:monomial-vcm}, a case to which we will ultimately reduce the main theorem. A relevant simplicial complex $\Delta$ is \emph{relevant-connected} if its geometric realization is (topologically) connected after removing the realizations of all of its irrelevant faces. Further, a subcomplex of $\Delta$ is called a \emph{relevant-connected component} if it is maximal among the relevant-connected subcomplexes of $\Delta$. \begin{theorem} \label{thm:relevant-connected-ideal-VCM} If $\Delta$ is an $r$-dimensional relevant-connected simplicial complex on $\ensuremath{\mathcal{X}}$, then $S/I_{\Delta}$ is virtually Cohen--Macaulay. \end{theorem} The case $r=1$ is that of a single projective space. In this case, the complex $\Delta$ is pure of dimension $1$ and relevant-connected, and so $\Delta$ is Cohen--Macaulay by Reisner's criterion. In order to prove Theorem~\ref{thm:relevant-connected-ideal-VCM}, we will need to introduce and study interior and exterior faces, which we do now. Let $\Delta$ be a relevant simplicial complex and $\sigma \neq \varnothing$ a face of $\Delta$. Let $\mbox{Ex}(\sigma, \Delta) = \link_{\sigma}(\Delta)\cap \link_{\sigma}(\ensuremath{\mathcal{B}}_r)$, and call the faces in $\mbox{Ex}(\sigma, \Delta)$ the \emph{exterior} faces of $\link_{\sigma}(\Delta)$. Call the rest of the faces in $\link_{\sigma}(\Delta)$ the \emph{interior} faces of $\link_{\sigma}(\Delta)$. \begin{remark} \label{rmk:intersect-ext-int} Note that the intersection of an exterior and an interior face is an exterior face.
\end{remark} \begin{example} \label{ex:interior-exterior} Consider the following example in $\PP^3\times \PP^3$, where in Figure~\ref{fig:PP3xPP3} the first copy of $\PP^3$ is colored red and the second copy of $\PP^3$ is colored blue. Consider the link of the red square vertex, whose faces consist of the red triangle vertices, blue pentagon vertices and dashed lines. The exterior faces are the red triangle vertices and the interior faces are the blue pentagon vertices and the dashed lines. Notice that the blue pentagons and the dashed line on the right-hand edge of the diagram are irrelevant, but are still interior faces. \begin{figure}[h] \centering \caption{Each column of vertices corresponds to a copy of $\PP^3$.} \label{fig:PP3xPP3} \begin{tikzpicture} \fill[gray] (1,-0.5)--(0,0)--(1,0.5) -- cycle; \fill[gray] (1,0.5)--(0,1)--(1,1.5) -- cycle; \fill[gray] (1,1.5)--(0,2)--(1,2.5) -- cycle; \fill[gray] (0,0)--(1,0.5)--(0,1) -- cycle; \fill[gray] (0,1)--(1,1.5)--(0,2) -- cycle; \fill[gray] (0,2)--(1,2.5)--(0,3) -- cycle; \draw (0,0)--(0,1)--(0,2)--(0,3); \draw (1,-0.5)--(1,0.5); \draw (1,1.5)--(1,2.5); \draw (1,-0.5)--(0,0); \draw (1,0.5)--(0,1)--(1,1.5); \draw (0,2)--(1,2.5)--(0,3); \draw[dashed] (0,0)--(1,0.5)--(1,1.5)--(0,2); \node[red,fill,regular polygon, regular polygon sides=3,scale=0.3] at (0,0) {$\bullet$}; \node[red,fill, rectangle, scale=0.9] at (0,1) {$$}; \node[red, fill,regular polygon, regular polygon sides=3,scale=0.3] at (0,2) {$\bullet$}; \node[red] at (0,3) {$\bullet$}; \node[blue] at (1,-0.5) {$\bullet$}; \node[blue,fill,regular polygon, regular polygon sides=5,scale=0.7] at (1,0.5) {$$}; \node[blue,fill,regular polygon, regular polygon sides=5,scale=0.7] at (1,1.5) {$$}; \node[blue] at (1,2.5) {$\bullet$}; \end{tikzpicture} \end{figure} The idea of interior and exterior faces can become considerably more complex. Consider the following illustration of the link of a cell in an example $\Delta$ on $\PP^n\times \PP^m\times \PP^{\ell}$.
In Figure~\ref{fig:octagon1}, the vertices corresponding to each of the parts of the product are colored red, blue, and green. Only the link is illustrated, and it is the link of a vertex that would be colored blue. Therefore the bold faces are the exterior faces, and the others are interior faces. \begin{figure}[h!] \centering \caption{The link in some $\Delta$ of a certain blue vertex on $\PP^n\times \PP^m\times \PP^{\ell}$.} \label{fig:octagon1} \begin{tikzpicture}[scale=1.5] \coordinate (a) at (0,0); \node[blue] at (a) {$\bullet$}; \coordinate (b) at (1,-1); \node[red] at (b) {$\bullet$}; \coordinate (c) at (2,-1); \node[red] at (c) {$\bullet$}; \coordinate (d) at (3,0); \node[red] at (d) {$\bullet$}; \coordinate (e) at (3,1); \node[blue] at (e) {$\bullet$}; \coordinate (f) at (2,2); \node[green] at (f) {$\bullet$}; \coordinate (g) at (1,2); \node[green] at (g) {$\bullet$}; \coordinate (h) at (0,1); \node[green] at (h) {$\bullet$}; \coordinate (x1) at (1,1); \node[red] at (x1) {$\bullet$}; \coordinate (x2) at (1,0); \node[green] at (x2) {$\bullet$}; \coordinate (x3) at (1.5,0); \node[red] at (x3) {$\bullet$}; \coordinate (x4) at (2,1); \node[green] at (x4) {$\bullet$}; \coordinate (x5) at (2,1.5); \node[red] at (x5) {$\bullet$}; \coordinate (x6) at (1.5,1.25); \node[green] at (x6) {$\bullet$}; \draw[ultra thick] (a) -- (b) -- (c) -- (d) -- (e) -- (f) -- (g) -- (h) -- cycle; \draw (h)--(b); \draw (x1)--(b); \draw (d)--(f); \draw (h) -- (x2) -- (x1) -- (x4) -- (x3) -- (x2) -- (c); \draw (c) -- (x3); \draw (c) -- (x4) -- (d); \draw (x4) -- (x5) -- (f) -- (x1); \draw (x1) -- (x6) -- (x5) -- (d); \draw (x1) -- (h); \draw (x1) -- (g); \draw (x1)--(x3); \draw (f)--(x6)--(x4); \draw[ultra thick] (c)--(x3)--(x1); \draw[ultra thick] (h) -- (x2); \draw[ultra thick] (f) -- (x6) -- (x4); \draw[ultra thick] (d) -- (x5); \end{tikzpicture} \end{figure} \end{example} \begin{lemma} \label{lem:twoface} Let $\Delta$ be a pure, relevant $r$-dimensional simplicial complex. 
If $\sigma \neq \varnothing$ is a simplex of $\Delta$, then every facet of $\link_{\sigma}(\Delta)$ has at most two codimension $1$ faces that are interior faces of $\link_{\sigma}(\Delta)$. Moreover, \vspace{-1.5mm} \begin{enumerate} \item A facet of $\link_{\sigma}(\Delta)$ has no codimension $1$ faces that are interior if and only if $\sigma$ uses some color at least twice. \item A facet $\tau$ of $\link_{\sigma}(\Delta)$ has exactly one codimension $1$ face that is interior if and only if $\tau$ shares a color with $\sigma$. \item A facet $\tau$ of $\link_{\sigma}(\Delta)$ has exactly two codimension $1$ faces that are interior if and only if $\tau$ uses some color at least twice. \end{enumerate} \end{lemma} \begin{proof} Let $\tau$ be a facet of $\link_{\sigma}(\Delta)$. By assumption, $\tau\cup\sigma$ is relevant and $\dim(\tau\cup\sigma) = r$. Since $\tau\cup\sigma$ is relevant, it has color $[r]$; since $\tau\cup\sigma$ has dimension $r$, it has exactly $r+1$ vertices. Putting these two facts together, it follows that in $\tau\cup\sigma$ exactly one color is used twice. The rest of the argument proceeds by carefully considering the locations of that twice-used color. Let the vertices of the twice-used color in $\tau\cup\sigma$ be labeled $v_1$ and $v_2$. Now, a codimension 1 face $\gamma$ of $\tau$ is an interior face if and only if $\tau\setminus\gamma\subset \left\{v_1,v_2\right\}$, since both of these conditions are the same as requiring that $\gamma\cup\sigma$ be colored by $[r]$. But this immediately implies that $\tau$ contains at most two codimension $1$ faces that are interior. Moving now to the exact number of codimension $1$ faces of $\tau$ that are interior faces, we consider which of $\tau$ or $\sigma$ contains each of the vertices $v_1$ and $v_2$. \vspace{-1.5mm} \begin{enumerate} \item There are no codimension $1$ faces of $\tau$ that are interior faces if and only if $v_1,v_2\in\sigma$, which is equivalent to $\sigma$ using some color twice. 
\item There is one codimension $1$ face of $\tau$ that is an interior face if and only if $v_i\in\sigma$ for precisely one $i\in\{1,2\}$. In this case, for $j\neq i$, we have $v_j\in \tau$. Therefore, $\mathcolor(\sigma)\cap\mathcolor(\tau)\neq \varnothing$. \item There are two codimension $1$ faces of $\tau$ that are interior faces if and only if both $v_1,v_2\in\tau$. This is equivalent to $\tau$ using some color at least twice. \qedhere \end{enumerate} \end{proof} We are now prepared to prove Theorem~\ref{thm:relevant-connected-ideal-VCM}. The proof makes heavy use of Reisner's criterion, which we record below for convenience. Reisner showed in his thesis that $S/I_\Delta$ is Cohen--Macaulay if and only if $\Delta$ is Cohen--Macaulay as a simplicial complex. It is for this reason, combined with the statement of Reisner's criterion, that the proof of Theorem~\ref{thm:relevant-connected-ideal-VCM} centers on the computation of reduced simplicial homology. \begin{theorem}[Reisner's Criterion] A simplicial complex $\Delta$ is Cohen--Macaulay if and only if $\widetilde{H}_i(\link_\sigma(\Delta);k) = 0$ for every face $\sigma\in\Delta$ and all $i<\dim \link_\sigma(\Delta)$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:relevant-connected-ideal-VCM}] First note that we may assume that all facets of $\Delta$ are relevant. We claim that all $r$-dimensional relevant-connected simplicial complexes having no irrelevant facets are pure. Note that an $r$-dimensional simplicial complex corresponds to a $1$-dimensional subvariety of $\PP^{\mathbf{n}}$ and relevant-connectivity of $\Delta$ implies connectivity of the corresponding variety. Since all connected 1-dimensional subvarieties of $\PP^{\mathbf{n}}$ are equidimensional, there cannot be any lower-dimensional relevant facets of $\Delta$. Then since $\Delta$ contains only relevant facets, $\Delta$ is pure. We will produce a simplicial complex $\Delta'$ that is Cohen--Macaulay and differs from $\Delta$ in only irrelevant faces.
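For small complexes, Reisner's criterion can be checked mechanically. The following Python sketch (a purely illustrative companion, not part of the argument; all function names are ours) computes reduced simplicial homology over the field $k=\mathbb{F}_2$ and tests the criterion for every link:

```python
from itertools import combinations

def k_faces(facets, k):
    """Sorted list of the k-dimensional faces of the complex generated by facets."""
    fs = set()
    for f in facets:
        fs.update(combinations(sorted(f), k + 1))
    return sorted(fs)

def rank_gf2(cols):
    """Rank of a list of F_2 column vectors, each encoded as a Python int."""
    basis = {}  # highest set bit -> basis vector with that highest bit
    for c in cols:
        while c:
            h = c.bit_length() - 1
            if h not in basis:
                basis[h] = c
                break
            c ^= basis[h]
    return len(basis)

def boundary_rank(facets, k):
    """Rank over F_2 of the simplicial boundary map C_k -> C_{k-1}."""
    if k <= 0:
        return 0
    index = {f: i for i, f in enumerate(k_faces(facets, k - 1))}
    cols = []
    for f in k_faces(facets, k):
        col = 0
        for g in combinations(f, k):  # the codimension-1 faces of f
            col |= 1 << index[g]
        cols.append(col)
    return rank_gf2(cols)

def reduced_betti(facets, k):
    """Dimension over F_2 of the reduced homology in degree k."""
    b = len(k_faces(facets, k)) - boundary_rank(facets, k) - boundary_rank(facets, k + 1)
    return b - 1 if k == 0 else b  # reduce in degree 0 (drop one component)

def is_cohen_macaulay(facets):
    """Reisner's criterion over F_2: every link (including the link of the empty
    face, i.e. the complex itself) has vanishing reduced homology below its
    dimension."""
    all_faces = [()] + [f for k in range(max(map(len, facets)))
                        for f in k_faces(facets, k)]
    for sigma in all_faces:
        s = set(sigma)
        lk = [tuple(sorted(set(f) - s)) for f in facets if s <= set(f)]
        lk = [f for f in lk if f]  # drop the empty facet when sigma is a facet
        if not lk:
            continue
        d = max(len(f) for f in lk) - 1
        if any(reduced_betti(lk, i) != 0 for i in range(d)):
            return False
    return True

print(is_cohen_macaulay([("a", "b"), ("b", "c")]))  # True: a connected path
print(is_cohen_macaulay([("a", "b"), ("c", "d")]))  # False: two disjoint edges
```

The path illustrates the $r=1$ case of a connected $1$-dimensional complex, which is Cohen--Macaulay, while two disjoint edges fail the criterion at $\widetilde{H}_0$.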
Note the theorem is trivially true when $r=1$, since this is the case of a single projective space, and, in this case, $\Delta$ must be a 1-dimensional pure and relevant-connected simplicial complex. It now follows easily from Reisner's criterion that $\Delta$ is Cohen--Macaulay. When $r>1$, there are two cases. First, consider the case that $\Delta$ is of the form $\Delta=\join(\tau,\Omega)$ for a face $\tau\neq \varnothing$ and a simplicial subcomplex $\Omega$ with $\mathcolor(\tau)\cap\mathcolor(\Omega)=\varnothing$. Since $\tau$ is a simplex, there is a bijection between the top-dimensional cells of $\Delta$ and those of $\Omega$. Thus, since $\Delta$ is relevant-connected, $\Omega$ is, too. Further, since $\dim \Delta=r$, $\dim \Omega = r-\dim \tau-1$. Moreover, since any face of $\Delta$ uses at most one color twice, we know that either $\dim\tau = \left|\mathcolor(\tau)\right|$ or $\dim\tau = \left|\mathcolor(\tau)\right|-1$. In the first case, we find that $\dim\Omega = \left|\mathcolor(\Omega)\right|-1$. Now restricting to the colors in $\mathcolor(\Omega)$ and applying \cite[Theorem 1.3]{reu2019} to $\Omega$, we can construct a Cohen--Macaulay simplicial complex $\Omega'$ that differs from $\Omega$ only on irrelevant faces. On the other hand, if $\dim\tau = \left|\mathcolor(\tau)\right|-1$, then $\dim\Omega = \left|\mathcolor(\Omega)\right|$ and $\dim\Omega<r$, so by replacing $\Delta$ with $\Omega$ and by using induction on $r$, we will construct a Cohen--Macaulay simplicial complex $\Omega'$ differing from $\Omega$ only on irrelevant faces. Now we take the simplicial complex $\Omega'$ and let $\Delta'=\join(\tau,\Omega')$. Then $\Delta'$ differs from $\Delta$ only in irrelevant faces. Since $\tau$ is a simplex, $\join(\tau,\Omega')$ can be constructed by iteratively taking the cone over $\Omega'$ by the vertices in $\tau$.
Since the cone over a Cohen--Macaulay simplicial complex is Cohen--Macaulay, $\Delta'$ is Cohen--Macaulay, and so $\Delta$ is virtually Cohen--Macaulay. For the second and final case, suppose that $\Delta$ is not of the form $\Delta=\join(\tau,\Omega)$, where $\tau\neq\varnothing$ is a face of $\Delta$ and $\mathcolor(\tau)\cap\mathcolor(\Omega)=\varnothing$. Then, define $\Delta'=\Delta\cup \ensuremath{\mathcal{B}}_r$. We claim that $\Delta'$ is Cohen--Macaulay, and we will show this using Reisner's criterion, i.e., we will show that \begin{equation} \label{eqn:Reisner} \widetilde{H}_{i}(\link_{\sigma}(\Delta');k)=0 \quad\text{for each face $\sigma$ of $\Delta'$ and all $i<d = \dim\link_{\sigma}(\Delta')$}. \end{equation} Since $\Delta'=\Delta\cup \ensuremath{\mathcal{B}}_{r}$, it follows that $\link_{\sigma}(\Delta') = \link_{\sigma}(\Delta) \cup \link_{\sigma}(\ensuremath{\mathcal{B}}_r). $ Then the long exact sequence of a pair yields the exact sequence \begin{equation}\label{eqn:les-pair} \widetilde{H}_{i}( \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k)\rightarrow \widetilde{H}_{i}(\link_{\sigma}(\Delta');k)\rightarrow H_{i}(\link_{\sigma}(\Delta'), \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k)\rightarrow \widetilde{H}_{i-1}( \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k).
\end{equation} Then for any $i$, so long as $\widetilde{H}_{i}(\link_{\sigma}(\ensuremath{\mathcal{B}}_{r});k)=\widetilde{H}_{i-1}(\link_{\sigma}(\ensuremath{\mathcal{B}}_{r});k)=0$, it suffices to show \begin{equation} \label{eqn:Reisner-relative} H_{i}(\link_{\sigma}(\Delta'), \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k)=0. \end{equation} We will first treat the case of $\sigma \neq \varnothing$ and then separately handle the case of $\sigma = \varnothing$. For $\sigma\neq\varnothing$, Lemma~\ref{lem:irrelevant-almost-cm} gives $\widetilde{H}_{i}(\link_{\sigma}(\ensuremath{\mathcal{B}}_r);k)=0$ for $i<d$, so it suffices to establish~\eqref{eqn:Reisner-relative}, i.e., that $H_i(\link_{\sigma}(\Delta'), \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k)=0$, for all $i<d$. Notice that \[ H_i(\link_{\sigma}(\Delta'), \link_{\sigma}(\ensuremath{\mathcal{B}}_r);k) = H_{i}(\link_\sigma(\Delta),\mbox{Ex}(\sigma, \Delta);k), \] where $\mbox{Ex}(\sigma, \Delta)= \link_{\sigma}(\Delta)\cap\link_{\sigma}(\ensuremath{\mathcal{B}}_r)$. To complete the proof for the case $\sigma\neq \varnothing$, we will show that $H_{i}(\link_\sigma(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$, starting with $i<d-2$. We will then treat separately the cases $i=d-2$ and $i=d-1$. When $i<d-2$, let $\tau$ be an $i$-face of $\link_{\sigma}(\Delta)$ of codimension at least $3$. Then we claim that $\tau$ is an exterior face. To see this, let $\tilde{\tau}$ be a facet of $\link_{\sigma}(\Delta)$ that contains $\tau$.
Because $\tau$ is of codimension at least $3$ in $\tilde{\tau}$, it is contained in at least $3$ codimension $1$ faces of $\tilde{\tau}$, and therefore, by Lemma~\ref{lem:twoface}, it is contained in at least one codimension $1$ face of $\tilde{\tau}$ that is exterior. Hence $\tau\in \mbox{Ex}(\sigma, \Delta)$, so $C_i(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$ and $H_i(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$ for $i<d-2$, as desired. When $i=d-2$, we must show that $H_{d-2}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$. To do so, we will again show that every $(d-2)$-face $\tau$ in $\link_{\sigma}(\Delta)$ is a boundary relative to $\mbox{Ex}(\sigma, \Delta)$. Without loss of generality, assume that $\tau$ is not in $\mbox{Ex}(\sigma, \Delta)$. Let $\tilde{\tau}$ be a facet of $\link_{\sigma}(\Delta)$ containing $\tau$. Since $\tau$ is of codimension $2$ in $\tilde{\tau}$, $\tau$ is contained in exactly two codimension $1$ faces of $\tilde{\tau}$. Further, since $\tau$ is not in $\mbox{Ex}(\sigma, \Delta)$, it must be that both of these codimension $1$ faces of $\tilde{\tau}$ are interior faces; call one of them $\xi$. By Remark~\ref{rmk:intersect-ext-int}, the codimension $1$ faces of $\xi$ other than $\tau$ must be in $\mbox{Ex}(\sigma, \Delta)$. Therefore, up to sign, the relative boundary with respect to $\mbox{Ex}(\sigma, \Delta)$ of $\xi$ is $\tau$. Since $\tau$ was arbitrary, $H_{d-2}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$, as desired. Finally, when $i=d-1$, we must show that $H_{d-1}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$. To do so, we will use Lemma~\ref{lem:twoface} to construct a graph (with loops). In the graph $G$, there is a distinguished vertex $*$, while the other vertices correspond to the codimension $1$ faces of $\link_{\sigma}(\Delta)$ that are interior.
Edges are placed to connect vertices corresponding to interior faces that are both contained in a common facet of $\link_{\sigma}(\Delta)$. When a facet of $\link_{\sigma}(\Delta)$ has exactly one codimension $1$ face that is interior, an edge is placed between the vertex corresponding to that interior face and $*$. Finally, if a facet of $\link_{\sigma}(\Delta)$ has no codimension $1$ faces that are interior, a loop is placed at $*$. Recall that we are currently in the case that $\Delta$ is not of the form $\join(\tau,\Omega)$, where $\tau$ is a face of $\Delta$ and $\mathcolor(\tau)\cap\mathcolor(\Omega)=\varnothing$. We claim that, in this case, $\link_{\sigma}(\Delta)$ contains at least one facet that has at most one codimension $1$ face that is interior. To see this, by way of contradiction, suppose that in $\link_{\sigma}(\Delta)$, all facets contain exactly two codimension 1 faces that are interior. Let $\tau\in \link_{\sigma}(\Delta)$ be such a facet, in which case $\tau\cup\sigma$ is a facet of $\Delta$. Since $\tau$ has exactly two codimension $1$ faces that are interior, by Lemma~\ref{lem:twoface}, $\tau$ uses some color at least twice. Then, since $\tau\cup\sigma$ contains $r+1$ vertices, uses all $r$ colors, and already repeats a color within $\tau$, it must be that the colors used in $\sigma$ are present only in $\sigma$ and not in $\tau$. But since a relevant simplex must contain all colors, the relevant codimension $1$ faces of $\tau\cup\sigma$ must all contain $\sigma$. Further, since $\Delta$ is relevant-connected, repeating this for the successive neighbors of $\tau\cup\sigma$ in $\Delta$, we find that all facets of $\Delta$ contain $\sigma$, and thus $\Delta=\join(\sigma,\Omega)$ for some $\Omega$, a contradiction. Therefore, it must be that $\link_{\sigma}(\Delta)$ contains at least one facet for which at most one of its codimension $1$ faces is interior.
By the previous paragraph, the graph $G$ is connected, and there is a commutative diagram \begin{equation} \label{eqn:graph-commutative-diagram} \begin{tikzcd} C_{d}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k) \arrow[r]\arrow[d,"\cong"] & C_{d-1}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)\arrow[d,"\cong"] \\ C_1(G,*;k) \arrow[r,twoheadrightarrow] & C_0(G,*;k). \end{tikzcd} \end{equation} The surjectivity of the bottom map in \eqref{eqn:graph-commutative-diagram} is a consequence of the fact that $G$ is connected, so $H_0(G,*;k)=0$. Since the vertical maps in the diagram are isomorphisms, the top map in \eqref{eqn:graph-commutative-diagram} is also surjective. Therefore, $H_{d-1}(\link_{\sigma}(\Delta),\mbox{Ex}(\sigma, \Delta);k)=0$, which concludes the proof of \eqref{eqn:Reisner} for any face $\sigma\neq \varnothing$ in $\Delta'$. It now remains to show that condition \eqref{eqn:Reisner} holds for $\sigma=\varnothing$. Before beginning this portion of the proof, note that the arguments above for the case $\sigma\neq\varnothing$ and $i\le d-2$ apply here as well to show that \begin{equation} \label{eqn:rel-vng} H_{i}(\Delta',\ensuremath{\mathcal{B}}_r;k)=0 \quad\text{for}\quad i\le r-2. \end{equation} Now consider the case that $\sigma=\varnothing$ and $i<r-2$, where $r=\dim(\Delta')$. By Lemma~\ref{lem:irrelevant-almost-cm}, $H_{i}(\ensuremath{\mathcal{B}}_r;k)=0$ for $i<r-2$. Putting this together with \eqref{eqn:rel-vng}, it now follows from \eqref{eqn:les-pair} that $\widetilde{H}_i(\Delta';k)=0$ for all $i<r-2$. It remains to show that condition \eqref{eqn:Reisner} holds for $\sigma=\varnothing$ in the cases $i=r-2$ and $i=r-1$.
The long exact sequence of a pair together with \eqref{eqn:rel-vng} yield the exact sequence: \[ H_{r-1}(\ensuremath{\mathcal{B}}_r;k)\rightarrow H_{r-1}(\Delta';k)\rightarrow H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)\rightarrow H_{r-2}(\ensuremath{\mathcal{B}}_r;k)\rightarrow H_{r-2}(\Delta';k)\rightarrow 0. \] Applying Lemma~\ref{lem:irrelevant-almost-cm}, this simplifies to \begin{equation} \label{eqn:empty-simplex-exact-sequence} 0\rightarrow H_{r-1}(\Delta';k)\rightarrow H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k) \rightarrow k \rightarrow H_{r-2}(\Delta';k)\rightarrow 0. \end{equation} Thus, it suffices to show that $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)=k$ and that the map $H_{r-1}(\Delta';k)\rightarrow H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)$ is the zero map, since this would imply that the map $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)\rightarrow k$ is an isomorphism, so that $H_{r-1}(\Delta';k)=H_{r-2}(\Delta';k)=0$, as desired. To see that $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)=k$, note first that the codimension $1$ faces of any $(r-1)$-simplex are irrelevant for dimension reasons. Thus, the boundary of any $(r-1)$-face in $\Delta'$ belongs to $\ensuremath{\mathcal{B}}_r$, so the relevant $(r-1)$-faces of $\Delta$ provide a generating set for $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)$. By Lemma~\ref{lem:twoface}, every $r$-dimensional relevant face of $\Delta$ has precisely two relevant $(r-1)$-faces in its boundary.
Further, since $\Delta$ is relevant-connected, the classes of any two relevant $(r-1)$-faces of $\Delta'$ agree up to sign, and these classes are nonzero. Therefore, $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)=k$ and any relevant $(r-1)$-face of $\Delta'$ gives a generator of this homology group. Finally, to see that $H_{r-1}(\Delta';k)\rightarrow H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)$ is the zero map, let $\Gamma$ be the simplex on the color set $[r]$. There is a projection $\Delta'\rightarrow \Gamma$, given by mapping all vertices of a given color $i$ onto the vertex $i$. Since any relevant $(r-1)$-face of $\Delta'$ gives a generator of $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)=k$ and the induced map sends this generator to a nonzero element of $H_{r-1}(\Gamma,\partial \Gamma;k) = k$, this induced map $H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)\rightarrow H_{r-1}(\Gamma,\partial \Gamma;k)$ is an isomorphism. Now consider the following diagram induced by the map $\Delta'\rightarrow \Gamma$: \[ \begin{tikzcd} H_{r-1}(\Delta';k) \arrow[r] \arrow[d] & H_{r-1}(\Gamma;k) = 0 \arrow[d] \\ H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k) \arrow[r,"\cong"] & H_{r-1}(\Gamma,\partial \Gamma;k). \end{tikzcd} \] This diagram commutes, and, thus, the map $H_{r-1}(\Delta';k)\rightarrow H_{r-1}(\Delta',\ensuremath{\mathcal{B}}_r;k)$ is the zero map. Applying this fact to the exact sequence \eqref{eqn:empty-simplex-exact-sequence}, we get $H_{r-1}(\Delta';k)=H_{r-2}(\Delta';k)=0$.
\end{proof} \begin{example} Continuing with Example~\ref{ex:interior-exterior}, one of the critical steps in the proof of Theorem~\ref{thm:relevant-connected-ideal-VCM} is the reduction of some of the more troublesome homology groups (in the case that $\sigma\neq\varnothing$ and $i=d-1$) to the homology of a graph by the construction of the graph given by the interior faces of the link. It is Lemma~\ref{lem:twoface} that allows such a graph to be constructed. In Figure~\ref{fig:octagon2} that graph is shown with the vertices given by $\times$ symbols, the edges given by dashed lines, and the half edges are illustrated with an edge terminated with a $\circ$ symbol. \begin{figure}[h!] \centering \caption{The graph associated to the complex of interior faces of the link from Figure~\ref{fig:octagon1}.} \label{fig:octagon2} \begin{tikzpicture}[scale=1.5] \coordinate (a) at (0,0); \node[blue] at (a) {$\bullet$}; \coordinate (b) at (1,-1); \node[red] at (b) {$\bullet$}; \coordinate (c) at (2,-1); \node[red] at (c) {$\bullet$}; \coordinate (d) at (3,0); \node[red] at (d) {$\bullet$}; \coordinate (e) at (3,1); \node[blue] at (e) {$\bullet$}; \coordinate (f) at (2,2); \node[green] at (f) {$\bullet$}; \coordinate (g) at (1,2); \node[green] at (g) {$\bullet$}; \coordinate (h) at (0,1); \node[green] at (h) {$\bullet$}; \coordinate (x1) at (1,1); \node[red] at (x1) {$\bullet$}; \coordinate (x2) at (1,0); \node[green] at (x2) {$\bullet$}; \coordinate (x3) at (1.5,0); \node[red] at (x3) {$\bullet$}; \coordinate (x4) at (2,1); \node[green] at (x4) {$\bullet$}; \coordinate (x5) at (2,1.5); \node[red] at (x5) {$\bullet$}; \coordinate (x6) at (1.5,1.25); \node[green] at (x6) {$\bullet$}; \draw[ultra thick] (a) -- (b) -- (c) -- (d) -- (e) -- (f) -- (g) -- (h) -- cycle; \draw (h)--(b); \draw (x1)--(b); \draw (d)--(f); \draw (h) -- (x2) -- (x1) -- (x4) -- (x3) -- (x2) -- (c); \draw (c) -- (x3); \draw (c) -- (x4) -- (d); \draw (x4) -- (x5) -- (f) -- (x1); \draw (x1) -- (x6) -- (x5) -- 
(d); \draw (x1) -- (h); \draw (x1) -- (g); \draw (x1)--(x3); \draw (f)--(x6)--(x4); \draw[ultra thick] (c)--(x3)--(x1); \draw[ultra thick] (h) -- (x2); \draw[ultra thick] (f) -- (x6) -- (x4); \draw[ultra thick] (d) -- (x5); \coordinate (v1) at ($(h)!0.5!(b)$); \coordinate (v2) at ($(x2)!0.5!(b)$); \coordinate (v2p5) at ($(x2)!0.5!(c)$); \coordinate (v3) at ($(x2)!0.5!(x3)$); \coordinate (v4) at ($(x2)!0.5!(x1)$); \coordinate (v5) at ($(h)!0.5!(x1)$); \coordinate (v6) at ($(g)!0.5!(x1)$); \coordinate (v7) at ($(f)!0.5!(x1)$); \coordinate (v8) at ($(x6)!0.5!(x1)$); \coordinate (v9) at ($(x4)!0.5!(x1)$); \coordinate (v10) at ($(x4)!0.5!(x3)$); \coordinate (v11) at ($(x4)!0.5!(c)$); \coordinate (v12) at ($(x4)!0.5!(d)$); \coordinate (v13) at ($(x4)!0.5!(x5)$); \coordinate (v14) at ($(x6)!0.5!(x5)$); \coordinate (v15) at ($(f)!0.5!(x5)$); \coordinate (v16) at ($(f)!0.5!(d)$); \node at (v1) {$\times$}; \node at (v2) {$\times$}; \node at (v2p5) {$\times$}; \node at (v3) {$\times$}; \node at (v4) {$\times$}; \node at (v5) {$\times$}; \node at (v6) {$\times$}; \node at (v7) {$\times$}; \node at (v8) {$\times$}; \node at (v9) {$\times$}; \node at (v10) {$\times$}; \node at (v11) {$\times$}; \node at (v12) {$\times$}; \node at (v13) {$\times$}; \node at (v14) {$\times$}; \node at (v15) {$\times$}; \node at (v16) {$\times$}; \draw[dashed] (v1)--(v2)--(v2p5)--(v3)--(v4)--(v5)--(v6)--(v7)--(v8)--(v9)--(v10)--(v11)--(v12)--(v13)--(v14)--(v15)--(v16); \end{tikzpicture} \end{figure} \end{example} In light of Theorem~\ref{thm:relevant-connected-ideal-VCM}, to complete the proof of Theorem~\ref{thm:monomial-vcm} it remains to show that it is enough to show that $S/I_\Delta$ is virtually Cohen--Macaulay on each of the components of its support. \begin{prop} \label{prop:disjoint-support} Let $S$ be the Cox ring of a smooth projective toric variety $X$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. 
If $M$ is a module with equidimensional support $\mathcal{X}=\bigsqcup X_i$ with disjoint components $X_i$, then $M$ is virtually Cohen--Macaulay if each $M|_{X_i}$ is virtually Cohen--Macaulay. \end{prop} \begin{proof} Let $N = \bigoplus M|_{X_i}$. Then we claim that $\widetilde{M} \cong \widetilde{N}$. Since $N$ is a direct sum, we can decompose $\widetilde{N}$ as \[ \widetilde{N}=\bigoplus \widetilde{M|_{X_i}}. \] Since $ \mathcal{X}=\bigsqcup X_i$, we also have \[ \widetilde{M}=\bigoplus\widetilde{M|_{X_i}}. \] Thus a virtual resolution of $N$ is a virtual resolution of $M$. Since $M|_{X_i}$ is virtually Cohen--Macaulay, $\vdim M|_{X_i} =\codim M|_{X_i}$. Since $ \mathcal{X}$ is equidimensional, we have that $\vdim M|_{X_i}=\codim M$. Finally, a direct sum of virtual resolutions is a virtual resolution of the direct sum, so $\vdim M=\vdim N=\codim M$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:monomial-vcm}] This result is now an immediate consequence of Theorem~\ref{thm:relevant-connected-ideal-VCM} and Proposition~\ref{prop:disjoint-support}, where $M$ in the proposition is $S/I_\Delta$ and the $X_i$ correspond to the relevant-connected components of $\Delta$. \end{proof} \begin{example} \label{ex:disjoint-lines} Let $k[x_0,\ldots,x_3]$ be the Cox ring of $X=\PP^3$ and consider the ideal \[ J = \<x_0x_2,x_0x_3,x_1x_2,x_1x_3\>, \] for which $S/J$ has free resolution \[ S^{1} \xleftarrow{ \begin{bmatrix} x_0x_2 & x_1x_2 & x_0x_3 & x_1x_3 \end{bmatrix} } S^{4} \xleftarrow{ \begin{bmatrix} -x_1 & 0 & -x_3 & 0 \\ \phantom{-}x_0 & 0 & 0 & -x_3 \\ 0 & -x_1 & \phantom{-}x_2 & 0 \\ 0 & \phantom{-}x_0 & 0 & \phantom{-}x_2 \end{bmatrix} } S^{4} \xleftarrow{ \begin{bmatrix} \phantom{-}x_3 \\ -x_2 \\ -x_1 \\ \phantom{-}x_0 \end{bmatrix} } S^{1} \xleftarrow{} 0.
\] Note that $J$ corresponds to a 1-dimensional simplicial complex with a single color, so Theorem~\ref{thm:monomial-vcm} implies that $S/J$ is virtually Cohen--Macaulay, with a short virtual resolution of the form \[ S^{2} \xleftarrow{ \begin{bmatrix} x_0 & x_1 & 0 & 0 \\ 0 & 0 & x_2 & x_3 \end{bmatrix} } S^{4} \xleftarrow{ \begin{bmatrix} -x_1 & 0 \\ \phantom{-}x_0 & 0 \\ 0 & -x_3 \\ 0 & \phantom{-}x_2 \end{bmatrix} } S^{2} \xleftarrow{} 0. \] See Example~\ref{ex:disjoint-lines-P4} for a discussion of the subscheme of $\PP^d$ cut out by $J$ when $d>3$. \end{example} \begin{example} \label{ex:monomial-nonCM-components} Let $X$ = $\PP^2\times \PP^2$, and consider the simplicial complex $\Delta$ that is homeomorphic to a cylinder, as shown in Figure~\ref{fig:cyl1}. \begin{figure}[h!] \centering \caption{A cylindrical $\Delta$ on $\PP^2\times\PP^2$.} \label{fig:cyl1} \begin{tikzpicture} \node (y0) at (1.9,1.5) {$y_0$}; \node (y1) at (2.5,0) {$y_1$}; \node (y2) at (2.5,3) {$y_2$}; \node (x0) at (-0.6,1.5) {$x_0$}; \node (x1) at (0,0) {$x_1$}; \node (x2) at (0,3) {$x_2$}; \draw (x0)--(y0); \draw (x1)--(y1); \draw (x2)--(y2); \draw (x2) -- (x0)--(x1); \draw[dotted] (x1)--(x2); \draw (y0)--(y1)--(y2)--(y0); \draw (x0)--(y1); \draw[dotted] (x1)--(y2); \draw (x2)--(y0); \fill[gray,opacity=0.2] (x0.center)--(y1.center)--(x1.center)--cycle; \fill[gray,opacity=0.2] (y1.center)--(y0.center)--(x0.center)--cycle; \fill[gray,opacity=0.2] (x0.center)--(y0.center)--(x2.center)--cycle; \fill[gray,opacity=0.2] (y2.center)--(y0.center)--(x2.center)--cycle; \fill[gray,opacity=0.2] (x1.center)--(x2.center)--(y2.center)--cycle; \fill[gray,opacity=0.2] (x1.center)--(y1.center)--(y2.center)--cycle; \end{tikzpicture} \end{figure} The Stanley--Reisner ideal corresponding to $\Delta$ is $I_{\Delta}=\<x_0y_2,x_1y_0,x_2y_1,x_0x_1x_2,y_0y_1y_2\>$. Since $\widetilde{H}_1(\Delta;k)\neq 0$ and $\dim\Delta=2$, Reisner's criterion implies that $S/I_{\Delta}$ is not Cohen--Macaulay.
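As a quick machine check of the claim $\widetilde{H}_1(\Delta;k)\neq 0$ (a purely illustrative sketch over $k=\mathbb{F}_2$, not part of the argument; the facet list is read off from the shaded triangles in Figure~\ref{fig:cyl1}), one can compute ranks of the $\mathbb{F}_2$ boundary maps:

```python
from itertools import combinations

def k_faces(facets, k):
    """Sorted list of the k-dimensional faces spanned by the given facets."""
    fs = set()
    for f in facets:
        fs.update(combinations(sorted(f), k + 1))
    return sorted(fs)

def rank_gf2(cols):
    """Rank of a list of F_2 column vectors, each encoded as a Python int."""
    basis = {}  # highest set bit -> basis vector with that highest bit
    for c in cols:
        while c:
            h = c.bit_length() - 1
            if h not in basis:
                basis[h] = c
                break
            c ^= basis[h]
    return len(basis)

def boundary_rank(facets, k):
    """Rank over F_2 of the simplicial boundary map C_k -> C_{k-1}."""
    if k <= 0:
        return 0
    index = {f: i for i, f in enumerate(k_faces(facets, k - 1))}
    cols = []
    for f in k_faces(facets, k):
        col = 0
        for g in combinations(f, k):  # the codimension-1 faces of f
            col |= 1 << index[g]
        cols.append(col)
    return rank_gf2(cols)

def betti(facets, k):
    """Dimension over F_2 of the (unreduced) homology H_k."""
    return len(k_faces(facets, k)) - boundary_rank(facets, k) - boundary_rank(facets, k + 1)

# The six shaded triangles of the cylinder Delta in Figure fig:cyl1.
cyl = [("x0", "y1", "x1"), ("y1", "y0", "x0"), ("x0", "y0", "x2"),
       ("y2", "y0", "x2"), ("x1", "x2", "y2"), ("x1", "y1", "y2")]
print(betti(cyl, 1))  # 1: the middle circle of the cylinder survives

# Capping both ends with the single-color triangles kills this class:
capped = cyl + [("x0", "x1", "x2"), ("y0", "y1", "y2")]
print(betti(capped, 1), betti(capped, 2))  # 0 1
```

The two added single-color triangles are irrelevant faces, so this computation is consistent with the complex becoming Cohen--Macaulay after irrelevant faces are added.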
On the other hand, if we consider the simplicial complex given by $\mathcal{B}_{2}\cup \Delta$, which is illustrated in Figure~\ref{fig:cyl2} and corresponds to the ideal $J=\left<x_0y_2,x_1y_0,x_2y_1\right>$, then one can check that Reisner's criterion is satisfied in this case. Since $\widetilde{I_{\Delta}}=\widetilde{J}$, we conclude that $S/I_{\Delta}$ is virtually Cohen--Macaulay. \begin{figure}[h] \centering \caption{Adding irrelevant faces to $\Delta$ in Figure~\ref{fig:cyl1} yields a Cohen--Macaulay complex.} \label{fig:cyl2} \begin{tikzpicture} \node (y0) at (1.9,1.5) {$y_0$}; \node (y1) at (2.5,0) {$y_1$}; \node (y2) at (2.5,3) {$y_2$}; \node (x0) at (-0.6,1.5) {$x_0$}; \node (x1) at (0,0) {$x_1$}; \node (x2) at (0,3) {$x_2$}; \draw (x0)--(y0); \draw (x1)--(y1); \draw (x2)--(y2); \draw (x2) -- (x0)--(x1); \draw[dotted] (x1)--(x2); \draw (y0)--(y1)--(y2)--(y0); \draw (x0)--(y1); \draw[dotted] (x1)--(y2); \draw (x2)--(y0); \fill[gray,opacity=0.2] (x0.center)--(y1.center)--(x1.center)--cycle; \fill[gray,opacity=0.2] (y1.center)--(y0.center)--(x0.center)--cycle; \fill[gray,opacity=0.2] (x0.center)--(y0.center)--(x2.center)--cycle; \fill[gray,opacity=0.2] (y2.center)--(y0.center)--(x2.center)--cycle; \fill[gray,opacity=0.2] (x1.center)--(x2.center)--(y2.center)--cycle; \fill[gray,opacity=0.2] (x1.center)--(y1.center)--(y2.center)--cycle; \fill[blue,opacity=0.2] (x1.center)--(x2.center)--(x0.center)--cycle; \fill[blue,opacity=0.2] (y1.center)--(y2.center)--(y0.center)--cycle; \end{tikzpicture} \end{figure} \end{example} We will observe in Example~\ref{ex:virtreg-noconverse} that $S/\sqrt{I}$ being virtually Cohen--Macaulay does not imply the same for $S/I$, even when $I$ is a monomial ideal.
This example shows that the general monomial case cannot be reduced to the squarefree monomial case; moreover, it highlights that being virtually Cohen--Macaulay is a scheme-theoretic property rather than a set-theoretic one. For this reason, it is essential to develop tools for checking the intuition developed through Theorem~\ref{thm:monomial-vcm} in the not-necessarily-radical case and to use the examples it provides as scaffolding for new ones as we build towards a theory of virtual depth. The remainder of this paper is directed at that transition. \section{New virtual resolutions from old} \label{sec:new-from-old} We will now consider homological aspects of the virtually Cohen--Macaulay property. In particular, we introduce two homological constructions that allow us to build new virtually Cohen--Macaulay modules from those we have already shown to be virtually Cohen--Macaulay. Then, in the next section, we give homological obstructions to being virtually Cohen--Macaulay. For the remainder of the article, let $X$ be an arbitrary smooth projective toric variety with Cox ring $S$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. \subsection{A mapping cone construction} \label{subsec:mapping-cone} In this subsection, we introduce a mapping cone construction that, under certain conditions, allows us to use a virtual resolution of the module $M$ to construct a shorter virtual resolution of $M$. To begin, let $F_\bullet$ be a virtual resolution of $M$ of length $t$, and assume that $\Ext^t(M,S)^\sim=0$. Let $G^*_\bullet$ be a free resolution of $\Ext^t(M,S)$, shifted and with indexing reversed as in~\eqref{eqn:two-res}. By \cite[Prop.
A3.13]{eisenbud}, there is an induced map, which we denote by $\alpha^*$, from $F_\bullet^* = \Hom^\bullet_S(F,S)$ to $G^*_\bullet$: \begin{equation} \label{eqn:two-res} \begin{tikzcd} \cdots\arrow[r]&0\arrow[r]\arrow[d,"\alpha_{-1}^*"] & F_0^*\arrow[r,"\varphi_1^*"]\arrow[d,"\alpha_{0}^*"] & F_1^*\arrow[r,"\varphi_2^*"] \arrow[d,"\alpha_{1}^*"] & \cdots \arrow[r,"\varphi_{t-2}^*"] & F_{t-2}^*\arrow[r,"\varphi_{t-1}^*"] \arrow[d,"\alpha_{t-2}^*"] & F_{t-1}^*\arrow[r,"\varphi_{t}^*"] \ar[d,"\alpha_{t-1}^*"] & F_t^*\arrow[r] \arrow[d,"\alpha_{t}^*"] & 0 \\ \cdots\arrow[r]& G_{-1}^*\arrow[r,"\psi_{0}^*"'] & G_0^*\arrow[r,"\psi_{1}^*"'] & G_1^*\arrow[r,"\psi_{2}^*"'] & \cdots\arrow[r,"\psi_{t-2}^*"']& G_{t-2}^*\arrow[r,"\psi_{t-1}^*"'] & G_{t-1}^*\arrow[r,"\psi_{t}^* = \varphi_t^*"'] & G_t^*\arrow[r]& 0. \end{tikzcd} \end{equation} Dualizing yields the diagram \begin{equation} \label{eqn:tworesagain} \begin{tikzcd} \cdots&0\arrow[l]&F_0\arrow[l] & F_1\arrow[l,"\varphi_1"'] & \cdots \arrow[l,"\varphi_{2}"']& F_{t-2}\arrow[l,"\varphi_{t-2}"'] & F_{t-1}\arrow[l,"\varphi_{t-1}"'] & F_t\arrow[l,"\varphi_{t}"'] & \arrow[l] 0 \\ \cdots& G_{-1}\arrow[l,"\psi_{-1}"] \arrow[u,"\alpha_{-1}"'] & G_0\arrow[l,"\psi_{0}"] \arrow[u,"\alpha_{0}"'] & G_1 \arrow[l,"\psi_{1}"] \arrow[u,"\alpha_{1}"'] & \cdots\arrow[l,"\psi_{2}"]& G_{t-2}\arrow[l,"\psi_{t-2}"] \arrow[u,"\alpha_{t-2}"'] & G_{t-1}\arrow[l,"\psi_{t-1}"] \arrow[u,"\alpha_{t-1}"'] & G_t\arrow[l,"\psi_{t}=\varphi_t"] \arrow[u,"\alpha_{t}"'] & \arrow[l] 0.
\end{tikzcd} \end{equation} Then, the mapping cone of $\alpha\colon G\to F$, denoted $\cone(\alpha)$, is the complex \begin{align*} \cdots \gets G_{-2}\xleftarrow{\ \partial_0\ } \begin{matrix} F_{0}\\ \oplus\\ G_{-1}\end{matrix}\xleftarrow{\ \partial_1\ } \begin{matrix} F_{1}\\ \oplus\\ G_{0}\end{matrix}\xleftarrow{\ \partial_2\ } \begin{matrix} F_{2}\\ \oplus\\ G_{1}\end{matrix}\xleftarrow{\ \partial_3\ } \cdots\gets \begin{matrix} F_{t-1}\\ \oplus\\ G_{t-2}\end{matrix}\xleftarrow{\ \partial_{t}\ } \begin{matrix} F_{t}\\ \oplus\\ G_{t-1}\end{matrix}\xleftarrow{\ \partial_{t+1}\ } \begin{matrix} 0\\ \oplus\\ G_{t}\end{matrix}\gets 0, \end{align*} where the maps have the form $\partial_i = \begin{bmatrix} \varphi_i & \alpha_{i-1}\\ 0& \psi_{i-1}\end{bmatrix}$. Now, because $\psi_t = \varphi_t$, this reduces to the complex \begin{align} \label{eqn:mapping-cone} \cdots \gets G_{-2}\xleftarrow{\ \partial_0\ } \begin{matrix} F_{0}\\ \oplus\\ G_{-1}\end{matrix}\xleftarrow{\ \partial_1\ } \begin{matrix} F_{1}\\ \oplus\\ G_{0}\end{matrix}\xleftarrow{\ \partial_2\ } \begin{matrix} F_{2}\\ \oplus\\ G_{1}\end{matrix}\xleftarrow{\ \partial_3\ } \cdots\gets \begin{matrix} F_{t-2}\\ \oplus\\ G_{t-3}\end{matrix}\gets G_{t-2}\gets 0. \end{align} \begin{prop} \label{prop:mapping-cone} Let $S$ be the Cox ring of a smooth projective toric variety $X$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. Let $F_\bullet$ be a virtual resolution of $M$ of length $t$ such that $\Ext^t(M,S)^\sim = 0$, and let $\alpha$ be as in \eqref{eqn:tworesagain}. If $G_{-2}=0$ in \eqref{eqn:mapping-cone}, then (the minimization of) $\cone(\alpha)$ is a virtual resolution of $M$. \end{prop} \begin{proof} There is an exact triangle $G_\bullet \xrightarrow{\ \alpha\ }F_\bullet \to \cone(\alpha)\to G_\bullet[1]$, which induces the long exact sequence in homology \[ \cdots\to H_{i+1}(\cone(\alpha))\to H_i(G)\to H_i(F)\to H_i(\cone(\alpha))\to\cdots. 
\] Since $H_i(G)^\sim = 0$ for all $i$, it follows that the sheafified homology modules of $\cone(\alpha)$ are isomorphic to those of $F_\bullet$, and thus $\cone(\alpha)$ and its minimization are virtual resolutions of $M$. \end{proof} The mapping cone construction of Proposition~\ref{prop:mapping-cone} can be iterated as long as the hypotheses hold. \begin{example} \label{ex:disjoint-lines-continued} Referring again to Example~\ref{ex:disjoint-lines}, the mapping cone construction of Proposition~\ref{prop:mapping-cone} also yields a short virtual resolution of $S/J$. Since the variety $V(J)\subset X$ is simply the disjoint union of two lines, $S/J$ is not arithmetically Cohen--Macaulay even though its localization at every relevant $\Pic(X)$-graded prime ideal is Cohen--Macaulay. The minimal free resolution of $S/J$ is \[ 0\leftarrow S\leftarrow S(-2)^4\leftarrow S(-3)^4\leftarrow S(-4)\leftarrow 0. \] We will take the mapping cone of the following map of chain complexes, where the top chain complex is the dual of the free resolution of $\Ext^3(S/J,S) \cong k$: \[ \begin{tikzcd} 0&S\arrow[l] \arrow[d,"\alpha_{-1}"] & S(-1)^4\arrow[l,"\psi_{0}"'] \arrow[d,"\alpha_{0}"]& S(-2)^6 \arrow[l,"\psi_{1}"'] \arrow[d,"\alpha_{1}"] & S(-3)^4\arrow[l,"\psi_{2}"'] \arrow[d,"\alpha_{2}"] & S(-4)\arrow[l,"\psi_{3}"'] \arrow[d,"\alpha_{3}"] & \arrow[l] 0. \\ 0 & 0\arrow[l]&S\arrow[l] & S(-2)^4\arrow[l,"\varphi_1"'] & S(-3)^4\arrow[l,"\varphi_2"'] & S(-4) \arrow[l,"\varphi_3"']& \arrow[l] 0. \end{tikzcd} \] The mapping cone yields \[ S^2\leftarrow \begin{matrix} S(-1)^4\\ \oplus\\ S(-2)^4 \end{matrix} \leftarrow \begin{matrix} S(-2)^6\\ \oplus\\ S(-3)^{4}\end{matrix} \leftarrow \begin{matrix} S(-3)^4\\ \oplus\\ S(-4)^{\phantom{4}} \end{matrix} \leftarrow S(-4)\leftarrow 0, \] which after minimizing provides a virtual resolution of $S/J$ of length $\codim(J)$: \[ S^2\leftarrow S(-1)^4\leftarrow S(-2)^2\leftarrow 0.
\] Note that this resolution can also be constructed using the techniques of sheaves over simplicial complexes of \cite{yanagawa-sheaves}. \end{example} \begin{example} \label{ex:curve-mapping-cone} Consider the hyperelliptic curve $C$ of genus $4$ from \cite[Example~1.4]{virtual-original}, which can be embedded as a curve of bidegree $(2,8)$ in $\PP^1 \times \PP^2$; here $S = k[x_0,x_1,y_0,y_1,y_2]$. Writing $I$ for the defining ideal of $C$, the module $S/I$ has minimal free resolution \begin{align} \nonumber &S^1 \xleftarrow{\,\, \varphi_1\,\,} \begin{matrix} S(-3,-1)^1 \\[-3pt] \oplus \\[-3pt] S(-2,-2)^1 \\[-3pt] \oplus \\[-3pt] S(-2,-3)^2 \\[-3pt] \oplus \\[-3pt] S(-1,-5)^3 \\[-3pt] \oplus \\[-3pt] S(0,-8)^1 \end{matrix} \xleftarrow{\,\, \varphi_2\,\,} \begin{matrix} S(-3,-3)^3 \\[-3pt] \oplus \\[-3pt] S(-2,-5)^6 \\[-3pt] \oplus \\[-3pt] S(-1,-7)^1 \\[-3pt] \oplus \\[-3pt] S(-1,-8)^2 \end{matrix} \xleftarrow{\,\, \varphi_3\,\,} \begin{matrix} S(-3,-5)^3 \\[-3pt] \oplus \\[-3pt] S(-2,-7)^2 \\[-3pt] \oplus \\[-3pt] S(-2,-8)^1 \end{matrix} \xleftarrow{\,\, \varphi_4\,\,} S(-3,-7)^1 \gets 0 \, . \intertext{ Note that $\Ext^4_S(S/I,S)$ has finite length and that the dual of the shifted resolution of $\Ext^4_S(S/I,S)$ is } \begin{matrix} S(-1, -1)^1 \end{matrix} \xleftarrow{\,\, \psi_0\,\,} &\begin{matrix} S(-1, -3)^3\\[-3pt] \oplus \\[-3pt] S(-2, -1)^2 \end{matrix} \xleftarrow{\,\, \psi_1\,\,} \begin{matrix} S(-1, -5)^3\\[-3pt] \oplus \\[-3pt] S(-2, -3)^6\\[-3pt] \oplus \\[-3pt] S(-3, -1)^1 \end{matrix} \xleftarrow{\,\, \psi_2\,\,} \begin{matrix} S(-2, -8) \\[-3pt] \oplus \\[-3pt] S(-1, -7)\\[-3pt] \oplus \\[-3pt] S(-2, -5)^6 \\[-3pt] \oplus \\[-3pt] S(-3, -3)^3 \end{matrix} \xleftarrow{\,\, \psi_3\,\,} \begin{matrix} S(-3, -5)^3 \\[-3pt] \oplus \\[-3pt] S(-2, -7)^2\\[-3pt] \oplus \\[-3pt] S(-2, -8)^1 \end{matrix} \xleftarrow{\,\, \psi_4\,\,} S(-3, -7)^1 \gets 0\, .
\intertext{Applying Proposition~\ref{prop:mapping-cone} yields a virtual resolution for $S/I$ of the form } \label{eqn:mapcone1} &\begin{matrix} S^1 \\[-3pt] \oplus \\[-3pt] S(-1,-1)^1 \end{matrix} \xleftarrow{\,\, \rho_1\,\,} \begin{matrix} S(-2,-1)^1 \\[-3pt] \oplus \\[-3pt] S(-2,-2)^1 \\[-3pt] \oplus \\[-3pt] S(-1,-3)^3 \\[-3pt] \oplus \\[-3pt] S(0,-8)^1 \\[-3pt] \oplus \\[-3pt] S(-2,-1)^1 \end{matrix} \xleftarrow{\,\, \rho_2\,\,} \begin{matrix} S(-2,-3)^3 \\[-3pt] \oplus \\[-3pt] S(-1,-8)^2 \\[-3pt] \oplus \\[-3pt] S(-2,-3)^1 \end{matrix} \xleftarrow{\,\, \rho_3\,\,} S(-2,-8)^{1} \gets 0. \intertext{ However, the cokernel of $\rho_3^*$ is also irrelevant, so this procedure can be repeated: applying Proposition~\ref{prop:mapping-cone} to \eqref{eqn:mapcone1} yields the following virtual resolution for $S/I$: } \nonumber &\begin{matrix} S^1 \\[-3pt] \oplus \\[-3pt] S(0,-1)^2 \\[-3pt] \oplus \\[-3pt] S(0,-2)^1 \end{matrix} \xleftarrow{\,\, \partial_1\,\,} \begin{matrix} S(-1,-1)^2 \\[-3pt] \oplus \\[-3pt] S(-1,-2)^1 \\[-3pt] \oplus \\[-3pt] S(0,-3)^1 \\[-3pt] \oplus \\[-3pt] S(-1,-2)^1 \\[-3pt] \oplus \\[-3pt] S(0,-3)^1 \\[-3pt] \oplus \\[-3pt] S(-1,-1)^1 \\[-3pt] \oplus \\[-3pt] S(0,-3)^2 \end{matrix} \xleftarrow{\,\, \partial_2\,\,} S(-1,-3)^{5}\gets 0, \end{align} where \[ \partial_1 = \begin{bmatrix} 0&-{x}_{1}{y}_{0}-{x}_{1}{y}_{1}&{x}_{0}{y}_{0}^{2}&{y}_{1}^{2}{y}_{2}&-{x}_{1}{y}_{1}^{2}-{x}_{0}{y}_{2}^{2}&{y}_{0}{y}_{2}^{2}+{y}_{1}{y}_{2}^{2}&{x}_{0}{y}_{2}&{y}_{0}^{3}+{y}_{0}^{2}{y}_{1}&0\\ {-{x}_{1}}&{-{x}_{0}}&0&{y}_{0}^{2}&0&{-{y}_{1}^{2}}&0&{-{y}_{2}^{2}}&0\\ 0&0&{-{x}_{1}}&0&{-{x}_{0}}&{y}_{0}+{y}_{1}&0&0&{-{y}_{2}}\\ {x}_{0}&0&0&{y}_{2}^{2}&0&0&{-{x}_{1}}&{-{y}_{1}^{2}}&{y}_{0}^{2} \end{bmatrix} \] and \[ \partial_2 = \begin{bmatrix} {-{y}_{1}^{2}}&0&{y}_{0}^{2}&{-{y}_{2}^{2}}&0\\ {y}_{2}^{2}&0&0&{y}_{0}^{2}&{-{y}_{1}^{2}}\\ {y}_{0}+{y}_{1}&{-{y}_{2}}&0&0&0\\ 0&0&{x}_{1}&{x}_{0}&0\\
0&0&{y}_{2}&0&{y}_{0}+{y}_{1}\\ {x}_{1}&0&0&0&{x}_{0}\\ 0&{y}_{0}^{2}&{y}_{2}^{2}&{-{y}_{1}^{2}}&0\\ {-{x}_{0}}&0&0&{x}_{1}&0\\ 0&{x}_{1}&{-{x}_{0}}&0&0 \end{bmatrix}. \] \end{example} \subsection{The quotient by a virtually regular element} \label{subsec:vreg-element} The purpose of this subsection is to introduce the notion of a virtually regular element and to show that the quotient of a virtually Cohen--Macaulay module by a virtually regular element is again virtually Cohen--Macaulay. We do this by the explicit construction of a virtual resolution of the appropriate length for the quotient module arising from a virtual resolution of the original module. Recall that a module $M$ is irrelevant if $\widetilde{M} = 0$. \begin{definition} \label{def:vreg-element} Let $f \in S$ be homogeneous, and let $M$ be an $S$-module. If $\Ann_M f$ is irrelevant and $\dim M = 1+\dim M/fM$, then we say that $f$ is \emph{virtually regular on $M$} or that $f$ is a \emph{virtually regular element on $M$}. \end{definition} It is immediate that any regular element on $M$ is virtually regular and that no element of a minimal prime of $M$ can be virtually regular. The additional flexibility gained in considering virtually regular elements over regular elements alone is that an element of an embedded associated prime of $M$ can be virtually regular if its annihilator is sufficiently well controlled. Notice also that if $M'$ is an $S$-module satisfying $\widetilde{M'} = \widetilde{M}$, then $f$ is virtually regular on $M$ if and only if $f$ is virtually regular on $M'$. We will see below that quotienting by a virtually regular element preserves the virtually Cohen--Macaulay property, just as quotienting by a regular element preserves the Cohen--Macaulay property of modules. However, unlike in the affine setting, the converse is not true (see Example \ref{ex:virtreg-noconverse}). 
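For a minimal illustration of Definition~\ref{def:vreg-element} (a toy example over the Cox ring of $\PP^1$, added here for concreteness): let $S = k[x_0,x_1]$ with $B = \<x_0,x_1\>$, and let $M = S/\<x_0^2,x_0x_1\>$. The element $x_1$ is a zerodivisor on $M$ because $x_1\cdot \overline{x}_0 = 0$, yet
\[
\Ann_M x_1 \;=\; \<x_0\>/\<x_0^2,x_0x_1\>
\]
is a one-dimensional $k$-vector space annihilated by $B$, hence irrelevant, and $\dim M = 1 = 1 + \dim M/x_1M$. Thus $x_1$ is virtually regular on $M$ without being regular on $M$.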
\begin{prop} \label{prop:vreg-elt-quotient} Let $S$ be the Cox ring of a smooth projective toric variety $X$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. If $M$ has a virtual resolution of length $\ell$ and $f$ is a virtually regular element on $M$, then $M/fM$ has a virtual resolution of length $\ell+1$. In particular, if $M$ is virtually Cohen--Macaulay, then $M/fM$ is either virtually Cohen--Macaulay or irrelevant. \end{prop} \begin{proof} Because $\dim M=1+\dim M/fM$, it suffices to prove the first claim. Let $F_\bullet$ be a virtual resolution of $M$ of length $\ell$. Consider the complex $G_\bullet = 0 \rightarrow S \xrightarrow{f} S \rightarrow S/\<f\> \rightarrow 0$. We claim that the total complex, $E_\bullet$, of the double complex $F_\bullet\otimes_S G_\bullet$ gives a virtual resolution of $M/fM$. It is clear that $H_0(E) = M' \otimes_S S/\<f\>$ for some module $M'$ satisfying $\widetilde{M'} = \widetilde{M}$. Because $\Ann_M f$ is irrelevant, it follows that $H_0(E)^\sim = (M/fM)^\sim$. A standard diagram chase shows that the higher homology of the total complex is irrelevant. Because $E_\bullet$ has length $\ell+1$, we have found a virtual resolution of $M/fM$ of length $\ell+1$, as desired. \end{proof} \begin{definition} We say that the sequence $f_1, \ldots, f_k$ is a \emph{virtually regular sequence} on the module $M$ if $f_1$ is virtually regular on $M$ and if $f_i$ is virtually regular on $M/ \langle f_1, \ldots, f_{i-1} \rangle M$ for all $1<i \le k$. \end{definition} The following corollary is immediate from Proposition \ref{prop:vreg-elt-quotient}. \begin{corollary} If the $S$-module $M$ is virtually Cohen--Macaulay, and $f_1, \ldots, f_k$ is a virtually regular sequence on $M$, then $M/ \langle f_1, \ldots, f_k \rangle M$ is either virtually Cohen--Macaulay or irrelevant.
\end{corollary} \begin{example} \label{ex:vreg-element} Let $S = k[x_0,\ldots,x_5]$ be the Cox ring of $\PP^5$ and consider the ideal \[ J = \<x_0, x_1,x_2\> \cap \<x_3,x_4,x_5\>. \] With $M = S/J$ and $F_\bullet$ the minimal free resolution of $M$, the construction in Proposition~\ref{prop:mapping-cone} yields a virtual resolution of $M$ of length $\codim M = 3$, which shows that $M$ is virtually Cohen--Macaulay. We claim that $x_2-x_5$ is a virtually regular element on $M$, that $x_1-x_4$ is a virtually regular element on $M/\langle x_2-x_5 \rangle M$, and that $x_0-x_3$ is a virtually regular element on $M/ \langle x_2-x_5, x_1-x_4 \rangle M$. Because $x_2-x_5$ is a regular element, it is automatically a virtually regular element. Observe that \begin{align*} \overline{M}=\frac{M}{\langle x_2-x_5 \rangle M} &\cong \frac{S}{ \langle x_0x_3,x_0x_4,x_0x_2,x_1x_3,x_1x_4,x_1x_2,x_2x_3,x_2x_4,x_2^2,x_5 \rangle}\\ & \cong \frac{S}{\langle x_0,x_1,x_2,x_5 \rangle \cap \langle x_2,x_3,x_4,x_5 \rangle \cap \langle x_0,x_1,x_2^2,x_3,x_4,x_5 \rangle }. \end{align*} Now $x_1-x_4$ is not a regular element on $\overline{M}$, but it is not in either minimal prime of $\overline{M}$, and so $\dim \overline{M} = 1+\dim \overline{M}/ \langle x_1-x_4 \rangle \overline{M}$. The isomorphism presented above is given by $x_i \mapsto x_i$ for $i \neq 5$ and $x_5 \mapsto x_2-x_5$. After application of this isomorphism, it is easy to see that $\Ann_{\overline{M}} \langle x_1-x_4 \rangle = \langle x_2 \rangle \overline{M}$, which is irrelevant. Hence, $x_1-x_4$ is a virtually regular element that is not a regular element on $\overline{M}$. A similar computation shows that $x_0-x_3$ is in an embedded prime of $M/\langle x_2-x_5, x_1-x_4 \rangle M$ and has, after applying an analogous isomorphism to the one described above, an irrelevant annihilator generated by $x_1$ and $x_2$. Hence, $x_2-x_5, x_1-x_4,x_0-x_3$ is a virtually regular sequence on $M$. 
Therefore, since $M$ is virtually Cohen--Macaulay, so is each of the modules $M/\langle x_2-x_5 \rangle M$ and $M/ \langle x_2-x_5, x_1-x_4 \rangle M$ by Proposition~\ref{prop:vreg-elt-quotient}. Notice that $M/ \langle x_2-x_5, x_1-x_4,x_0-x_3 \rangle M$ is irrelevant. \end{example} It is worth noting, however, that the converse to Proposition~\ref{prop:vreg-elt-quotient} is false, even over the Cox ring of a single projective space, as seen in the example below. \begin{example}\label{ex:virtreg-noconverse} If $S = k[x_0,x_1,x_2]$ is the Cox ring of $\PP^2$, and $M = S/\langle x_0^2,x_0x_1 \rangle$, then there is no $f \in S$ so that $S/\< f \>^\sim \cong M^\sim$. Thus, the virtual dimension of $M$ is at least $2$, while $\codim M=1$, so $M$ is not virtually Cohen--Macaulay. However, $x_2$ is a (virtually) regular element on $M$, and $(M/\<x_2\>M)^\sim \cong (S/\<x_0,x_2\>)^\sim$, which is clearly virtually Cohen--Macaulay. Hence, we have an example of a module that is not virtually Cohen--Macaulay together with a virtually regular element on it such that the quotient by that element is virtually Cohen--Macaulay. Notice also that $x_2, x_1,x_0$ is a virtually regular sequence on $M$ while $x_0,x_1,x_2$ is not, which shows that virtually regular sequences need not be permutable. Additionally, because $S/\langle x_0 \rangle$ is virtually Cohen--Macaulay, this example shows that it is possible that a monomial ideal $I$ does not define a virtually Cohen--Macaulay scheme while its radical $\sqrt{I}$ does. \end{example} \section{Derived functors via virtual resolutions} \label{sec:virtual-ext} In this section, we record that the sheafifications of the $\Ext$ and $\Tor$ functors can be computed using virtual resolutions in place of free resolutions. This perspective gives necessary conditions on a module for it to be virtually Cohen--Macaulay.
The goals of this treatment are to record the relationship between the lengths of shortest virtual resolutions and homological dimension and to note some conditions on modules that prevent them from being virtually Cohen--Macaulay. These conditions are those that would prevent a module from having any virtual resolution of length equal to its codimension. As in the previous section, $X$ will always denote an arbitrary smooth projective toric variety with Cox ring $S$, and all $S$-modules will be finitely generated and $\Pic(X)$-graded. Nevertheless, we will endeavor to repeat these hypotheses in the statements of theorems. \begin{prop}\label{prop:vExt} Let $M$ and $N$ be finitely generated $\Pic(X)$-graded modules over the Cox ring $S$ of a smooth projective toric variety $X$. If $F_\bullet$ is any virtual resolution of $M$, then $\Ext^i_S(M,N)^\sim$ is the sheafification of the $i^{th}$ homology module of $\Hom_S(F_\bullet,N)$. \end{prop} \begin{proof} Fix a virtual resolution $F_\bullet$ of $M$ and an injective resolution $I^\bullet$ of $N$. We consider the spectral sequence of the double complex whose $(i,j)^{th}$ entry on page $0$ is $\Hom_S(F_i, I^j)$. Taking homology along each column yields a page $1$ whose only nonzero entries are $\Hom_S(F_\bullet,N)$. Hence, the module in position $(i,j)$ on page $\infty$ is the $i^{th}$ homology module of $\Hom_S(F_\bullet,N)$. As usual, let $B$ denote the irrelevant ideal of $S$. On the other hand, taking homology along rows first yields a page $1$ whose entry in position $(i,j)$ is $\Hom(L_i, I^j)$, where $L_i$ is the $i^{th}$ homology of $F_\bullet$, which is supported only on $B$ unless $i = 0$, in which case $L_0 = M$. Hence, the entry in position $(i,j)$ on page $\infty$ is $H_i \oplus K_i$ for some module $K_i$ supported only on $B$ and $H_i \subseteq \Ext^i_S(M,N)$ with $\Ext^i_S(M,N)/H_i$ supported only on $B$. Hence, $\Ext^i_S(M,N)^\sim \cong (H_i \oplus K_i)^\sim$, as desired.
\end{proof} It is a classical result that, for a module $M$ over the Cox ring $S$ of $\PP^d$, the condition that $H^i_B(M)^\vee \cong \Ext_S^{d+1-i}(M,S)$ be irrelevant (equivalently, finite length) for all $i<\dim(M)$ is equivalent to the condition that $M$ be equidimensional and Cohen--Macaulay after localizing at every relevant prime in its support. We will see in Section~\ref{sec:VCMvsOthers} that the virtually Cohen--Macaulay property implies the geometrically Cohen--Macaulay property and, for $B$-saturated modules, equidimensionality. Putting these facts together, we have that when $X$ is a single projective space, $M$ being virtually Cohen--Macaulay implies that $\Ext_S^j(M,S)$ is irrelevant for all $j>\codim(M)$. The following corollary, which is immediate from Proposition~\ref{prop:vExt}, extends that result to the arbitrary smooth projective toric setting. \begin{corollary} \label{cor:vanishingExt} Let $S$ be the Cox ring of a smooth projective toric variety $X$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. If $M$ has a virtual resolution of length $\ell$, then $\Ext^i_S(M,N)^\sim = 0$ for all $\Pic(X)$-graded finitely generated $S$-modules $N$ and all $i>\ell$. \qed \end{corollary} \begin{example}\label{ex:disjoint-lines-P4} In Example~\ref{ex:disjoint-lines}, we saw that $J = \<x_0,x_1\> \cap \<x_2,x_3\>$ defined a virtually Cohen--Macaulay subscheme of $\PP^3$. Corollary~\ref{cor:vanishingExt} implies that $J$ does not define a virtually Cohen--Macaulay subscheme of $\PP^d$ whenever $d>3$. In particular, with $S = k[x_0,\ldots,x_d]$, we have that $\Ext_S^3(S/J,S) \cong S/\<x_0,x_1,x_2,x_3\>$, which is not irrelevant. \end{example} \begin{remark} With notation as above, the virtual dimension of $M$ is greater than or equal to the homological dimension of $\widetilde{M}$.
To see this, observe that if $M$ has a virtual resolution of length $\ell$, then, because every virtual resolution of $M$ gives rise to a locally free resolution of $\widetilde{M}$, the homological dimension of $\widetilde{M}$ is at most $\ell$. However, it is not true that a module $M$ is virtually Cohen--Macaulay if and only if its sheafification has homological dimension equal to its codimension. For example, any module whose sheafification is a vector bundle that does not split as a direct sum of line bundles has homological dimension $0$ while its shortest virtual resolution must have positive length. Example~\ref{ex:tangentbundle} examines an explicit example of this type. \end{remark} We have seen in Corollary~\ref{cor:vanishingExt} that an $S$-module $M$ cannot have a virtual resolution of length $\ell$ if $\Ext^i_S(M,N)^\sim \neq 0$ for some $S$-module $N$ and some $i>\ell$. The following two corollaries combine to show that one need not consider all possible $N$; it is instead sufficient to check only $N=S$. \begin{corollary}\label{cor:inductiveStep} Let $S$ be the Cox ring of a smooth projective toric variety $X$, and let $M$ be a finitely generated $\Pic(X)$-graded $S$-module. If $\Ext_S^\ell(M,S)^\sim= 0$ and $\Ext_S^{\ell+1}(M,L)^\sim = 0$ for every finitely generated $\Pic(X)$-graded $S$-module $L$, then $\Ext_S^\ell(M,N)^\sim = 0$ for every finitely generated $\Pic(X)$-graded $S$-module $N$. \end{corollary} \begin{proof} Let $N$ be an $S$-module. There is a short exact sequence (of $S$-modules but typically not of graded $S$-modules) for some $a \ge 1$ and some $K$ of the form $0 \longrightarrow K \longrightarrow S^a \longrightarrow N \longrightarrow 0$. Applying $\Ext_S(M,-)$ to this yields \[ \Ext_S^\ell(M,S^a)^\sim \longrightarrow \Ext_S^\ell(M,N)^\sim \longrightarrow \Ext_S^{\ell+1}(M,K)^\sim = 0. \] Since $\Ext_S^{\ell}(M, S^a)^\sim \cong ((\Ext_S^{\ell}(M,S))^a)^\sim = 0$, it must be true that $\Ext_S^\ell(M,N)^\sim= 0$.
\end{proof} \begin{cor} Let $S$ be the Cox ring of a smooth projective toric variety $X$. Suppose the finitely generated $\Pic(X)$-graded $S$-module $M$ has the property that $\Ext_S^i(M,S)$ is irrelevant for all $i \ge \ell$, for some $\ell \ge 0$. Then $\Ext_S^i(M,N)$ is irrelevant for every finitely generated $\Pic(X)$-graded $S$-module $N$ and all $i \ge \ell$. \end{cor} \begin{proof} Because free resolutions are virtual resolutions, the claim is trivial if $\ell$ is greater than the projective dimension of $M$, denoted $\pdim M$, so we assume that $\ell \le \pdim M$. Now, using Corollary~\ref{cor:inductiveStep}, we proceed by induction on $\pdim M-\ell$. \end{proof} For completeness, we state an analogue for $\Tor$ of Proposition~\ref{prop:vExt}, which concerned $\Ext$. \begin{prop} \label{prop:vTor} Let $M$ and $N$ be finitely generated $\Pic(X)$-graded modules over the Cox ring $S$ of a smooth projective toric variety $X$. If $F_\bullet$ is any virtual resolution of $M$, then $\Tor^S_i(M,N)^\sim$ is the sheafification of the $i^{th}$ homology module of $F_\bullet \otimes_S N$. \end{prop} \begin{proof} The argument follows the proof of Proposition~\ref{prop:vExt} but uses the spectral sequence arising from the double complex $F_\bullet \otimes_S G_\bullet$, where $G_\bullet$ is a free resolution of $N$. \end{proof} \begin{corollary} \label{cor:vanishingTor} Let $S$ be the Cox ring of a smooth projective toric variety $X$. If a finitely generated $\Pic(X)$-graded $S$-module $M$ has a virtual resolution of length $\ell$, then $\Tor_i^S(M,N)^\sim = 0$ for all finitely generated $\Pic(X)$-graded $S$-modules $N$ and all $i>\ell$. \qed \end{corollary} Connecting back to Definition~\ref{def:vreg-element}, a virtually regular element has a description in terms of virtual $\Tor$, just as, in the affine case, a regular element can be described in terms of the vanishing of certain $\Tor$ modules.
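Concretely, tensoring the short exact sequence $0 \to S \xrightarrow{f} S \to S/\<f\> \to 0$ with $M$ and using that $\Tor_1^S(M,S) = 0$ gives the exact sequence
\[
0 \longrightarrow \Tor_1^S(M,S/\<f\>) \longrightarrow M \xrightarrow{\ f\ } M \longrightarrow M/fM \longrightarrow 0,
\]
so $\Tor_1^S(M,S/\<f\>) \cong \ker(M \xrightarrow{\ f\ } M) = \Ann_M f$. In the affine setting, $f$ is regular on $M$ exactly when this module vanishes; the virtual analogue replaces vanishing with irrelevance.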
\begin{prop} Let $S$ be the Cox ring of a smooth projective toric variety $X$, $M$ be a finitely generated $\Pic(X)$-graded $S$-module, and $f\in S$ be homogeneous. Then $f$ is virtually regular on $M$ (as in Definition~\ref{def:vreg-element}) if and only if $1+\dim M/fM = \dim M$ and $\Tor_1^S(M,S/\<f\>)^\sim = 0$. \end{prop} \begin{proof} Tensor the short exact sequence $0 \rightarrow S \xrightarrow{f} S \rightarrow S/\<f\> \rightarrow 0$ with $M$ to see that there is an isomorphism of $S$-modules $\Tor^S_1(M,S/\<f\>) \cong \Ann_M f$. \end{proof} \section{Relationships among the arithmetically, virtually, and geometrically Cohen--Macaulay properties} \label{sec:VCMvsOthers} As in the previous section, $X$ will always denote an arbitrary smooth projective toric variety with Cox ring $S$ and irrelevant ideal $B$, and all $S$-modules will be finitely generated and $\Pic(X)$-graded. We begin this section by recording a relationship between the arithmetically Cohen--Macaulay, virtually Cohen--Macaulay, and geometrically Cohen--Macaulay properties. Recall that an ideal $I$ is \emph{relevant} if $B^t \not\subseteq I$ for all $t \geq 1$. \begin{defn}\label{def:geomCM} An $S$-module $M$ is \emph{geometrically Cohen--Macaulay} if $M_P$ is a Cohen--Macaulay $S_P$-module for all relevant primes $P$ in the support of $M$. \end{defn} Note that the condition that $M$ be geometrically Cohen--Macaulay is equivalent to the condition that $\widetilde{M}$ be a Cohen--Macaulay sheaf on $X$. \begin{prop}\label{prop:aCM=>vCM=>gCM} Let $S$ be the Cox ring of a smooth projective toric variety $X$. If $M$ is a finitely generated $\Pic(X)$-graded $S$-module, then \begin{enumerate} \vspace{-2mm} \item if $M$ is arithmetically Cohen--Macaulay, then $M$ is virtually Cohen--Macaulay; and \item if $M$ is virtually Cohen--Macaulay, then $M$ is geometrically Cohen--Macaulay.
\end{enumerate} \end{prop} \begin{proof} If $M$ is arithmetically Cohen--Macaulay, then by the Auslander--Buchsbaum formula, $M$ has a free resolution of length $\codim M$. Because free resolutions are virtual resolutions, $M$ is thus virtually Cohen--Macaulay. Similarly, if $M$ is virtually Cohen--Macaulay, then it has a virtual resolution $F_\bullet$ of length $\codim M$. If $P$ is any relevant prime in the support of $M$, then localizing $F_\bullet$ at $P$ gives a free resolution $(F_P)_\bullet$ of $M_P$ of length at most $\codim M$. Because the codimension of $\Spec(S/\Ann_S(M))$ in $\Spec(S)$ is equal to the codimension of $\Spec(S_P/\Ann_{S_P}(M_P))$ in $\Spec(S_P)$, it follows from the Auslander--Buchsbaum formula that $M_P$ is Cohen--Macaulay. Hence $M$ is geometrically Cohen--Macaulay. \end{proof} We saw several times in Section~\ref{sec:triangles}, for example in Example~\ref{ex:disjoint-lines}, that implication (1) of Proposition~\ref{prop:aCM=>vCM=>gCM} is strict. We now give an example showing that implication (2) is also strict. \begin{example} \label{ex:tangentbundle} Even over the Cox ring of projective space, a module can be geometrically Cohen--Macaulay but not virtually Cohen--Macaulay. For example, if $S$ is the Cox ring of $\PP^d$ with $d > 1$ and $M$ corresponds to the tangent bundle, i.e., $M$ is the cokernel of the map \vspace{-1mm} \[ S^{d+1} \xleftarrow{ \begin{bmatrix} x_0 \\ \vdots \\ x_d \end{bmatrix} } S \leftarrow 0, \vspace{-1mm} \] then $\widetilde{M}$ is a vector bundle that does not split as a direct sum of line bundles; see~\cite[Theorem~8.1.6]{cox-little-schenck}. Thus $M$ has virtual dimension $1$ but codimension $0$. Meanwhile, for each $0 \le i \le d$, the matrix above has a unit entry after tensoring with $S[1/x_i]$, which shows that $M[1/x_i] \cong S[1/x_i]^{d}$, and so $M$ is geometrically Cohen--Macaulay.
In fact, not only is $M$ geometrically Cohen--Macaulay, but it is also a faithful module of depth $d$ with respect to the homogeneous maximal ideal of $S$. These properties show that the virtually Cohen--Macaulay property is not captured by the geometrically Cohen--Macaulay property together with depth information coming from the affine setting. \end{example} It was shown in \cite[Proposition 5.1]{virtual-original} that every $B$-saturated virtually Cohen--Macaulay module over the Cox ring $S$ of a product of projective spaces is unmixed. The argument, which we record here, generalizes to arbitrary smooth projective toric varieties. \begin{prop} \label{prop:equidim} Let $S$ be the Cox ring of a smooth projective toric variety $X$ with irrelevant ideal $B$. If $M$ is a finitely generated $\Pic(X)$-graded $B$-saturated $S$-module that is virtually Cohen--Macaulay, then $\dim S/P = \dim M$ for all associated primes $P$ of $M$. \end{prop} \begin{proof} Suppose that $M$ is a virtually Cohen--Macaulay module of codimension $c$, and suppose that there is some associated prime $P$ of $M$ of codimension $e>c$. Let $F_{\bullet}$ be a virtual resolution of length $c$. Because $M$ is $B$-saturated, every associated prime $P$ of $M$ is relevant. Hence, $(F_P)_\bullet$ gives an $S_P$-free resolution of $M_P$ of length at most $c$, so $\pdim_{S_P} M_P \le c$. On the other hand, because $P$ is an associated prime of $M$, we have $\operatorname{depth}_{S_P} M_P = 0$, so the Auslander--Buchsbaum formula gives $\pdim_{S_P} M_P = \operatorname{depth} S_P = e$. Then we obtain a contradiction because \[ \pdim_{S_P} M_P \le c<e = \pdim_{S_P} M_P. \qedhere \] \end{proof} Proposition~\ref{prop:equidim} motivates the definition of a \emph{virtually unmixed $S$-module}: \begin{defn} Let $M$ be an $S$-module, and let $N$ denote the $B$-saturated $S$-module satisfying $\widetilde{N} = \widetilde{M}$.
We say that $M$ is \emph{virtually unmixed} if it satisfies the following conditions, which are easily seen to be equivalent. \begin{enumerate} \item For all relevant associated primes $P$ of $M$, $\dim S/P = \dim M$. \item For all associated primes $P$ of $N$, $\dim S/P = \dim N$. \item The $B$-saturation of $\Ann_S M$ is an unmixed ideal. \item The annihilator $\Ann_S N$ is an unmixed ideal. \end{enumerate} \end{defn} The following corollary is immediate from Proposition~\ref{prop:equidim}. \begin{corollary} \label{cor:irrelevantEmbedded} If $M$ is a virtually Cohen--Macaulay $S$-module, then $M$ is virtually unmixed. \end{corollary} \raggedbottom \def\cprime{$'$} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{bibdiv} \begin{biblist} \bib{audin}{book}{ AUTHOR = {Audin, Mich\`ele}, TITLE = {The topology of torus actions on symplectic manifolds}, SERIES = {Progress in Mathematics}, VOLUME = {93}, NOTE = {Translated from the French by the author}, PUBLISHER = {Birkh\"{a}user Verlag, Basel}, YEAR = {1991}, PAGES = {181}, } \bib{virtual-original}{article}{ AUTHOR = {Berkesch, Christine}, AUTHOR = {Erman, Daniel}, AUTHOR = {Smith, Gregory G.}, TITLE = {Virtual resolutions for a product of projective spaces}, JOURNAL = {Alg. Geom.}, FJOURNAL = {Algebraic Geometry}, VOLUME = {7}, YEAR = {2020}, NUMBER = {4}, PAGES = {460--481}, } \bib{cox:homog}{article}{ AUTHOR = {Cox, David A.}, TITLE = {The homogeneous coordinate ring of a toric variety}, JOURNAL = {J.
Algebraic Geom.}, FJOURNAL = {Journal of Algebraic Geometry}, VOLUME = {4}, YEAR = {1995}, NUMBER = {1}, PAGES = {17--50}, } \bib{cox-little-schenck}{book}{ AUTHOR = {Cox, David A.}, AUTHOR = {Little, John B.}, AUTHOR = {Schenck, Henry K.}, TITLE = {Toric varieties}, SERIES = {Graduate Studies in Mathematics}, VOLUME = {124}, PUBLISHER = {American Mathematical Society, Providence, RI}, YEAR = {2011}, PAGES = {xxiv+841}, } \bib{eisenbud}{book}{ AUTHOR = {Eisenbud, David}, TITLE = {Commutative algebra with a view toward algebraic geometry}, SERIES = {Graduate Texts in Mathematics}, VOLUME = {150}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {1995}, PAGES = {xvi+785}, } \bib{Har}{book}{ AUTHOR = {Hartshorne, Robin}, TITLE = {Algebraic Geometry}, SERIES = {Graduate Texts in Mathematics}, VOLUME = {52}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {2000}, } \bib{monomial-radical}{article}{ author={Herzog, J.}, author={Takayama, Y.}, author={Terai, N.}, title={On the radical of a monomial ideal}, journal={Arch. Math. (Basel)}, volume={85}, date={2005}, number={5}, pages={397--408}, } \bib{reu2019}{article}{ author={Kenshur, Nathan}, author={Lin, Feiyang}, author={McNally, Sean}, author={Xu, Zixuan}, author={Yu, Teresa}, title={On virtually Cohen--Macaulay simplicial complexes}, journal={arXiv:2007.09443}, } \bib{Lop}{article}{ author = {Loper, Michael C.}, title = {What makes a complex virtual}, journal = {arXiv:1904.05994}, } \bib{MS04}{book}{ author = {Miller, Ezra}, author = {Sturmfels, Bernd}, TITLE = {Combinatorial Commutative Algebra}, SERIES = {Graduate Texts in Mathematics}, VOLUME = {227}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {2005}, PAGES = {xiv+417}, } \bib{musson}{article}{ author={Musson, Ian M.}, title={Differential operators on toric varieties}, journal={J. Pure Appl.
Algebra}, volume={95}, date={1994}, number={3}, pages={303--315}, } \bib{mustata-toric}{article}{ author={Musta\c{t}\u{a}, Mircea}, title={Vanishing theorems on toric varieties}, journal={Tohoku Math. J. (2)}, volume={54}, date={2002}, number={3}, pages={451--470}, } \bib{yanagawa-sheaves}{article}{ author={Yanagawa, Kohji}, title={Stanley--Reisner rings, sheaves, and Poincar\'{e}--Verdier duality}, journal={Math. Res. Lett.}, volume={10}, date={2003}, number={5-6}, pages={635--650}, } \bib{yang-monomial}{article}{ author={Yang, Jay}, title={Virtual resolutions of monomial ideals on toric varieties}, journal={Proc. Amer. Math. Soc. Ser. B, to appear}, year={2021} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{sec:intro} Ultracool dwarfs (hereafter UCDs) are stars with spectral types later than M7 and masses below 0.3\,M$_\odot$. Empirically, UCDs are found to have weak chromospheric emission (Gizis et al. 2000; Basri 2000) and to be dim at X-ray wavelengths. However, the occurrence of flares on these stars at optical as well as X-ray (e.g., Fleming et al. 2000), ultraviolet (e.g., Linsky et al. 1995), and radio wavelengths shows that magnetic activity does exist in very low-mass stellar configurations. The interiors of UCDs are presumably fully convective, and it has been proposed that the dynamo mechanisms behind the chromospheric and coronal activity of these UCDs might differ from those of solar-type stars (Chabrier \& Baraffe 2000). It is well known that stellar flares are due to magnetic reconnection in a strong magnetic field (e.g., Shulyak et al. 2017). However, the mechanism underlying the white-light continuum during these stellar flares is still not fully understood, although many models have been proposed, including a hydrogen recombination model (Kunkel 1969), a two-component model consisting of hydrogen recombination and an impulsively heated photosphere (Kunkel 1970), and a multi-component model (Zhilyaev et al. 2007) in which blackbody radiation dominates at the flare peak and the hydrogen continuum dominates during the flare decay. Gizis et al. (2013) proposed that the white-light emission is mainly contributed by thermal continuum emission. Thanks to high-cadence surveys such as Kepler (Paudel et al. 2018) and ASAS-SN (Schmidt et al. 2019), more late-type stellar flares have been reported and analyzed in detail (Schmidt et al. 2019; Kowalski et al. 2010, 2013; Davenport 2016; Chang et al. 2018; Frith et al. 2013). Paudel et al. (2018) pointed out that white-light flares are ubiquitous in M6--L0 dwarfs, as seen in the Kepler survey (Borucki et al. 2010) of ultracool dwarfs. Schmidt et al. (2019) reported that the energy of M dwarf flares ranges from $10^{32}$ to $10^{35}$ erg, based on an analysis of 47 ASAS-SN M dwarf flares. A flare with high energy ($E_U >10^{34}$ erg) is expected to occur once per month to once per year (Kowalski et al. 2010; Davenport et al. 2016; Rodriguez et al. 2018). These detections of UCD flares are helpful for understanding both the changes in the underlying magnetic dynamo and the interaction between the magnetic fields and the surfaces of these ultracool stars. Observationally, a white-light flare is typically a rapid transient characterized by an initial impulsive rise with a duration of seconds, followed by a decay with a timescale of seconds to hours (e.g., Davenport et al. 2014). Since flares occur stochastically, an attractive detection method is to monitor a large fraction of the sky with an automated survey at a cadence down to seconds. Ideally, the survey should have self-trigger capability and dedicated follow-up telescopes, which are required to capture the flares and to cover their total duration, from the quiescent state before the start of the event to the time at which the flare returns to the quiescent state. In this paper, we report the detection of a super stellar flare with an amplitude of $\Delta R=9.5$ mag on an M9 star by the GWAC system. Fast photometry and an optical spectrum of the flare were obtained. The total energy in the $R$ band is about $E_R=1.5\times10^{34}$ erg. This huge energy release makes the event one of the strongest late-M dwarf flares detected to date. The paper is organized as follows. The discovery of the super flare is described in Section 2. Section 3 reports the rapid follow-up photometry and spectroscopy. The properties of the flare are presented in Section 4. Section 5 gives the discussion and summary of this discovery.
\section{Detection by GWAC} \subsection{Detection and follow-up system of GWAC} As one of the main ground facilities of the $\text{SVOM}$\footnote{SVOM is a China-France satellite mission dedicated to the detection and study of gamma-ray bursts (GRBs).} mission (Wei et al. 2016; Yu et al. 2020), the GWAC (Ground-based Wide Angle Camera) system, located at Xinglong Observatory of NAOC, is an optical transient survey that images the sky down to $R\sim$16.0 mag at a cadence of 15 seconds. It aims to detect a variety of short-duration astronomical events, including the electromagnetic counterparts of gamma-ray bursts (Wei et al. 2016) and gravitational waves (Turpin et al. 2020), as well as stellar flares. The main characteristics and the survey strategy of GWAC are as follows; more detailed information can be found in Wang et al. (2020). The effective aperture size of each GWAC JFoV camera is 18 cm, with an f-ratio of $f/1.2$. Each camera is equipped with a 4096$\times$4096 E2V back-illuminated CCD chip. The wavelength range is from 0.5 to 0.85 $\mu m$. The field of view of each camera is 150 deg$^2$ and the pixel scale is 11.7 arcseconds. Each GWAC mount carries four JFoV cameras (called a unit in the GWAC system), so the total FoV of each unit is $\sim$ 600 deg$^2$. Currently, four units have been installed at Xinglong Observatory, Chinese Academy of Sciences, China. More units will be installed before the launch of the SVOM mission in 2022, aiming to cover about 5000 deg$^2$ simultaneously. During the survey, each unit is assigned to a given grid, pre-defined for the whole sky according to the FoV of each unit. The sky at Galactic latitude $ b < 20$ deg, as well as the grids near the Moon, is given lower priority, since the detection efficiency for transients in these regions is reduced by the higher star density or higher background noise.
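As a consistency check on the numbers above (our own back-of-the-envelope arithmetic, not from the text), the geometric CCD footprint implied by the chip size and pixel scale can be computed as follows; the quoted effective field of 150 deg$^2$ per camera is somewhat smaller, presumably after discarding low-quality edge regions (an assumption on our part):

```python
# Back-of-the-envelope field-of-view check for one GWAC JFoV camera.
# Inputs taken from the text: 4096x4096 CCD, 11.7 arcsec per pixel.
CCD_PIXELS = 4096
PIXEL_SCALE_ARCSEC = 11.7

side_deg = CCD_PIXELS * PIXEL_SCALE_ARCSEC / 3600.0  # chip side length in degrees
full_footprint_deg2 = side_deg ** 2                  # geometric CCD footprint

print(f"side = {side_deg:.2f} deg, footprint = {full_footprint_deg2:.0f} deg^2")
# The text quotes an effective FoV of 150 deg^2 per camera and ~600 deg^2
# per four-camera unit, both below this geometric upper bound.
```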
A dedicated rapid follow-up system has been developed for each candidate, using two Guangxi-NAOC 60 cm optical telescopes (F60A and F60B) deployed beside GWAC, with a typical delay time of one minute (Xu et al. 2020). Deeper imaging and spectroscopy can be carried out through Target of Opportunity observations with the 2.16 m telescope (Fan et al. 2016) at Xinglong Observatory and with the 2.4 m telescope at Gaomeigu Observatory, China. The high cadence, moderate detection limit, autonomous trigger capability, and dedicated rapid follow-up telescopes enable the GWAC system to detect a great number of stellar flares and to capture events similar to the super flare ASASSN-16ae ($\Delta V<11$ mag; Schmidt et al. 2016) with denser temporal sampling. \subsection{Detection of the flare} On 2018 December 29 UT10:42:51, an alert was generated by the GWAC on-line pipelines for a very bright optical transient (GWAC\,181229A) during a survey of one pre-defined field from 10:03:07.8 to 14:55:21.0 UT on the same night. The detection magnitude measured by the real-time pipelines was 13.5 mag in the $R$ band. The coordinates of the new source measured from the GWAC images are R.A.=01:33:33.08, DEC=00:32:23.02 (J2000), with a typical astrometric precision of about 2.0 arcseconds (1$\sigma$). This source was not detected in the reference image, which was obtained by stacking 10 images taken at around 10:04:21 UT, i.e., about 38 min before the trigger time. The finding charts of the detection image and the reference image observed by GWAC are shown in Figure~\ref{findchart}. The candidate shows a stellar profile, indicating that it likely did not originate from a hot pixel, a fast-moving object, or a ghost in the GWAC system. No apparent motion was found by the pipeline for the transient. No known minor planet or comet brighter than $V=20.0$ mag was found within the 15.0 arcminute region around the transient\footnote{https://minorplanetcenter.net/cgi-bin/mpcheck.cgi?}.
All this information indicates that the transient is a real astronomical event with a high level of confidence. \begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{G181229_C02390_n0.eps} \includegraphics[width=0.3\textwidth]{G181229_C02390_n1.eps} \includegraphics[width=0.3\textwidth]{G181229_C02390_n2.eps} \caption{Finding charts of GWAC\,181229A detected by GWAC. All these images were obtained by GWAC on the same night, and all the observation times are marked. The left panel is the reference image, obtained about 38 minutes before the onset of the event. The right two panels are images taken after the onset. The central source marked by the red circles is the object, which clearly fades during our observations. } \label{findchart} \end{figure} The on-line data processing showed that the transient faded by 0.9 mag across the single exposures within a duration of 2.5 minutes after the first detection by GWAC. The detection limit of all these single exposures was $R\sim$15.0 mag at a significance level of 3$\sigma$. We re-ran an off-line pipeline with standard aperture photometry at the location of the transient and of several nearby bright reference stars using the IRAF APPHOT package, including bias, dark, and flat-field corrections in the standard manner. After differential photometry, the final calibrated brightness of the transient was obtained from the SDSS catalogues through the Lupton (2005) transformation\footnote{http://www.sdss.org/dr6/algorithms/sdssUBVRITransform.html\#Lupton2005 (R = r - 0.2936*(r - i) - 0.1439; sigma = 0.0072) }. \begin{figure}[htbp] \centering \includegraphics[width=0.295\textwidth]{F60A_n1.eps} \includegraphics[width=0.295\textwidth]{F60A_n2.eps} \includegraphics[width=0.3\textwidth]{SDSS_noinverse.eps} \caption{The left and middle panels are the finding charts of GWAC\,181229A observed by F60A. The field size is about 3.0 arcmin. The observation times in UTC on 2018 Dec. 29 are labeled in the images. The marked source is the object GWAC\,181229A, whose brightness clearly fades during our observations. The right panel is derived from the SDSS DR13 survey for comparison. The central faint red source, with a magnitude of $r$=24.05 mag (Annis et al. 2014), is the counterpart of the flare. The angular distance between the position derived from F60A and the SDSS source is 0.695 arcsec. The size of the right panel differs from the left and middle ones only for clarity of display. } \label{findchart2} \end{figure} \section{Follow-ups by Imaging and Spectroscopy} \subsection{Photometry by F60A} Once the flare was triggered by the GWAC real-time pipeline, it was immediately followed up by F60A\footnote{The diameter is 60 cm and the f-ratio is 8.0. The detector on the mount is an Andor 2k$\times$2k CCD with a pixel scale of 0.52 arcseconds. } in the standard Johnson-Cousins $R$ band via a dedicated real-time automatic transient validation system (RAVS, Xu et al. 2020), which was developed to confirm candidates triggered by GWAC and to obtain an adaptive light-curve sampling for an identified target. With RAVS, the exposure time is dynamically and automatically adjusted based on the brightness evolution of the object. For the case of GWAC\,181229A, the exposure time ranged from 30 sec to 150 sec. The follow-up observations by F60A started 2 minutes after the trigger and stopped when the object became fainter than the detection limit of $\sim$19.0 mag, corresponding to a total duration of about 120 min.
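The Lupton (2005) transformation quoted in the footnote above maps SDSS $r,i$ magnitudes to Johnson-Cousins $R$; a minimal sketch (the function name is ours):

```python
def lupton_R(r: float, i: float) -> float:
    """Johnson-Cousins R from SDSS r, i (Lupton 2005; sigma = 0.0072 mag)."""
    return r - 0.2936 * (r - i) - 0.1439

# Quiescent counterpart SDSS J013333.08+003223.7 (Annis et al. 2014):
# r = 24.0556, i = 21.0491 give R ~ 23.03 mag, the quiescent R-band
# magnitude quoted later in the text.
print(round(lupton_R(24.0556, 21.0491), 2))  # -> 23.03
```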
The raw images were reduced following the standard routine in the IRAF\footnote{IRAF is distributed by the National Optical Astronomical Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} package, including bias and flat-field corrections. Dark-current correction was not applied, since its impact on the photometry is negligible with the CCD cooled down to $-70$ deg. After aperture photometry, absolute photometric calibration was performed with several nearby comparison stars using the Lupton (2005) transformation from the SDSS Data Release 14 catalog to the Johnson-Cousins system\footnote{http://www.sdss.org/dr6/algorithms/sdssUBVRITransform.html\#Lupton2005}. Figure~\ref{findchart2} compares the Sloan Digital Sky Survey (SDSS) image centered on the target to the images obtained by F60A; there is a faint red counterpart within a distance of 0.697 arcseconds between the location measured by F60A and that reported in the SDSS Stripe 82 catalogue (SDSS J013333.08+003223.7, Annis et al. 2014). Its brightness is $r=24.05\pm$0.15 mag (Annis et al. 2014), which is taken as the quiescent brightness for the further analysis. \subsection{Spectroscopic Observation} One long-slit spectrum was obtained with the NAOC 2.16 m telescope (Fan et al. 2016) using the Beijing Faint Object Spectrograph and Camera (BFOSC)\footnote{The BFOSC spectrograph is equipped with a back-illuminated E2V55-30 AIMO CCD. } via a ToO request. The spectral observation started at 11:21:51.0 UT, 39 minutes after the trigger time, with an exposure time of 30 minutes. The coverage of the exposure time during the flare is shown by the yellow vertical area in Figure~\ref{fig:LC_C00090}. With a slit width of 1.8 arcsec oriented in the south-north direction and grating G4, the spectral resolution is $\sim$10$\AA$, which results in a wavelength coverage of 3850-8000$\AA$.
The wavelength calibration was carried out with the iron-argon comparison lamps. Standard procedures were adopted to reduce the two-dimensional spectra using the IRAF package, including bias subtraction and flat-field correction. The extracted one-dimensional spectrum was then calibrated in wavelength and in flux with the corresponding comparison lamp and standard calibration stars. \section {Results and Analysis} In this section, we investigate the nature of the quiescent counterpart of GWAC\,181229A using multi-wavelength catalogs. The properties of the flare are then analyzed by modeling the light curve, which yields an estimate of the total energy emitted during the flare. \subsection{The quiescent counterpart} \begin{table} \begin{center} \caption{Properties of SDSSJ0133 (the quiescent counterpart of GWAC\,181229A) extracted from various surveys.} \begin{tabular}{cc} \hline Parameter & Value \\ \hline & SDSS J013333.08+003223.7 (Annis et al. 2014) \\ \hline R.A. & 23.38779 \\ Decl. & 0.53991 \\ $u$ & $28.5450 \pm 2.1725$ \\ $g$ & $25.5569 \pm 0.4284$ \\ $r$ & $24.0556 \pm 0.1538$ \\ $i$ & $21.0491 \pm 0.0179$ \\ $z$ & $19.4138 \pm 0.0137$ \\ \hline & Pan-STARRS DR1 (108640233878278191, Chambers et al. 2016) \\ \hline R.A. & 23.387840550 \\ Decl. & +00.539781430 \\ $i$ & $20.8993 \pm 0.0630$ \\ $z$ & $19.6418 \pm 0.0360$ \\ \hline & AllWISE Data Release (J013333.07+003222.9, Cutri et al. 2013) \\ \hline R.A. & 23.38787 \\ Decl. & 0.53992 \\ $W 1$ & $15.366 \pm 0.049$ \\ $W 2$ & $15.517 \pm 0.152$ \\ \hline & UKIDSS-DR9 Large Area Survey \\ & (J013333.07+003223.7, Lawrence et al. 2012; Ahmed et al.
2019) \\ \hline $Y$ & $17.97 \pm 0.03$ \\ $J$ & $17.11 \pm 0.02$ \\ $H$ & $16.52 \pm 0.03$ \\ $K$ & $16.10 \pm 0.03 $ \\ Spectral type & M9 \\ Dis & 144.6 pc \\ \hline \label{Survey} \end{tabular} \end{center} \end{table} To further investigate the nature of this object, it is crucial to analyze its properties in the quiescent state. We retrieved photometry from the Sloan Digital Sky Survey (SDSS; York et al. 2000), the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010), the Pan-STARRS DR1 catalogue (PS1, Chambers et al. 2016), and other catalogues, based on a coordinate cross-match through the VizieR Service\footnote{https://vizier.u-strasbg.fr/viz-bin/VizieR}. Each catalog returns only one source, hereafter SDSSJ0133, within our search radius of 2 arcsec. Part of the queried parameters are shown in Table \ref{Survey}. First, based on the color-magnitude transformations given in Lupton (2005)\footnote{http://www.sdss3.org/dr8/algorithms/sdssUBVRITransform.php}, we estimate a quiescent brightness in the $R$ band of 23.03 mag, which results in a flare amplitude as large as $\Delta R=9.5$ mag. The corresponding quiescent flux is $F_{R,q} = 1.4 \times 10^{-18}$ erg cm$^{-2}$ s$^{-1}$ $\AA^{-1} $, obtained by converting the quiescent magnitude above with the zero-point flux and the transformation for the $R$ band (Bessell et al. 1998). Ahmed et al. (2019) reported that the quiescent counterpart is of spectral type M9. Due to the faintness of this source, there is no parallax or other distance measurement, including in the Gaia DR2 catalogue (Gaia Collaboration 2018). With the corresponding SDSS $i$- and $z$-band magnitudes, based on the relation between ($i-z$) color and absolute magnitude provided by Bochanski et al. (2020, 2012), an absolute magnitude of $M_{r}$ = 17.7 mag is derived for the quiescent counterpart.
Consequently, a distance of $d\sim$155.8 pc can be calculated from the estimated absolute magnitude and the apparent magnitude above. Reddening can be neglected for the above colors and the derived spectral type, since the extinction in the Galactic plane along the line of sight is not significant, with E(B-V)=0.021\footnote{https://ned.ipac.caltech.edu/}. This distance is roughly consistent with the value of 144.6 pc reported by Ahmed et al. (2019). A mean distance of 150 pc will be adopted in the following analysis. However, it is noted that a spectral type of M7 would be obtained if the estimation were based on the $i-z$ value provided by the PS1 catalogue. The difference in the derived spectral type is possibly caused by the difference between the Pan-STARRS and SDSS filters. An alternative possibility is that SDSSJ0133 was active at a low amplitude at the time of the PS1 survey. Another clue for activity is the blue WISE infrared color of $W1-W2\sim -0.15$, with $W1(15.366\pm0.049)$ and $W2 (15.517\pm0.152)$ (Cutri et al. 2013), which is slightly bluer than the expectation ($W1-W2\sim0.2$) from the empirical relationships for ultracool dwarfs reported in Schmidt et al. (2015). According to the relation between metallicity and color of late-type stars (Equation 3 in West et al. 2011), the metallicity-dependent parameter $\zeta$ is estimated to be 0.859, which is slightly larger than the subdwarf classification criterion ($\zeta<0.825$; L\'epine et al. 2007). \subsection{The flare} Figure \ref{fig:LC_C00090} shows the optical light curve of GWAC\,181229A, in which the data taken by GWAC and by F60A are shown by blue and red points, respectively. The horizontal red line marks the brightness level of the quiescent counterpart. The zoom panel at the upper right corner shows the GWAC data around the peak time.
Before the first detection, the long-term monitoring gives an upper limit of 15.3 mag in the $R$ band. At late phases, there are some low-confidence fluctuations, since the signal-to-noise ratio decreases with time. The vertical error bars are measurement-by-measurement estimates of the photon statistical error, including instrumental characteristics. The horizontal error bars correspond to the 10-second exposure duration. With a cadence of 15 seconds, the first detection of GWAC\,181229A shows that the brightness of the object was 13.9 mag in the $R$ band, and the second detection reaches the peak with a brightness of 13.5 mag. The brightness then falls to less than half of the maximum within only two images (30 seconds). The total duration of the flare from the onset to the quiescent flux level is estimated to be about 14,465 seconds, assuming that the brightness fades with a constant slope determined by fitting the late data, as shown in Figure~\ref{fig:LC_C00090}. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{G181229_C02390_SDSSR_lc_filled.eps} \caption{$R$-band light curve of GWAC\,181229A observed by GWAC and F60A. The first detection occurs at $T_0=2458481.946482 $ day. The red line shows the quiescent brightness of this source, with a magnitude of $R=23.03$ transformed from the SDSS $r$ and $i$ photometry. The green dashed line presents the fit within the time interval of [2000 sec, 7000 sec] and gives a prediction of the time of the end of the flare. The inset panel shows the photometry obtained by GWAC around the peak time for more clarity.
The yellow vertical area in the time interval of [2340 sec, 4140 sec] marks the exposure time (30 minutes) of the spectrum observed with the Xinglong 2.16 m telescope.} \label{fig:LC_C00090} \end{figure} \subsection{Modeling the light curve} In order to obtain a more precise description of the morphology of the flare of GWAC\,181229A, we fit the light curve of the decay phase after the peak time following the procedure of Davenport et al. (2014) (hereafter D14), who built a template from the single-peak flares detected in the active flare star GJ\,1243. Their procedure is as follows. For each flare, the flux and the time after the onset are normalized to the quiescent level and to the full time width at half the maximum flux ($t_{1/2}$), respectively. The key parameter $t_{1/2}$ can be obtained either 1) by fitting the light curve with $t_{1/2}$ as a free parameter, or 2) by estimating it in advance if the sampling of the light curve around the peak is dense enough. The decaying light curve is described by a sum of two exponential functions, as presented by Eq. 4 in D14, standing for two components: the impulsive decay phase and the gradual decay phase. For the case of GWAC\,181229A, the uncertainty of the peak time is less than 7.5 seconds thanks to GWAC's short cadence of 15 seconds. Assuming that the peak magnitude we detected is the real peak brightness of the flare, the amplitude of $\Delta R\sim9.5$ mag corresponds to a relative flux of $F_{\text{amp}}=6500$, which is fixed throughout the analysis in our work. We model the rising and the decaying phases separately as follows. \subsubsection{Rising phase} In the template of D14, the rising phase is fitted with a fourth-order polynomial. However, for the case of GWAC\,181229A, most of the observation data before the peak time are upper limits, except for one real detection. The rise therefore cannot be well constrained with a fourth-order polynomial as in the template of D14.
We thus describe the rising phase of the flare only briefly, assuming that this part follows a linear curve through the few detections: \begin{equation} F_{\text{rise}}/F_{\text{amp}}= k_0 + k_1 t \end{equation} where $F_{\text{rise}}$ is the relative flux and $F_{\text{amp}}$ the peak relative flux, which is fixed to 6500. The values of $k_0$ and $k_1$ are calculated to be 0.69 and 0.02, respectively. The uncertainties of the two parameters cannot be well estimated, since there is only one positive detection before the peak; they are about 10\% if only the precision of the photometric measurements is taken into account. With this model, the onset time of the flare is about 35 seconds before the first detection, or 50 seconds before the peak time. \subsubsection{Decaying phase} After modeling the rising phase, we started by examining whether the D14 model can fit the observed data in the decaying phase. In D14, a sum of two exponential laws, as shown in Equation \ref{func2}, was adopted to describe the light curve: \begin{equation} F_{\text {decay}}/F_{\text{amp}}= k_1e^{-\alpha_1t/t_{1/2}} + k_2e^{-\alpha_2 t/t_{1/2} } \label{func2} \end{equation} where $k_1=0.6890\pm0.0008$, $k_2=0.3030\pm0.0009$, $\alpha_1=1.600\pm0.003$, and $\alpha_2=0.2783\pm0.0007$, as given in D14, are fixed in the subsequent modeling. Setting the peak flux ($F_{\text{amp}}$) and the time scale $t_{1/2}$ as free parameters, the best fit returns $F_{\text{amp}}=3059\pm63.6$ and $t_{1/2}=517.4\pm12.0$ seconds, with a reduced $\chi^2=3.63$ for 54 degrees of freedom. The large $\chi^2$ indicates that the template of D14 does not provide a good fit to the data, especially near the peak time, as shown in the left panel of Figure \ref{fig:D14fit}. In fact, from a visual inspection of the light curve, the real $t_{1/2}$ should be around 30 seconds, given the sharpness of the peak.
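The D14 two-exponential decay template is straightforward to evaluate directly; a minimal sketch with the fixed D14 coefficients (the function name is ours):

```python
import math

# Fixed template coefficients from Davenport et al. (2014), Eq. 4.
K1, K2 = 0.6890, 0.3030
A1, A2 = 1.600, 0.2783

def d14_decay(t: float, t_half: float) -> float:
    """Relative flux F_decay/F_amp of the D14 two-exponential decay template.

    t is the time since the flare peak; t_half is the full time width at
    half the maximum flux used to normalize the time axis.
    """
    x = t / t_half
    return K1 * math.exp(-A1 * x) + K2 * math.exp(-A2 * x)

# At the peak (t = 0) the template starts at K1 + K2 = 0.992 of the peak flux.
print(round(d14_decay(0.0, 517.4), 3))  # -> 0.992
```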
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{G181229_C02390_relativeflux_fit_D14.eps} \includegraphics[width=0.4\textwidth]{G181229_C02390_relativeflux_fit.eps} \caption{\it Left panel: \rm The black data points show the optical light curve of GWAC\,181229A observed by GWAC and F60A. The y-axis is the relative flux, and the x-axis the time since $T_0=2458481.946482 $ day, when the flare was first detected by GWAC. The red line shows the best-fit model described by a sum of two exponential laws. The blue and green lines present the impulsive and gradual components, respectively. The lower left panel gives the residual of each data point. \it Right panel: \rm The same as the left one, but for the fit in which the parameters are set free, except for the peak flux and the time-scale unit. It is clear that the peak brightness deviates from the expectations of the two fits, indicating that the data near the peak time originate from an additional, steeper component.} \label{fig:D14fit} \end{figure} To improve the fit, we set the parameters in Equation \ref{func2} free, except for $F_{\text{amp}}=6500$ and $t_{1/2}=1$. The modeled values are tabulated in Table \ref{ModelingLC}, and the reduced $\chi^2=2.65$ for 52 degrees of freedom. The fitting results are shown in the right panel of Figure \ref{fig:D14fit}. In the upper panel of the figure, the total fit is displayed by the red line, and the two components by the blue and green lines, respectively. The time at which the two components contribute equally is 793 sec after the peak time. The lower panel shows the residuals, obtained by subtracting the total fit from the observed data. The data near the peak time are still poorly reproduced, indicating that they might come from a new, steeper component that is not included in Equation \ref{func2}.
\begin{table} \begin{center} \caption{Parameters of the modeled decaying light curve of GWAC\,181229A. $\alpha_3$ is for the first, impulsive decay phase; $\alpha_1$ and $\alpha_2$ stand for the gradual phase and the shallow phase, respectively. } \begin{tabular}{cccccc} \hline \hline $k_1$ & $k_2$ & $k_3$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$\\ \hline \multicolumn{6}{c}{Two-component model}\\ \hline $0.444\pm0.002$ & $0.145\pm0.007$ & \dotfill & $0.005\pm0.001$ & $0.0005\pm0.0001$ & \dotfill \\ \hline \multicolumn{6}{c}{Three-component model}\\ \hline $0.373\pm0.016$ & $0.128\pm0.008$ & $2.248\pm1.061$ & $0.106\pm0.008$ & $0.014\pm0.001$ & $2.946\pm0.895$ \\ \hline \label{ModelingLC} \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{BIC for the three models} \begin{tabular}{cc} \\ \hline \hline Model & BIC \\ \hline D14 model & 660.03\\ Two-component model & 602.56\\ Three-component model & 522.46\\ \hline \label{BIC} \end{tabular} \end{center} \end{table} In order to reproduce the light curve around the peak, we then model the light curve in the decaying phase by adding a third exponential component: \begin{equation} F_{\text {decay}}/F_{\text{amp}}= k_1 e^{-\alpha_1t/t_{1/2}}+k_2e^{-\alpha_2t/t_{1/2}}+k_3e^{-\alpha_3t/t_{1/2} } \label{func3} \end{equation} A much better fit, with a reduced $\chi^2=1.15$ for 50 degrees of freedom, can be seen in Figure \ref{fig:LC_C00090_fit_func3}. The modeled parameters are again listed in Table~\ref{ModelingLC}. This good fit suggests that there are three components in the decay phase. After the peak time, there is a very sharp decay component. At around 75 seconds, the light curve transitions to the second, gradual component. After about 1500 seconds, the third, shallow decay is dominant until the end of the flare. A Bayesian information criterion (BIC) is used to test whether the three-component model used in the fitting is required or is a result of overfitting the data.
The BIC values are 522.46, 660.03 and 602.56 for the three-component model, the D14 model, and the two-component model, respectively. All these BIC values are summarized in Table~\ref{BIC}. This result confirms that the three-component model is the most reasonable description of the data. Although some complex light curves have been observed (e.g., Kowalski et al. 2010), previous works reported that the morphology of flare light curves is typically divided into two phases: an impulsive phase and a gradual decay phase (e.g., Moffett 1974; Moffett \& Bopp 1976; Hawley \& Pettersen 1991; Davenport et al. 2014). For GWAC\,181229A, however, three phases are needed to properly describe the high-cadence light curve. The initial decay lasts until 20 sec after the first detection (5 sec after the peak time) and is likely dominated by a brighter, hotter region that cools very rapidly; it is followed by a gradual decay phase from about 20 sec to 350 sec, corresponding to a cool region in which the radiation cools slowly. Finally, the event moves into the last, shallower decay phase, lasting from about 350 sec until the return to the quiescent state. \subsubsection{Ratio of decay indices} We define the ratio of decay indices, denoted by ${R_{ij}}=\alpha_{i} / \alpha_{j}$ ($i,j=1,2,3$), to quantify how fast the cooling speed changes from one phase to another; it is independent of the time-scale unit $t_{1/2}$. For GWAC\,181229A, the ratios are deduced to be ${R_{31}}\sim27.74$ from the impulsive decay phase to the gradual phase, and ${R_{12}}\sim7.47$ from the gradual phase to the shallow decay phase. For comparison, the value of ${R}$ from the template derived by D14 is $\alpha_{D1}/\alpha_{D2}=1.600/0.2783=5.749$. Such a difference might be attributed to a possible dependence on properties such as the stellar effective temperature or the magnetic field strength during the flares.
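The three-component model and the decay-index ratios can be rechecked with a few lines of arithmetic. The sketch below (not part of the original analysis) evaluates Equation \ref{func3} with the rounded parameters of Table \ref{ModelingLC}, with time measured in units of $t_{1/2}$; note that ratios recomputed from the rounded table values reproduce the quoted $R_{31}$ and $R_{12}$ only to within rounding.

```python
import math

# Three-component fit parameters from Table 2 (time in units of t_1/2).
K = (0.373, 0.128, 2.248)        # k_1, k_2, k_3
A = (0.106, 0.014, 2.946)        # alpha_1, alpha_2, alpha_3

def f_decay(t):
    """The three-component decay model (Eq. func3): F_decay / F_amp."""
    return sum(k * math.exp(-a * t) for k, a in zip(K, A))

# Decay-index ratios, independent of the time-scale unit t_1/2.
R31 = A[2] / A[0]        # impulsive -> gradual phase
R12 = A[0] / A[1]        # gradual -> shallow phase
R_D14 = 1.600 / 0.2783   # the single ratio of the D14 template
```

The model is a strictly decreasing sum of positive exponentials, as required for a pure decay phase.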
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{G181229_C02390_relativeflux_fit_fun3.eps} \includegraphics[width=0.4\textwidth]{G181229_C02390_relativeflux_fit_fun3_logscale.eps} \caption{Left panel: The same as Figure \ref{fig:D14fit}, but for a fit with three components. Right panel: The same as the left, but on a logarithmic scale for clarity; the fit to the rising phase is also displayed (black line). The total fit (red line) in both panels covers only the decay phase after the peak time.} \label{fig:LC_C00090_fit_func3} \end{figure} \subsection{Spectrum properties} Figure \ref{spec} shows the spectrum taken with the 2.16m telescope. A series of strong emission lines, such as $\mathrm{H\alpha}$, \ion{He}{1}$\lambda5876$, $\mathrm{H\beta}$, $\mathrm{H\gamma}$ and $\mathrm{H\delta}$, are marked on the spectrum. The fluxes measured by direct integration are presented in Table~\ref{spec_line}. After excluding the regions with strong emission lines, we modeled the underlying continuum by a blackbody in the wavelength range 4000-8000\AA, which returns a temperature of $T_{\mathrm{bb}} = 5340\pm40$K. These emission lines are commonly detected during dMe flares (e.g., Kowalski et al. 2013) and are thought to be associated with chromospheric temperatures. Summing the fluxes of the strong emission lines listed in Table~\ref{spec_line} gives a total line flux of $4.8\times10^{-14} \mathrm{ erg/s/cm^{2}}$ in our observed wavelength range. The total continuum flux within the wavelength range from 4000 to 8000 $\AA$ is measured to be $5.13\times10^{-13} \mathrm{ erg/s/cm^{2}}$.
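The tabulated line fluxes can be summed directly to recover the quoted totals; the trivial cross-check below uses only the numbers in Table \ref{spec_line} and the continuum flux given above.

```python
# Line fluxes from the spectrum table, in units of 1e-15 erg s^-1 cm^-2.
line_fluxes = {"Halpha": 16.15, "HeI_5876": 2.79,
               "Hbeta": 13.64, "Hgamma": 9.28, "Hdelta": 6.50}

total_line_flux = sum(line_fluxes.values()) * 1e-15   # erg s^-1 cm^-2
continuum_flux = 5.13e-13                             # 4000-8000 A continuum
line_to_continuum = total_line_flux / continuum_flux  # ~0.094, i.e. ~9.4%
```

The resulting line-to-continuum ratio of $\approx$9.4\% is consistent with the $\approx$9.3\% quoted in the text.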
The ratio of the energy in the emission lines to that in the underlying continuum is $\sim$9.3\% for GWAC\,181229A, which is higher than the percentage ($\sim$4\%) in the impulsive phase (Hawley \& Pettersen 1991) and significantly smaller than the values (17\%-50\%) in the gradual decay phase reported in the literature (e.g., Hawley \& Pettersen 1991; Hawley et al. 2007). Previous works show that the temperature in the gradual phase is lower than the value obtained at the peak time (e.g., Fuhrmeister et al. 2008; Schmitt et al. 2008). Our measured temperature of $\sim$5340 K in the shallow decay phase is similar to the temperature of 5500-7000K reported in the decay phase of a flare event by Mochnacki \& Zirin (1980), but is slightly higher than the values in the decay phase (Fuhrmeister et al. 2008; Schmitt et al. 2008), where blackbody temperatures of 3200-5600 K were derived from the continuum shape in their higher-cadence red spectra. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{spec_bb.eps} \caption{The spectrum obtained by the 2.16m telescope at Xinglong observatory, China. The blackbody model of the underlying continuum is shown by the thick red line. } \label{spec} \end{figure} \begin{table} \begin{center} \caption{Emission line measurements of the spectrum of GWAC\,181229A displayed in Figure \ref{spec}} \begin{tabular}{cc} \hline \hline Line & Flux ($\mathrm{10^{-15}\,erg\,s^{-1}\,cm^{-2})}$\\ \hline $\mathrm{H\alpha}$ & 16.15 \\ \ion{He}{1}$\lambda$5876 & 2.79 \\ $\mathrm{H\beta}$ & 13.64 \\ $\mathrm{H\gamma}$ & 9.28 \\ $\mathrm{H\delta}$ & 6.50 \\ \hline \label{spec_line} \end{tabular} \end{center} \end{table} \subsection{Energy budget} The equivalent duration ($ED$) of a flare is defined as the time the star would need, at its quiescent flux level, to emit all the flare energy (e.g., Kowalski et al. 2013).
By integrating the light-curve model from the start to the end of the flare, the $ED$ is estimated to be $\sim2.584601\times10^{6}$ seconds, or 29.9125 days, for GWAC\,181229A. Following the method of Kowalski et al. (2013), the total energy $E_{R}$ in the $R$-band can be calculated with the equation $E_{R}=4\pi r^2\times F_{R,q}\times ED$. Adopting a quiescent flux $F_{R,q} = 1.4 \times 10^{-18}$ erg cm$^{-2}$ s$^{-1}$ $\AA^{-1}$ and a distance $r=150$ pc, we obtain $E_{R} = 1.54 \times 10^{34}$ erg.\footnote{A caveat is that this method rests on the simple assumption that the flare spectrum is similar to the quiescent-state spectrum, which is not strictly the case. The resulting uncertainty on the estimated energy is at most $\sim$8\% for blackbody spectral shapes ranging from T=10\,000 K to T=2300 K. } Estimating the bolometric energy requires knowledge of the effective temperature. In this work, our spectrum taken during the decay phase gives a temperature of $5340\pm40$ K from a blackbody fit. On the other hand, the temperature at the peak time of a dMe flare can be as high as $T_{eff}=10^{4}$ K (e.g., Kowalski et al. 2013). There is further evidence that the temperature evolves during the flare from the peak time to the gradual decay phase (e.g., Hawley \& Pettersen 1991; Hawley \& Fisher 1992). For simplicity, the bolometric energy is estimated here for two effective temperatures, $T_{eff}=10^{4}$ K and $T_{eff}=5340$ K. Integrating a blackbody spectrum at each of these effective temperatures over the wavelength range from 1 nm to 3000 nm, and calibrating the energy with the $R$-band flux, yields bolometric energies $E_{bol}$ of $9.25\times 10^{34}$ erg and $5.56\times 10^{34}$ erg for $T_{eff}=10^{4}$ K and $T_{eff}=5340$ K, respectively.
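The $R$-band energy can be reproduced with a short calculation. One assumption is needed that is not stated explicitly in the text: the per-\AA{} quiescent flux must be multiplied by an effective $R$-band width, for which we adopt $\Delta\lambda \approx 1580$\,\AA{} (the Bessell $R$ FWHM). With this choice the quoted $E_R \approx 1.5\times10^{34}$ erg is recovered.

```python
import math

PC_CM = 3.0857e18     # one parsec in cm
ED = 2.5846e6         # equivalent duration, s
F_Rq = 1.4e-18        # quiescent R-band flux density, erg s^-1 cm^-2 A^-1
d_cm = 150.0 * PC_CM  # adopted distance of 150 pc, in cm
dlam = 1580.0         # ASSUMED effective width of the Bessell R band, in A

# E_R = 4 pi r^2 x (band-integrated quiescent flux) x ED
E_R = 4.0 * math.pi * d_cm**2 * (F_Rq * dlam) * ED    # erg, ~1.5e34
```

The sensitivity of $E_R$ to the assumed bandwidth is linear, so a different filter-width convention shifts the result proportionally.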
With the same method, the $U$-band energy of the flare is $E_U\sim1.5\times10^{34}$ ergs and $E_U\sim3.6\times10^{33}$ ergs for the two temperatures, respectively. Such a large amount of energy makes this flare comparable to the flare event SDSSJ0221 ($E_U=(3.2-5.5)\times10^{34}$ ergs) reported by Schmidt et al. (2016) and to CZ Cnc reported by Schaefer (1990), and one of the most energetic events from ultracool dwarfs. \subsection{Continuum emission in $R$-band} The flare emission at optical and $\text{UV}$ wavelengths is believed to be produced by two major components. The dominant one is hot blackbody (continuum) emission with a typical temperature of about $T\sim10,000$K (e.g., Hawley \& Fisher 1992), considered to be produced at the bottom of the stellar atmosphere near the footpoints of the magnetic field loops. The second component comprises the atomic emission lines (e.g., Fuhrmeister et al. 2010) and the hydrogen Balmer continuum (Kunkel 1970). The proportion of the two contributors changes as the flare evolves. Near the peak time, the continuum emission can contribute more than 90\% (Hawley \& Pettersen 1991) of the total energy of the flare. In the gradual phase, the fraction of the continuum can drop to 69\% (Hawley \& Pettersen 1991) or even down to 0\% (Hawley et al. 2003). The filling factor $X_{\mathrm{fill}}$ is the fraction of the area of the projected visible stellar disk that emits flare continuum emission, and it allows us to understand what type of heating distribution is responsible for the observed light curve (Kowalski et al. 2013). Following the method of Hawley et al. (2003), $X_{\mathrm{fill}}$ in the impulsive and gradual phases can be deduced from \begin{equation} F_{\lambda}=X_{\mathrm{fill}}\frac{R^{2}}{d^{2}} \pi B_{\lambda}(T) \label{Fillingfactor} \end{equation} where $R$ is the stellar radius, $d$ the distance, and $T$ the characteristic temperature of the blackbody emission.
$F_{\lambda}$ is the flare flux observed at Earth at wavelength $\lambda$, which can be measured from the optical spectrum within a wavelength range free of emission lines. For GWAC\,181229A, only one spectrum was obtained, at about 54 min after the event (mid-time of the exposure, as presented in Figure \ref{spec}). The continuum flux level is measured to be $1.8\times10^{-16} \mathrm{ erg\ cm^{-2}\ s^{-1}\ \AA^{-1}}$ within the wavelength range of 6800-7200\AA, where there are no apparent emission lines. Adopting $R=0.1R_{\odot}$ as a typical radius of an M9 brown dwarf (Baraffe et al. 2015), $d=150$ pc, and a blackbody temperature of $T_{\mathrm{bb}}=5340$K yields $X_{\mathrm{fill}}\sim$19.3\% for the decay phase, assuming that all the emission measured within this wavelength range is produced by blackbody emission. Although no spectrum was obtained near the peak time, the temperature and the corresponding filling factor $X_{\mathrm{fill}}$ can be estimated as follows. Assuming that 95\% of the observed peak emission is contributed by continuum emission, a blackbody at the critical temperature $T_{\mathrm{c}}=10,000$K would require a filling factor of 100\% of the visible surface of the object, indicating that the temperature of the blackbody emission near the peak time must be much higher than $T_c$. Further calculations with $T=16,000$K, $T=20,000$K, $T=30,000$K and $T=35,000$K yield $X_{\mathrm{fill}}$ of 36\%, 24\%, 13\% and 10\%, respectively. We note that Kowalski et al. (2013) reported blackbody temperatures from $T=9800$ to 14100 K at the peaks of flares of mid-M dwarfs. If this also holds for the late-M dwarf GWAC\,181229A, the value of $X_{fill}$ is at the level of $\sim 30\%$ at the peak time.
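Equation \ref{Fillingfactor} can be inverted numerically for the decay-phase spectrum. The sketch below uses standard CGS constants and a representative wavelength of 7000\,\AA{} inside the 6800--7200\,\AA{} window (both our choices, not stated in the text); it returns $X_{\mathrm{fill}}$ at the $\sim$20\% level quoted above, with the exact value depending on the adopted wavelength and the rounding of $R$ and $d$.

```python
import math

# Approximate CGS constants
h, c, kB = 6.626e-27, 2.998e10, 1.381e-16
R_SUN, PC = 6.957e10, 3.0857e18

def planck_lambda(lam_cm, T):
    """Planck function B_lambda in erg s^-1 cm^-2 cm^-1 sr^-1."""
    x = h * c / (lam_cm * kB * T)
    return 2.0 * h * c**2 / lam_cm**5 / math.expm1(x)

# Invert Eq. (Fillingfactor): X_fill = F_lambda d^2 / (R^2 pi B_lambda(T)).
F_lam = 1.8e-16 * 1e8          # erg s^-1 cm^-2 A^-1  ->  per cm
lam = 7000e-8                  # representative wavelength, cm (assumed)
R, d, T = 0.1 * R_SUN, 150.0 * PC, 5340.0
X_fill = F_lam * d**2 / (R**2 * math.pi * planck_lambda(lam, T))
```

Repeating the inversion with higher temperatures and the peak flux reproduces the trend of decreasing $X_{\mathrm{fill}}$ with increasing $T$ discussed above.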
The maximum magnetic field strength $B_{z}^{max}$ associated with the superflare observed on GWAC\,181229A can be estimated with the scaling relation of Aulanier et al. (2013) and Paudel et al. (2018), by assuming that the flare on GWAC\,181229A is similar to solar flares: \begin{equation} E_{bol}=0.5 \times 10^{32}\left(\frac{B_{z}^{\max }}{1000 \mathrm{G}}\right)^{2}\left(\frac{L^{\text {bipole }}}{50 \mathrm{Mm}}\right)^{3} \mathrm{erg} \label{MagneticStrength} \end{equation} where $E_{bol}$ is the bolometric flare energy, and $L^{bipole}$ is the linear separation between the bipoles. Since the filling factor X$_{\mathrm{fill}}$ is at the level of 30\% in the early phase, we take $L^{bipole}=\pi R$, the maximum distance between a pair of magnetic poles on the surface of GWAC\,181229A. With these parameters, a strong magnetic field of (3.6-4.7) kG is deduced. This magnetic strength is at the level of the saturation value of 3-4 kG (Reiners et al. 2009), and slightly smaller than the reported values of 7.0 kG for WX Ursae Majoris (Shulyak et al. 2017) and 5 kG for the M8.5 brown dwarf LSR J1835+3259 (Berdyugina et al. 2017). \section{Summary} In this paper, we report a giant stellar flare, GWAC\,181229A, detected by GWAC with a survey cadence of 15 seconds. The peak brightness is measured to be $R=13.5$ mag. The counterpart of GWAC\,181229A is an M9 star with a brightness of $r$=24.0 mag (or $R$=23.03 mag), yielding an amplitude of 9.5 mag in the $R$-band. The total energy in the $R$-band and the bolometric energy are estimated to be $1.5\times10^{34}$ erg and $(5.56-9.25)\times10^{34}$ erg, respectively. The magnetic field strength B is deduced to be (3.6-4.7) kG. Such a huge energy budget makes this flare one of the most energetic events known for ultracool stars. A very fast imaging follow-up observation was carried out by F60A via RAVS, with a delay of only 2 min after the trigger time.
At 39 min after the trigger, a low-resolution spectrum began to be taken with the 2.16m optical telescope at Xinglong observatory, China. The flare rises promptly from the quiescent flux level to the peak in about 50 sec, and then decays in a manner that requires a combination of three components to be properly reproduced. From a blackbody fit to the continuum emission in the spectrum, an effective temperature of $T=5340\pm40$ K is derived. The filling factor is derived to be 19.3\% for the flare in the later, gradual phase, while it is 36\% at the peak if a temperature of $T=16,000$K is adopted. Thanks to its large field of view and high survey cadence, GWAC is well suited for the detection of white-light flares. To date, we have detected more than 130 white-light flares with amplitudes larger than 0.8 mag. More GWAC units are planned to come online in the next two years, aiming to increase the detection rate of high-amplitude stellar flares by monitoring more than 5000 square degrees simultaneously (Wei et al. 2016). This is essential not only for improving our understanding of the flares of late-type stars themselves, but also for revealing the threat that the largest flares pose to life on extrasolar planets. \section{Acknowledgement} The authors thank the anonymous referee for a careful review and helpful suggestions that improved the manuscript. This study is supported by the National K\&D Program of China (grant No. 2020YFE0202100) and the National Natural Science Foundation of China (Grant No. 11533003, 11973055, U1831207). This work is supported by the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences, grant Nos. XDA15052600 \& XDA15016500, and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23040000. YGY is supported by the National Natural Science Foundation of China under grant 11873003.
JW is supported by the National Natural Science Foundation of China under grants 11473036 and 11273027. We acknowledge the support of the staff of the Xinglong 2.16m telescope. This work was partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in A\&AS 143, 23.
\section{Introduction}\lb{sec1} \smallskip Let $V$ be a vector space equipped with a nondegenerate bilinear (symmetric or antisymmetric) form. The Brauer algebra \cite{Br} generalizes the tower of centralizer algebras which appears in the Brauer--Schur--Weyl duality related to $V$. The Birman--Murakami--Wenzl algebra \cite{BW,M1} is the quantum deformation of the Brauer algebra. Important particular cases of local representations of the tower of the Birman--Murakami--Wenzl (BMW) algebras are constructed with the use of orthogonal and symplectic R-matrices. These R-matrices give rise to quantum matrix algebras of orthogonal and symplectic types. More precisely, a quantum matrix algebra is defined by a compatible pair $\{ R, F\}$ of R-matrices; we recall the definitions below. The general structural properties of quantum matrix algebras with $R$ of BMW type were investigated in \cite{OP}. In the present work we mainly assume $R$ to be of symplectic type. Our principal goal is to derive, for quantum matrices of symplectic type, an analogue of the Cayley--Hamilton identity and to use it for a description of the spectra of the corresponding quantum matrices. \smallskip In Section \ref{BMWaRm} we recall the necessary facts about the Birman--Murakami--Wenzl algebras, their R-matrix realizations, specializations to the symplectic and orthogonal cases, and some R-matrix techniques. \vskip .1cm Section \ref{secqma} contains the information from \cite{OP} about the quantum matrix algebra, its `characteristic subalgebra' (the subalgebra to which the coefficients of the Cayley--Hamilton identity belong) and the $\star$-multiplication. The main results are in Section \ref{secCHSp}. Here we establish the Cayley--Hamilton theorem for the symplectic quantum matrix algebras. Classically, the symplectic group is defined by the condition $M^t \Omega M = \Omega$, where $\Omega$ is the symplectic form.
However, we have to work with a bigger group, defined by the condition $M^t \Omega M = g\Omega$ where $g$ is a constant, that is, with the group of transformations which preserve the form up to a multiplicative factor. We call the quantum analogue of this factor the `2-contraction'. It is an element $g$ of the quantum matrix algebra. For a general compatible pair $\{ R, F\}$, the element $g$ is not necessarily central, so we cannot harmlessly set it to 1. We establish a strengthened form of the Cayley--Hamilton theorem which does not assume the invertibility of the element $g$ (and which is equivalent to the Cayley--Hamilton theorem under the assumption of the invertibility of $g$). \vskip .1cm Next, we define in Section \ref{secCHSp} a homomorphism from the characteristic subalgebra to the algebra of symmetric polynomials in some set of commuting (``spectral") variables. The nature of this homomorphism reflects the reciprocity properties of the characteristic polynomials of symplectic matrices. Under this homomorphism the Cayley--Hamilton identities factorize completely, and hence the spectral variables can be treated as eigenvalues of the quantum matrix. We then give the spectral parameterization of three series of elements of the characteristic subalgebra: the power sums $p_i$, the elementary symmetric functions $a_i$ and the complete symmetric functions $s_i$. \vskip .1cm Section \ref{secCHSp} also contains low-dimensional examples illustrating the Cayley--Hamilton theorem for the two best known quantum matrix algebras: the algebra of functions on the quantum group (corresponding to the compatible pair $\{ R, P\}$, where $P$ is the flip) and the reflection equation algebra \cite{C,KS} (corresponding to the compatible pair $\{ R, R\}$). We also discuss the classical limit of the Cayley--Hamilton theorem.
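As a classical orientation point for the reciprocity property mentioned above (a toy illustration, not taken from the paper): for $2\times 2$ matrices $Sp(2)=SL(2)$, so any matrix of determinant 1 preserves the symplectic form; its characteristic polynomial $t^2-({\rm tr}\,M)\,t+1$ is self-reciprocal, the eigenvalues come in a pair $(\lambda,\lambda^{-1})$, and the Cayley--Hamilton identity reads $M^2-({\rm tr}\,M)M+1=0$.

```python
# A 2x2 real matrix with det M = 1 is symplectic: M^T Omega M = Omega
# for Omega = [[0,1],[-1,0]]; here Sp(2) = SL(2).
M = [[2.0, 3.0],
     [1.0, 2.0]]                                    # det M = 1

def mat2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Omega = [[0.0, 1.0], [-1.0, 0.0]]
MT = [[M[j][i] for j in range(2)] for i in range(2)]
form = mat2(mat2(MT, Omega), M)                     # equals Omega

tr = M[0][0] + M[1][1]
M2 = mat2(M, M)
# Cayley-Hamilton with det M = 1: M^2 - (tr M) M + I = 0.
ch = [[M2[i][j] - tr * M[i][j] + (1.0 if i == j else 0.0)
       for j in range(2)] for i in range(2)]
# The eigenvalues are reciprocal: lam * (tr - lam) = det M = 1.
lam = (tr + (tr * tr - 4.0) ** 0.5) / 2.0
```

The quantum statements of Section \ref{secCHSp} deform exactly this picture: the factorized Cayley--Hamilton identity and the pairing of spectral variables.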
\vskip .1cm The Cayley--Hamilton identity for the quantum matrix algebras of the orthogonal type will be considered in a separate publication. \section{BMW algebras and their R-matrix representations}\lb{BMWaRm} In this section we present definitions and describe the necessary facts about the Birman--Murakami--Wenzl algebras and the BMW type R-matrices. We follow the notation of ref. \cite{OP}, where the reader can find detailed derivations and references. Later in the section we investigate two families of BMW type R-matrices: the $Sp(2k)$ type and the $O(k)$ type R-matrices. They are related, respectively, to the symplectic and orthogonal series of quantum groups. We identify particular conditions on the eigenvalues which are specific to these families of R-matrices. In the following section we will use the symplectic R-matrices for the definition of the $Sp(2k)$ type quantum matrix algebras. Specific properties of the $Sp(2k)$ type R-matrices will then dictate the form of the Cayley--Hamilton identities in these algebras. \subsection{BMW algebra} The {\em Birman--Murakami--Wenzl (BMW) algebra} ${\cal W}_{n}(q,\mu)$ \cite{BW,M1}, depending on two complex parameters $q\in{\mathbb{C}}\backslash \{0,\pm 1\}$ and $\mu\in{\Bbb C}\backslash\{0,q,-q^{-1}\}$, is defined in terms of generators $\{\sigma_i, \kappa_i\}_{i=1}^{n-1}$ and relations \begin{eqnarray} \nonumber \sigma_i \sigma_{i+1} \sigma_i \, =\, \sigma_{i+1} \sigma_i \sigma_{i+1},&& \sigma_i \sigma_j \, =\, \sigma_j \sigma_i\qquad \qquad\;\; \forall\; i,j:\; |i-j|>1 , \\[2pt] \lb{kappa} \sigma_i \kappa_i \, =\, \kappa_i \sigma_i \,=\, \mu \kappa_i , && \kappa_{i} = { \textstyle {(q1-\sigma_i)(q^{-1} 1 +\sigma_i)\over \mu (q-q^{-1})}} , \\[2pt] \nonumber \kappa_{i+1}\kappa_i \, =\, \kappa_{i+1}\sigma^{\pm 1}_{i}\sigma^{\pm 1}_{i+1}, && \kappa_i\kappa_{i+1}\kappa_i \, =\,\kappa_i\;\;\;\;\;\;\qquad \forall\; i .
\end{eqnarray} The first line is Artin's presentation of the braid group ${\cal B}_n$; the remaining relations define ${\cal W}_{n}(q,\mu)$ as a quotient of the group algebra ${\Bbb C}[{\cal B}_{n}]$. \smallskip Imposing further restrictions on the parameters \begin{equation} \lb{mu} j_q :={ q^j - q^{-j}\over q-q^{-1}}\neq\, 0\, , \quad \mu \neq \mp\, q^{\mp(2j-3)} \quad \forall\;j = 2,3,\dots , n\ , \end{equation} one can define recursively two sets of idempotents \begin{eqnarray} \lb{ind1} a^{(1)} := 1,\hspace{3.9cm} &&\;\;\, s^{(1)}\, :=\, 1, \\[2pt] \lb{a^k} a^{(i+1)}\, :=\,{q^i\over (i+1)_q} a^{(i)}\, \sigma^{-}_i(q^{-2i})\, a^{(i)} , && s^{(i+1)} :={q^{-i}\over (i+1)_q}\, s^{(i)}\, \sigma^{+}_i(q^{2i})\, s^{(i)}, \end{eqnarray} where $$ \sigma_i^{\pm}(x)\, :=\, 1\, +\, {x-1\over q-q^{-1}}\, \sigma_i\, +\, {\mu(x-1)\over \mu\mp q^{\mp 1} x}\, \kappa_i\, . $$ The idempotents $a^{(n)}$ and $s^{(n)}$ in the algebra ${\cal W}_{n}(q,\mu)$ are primitive. They correspond to the $q$-deformations of the `trivial' ($\sigma_i\mapsto q$) and the `alternating' ($\sigma_i\mapsto -q^{-1}$) one-dimensional representations. Therefore, they are called an {\em $n$-th order antisymmetrizer} and an {\em $n$-th order symmetrizer}, respectively. \subsection{R-matrices and their compatible pairs} Let $V$ denote a finite dimensional ${\Bbb C}$-linear space, $\dim V = \mbox{\sc n}$. Fixing a basis $\{v_i\}_{i=1}^{\mbox{\footnotesize \sc n}}$ in $V$, we identify elements $X\in {\rm End}(V^{\otimes n})$ with matrices $X_{i_1 i_2 \dots i_n}^{j_1 j_2 \dots j_n}$. \vskip .1cm Let $X\in {\rm End}(V^{\otimes k})$, $k\leq n$. For $1\leq m \leq n-k+1$, denote by $X_m\in {\rm End}(V^{\otimes n})$ the operator given by the matrix $$ (X_m)_{i_1 \dots i_n}^{j_1 \dots j_n}\ :=\ I_{i_1\dots i_{m-1}}^{j_1\dots j_{m-1}}\ X_{i_m\dots i_{m+k-1}}^{j_m\dots j_{m+k-1}}\ I_{i_{m+k}\dots i_n}^{j_{m+k}\dots j_n}\ . $$ Here $I$ denotes the identity operator.
\smallskip An element $R\in {\rm Aut}(V^{\otimes 2})$ that fulfills the equation $$ R_{1}\, R_{2}\, R_{1}\, = \, R_{2}\, R_{1}\, R_{2}\ $$ is called an {\em R-matrix}. The permutation operator $P$, defined by $P(u\otimes v)= v\otimes u \;\;\; \forall\; u, v\in V$, is an R-matrix, and the operator $R^{-1}$ is an R-matrix iff $R$ is. \vskip .1cm Any R-matrix $R$ generates representations $\rho_R$ of the series of braid groups ${\cal B}_n$, $n=2,3,\dots$ $$ \rho_R:\, {\cal B}_n\rightarrow {\rm Aut}(V^{\otimes n})\ ,\quad \sigma_i \mapsto R_i, \quad 1\leq i\leq n-1 . $$ An R-matrix is called {\it skew invertible} if there exists an operator ${\Psi_{_{\hspace{-1.mm}R}}}\in {\rm End}(V^{\otimes 2})$ such that \begin{equation} {\rm Tr}\,_{\!(2)} R_{12} {\Psi_R}_{23} ={\rm Tr}\,_{\!(2)} {\Psi_R}_{12} R_{23} = P_{13}\, . \lb{s-inv} \end{equation} Here we use the notation $X_{ij}$, which explicitly shows the indices $i$ and $j$ of the spaces on which the operator $X$ acts, e.g., $P_{13}= P_{i_1 i_3}^{j_1 j_3}\, I_{i_2}^{j_2}$. The symbol ${\rm Tr}\,_{\!(i)}$ means taking the trace in the vector space with index $i$. \vskip .2cm For a skew invertible R-matrix $R$, define the operator $D_R\in \mbox{End}(V)$ \begin{equation} \lb{CandD} (D_R)_1:={\rm Tr}_{(2)}{\Psi_R}_{12} . \end{equation} The operator $R$ is called {\em strict skew invertible} if $D_R$ is invertible. The R-matrix $R^{-1}$ is skew invertible iff $R$ is strict skew invertible \cite{I,O}; the corresponding operator $D_{R^{-1}}$ reads $$ (D_{R^{-1}})_2= \left( {\rm Tr}_{(1)}{\Psi_R}_{12}\right)^{-1}. $$ With a skew invertible R-matrix $R$ we associate a linear map on the space of $\mbox{\sc n}\!\times\! \mbox{\sc n}$ matrices whose entries belong to some $\mathbb{C}$-linear space $W$: $$ {\rm Tr\str{-1.3}}_R:\; {\rm End}(V)\otimes W\,\rightarrow \, W ,\ {\rm Tr\str{-1.3}}_R(M) ={\textstyle \sum_{i,j=1}^{\mbox{\footnotesize\sc n}}}{(D_R)}_i^jM_j^i\, , \ M\in{\rm End}(V)\otimes W\,. $$ This map is called an {\em R-trace}.
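The definitions above can be checked on the simplest example, the permutation operator $P$ (a numerical sketch for $\dim V=2$, not part of the paper): $P$ satisfies the braid relation, and the skew-invertibility condition (\ref{s-inv}) holds with $\Psi_P=P$, so that $D_P$ is the identity.

```python
N = 2                                   # dim V

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Permutation operator P(u (x) v) = v (x) u on V (x) V:
# P maps e_i (x) e_j to e_j (x) e_i.
P = [[0.0] * (N * N) for _ in range(N * N)]
for i in range(N):
    for j in range(N):
        P[j * N + i][i * N + j] = 1.0

def embed(X, pos):
    """Embed a two-site operator X at sites (pos, pos+1) of V (x) V (x) V."""
    D = N ** 3
    dig = lambda r: (r // (N * N), (r // N) % N, r % N)
    M = [[0.0] * D for _ in range(D)]
    for r in range(D):
        rd = dig(r)
        for c in range(D):
            cd = dig(c)
            if all(rd[s] == cd[s] for s in range(3) if s not in (pos, pos + 1)):
                M[r][c] = X[rd[pos] * N + rd[pos + 1]][cd[pos] * N + cd[pos + 1]]
    return M

P12, P23 = embed(P, 0), embed(P, 1)

# Braid relation: P_1 P_2 P_1 = P_2 P_1 P_2.
lhs = matmul(matmul(P12, P23), P12)
rhs = matmul(matmul(P23, P12), P23)

# Skew invertibility (s-inv) with Psi_P = P: Tr_(2) P_12 Psi_23 = P_13.
M3 = matmul(P12, P23)
T = [[sum(M3[a * N * N + b * N + c][a2 * N * N + b * N + c2] for b in range(N))
      for a2 in range(N) for c2 in range(N)]
     for a in range(N) for c in range(N)]
P13 = [[1.0 if (a == c2 and c == a2) else 0.0
        for a2 in range(N) for c2 in range(N)]
       for a in range(N) for c in range(N)]
```

Since $D_P$ comes out as the identity, ${\rm Tr}_P$ coincides with the ordinary trace, in line with the discussion that follows.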
\smallskip It is easy to check that the R-matrix $P$ is strict skew invertible and that ${\rm Tr}\,_{\! P}$ coincides with the usual trace. A characteristic property of the R-trace map is \begin{equation} \lb{Rtr} \Tr{2} R_1 = I_1. \end{equation} An ordered pair $\{ R, F\}$ of two R-matrices $R$ and $F$ is called {\em a compatible R-matrix pair} if the following conditions \begin{equation} R_1\, F_2\, F_1\, =\, F_2\, F_1\, R_2\, ,\qquad R_2\, F_1\, F_2\, =\, F_1\, F_2\, R_1\, , \lb{sovm} \end{equation} are satisfied. The equalities (\ref{sovm}) are called {\em twist relations}. Clearly, $\{ R,P\}$ and $\{ R,R\}$ are compatible pairs of R-matrices. \vskip .1cm A compatible pair of R-matrices $\{R,F\}$ gives rise to a new R-matrix \begin{equation} \lb{R_f} R_f := F^{-1} R F\, , \end{equation} called the {\em twisted} R-matrix. The pair $\{ R_f , F\}$ is again compatible. If $R$ is skew invertible and $F$ is strict skew invertible, then $R_f$ is skew invertible; if additionally $R$ is strict skew invertible, then $R_f$ is strict skew invertible as well \cite{OP}. \subsection{BMW type R-matrices} Assume that an R-matrix $R$ satisfies a third order minimal characteristic polynomial \begin{equation} \lb{charR} (qI-R)(q^{-1}I+R)(\mu I-R)=0 , \end{equation} and that the element \begin{equation} \lb{K} K := \mu^{-1} (q-q^{-1})^{-1}\, (qI-R)(q^{-1}I+R)\end{equation} fulfills the conditions \begin{equation} \lb{bmwRa} K_2\, K_1 \, = \, K_2\, R_1^{\pm 1}\, R_2^{\pm 1}, \qquad K_1\, K_2\, K_1 \, = \, K_1. \end{equation} In this case $R$ generates representations $\rho_R$ of the tower of the BMW algebras ${\cal W}_n(q,\mu)$, $n>1$: $$ \rho_R:\,{\cal W}_n(q,\mu)\rightarrow {\rm Aut}(V^{\otimes n})\ ,\quad \sigma_i \mapsto R_i,\quad \kappa_i \mapsto K_i,\quad 1\leq i\leq n-1 . $$ Such an R-matrix is said to be of {\em BMW type}.
\smallskip If the R-matrix $R$ is skew invertible and of the BMW type, then it is strict skew invertible and the rank of the associated operator $K$ (\ref{K}) equals 1; the R-trace map in this case fulfills the equalities \cite{IOP3} \begin{equation} \lb{BMW-trace} \Tr{2} K_1 = \mu\, I_1, \qquad {\rm Tr}\,_{\! R}\, I ={ \textstyle{ (q-\mu)(q^{-1}+\mu)\over q-q^{-1}}}. \end{equation} Let $\{R,F\}$ be a compatible pair of R-matrices, where $R$ is skew invertible of the BMW type and $F$ is strict skew invertible. In \cite{OP} we associated with such a pair an invertible operator $G\in {\rm Aut}(V)$ and two invertible linear maps, $\phi$ and $\xi$, acting on the space ${\rm End}(V)\otimes W$, where $W$ is an arbitrary vector space. We use the operator $G$ and the maps $\phi$ and $\xi$ extensively in our investigation of the BMW type quantum matrix algebras (see, e.g., sections \ref{secqma}, \ref{secCHSp} below). Here we present formulas for them and for their inverses. The operator $G$ and its inverse read \begin{equation} \lb{G} G_1 \, :=\, {\rm Tr}\,_{\!(23)} K_2 F_1^{-1} F_{2}^{-1}, \qquad G_1^{-1}\, =\, {\rm Tr}\,_{\!(23)} F_2 F_1 K_2 . \end{equation} The maps $\phi$ and $\xi$ are defined by \begin{eqnarray} \lb{phi} \phi(M)_1 &:=& \Tr{2} \left( F_{1}M_1 F^{-1}_{1} R_{1}\right), \\[2pt] \lb{xi} \xi(M)_1 &:=& \Tr{2} \left( F_{1} M_1 F^{-1}_{1} K_{1}\right). \end{eqnarray} Here $M$ is an arbitrary operator with values in a vector space $W$, $M\in {\rm End}(V)\otimes W$. The inverse maps read \begin{eqnarray} \lb{phi-inv} \phi^{-1}(M)_1 &=& \mu^{-2}\TR{2}{R_f} \left( F_{1}^{-1} M_1 R^{-1}_{1} F_{1}\right) \\[2pt] \lb{xi-inv} \xi^{-1}(M)_1 &=& \mu^{-2} \TR{2}{R_f}\left( F^{-1}_{1} M_1 K_{1} F_{1}\right) . \end{eqnarray} Here the matrix $D_{R_f}$, which is needed for the calculation of the $R_f$-traces, is $$ D_{R_f} = D_{F^{-1}} (D_{R^{-1}})^{-1} D_F.
$$ \subsection{Orthogonal and symplectic type R-matrices}\lb{subsec3.4height} Consider the R-matrix realizations $\rho_R(a^{(i)})$ of the antisymmetrizers (\ref{a^k}). We impose additional constraints on a skew invertible BMW-type R-matrix $R$, demanding that \begin{equation} \lb{spec4} {\rm rk}\,\rho_R(a^{(i)})\neq 0\, \quad \forall\; i=2,3,\dots ,k\quad\mbox{and} \quad \rho_R\left( a^{(k)}\sigma^-_{k}(q^{-2k})a^{(k)} \right)\equiv 0\, \end{equation} for some $k\geq 2$. Here we assume that the parameters $q$, $\mu$ fulfill the conditions (cf. the conditions (\ref{mu})$\,$) \begin{equation} \lb{mu1} i_q\neq 0\, \;\;\forall\; i=2,3,\dots ,k; \qquad \mu\neq -q^{-2i+1}\, \;\;\forall\; i=1,2,\dots ,k . \end{equation} Note that in the case $(k+1)_q\neq 0$ the last condition in eq. (\ref{spec4}) means the vanishing of the $(k+1)$-st antisymmetrizer: $\rho_R(a^{(k+1)})=0$. We do not use this short form in order to avoid unnecessary restrictions on the parameter $q$. \medskip An R-matrix satisfying the conditions (\ref{spec4}) is called an {\em R-matrix of finite height}; the number $k$ is called the {\em height} of the R-matrix. \medskip Let us discuss some consequences of the relations (\ref{spec4}). Applying ${\Tr{i}}$ to $\rho_R(a^{(i)})$ and using the relations (\ref{a^k}), (\ref{Rtr}) and (\ref{BMW-trace}), we calculate \begin{equation}\lb{spec1}\Tr{i} \rho_R(a^{(i)})\, =\, \delta_i\, \rho_R(a^{(i-1)})\, \end{equation} where $\displaystyle{ \delta_i\equiv\delta_i(q,\mu) :=\, - {q^{i-1}(\mu + q^{1-2i})(\mu^2 - q^{4-2i})\over (\mu + q^{3-2i})(q-q^{-1}) i_q }. }$ In view of eqs. (\ref{spec1}), the last condition in (\ref{spec4}) implies, in particular, that $\delta_{k+1} = 0$, from which one finds three admissible values of $\mu$: $\mu\in\{-q^{-1-2k},\pm q^{1-k}\}$. \smallskip Notice that the choice $\mu = -q^{1-k}$ contradicts the conditions (\ref{mu1}) when the number $k$ is even.
In the case when $k$ is odd, the choices $\mu = -q^{1-k}$ and $\mu = q^{1-k}$ are related by the substitution $R \mapsto - R$. On the algebra level, this corresponds to an algebra isomorphism (see \cite{OP}, section 2.2) $\iota'' :{\cal W}_n(q,\mu)\rightarrow {\cal W}_n(-q,-\mu)$, $ \iota''(\sigma_i)= -\sigma_i$,~ $i=1,\dots ,n-1$. The antisymmetrizers $a^{(i)}$ are invariant under this map. Therefore we are left with only two essentially different choices of the parameter $\mu$: either $\mu=-q^{-1-2k}$ or $\mu=q^{1-k}$. With these choices, the consistency of the conditions on $\mu$ in eq. (\ref{mu1}) follows from the conditions on $q$. \smallskip We are now ready to define the families of orthogonal and symplectic R-matrices. \begin{defin}\lb{definition3.11} Let $R$ be a skew invertible BMW-type R-matrix. Assume additionally that the {\rm{R}}-matrix $R$ has finite height $k$ for some $k\geq 2$. This implies, in particular, restrictions on $q$: $i_q\ne 0$ for $i=2,\dots ,k$. Then \noindent \hspace{10mm}a)~ $R$ is called an $Sp(2k)$-type R-matrix in the case when $\mu=-q^{-1-2k}$; \medskip \noindent \hspace{10mm}b)~ $R$ is called an $O(k)$-type R-matrix in the case when $\mu=q^{1-k}$ and $\mbox{rk}\,\rho_R(a^{(k)})$ $=1$.\end{defin} {}For the standard R-matrices related to the quantum groups of the series $Sp_q(2k)$ and $SO_q(k)$ \cite{FRT}, the conditions a) and b), respectively, and the relations (\ref{spec4}) are fulfilled. This explains our terminology. \medskip The main subject of this paper is an investigation of the general structure of the quantum matrix algebras associated with R-matrices of symplectic type (see the next sections). For illustration purposes, in subsection \ref{sec4.3} we consider examples of such algebras related to the standard $Sp(2k)$-type R-matrices. For the reader's convenience, we recall the formulas for these particular symplectic R-matrices.
\smallskip The standard $Sp(2k)$-type R-matrix (see \cite{FRT}) reads \begin{equation} \lb{R-Sp} R^{\mbox{\tiny (st)}} \! :=\!\! \sum_{i,j=1}^{2k} q^{(\delta_{ij}-\delta_{ij'})} E_{ij}\otimes E_{ji} \! +\! (q-q^{-1}) \!\!\sum_{1\leq j<i}^{2k}\! \bigl\{\!E_{jj}\otimes E_{ii} \, -\, q^{(\rho_i-\rho_j)}\epsilon_i \epsilon_j \,E_{i'j}\otimes E_{i j'}\!\bigr\}. \end{equation} Here $E_{ij}$ are $2k\times 2k$ matrix units; $\delta_{ij}$ is the Kronecker symbol; \begin{equation} \lb{notat-Sp} i'= 2k+1-i\,;\quad \epsilon_i=-\epsilon_{i'}=1\, ; \ \rho_i = -\rho_{i'}= (k+1-i)\ \forall i: 1\leq i\leq k. \end{equation} The corresponding matrices $K^{\mbox{\tiny (st)}}$ and $D_{R^{\mbox{\tiny (st)}}}$ are \begin{equation} \lb{K-Sp} K^{\mbox{\tiny(st)}} \, =\, \sum_{i,j=1}^{2k} q^{-(\rho_{i}+\rho_{j})} \epsilon_i \epsilon_{j'}E_{ij}\otimes E_{i'j'} , \ D_{R^{\mbox{\tiny (st)}}}\, =\, \sum_{i=1}^{2k} q^{-(2k+2\rho_i+1)} E_{ii}\, . \end{equation} \begin{rem}\lb{remark3.12.1} {\rm For the family of symplectic R-matrices, the case $k=1$ is special: the antisymmetrizer $\rho_R(a^{(2)})$ vanishes and the minimal polynomial of $R$ becomes quadratic. The R-matrix $R^{\mbox{\tiny (st)}}$, up to normalization and the reparameterization $q\mapsto q^{1/2}$, is of the Hecke type $GL(2)$ (see the $Sp(2)$ examples in subsection \ref{sec4.3}). This is a manifestation of the accidental isomorphism $SL(2)\sim Sp(2)$. Accidental isomorphisms for quantum groups, corresponding to the standard deformation, are discussed in \cite{JO}. } \end{rem} \begin{rem}\lb{remark3.12} {\rm The functions $$ \Delta^{(i)}(q,\mu):=\Tr{1,2,\dots ,i}\rho_R(a^{(i)}) = \prod_{j=1}^i \delta_j(q,\mu) $$ are, up to an overall factor, particular elements of a set of rational functions $Q_\lambda(\mu^{-1},q)$ labelled by partitions $\lambda\vdash i$; we have $\Delta^{(i)}(q,\mu)=\mu^i Q_{[1^i]}(\mu^{-1},q)$. The functions $Q_\lambda(\mu^{-1},q)$ were introduced in Theorem 5.5 in \cite{W}.
They describe the q-dimensions of the highest weight modules $V_{\lambda}$ for the orthogonal and symplectic quantum groups (see \cite{W}, Section 5 and \cite{OW}, Lemma 3.1). }\end{rem} \section{Quantum matrix algebra}\lb{secqma} In this section we recall the definitions and main facts about quantum matrix algebras from \cite{OP}. Special attention is paid to the family of BMW type quantum matrix algebras. The notion of the characteristic subalgebra is introduced and two of its generating sets are described. The $\star$-product of quantum matrices is defined; it substitutes for the usual matrix multiplication in the case of quantum matrices. All these data are necessary for a proper generalization of the Cayley-Hamilton theorem to the case of quantum matrix algebras. The latter is done in the next section. \medskip Let $\{R,F\}$ be a compatible pair of R-matrices. In the sequel we assume that $R$ and $F$ are strict skew invertible, although some definitions can be given without this condition. A {\em quantum matrix algebra} ${\cal M}(R,F)$ is a quotient algebra of the free associative unital algebra $W={\Bbb C}\langle M_a^b\rangle$ by the two-sided ideal generated by the entries of the matrix relation \begin{equation} R_1 M_{\overline 1}M_{\overline 2} = M_{\overline 1}M_{\overline 2}R_1\ . \label{qma} \end{equation} Here $M = \|M_a^b\|_{a,b=1}^{\mbox{\footnotesize\sc n}}$ is the matrix of generators; the matrix copies $M_{\overline i}$ are constructed with the help of the R-matrix $F$ in the following way: \begin{equation} M_{\overline 1}:=M_1, \quad M_{\overline{i}}:= F^{\phantom{-1}}_{i-1}M_{\overline{i-1}}F_{i-1}^{-1}\ . \lb{kopii} \end{equation} The set of relations \begin{equation} R_i M_{\overline i}M_{\overline{i+1}} = M_{\overline i}M_{\overline{i+1}}R_i \label{qmai} \end{equation} for any given value of the index $i\geq 1$ is equivalent to (\ref{qma}) and can be used as well for the definition of the quantum matrix algebra.
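As a concrete sanity check of the R-matrix input (our own numerical illustration, not taken from \cite{FRT} or \cite{OP}), one can assemble the standard $Sp(2k)$-type R-matrix directly from (\ref{R-Sp}), (\ref{notat-Sp}) and test the braid relation together with the cubic BMW minimal polynomial with $\mu=-q^{-1-2k}$, cf. Definition \ref{definition3.11}. A minimal NumPy sketch for $k=2$:

```python
import numpy as np

def E(i, j, N):
    """Matrix unit E_ij (1-indexed), as in (R-Sp)."""
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

def R_sp(k, q):
    """Standard Sp(2k)-type R-matrix, coded directly from (R-Sp), (notat-Sp)."""
    N = 2 * k
    ip  = lambda i: N + 1 - i                        # i' = 2k+1-i
    eps = lambda i: 1.0 if i <= k else -1.0          # eps_i = -eps_{i'} = 1
    rho = lambda i: k + 1 - i if i <= k else k - i   # rho_i = -rho_{i'} = k+1-i
    R = np.zeros((N * N, N * N))
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            R += q ** ((i == j) - (i == ip(j))) * np.kron(E(i, j, N), E(j, i, N))
    for i in range(2, N + 1):                        # sum over 1 <= j < i
        for j in range(1, i):
            R += (q - 1/q) * np.kron(E(j, j, N), E(i, i, N))
            R -= (q - 1/q) * q ** (rho(i) - rho(j)) * eps(i) * eps(j) \
                 * np.kron(E(ip(i), j, N), E(i, ip(j), N))
    return R

k, q = 2, 1.2
N = 2 * k
R = R_sp(k, q)
R1, R2 = np.kron(R, np.eye(N)), np.kron(np.eye(N), R)
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)       # braid relation
mu = -q ** (-1 - 2 * k)                              # Sp(2k)-type eigenvalue
I = np.eye(N * N)
assert np.allclose((R - q*I) @ (R + I/q) @ (R - mu*I), 0*I)
```

For $k=1$ the factor $(R+q^{-1}I)$ becomes redundant and the minimal polynomial degenerates to a quadratic one, in accordance with Remark \ref{remark3.12.1}.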
\medskip Denote by ${\cal C}(R,F)$ a vector subspace of the quantum matrix algebra ${\cal M}(R,F)$ spanned linearly by the unity and the elements \begin{equation} \lb{char} ch(\alpha^{(n)}) := \Tr{1,\dots ,n}(M_{\overline 1}\dots M_{\overline n}\, \rho_R(\alpha^{(n)}))\ ,\quad n =1,2,\dots\ , \end{equation} where $\alpha^{(n)}$ is an arbitrary element of the braid group ${\cal B}_n$. The space ${\cal C}(R,F)$ is a commutative subalgebra in ${\cal M}(R,F)$ (this is proved in \cite{IOP1}, which deals with the Hecke type quantum matrix algebras, but the proof is valid for an arbitrary compatible pair $\{R,F\}$). The algebra ${\cal C}(R,F)$ is called the {\em characteristic subalgebra} of ${\cal M}(R,F)$. \medskip Denote by ${\cal P}(R,F)$ a linear subspace of ${\rm End}(V)\otimes {\cal M}(R,F)$ spanned by ${\cal C}(R,F)$-multiples of the identity matrix, $I\, ch\;\; \forall\, ch\in {\cal C}(R,F)$, and by the elements \begin{equation} \lb{pow} M^1 :=\! M, \ (M^{\alpha^{(n)}})_{1} :=\! \Tr{2,\dots ,n}( M_{\overline 1} \dots M_{\overline n}\,\rho_R(\alpha^{(n)})),\ n =2,3,\dots, \end{equation} where $\alpha^{(n)}$ belongs to the braid group ${\cal B}_n$. The space ${\cal P}(R,F)$ carries the structure of a right ${\cal C}(R,F)$--module \begin{equation} \lb{r-module} M^{\alpha^{(n)}} ch(\beta^{(i)}) =M^{(\alpha^{(n)}\beta^{(i)\uparrow n})}\, \ \forall\, \alpha^{(n)}\in {\cal B}_n, \; \beta^{(i)}\in {\cal B}_i\, ,\ n,i=1,2,\dots\, . \end{equation} Here, in the right hand side, we denoted by the same symbol $\alpha^{(n)}$ the image of the element $\alpha^{(n)}$ under the natural monomorphism ${\cal B}_{n}\hookrightarrow {\cal B}_{n+i}: \sigma_j\mapsto \sigma_{j}$. The symbol $\beta^{(i)\uparrow n}$ denotes the image of the element $\beta^{(i)}$ under the natural monomorphism ${\cal B}_{i}\hookrightarrow {\cal B}_{n+i}: \sigma_j\mapsto \sigma_{j+n-1}$.
Formula (\ref{r-module}) is just a component-wise multiplication of the matrix $M^{\alpha^{(n)}}$ by the element $ch(\beta^{(i)})$. \medskip The {\it $\star$-product} is the binary operation ${\cal P}(R,F) \otimes {\cal P}(R,F)\stackrel{\star}{\longrightarrow} {\cal P}(R,F)$ defined by \begin{eqnarray} \nonumber ( ch(\beta^{(i)}) I) \star M^{\alpha^{(n)}} := M^{\alpha^{(n)}} ch(\beta^{(i)})&=:& M^{\alpha^{(n)}}\! \star ( ch(\beta^{(i)})I) , \\[2pt] \nonumber ( ch(\alpha^{(n)})I)\star ( ch(\beta^{(i)})I) &:=& ( ch(\alpha^{(n)}) ch(\beta^{(i)}))I , \\[2pt] \lb{MaMb} M^{\alpha^{(n)}}\! \star M^{\beta^{(i)}} &:=& M^{(\alpha^{(n)}\star \beta^{(i)})} , \\[2pt] \nonumber \mbox{where we use the notation}\quad\;\; \alpha^{(n)}\star \beta^{(i)} &:=& \alpha^{(n)}\beta^{(i)\uparrow n} (\sigma_n\dots \sigma_2 \sigma_1\sigma_2^{-1}\dots \sigma_n^{-1}) . \end{eqnarray} The $\star$-product on ${\cal P}(R,F)$ is associative \cite{OP}. \vskip .1cm In what follows we often use the $\star$-multiplication by the matrix of generators of the quantum matrix algebra ${\cal M}(R,F)$. Explicitly, it reads (see \cite{OP}) \begin{equation} \lb{M*} M \star N = M\cdot \phi(N) \quad \forall N\in {\cal P}(R,F), \end{equation} where $\cdot$ denotes the usual matrix multiplication and the map $\phi$ is defined in (\ref{phi}). In particular, one can introduce the noncommutative analogue of the matrix power: \begin{equation} \lb{M^k} M^{\overline{0}} := I\, , \qquad M^{\overline{n}}\, :=\, \underbrace{M\star M\star \dots \star M}_{\mbox{\small $n$ times}}\, =\, M^{(\sigma_1\sigma_2\dots \sigma_{n-1})}. \end{equation} Here we use the symbol $M^{\overline{n}}$ for the {\em $n$-th power of the matrix $M$}. \subsection{BMW type} If $R$ is an R-matrix of the BMW, $Sp(2k)$ or $O(k)$ type, then ${\cal M}(R,F)$ is called, respectively, a {\em BMW, $Sp(2k)$ or $O(k)$ type quantum matrix algebra}.
\vskip .1cm For the BMW type quantum matrix algebra the following relations are satisfied as a consequence of (\ref{qma}) \begin{equation} \lb{tau2} K_{i}\, M_{\overline{i}}M_{\overline{i+1}}\!\! =\!\! \mu^{-2} K_{i}\, g\, \, =\,M_{\overline{i}}M_{\overline{i+1}}\, K_{i}\quad \forall\; i\geq 1 , \end{equation} where \begin{equation} \lb{tau} \displaystyle{ g\! :=\! { { \mu(q-q^{-1})\over (q-\mu)(q^{-1}+\mu) }}\, \Tr{1,2} \left( M_{\overline{1}}M_{\overline{2}}\, K_1\right) .} \end{equation} The element $g$ is called a {\em 2-contraction of $M$}. \vskip .1cm For the quantum matrix algebra of the BMW type the 2-contraction $g$ is an element of the characteristic subalgebra. The characteristic subalgebra of the BMW type quantum matrix algebra is generated by either of the sets $\{g,p_i\}_{i\geq 0}$, where \begin{equation} \lb{P-i} p_0 = {\rm Tr}\,_{\!\! R} I ={ \textstyle{ (q-\mu)(q^{-1}+\mu)\over q-q^{-1}}} ,\ p_1 \!=\! {\rm Tr}\,_{\!\! R}\, M, \ p_i = ch(\sigma_{i-1}\dots\sigma_2\sigma_1)\, , \quad i=2,3,\ldots\ , \end{equation} or $\{g,a_i\}_{i\geq 0}$, where \begin{eqnarray} \lb{A_i} a_0 \!&=&\! 1,\qquad\quad\;\, a_i = ch(a^{(i)}) \qquad\qquad\; i=1,2,\dots . \end{eqnarray} The elements $p_i$ and $a_i$ are called {\em power sums} and {\em elementary symmetric functions}, respectively. For the Hecke type quantum matrix algebra, the corresponding algebra ${\cal P}(R,F)$, as the ${\cal C}(R,F)$-module, is spanned by the matrix powers $M^{\overline n}$, $n\geq 0$, of the generating matrix $M$. For the BMW type quantum matrix algebra this is not the case.
Namely, as the ${\cal C}(R,F)$--module, the BMW type algebra ${\cal P}(R,F)$ is spanned by the matrices (see \cite{OP}, proposition 4.11) $$ M^{\overline{n}}\, \quad \mbox{and}\quad M \raisebox{1mm}{$\intercal$}(M^{\overline{n+2}})\, , \quad n=0,1,\dots\, . $$ Here we introduced a ${\cal C}(R,F)$--module map~ $M \raisebox{1mm}{$\intercal$} : {\cal P}(R,F)$ $\rightarrow$ $ {\cal P}(R,F)$ \begin{equation} \lb{Mt} M \raisebox{1mm}{$\intercal$} (N) := M\cdot \xi(N), \qquad \forall\, N\in {\cal P}(R,F), \end{equation} where the map $\xi$ is given in (\ref{xi}).\smallskip The BMW type algebra ${\cal P}(R,F)$ is commutative \cite{OP}. \medskip \vspace{0mm} To define inverse powers of the quantum matrix $M$ one considers the extension of the BMW type algebra ${\cal M}(R,F)$ by the inverse $g^{-1}$ of the 2-contraction \begin{equation} \lb{j-inv} g^{-1}\, g\, =\, g\, g^{-1}\, =\, 1\, , \qquad g^{-1}\, M\, =\, (G^{-1}MG)\, g^{-1}\, . \end{equation} Here the numeric matrices $G^{\pm 1}\in {\rm Aut}(V)$ are defined in eqs. (\ref{G}). The latter relation in (\ref{j-inv}) is justified by the permutation rules for the 2-contraction. For an arbitrary matrix $N\in {\cal P}(R,F)$ it reads \begin{equation} \lb{g-perm} N\, g\, =\, g\, ( G^{-1}N G) . \end{equation} \noindent {\bf Proof.}~ In the particular case $N=M$ (the matrix of generators of ${\cal M}(R,F)$), this formula is proved in \cite{OP}, lemma 4.13. Consequently, by lemma 3.11, eq. (3.45), \cite{OP}, we have $M_{\overline j}\,g\, =\, g\, ( G_j^{-1}M_{\overline j}\, G_j)$, $j=1,2,\ldots$. Thus for $u=M_{\overline 1}\dots M_{\overline n}\,\rho_R(\alpha^{(n)})$, $\alpha^{(n)}\in {\cal B}_n$, we have $ug=g G_1G_2\dots G_n u G_n^{-1}\dots G_2^{-1}G_1^{-1}$. By the cyclic property of the trace and lemma 3.11, eq.
(3.44), $G_2\dots G_n$ cancels with $G_n^{-1}\dots G_2^{-1}$, which proves eq. (\ref{g-perm}) for $N=\Tr{2,\dots ,n}(u)$.\hfill$\blacksquare$ \vskip .1cm The extended algebra, which we shall further denote by ${\cal M^{^\bullet\!}}(R,F)$, contains the matrix inverse of $M$ \begin{equation} \lb{M-inv} M^{-1}\, =\, \mu\, \xi(M)\, g^{-1},\qquad M\cdot M^{-1}\,=\, I\, =\, M^{-1}\cdot M. \end{equation} The matrix $M^{-1}$ is the inverse of $M$ with respect to the usual matrix product. The inverse with respect to the $\star$-product looks different: \begin{equation} \lb{M-star-inv} M^{\overline{-1}} = \phi^{-1}(M^{-1}),\qquad M^{\overline{-1}}\star M \, =\, I\, =\, M\star M^{\overline{-1}}. \end{equation} In general, $M^{\overline{-1}}\neq M^{-1}$.\medskip One can define the unique extension ${\cal P^{^\bullet\!}}(R,F)$ of the algebra ${\cal P}(R,F)$ by a repeated $\star$-multiplication with $M^{\overline{-1}}$ \begin{equation} \lb{Minv*N} M^{\overline{-1}}\star N \,:=\, \phi^{-1}(M^{-1}\cdot N)\, =:\, N \star M^{\overline{-1}} \qquad \forall\; N\in{\cal P^{^\bullet\!}}(R,F)\, . \end{equation} The algebra ${\cal P^{^\bullet\!}}(R,F)$ is associative and commutative with respect to the $\star$-product. It is also a right ${\cal C^{^\bullet\!}}(R,F)$-module algebra with respect to the extension ${\cal C^{^\bullet\!}}(R,F)\supset {\cal C}(R,F)$ of the characteristic subalgebra by the element $g^{-1}$. Particular examples of the $\star$-multiplication by $M^{\overline{-1}}$ are the inverse $\star$-powers of $M$ $$ M^{\overline{-n}}\, :=\,\underbrace{M^{\overline{-1}}\star \dots \star M^{\overline{-1}}\star }_{n\ \text{times}}I. $$ The $\star$-powers obey the usual rules of the $\star$-product of matrix powers: $M^{\overline{i}}\star M^{\overline{n}}\, =\, M^{\overline{i+n}}\;\; \forall\ i,n\in {\Bbb Z}$.
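For orientation, the inversion formula (\ref{M-inv}) has a simple classical counterpart, which we sketch numerically (our own toy illustration; identifying the $q=1$ limit of $\mu\,\xi(\cdot)$ with conjugation of the transposed matrix by the symplectic form $\Omega$ is an assumption made only for this sketch): if a numeric matrix satisfies $M\,\Omega\,M^{\rm t}=g\,\Omega$ with a scalar $g$, then $M\cdot(\Omega M^{\rm t}\Omega^{-1})=g\,I$, i.e. $M^{-1}=\Omega M^{\rm t}\Omega^{-1}g^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2
Ik = np.eye(k)
Omega = np.block([[0*Ik, Ik], [-Ik, 0*Ik]])     # symplectic form on R^{2k}

# build a "symplectic similitude": M Omega M^t = g Omega with g = scale**2
A = rng.standard_normal((k, k)) + 3*Ik          # generic invertible block
scale = 1.7
M = scale * np.block([[A, 0*Ik], [0*Ik, np.linalg.inv(A).T]])
g = scale**2
assert np.allclose(M @ Omega @ M.T, g * Omega)

# classical analogue of M^{-1} = mu * xi(M) * g^{-1}
Minv = Omega @ M.T @ np.linalg.inv(Omega) / g
assert np.allclose(M @ Minv, np.eye(2*k))
```

For $2\times 2$ matrices the relation $M\Omega M^{\rm t}=\det(M)\,\Omega$ holds identically, which matches the fact that in the $Sp(2)$ examples of subsection \ref{sec4.3} the 2-contraction plays the role of a (quantum) determinant.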
\section{Cayley-Hamilton theorem}\lb{secCHSp} The Cayley-Hamilton theorem for the orthogonal and symplectic quantum groups was stated in the unpublished text \cite{OP2}. Here we establish and discuss in detail a strengthened version of the Cayley-Hamilton theorem in the symplectic case. \vskip .1cm Throughout this section we assume that $\{R,F\}$ is a compatible pair of R-matrices, in which the operator $F$ is strict skew invertible and the operator $R$ is skew invertible of the BMW-type and, hence, strict skew invertible. \vskip .1cm In subsection \ref{subsec5.1} we investigate matrix relations in the algebra ${\cal P}(R,F)$ involving `wedge' powers of the quantum matrix $M$: $M^{a^{(i)}}$, $0\leq i\leq n$. We constrain the eigenvalues $q$ and $\mu$ of the matrix $R$ by the conditions $$ i_q\neq 0,\;\mu\neq -q^{3-2i}\;\;\forall \; i=2,3,\dots ,n, $$ in which case all the antisymmetrizers $a^{(i)}\in {\cal W}_{n}(q,\mu)$, $i=2,3,\dots ,n$, and, hence, the elements $a_i\in {\cal C}(R,F)$ and the matrices $M^{a^{(i)}}\in {\cal P}(R,F)$ are well defined. \vskip .1cm Conditions on $R$ specific to the R-matrices of the type $Sp(2k)$ are imposed in Subsection \ref{subsec5.2}. \subsection{Basic identities}\lb{subsec5.1} Consider a set of `wedge' powers of the quantum matrix $M$: $M^{a^{(i)}}\in {\cal P}(R,F)$. Following \cite{OP}, we introduce series of matrices in ${\cal P}(R,F)$, which we further refer to as `descendants' of the matrices $M^{a^{(i)}}$. \begin{equation} \lb{La} \begin{array}{l} A^{(m,i)}\ :=\ i_q\, M^{\overline{m}} \star M^{a^{(i)}} \\[1em] B^{(m+1,i)}\ :=\ i_q\, M^{\overline{m}} \star M \raisebox{1mm}{$\intercal$} (M^{a^{(i)}})\end{array} \quad \forall\, i,m:\; 1\leq i\leq n, \;m\geq 0.
\end{equation} It is convenient to set, by definition, \begin{eqnarray} \lb{LT0} &A^{(m,0)}\ :=0\ \ \ {\mathrm{and}}\ \ \ \ B^{(m,0)}\ :=\ 0\,\qquad \forall\; m\geq 0\,&\end{eqnarray} and to complement the series by the elements\footnote{ Note that $A^{(-1,i)}$ and $B^{(0,i)}$ belong to the extension of the algebra ${\cal P}(R,F)$ by the $\star$-inverse matrix $M^{\overline{-1}}$. } \begin{equation} \lb{AB-boundary} \begin{array}{l} A^{(-1,i)}\, :=\, i_q\, \phi^{-1}\left( \Tr{2,3,\dots i} M_{\overline{2}}M_{\overline{3}}\dots M_{\overline{i}}\, \rho_R(a^{(i)}) \right)\ , \\[1em] B^{(0,i)}\ :=\ i_q\, \phi^{-1}\bigl(\xi\bigl(M^{a^{(i)}}\bigr)\bigr)\, . \end{array} \end{equation} The following recursive relations among the descendants are derived in \cite{OP}: \begin{lem}\lb{lemma5.1} {}For ~$0\leq i\leq n-1$~ and ~$m\geq 0$, the matrices $A^{(m-1,i+1)}$ and $B^{(m+1,i+1)}$ satisfy the equalities \begin{eqnarray} \lb{rek1} A^{(m-1,i+1)} &=& q^i M^{\overline{m}}\, a_i\, -\, A^{(m,i)}\, -\, { \mu q^{2i-1}(q-q^{-1})\over 1+\mu q^{2i-1}}\ B^{(m,i)}\, , \\[1em] \lb{rek2} B^{(m+1,i+1)} &=&\Bigl( \mu^{-1}q^{-i} M^{\overline{m}}\, a_i\, +\, {q-q^{-1}\over 1+\mu q^{2i-1}}\ A^{(m,i)}\, -\,B^{(m,i)}\Bigr) g\, . \end{eqnarray} \end{lem} \medskip By repeated use of these recursion relations one can derive, for a certain subset of the descendants, expansions in terms of non-negative matrix powers $M^{\overline{j}}$, $j\geq 0$, only\footnote{By Proposition 4.11 \cite{OP}, one also expects the presence of the terms $M \raisebox{1mm}{$\intercal$}(M^{\overline{j}})$ in the expansions of generic descendants.}. For the Hecke type QM-algebras, analogues of these expansions are known as the Cayley-Hamilton-Newton identities \cite{IOP,IOP1,IOPS}.
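For orientation we recall the classical content of such expansions (our own illustration): at $q=1$ and $g=0$ the elements $p_i$ and $a_i$ of (\ref{P-i}), (\ref{A_i}) become the ordinary power sums and elementary symmetric functions of the eigenvalues, and taking traces one arrives at the classical Newton relations, which a few lines of sympy confirm:

```python
import sympy as sp
from itertools import combinations

n = 4
x = sp.symbols(f'x1:{n + 1}')                   # commuting "eigenvalues"
p = {i: sum(xi**i for xi in x) for i in range(1, n + 1)}     # power sums
e = {i: sum(sp.Mul(*c) for c in combinations(x, i)) for i in range(1, n + 1)}

# Newton relations: p_k = e_1 p_{k-1} - e_2 p_{k-2} + ... + (-1)^{k-1} k e_k
for kk in range(1, n + 1):
    rhs = sum((-1)**(j - 1) * e[j] * p[kk - j] for j in range(1, kk)) \
          + (-1)**(kk - 1) * kk * e[kk]
    assert sp.expand(p[kk] - rhs) == 0
```

The $q$-deformed matrix analogues, including the $g$-dependent corrections, are the subject of Proposition \ref{corollary5.2} below.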
\begin{prop}\lb{corollary5.2} {}For $1\leq i\leq n$ and $m\geq i-2$, one has \begin{equation} \lb{cor1a} A^{(m,i)}\;\, =\;\, (-1)^{i-1}\sum_{j=0}^{i-1} (-q)^j\Bigl\{ M^{\overline{m+i-j}} + {1-q^{-2}\over 1+\mu q^{2i-3}}\sum_{r=1}^{i-j-1}M^{\overline{m+i-j-2r}} (q^2 g)^r \Bigr\} a_j\, . \end{equation} \hspace{21mm}For $1\leq i\leq n$ and $m\geq i$, one has \begin{eqnarray} \nonumber B^{(m,i)}&=& (-1)^{i-1}\sum_{j=0}^{i-1} (-q)^j\Bigl\{ \mu^{-1} q^{-2j} M^{\overline{m-i+j}} g^{i-j}\hspace{56mm} \\[0em] &&\hspace{45mm} -{q^{-1}(1-q^{-2})\over 1+\mu q^{2i-3}}\sum_{r=1}^{i-j-1}M^{\overline{m+i-j-2r}} (q^2 g)^r \Bigr\} a_j\, . \lb{cor1b} \end{eqnarray} \end{prop} \noindent {\bf Proof.}~ We employ induction on $i$. In the case $i=1$, the relations (\ref{cor1a}) and (\ref{cor1b}) reproduce the definitions (\ref{La}): $$ A^{(m,1)}\, =\, M^{\overline{m+1}}\ , \qquad B^{(m,1)}\, =\, \mu^{-1} M^{\overline{m-1}} g\ . $$ \smallskip It is then straightforward to verify the induction step $i\rightarrow i+1$ with the help of the relations (\ref{rek1}) and (\ref{rek2}). \hfill$\blacksquare$ \begin{rem} {\rm When ~$m\geq i-2$~ (respectively, ~$m\geq i$), all the $\star\, $-powers of $M$ in the right hand side of the relation (\ref{cor1a}) (respectively, the relation (\ref{cor1b})$\,$) are non-negative. This is why we specify these restrictions on $m$. For an invertible matrix $M$, the restrictions on $m$ can be removed. } \end{rem} \begin{rem} {\rm The Hecke type version of these relations can be reproduced by setting $g=0$ in the formulas of Proposition \ref{corollary5.2}. The relation for $B^{(m,i)}$ becomes trivial. The relation (\ref{cor1a}) for $A^{(m,i)}$ simplifies drastically: the terms with the element $g$ disappear, and the condition $m\geq i-2$ weakens to $m\geq -1$. {}For $m=0$, the relation (\ref{cor1a}) reproduces the Cayley--Hamilton--Newton identities found in \cite{IOP,IOP1}. The R-trace maps of these identities are the Newton relations.
In the $GL(k)$-case, that is, if the operator $R$ fulfills the condition $\rho_R(a^{(k+1)})=0$, the left hand side of the relation (\ref{cor1a}) vanishes in the case $i=k+1$. Then, with the choice $m=-1$, the relation (\ref{cor1a}) reproduces the Cayley--Hamilton identity.} \end{rem} \subsection{Cayley-Hamilton theorem: type $Sp(2k)$}\lb{subsec5.2} Specializing to the case of the $Sp(2k)$-type quantum matrix algebra, we notice that the condition $\mu= - q^{-1-2k}$ leads to the following linear dependency between $A^{(m-1,k+1)}$, see (\ref{rek1}), and $B^{(m+1,k+1)}$, see (\ref{rek2}): \begin{equation}\lb{previous-rem} \left. \left(B^{(m+1,k+1)} + q A^{(m-1,k+1)} g \right)\right\vert_{\mu=-q^{-1-2k}} = 0\,\ \quad \forall\; m\geq 0\, . \end{equation} The height $k$ condition (\ref{spec4}) on the $Sp(2k)$-type R-matrix $R$ cuts the series of `descendants' $A^{(m,i)}$ and $B^{(m,i)}$ at the level $i=k+1$: $A^{(m-1,k+1)} = B^{(m+1,k+1)}=0\, \quad \forall\; m\geq 0$. The Cayley-Hamilton theorem follows exactly from these cutting conditions. The relations (\ref{previous-rem}) show that all the conditions for $B^{(m+1,k+1)}$ follow from the conditions for $A^{(m-1,k+1)}$. In turn, by eqs. (\ref{La}) and (\ref{AB-boundary}) we have \begin{equation} \lb{ahah-new} A^{(m-1,k+1)}=M^{\overline{m}} \star A^{(-1,k+1)}. \end{equation} Thus, all the cutting conditions arise from the single one \begin{equation} \lb{ahah} A^{(-1,k+1)}\, =\, 0\, . \end{equation} Unfortunately, the latter condition cannot be expressed in terms of nonnegative powers of the matrix $M$ only. By Proposition \ref{corollary5.2}, for the condition \begin{equation} \lb{ahah-1} A^{(k-1,k+1)}\, =\,0 \end{equation} such an expression does exist. The relations (\ref{ahah}) and (\ref{ahah-1}) are equivalent if the 2-contraction $g$ and, hence, the matrix $M$ are invertible. We shall first investigate the condition (\ref{ahah-1}).
Substituting $\mu=-q^{-1-2k}$ and (\ref{cor1a}) into (\ref{ahah-1}) and rearranging the terms of the sum, we obtain the Cayley-Hamilton theorem for the quantum matrices of the type $Sp(2k)$: \begin{theor} \lb{theorem5.4} Let ${\cal M}(R,F)$ be the $Sp(2k)$-type quantum matrix algebra. Then the quantum matrix $M$ of the algebra generators satisfies the Cayley-Hamilton identity \begin{equation} \lb{CHSp-1} \sum_{i=0}^{2k} (-q)^i M^{\overline{2k-i}} \epsilon_i\, =\, 0\, , \end{equation} where \begin{equation} \lb{CHSp-2} \epsilon_i\, :=\, \sum_{j=0}^{[i/2]} a_{i-2j}\, g^j\, , \qquad \epsilon_{k+i}\, :=\, \epsilon_{k-i}\, g^{i}\,\qquad \forall\; i=1,2,\dots ,k\, . \end{equation} \end{theor} Let us now consider the matrix identity (\ref{ahah}). In the case of non-invertible $g$, this identity is more informative than the Cayley-Hamilton identity (\ref{CHSp-1}). Matrix components of its left hand side are $k$-th order homogeneous polynomials in the components of the quantum matrix $M$, containing, apart from $M$, $\star$-powers of yet another quantum matrix obtained from $M$ by a linear map $\pi := \mu\, \phi^{-1}\!\circ \xi$ (see eqs. (\ref{xi}), (\ref{phi-inv})). \begin{lem} \lb{lem4.6} For the compatible pair $\{R,F\}$ of strict skew invertible R-matrices, where $R$ is of the BMW-type, the map $\pi := \mu\, \phi^{-1}\!\circ \xi$ does not depend on $F$. The explicit formulas for $\pi$ and $\pi^{-1}$ read: \begin{eqnarray} \lb{pi} \pi(M)_1 &=& \Tr{2} R_{12} M_1 K_{12}\, =\, \Tr{2} K_{12} M_1 R_{12}\, . \\[2pt] \lb{pi-inv} \pi^{-1}(M)_1&=& \mu^{-2}\, \Tr{2} R_{12}^{-1} M_1 K_{12}\, =\, \mu^{-2}\, \Tr{2} K_{12} M_1 R^{-1}_{12} .
\end{eqnarray} \end{lem} \noindent {\bf Proof.}~ Instead of proving the first equality in (\ref{pi}) directly it is easier to verify the relation $\phi (\Tr{2} R_{1} M_1 K_{1})=\mu\, \xi(M)_1$: \begin{eqnarray} \nonumber \phi\bigl( \Tr{2} R_{1} M_1 K_{1} \bigr) & =& \Tr{2} F_{12} \Bigl\{ \Tr{2'} R_{12'} M_1 K_{12'} \Bigr\} F^{-1}_{12} R_{12} \\[2pt] \nonumber &=&\, \Tr{23} \underline{F_1\bigl\{ F_2 R_1} M_1 K_1 \underline{F_{2}^{-1}\bigr\} F_1^{-1}} R_1 \\[2pt] \nonumber &=& \Tr{23} \underline{R_2} F_1 \underline{F_2 M_1 F_{2}^{-1}} F_1^{-1} K_2 R_1 \\[2pt] &=&\, \Tr{23} F_1 M_1 F_1^{-1} \underline{K_2 R_1 R_2} \\[2pt] \nonumber &= & \Tr{2\underline{3}} F_1 M_1 F_1^{-1} \underline{K_2} K_1\\[2pt] \nonumber &=&\, \mu \Tr{2} F_1 M_1 F_1^{-1} K_1 \, =\, \mu\, \xi(M). \end{eqnarray} Here, in the calculations, we underline the terms which undergo a transformation in the next step. For the transformations we used the compatibility relations for the pair $\{R,F\}$ (\ref{sovm}), the BMW algebra relations for the matrices $R$ and $K$ (\ref{bmwRa}), the first formula in (\ref{BMW-trace}), and the following properties of the R-trace (see \cite{OP}, lemma 3.2 and corollary 3.4) \[ \!\!\!\!\Tr{2} F_1^{\pm 1}\, X_1\, F_1^{\mp 1}= I_1\, {\rm Tr}\,_{\! R} X \quad\; \forall\; X\in {\rm End}(V)\otimes W, \] where $W$ is a $\Bbb C$-linear space, and \[ \Bigl[ R_{12}, \, {D_R}_1 {D_R}_2 \Bigr] = 0 \] which is equivalent to \[ \Tr{12} Y_1\, R_1 = \Tr{12} R_1\, Y_1 \qquad \forall\; Y\in {\rm End}(V^{\otimes 2})\otimes W.\] The second equality in relation (\ref{pi}) holds for an arbitrary BMW type R-matrix $R$. \vskip .1cm To prove formula (\ref{pi-inv}) one notices that the map $\pi$ is proportional to the map $\xi$ for the pair $\{R,R\}$.
Thus the first equality in (\ref{pi-inv}) follows from the formula for $\xi^{-1}$ for the pair $\{R,R\}$, see (\ref{xi-inv}). The second equality in (\ref{pi-inv}) is obtained from the first one by the same remark as for the map $\pi$. \hfill$\blacksquare$ \medskip Until the end of this subsection we let $M$ be the matrix of generators of the BMW type quantum matrix algebra ${\cal M}(R,F)$. \vskip .1cm In general, the matrix $\pi(M)$ does not belong to the algebra ${\cal P}(R,F)$. On the other hand, $\pi(M)$ is related to the $\star$-inverse of the matrix $M$ (see (\ref{M-star-inv})) \begin{equation} \lb{pi-Minv} \pi(M)\, =\, M^{\overline{-1}} g\, =\, M^{\overline{-1}}\star I g \end{equation} and thus belongs to the extended algebra ${\cal P}^{^\bullet}\!(R,F)$. The formula for the $\star$-product for the matrix $\pi(M)$ is clearly induced from that for $M^{\overline{-1}}$ (see (\ref{Minv*N})) and the permutation rules for $g$ (see (\ref{g-perm})): \begin{equation} \lb{piM-star} \pi(M)\star N\, := N\star \pi(M)\, :=\, \mu \phi^{-1}( \xi(M)\cdot G^{-1} N G), \qquad \forall\; N\in {\cal P}(R,F). \end{equation} Complementing the algebra ${\cal P}(R,F)$ with the $\star$-multiples of $\pi(M)$ $$ \pi(M)^{\overline{n}}\; :=\; \underbrace{\pi(M)\star \dots \star \pi(M) }_{n\ \text{times}} $$ one obtains an intermediate extension ${\cal P}^\circ(R,F)\supset {\cal P} (R,F)$, ${\cal P}^\circ (R,F)\subset {\cal P}^\bullet (R,F)$. It is this algebra that the matrix $A^{(-1,k+1)}$ belongs to. Now we are ready to write down the identity (\ref{ahah}) in terms of $\star$-powers of the matrices $M$ and $\pi(M)$. \begin{prop} \lb{preCH} Let ${\cal M}(R,F)$ be the $Sp(2k)$-type quantum matrix algebra.
The matrix $M$ of generators of this algebra and its image $\pi(M)$ under the map (\ref{pi}) satisfy the following $k$-th order matrix polynomial identity \begin{equation} \lb{preCHSp} \sum_{i=0}^{k} (-q)^i M^{\overline{k-i}} \epsilon_i\, + \, q^{2k}\sum_{i=0}^{k-1} (-q)^{-i} \pi(M)^{\overline{k-i}} \epsilon_i\,=\, 0\, . \end{equation} Here the coefficients $\epsilon_i$, $i=1, \dots , k,$ are given by eq. (\ref{CHSp-2}). \end{prop} \subsection{Simple examples and classical limit.} \lb{sec4.3} In this section we present the Cayley-Hamilton and `pre-Cayley-Hamilton' identities (\ref{CHSp-1}) and (\ref{preCHSp}) for the standard RTT- and RE-algebras corresponding to the $Sp(2k)$-type R-matrix (\ref{R-Sp}) in the cases $k=1,2$. \medskip The {\em standard $Sp(2)$-type RTT-algebra} is the quantum matrix algebra ${\cal M}(R^{\mbox{\tiny (st)}},P)$, where the R-matrix $R^{\mbox{\tiny (st)}}$ (\ref{R-Sp}) and the permutation $P$ act on the tensor square of the 2-dimensional vector space. We use the symbol $T$ for the $2\times 2$ matrix of generators of this algebra. The permutation relations for its components $T^i_j$, $i,j\in\{1,2\}$, are identical to the permutation relations of the standard $GL_{q^2}(2)$-type RTT-algebra: \begin{equation} \lb{RTT-Sp2} q^2 T^{i}_{2} T^{i}_{1}\, =\, T^{i}_{1} T^{i}_{2}, \quad q^2 T^{2}_{i} T^{1}_{i}\, =\, T^{1}_{i} T^{2}_{i},\quad [T^{2}_{1}, T^{1}_{2}] \, =\, 0,\quad [T^{2}_{2}, T^{1}_{1}] \, =\, (q^{-2}-q^2) T^{1}_{2} T^{2}_{1}. \end{equation} The R-matrix image $\rho_{{R^{\mbox{\tiny(st)}}}}(a^{(2)})$ of the second order antisymmetrizer vanishes in this particular case and, therefore, there are no additional $g$-covariance conditions.
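The degeneration of the $k=1$ case is easy to confirm numerically (our own check; the $4\times 4$ matrix below is written out from (\ref{R-Sp}) in the basis $e_1\otimes e_1$, $e_1\otimes e_2$, $e_2\otimes e_1$, $e_2\otimes e_2$): the minimal polynomial of $R^{\mbox{\tiny (st)}}$ is quadratic, with eigenvalues $q$ and $\mu=-q^{-3}$ and no $-q^{-1}$ eigenvalue, in accordance with Remark \ref{remark3.12.1}.

```python
import numpy as np

q = 1.3
# (R-Sp) for k = 1, written out entrywise
R = np.array([
    [q,       0.0,   0.0, 0.0],
    [0.0, q - q**-3, 1/q, 0.0],
    [0.0,     1/q,   0.0, 0.0],
    [0.0,     0.0,   0.0,   q],
])
I = np.eye(4)
mu = -q**-3
# quadratic minimal polynomial: no Hecke factor (R + I/q) is needed
assert np.allclose((R - q*I) @ (R - mu*I), 0*I)
# and -1/q is indeed absent from the spectrum
assert not np.any(np.isclose(np.linalg.eigvals(R), -1/q))
```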
The two generators of the characteristic subalgebra $g$ and $a_1$ read \begin{eqnarray} \nonumber g& =& {q^{-6}\over q^2+q^{-2}} \left( q^{-2}\, T^1_1 T^2_2 +q^{2}\, T^2_2 T^1_1 - T^1_2 T^2_1 -T^2_1 T^1_2\right) \\ \lb{g-RTT-Sp2} &=& q^{-6}\left( T^1_1 T^2_2 -q^2\, T^1_2 T^2_1\right),\\[2pt] \nonumber a_1& =& {\rm Tr}_{_{\! R}} T \, =\, q^{-5}\, T^1_1 + q^{-1}\, T^2_2, \end{eqnarray} where the second simplified expression for $g$ is obtained with the help of the permutation relations (\ref{RTT-Sp2}). The 2-contraction $g$ is central in this case (the matrix $G$ for the R-matrix pair $\{R^{\mbox{\tiny (st)}},P\}$ equals the unity), while the element $a_1$ is not. \vskip .1cm To write down the characteristic identities for this algebra we need explicit expressions for the maps $\phi$ (\ref{phi}) and $\pi$ (\ref{pi}) \begin{equation} \lb{phi-pi-Sp2} \phi(T)= \left(\!\! \begin{array}{cc} \scriptstyle q^{-4}\, T^1_1+{ (1-q^{-4})}\,T^2_2 & \scriptstyle q^{-6}\, T^1_2\\[2pt] \scriptstyle q^{-2}\, T^2_1 & \scriptstyle T^2_2 \end{array}\!\! \right)\! , \quad \pi(T)= \left(\!\! \begin{array}{cc} \scriptstyle (q^{-6}-q^{-2})\, T^1_1+q^{-2}\,T^2_2 & \scriptstyle -q^{-2}\, T^1_2\\[2pt] \scriptstyle -q^{-2}\, T^2_1 & \scriptstyle q^{-6}\,T^1_1 \end{array}\!\! \right)\! . \end{equation} The Cayley-Hamilton identity (\ref{CHSp-1}) and its parent identity (\ref{preCHSp}), respectively, read \begin{eqnarray} \lb{CH-RTT-Sp2} T^{\overline{2}}\, -\, q\, T\, a_1 \, +\, q^2\, I\, g\,=\, 0, \\[2pt] \lb{preCH-Sp2} T\, -\, q\, I\, a_1 \, +\, q^2\, \pi(T)\, =\, 0, \end{eqnarray} where $T^{\overline 2}= T\cdot \phi(T)$. \vskip .1cm We note that the identity (\ref{CH-RTT-Sp2}) coincides with the Cayley-Hamilton identity for the standard $GL_{q^2}(2)$-type RTT-algebra (see \cite{EOW,IOP,IOP2}), where the 2-contraction $g$ plays the role of the quantum determinant of the matrix $T$.
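The identity (\ref{CH-RTT-Sp2}) can also be tested symbolically (our own check): writing $a=T^1_1$, $b=T^1_2$, $c=T^2_1$, $d=T^2_2$, the sympy sketch below expands the entries of $T\cdot\phi(T)-q\,T\,a_1+q^2\,g\,I$ and normal-orders every monomial by the relations (\ref{RTT-Sp2}); each entry then vanishes identically:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
p = q**2
a, b, c, d = sp.symbols('a b c d', commutative=False)  # T^1_1, T^1_2, T^2_1, T^2_2

# normal-ordering rules equivalent to (RTT-Sp2)
rules = {(b, a): a*b/p, (c, a): a*c/p, (c, b): b*c,
         (d, b): b*d/p, (d, c): c*d/p,
         (d, a): a*d - (p - 1/p)*b*c}

def normal_order(expr):
    """Reorder the (at most quadratic) monomials of expr into normal form."""
    out = sp.S(0)
    for t in sp.Add.make_args(sp.expand(expr)):
        cpart, nc = t.args_cnc()
        coeff = sp.Mul(*cpart)
        if len(nc) == 2 and (nc[0], nc[1]) in rules:
            out += coeff * rules[(nc[0], nc[1])]
        else:
            out += coeff * sp.Mul(*nc)
    return sp.expand(out)

T    = sp.Matrix([[a, b], [c, d]])
phiT = sp.Matrix([[a/q**4 + (1 - 1/q**4)*d, b/q**6],
                  [c/q**2,                  d]])       # phi(T), eq. (phi-pi-Sp2)
a1 = a/q**5 + d/q                                      # a_1 = Tr_R T
g  = (a*d - q**2*b*c)/q**6                             # eq. (g-RTT-Sp2)

lhs = T*phiT - q*sp.Matrix(2, 2, lambda i, j: T[i, j]*a1) \
      + q**2*sp.Matrix(2, 2, lambda i, j: g if i == j else 0)
assert all(sp.simplify(normal_order(x)) == 0 for x in lhs)
```

Only quadratic monomials occur, so one rewriting pass per term suffices; the rule for $da$ is the only one producing an extra $bc$ term.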
In this particular case the Cayley-Hamilton identity (\ref{CH-RTT-Sp2}) encodes half of the permutation relations (\ref{RTT-Sp2}); in general, a half-quantum matrix of $GL$ type satisfies the Cayley-Hamilton identity \cite{CFR,IO}. \vskip .1cm Another specific feature of the $Sp(2)$ case is that the `parent' Cayley-Hamilton identity (\ref{preCH-Sp2}), being linear in the generators, is satisfied without any reference to the quadratic permutation relations. \medskip {\em The standard $Sp(2)$-type Reflection Equation (RE) algebra} is the quantum matrix algebra ${\cal M}(R^{\mbox{\tiny (st)}},R^{\mbox{\tiny (st)}})$, where the R-matrix $R^{\mbox{\tiny (st)}}$ (\ref{R-Sp}) acts on the tensor square of the 2-dimensional vector space. We use the symbol $L$ for the $2\times 2$ matrix of generators of this algebra. The permutation relations for its components $L^i_j$, $i,j\in\{1,2\}$, are identical to the permutation relations for the standard $GL_{q^2}(2)$-type RE-algebra: \begin{eqnarray} \nonumber & L^{i}_{j} L^{1}_{1}\, =\, q^{4(j-i)}\,L^{1}_{1} L^{i}_{j}, \qquad [ L^{2}_{2}, L^{1}_{2}]\, =\, (1-q^{-4})\,L^1_1 L^1_2,&\\ \lb{REA-Sp2} & [ L^{2}_{2}, L^{2}_{1}]\, =\, -q^{-4}(1-q^{-4})\,L^1_1 L^2_1, & \\[2pt] \nonumber & [L^{2}_{1}, L^{1}_{2}] \, =\, (1-q^{-4})\, L^1_1(L^{1}_{1} - L^{2}_{2}). & \end{eqnarray} The two generators of the characteristic subalgebra $g$ and $a_1$ are \begin{eqnarray} \nonumber g& =& {q^{-4}\over q^2+q^{-2}} \left( L^1_1 L^2_2 + L^2_2 L^1_1 -(1-q^{-4})\,(L^1_1)^2 - L^1_2 L^2_1 -q^4\, L^2_1 L^1_2\right) \\ \lb{g-REA-Sp2} &=& q^{-2}\left( L^1_1 L^2_2 -(1-q^{-4})\, (L^1_1)^2-L^1_2 L^2_1\right),\\[2pt] \nonumber a_1& =& {\rm Tr}_{_{\! R}} L \, =\, q^{-5}\, L^1_1 + q^{-1}\, L^2_2, \end{eqnarray} where the second expression for $g$ is obtained with the use of the permutation relations (\ref{REA-Sp2}).
As for any RE-algebra, the generators $g$ and $a_1$ are central. \vskip .1cm Another distinguishing property of the RE-algebras --- the map $\phi$ being the identity map --- makes their characteristic identity (\ref{CH-RTT-Sp2}) particularly simple and similar to the classical case. In our situation it reads \begin{equation} \lb{CH-REA-Sp2} L^2\, -\, q\, L\, a_1 \, +\, q^2\, I\, g\,=\, 0, \end{equation} where $L^2$ means the usual matrix square of $L$ and the coefficients $g$ and $a_1$ are given by (\ref{g-REA-Sp2}). Again, this matrix equality encodes half of the permutation relations (\ref{REA-Sp2}). \vskip .1cm As stated in Lemma \ref{lem4.6}, the map $\pi$ depends only on the first R-matrix of the compatible pair $\{R,F\}$. Hence, for the RTT- and RE-algebra generating matrices $T$ and $L$ the map $\pi$ is literally the same (see (\ref{phi-pi-Sp2})), and the parent Cayley-Hamilton identities for the $Sp(2)$-type RTT- and RE-algebras coincide (see (\ref{preCH-Sp2})). \medskip Next, we consider a less trivial example in order to demonstrate the results of this section in a greater generality. It is the {\em standard $Sp(4)$-type RTT-algebra} --- the quantum matrix algebra ${\cal M}(R^{\mbox{\tiny (st)}},P)$, where the R-matrix $R^{\mbox{\tiny (st)}}$ (\ref{R-Sp}) and the permutation $P$ now act on the tensor square of the 4-dimensional vector space. We keep the notation $M$ for the $4\times 4$ matrix of generators of this algebra. Quadratic relations in this algebra consist of 120 permutation relations for 16 matrix components, and of 10 additional conditions. The latter, together with the expression for the 2-contraction $g$, can be extracted from the matrix equalities (\ref{tau2}), where $i=1$ and $\mu=-q^{-5}$ in our case. All the quadratic relations and the expressions for $g$ are collected in the Appendix.
There and in the formulas below it is convenient to break the $4\times 4$ matrix $M$ into four $2\times 2$ blocks $A$, $B$, $C$ and $D$: \begin{equation} \lb{M-part} M\, =\, \left(\!\! \begin{array}{cc} A & B \\ C & D \end{array}\!\! \right). \end{equation} The coefficients $\epsilon_1$ and $\epsilon_2$ of the Cayley-Hamilton identity, together with the 2-contraction $g$, generate the characteristic subalgebra. The 2-contraction $g$ is central, while $\epsilon_i$, $i=1,2$, are not. The expression for $g$ is given in the Appendix (see eq.(\ref{g-RTT-Sp4})); the formulas for $\epsilon_i$ read \begin{eqnarray} \nonumber \epsilon_1 &=& a_1\, =\, q^{-9} A^1_1+q^{-7} A^2_2 +q^{-3} D^1_1 + q^{-1} D^2_2, \\[3pt] \nonumber \epsilon_2 &=& a_2 + g \, =\, q^{-16}(A^1_1 A^2_2 - q A^1_2 A^2_1) \\ \nonumber &&\phantom{ a_2 + g \, =\,} + q^{-4} (D^1_1 D^2_2 -q D^1_2 D^2_1)+ q^{-12} (D^1_1 +q^2 D^2_2)(A^1_1+q^2 A^2_2) \\[2pt] \nonumber &&\phantom{ a_2 + g \, =\,} - q^{-12} \left(q^{-1}C^1_1 B^1_1 -(q-q^{-1})C^1_1 B^2_2+ C^1_2 B^2_1 +C^2_1 B^1_2 +q^3 C^2_2 B^2_2\right) . \end{eqnarray} To write down the characteristic identities we also need expressions for the maps $\xi^{\pm 1}$, $\phi^{\pm 1}$. They are \begin{eqnarray} \nonumber \xi(M)& =& \left(\! \begin{array}{cc} -q^{-5} \sigma_q(D) & q^{-8}\sigma_q(B) \\[2pt] q^{-2} \sigma_q(C) &-q^{-5} \sigma_q(A) \end{array}\! \right), \\[3pt] \nonumber \phi(M)& =& \left(\! \begin{array}{cc} q^{-6} \alpha^+_q(A)+(1-q^{-2})\beta_q(D) & q^{-7} \alpha^-_q(B) \\[2pt] q^{-1} \alpha^-_q(C) & \alpha^+_q(D) \end{array}\! \right), \\[3pt] \lb{xi=phi-inv-Sp4} \xi^{-1}(M)& =& \xi(M)|_{q\leftrightarrow q^{-1}}, \qquad \phi^{-1}(M)\, =\, \phi(M)|_{q\leftrightarrow q^{-1}}.
\end{eqnarray} Here $\sigma_q$, $\alpha^\pm_q$, $\beta_q$ are linear maps of the $2\times 2$ matrices \begin{eqnarray} \nonumber \sigma_q(X)= \left(\!\! \begin{array}{cc} \scriptstyle X^2_2 &\scriptstyle q^{-1} X^1_2 \\ \scriptstyle q X^2_1 &\scriptstyle X^1_1 \end{array}\!\! \right), \;\; \alpha^{\pm}_q(X)= \left(\!\! \begin{array}{cc} \scriptstyle q^{-2} X^1_1\,\pm\, (1-q^{-2}) X^2_2 &\scriptstyle q^{-3} X^1_2 \\ \scriptstyle q^{-1} X^2_1 &\scriptstyle X^2_2 \end{array}\!\! \right)\!, \end{eqnarray} \begin{equation} \beta_q(X) = {\scriptstyle (q^{-2} X^1_1+X^2_2)} I + {\scriptstyle q^{-4}}\sigma_q(X). \end{equation} The following properties of these maps make the check of the relations (\ref{xi=phi-inv-Sp4}) straightforward: \begin{eqnarray} \nonumber &&(\sigma_q)^{-1}\, =\,\sigma_{1/q},\qquad\quad (\alpha^\pm_q)^{-1}\, =\, \alpha^\pm_{1/q}, \qquad\quad \beta_q\circ \alpha^+_{1/q} \,=\, q^{-4}\, \alpha^+_q \circ \beta_{1/q}. \end{eqnarray} The composite map $\pi\, =\, -q^{-5}\,\phi^{-1}\circ \xi$ evaluated on $M$ reads explicitly \begin{equation} \nonumber \pi(M) = \left(\! \begin{array}{cc} q^{-4}( \alpha^+_{1/q}\circ \sigma_q)(D)-q^{-8}(1-q^{-2})(\beta_{1/q}\circ \sigma_q)(A) & -q^{-6}( \alpha^-_{1/q}\circ \sigma_q)(B) \\[2pt] -q^{-6} (\alpha^-_{1/q}\circ \sigma_q) (C) & q^{-10}(\alpha^+_{1/q}\circ \sigma_q)(A) \end{array}\! \right). \end{equation} Now we are ready to write down the characteristic identities (\ref{CHSp-1}) and (\ref{preCHSp}) for the case of the standard $Sp(4)$-type RTT-algebra: \begin{eqnarray} \lb{CH-RTT-Sp4} M^{\overline{4}} - q\, M^{\overline{3}}\,\epsilon_1 +q^2 M^{\overline{2}}\,\epsilon_2- q^3 M \, \epsilon_1 g +q^4 I\, g^2 & =& 0, \\[2pt] \lb{preCH-RTT-Sp4} M^{\overline{2}} - q\, M\, \epsilon_1 +q^2 I\, \epsilon_2 -q^3 \pi(M)\, \epsilon_1 +q^4 \pi^{\overline{2}}(M) & =& 0.
\end{eqnarray} For the reader's convenience we recall the formulas for the powers of quantum matrices: $$ M^{\overline{i+1} }= M\cdot \phi(M^{\overline{i}})\;\; \forall\, i\geq 1,\quad \pi^{\overline{2}}(M)=-q^{-5}\, \phi^{-1}(\xi(M)\cdot \pi(M)), $$ where in the last formula we took into account that $\mu =-q^{-5}$ and $G=I$ in our particular case. \vskip .1cm Using the definitions of the maps $\xi$, $\phi^{\pm 1}$, $\pi$ and of the elements $\epsilon_1$, $\epsilon_2$ given above, and applying the quadratic relations from the Appendix, one can check the parent characteristic identity (\ref{preCH-RTT-Sp4}) directly. The Cayley-Hamilton identity (\ref{CH-RTT-Sp4}) then follows from it by $\star$-multiplication with $M^{\overline{2}}$. \medskip Finally, we consider the classical limit of the parent Cayley-Hamilton identities. In the limit $q\rightarrow 1$ the standard $Sp(2k)$-type R-matrix (\ref{R-Sp}) becomes the usual permutation, and the quadratic relations (\ref{qma}) in the corresponding algebra ${\cal M}(P,P)$ imply the commutativity of the components of the matrix $M$. The rank~$=1$ projector $K^{\mbox{\tiny (st)}}$ (\ref{K-Sp}) decouples from the R-matrix, and the $g$-invariance conditions (\ref{tau2}) become independent of (\ref{qma}) and should be treated separately. We rewrite them in the familiar form \begin{equation} \lb{g-inv-class} M^t \,\Omega\, M\, =\, g\, \Omega \, =\, M\, \Omega\, M^t. \end{equation} Here $M^t$ is the transposed matrix and $\Omega$ is the $2k\times 2k$ matrix of the symplectic quadratic form. With our choice of the rank~$=1$ matrix $K^{\mbox{\tiny (st)}}$ it reads $$ \Omega\, =\, \left(\!\! \begin{array}{cc} 0 & w \\ -w & 0 \end{array}\!\! \right), $$ where $w$ is the $k\times k$ antidiagonal matrix: $w^i_j=\delta^i_{j'},\quad j'=k+1-j$. \vskip .1cm Notice that in the case $g\neq 0$ (more formally, if $g$ is invertible) the left and right equalities in (\ref{g-inv-class}) result in equivalent sets of conditions.
On the contrary, in the case $g=0$ these equalities are not equivalent, and only together do they give the complete set of the $g$-invariance conditions. \vskip .1cm Again, it is convenient to use the block notation (\ref{M-part}) for the matrix $M$, where now $A$, $B$, $C$ and $D$ are $k\times k$ matrices. The matrix $\pi(M)$ in this notation is $$ \pi(M)\, =\, -\,\Omega\, M^t\, \Omega\, =\, \left(\!\! \begin{array}{cc} D' & -B' \\ -C' & A' \end{array}\!\! \right), $$ where $X'= w X^t w$. This operation is a classical counterpart of the map $\sigma_q$ from our previous example. \smallskip The classical parent Cayley-Hamilton identity reads \begin{equation} \lb{preCHSp-class} \sum_{i=0}^{k} (-1)^i M^{k-i} \epsilon_i\, + \, \sum_{i=0}^{k-1} (-1)^{i} \pi(M)^{k-i} \epsilon_i\,=\, 0\, , \end{equation} where now all matrix powers are calculated according to the usual rules (the map $\phi$ in the classical limit is the identity) and the coefficients $\epsilon_i$ become the usual traces of the $i$-th wedge powers of the matrix $M$: $\epsilon_i = {\rm Tr}\, (\wedge^i M)$ (the antisymmetrizers computed with the permutation matrix $P$ automatically include contributions from the 2-contraction $g$). \smallskip Assuming the invertibility of the matrix $A$, one can solve the $g$-invariance relations explicitly: \begin{equation} \lb{g-inv-solve} M\, =\, \left(\!\! \begin{array}{cc} A & A Y \\ X A & X A Y +g A'^{-1} \end{array}\!\! \right) \, =\, \left(\!\! \begin{array}{cc} I & 0 \\ X & I \end{array}\!\! \right) \left(\!\! \begin{array}{cc} A & 0 \\ 0 & g A'^{-1} \end{array}\!\! \right) \left(\!\! \begin{array}{cc} I & Y \\ 0 & I \end{array}\!\! \right), \end{equation} where the matrices $X$, $Y$ are such that $X'=X$, $Y'=Y$. \vskip .1cm Substituting this parameterization for $M$ into the identity (\ref{preCHSp-class}), one can reduce it, at least in the cases $k=1,2$, to the Cayley-Hamilton identities for the $k\times k$ matrices $A$ and $XY$.
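The block parameterization (\ref{g-inv-solve}) and the $g$-invariance conditions (\ref{g-inv-class}) are easy to test symbolically. The following sketch is our own illustration in sympy (for $k=2$, with generic symbolic entries; the helper names are ours, not part of the original derivation); it checks that any matrix of the form (\ref{g-inv-solve}) satisfies both equalities in (\ref{g-inv-class}). The conditions $X'=X$, $Y'=Y$ enter through the equal diagonal entries of $X$ and $Y$.

```python
import sympy as sp

# classical g-invariance check for k = 2; symbols and helpers are ours
k = 2
g = sp.symbols('g')
w = sp.Matrix([[0, 1], [1, 0]])                     # k x k antidiagonal matrix
Omega = sp.Matrix.vstack(
    sp.Matrix.hstack(sp.zeros(k), w),
    sp.Matrix.hstack(-w, sp.zeros(k)))              # symplectic form

A = sp.Matrix(2, 2, list(sp.symbols('a11 a12 a21 a22')))  # generic invertible block
x1, x2, x3 = sp.symbols('x1 x2 x3')
y1, y2, y3 = sp.symbols('y1 y2 y3')
X = sp.Matrix([[x1, x2], [x3, x1]])                 # X' = w X^t w = X
Y = sp.Matrix([[y1, y2], [y3, y1]])                 # Y' = Y

Aprime_inv = w * A.T.inv() * w                      # (A')^{-1} with A' = w A^t w
M = sp.Matrix.vstack(
    sp.Matrix.hstack(A, A * Y),
    sp.Matrix.hstack(X * A, X * A * Y + g * Aprime_inv))

# both g-invariance conditions M^t Omega M = g Omega = M Omega M^t hold
assert sp.simplify(M.T * Omega * M - g * Omega) == sp.zeros(2 * k)
assert sp.simplify(M * Omega * M.T - g * Omega) == sp.zeros(2 * k)
```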
\subsection{Spectral parameterization} \medskip In this section we describe the parameterization of the coefficients of the characteristic polynomial (\ref{CHSp-1}) by means of a $\Bbb C$-algebra ${\cal E}_{2k}$ of polynomials in $2k+1$ pairwise commuting variables $\nu_i$, $~i=0,1,\dots ,2k$, satisfying the conditions \begin{equation} \lb{specSp} \nu_{k+i}\, \nu_{k+1-i}\, =\, \nu_0^2\,\quad\forall\; i=1,2,\dots ,k\, . \end{equation} We call $\nu_i$, $i=0,1,\dots ,2k$, {\em spectral variables}. These variables play the role of the eigenvalues of the symplectic type quantum matrix $M$. This parameterization was initially aimed at comparing our results with the expressions given for the power sums for the RE-algebras in \cite{Mudr} (see subsection 8.3 there). Although the derivation methods are very different, the results agree up to some obvious changes in notation. Notice that compared to \cite{Mudr} we are working in a more general setting. The generalization goes in several directions. First, we do not assume a ``standard'' Drinfel'd-Jimbo form for the R-matrices defining the algebra and, moreover, we do not use any deformation assumptions in our constructions. Next, we are working with a wider family of QM-algebras. And, finally, we are working directly in the algebra without passing to representations\footnote{ Passing to the representation level is hardly possible except in the RE-algebra case. The reason is that the characteristic subalgebra belongs to the center of the RE-algebra, which is not true for a general QM-algebra.}. \vskip .1cm We are going to factorize the polynomial in the left hand side of the equation (\ref{CHSp-1}). To this end, we realize elements of the characteristic subalgebra ${\cal C}(R,F)$ as polynomials in the spectral variables and construct a corresponding extension of the algebra ${\cal P}(R,F)$.
\begin{prop} \lb{corollary5.5} In the setting of theorem \ref{theorem5.4}, assume that the elements $a_i$, $i=1,2,\dots ,k$, are algebraically independent. Consider an algebra homomorphism of the characteristic subalgebra ${\cal C}(R,F)$ to the algebra of the spectral variables ${\cal E}_{2k}$, $ \pi_{Sp(2k)}: {\cal C}(R,F)\rightarrow {\cal E}_{2k}\ , $ defined on the generators by \begin{equation} \lb{rep-charSp} \pi_{Sp(2k)}:\;\; g\mapsto \nu_0^2\, , \quad a_i\mapsto e_i(\nu_0,-\nu_0,\nu_1, \nu_2,\dots ,\nu_{2k})\, \quad \forall\; i=1,\dots ,k\, , \end{equation} where $e_i$ are the elementary symmetric polynomials of their arguments (for the symmetric polynomials we adopt the notation of \cite{Mac}). The map $\pi_{Sp(2k)}$ naturally defines a left ${\cal C}(R,F)$--module structure on the algebra ${\cal E}_{2k}$. Consider a corresponding completion of the algebra ${\cal P}(R,F)$, $$ {\cal P}_{Sp(2k)}(R,F)\, :=\, {\cal P}(R,F)\raisebox{-4pt}{$\bigotimes\atop {\cal C}(R,F)$} {\cal E}_{2k}\, ,\vspace{-2mm} $$ where the $\star \, $-product on the completed space is given by the formula \begin{equation} \lb{P-Sp2} (N\raisebox{-4pt}{$\bigotimes\atop {\cal C}(R,F)$}\nu)\star (N'\raisebox{-4pt}{$\bigotimes\atop {\cal C}(R,F)$}\nu') := (N\star N')\raisebox{-4pt}{$\bigotimes\atop {\cal C}(R,F)$}(\nu\nu')\ \ \ \forall\; N,N'\in {\cal P}(R,F)\ \; {\mathrm{and}}\ \; \forall\;\nu, \nu' \in {\cal E}_{2k}\, . \end{equation} Then, in the completed algebra ${\cal P}_{Sp(2k)}(R,F)$, the Cayley-Hamilton identity (\ref{CHSp-1}) acquires a factorized form \begin{equation} \lb{CHSp-factor} {\prod_{i=1}^{2k}}\hspace{-9.pt} {\scriptstyle \star}\; \left( M - q\nu_i I\right)\, =\, 0\, , \end{equation} where the symbol $\displaystyle\prod\hspace{-10.4pt} {\scriptstyle \star} $ denotes the product with respect to the $\star \, $-multiplication (\ref{P-Sp2}).
\end{prop} \begin{rem} {\rm For the classical symplectic groups, the functions $a_i$, $i=1,\dots ,k,$ on the manifold $Sp(2k)$ are functionally independent. This justifies, at least perturbatively, the corresponding assumption about the independence of the elements $a_i$ in the proposition above.} \end{rem} \begin{rem} {\rm For a general quantum matrix algebra ${\cal M}(R,F)$ the characteristic subalgebra does not belong to its center. So, there is no general rule to define an extension of the algebra ${\cal M}(R,F)$ by the spectral variables $\{\nu_i\}$. Nevertheless, the commutative algebra ${\cal P}_{Sp(2k)}(R,F)$ admits a central extension by the spectral variables. Therefore we formulate the factorized Cayley-Hamilton identity for this extension. \vskip .1cm However, for the reflection equation algebra ${\cal M}(R,R)$ the characteristic subalgebra lies in the center, the $\star$-product coincides with the usual matrix product, and therefore one can assume that eq.(\ref{CHSp-factor}) is satisfied in the central extension of ${\cal M}(R,R)$ by the spectral variables $\{\nu_i\}$.} \end{rem} \noindent {\bf Proof.}~ Using the equalities $$ e_i(\nu_0,-\nu_0,\nu_1, \nu_2,\dots ,\nu_{2k})\, =\, e_i(\nu_1, \nu_2,\dots ,\nu_{2k}) - \nu_0^2\, e_{i-2}(\nu_1, \nu_2,\dots ,\nu_{2k})\ \ \ \forall\ \ i\geq 0 $$ \vspace{-3mm} and $$ e_{k+i}(\nu_1,\nu_2,\dots ,\nu_{2k})\, =\,\nu_0^{2i}\, e_{k-i}(\nu_1,\nu_2,\dots , \nu_{2k})\ \ \ \forall\, i=1,\dots ,k, $$ valid if $\{\nu_i\}$ verify eqs.(\ref{specSp}), it is straightforward to check that the map $\pi_{Sp(2k)}$ sends the coefficients (\ref{CHSp-2}) of the Cayley-Hamilton identity to the elementary symmetric functions in the spectral variables: $\ \epsilon_i\ $ $\mapsto\ $ $e_i(\nu_1, \nu_2,\dots ,\nu_{2k})\ $ $\quad\forall\ i=1,\dots ,2k\, .$ \hfill$\blacksquare$ \medskip In \cite{OP} we have derived the quantum analogs of the Newton and Wronski relations among three series of elements of the characteristic subalgebra: the power
sums $p_i$, the elementary symmetric functions $a_i$ and the complete symmetric functions $s_i$. Using these relations we now obtain the parameterization of the series $p_i$ and $s_i$ in terms of the spectral variables. \smallskip \begin{prop} \lb{corollary6.4} Let ${\cal M}(R,F)$ be the $Sp(2k)$-type quantum matrix algebra. Assume that the algebra parameter $q$ fulfills the conditions $i_q\neq 0$, $i=2,\dots ,n$, for some $n$.\footnote{For $n\leq k$ these conditions enter the initial settings for the $Sp(2k)$ type quantum matrix algebras.} Then the elements $a_n$ and $s_n$ can be defined recursively by the use of the Newton relations (see \cite{OP}, theorem 5.2) \begin{eqnarray} \lb{Newton-a} \sum_{i=0}^{n-1} (-q)^i a_i\, p_{n-i} &=& (-1)^{n-1} n_q\, a_n \, +\, (-1)^n \sum_{i=1}^{\lfloor {n/2}\rfloor}\Bigl( \mu q^{n-2i} -q^{1-n+2i}\Bigr)\, a_{n-2i}\, g^i, \\[2pt] \lb{Newton-s} \sum_{i=0}^{n-1} q^{-i} s_i\, p_{n-i} &=& n_q\, s_n \, +\,\sum_{i=1}^{\lfloor {n/2}\rfloor}\Bigl( \mu q^{2i-n} + q^{n-2i-1}\Bigr)\, s_{n-2i}\, g^i. \end{eqnarray} In this situation the elements $s_n$ and $p_n$ have the following images under the homomorphism $\pi_{Sp(2k)}$ (\ref{rep-charSp}): \begin{eqnarray} \lb{para-s-Sp} \pi_{Sp(2k)}:&& s_n \mapsto h_n(\nu_1,\nu_2,\dots ,\nu_{2k}) ,\qquad p_n \mapsto q^{n-1}\sum_{i=1}^{2k} d_i \nu_i^n\, , \end{eqnarray} where $h_n$ denotes the complete symmetric polynomial in its arguments and \begin{eqnarray} \lb{para-p-Sp} d_i\, := {\nu_i - q^{-4}\nu_{2k+1-i}\over \nu_i -\nu_{2k+1-i}} \prod _{j=1\atop j\neq i,\, 2k +1-i}^{2k} {\nu_i - q^{-2}\nu_j\over \nu_i - \nu_j}\, . \end{eqnarray} The power sums contain the rational functions $d_i$ in the spectral variables and are themselves rational functions in $\{ \nu_i\}$. 
However, as follows from the Newton recursion (\ref{Newton-a}), the power sums simplify, modulo the relations (\ref{specSp}), to polynomials in the spectral variables.\end{prop} \noindent {\bf Proof.}~ For the proof, we use the following auxiliary statement: \begin{lem}\lb{lemma6.5} In the assumptions of proposition \ref{corollary6.4}, consider the iterations \begin{eqnarray} \lb{mod-s} s'_0=s_0\, , && s'_1=s_1\, , \quad\;\; s'_i = s_i + s'_{i-2}\, g\, ; \\[2pt] \lb{mod-p2} p'_0=(1-\mu^2 q^{2})/(q-q^{-1})\, , && p'_1=p_1\, , \quad\;\; p'_i = p_i + (q^{-2}p'_{i-2}-p_{i-2})\, g\;\;\; \forall i\geq 2. \end{eqnarray} The modified sequences $\{s'_i\}_{i=0}^n$, $\{p'_i\}_{i=0}^n$ satisfy the following versions of the Newton and Wronski relations \begin{eqnarray} \lb{mod-N} \qquad \sum_{i=0}^{n-1} q^{-i} s_i p'_{n-i}& =& n_q s_n\quad \forall\; n\geq 1\, ; \\ \lb{mod-W} \sum_{i=0}^{n} (-1)^i a_i s'_{n-i}& =&\delta_{n,0}\, \quad\; \forall\; n\geq 0\, .\end{eqnarray}\end{lem} \noindent {\bf Proof.}~ For $n<2$, the equalities (\ref{mod-N})--(\ref{mod-W}) are clearly satisfied. For $n\geq 2$, one can check them inductively, applying the iterative formulas (\ref{mod-s}), (\ref{mod-p2}). \hfill$\blacksquare$ \medskip We now notice that the images of the elements $a_i$, $i=1,\dots ,n$, are given by the elementary symmetric functions (see eq.(\ref{rep-charSp})). Hence, by the Wronski relations (\ref{mod-W}), the images of the modified elements $s'_i$, $i=1,\dots ,n$, are the complete symmetric functions in the same arguments. Using then eq.(\ref{mod-s}) and taking into account the relation $h_n(\nu_0,\nu_1,\dots)= \sum_{i=0}^n \nu_0^i\, h_{n-i}(\nu_1,\dots)$, it is easy to check the formulas for the images of the elements $s_n$, which are given in eq.(\ref{para-s-Sp}).
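The symmetric-function identities used in this proof and in the proof of proposition \ref{corollary5.5} can be checked mechanically. The sketch below is our own sympy illustration (for $k=2$; the helper functions are ours): it verifies the expansion of $e_i$ over the extra pair $\pm\nu_0$, the relation $e_{k+i}=\nu_0^{2i}e_{k-i}$ modulo (\ref{specSp}), the classical Wronski relation $\sum_i (-1)^i e_i h_{n-i}=\delta_{n,0}$, and the expansion of $h_n$ over an extra variable.

```python
from itertools import combinations, combinations_with_replacement
import sympy as sp

def e(i, vals):
    # elementary symmetric polynomial e_i(vals); e_i = 0 for i < 0
    if i < 0:
        return sp.Integer(0)
    if i == 0:
        return sp.Integer(1)
    return sp.Add(*[sp.Mul(*c) for c in combinations(vals, i)])

def h(n, vals):
    # complete symmetric polynomial h_n(vals)
    if n == 0:
        return sp.Integer(1)
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(vals, n)])

k = 2
nu0 = sp.symbols('nu0')
nus = list(sp.symbols('nu1:5'))   # nu1, nu2, nu3, nu4

# e_i(nu0, -nu0, nu1, ..., nu_{2k}) = e_i(nu) - nu0^2 e_{i-2}(nu)
for i in range(2 * k + 1):
    assert sp.expand(e(i, [nu0, -nu0] + nus)
                     - e(i, nus) + nu0**2 * e(i - 2, nus)) == 0

# e_{k+i} = nu0^{2i} e_{k-i} modulo (specSp): nu3*nu2 = nu0^2, nu4*nu1 = nu0^2
subs = {nus[2]: nu0**2 / nus[1], nus[3]: nu0**2 / nus[0]}
for i in (1, 2):
    diff = e(k + i, nus) - nu0**(2 * i) * e(k - i, nus)
    assert sp.simplify(diff.subs(subs)) == 0

# classical Wronski relation: sum_i (-1)^i e_i h_{n-i} = delta_{n,0}
for n in range(5):
    total = sp.Add(*[(-1)**i * e(i, nus) * h(n - i, nus) for i in range(n + 1)])
    assert sp.expand(total) == (1 if n == 0 else 0)

# h_n(nu0, nu1, ...) = sum_i nu0^i h_{n-i}(nu1, ...)
for n in range(4):
    rhs = sp.Add(*[nu0**i * h(n - i, nus) for i in range(n + 1)])
    assert sp.expand(h(n, [nu0] + nus) - rhs) == 0
```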
\smallskip To check the formulas for the power sums, we use the following statement, which was proved in \cite{GS}: if the elements $s_i$ for $i=0,1,\dots ,n\geq 1$ are realized as the complete symmetric polynomials $h_i$ in some set of variables $\{\nu_i\}_{i=1}^{p}$, then the elements $p'_n$, defined by eqs.(\ref{mod-N}), have the following expressions in terms of the variables $\nu_i$ \begin{equation} \lb{formula-GS} p'_n = q^{n-1} \sum_{i=1}^p \widehat d_i \nu_i^n , \qquad \mbox{where}\quad \widehat d_i := \prod_{j=1\atop j\neq i}^{p}{\nu_i -q^{-2} \nu_j\over \nu_i - \nu_j}. \end{equation} The proof of (\ref{para-s-Sp}), (\ref{para-p-Sp}) for the power sums $p_n$ proceeds as follows. \smallskip Assuming that the relation (\ref{formula-GS}) stays valid for $p'_0$ (note that $p'_0$ is not fixed by the recursion (\ref{mod-N})) and making the Ansatz (\ref{para-s-Sp}) for the power sums $p_i$, $i=0,1,\dots ,n$, we make use of the recursion (\ref{mod-p2}). Upon substitution, we find that the relations (\ref{mod-p2}) hold provided that \begin{equation}\lb{d-hat-d}d_i \, =\, {\nu_i^2 - q^{-4}\nu_0^2\over\nu_i^2 -q^{-2}\nu_0^2}\,\,\widehat d_i\, .\end{equation} Taking into account the relations (\ref{specSp}) for the spectral variables $\nu_i\in {\cal E}_{2k}$, we observe that the conditions (\ref{d-hat-d}) dictate the choice (\ref{para-p-Sp}) for $d_i$. \smallskip It remains to verify the initial settings for the recursion (\ref{mod-p2}). They are: \begin{eqnarray} \lb{init-1} p'_0& =& q^{-1} \displaystyle \sum_{i=1}^{2k} \widehat d_i\, =\, {1-\mu^2 q^2\over q-q^{-1}}|_{\mu=-q^{-1-2k}}\, =\, q^{-2k} (2k)_q\, , \\[2pt] \lb{init-3} p_1 &=&p'_1\quad \Leftrightarrow\quad \displaystyle \sum_{i=1}^{2k}\nu_i (d_i - \widehat d_i)\, =\, 0, \end{eqnarray} as well as the expression (\ref{P-i}) for $p_0$: \begin{eqnarray} \lb{init-2} p_0 & =& q^{-1} \displaystyle \sum_{i=1}^{2k} d_i\, =\, {\rm Tr}_R I|_{\mu=-q^{-1-2k}}\, =\, q^{-1-2k}\bigl( (2k+1)_q -1\bigr).
\end{eqnarray} \smallskip To verify them, we use the expansions of the following rational functions $$ w_1(z) := \prod_{i=1}^{2k} {z-q^{-2} \nu_i\over z-\nu_i}\, , \qquad w_2(z) := {\nu_0^2 w_1(z)\over z^2 - q^{-2} \nu_0^2}\, , \qquad w_3(z) := z w_2(z) $$ into partial fractions. Expanding $w_1(z)$ and evaluating the result at $z=0$, we immediately prove the condition (\ref{init-1}). The less trivial check of the condition (\ref{init-2}) we comment on in more detail. Expanding $w_2(z)$, we obtain $$ w_2(z)\, =\, \sum_{i=1}^{2k} q^2 (d_i-\widehat d_i) {\nu_i\over z-\nu_i}\, +\, {q\nu_0\over 2}\Bigl( {w_1(q^{-1}\nu_0)\over z-q^{-1}\nu_0} -{w_1(-q^{-1}\nu_0)\over z+q^{-1}\nu_0}\Bigr)\, . $$ Here, for the transformation of the first term in the right hand side, we used the formulas (\ref{formula-GS}) and (\ref{d-hat-d}) and applied the relations (\ref{specSp}), which confine the variables $\nu_i\in {\cal E}_{2k}$. The relations (\ref{specSp}) also allow us to calculate $w_1(\pm q^{-1}\nu_0)=q^{-2k}$. Thus, evaluating $w_2(z)$ at $z=0$, we obtain $$ w_2(0)\, =\, -q^{2-4k}\, =\, -q^3 (p_0-p'_0)\, -\, q^{2-2k}\, , $$ from which the condition (\ref{init-2}) follows. \vskip .1cm The check of the condition (\ref{init-3}), by the expansion and evaluation of $w_3(z)$ at $z=0$, is a similar calculation. \hfill$\blacksquare$
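For small rank the initial conditions can also be confirmed symbolically. Below is a minimal sympy sketch of ours for $k=1$, where the product over $j\neq i,\,2k+1-i$ in (\ref{para-p-Sp}) is empty; it checks (\ref{init-1}) and (\ref{init-2}) directly (all helper names are ours).

```python
from functools import reduce
import sympy as sp

q = sp.symbols('q', positive=True)
k = 1
nus = list(sp.symbols('nu1:3'))       # nu_1, nu_2

def prod(factors):
    # product of an iterable of sympy expressions (empty product = 1)
    return reduce(lambda a, b: a * b, factors, sp.Integer(1))

def d_hat(i):                         # \widehat d_i from (formula-GS)
    return prod((nus[i] - q**-2 * nus[j]) / (nus[i] - nus[j])
                for j in range(2 * k) if j != i)

def d(i):                             # d_i from (para-p-Sp)
    ib = 2 * k - 1 - i                # 0-based index of nu_{2k+1-i}
    pref = (nus[i] - q**-4 * nus[ib]) / (nus[i] - nus[ib])
    rest = prod((nus[i] - q**-2 * nus[j]) / (nus[i] - nus[j])
                for j in range(2 * k) if j not in (i, ib))
    return pref * rest

def nq(n):                            # the q-number (n)_q
    return (q**n - q**-n) / (q - q**-1)

p0_prime = q**-1 * sum(d_hat(i) for i in range(2 * k))
p0 = q**-1 * sum(d(i) for i in range(2 * k))

assert sp.simplify(p0_prime - q**(-2 * k) * nq(2 * k)) == 0          # (init-1)
assert sp.simplify(p0 - q**(-1 - 2 * k) * (nq(2 * k + 1) - 1)) == 0  # (init-2)
```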
\section{Introduction} \input{secs/introduction} \iftr \section{Platform Model} \label{sec:platform_model} \input{secs/faas} \section{Benchmark Specification} \label{sec:bench_spec} \input{secs/benchmark-specification} \section{Benchmark Implementation} \label{sec:bench_impl} \input{secs/benchmark-implementation} \section{Evaluation} \input{secs/evaluation} \section{Related Work} \input{secs/related-work} \section{Conclusions} \input{secs/conclusion} \else \vspace{-0.5em} \section{Platform Model} \vspace{-0.25em} \label{sec:platform_model} \input{secs/faas} \vspace{-0.5em} \section{Benchmark Specification} \vspace{-0.25em} \label{sec:bench_spec} \input{secs/benchmark-specification} \vspace{-0.5em} \section{Benchmark Implementation} \vspace{-0.25em} \label{sec:bench_impl} \input{secs/benchmark-implementation} \vspace{-0.5em} \section{Evaluation} \vspace{-0.25em} \input{secs/evaluation} \vspace{-0.5em} \section{Related Work} \vspace{-0.25em} \input{secs/related-work} \section{Conclusions} \input{secs/conclusion} \fi \begin{acks} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant agreement DAPP, No. 678880), and from the German Research Foundation (DFG) and the Swiss National Science Foundation (SNF) through the DFG Priority Programme 1648 Software for Exascale Computing (SPPEXA) and the ExtraPeak project (Grant Nr. WO1589/8-1). \end{acks} \bibliographystyle{ACM-Reference-Format} \subsection{Benchmark Characteristics} \label{sec:benchmarkcharacteristics} \begin{table}\centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lllllll@{}}\toprule Name & Lang. 
& Cold Time [ms] & Warm Time [ms] & Instructions & CPU\% \\ \cline{1-6} \multirow{2}{*}{dynamic-html} & \textbf{P} & $130.4 \pm 0.7$ & $1.19 \pm 0.01$ & $7.02M \pm 287K$ & 99.4\% \\ & \textbf{N} & $84 \pm 2.8$ & $0.28 \pm 0.5$ & - & 97.4\% \\ \cline{1-2} \multirow{2}{*}{uploader} & \textbf{P} & $236.9 \pm 12.7$ & $126.6 \pm 8.9$ & $94.7M \pm 4.45M$ & 34\% \\ & \textbf{N} & $382.8 \pm 8.9$ & $135.3 \pm 9.6$ & - & 41.7\%\\ \cline{1-2} \multirow{2}{*}{thumbnailer} & \textbf{P} & $205 \pm 1.4$ & $65 \pm 0.8$ & $404M \pm 293K$ & 97\% \\ & \textbf{N} & $313 \pm 4$ & $124.5 \pm 4.4$ & - & 98.5\% \\ \cline{1-2} video-processing & \textbf{P} & $1596 \pm 4.6$ & $1484 \pm 5.2$ & - & - \\ compression & \textbf{P} & $607 \pm 5.3$ & $470.5 \pm 2.8$ & $1735M \pm 386K$ & 88.4\% \\ image-recognition & \textbf{P} & $1268 \pm 74$ & $124.8 \pm 2.7$ & $621M \pm 278K$ & 98.7\% \\ \multirow{1}{*}{graph-pagerank} & \multirow{3}{*}{\textbf{P}} & $194 \pm 0.8$ & $106 \pm 0.3$ & $794M \pm 293K$ & 99\% \\ \multirow{1}{*}{graph-mst} && $125 \pm 0.8$ & $38 \pm 0.4$ & $234M \pm 289K$ & 99\% \\ \multirow{1}{*}{graph-bfs} && $123 \pm 1.1$ & $36.5 \pm 0.5$ & $222M \pm 300K$ & 99\% \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Standard characterization of \textbf{P}ython and \textbf{N}ode.js benchmarks over 50 executions in a local environment on an AWS \emph{z1d.metal} machine.} \label{tab:applications_evaluation} \ifcnf \vspace{-3em} \fi \end{table} We begin with a local evaluation summarized in~\mbox{\autoref{tab:applications_evaluation}}. We selected applications representing different performance profiles, from website backends with minimal CPU overhead up to compute-intensive machine learning inference. The evaluation allows us to classify applications, verify that our benchmark set is representative, and select benchmarks for experiments according to their required resource consumption.
\begin{figure*} \includegraphics[width=\textwidth]{figures/cold_startup.pdf} \ifcnf \vspace{-2em} \fi \caption{\textbf{Cold startup overheads of benchmarks on AWS Lambda and Google Cloud Functions, based on \emph{cold} and \emph{warm} executions (Figure~\ref{fig:time_evaluation}).} } \label{fig:cold_startup} \ifcnf \vspace{-1em} \fi \end{figure*} \ifcnf \vspace{-0.5em} \fi \subsection{Performance analysis} \label{sec:performance} \ifcnf \vspace{-0.25em} \fi We design a benchmarking experiment \emph{Perf-Cost} to measure the cost and performance of FaaS executions. We run concurrent function invocations, sampling to obtain \emph{N} \emph{cold} invocations by enforcing container eviction between each batch of invocations. Next, we sample the function executions to obtain \emph{N} calls to a \emph{warm} container. We measure client, function, and provider time (Section~\ref{sec:metrics}). We compute non-parametric confidence intervals\mbox{~\cite{10.1145/2807591.2807644}} for client time and select the number of samples \emph{N} = 200 to ensure that intervals are within 5\% of the median for AWS while the experiment cost stays negligible. We perform 50 invocations in each batch to include invocations in different sandboxes, and use the same configuration on Azure and GCP for a fair and unbiased comparison of performance and variability.\footnote{We generate more samples due to unreliable cloud logging services. We always consider the first 200 correctly generated samples and don't skip outliers.} We benchmark network bandwidth (\emph{uploader}), storage access times and compute performance (\emph{thumbnailer}, \emph{compression}), large cold start deployment and high-memory compute (\emph{image-recognition}), significant output returned (\emph{graph-bfs}), and compare performance across languages (Python and Node.js versions of \emph{thumbnailer}).
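The order-statistics construction of a nonparametric confidence interval for the median can be sketched as follows. This is our own illustration of the standard technique, not code from the benchmark implementation, and the sample values are hypothetical; with this rank formula, \emph{N} = 200 samples at the 95\% level yield the 86th and 115th order statistics as interval endpoints.

```python
import math
from statistics import median

def median_ci(samples, z=1.96):
    """Nonparametric confidence interval for the median based on order
    statistics (binomial argument); z = 1.96 gives ~95% coverage."""
    xs = sorted(samples)
    n = len(xs)
    lo_rank = int(math.floor((n - z * math.sqrt(n)) / 2))       # 1-based rank
    hi_rank = int(math.ceil((n + z * math.sqrt(n)) / 2)) + 1    # 1-based rank
    return xs[max(lo_rank - 1, 0)], xs[min(hi_rank - 1, n - 1)]

# hypothetical client-time samples (ms): check whether N = 200 samples
# give an interval within 5% of the median
times = [100 + (i % 11) for i in range(200)]
lo, hi = median_ci(times)
med = median(times)
tight_enough = lo >= 0.95 * med and hi <= 1.05 * med
```

In the experiment design above, one would keep collecting batches of samples until `tight_enough` holds for the target platform.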
\textbf{\emph{\textbf{Q1} How do serverless applications perform on FaaS platforms?}} Figure~\mbox{\ref{fig:time_evaluation}} presents significant differences in warm invocations between providers, with AWS Lambda providing the best performance on all benchmarks. Each function's execution time decreases until it reaches a plateau associated with sufficient resources to achieve the highest observable performance. Only the benchmark \emph{graph-bfs} achieves comparable performance on the Google platform, with the largest slowdown observed on benchmarks relying on storage bandwidth (\emph{thumbnailer}, \emph{compression}). On Azure, we note a significant difference between benchmark and provider times on Python benchmarks. To double-check the correctness of our measurements and verify whether initialization overhead is the source of this discrepancy, we sequentially repeat warm invocations instead of using concurrent benchmark executions. The second batch presents more stable measurements, and we observe performance comparable to AWS on the compute benchmarks \emph{image-recognition} and \emph{graph-bfs}. Our results verify previous findings that CPU and I/O allocation increases with the memory allocation~\mbox{\cite{10.5555/3277355.3277369}}. However, our I/O-bound benchmarks (\emph{uploader}, \emph{compression}) reveal that the distribution of latencies is much wider and includes many outliers, which prevents such functions from achieving consistent and predictable performance. \emph{ Conclusions: AWS functions consistently achieve the highest performance. Serverless benefits from larger resource allocation, but selecting the right configuration requires an accurate methodology for measuring short-running functions. I/O-bound workloads are not a great fit for serverless. } \emph{\textbf{Q2 How do cold starts affect the performance?} } We estimate cold startup overheads by considering all $N^2$ combinations of $N$ cold and $N$ warm measurements.
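The all-combinations estimate can be sketched in a few lines of Python; the measurement values below are hypothetical and serve only to illustrate the computation.

```python
from itertools import product
from statistics import median

def cold_startup_ratios(cold_times, warm_times):
    # all N*N cold/warm ratios from N cold and N warm client-time samples
    return [c / w for c, w in product(cold_times, warm_times)]

# hypothetical client times (ms)
cold = [1300.0, 1450.0, 1250.0]
warm = [130.0, 125.0, 140.0]
ratios = cold_startup_ratios(cold, warm)
print(len(ratios), median(ratios))   # 9 ratios; median slowdown factor
```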
In Figure~\mbox{\ref{fig:cold_startup}}, we summarize the ratios of cold and warm client times of each combination. This approach doesn't provide a representative picture for Azure Functions, where a single function app instance handles multiple invocations. To estimate real-world cold startups there, instead of \emph{cold} runs, we use concurrent \emph{burst} invocations that include cold and warm executions. We notice the largest cold startup overheads on the benchmark \emph{image-recognition} (large deployment, model downloaded from storage), where a cold execution takes on average up to ten times longer than a warm invocation, which correlates with previous findings (cf.~\cite{Manner2018ColdSI}). Simultaneously, the \emph{compression} benchmark shows that cold starts can have a negligible impact for longer running functions (> 10 seconds). Azure provides lower overheads, with the highest gains on benchmarks with a large deployment package and long cold initialization, at the cost of higher variance. However, we notice an unusual and previously unreported contrast between the Amazon and Google platforms: while high memory invocations help to mitigate cold startup overheads on Lambda, providing more CPU allocation for initialization and compilation, they have an adverse effect on Google Functions, except for the benchmark \emph{image-recognition} discussed above. A possible explanation of this unexpected result might be a smaller pool of more powerful containers, leading to higher competition between invocations and longer allocation times. \emph{Conclusions: more powerful (and expensive) serverless allocations are not a generic and portable solution to cold startup overheads. Functions with expensive cold initialization benefit from function apps on Azure.} \emph{\textbf{Q3 FaaS performance: consistent and portable?} } Vendor lock-in is a major problem in serverless.
We look beyond the usual concern of provider-specific services, and examine changes in a function's performance and availability. \textbf{Performance deviations} In Figure~\ref{fig:time_evaluation}, we observe the highest variance in benchmarks relying on I/O bandwidth (\emph{uploader} and \emph{compression}). Compute-intensive applications show consistent execution times (\emph{image-recognition}) while producing a notable number of stragglers on long-running functions (\emph{compression}). The function runtime is not the primary source of variation, since we don't observe significant performance differences between Python and Node.js. Google's functions produced fewer outliers on warm invocations. In contrast, Azure's results present significant performance deviations. Provider and client time measurements were significantly higher and more variable than function time on all benchmarks except the Node.js one, implying that the Python function app generates the observed variations. The Node.js benchmark shows highly variable performance, indicating that invocations might be co-located in the same language worker. Finally, we consider the network as a source of variations. The ping latencies to virtual machines allocated in the same resource region as the benchmark functions were consistent and equal to 109, 20, and 33 ms on AWS, Azure, and GCP, respectively. Thus, the difference between client and provider times cannot be explained by network latency alone. \textbf{Consistency} On AWS, consecutive warm invocations always hit warm containers, even when the number of concurrent calls is large. On the other hand, GCP functions revealed many unexpected cold startups, even if consecutive calls never overlap. The number of active containers can increase up to 100 when processing batches of 50 requests. Possible explanations include slower resource deallocation and a delay in notifying the scheduler about free containers.
\textbf{Availability} Concurrent invocations can fail due to service unavailability, as observed occasionally on Azure and Google Cloud. On the latter, \emph{image-recognition} generated up to an 80\% error rate on 4096 MB memory when processing 50 invocations, indicating a possible problem with insufficient cloud resources to process our requests. Similarly, our experiments revealed severe performance degradation on Azure when handling concurrent invocations, as noted in Section~\ref{sec:performance}.Q1, with the long-running benchmark \emph{compression} being particularly affected. While Azure can deliver equivalent performance for sequential invocations, it bottlenecks on concurrent invocations of Python functions. \textbf{Reliability} GCP functions occasionally failed due to exceeding the memory limit, as was the case for the benchmarks \emph{image-recognition} and \emph{compression} on 512 MB and 256 MB, respectively. The memory-related failure frequencies were 4\% and 5.2\%, and warm invocations of \emph{compression} recorded 95th and 99th percentiles of memory consumption of 261 and 273 MB, respectively. We didn't observe any issues with the same benchmarks and workload on AWS, where the cloud estimated memory consumption as a maximum of 179 MB and exactly 512 MB, respectively. While the memory allocation techniques could be more lenient on AWS, the GCP function environment might not free resources efficiently. \emph{Conclusions: the performance of serverless functions is not stable, and an identical software configuration does not guarantee portability between FaaS providers. GCP users suffer from much more frequent reliability and availability issues.
} \begin{table}\centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lllllll}\toprule & \textbf{Upl} & \textbf{Th, Py} & \textbf{Th, JS} & \textbf{Comp} & \textbf{Img-Rec} & \textbf{BFS} \\ \midrule IaaS, Local [s] & 0.216 & 0.045 & 0.166 & 0.808 & 0.203 & 0.03\\ IaaS, S3 [s] & 0.316 & 0.13 & 0.191 & 2.803 & 0.235 & 0.03\\ FaaS [s] & 0.389 & 0.188 & 0.253 & 2.949 & 0.321 & 0.075 \\ Overhead & 1.79x & 4.14x & 1.43x & 3.65x & 1.58x & 2.49x\\ Overhead, S3 & 1.23x & 1.43x & 1.24x & 1.05x & 1.37x & 2.4x\\ Mem [MB] & 1024 & 1024 & 2048 & 1024 & 3008 & 1536\\ \bottomrule \end{tabular} \end{adjustbox} \caption{\textbf{Benchmarks performance on AWS Lambda and AWS EC2 t2.micro instance. Median from 200 warm executions.}} \label{tab:iaas_performance} \ifcnf \vspace{-2.5em} \fi \end{table} \begin{figure*} \centering \resizebox*{0.95\width}{0.92\totalheight}{ \subfloat[Compute cost of 1M invocations (USD).]{% \includegraphics[width=\dimexpr0.66\textwidth,height=5cm]{figures/cost_two_plots.pdf} \label{fig:cost_compute_time} } } \resizebox*{0.95\width}{0.92\totalheight}{ \subfloat[Median ratio of used and billed resources (\%). ]{% \includegraphics[width=\dimexpr0.34\textwidth,height=5cm]{figures/cost_efficiency.pdf} \label{fig:resource_usage} } } \ifcnf \vspace{-1em} \fi \caption{\textbf{ The cost analysis of performance results from Figure~\ref{fig:time_evaluation}: execution cost of 1 million requests (a) and resource usage of cold (\ding{115}) and warm (\ding{72}) executions (b). Azure data is (a) limited to a single average and (b) not available due to limitations of the Azure Monitor systems. }} \end{figure*} \emph{\textbf{Q4 FaaS vs IaaS: is serverless slower?}} The execution environment of serverless functions brings new sources of overheads~\cite{10.1145/3352460.3358296}. 
To understand their impact, we compare serverless performance with its natural alternative: virtual machines, where the durable allocation and higher price provide a more stable environment and data locality. We rent an AWS t2.micro instance with one virtual CPU and 1 GB of memory, since such an instance should have resources comparable to Lambda functions. We deploy SeBS with the local Docker-based execution environment and measure warm execution times over 200 repetitions to estimate the latency of a constantly warm service. In addition, we perform the same experiment with AWS S3 as persistent storage. This provides a more balanced comparison of performance overheads, as cloud provider storage is commonly used instead of a self-deployed storage solution, thanks to its reliability and data durability. We compare the performance against warm provider times (Section~\mbox{\ref{sec:performance}}.Q1), selecting configurations where benchmarks obtain high performance and further memory increases do not bring noticeable improvements. We present the summary in Table~\mbox{\ref{tab:iaas_performance}}. The overheads of FaaS-ifying the service range from slightly more than 50\% to a slowdown by a factor of four. Equalizing storage access latencies reduces the overheads significantly (Python benchmark \emph{thumbnailer}). \emph{Conclusions: performance overheads of FaaS executions are not uniformly distributed across application classes. The transition from a VM-based deployment to a serverless architecture will be accompanied by significant performance losses.} \ifcnf \vspace{-1em} \fi \subsection{Cost Analysis} \label{sec:cost} While raw performance may provide valuable insights, the more important question for system designers is how much such performance costs. We analyze the cost-effectiveness of the results from the \emph{Perf-Cost} experiment described above, answering four major research questions.
\emph{\textbf{Q1 How can users optimize the cost of serverless applications?} } Each provider includes two major fees in the pay-as-you-go billing model: a flat fee per 1 million executions and the cost of consumed compute time and memory, but the implementations differ. AWS charges for reserved memory and compute time rounded up to 100 milliseconds, and GCP has a similar pricing model. On the other hand, Azure allocates memory dynamically and charges for the average memory size rounded up to 128 MB. Since computing and I/O resources are correlated with the amount of requested memory, increasing the memory allocation might decrease execution time, and a more expensive memory allocation does not necessarily lead to an increase in cost. We study the price of 1 million executions for the I/O-bound \emph{uploader} and compute-bound \emph{image-recognition} benchmarks (Figure~\ref{fig:cost_compute_time}). Performance gains are significant for \emph{image-recognition}, where the cost increases negligibly, but the decreased execution time of \emph{compression} does not compensate for growing memory costs. For other benchmarks, we notice a clear cost increase with every expansion of allocated memory. The dynamic allocation on Azure Functions generates higher costs, and they cannot be optimized. \emph{Conclusions: to increase serverless price-efficiency, the user has to not only characterize the requirements of their application but also learn its exact performance boundaries: compute, memory, and I/O. Azure Functions generate higher costs because of the dynamic memory allocations. } \emph{\textbf{Q2 Is the pricing model efficient and fair?} } FaaS platforms round up execution times and memory consumption, usually to the nearest 100 milliseconds and 128 megabytes. Thus, users might be charged for unused duration and resources. With SeBS, we estimate the scale of this problem by comparing actual and billed resource usage.
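To make this comparison concrete, the following is a minimal sketch of AWS-style billing: durations rounded up to 100 ms plus a charge per GB-second of reserved memory and a flat per-request fee. The rates are the published AWS Lambda prices at the time of writing and serve purely as an illustration; the efficiency ratio mirrors the used-vs-billed comparison discussed above.

```python
import math

GB_SECOND_USD = 0.0000166667   # AWS Lambda compute price (published rate)
PER_MILLION_REQ_USD = 0.20     # flat fee per 1M invocations

def billed_cost_1m(duration_s, memory_mb):
    """Cost of 1M invocations, with duration rounded up to 100 ms."""
    billed_s = math.ceil(duration_s / 0.1) * 0.1
    gb = memory_mb / 1024
    return PER_MILLION_REQ_USD + 1_000_000 * billed_s * gb * GB_SECOND_USD

def billed_efficiency(duration_s, used_mb, memory_mb):
    """Ratio of used to billed resources, in GB-seconds."""
    billed_s = math.ceil(duration_s / 0.1) * 0.1
    return (duration_s * used_mb) / (billed_s * memory_mb)

# A 110 ms function in a 1024 MB container using 179 MB is billed for 200 ms:
cost = billed_cost_1m(0.11, 1024)          # ca. $3.53 per 1M requests
ratio = billed_efficiency(0.11, 179, 1024) # ca. 0.1: 90% of resources unused
```

The example shows why a short-running, memory-over-provisioned function can be billed for an order of magnitude more resources than it consumes.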
We use the memory consumption of each invocation and the median memory allocation across the experiment on AWS and GCP, respectively. We do not estimate efficiency on Azure because monitor logs contain incorrect information on the memory used\footnote{The issues have been reported to Azure team.}. The results in Figure~\ref{fig:resource_usage} show that the required computing power and I/O bandwidth are not always proportional to memory consumption. Changing the current system would benefit both the user and the provider, who could increase server utilization if the declared memory configuration were closer to actual allocations. Furthermore, the rounding up of execution time mostly affects short-running functions, which have gained significant traction as simple processing tools for database and messaging queue events. \emph{ Conclusions: memory usage is not necessarily correlated with the allocation of CPU and I/O resources. The current pricing model encourages over-allocation of memory, ultimately leading to underutilization of cloud resources.} \begin{table}\centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{cc|lllllll}\toprule &&& \textbf{Upl} & \textbf{Th, Py} & \textbf{Th, JS} & \textbf{Comp} & \textbf{Img-Rec} & \textbf{BFS} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{IaaS}} & Local & Request/h & 16627 & 79282 & 21697 & 4452 & 17658 & 119272 \\ & Cloud & Request/h & 11371 & 27503 & 18819 & 1284 & 15312 & 117153 \\ \hline \multicolumn{2}{c|}{\multirow{4}{*}{FaaS}} & Eco 1M [\$] & 3.54 & 2.29 & 3.75 & 32.1 & 15.8 & 2.08 \\ && Eco B-E & 3275 & 5062 & 3093 & 362 & 733 & 5568 \\ && Perf 1M [\$] & 6.67 & 3.34 & 10 & 50 & 19.58 & 2.5 \\ && Perf B-E & 1740 & 3480 & 1160 & 232 & 592 & 4640 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{\textbf{The break-even point (requests per hour) for the most cost-efficient (Eco) and best-performing (Perf) AWS Lambda configurations, compared to the IaaS deployment
(Table~\mbox{\ref{tab:iaas_performance}}). IaaS assumes 100\% utilization of the t2.micro machine costing \$0.0116 per hour.}} \label{tab:iaas_cost} \ifcnf \vspace{-2.5em} \fi \end{table} \emph{\textbf{Q3 FaaS vs IaaS: when is serverless more cost-efficient?}} The most important advantage of serverless functions is the pay-as-you-go model that enables efficient deployment of services handling infrequent workloads. The question arises immediately: how infrequent must the use of a service be to achieve a lower cost than a dedicated solution with virtual machines? The answer is not obvious, since the FaaS environment negatively affects performance (Section~\mbox{\ref{sec:performance}}.Q4). Thus, we perform a break-even analysis to determine the maximum workload a serverless function can handle in an hour without incurring charges higher than the cost of renting a machine. We summarize in Table~\mbox{\ref{tab:iaas_cost}} the results for the most cost-efficient and the highest-performing deployments of our benchmarks on AWS Lambda. While the EC2-based solution seems to be a clear cost winner for frequent invocations, its scalability is limited by the currently allocated resources. Adding more machines takes time, and multi-core machines introduce additional cost overheads due to underutilization. Serverless functions can scale rapidly and achieve much higher throughput. \emph{ Conclusions: the IaaS solution delivers better performance at a lower price, but only if high utilization is achieved. } \emph{\textbf{Q4 Does the cost differ between providers?} } Cost estimations of serverless deployments usually focus on execution time and allocated memory, where fees are quite similar (Table~\mbox{\ref{tab:aws_azure}}, Figure~\ref{fig:cost_compute_time}). There are, however, other charges associated with using serverless functions. While storage and logging systems are not strictly required, functions must use the provider's API endpoints to communicate with the outside world.
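The break-even rates in Table~\ref{tab:iaas_cost} follow from a one-line computation. As a sketch, using the \$0.0116/h t2.micro price and the \emph{uploader} Eco cost of \$3.54 per million requests from the tables above (small differences from the reported values stem from rounding of the per-million cost):

```python
def break_even_rph(vm_hourly_usd, faas_cost_per_1m_usd):
    """Requests per hour at which FaaS spending matches the VM rental."""
    per_request_usd = faas_cost_per_1m_usd / 1e6
    return vm_hourly_usd / per_request_usd

# uploader, most cost-efficient (Eco) configuration:
# $3.54 per 1M requests vs. a t2.micro at $0.0116/h
rate = break_even_rph(0.0116, 3.54)  # ca. 3.3k requests/hour
```

Above the resulting rate, renting the machine is cheaper; below it, pay-as-you-go wins.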
AWS charges a flat fee for an HTTP API but meters each invocation in 512 kB increments~\mbox{\cite{awsAPIBiling}}. GCP and Azure functions are charged \$0.12 and from \$0.04 to \$0.12 per gigabyte of data transferred out, respectively~\cite{azureNetworkPricing,gcpPricing}. Our benchmark suite includes use cases where sending results directly back to the user is the most efficient approach, such as \emph{graph-bfs} returning graph-related data (ca. 78 kB) and \emph{thumbnailer} sending back a processed image (ca. 3 kB). The additional costs for one million invocations can vary from \$1 on AWS to almost \$9 on Google Cloud and Azure\footnote{ HTTP APIs have been available for Lambda since December 2019. REST APIs have higher fees of \$3.5 for 1M requests and \$0.09 per GB of traffic. }. \emph{ Conclusions: billing models of cloud providers include additional charges, and serverless applications communicating a non-negligible amount of data are particularly affected. } \begin{figure} \includegraphics[width=\linewidth]{figures/inv_overhead.pdf} \ifcnf \vspace{-2.5em} \fi \caption{\textbf{Invocation overhead of functions with varying payload.}} \label{fig:invoc_overhead} \ifcnf \vspace{-0.5em} \fi \end{figure} \subsection{Invocation overhead analysis} \label{sec:invocation} While benchmarking FaaS by saturating bandwidth or compute power tells only a part of the story, these services are not designed with such workloads in mind. On the contrary, they are intended to handle large numbers of smaller requests, often arriving in unpredictable patterns, where the latency of starting the system may play a pivotal role. The function invocation latency depends on factors that are hidden from the user (Section~\ref{sec:platform_model}), and our performance results indicate that FaaS systems add non-trivial overheads there (Section~\ref{sec:performance}).
As there are no provider-specific APIs to query such metrics, users must estimate these overheads by comparing the client-side round-trip latency with the function execution time. However, such a comparison is meaningful only for symmetric connections. That assumption does not hold for serverless: the invocation includes the overheads of FaaS controllers, whereas returning results should depend only on network transmission. Instead, we accurately estimate the latencies of black-box invocation systems with a different approach in the experiment \textbf{Invoc-Overhead}. First, we use timestamps to measure the time that passes between the invocation and the execution start, covering all steps, including language worker process overheads. To compare timestamps, we follow an existing clock drift estimation protocol~\cite{4536494}. We measure the invocation latency in regions \emph{us-east-1}, \emph{eastus}, and \emph{us-east1} on AWS, Azure, and GCP, respectively. We analyze round-trip times and discover that they follow an asymmetric distribution, as in~\cite{4536494}. Thus, for clock synchronization, we exchange messages until seeing no lower round-trip time in $N$ consecutive iterations. We pick $N = 10$, since the relative difference between the lowest observable connection time and the minimum time after ten non-decreasing connection times is ca.~5\%. Using the benchmarking methodology outlined above, we analyze how the invocation overhead depends on the function input size for payloads of 1 kB--5.9 MB (6 MB is the limit for AWS endpoints). The results presented in~\autoref{fig:invoc_overhead} show the latency cost of cold and warm invocations. \emph{\textbf{Q1 Is the invocation overhead consistent?} } We find invocation latency to be fairly consistent and predictable for cold AWS runs and for warm startups on all platforms. At the same time, cold startups on Azure and GCP cannot be easily explained.
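The stopping rule of this synchronization step can be sketched as follows; \texttt{exchange\_message} is a placeholder (an assumption, not a SeBS API) for a single timestamped request-response exchange with the function:

```python
def synchronize(exchange_message, n=10):
    """Exchange messages until no lower round-trip time is observed in
    n consecutive iterations; return the lowest-RTT sample, which gives
    the most reliable timestamp pair for clock drift estimation."""
    best_rtt = float("inf")
    best_sample = None
    streak = 0  # consecutive iterations without improvement
    while streak < n:
        rtt, sample = exchange_message()  # (round-trip time, timestamps)
        if rtt < best_rtt:
            best_rtt, best_sample, streak = rtt, sample, 0
        else:
            streak += 1
    return best_sample
```

The rationale is that round-trip times follow an asymmetric distribution, so the minimum observed RTT best approximates the symmetric-delay case.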
Similarly to the findings in Section~\mbox{\ref{sec:performance}}, we observe cold start behavior that can be caused by unpredictable delays when scheduling functions in the cloud, or by the overheads associated with an inefficient implementation of the local servers executing the function. \emph{Conclusions: while warm latencies are consistent and predictable, cold startups add unpredictable performance deviations to serverless applications on Azure and GCP. } \emph{\textbf{Q2 Does the latency change linearly with an increase in payload size?} } With the exception of Azure's and GCP's cold starts, the latency scales linearly. For warm invocations on AWS, Azure, and GCP, and cold executions on AWS, the linear model fits the measured data almost perfectly, with adjusted $R^2$ metrics of 0.99, 0.89, 0.9, and 0.94, respectively. \emph{Conclusions: network transmission time is the only major overhead associated with using large function inputs. } \ifcnf \begin{figure*}[t] \vspace{-1.25em} \centering % \subfloat[\footnotesize\rm\textbf{Language: NodeJs, memory allocated: 128 MB, function execution time: 1s.}] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_nodejs_sleep_1_results_128_1} \label{fig:nodejs_128_1}} % \hfill % \subfloat[\footnotesize\rm\textbf{Language: Python, memory allocated: 1536 MB, function execution time: 1s.}] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_python_sleep_1_results_1536_1} \label{fig:python_1536_1}} % \hfill \subfloat[\footnotesize\rm\textbf{Language: Python, memory allocated: 1536 MB, function execution time: 10s.}] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_python_sleep_10_results_1536_10} \label{fig:python_1536_10}} % \vspace{-1em} \caption{ \vspace{-1em} \textbf{{Representative scenarios of eviction policies of FaaS containers on AWS.
} } \vspace{-0.5em} } \label{fig:performancePlotsSquare} \end{figure*} \else \begin{figure*}[t] \centering % \subfloat[Language: NodeJs, memory allocated: 128 MB, function execution time: 1s.] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_nodejs_sleep_1_results_128_1} \label{fig:nodejs_128_1}} % \hfill \subfloat[Language: Python, memory allocated: 128 MB, function execution time: 1s.] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_python_sleep_1_results_128_1} \label{fig:python_128_1}} % \hfill % \subfloat[Language: Python, memory allocated: 1536 MB, function execution time: 1s.] {\includegraphics[width=\fw \textwidth] {figures/eviction-model/aws_python_sleep_1_results_1536_1} \label{fig:python_1536_1}} \subfloat[Language: Python, memory allocated: 128 MB, function execution time: 10s.] { \includegraphics[width=\fw \textwidth]{figures/eviction-model/aws_python_sleep_10_results_128_10} \label{fig:nodejs_128_10} } % \hfill \subfloat[Language: Python, memory allocated: 1536 MB, function execution time: 10s.] { \includegraphics[width=\fw \textwidth]{figures/eviction-model/aws_python_sleep_10_results_1536_10} \label{fig:python_1536_10} } % \hfill % \subfloat[Language: Python, memory alloc.: 128 MB, function exec.~time: 1s. Code package 250 MB.] { \includegraphics[width=\fw \textwidth]{figures/eviction-model/aws_python_heavy_sleep_1_results_128_1} \label{fig:python_128_1_heavy} } \caption{ \textbf{{Representative scenarios of eviction policies of FaaS containers on AWS. 
} } } \label{fig:performancePlotsSquare} \end{figure*} \fi \begin{table}\centering \ifcnf \footnotesize \fi \begin{adjustbox}{max width=\linewidth} \begin{tabular}{llll}\toprule \textbf{Parameter} & \textbf{Range} & \textbf{Parameter} & \textbf{Range} \\ \midrule $D_{init}$ & 1-20 & $\Delta T$ & 1-1600 s \\ Memory & 128-1536 MB & Sleep time & 1-10 s\\ Code size & 8 kB, 250 MB & Language & Python, Node.js \\ \bottomrule \end{tabular} \end{adjustbox} \caption{\textbf{Container eviction experiment parameters.}} \label{tab:model_params} \ifcnf \vspace{-3.5em} \fi \end{table} \subsection{Container eviction model} \label{sec:eviction} In the previous section, we observed a significant difference in startup times depending on whether we hit a cold or a warm start. Now we analyze how we can increase the chance of hitting warm containers by adjusting the invocation policy. Yet, service providers do not publish their eviction policies. Thus, to guide users, we created the experiment \textbf{Eviction-Model} to empirically model functions' cold start characteristics. \emph{\textbf{Q1 Are cold starts deterministic, repeatable, and application agnostic?} } The container eviction policy can depend on function parameters, such as the number of previous invocations, allocated memory size, and execution time, and on global factors, such as system occupancy or the time of day. Hence, we use the following benchmarking setup: at a particular time, we submit $D_{init}$ initial invocations, wait $\Delta T$ seconds, and then check how many containers $D_{warm}$ are still active. Next, we test various combinations of $D_{init}$ and $\Delta T$ for different function properties (Table~\ref{tab:model_params}). Our results reveal that the \textbf{AWS} container eviction policy is surprisingly agnostic to many function properties: allocated memory, execution time, language, and code package size. Specifically, after every 380 seconds, half of the existing containers are evicted.
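This halving behavior can be captured in a few lines. The 380-second period and the factor-of-two decay are the empirical findings stated above; \texttt{optimal\_batch} sizes an invocation batch so that $n$ runs of a function with runtime $t$ complete within one eviction period:

```python
import math

PERIOD_S = 380  # empirically observed AWS eviction period

def warm_containers(d_init, delta_t_s):
    """Containers still warm delta_t seconds after d_init invocations:
    half of the active containers are evicted every eviction period."""
    halvings = math.floor(delta_t_s / PERIOD_S)
    return d_init * 2 ** (-halvings)

def optimal_batch(n, runtime_s):
    """Time-optimal batch size for n runs of a function with runtime t,
    chosen so the whole batch is recycled within one eviction period."""
    return n * runtime_s / PERIOD_S

# 16 warm containers, probed after two eviction periods: 4 remain
assert warm_containers(16, 800) == 4
```

The sketch predicts, for example, that a batch probed within the first period keeps all of its containers warm.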
The container lifecycles are shown in Figures~\ref{fig:nodejs_128_1}- \ifcnf \ref{fig:python_1536_10}. \else \ref{fig:python_128_1_heavy}. \fi We also attempted to execute these benchmarks on \textbf{Azure} Functions. Yet, massive concurrent invocations led to random and unpredictable failures when invoking functions. \emph{Conclusions: the eviction policy of containers is deterministic and application agnostic, and cold startup frequency can be predicted when scaling serverless applications.} \emph{\textbf{Q2 Can we provide simple analytical eviction models?} } In Equation~\ref{eq:aws_model}, we provide a simple analytical model of the number of active containers. The model fits the data (Figures~\ref{fig:nodejs_128_1}-\ref{fig:python_1536_10}) extremely well. \ifcnf \vspace{-0.4em} \fi \begin{equation} \label{eq:aws_model} D_{warm} = D_{init} \cdot 2^{-p},\quad p = \left\lfloor {\Delta T}/{380s} \right \rfloor \end{equation} \ifcnf \vspace{-0.2em} \fi We validate the model's correctness with the well-established $R^2$ goodness-of-fit metric, which exceeds 0.99. The only exceptions are the Python experiments with 10s sleep time, yet even there $R^2$ is $>$0.94. Thus, we can use the model in Equation~\ref{eq:aws_model} to find the time-optimal invocation batch size $D_{init}$, given that a user needs to run $n$ instances of a function with runtime $t$: \ifcnf \vspace{-0.4em} \fi \begin{equation} \label{eq:aws_sol} D_{init, opt} = {n\cdot t}/{P} \end{equation} \ifcnf \vspace{-0.2em} \fi \noindent where $P = 380s$ is the AWS eviction period length. \emph{Conclusions: we derive an analytical model for cold start frequency that can be incorporated into an application to warm up containers and avoid cold starts, without using provisioned concurrency solutions that have a non-serverless billing model. } \subsection{Benchmark Design Principles} \label{sec:benchmarking-cloud} Designing benchmarks is a difficult ``dark art''~\cite{10.1007/978-3-642-36727-4_12}.
For SeBS, we follow well-known guidelines~\cite{v.Kistowski:2015:BB:2668930.2688819,Binnig:2009:WTT:1594156.1594168,benchmarking}. \textbf{Relevance. } We carefully inspect serverless use cases in the literature to select representative workloads that stress different components of a FaaS platform. We focus on core FaaS components that are widely used on all platforms and are expected to stay relevant for the foreseeable future. \textbf{Usability. } Benchmarks that are easy to run benefit from a high degree of self-validation~\cite{v.Kistowski:2015:BB:2668930.2688819}. In addition to a benchmark specification, we provide \emph{a benchmarking platform} and a \emph{reference implementation} to enable automatic deployment and performance evaluation of cloud systems, minimizing the configuration and preparation effort required from the user. \textbf{Reproducibility \& Interpretability.} For reproducibility and interpretability of outcomes, we follow established guidelines for scientific benchmarking of parallel codes~\mbox{\cite{hoefler2015scientific}}. We compute 95\% and 99\% non-parametric confidence intervals~\mbox{\cite{hoefler2015scientific,10.5555/1996385}} and choose the number of samples such that the intervals are within 5\% of the median. Still, in multi-tenant systems with shared infrastructure, one cannot exactly reproduce the system state and achieve identical performance. The FaaS paradigm introduces further challenges with its lack of control over function placement. Thus, in SeBS, we also focus on understanding and minimizing the deviations of measured values. For example, we consider the geolocation of cloud resources and the time of day when running experiments. This enables us to minimize effects such as localized spikes of cloud activity.
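The sample-count criterion can be sketched with the usual normal approximation to the binomial order statistics ($z = 1.96$ for a 95\% interval, $z = 2.576$ for 99\%); the exact rank formulas used in the implementation may differ slightly:

```python
import math

def median_ci(samples, z=1.96):
    """Non-parametric CI for the median via binomial order statistics
    (normal approximation); z = 1.96 for 95%, z = 2.576 for 99%."""
    xs = sorted(samples)
    n = len(xs)
    lo_rank = math.floor((n - z * math.sqrt(n)) / 2)       # 1-indexed ranks
    hi_rank = math.ceil(1 + (n + z * math.sqrt(n)) / 2)
    return xs[max(lo_rank - 1, 0)], xs[min(hi_rank - 1, n - 1)]

def enough_samples(samples, rel_width=0.05):
    """Stop sampling once the CI width is within 5% of the median."""
    lo, hi = median_ci(samples)
    median = sorted(samples)[len(samples) // 2]
    return (hi - lo) <= rel_width * median
```

In practice, one keeps collecting measurements until `enough_samples` holds, which naturally adapts the sample count to the noise level of the platform.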
\textbf{Extensibility.} While the SeBS implementation uses existing cloud services and relies on provider-specific interfaces, the specification of SeBS depends only on the abstract FaaS model from Section~\ref{sec:platform_model}. Thus, we do not lock the benchmark into a dependency on a specific commercial system. \subsection{Applications} \label{sec:applications} \begin{table}\centering \footnotesize \setlength{\tabcolsep}{1.5pt} \begin{tabular}{@{}p{1.5cm}p{4.5cm}lll@{}}\toprule \textbf{Type} & \textbf{Name} & \textbf{Language} & \textbf{Deps} \\ \cline{1-5} \multirow{4}{*}{Webapps} & \multirow{2}{*}{dynamic-html} & Python & jinja2 \\ & & Node.js & mustache \\ \cline{3-5} & \multirow{2}{*}{uploader} & Python & - \\ & & Node.js & request \\ \cline{1-5} \multirow{3}{*}{Multimedia} & \multirow{2}{*}{thumbnailer} & Python & Pillow \\ & & Node.js & sharp \\ \cline{3-5} & video-processing & Python & \textbf{ffmpeg} \\ \cline{1-5} \multirow{2}{*}{Utilities} & compression & Python & - \\ & data-vis & Python & squiggle \\ \cline{3-5} \cline{1-5} \multirow{1}{*}{Inference} & \multirow{1}{*}{image-recognition} & Python & pytorch \\ \cline{1-5} \multirow{3}{*}{Scientific} & \multirow{1}{*}{graph-pagerank} & \multirow{3}{*}{Python} & \multirow{3}{*}{igraph} \\ & \multirow{1}{*}{graph-mst} &&& \\ & \multirow{1}{*}{graph-bfs} &&& \\ \bottomrule \end{tabular} \caption{\textbf{SeBS applications. One application - \emph{video-processing} - requires a non-pip package: ffmpeg (marked in bold).}} \label{tab:applications} \ifcnf \vspace{-3em} \fi \end{table} Our collection of serverless applications is shown in~\autoref{tab:applications}. They represent different performance profiles, from simple website backends with minimal CPU overhead to compute-intensive machine learning tasks.
To accurately characterize each application's requirements, we conduct a local, non-cloud evaluation of application metrics describing the requirements on compute, memory, and external resources (Section~\ref{sec:bench_impl}). The evaluation allows us to classify applications, verify that our benchmark set is representative, and pick benchmarks according to the required resource consumption. \textbf{Web Applications. } FaaS platforms allow building simplified static websites where dynamic features can be offloaded to a serverless backend. We include two examples of small but frequently invoked functions: \emph{dynamic-html} (dynamic HTML generation from a predefined template) and \emph{storage-uploader} (upload of a file from a given URL to cloud storage). They have low requirements on both CPU and memory. \textbf{Multimedia. } A common serverless workload is processing multimedia data. Uploaded images require the creation of thumbnails, which we implement in our benchmark kernel \emph{thumbnailer}. Videos are usually processed to compress them, extract audio, or convert them to more suitable formats. We include an application \emph{video-processing} that uses a static build of \emph{ffmpeg} to apply a watermark to a video and convert it to a gif file. \textbf{Utilities. } Functions are used as backend processing tools for problems that are too complex for a web server or application frontend. We consider \emph{compression} and \emph{data-vis}. In the former, the function compresses a set of files and returns an archive to the user, as seen in online document office suites and text editors. We use the acmart-master template as evaluation input. In the latter, we include the backend of DNAVisualization.org~\cite{dna_vis, 10.1093/nar/gkz404}, an open-source website providing serverless visualization of DNA sequences, using the squiggle Python library~\cite{Lee2018}. The website passes DNA data to a function, which generates the specified visualization and caches the results in storage. \textbf{Inference.
} Serverless functions implement machine learning inference tasks for edge IoT devices and websites, handling scenarios such as image processing with object recognition and classification. As an example, we use a standard image recognition task with a pretrained ResNet-50 model served with the help of pytorch~\cite{NEURIPS2019_9015} and, for evaluation, images from the \textit{fake-resnet} test of the MLPerf inference benchmark~\cite{reddi2019mlperf}. The deployment of PyTorch requires additional steps to ensure that the final deployment package meets the limits on the size of the code package. In our case, the strictest requirement is found on AWS Lambda, with a limit of 250 megabytes of uncompressed code size. We fix the PyTorch version to 1.0.1 with torchvision version 0.3. We disable all accelerator support (CPU only), strip shared libraries, and remove tests and binaries from the package. While deep learning frameworks can provide lower inference latency with GPU processing, dedicated accelerators are not currently widely available on FaaS platforms, as discussed in Section~\ref{sec:challenges}. \textbf{Scientific.} As an example of scientific workloads, we consider irregular graph computations, a more recent yet established class of workloads~\cite{lumsdaine2007challenges, besta2017push, sakr2020future, besta2019substream}. We selected three important problems: Breadth-First Search (BFS)~\cite{beamer2013direction, besta2017slimsell}, PageRank (PR)~\cite{page1999pagerank}, and Minimum Spanning Tree (MST). BFS is used in many more complex schemes (e.g., in computing maximum flows~\cite{ford2009maximal}), it represents a large family of graph traversal problems~\cite{besta2017slimsell}, and it is the basis of the Graph500 benchmark~\cite{murphy2010introducing}. PR is a leading scheme for ranking websites, and it stands for a class of centrality problems~\cite{brandes2007centrality, solomonik2017scaling}.
MST is used in many analytics and engineering problems, and it represents graph optimization problems~\cite{papadimitriou1998combinatorial, gianinazzi2018communication, besta2020high}. All three have been extensively researched in the past decade~\cite{beamer2013direction, grygorash2006minimum, besta2019slim, besta2018log, schweizer2015evaluating, besta2015accelerating, besta2019demystifying, berkhin2005survey}. We select the corresponding algorithms such that they are all data-intensive but differ in the details of the workload characteristics (e.g., BFS, unlike PR, may come with severe work imbalance across iterations). \section{Serverless Model Analysis} \label{sec:opportunities} \ifcnf \vspace{-0.25em} \fi To design SeBS, we select candidate FaaS workloads and investigate the fundamental limitations that can throttle the migration of some workloads to the serverless environment. \ifcnf \vspace{-0.5em} \fi \subsection{Candidate applications} Workloads with a premise for immediate benefits have infrequent invocations, unpredictable and sudden spikes in arriving requests, and fine-grained parallelism. Yet, the unprecedented parallelism offered by FaaS is not simple to harness, and many workloads struggle to achieve high performance, suffering from problems such as stragglers~\cite{10.1145/3304112.3325608,10.5555/3154630.3154660,10.1145/3267809.3267815}. FaaS workload classes that may be hard to program for high performance include data analytics~\cite{227653,Mller2019LambadaID,DBLP:journals/corr/abs-1907-11465}, distributed compilation~\cite{234886}, video encoding, linear algebra and high-performance computing problems~\cite{DBLP:journals/corr/abs-1810-09679}, and machine learning training and inference~\cite{10.1145/3357223.3362711,8360337,8457817}.
\begin{table*}[t!]\centering \ifcnf \vspace{-1em} \fi \footnotesize \renewcommand{\arraystretch}{0.9} \begin{tabular}{@{}p{3.0cm}p{4.8cm}p{4.4cm}p{4.5cm}@{}} \toprule \textbf{Policy} & \textbf{AWS} & \textbf{Azure} & \textbf{GCP} \\ \midrule \textbf{Languages (native)} & Python, Node.js, C\#, Java, C++, and more. & Python, JavaScript, C\#, Java, and more. & Node.js, Python, Java, Go\\ \textbf{Time Limit} & 15 minutes & 10 min / 60 min / Unlimited & 9 minutes\\ \textbf{Memory Allocation} & Static, 128 - 3008 MB & Dynamic, up to 1536 MB & Static, 128, 256, 512, 1024 or 2048 MB\\ \multirow{2}{*}{\textbf{CPU Allocation}} & Proportional to memory & \multirow{2}{*}{Unknown} & Proportional to memory \\ & 1 vCPU on 1792 MB & & 2.4 GHz CPU at 2048 MB\\ \textbf{Billing} & Duration and declared memory & Average memory use, duration & Duration, declared CPU and memory \\ \textbf{Deployment} & zip package up to 250 MB & zip package, Docker image & zip package, up to 100 MB \\ \textbf{Concurrency Limit} & 1000 Functions& 200 Function Apps& 100 Functions\\ \iftr \textbf{Temporary Disk Space} & 500 MB (must store code package) & Uses Azure Files& Included in memory usage\\ \fi \bottomrule \end{tabular} \caption{\textbf{Comparison of major commercial FaaS providers - AWS Lambda~\cite{lambdaLimits}, Azure Functions~\cite{azureLimits} and Google Cloud Functions~\cite{gcpLimits}. While commercial services have comparable compute and storage prices, their memory management and billing policies differ fundamentally.}} \label{tab:aws_azure} \ifcnf \vspace{-2em} \fi \end{table*} \ifcnf \vspace{-1.5em} \fi \subsection{FaaS model aspects} \label{sec:challenges} Although the adoption of serverless computing is increasing in various domains, the technical peculiarities that made it popular in the first place are now becoming a roadblock to further growth~\cite{DBLP:journals/corr/abs-1812-03651,DBLP:journals/corr/abs-1902-03383}.
Both the key advantages and the limitations of FaaS are listed in Table~\ref{tab:intuitive_comparison}. We now describe each aspect to better understand the scope of SeBS. \textbf{Computing Cost. } FaaS handles infrequent workloads more cost-effectively than persistent VMs. Yet, problems such as machine learning training can be much more expensive than a VM-based solution~\cite{10.5555/3154630.3154660,DBLP:journals/corr/abs-1902-03383}, primarily due to function communication overheads. FaaS burst parallelism outperforms virtual machines in data analytics workloads but inflates costs~\cite{Mller2019LambadaID}. Thus, we need to think of computational performance not only in raw FLOP/s but, most importantly, as a FLOP/s per dollar ratio. Here, \emph{SeBS includes cost efficiency as a primary metric to determine the most efficient configuration for a specific workload, analyze the pricing model's flexibility, and compare the costs with IaaS approaches.} \textbf{I/O performance. } Network I/O affects cold startup latencies, and it is crucial in ephemeral computing, which relies on external storage. As function instances share the bandwidth of a server machine, the co-allocation of functions depending on network bandwidth may degrade performance. An investigation of major cloud providers revealed significant fluctuations of network and I/O performance, with co-location decreasing throughput by up to 20$\times$ on AWS~\cite{10.5555/3277355.3277369}. \emph{SeBS includes network and disk performance as a metric to better understand the I/O requirements of serverless functions.} \textbf{Vendor Lock-In. } The lack of standardization in function configuration, deployment, and cloud services complicates development. Each cloud provider requires a customization layer that can be non-trivial. Tackling this, \emph{SeBS provides a transparent library adapting cloud service interfaces for deployment, invocation, and persistent storage management.} \textbf{Heterogeneous Environments.
} Major FaaS platforms limit user configuration options to the amount of memory allocated and an assigned time to access a virtual CPU. To the best of our knowledge, specialized hardware is only offered by nuclio~\cite{nuclio}, a data science-oriented and GPU-accelerated FaaS provider. While hardware accelerators are becoming key for scalability~\cite{vetter2019extreme}, serverless functions lack an API to allocate and manage such hardware, similar to solutions in batch systems on HPC clusters~\cite{slurmGRES}. \emph{SeBS includes dedicated tasks that can benefit from specialized hardware.} \textbf{Microarchitectural Hardware Effects. } The hardware and software stack of server machines is optimized to handle long-running applications, where major performance challenges include high pressure on instruction caches or low counts of instructions per cycle (IPC)~\cite{10.1145/2872887.2750392}. The push to microservices lowers the CPU frontend pressure thanks to a smaller code footprint~\cite{10.1145/3297858.3304013}. Still, microservices are bound by single-core performance and frontend inefficiencies due to high instruction cache miss and branch misprediction rates~\cite{7856643}. Serverless functions pose new challenges due to a lack of code and data locality. A microarchitectural analysis of FaaS workloads discovered frontend bottlenecks similar to those in microservices: decreased branch predictor performance and increased cache misses due to interfering workloads~\cite{10.1145/3352460.3358296}. \textit{SeBS enables low-level characterization of serverless applications to analyze short-running functions and better understand requirements for an optimal FaaS execution environment.} \ifcnf \vspace{-1.5em} \fi \subsection{FaaS platforms' limitations} \label{sec:faas_platforms} To support the new execution model, cloud providers put restrictions on users' code and resource consumption.
Although some of those restrictions can be overcome, developers must design applications with these limitations in mind. Table~\ref{tab:aws_azure} presents a detailed overview of three commercial FaaS platforms: AWS Lambda~\cite{awsLambda}, Azure Functions~\cite{azureFunctions}, and Google Cloud Functions~\cite{googleFunctions}. Azure Functions changes the semantics by introducing \emph{function apps}, which consist of multiple functions. The functions are bundled and deployed together, and a single function app instance can use processes and threads to handle multiple function instances from the same app. Thus, they benefit from less frequent cold starts and increased locality without compromising isolation and security requirements. \subsection{Application Metrics} \label{sec:metrics} We now discuss in detail the metrics that are measured locally and during cloud execution. \noindent \textit{Local metrics. } These metrics provide an accurate profile of application performance and resource usage to the user. \begin{itemize}[leftmargin=0.5em] \item \textbf{Time.} We measure execution time to find which applications require significant computational effort, and we use hardware performance counters to count instructions executed, a metric less likely to be influenced by system noise. \item \textbf{CPU utilization.} We measure the ratio of time spent by the application on the CPU, both in the user and the kernel space, to the wall-clock time. This metric helps to detect applications stalled on external resources. \item \textbf{Memory.} Peak memory usage is crucial for determining application configuration and billing. It also enables providers to bound the number of active or suspended containers. Instead of the resident set size (RSS), which overapproximates actual memory consumption, we measure the unique set size (USS) and proportional set size (PSS). Thus, we enable an analysis of the benefits of page sharing.
\item \textbf{I/O.} I/O-intensive functions may be affected by contention. The average throughput of filesystem I/O and network operations decreases with the number of co-located function invocations that have to share the bandwidth, leading to significant network performance variations~\cite{10.5555/3277355.3277369}. \item \textbf{Code size.} The size and complexity of dependencies impact the warm \emph{and} cold start latency. Larger code packages increase deployment time from cloud storage and the warm-up time of the language runtime. \end{itemize} \textit{Cloud metrics. } The set of metrics available in the cloud is limited because of the black-box nature of the FaaS system. Still, we can gain additional information through microbenchmarks and modeling experiments (Section~\ref{sec:evaluation}). \begin{itemize}[leftmargin=0.5em] \item \textbf{Benchmark, Provider and Client Time.} We measure execution time on three levels: we directly measure benchmark execution time in the cloud, including the work performed by the function but not network and system latencies; we query cloud provider measurements, adding the overheads of the language and serverless sandbox; and we measure end-to-end execution latency on the client side, estimating the complete overhead including the latency of function scheduling and deployment. \item \textbf{Memory.} The actual memory consumption plays a crucial role in determining cost on platforms with dynamic memory allocation. Elsewhere, the peak memory consumption determines the execution settings and billing policies. \item \textbf{Cost.} The incurred costs are modeled from billed duration, memory consumption, and the number of requests made to persistent storage. While AWS enables estimating the cost of each function execution, Azure offers a monitoring service with a query interval not shorter than one second.
\end{itemize} \subsection{Implementation} \label{sec:implementation} We implement the platform from Figure~\ref{fig:suite_diagram} to fulfill three major requirements: application characterization, deployment to the cloud, and modeling of cloud performance and overheads. We describe SeBS modularity and the support for the inclusion of new benchmarks, metrics, and platforms. \textbf{Deployment.} SeBS handles all necessary steps of invoking a function in the cloud. We allocate all necessary resources and do not use third-party dependencies, such as the Serverless framework~\cite{serverless}, since flexible and fine-grained control over resources and functions is necessary to efficiently handle large-scale and parallel experiments, e.g., the container eviction model (Sec.~\ref{sec:evaluation}). For each platform, we implement the simplified interface described below. Furthermore, benchmarks and their dependencies are built within Docker containers resembling function execution workers to ensure binary compatibility with the cloud. For Google Cloud Functions, we use the provider's Docker-based build system, as the platform requires. \emph{SeBS can be extended with new FaaS platforms by implementing the described interface and specifying Docker builder images.} \begin{lstlisting}[language=Python]
class FaaS:
  def package_code(directory, language: [Py, JS])
  def create_function(fname, code, lang: [Py, JS], config)
  def update_function(fname, code, config)
  def create_trigger(fname, type: [SDK, HTTP])
  def query_logs(fname, type: [TIME, MEM, COST])
\end{lstlisting} \textbf{Benchmarks.} We use a single benchmark implementation in a high-level language for all cloud providers. Each benchmark includes a Python function to generate inputs for invocations of varying sizes. SeBS implements provider-specific wrappers for entry functions to support different input formats and interfaces.
Each benchmark can add custom build actions, including the installation of native dependencies and support for benchmark languages with a custom build process, such as the AWS Lambda C++ Runtime. \emph{New applications integrate easily into SeBS: the user specifies an input generation procedure, configures dependencies and optional build actions, and adjusts storage access functionalities.} \begin{lstlisting}[language=Python]
def function_wrapper(provider_input, provider_env):
  input = json(provider_input)
  start_timer()
  res = function()
  time = end_timer()
  return json(time, statistics(provider_env), res)
\end{lstlisting} \textbf{Storage.} We use light-weight wrappers to handle the different storage APIs used by cloud providers. Benchmarks use the SeBS abstract storage interface, and we implement one-to-one mappings between our interface and the provider's. The overhead is limited to a single redirect of a function call. \emph{New storage solutions require implementing a single interface, and benchmarks will use it automatically.} \textbf{Experiments.} SeBS implements a set of experiments using the provided FaaS primitives. Experiments invoke functions through an abstract trigger interface, and we implement cloud SDK and HTTP triggers. The invocation result includes SeBS measurements and the unchanged output of the benchmark application. SeBS metrics are implemented in function wrappers and with the provider log querying facilities. Each experiment includes a postprocessing step that examines execution results and provider logs. \emph{New experiments and triggers are integrated automatically into SeBS through a common interface.
SeBS can be extended with new types of metrics by plugging measurement code into SeBS benchmark wrappers, by using the provided log querying facilities, and by returning benchmark-specific measurements directly from the function.} \textbf{Technicalities.} We use Docker containers with language workers in Python and Node.js in local evaluation; minio~\cite{minio} implements persistent storage. We use PAPI~\cite{10.1007/978-3-642-11261-4_11} to gather low-level characteristics (we found the results from Linux \emph{perf} to be unreliable when the application lifetime is short). For cloud metrics, we use the provider's API to query execution time, billing, and memory consumption, when available. We use \emph{cURL} to exclude the HTTP connection overheads from client time measurements. We enforce cold starts by updating the function configuration on AWS and by publishing a new function version on Azure and GCP.
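To illustrate the billed-duration cost model that the metrics above build on, the following Python sketch compares a FaaS deployment against an always-on VM. All prices are hypothetical placeholders, not rates used by SeBS or by any specific provider:

```python
# Sketch of the billed-duration cost model: duration x declared memory,
# plus a per-request fee. All prices are illustrative placeholders.
GB_SECOND_PRICE = 0.0000166667   # assumed price per GB-second of function time
REQUEST_PRICE   = 0.20 / 1e6     # assumed price per single invocation
VM_HOURLY_PRICE = 0.10           # assumed price per VM-hour

def faas_monthly_cost(invocations, duration_s, memory_mb):
    """Monthly cost of running a workload as a serverless function."""
    gb_seconds = invocations * duration_s * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_PRICE + invocations * REQUEST_PRICE

def vm_monthly_cost(hours=730):
    """An always-on VM is billed for every hour, regardless of utilization."""
    return hours * VM_HOURLY_PRICE

# Infrequent workloads favor FaaS; a continuously busy function can
# cost more than the VM it replaces.
sparse = faas_monthly_cost(invocations=100_000, duration_s=0.2, memory_mb=256)
busy = faas_monthly_cost(invocations=30_000_000, duration_s=1.0, memory_mb=1024)
```

Under these placeholder prices, the sparse workload costs cents per month while the busy one exceeds the VM, illustrating why SeBS reports cost efficiency (FLOP/s per dollar) rather than raw performance alone.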
\section{Introduction} \label{sec:introduction} In recent years, there has been a growing interest in estimating information-theoretic quantities related to parametric distributions. The Shannon entropy, also known as differential entropy, introduced by Claude Shannon \cite{shannon1948mathematical}, is an essential quantity that measures the amount of information, or uncertainty, in the outcome of a random process. Given a density function $f(x|\alpha,\beta)$, the differential entropy is given by \begin{equation} H(\alpha,\beta)=\mathbb{E}\left(-\log f(x|\alpha,\beta)\right). \end{equation} The differential entropy depends on the distribution parameters and, given a sample, it must be estimated. The method commonly used to estimate the parameters is the maximum likelihood approach, due to its one-to-one invariance property: we need only estimate the parameters of the original model and plug them into the entropy function. Under this approach, many authors have derived such estimators for different distributions, for instance the Weibull \cite{cho2015estimating}, inverse Weibull \cite{yu2019statistical}, log-logistic \cite{du2018statistical}, and the exponential distribution with a shifted origin \cite{kayal2013estimation}, to name a few. A major drawback of maximum likelihood inference is that the obtained estimates are usually biased for small samples \cite{cordeiro2014introduction}. Another concern arises with small samples when constructing confidence intervals for the parameters, since such intervals are not precise and may not achieve good coverage probabilities. In this case, a study of the skewness of the maximum likelihood estimation (MLE) is essential to assess the quality of the interval \cite{cordeiro2001skewness}. To overcome these limitations, we can use objective Bayesian methods.
In this context, inference for the parameters of the gamma distribution has been discussed earlier under this approach by Miller \cite{miller1980bayesian}, Sun and Ye \cite{sun1996frequentist}, Berger et al. \cite{berger2015}, and Louzada and Ramos \cite{louzada2018efficient}. Moreover, Ramos et al. \cite{Pedro2020onposterior} revised the most common objective priors and provided sufficient and necessary conditions for the obtained posteriors and their higher moments to be proper. Although these authors obtained different joint posterior distributions for the parameters of interest, the resulting posterior means cannot be directly plugged into the Shannon entropy. Under the Bayesian approach, it is necessary to obtain the posterior distribution of the entropy measure itself. In this context, Shakhatreh \cite{shakhatreh2020objective} recently derived different posterior distributions using objective priors for the entropy assuming a Weibull distribution. On the other hand, the entropy expression of that distribution is not as complicated as that of the gamma distribution. With this in mind, in this paper, focusing on the gamma distribution, we derive the posterior distributions using objective priors, such as the Jeffreys prior \cite{jeffreys1946invariant}, reference priors \cite{bernardo1979a, berger1992development,berger2015}, and matching priors \cite{tibshirani1989}, and prove that the obtained posteriors are proper and can be used to construct the posterior distributions of the Shannon entropy. Moreover, even if the posterior distribution is proper, the posterior mean can be infinite, which is undesirable, and thus we also prove that the obtained posterior means for the entropy measure are finite. Finally, credibility intervals are obtained to construct accurate interval estimates.
The gamma distribution considered here is a two-parameter family of distributions, among the most well-known distributions used to model different stochastic processes and to make statistical inferences, and it has received attention from different fields. It surfaces in many areas of application, including financial analysis~\cite{cizek2005statistical}, climate analysis~\cite{husak2007use}, reliability analysis~\cite{gupta1961gamma}, machine learning~\cite{kamalov2020gamma}, and physics~\cite{garcia2012stochastic}. In particular, the gamma distribution includes the exponential, Erlang, and chi-square distributions as special cases. A random variable $X$ follows a gamma distribution if its probability density function, parametrized by a shape parameter $\alpha>0$ and rate parameter $\beta>0 $, is given by \begin{equation}\label{pdfgammma} f(x\,| \alpha,\beta)=\frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x} ,~~ x>0, \end{equation} where $\Gamma(\phi)=\int_{0}^{\infty}{e^{-x}x^{\phi-1}dx}$ is the gamma function. The paper is organized as follows. Section \ref{entropysec} presents the maximum likelihood estimators for the gamma distribution parameters and the computation of the Shannon entropy. Section \ref{bayesianinf} presents the objective Bayesian analysis, deriving the reparametrized posterior distribution of the Shannon entropy under objective priors. Section \ref{simulations} provides a simulation study to select the best objective prior. In Section \ref{application}, the methodology is illustrated on a real dataset. Some final comments are given in Section \ref{conclusions}. \section{Frequentist approach}\label{entropysec} Classical (frequentist) inference is a commonly used approach to conduct parameter estimation for a particular distribution. In this case, the parameter is treated as fixed, and the MLE is commonly used to obtain the estimates.
The MLE has good asymptotic properties, such as invariance, consistency, and efficiency. This procedure searches the parameter space of $\boldsymbol{\theta}$ for the point $\hat{\boldsymbol{\theta}}$ at which the likelihood $L(\boldsymbol{\theta}|x)$ attains its supremum. Here our main aim is to obtain the estimate of a function of the parameters. Hence, we first need to obtain the entropy measure, mathematically defined as $H(\boldsymbol{\theta})=\mathbb{E}(-\log f(x|\boldsymbol{\theta}))$, which quantifies the amount of uncertainty in the data $x$; note that a higher value of $H$ indicates more uncertainty. The entropy $H$ of the gamma density is given by \begin{equation}\label{entropy} \begin{aligned} H(\alpha, \beta)&=-\int^\infty_0\log\left(\frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}\right)f(x|\alpha,\beta)dx \\ & =\alpha-\log(\beta)+\log(\Gamma(\alpha))+(1-\alpha)\psi(\alpha), \end{aligned} \end{equation} where $\psi(k)=\frac{d}{dk}\log\Gamma(k)$ is the digamma function. Now, consider a change of variable by setting $W=\alpha$, which implies $H = W - \log(\beta) + \log\Gamma(W) + (1-W)\psi(W)$. The aim of the transformation is to obtain a likelihood in $H$ and $W$ instead of $\alpha$ and $\beta$. Therefore, if $X_1$, $\ldots$, $X_n\,$ is a complete sample from (\ref{pdfgammma}), then the likelihood function of $H$ and $W$ is given as \begin{equation}\label{eq5} L(W, H\,| \boldsymbol{x})=\frac{\delta(W,H)^{nW}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta(W,H)\sum_{i=1}^n x_i\right\}, \end{equation} where $\delta(W,H)=\exp\left(W + \log\Gamma(W) + (1 - W)\psi(W) - H\right)$.
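The closed-form expression in (\ref{entropy}) can be verified numerically. The following sketch uses only the Python standard library, approximating the digamma function by a finite difference of $\log\Gamma$, and compares the closed form against a midpoint-rule discretization of $-\int_0^\infty f\log f\,dx$:

```python
import math

def gamma_pdf(x, a, b):
    # density in (2): f(x | a, b) = b^a / Gamma(a) * x^(a-1) * e^(-b x)
    return math.exp(a * math.log(b) - math.lgamma(a) + (a - 1) * math.log(x) - b * x)

def digamma(a, h=1e-6):
    # finite-difference approximation of psi = (log Gamma)'
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def entropy_closed_form(a, b):
    # H = alpha - log(beta) + log Gamma(alpha) + (1 - alpha) * psi(alpha)
    return a - math.log(b) + math.lgamma(a) + (1 - a) * digamma(a)

def entropy_numeric(a, b, upper=60.0, n=200_000):
    # midpoint-rule discretization of H = -int_0^inf f(x) log f(x) dx
    dx = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        fx = gamma_pdf(x, a, b)
        if fx > 0.0:  # skip values that underflow in the tail
            total -= fx * math.log(fx) * dx
    return total
```

For $\alpha=1$, $\beta=1$ the formula recovers the exponential-distribution entropy $H=1$, as expected from the special-case relation noted in the introduction.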
The log-likelihood function is given by \begin{equation}\label{eq6} \begin{aligned} l(W, H\,|\boldsymbol{x})=nW\log(\delta(W,H))-n\log\Gamma(W) + W\sum_{i=1}^n \log(x_i) - \delta(W,H)\sum_{i=1}^{n}x_i.\\ \end{aligned} \end{equation} The MLEs are obtained by directly maximizing the log-likelihood function $l(W, H\,|\boldsymbol{x})$. Hence, after some algebraic manipulations, the MLEs $\hat{W}$ and $\hat{H}$ are obtained from the solution of \begin{equation*} \frac{\partial l(W,H\,|\boldsymbol{x})}{\partial W}=n\log(\delta(W,H))-n\psi(W) + \sum_{i=1}^n \log(x_i) + \sigma \left(nW - \delta(W,H)\sum_{i=1}^n x_i\right)=0 \end{equation*} \begin{equation*} \frac{\partial l(W,H\,|\boldsymbol{x})}{\partial H}=-nW + \delta(W,H)\sum_{i=1}^{n}x_i=0, \end{equation*} where $\sigma = 1 + (1- W)\psi'(W)$. The solutions of these equations provide the maximum likelihood estimators $\widehat{H}$ and $\widehat{W}$ for the entropy of the gamma distribution. Since these equations do not admit a closed-form solution, numerical techniques must be used to estimate the parameters. Following \cite{migon}, the MLEs are asymptotically normally distributed with a joint bivariate normal distribution given by \begin{equation*} \left(\hat{W}_{\rm MLE},\hat{H}_{\rm MLE}\right) \sim N_2 \left[\left(W,H \right),I^{-1} \left(W,H\right) \right] \quad \mbox{ as } \quad n \to \infty , \end{equation*} where $I(W,H)$ is the Fisher information matrix for the reparametrized model given by \begin{align} \begin{aligned} I(W,H) =&\begin{bmatrix} \psi'(W) -2\sigma + W\sigma^2 & 1-\sigma W\\ 1-\sigma W & W \end{bmatrix}, \end{aligned} \end{align} \vspace{0.2cm} \noindent and $\psi\,'(W)$ is the derivative of $\psi(W)$, called the trigamma function.
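In practice, rather than solving the score equations above directly, one may exploit the invariance of the MLE: maximize in the original $(\alpha,\beta)$ parametrization, where the problem reduces to one-dimensional root finding, and plug $\hat{W}=\hat{\alpha}$ and $\hat{\beta}$ into the entropy. The following standard-library sketch (the bisection bracket and the finite-difference digamma are ad-hoc choices, not taken from the paper) illustrates the plug-in estimator $\widehat{H}$:

```python
import math, random

def digamma(a, h=1e-6):
    # finite-difference approximation of psi = (log Gamma)'
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def gamma_mle(xs):
    """MLE of (alpha, beta): alpha solves log(alpha) - psi(alpha) = c, beta = alpha/mean."""
    n = len(xs)
    mean = sum(xs) / n
    c = math.log(mean) - sum(math.log(x) for x in xs) / n  # > 0 unless all xs are equal
    lo, hi = 1e-3, 1e3  # ad-hoc bracket; log(a) - psi(a) is positive and decreasing
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.log(mid) - digamma(mid) > c:
            lo = mid  # value still too large: the root lies at a larger alpha
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    return alpha, alpha / mean

def entropy_mle(xs):
    """Plug-in estimator: H-hat = W - log(beta) + log Gamma(W) + (1 - W) psi(W)."""
    w, beta = gamma_mle(xs)
    return w - math.log(beta) + math.lgamma(w) + (1 - w) * digamma(w)

random.seed(42)
# synthetic sample with shape 2.0 and rate 1.5 (gammavariate takes shape, scale)
sample = [random.gammavariate(2.0, 1 / 1.5) for _ in range(50_000)]
true_h = 2.0 - math.log(1.5) + math.lgamma(2.0) + (1 - 2.0) * digamma(2.0)
h_hat = entropy_mle(sample)  # close to true_h for a sample this large
```

Consistency of the MLE makes $\widehat{H}$ converge to the true entropy as the sample grows, matching the asymptotic normality stated above.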
In the present paper, we are only interested in $H$, and thus, given $0<a<1$ and using the element $ (I(W,H)^{-1})_{22}$, we can conclude that the $100(1-a)\%$ confidence interval for the entropy measure $H$ is given by \begin{equation} \hat{H}-{Z_{\frac{a}{2}}}\sqrt{(1 - \hat{W})^2\psi'(\hat{W}) + 2 - \hat{W}}< H<\hat{H}+{Z_{\frac{a}{2}}}\sqrt{(1 - \hat{W})^2\psi'(\hat{W}) + 2 - \hat{W}}, \end{equation} where $a$ is the significance level and ${Z_{\frac{a}{2}}}$ is the upper $\frac{a}{2}$ quantile of the standard normal distribution. \section{Bayesian Inference}\label{bayesianinf} Here, the parameter $\boldsymbol{\theta}$ is treated as a random variable, and the distribution that represents the knowledge about $\boldsymbol{\theta}$ is referred to as the prior distribution, denoted by $\pi(\boldsymbol{\theta})$. The distribution $\pi(\boldsymbol{\theta})$ expresses the knowledge or uncertainty about $\boldsymbol{\theta}$ before the sample data $\boldsymbol{x}$ are observed. After the data $\boldsymbol{x}$ are observed, the information from the prior distribution and the likelihood function is combined via Bayes' theorem, resulting in the posterior distribution of $\boldsymbol{\theta}$ given $\boldsymbol{x}$. In a Bayesian framework, Ramos et al. \cite{Pedro2020onposterior} analyzed the properties of the posterior distribution of the gamma distribution parameters and stated the conditions for this distribution to have a proper posterior and finite moments. To obtain the posterior distributions for the $H$ parameter, we can consider the one-to-one invariance property of the Jeffreys prior, reference prior, and matching prior, and thus we only need to obtain the Jacobian matrix related to the reparametrization from $\alpha$ and $\beta$ to $H$ and $W$.
After some algebraic manipulations, we can conclude that the parameters $\beta$ and $\alpha$ can be written as \begin{equation*} \beta = \exp\left(W + \log(\Gamma(W)) + (1 - W)\psi(W) - H\right) \quad \mbox{ and } \quad \alpha=W, \end{equation*} and thus, from the relations \begin{align*} \frac{\partial \alpha}{\partial H}=0,\, \frac{\partial \alpha}{\partial W}=1,\, \frac{\partial \beta}{\partial H}=- \beta \ \ \mbox{ and } \ \ \frac{\partial \beta}{\partial W}= \left( 1+(1-W)\psi'(W) \right)\beta, \end{align*} it follows that the Jacobian matrix $J$ of the change of variables is given by \begin{align} J = \begin{bmatrix} \frac{\partial \alpha}{\partial H} & \frac{\partial \alpha}{\partial W} \\ \frac{\partial \beta}{\partial H} & \frac{\partial \beta}{\partial W} \end{bmatrix} =\begin{bmatrix} 0 & 1\\ -\beta& \sigma \beta \end{bmatrix}, \end{align} where $\sigma = 1 + (1-W)\psi'(W)$. \vspace{0.1 cm} The use of objective priors plays an essential role in Bayesian analysis when the data should provide the dominant information and the posterior distribution should not be overshadowed by the prior. Such priors allow us to conduct objective Bayesian inference. On the other hand, in most situations they are not proper prior distributions and may lead to improper posteriors, invalidating the analysis, since we cannot compute the normalizing constant. Therefore, we need to check whether the obtained posterior (and posterior mean) is proper (or finite). The priors for the entropy and their related posterior distributions will be discussed in the next subsections. Before we derive the priors and posterior distributions, hereafter we shall always assume that there are at least two distinct observations, that is, there exist $1\leq i<j \leq n$ such that $x_i\neq x_j$. Additionally, before we proceed, we present below a definition and a proposition that will be used to prove that the obtained posteriors are proper.
In the following, let $\overline{\mathbb{R}} = \mathbb{R}\cup \{-\infty, \infty\}$ denote the \textit{extended real number line} and let $\mathbb{R}^+$ denote the strictly positive real numbers. The following definition is a special case of the one presented in \cite{ramos2017bayesian} and will play an important role in proving that the analyzed posterior distributions and posterior means are proper. \begin{definition}\label{def31} Let $a\in \mathbb{\overline{R}}$, $\operatorname{g}:\mathcal{U}\to\mathbb{R^+}$ and $\operatorname{h}:\mathcal{U}\to\mathbb{R^+}$, where $\mathcal{U}\subset\mathbb{R}$, and suppose that $\lim_{x\to a} \dfrac{\operatorname{g}(x)}{\operatorname{h}(x)} = c\in \mathbb{R}$. Then, if $c>0$, we say that $g(x)\underset{x\to a}{\propto} h(x)$. \end{definition} Regarding the above definition, we have the following proposition from \cite{ramos2017bayesian}. \begin{proposition}\label{prop32} Let $\operatorname{g}:(a,b)\to\mathbb{R^+}$ and $\operatorname{h}:(a,b)\to\mathbb{R^+}$ be continuous functions in $(a,b)\subset\mathbb{R}$, where $a\in\overline{\mathbb{R}}$ and $b\in\overline{\mathbb{R}}$, and let $c\in(a,b)$. Then $\operatorname{g}(x)\underset{x\to a}{\propto} \operatorname{h}(x)$ implies $ \int_a^c g(t)\; dt \propto \int_a^c h(t)\; dt$, and $\operatorname{g}(x)\underset{x\to b}{\propto} \operatorname{h}(x)$ implies $\int_c^b g(t)\; dt \propto \int_c^b h(t)\; dt$. \end{proposition} \subsection{Jeffreys prior} Jeffreys \cite{jeffreys1946invariant} described a procedure to construct an objective prior that is invariant under one-to-one monotone transformations. This invariance property of the Jeffreys prior has been widely exploited to make statistical inferences from its posterior distribution. The prior is constructed as the square root of the determinant of the Fisher information matrix $I(\alpha,\beta)$.
Thus, the Jeffreys prior for the gamma distribution is given by \begin{equation}\label{priorjnk} \pi_1\left(\alpha,\beta\right)\propto \frac{\sqrt{\alpha\psi'(\alpha)-1}}{\beta}. \end{equation} Additionally, from the determinant of the Fisher information matrix, or by applying the change of variables to the Jeffreys prior, we have \begin{equation}\label{priorjnkHW} \pi_1\left(H,W\right)\propto \sqrt{W\psi'(W)-1}. \end{equation} Finally, the joint posterior distribution for $H$ and $W$ produced by the Jeffreys prior is \begin{equation}\label{postjnk1} \begin{aligned} \pi_1(H,W|\boldsymbol{x})\propto\frac{\delta(W,H)^{nW}\sqrt{W\psi'(W)-1}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta(W,H)\sum_{i=1}^n x_i\right\} . \end{aligned} \end{equation} \begin{theorem}\label{theo33} The posterior density (\ref{postjnk1}) is proper for all $n\geq 2$. \end{theorem} \begin{proof} Using the change of variables $\exp(-H)=u\Leftrightarrow du = - \exp(-H)dH$ and denoting $\delta_1(W) = \exp(W + \log(\Gamma(W))+(1-W)\psi(W))$ it follows that \begin{equation*} \begin{aligned} d_1(x)&\propto \int_0^\infty \int_{-\infty}^\infty \pi_1(H,W|\boldsymbol{x})\; dH dW \\ & \propto \int_0^\infty \int_0^\infty\frac{\delta_1(W)^{nW}u^{nW-1}\sqrt{W\psi'(W)-1}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta_1(W)u\sum_{i=1}^n x_i\right\} du dW\\ & =\int_0^\infty \frac{\delta_1(W)^{nW}\sqrt{W\psi'(W)-1}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\int_0^\infty u^{nW-1}\exp\left\{-\delta_1(W)\left(\sum_{i=1}^n x_i\right) u\right\} du dW\\ & = \int_0^\infty \sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n} dW =\int_0^1 g_1(W)dW + \int_1^\infty g_1(W) dW, \end{aligned} \end{equation*} where $g_1(W) =\sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}>0$ for all $W\in (0,\infty)$.
Now, according to \citep{ramos2017bayesian,ramos2018posterior}, we have $\frac{\Gamma(nW)}{\Gamma(W)^n}\underset{W\to 0^+}{\propto}W^{n-1}$ and $\sqrt{W\psi'(W)-1}\underset{W\to 0^+}{\propto} W^{-1/2}$ and since \begin{equation*} \lim_{W\to 0^+} \frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} = 1\Rightarrow \frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}\underset{W\to 0^+}{\propto}1, \end{equation*} it follows by Proposition \ref{prop32} that \begin{equation*} \int_0^1 g_1(W) dW \propto \int_0^1 W^{-1/2}\times 1\times W^{n-1}\; dW < \infty. \end{equation*} Moreover, due to \citep{ramos2017bayesian,ramos2018posterior} we have $\frac{\Gamma(nW)}{\Gamma(W)^n}\underset{W\to \infty}{\propto} n^{nW}W^{(n-1)/2}$ and $\sqrt{W\psi'(W)-1}\underset{W\to \infty}{\propto} W^{-1/2}$, and since $x_i$ are not all equal, due to the inequality of the arithmetic and geometric means we have $q=\log\left(\frac{\frac{1}{n} \sum_{i=1}^n x_i}{\sqrt[n]{\prod_{i=1}^n{x_i}}}\right)>0$ and thus it follows that \begin{equation*}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}=\left(\frac{\frac{1}{n} \sum_{i=1}^n x_i}{\sqrt[n]{\prod_{i=1}^n{x_i}}}\right)^{-nW}n^{-nW}= \exp\left(-nqW\right)n^{-nW}. \end{equation*} Therefore, from Proposition \ref{prop32} it follows that \begin{equation*} \begin{aligned} \int_1^\infty g_1(W) dW & \propto \int_1^\infty W^{-1/2}\times \exp\left(-nqW\right)n^{-nW}\times n^{nW}W^{(n-1)/2} dW\\ & = \int_1^\infty W^{n/2-1}\exp\left(-nqW\right) \; dW \leq\frac{\Gamma(n/2)}{(nq)^{n/2}}<\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} \begin{theorem}\label{theo34} The posterior mean of $H$ relative to (\ref{postjnk1}) is finite for any $n\geq 2$.
\end{theorem} \begin{proof} Applying the change of variables $\exp(-H)=u\Leftrightarrow du = - \exp(-H)dH$ and denoting $\delta_1(W) = \exp(W + \log(\Gamma(W))+(1-W)\psi(W))$, it follows that \begin{equation*} \begin{aligned} E_1[H|x]&\propto \int_0^\infty \int_{-\infty}^\infty H \pi_1(H,W|\boldsymbol{x})\; dH dW \\ & = \int_0^\infty \int_0^\infty -\log(u)\frac{\delta_1(W)^{nW}u^{nW-1}\sqrt{W\psi'(W)-1}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta_1(W)u\sum_{i=1}^n x_i\right\} du dW\\ & =\int_0^\infty \frac{\delta_1(W)^{nW}\sqrt{W\psi'(W)-1}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\int_0^\infty \left(-\log(u)\right) u^{nW-1}\exp\left\{-\delta_1(W)\left(\sum_{i=1}^n x_i\right) u\right\} du dW. \end{aligned} \end{equation*} Moreover, from the identity $\psi(z)\Gamma(z)=\Gamma'(z)=\int_0^\infty \log(t)t^{z-1}e^{-t}dt$ one obtains that \begin{equation*} \int_0^\infty \log(s)s^{z-1}e^{-as}ds = 1/a^z\int_0^\infty \log(t/a)t^{z-1}e^{-t}dt = 1/a^z \left(\psi(z)\Gamma(z)-\log(a)\Gamma(z)\right) \end{equation*} and thus, letting $\left|\cdot \right|$ denote the absolute value operator and letting $\delta_2(W) = \left| \psi(nW)\right|+\left|\log(\Gamma(W))\right|+(1+W)|\psi(W)|+W+\left|\log\left(\sum_{i=1}^n x_i\right)\right|$ for all $W>0$, and using the triangle inequality we have \begin{equation*} \begin{aligned} \left|E_1[H|x]\right| & \propto \left| \int_0^\infty \left(\psi(nW)-\log\left(\delta_1(W)\sum_{i=1}^n x_i\right)\right) \sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n} dW\right|\\ &\leq \int_0^\infty \left| \psi(nW)-\log\left(\delta_1(W)\sum_{i=1}^n x_i\right)\right| \sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n} dW\\ & \leq \int_0^\infty \delta_2(W) \sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}
\frac{\Gamma(nW)}{\Gamma(W)^n} dW=\int_0^1 h_1(W) dW + \int_1^\infty h_1(W) dW,\\ \end{aligned} \end{equation*} where $h_1(W)=\delta_2(W) \sqrt{W\psi'(W)-1}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}$ for all $W>0$. We shall now prove that $\delta_2(W)\underset{W\to 0^+}{\propto} W^{-1}$ and $\delta_2(W)\underset{W\to \infty}{\propto} W\log(W)$. Indeed, notice that $\delta_2(W)\geq W>0$ for $W>0$. Moreover, since, due to Abramowitz \cite{abramowitz}, we have $\lim_{W\to 0^+}W\Gamma(W)=1$ and $\lim_{W\to 0^+}-W\psi(W)=1$ it follows that \begin{equation*} \begin{aligned} &\lim_{W\to 0^+}\frac{\left|\psi(nW)\right|}{W^{-1}}=\lim_{W\to 0^+}\frac{1}{n}\left|(nW)\psi(nW)\right|=\frac{1}{n}\\ &\lim_{W\to 0^+}\frac{ \left|\log\left(\Gamma(W)\right)\right|}{W^{-1}} = \lim_{W\to 0^+} \left|W\log(W\Gamma(W))-W\log(W)\right|=\left|0\cdot \log(1)-0\right| = 0\\ & \lim_{W\to 0^+}\frac{ (1+W)\left|\psi(W)\right|}{W^{-1}}=\lim_{W\to 0^+}(1+W)\left|W\psi(W)\right|=1\mbox{ and }\\ &\lim_{W\to 0^+} \frac{W+\left|\log\left(\sum_{i=1}^n x_i\right)\right|}{W^{-1}} =\lim_{W\to 0^+} \left(W^2+W\left|\log\left(\sum_{i=1}^n x_i\right)\right|\right) =0 \end{aligned} \end{equation*} and thus \begin{equation*} \lim_{W \to 0^+}\frac{\delta_2(W)}{W^{-1}} = \frac{1}{n}+1\Rightarrow \delta_2(W)\underset{W\to 0^+}{\propto} \frac{1}{W}.
\end{equation*} On the other hand, since by Abramowitz \cite{abramowitz} we have $\lim_{W\to \infty}\frac{\psi(W)}{\log(W)}=1$, it follows from L'H\^opital's rule that \begin{equation*} \lim_{W\to \infty} \frac{\log(\Gamma(W))}{W(\log(W)+1)}=\lim_{W\to \infty} \frac{(\log(\Gamma(W)))'}{(W(\log(W)+1))'}= \lim_{W\to \infty} \frac{\psi(W)}{\log(W)}=1, \end{equation*} and therefore, considering $W\geq 1$ we have \begin{equation*} \begin{aligned} &\lim_{W\to \infty}\frac{\left|\psi(nW)\right|}{W(\log(W)+1)} = \lim_{W\to \infty}\frac{\log(nW)}{W(\log(W)+1)}\left|\frac{\psi(nW)}{\log(nW)}\right| = 0,\\ &\lim_{W\to \infty} \frac{\left|\log(\Gamma(W))\right|}{W(\log(W)+1)}= \lim_{W\to \infty}\left| \frac{\log(\Gamma(W))}{W(\log(W)+1)}\right|=1,\\ &\lim_{W\to \infty} \frac{(1+W)\left| \psi(W)\right|}{W(\log(W)+1)} =\lim_{W\to \infty} \left(1+W^{-1}\right)\frac{1}{\left(1+\log(W)^{-1}\right)}\left|\frac{\psi(W)}{\log(W)}\right|=1,\mbox{ and }\\ &\lim_{W\to \infty} \frac{W+\left|\log\left(\sum_{i=1}^n x_i\right)\right|}{W(\log(W)+1)} = \lim_{W\to \infty} \left(\frac{1}{\log(W)+1}+\frac{\left|\log\left(\sum_{i=1}^n x_i\right)\right|}{W(\log(W)+1)}\right)=0, \end{aligned} \end{equation*} and thus \begin{equation*} \lim_{W\to \infty}\frac{\delta_2(W)}{W(\log(W)+1)}=2\Rightarrow \delta_2(W)\underset{W\to \infty}{\propto} W\log(W). \end{equation*} Therefore, combining the obtained proportionality $\delta_2(W)\underset{W\to 0^+}{\propto} W^{-1}$ with the proportionalities proved in Theorem \ref{theo33} and using Proposition \ref{prop32} we have \begin{equation*} \int_0^1 h_1(W) dW \propto \int_0^1 W^{-1}\times W^{-1/2}\times 1\times W^{n-1}\; dW < \infty.
\end{equation*} Finally, using the proportionality $\delta_2(W)\underset{W\to \infty}{\propto} W\log(W)$, letting $q>0$ be as in the proof of Theorem \ref{theo33} and using that $\log(W)+1\leq \exp(\log(W))=W$ for $W\geq 1$, it follows from the proportionalities proved during Theorem \ref{theo33} and from Proposition \ref{prop32} that \begin{equation*} \begin{aligned} \int_1^\infty h_1(W) dW & \propto \int_1^\infty W(\log(W)+1)\times W^{-1/2}\times \exp\left(-nqW\right)n^{-nW}\times n^{nW}W^{(n-1)/2} dW\\ & \leq \int_1^\infty W^{(n/2+2)-1}\exp\left(-nqW\right) \; dW \leq\frac{\Gamma(n/2+2)}{(nq)^{n/2+2}} <\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} In order to sample from the posterior distribution, note that the marginal posterior distribution of $W$ is given by \begin{equation*} \pi_1(W|\boldsymbol{x})\propto\sqrt{W\psi'(W)-1}\frac{\Gamma(nW)}{\Gamma(W)^n} \left(\frac{\sqrt[n]{\prod_{i=1}^n{x_i}}}{ \sum_{i=1}^n x_i}\right)^{nW}, \end{equation*} and the conditional posterior distribution of $H$ is given by \begin{equation*} \begin{aligned} \pi_1(H|W,\boldsymbol{x})\propto\exp\left\{-nWH-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{aligned} \end{equation*} \subsection{Reference prior} Bernardo \cite{bernardo1979a} discussed a different approach to obtain a new class of objective priors, known as reference priors. Further, many studies were presented to develop formal and rigorous definitions to derive such classes of prior distributions under different contexts \cite{berger1989estimating, berger1992ordered, berger1992reference, berger1992development, berger2015}. The reference prior is obtained by maximizing the expected Kullback-Leibler (KL) divergence between the posterior and the prior, assuming some regularity conditions. Maximizing the expected information that the data provide about the parameters allows the data to have the maximum influence on the posterior distribution.
The reference priors have essential properties such as consistent sampling, consistent marginalization, and one-to-one transformation invariance \cite{bernardo2005}. The reference priors may depend on the order of the parameters of interest. Hence, for the gamma distribution, we have two distinct priors, which are presented below. \subsubsection{Reference prior when $\beta$ is the parameter of interest} The reference prior when $\beta$ is the parameter of interest and $\alpha$ is the nuisance parameter is given by \begin{equation}\label{priorgmr2f} \pi_2\left(\alpha,\beta\right)\propto \frac{\sqrt{\psi'(\alpha)}}{\beta}. \end{equation} Thus, using the Jacobian transformation, it follows that the related reference prior is given by \begin{equation}\label{priorgmr2r} \pi_2\left(W,H\right)\propto \sqrt{\psi'(W)}. \end{equation} Finally, the joint posterior distribution for $H$ and $W$, produced by the reference prior (\ref{priorgmr2r}), is given by \begin{equation}\label{postgmr21} \pi_2(W,H|\boldsymbol{x})\propto\delta(W,H)^{nW}\frac{\sqrt{\psi'(W)}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{equation} \begin{theorem}\label{theo35} The posterior density (\ref{postgmr21}) is proper for all $n\geq 2$. \end{theorem} \begin{proof} Making the change of variables $\exp(-H)=u\Leftrightarrow du = - \exp(-H)dH$, denoting $\delta_1(W) = \exp(W + \log(\Gamma(W))+(1-W)\psi(W))$ and proceeding analogously to the proof of Theorem \ref{theo33}, we have \begin{equation*} \begin{aligned} d_2(w)\propto &\int_0^\infty \int_{-\infty}^\infty \pi_2(W,H|\boldsymbol{x})\; dH dW \propto \int_0^1 g_2(W)\,dW + \int_1^\infty g_2(W)\, dW, \end{aligned} \end{equation*} where $g_2(W) =\sqrt{\psi'(W)}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}>0$ for all $W\in (0,\infty)$.
Now, according to \cite{ramos2017bayesian,ramos2018posterior}, we have $\frac{\Gamma(nW)}{\Gamma(W)^n}\underset{W\to 0^+}{\propto}W^{n-1}$ and $\sqrt{\psi'(W)}\underset{W\to 0^+}{\propto} W^{-1}$, and since we proved in Theorem \ref{theo33} that $\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}\underset{W\to 0^+}{\propto}1$, it follows from Proposition \ref{prop32} that \begin{equation*} \int_0^1 g_2(W) dW \propto \int_0^1 W^{-1}\times 1\times W^{n-1}\; dW < \infty. \end{equation*} Moreover, from Abramowitz \cite{abramowitz} we have $\sqrt{\psi'(W)}\underset{W\to \infty}{\propto} W^{-1/2}$, which combined with $\sqrt{W\psi'(W)-1}\underset{W\to \infty}{\propto} W^{-1/2}$ implies that $\sqrt{\psi'(W)}\underset{W\to \infty}{\propto} \sqrt{W\psi'(W)-1}$. Therefore $g_2(W)\underset{W\to \infty}{\propto} g_1(W)$, and by Proposition \ref{prop32} it follows that \begin{equation*} \begin{aligned} \int_1^\infty g_2(W) dW \propto \int_1^\infty g_1(W) \; dW <\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} \begin{theorem}\label{theo36} The posterior mean of $H$ relative to (\ref{postgmr21}) is finite for all $n\geq 2$.
\end{theorem} \begin{proof} Proceeding analogously to the proof of Theorem \ref{theo34}, it follows that \begin{equation*} \begin{aligned} \left|E_2[H|x]\right|&\propto \left|\int_0^\infty \int_{-\infty}^\infty H \pi_2(W,H|\boldsymbol{x})\; dH dW\right| \\ & \leq \int_0^\infty \delta_2(W) \sqrt{\psi'(W)}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n} dW=\int_0^1 h_2(W) dW + \int_1^\infty h_2(W) dW, \end{aligned} \end{equation*} where $\delta_2(W)$ is the same as defined in the proof of Theorem \ref{theo34} and \begin{equation*} \begin{aligned} h_2(W) =\delta_2(W) \sqrt{\psi'(W)}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}. \end{aligned} \end{equation*} Since in the proof of Theorem \ref{theo34} we showed that $\delta_2(W)\underset{W\to 0^+}{\propto} W^{-1}$, combining this with the proportionalities proved in Theorem \ref{theo33} and Proposition \ref{prop32} we have \begin{equation*} \int_0^1 h_2(W) dW \propto \int_0^1 W^{-1}\times W^{-1}\times 1\times W^{n-1}\; dW < \infty. \end{equation*} Finally, from the proof of Theorem \ref{theo35} we know that $\sqrt{\psi'(W)}\underset{W\to \infty}{\propto} \sqrt{W\psi'(W)-1}$, which implies directly that $h_2(W)\underset{W\to \infty}{\propto} h_1(W)$, and thus from Proposition \ref{prop32} it follows that \begin{equation*} \begin{aligned} \int_1^\infty h_2(W) dW \propto \int_1^\infty h_1(W) \; dW <\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} The marginal posterior distribution of $W$ is given by \begin{equation*} \pi_2(W|\boldsymbol{x})\propto\sqrt{\psi'(W)}\frac{\Gamma(nW)}{\Gamma(W)^n} \left(\frac{\sqrt[n]{\prod_{i=1}^n{x_i}}}{ \sum_{i=1}^n x_i}\right)^{nW}. \end{equation*} Moreover, the conditional posterior distribution of $H$ is given by \begin{equation*} \begin{aligned} \pi_2(H|W,\boldsymbol{x})\propto\exp\left\{-nWH-\delta(W,H)\sum_{i=1}^n x_i\right\}.
\end{aligned} \end{equation*} \subsubsection{Reference prior when $\alpha$ is the parameter of interest} The reference prior when $\alpha$ is the parameter of interest and $\beta$ is the nuisance parameter is given by \begin{equation}\label{priorgmr1f} \pi_3\left(\alpha,\beta\right)\propto \frac{1}{\beta}\sqrt{\frac{\alpha\psi'(\alpha)-1}{\alpha}}. \end{equation} Therefore, in terms of the reparametrized model, the reference prior when $W$ is the parameter of interest and $H$ is the nuisance parameter is given by \begin{equation}\label{priorgmr1r} \pi_3\left(W,H\right)\propto \sqrt{\frac{W\psi'(W)-1}{W}}. \end{equation} Finally, the joint posterior distribution for $H$ and $W$, produced by the reference prior (\ref{priorgmr1r}), is given by \begin{equation}\label{postref1} \pi_3(W,H|\boldsymbol{x})\propto\sqrt{\frac{W\psi'(W)-1}{W}}\frac{\delta(W,H)^{nW}}{\Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{equation} \begin{theorem}\label{theo37} The posterior density (\ref{postref1}) is proper for all $n\geq 2$. \end{theorem} \begin{proof} Since $\sqrt{\frac{W\psi'(W)-1}{W}}=\sqrt{\psi'(W)-\frac{1}{W}}\leq \sqrt{\psi'(W)}$, it follows that $\pi_3(W,H)\leq \pi_2(W,H)$ for all $W\in (0,\infty)$ and $H\in (-\infty,\infty)$, and thus Theorem \ref{theo37} follows directly from Theorem \ref{theo35}. \end{proof} \begin{theorem}\label{theo38} The posterior mean of $H$ relative to (\ref{postref1}) is finite for all $n\geq 2$.
\end{theorem} \begin{proof} Since $\sqrt{\frac{W\psi'(W)-1}{W}}=\sqrt{\psi'(W)-\frac{1}{W}}\leq \sqrt{\psi'(W)}$, it follows that $|H\pi_3(W,H)|=|H|\pi_3(W,H)\leq |H|\pi_2(W,H)= |H\pi_2(W,H)|$ for all $W\in (0,\infty)$ and $H\in (-\infty,\infty)$, and thus Theorem \ref{theo38} follows directly from Theorem \ref{theo36}. \end{proof} The marginal posterior distribution of $W$ is given by \begin{equation*} \pi_3(W|\boldsymbol{x})\propto\sqrt{\frac{W\psi'(W)-1}{W}}\frac{\Gamma(nW)}{\Gamma(W)^n} \left(\frac{\sqrt[n]{\prod_{i=1}^n{x_i}}}{ \sum_{i=1}^n x_i}\right)^{nW}. \end{equation*} Moreover, the conditional posterior distribution of $H$ is given by \begin{equation*} \begin{aligned} \pi_3(H|W,\boldsymbol{x})\propto\exp\left\{-nWH-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{aligned} \end{equation*} \subsection{Matching priors} Tibshirani \cite{tibshirani1989} considered a different method to obtain a class of non-informative prior distributions for a parameter of interest in the presence of nuisance parameters. Letting $\pi(\theta_1, \theta_2)$ be a prior distribution for the parameter of interest $\theta_1$ and the nuisance parameter $\theta_2$, the proposed approach requires that the credible intervals of the resulting posterior distribution of $\theta_1$ have frequentist coverage accurate to order $O(n^{-1})$, that is, it requires that \begin{equation}\label{matchingp} P\left[\theta_1\leq\theta_1^{1-\alpha}(\pi;X)|(\theta_1,\theta_2)\right]=1-\alpha-O(n^{-1}), \end{equation} where $\theta_1^{1-\alpha}(\pi;X)$ denotes the $(1-\alpha)$th quantile of the posterior distribution of $\theta_1$. The priors that satisfy (\ref{matchingp}) up to $O(n^{-1})$ are known as matching priors. Under parametric orthogonality, Mukerjee \& Dey \cite{mukerjee1993frequentist} discussed necessary and sufficient conditions for a class of Tibshirani priors to be a matching prior up to $o(n^{-1})$.
Sun and Ye \cite{sun1996frequentist} derived Berger and Bernardo's \cite{berger1989estimating} forward and backward reference priors for a two-parameter exponential family, and further showed that the reference priors are special cases of the matching priors. For the gamma distribution, they showed that the reference prior (\ref{priorgmr1f}) is a matching prior when $\alpha$ is the interest parameter and $\beta$ is the nuisance parameter, and proved that there exists no matching prior up to order $o(n^{-1})$ in this case. Moreover, they showed that the reference prior (\ref{priorgmr2f}) is a matching prior of order $O(n^{-1})$ when $\beta$ is the interest parameter and $\alpha$ is the nuisance parameter, and proved that there exists a matching prior up to order $o(n^{-1})$ in this case. This matching prior is given by \begin{equation}\label{priorgmtb1f} \pi_{4}\left(\alpha,\beta\right)\propto \frac{\alpha\psi'(\alpha)-1}{\beta\sqrt{\alpha}}. \end{equation} Thus, the reparametrized version of the proposed matching prior is given by \begin{equation}\label{priorgmtb2f} \pi_4\left(W,H\right)\propto \frac{W\psi'(W)-1}{\sqrt{W}}. \end{equation} Finally, the joint posterior distribution for $H$ and $W$, produced by the matching prior (\ref{priorgmtb2f}), is given by \begin{equation}\label{postgmr22} \pi_4(W,H|\boldsymbol{x})\propto\delta(W,H)^{nW}\frac{(W\psi'(W)-1)}{\sqrt{W}\, \Gamma(W)^n}\left\{\prod_{i=1}^n{x_i^W}\right\}\exp\left\{-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{equation} \begin{theorem} The posterior density (\ref{postgmr22}) is proper for all $n\geq 2$.
\end{theorem} \begin{proof} Making the change of variables $\exp(-H)=u\Leftrightarrow du = - \exp(-H)dH$, denoting $\delta_1(W) = \exp(W + \log(\Gamma(W))+(1-W)\psi(W))$ and proceeding analogously to the proof of Theorem \ref{theo33}, we have \begin{equation*} \begin{aligned} d_4(w)\propto &\int_0^\infty \int_{-\infty}^\infty \pi_{4}(W,H|\boldsymbol{x})\; dH dW \propto \int_0^1 g_{4}(W)\,dW + \int_1^\infty g_{4}(W)\, dW, \end{aligned} \end{equation*} where $g_{4}(W) =\frac{(W\psi'(W)-1)}{\sqrt{W}}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}>0$ for all $W\in (0,\infty)$. Now, according to \cite{ramos2017bayesian,ramos2018posterior}, we have $\frac{\Gamma(nW)}{\Gamma(W)^n}\underset{W\to 0^+}{\propto}W^{n-1}$ and $\sqrt{W\psi'(W)-1}\underset{W\to 0^+}{\propto} W^{-1/2}$, which implies in particular that $\frac{(W\psi'(W)-1)}{\sqrt{W}}\underset{W\to 0^+}{\propto} W^{-3/2}$, and since we already proved in Theorem \ref{theo33} that $ \frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}\underset{W\to 0^+}{\propto}1$, it follows by Proposition \ref{prop32} that \begin{equation*} \int_0^1 g_{4}(W) dW \propto \int_0^1 W^{-3/2}\times 1\times W^{n-1}\; dW < \infty.
\end{equation*} Moreover, due to \cite{ramos2017bayesian,ramos2018posterior} we have $\frac{\Gamma(nW)}{\Gamma(W)^n}\underset{W\to \infty}{\propto} n^{nW}W^{(n-1)/2}$ and $\sqrt{W\psi'(W)-1}\underset{W\to \infty}{\propto} W^{-1/2}$, which implies in particular that $\frac{W\psi'(W)-1}{\sqrt{W}}\underset{W\to \infty}{\propto} W^{-3/2}$, and since we already proved in Theorem \ref{theo33} that $\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}}=\exp\left(-nqW\right)n^{-nW}$, where $q=\log\left(\frac{\frac{1}{n} \sum_{i=1}^n x_i}{\sqrt[n]{\prod_{i=1}^n{x_i}}}\right)>0$, by Proposition \ref{prop32}, and using that $W^{(n/2-1)-1}\leq W^{(n/2)-1}$ for $W\geq 1$, it follows that \begin{equation*} \begin{aligned} \int_1^\infty g_{4}(W) dW & \propto \int_1^\infty W^{-3/2}\times \exp\left(-nqW\right)n^{-nW}\times n^{nW}W^{(n-1)/2} dW\\ & = \int_1^\infty W^{(n/2-1)-1}\exp\left(-nqW\right) \; dW \leq \int_0^\infty W^{(n/2)-1}\exp\left(-nqW\right) \; dW =\frac{\Gamma(n/2)}{(nq)^{n/2}}<\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} \begin{theorem}\label{theo310} The posterior mean of $H$ relative to (\ref{postgmr22}) is finite for all $n\geq 2$. \end{theorem} \begin{proof} Proceeding analogously to the proof of Theorem \ref{theo34}, it follows that \begin{equation*} \begin{aligned} \left|E_4[H|x]\right|&\propto \left|\int_0^\infty \int_{-\infty}^\infty H \pi_4(W,H|\boldsymbol{x})\; dH dW\right| \\ & \leq \int_0^\infty \delta_2(W) \frac{(W\psi'(W)-1)}{\sqrt{W}}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n} dW=\int_0^1 h_4(W) dW + \int_1^\infty h_4(W) dW, \end{aligned} \end{equation*} where $\delta_2(W)$ is given as in the proof of Theorem \ref{theo34} and \begin{equation*} \begin{aligned} h_4(W)=\delta_2(W) \frac{(W\psi'(W)-1)}{\sqrt{W}}\frac{\left\{\prod_{i=1}^n{x_i^W}\right\}}{\left( \sum_{i=1}^n x_i\right)^{nW}} \frac{\Gamma(nW)}{\Gamma(W)^n}.
\end{aligned} \end{equation*} Since in the proof of Theorem \ref{theo34} we showed that $\delta_2(W)\underset{W\to 0^+}{\propto} W^{-1}$, combining this with the proportionalities proved in Theorem \ref{theo33} and Proposition \ref{prop32} we have \begin{equation*} \int_0^1 h_4(W) dW \propto \int_0^1 W^{-1}\times W^{-3/2}\times 1\times W^{n-1}\; dW < \infty. \end{equation*} Moreover, letting $q>0$ be as in the proof of Theorem \ref{theo33}, since we proved in the proof of Theorem \ref{theo34} that $\delta_2(W)\underset{W\to \infty}{\propto} W(\log(W)+1)$ and since $\log(W)+1\leq \exp(\log(W))=W$ for $W\geq 1$, it follows from Proposition \ref{prop32} that \begin{equation*} \begin{aligned} \int_1^\infty h_4(W) dW & \propto \int_1^\infty W(\log(W)+1)\times W^{-3/2}\times \exp\left(-nqW\right)n^{-nW}\times n^{nW}W^{(n-1)/2} dW\\ & \leq \int_1^\infty W^{(n/2+1)-1}\exp\left(-nqW\right) \; dW \leq\frac{\Gamma(n/2+1)}{(nq)^{n/2+1}}<\infty, \end{aligned} \end{equation*} which concludes the proof. \end{proof} The marginal posterior distribution of $W$ is given by \begin{equation*} \pi_4(W|\boldsymbol{x})\propto\frac{W\psi'(W)-1}{W^{\frac{1}{2}}}\frac{\Gamma(nW)}{\Gamma(W)^n} \left(\frac{\sqrt[n]{\prod_{i=1}^n{x_i}}}{ \sum_{i=1}^n x_i}\right)^{nW}. \end{equation*} Moreover, the conditional posterior distribution of $H$ is given by \begin{equation*} \begin{aligned} \pi_4(H|W,\boldsymbol{x})\propto\exp\left\{-nWH-\delta(W,H)\sum_{i=1}^n x_i\right\}. \end{aligned} \end{equation*} \section{Simulation Study}\label{simulations} A Monte Carlo simulation study is conducted to quantify and compare the impact of the different non-informative priors on the posterior distribution of the entropy measure. The Bias and Mean Square Error (MSE) were used to identify the prior that provides posterior estimates closest to the true value.
These metrics are given by \begin{equation} \operatorname{Bias}_{H}=\frac{1}{N}\sum_{i=1}^{N}(\hat{H}_{i}-H) \ \ \mbox{ and } \ \ \operatorname{MSE}_H=\frac{1}{N}\sum_{i=1}^{N}(\hat{H}_{i}-H)^2, \end{equation} where $N=10,000$ is the number of samples used to estimate the MLE and the posterior quantities of interest. Here, we used the posterior mean as the Bayes estimate due to its good properties. The estimates of $W$ are not presented since $W$ was considered only as an auxiliary parameter to conduct the Jacobian transformation, and therefore we are not interested in its estimates. In addition to the Bias and MSE, the coverage probabilities ($CP$) were also computed. Such metrics were obtained from the Bayesian credibility intervals (CI) and the asymptotic confidence intervals of $H$. The nominal level assumed was 0.95, i.e., we expect an adequate procedure for computing the confidence/credibility intervals to return coverage probabilities close to 0.95. Regarding the Bias and MSE, the best approach among the selected ones should return Bias and MSE closest to zero. The Newton-Raphson iterative method was used to maximize the likelihood in order to obtain the MLE. For a fair comparison, the initial values used to start the iterative procedures were the same values used to generate the samples. In real applications, there is a need to set initial values. To this end, we can use the closed-form maximum a posteriori estimator derived by Louzada and Ramos \cite{louzada2018efficient}, given by \begin{equation}\label{ini01} \tilde\alpha=\left(\frac{n-2.9}{n}\right)\cfrac{n\sum_{i=1}^n t_i}{\left(n\sum_{i=1}^{n}t_i \log\left(t_i\right) - \sum_{i=1}^n t_i \sum_{i=1}^n \log\left(t_i\right) \right)} \end{equation} and \begin{equation}\label{ini02} \tilde\beta=\frac{1}{n^2}\left(n\sum_{i=1}^{n}t_i \log\left(t_i\right) - \sum_{i=1}^n t_i \sum_{i=1}^n \log\left(t_i\right) \right).
\end{equation} Therefore, the initial values for $H$ and $W$ are computed from $\tilde{H}=\tilde{\alpha}-\log(\tilde{\beta})+\log\Gamma(\tilde{\alpha})+(1-\tilde{\alpha})\psi(\tilde{\alpha})$ and $\tilde{W}=\tilde\alpha$. In the Bayesian framework, the marginal densities of the posterior distribution involve double integrals to obtain the normalizing constants. Therefore, the MCMC approach was adopted to obtain the posterior estimates. Moreover, the Metropolis-Hastings algorithm was adopted to simulate the quantities of interest from the posterior densities. For each simulated dataset, the first 500 samples were discarded as burn-in, and 5,000 further iterations were conducted. A thinning parameter of 5 was considered to avoid significant autocorrelation among the samples, returning, at the end, 1,000 simulated values for each marginal distribution. The Geweke diagnostic \cite{geweke1992evaluating} was used to confirm the convergence of the chains under a confidence level of 95\%. The generated samples were used to estimate the posterior mean and the credibility intervals, resulting in 10,000 estimates for $H$ and $W$. The R software (R Core Development Team) was used for the simulation, and the codes can be obtained on request from the corresponding author. For $n=20,\ldots,120$, only the results for $(\alpha,\beta)=(4,2)$ and $(\alpha,\beta)=(2,0.5)$ are presented, which lead respectively to $H=1.33$ and $H=2.27$. However, the results were similar for other choices of $\alpha$ and $\beta$ and therefore are not presented here. For each sample from the posterior distribution, the posterior mean and the credible intervals were evaluated for $\alpha$, $\beta$, and $H$.
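The sampling scheme just described can be sketched in a few lines. The following is our own illustrative Python implementation, not the authors' R code from Appendix A; the function names (`log_marginal_W`, `sample_H_given_W`, `mh_entropy`) and the proposal scale are our choices. It runs a random-walk Metropolis step for $W$ on its marginal posterior under the matching prior and then draws $H$ exactly, since, as in the proofs above, with $u=\exp(-H)$ the conditional $\pi_4(H|W,\boldsymbol{x})$ corresponds to $u\sim\mathrm{Gamma}\left(nW,\ \delta_1(W)\sum_{i=1}^n x_i\right)$ in the shape/rate parametrization.

```python
import numpy as np
from scipy.special import gammaln, psi, polygamma

def log_marginal_W(W, x):
    # Unnormalized log marginal posterior of W under the matching prior:
    # pi_4(W|x) proportional to (W psi'(W) - 1)/sqrt(W) * Gamma(nW)/Gamma(W)^n
    # * (geometric mean of x / sum of x)^(nW).
    n = len(x)
    return (np.log(W * polygamma(1, W) - 1.0) - 0.5 * np.log(W)
            + gammaln(n * W) - n * gammaln(W)
            + n * W * (np.mean(np.log(x)) - np.log(np.sum(x))))

def sample_H_given_W(W, x, rng):
    # With u = exp(-H), pi(H|W,x) corresponds to u ~ Gamma(nW, rate = delta1(W) sum(x)),
    # where delta1(W) = exp(W + log Gamma(W) + (1 - W) psi(W)); hence H = -log(u).
    n = len(x)
    delta1 = np.exp(W + gammaln(W) + (1.0 - W) * psi(W))
    u = rng.gamma(shape=n * W, scale=1.0 / (delta1 * np.sum(x)))
    return -np.log(u)

def mh_entropy(x, n_iter=5000, burn=500, thin=5, w0=1.0, step=0.2, seed=1):
    # Random-walk Metropolis on log(W) plus an exact draw of H given W,
    # mirroring the burn-in (500), iterations (5,000) and thinning (5) above.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    W, draws = w0, []
    for i in range(burn + n_iter):
        prop = W * np.exp(step * rng.normal())   # multiplicative proposal keeps W > 0
        log_acc = (log_marginal_W(prop, x) - log_marginal_W(W, x)
                   + np.log(prop) - np.log(W))   # Jacobian of the log-scale walk
        if np.log(rng.uniform()) < log_acc:
            W = prop
        if i >= burn and (i - burn) % thin == 0:
            draws.append(sample_H_given_W(W, x, rng))
    return np.array(draws)
```

For data simulated from a gamma distribution with $(\alpha,\beta)=(4,2)$, the posterior mean of the returned draws should be close to the true entropy $H=1.33$.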
\begin{table}[!h] \centering \caption{The Bias(MSE) from the estimates of $H$ considering different values of $n$ with $N=10,000$ simulated samples, using the estimation methods: 1 - MLE, 2 - Jeffreys's rule, 3 - Reference 1 prior, 4 - Reference 2 prior, and 5 - Tibshirani Prior.} \begin{tabular}{c|c|c|c|c|c|c} \hline $\boldsymbol{\theta}$ & n & MLE & Jeffreys's & Reference 1 & Reference 2 & Tibshirani \\ \hline & 20 & 0.0509(0.0374) & 0.0374(0.0355) & 0.0359(0.0354) & 0.0251(0.0339) & 0.0114(0.0326) \\ & 30 & 0.0329(0.0234) & 0.0233(0.0225) & 0.0221(0.0224) & 0.0148(0.0218) & 0.0056(0.0213) \\ & 40 & 0.0260(0.0170) & 0.0185(0.0165) & 0.0176(0.0164) & 0.0121(0.0161) & 0.0052(0.0158) \\ & 50 & 0.0215(0.0139) & 0.0153(0.0136) & 0.0147(0.0135) & 0.0102(0.0133) & 0.0047(0.0131) \\ & 60 & 0.0154(0.0113) & 0.0101(0.0111) & 0.0097(0.0111) & 0.0059(0.0109) & 0.0014(0.0108) \\ $H = 1.33$ & 70 & 0.0171(0.0099) & 0.0125(0.0097) & 0.0121(0.0097) & 0.0090(0.0095) & 0.0049(0.0094) \\ & 80 & 0.0118(0.0083) & 0.0078(0.0081) & 0.0074(0.0081) & 0.0047(0.0081) & 0.0012(0.0080) \\ & 90 & 0.0107(0.0073) & 0.0073(0.0072) & 0.0068(0.0072) & 0.0044(0.0071) & 0.0012(0.0071) \\ & 100 & 0.0090(0.0067) & 0.0058(0.0067) & 0.0054(0.0066) & 0.0032(0.0066) & 0.0005(0.0066) \\ & 110 & 0.0085(0.0061) & 0.0056(0.0061) & 0.0053(0.0061) & 0.0033(0.0060) & 0.0007(0.0060) \\ & 120 & 0.0084(0.0055) & 0.0056(0.0055) & 0.0054(0.0055) & 0.0036(0.0054) & 0.0012(0.0054) \\ \hline & 20 & 0.0431(0.0290) & 0.0221(0.0272) & 0.0208(0.0271) & 0.0026(0.0261) & 0.0191(0.0261) \\ & 30 & 0.0353(0.0204) & 0.0205(0.0193) & 0.0199(0.0194) & 0.0077(0.0187) & 0.0065(0.0185) \\ & 40 & 0.0254(0.0152) & 0.0140(0.0147) & 0.0135(0.0146) & 0.0044(0.0143) & 0.0061(0.0142) \\ & 50 & 0.0220(0.0121) & 0.0127(0.0117) & 0.0122(0.0117) & 0.0050(0.0115) & 0.0033(0.0114) \\ & 60 & 0.0157(0.0097) & 0.0078(0.0095) & 0.0075(0.0095) & 0.0015(0.0093) & 0.0054(0.0093) \\ $H = 2.27$ & 70 & 0.0141(0.0083) & 0.0073(0.0081) & 0.0070(0.0081) &
0.0019(0.0080) & 0.0040(0.0079) \\ & 80 & 0.0143(0.0072) & 0.0083(0.0070) & 0.0081(0.0070) & 0.0036(0.0069) & 0.0016(0.0069) \\ & 90 & 0.0117(0.0065) & 0.0064(0.0064) & 0.0061(0.0064) & 0.0022(0.0063) & 0.0024(0.0063) \\ & 100 & 0.0110(0.0056) & 0.0062(0.0055) & 0.0060(0.0055) & 0.0024(0.0055) & 0.0017(0.0055) \\ & 110 & 0.0079(0.0053) & 0.0035(0.0052) & 0.0033(0.0052) & 0.0000(0.0052) & 0.0037(0.0052) \\ & 120 & 0.0072(0.0048) & 0.0031(0.0047) & 0.0030(0.0047) & 0.0000(0.0047) & 0.0035(0.0047) \\ \hline \end{tabular} \label{tableres1} \end{table} \begin{table*}[!h] \centering \caption{The $CP_{95\%}$ from the estimates of $H$ considering different values of $n$ with $N=10,000$ simulated samples, using the estimation methods: 1 - MLE, 2 - Jeffreys's rule, 3 - Reference 1 prior, 4 - Reference 2 prior, and 5 - Tibshirani Prior.} \begin{tabular}{c|c|c|c|c|c|c} \hline $\boldsymbol{\theta}$ & n & MLE & Jeffreys & Ref $W$ & Ref $H$ & Tibshirani \\ \hline \multirow{11}{*}{$H = 1.33$ } & 20 & 0.923 & 0.932 & 0.936 & 0.943 & 0.951 \\ & 30 & 0.937 & 0.940 & 0.942 & 0.947 & 0.953 \\ & 40 & 0.941 & 0.946 & 0.946 & 0.952 & 0.955 \\ & 50 & 0.936 & 0.942 & 0.943 & 0.945 & 0.950 \\ & 60 & 0.941 & 0.946 & 0.946 & 0.948 & 0.951 \\ & 70 & 0.937 & 0.945 & 0.942 & 0.946 & 0.946 \\ & 80 & 0.943 & 0.948 & 0.948 & 0.949 & 0.950 \\ & 90 & 0.946 & 0.952 & 0.952 & 0.954 & 0.955 \\ & 100 & 0.945 & 0.950 & 0.948 & 0.950 & 0.951 \\ & 110 & 0.940 & 0.945 & 0.945 & 0.946 & 0.948 \\ & 120 & 0.949 & 0.953 & 0.954 & 0.954 & 0.956 \\ \hline \multirow{11}{*}{$H = 2.27$} & 20 & 0.946 & 0.951 & 0.952 & 0.956 & 0.957 \\ & 30 & 0.938 & 0.945 & 0.946 & 0.951 & 0.953 \\ & 40 & 0.937 & 0.941 & 0.941 & 0.946 & 0.947 \\ & 50 & 0.939 & 0.945 & 0.946 & 0.948 & 0.949 \\ & 60 & 0.941 & 0.951 & 0.951 & 0.951 & 0.954 \\ & 70 & 0.943 & 0.954 & 0.956 & 0.955 & 0.957 \\ & 80 & 0.944 & 0.956 & 0.957 & 0.959 & 0.959 \\ & 90 & 0.945 & 0.954 & 0.956 & 0.956 & 0.956 \\ & 100 & 0.946 & 0.960 & 0.960 & 0.962
& 0.963 \\ & 110 & 0.943 & 0.956 & 0.956 & 0.956 & 0.957 \\ & 120 & 0.948 & 0.960 & 0.960 & 0.960 & 0.961 \\ \hline \end{tabular} \label{tableres2} \end{table*} Tables \ref{tableres1} and \ref{tableres2} present the Bias, MSE and $CP_{95\%}$ for the MLE and the Bayesian estimators of the entropy measure $H$. In particular, the results revealed that: \begin{enumerate} \item For all the estimators, the Bias and MSE approach zero for large $n$, i.e., the estimators are asymptotically unbiased, and the MSE decreases as the sample size increases. \item The posterior means using reference priors 1 and 2 were superior to the posterior means using the Jeffreys prior and to the MLE, and the posterior mean using reference prior 2 was consistently superior to the one using reference prior 1. This performance is corroborated by the coverage probabilities of the intervals. Additionally, the coverage probability was high for all the estimators and improves as the sample size increases. \item For all estimators, the largest drop in Bias and MSE was observed when the sample size increased from $20$ to $30$. \item Overall, the results show that the MLE performed worst, given its high Bias and MSE. On the other hand, the matching prior provided Bayes estimates with the smallest Bias and MSE, and was therefore considered the most adequate prior for estimating $H$. \end{enumerate} According to the simulation results, the posterior distribution associated with the matching prior leads to the most precise results, with the least Bias and MSE. This prior outperforms the other objective priors and the ML estimates considered in this study, and therefore should be chosen as the most appropriate prior for inference.
Besides, the posterior estimates obtained from the matching prior have superior theoretical properties, such as invariance under one-to-one parameter transformations, consistent sampling, and consistency under marginalization. Therefore, we conclude that the posterior estimates derived from the matching prior distribution are more appropriate and superior for making inferences about the gamma distribution's population parameters. To conduct the Bayesian analysis with the proposed Bayes estimator, we have presented a function in R that can be used for this purpose, whose details can be seen in Appendix A. \section{Application}\label{application} \subsection{Achaemenid dynasty} In this section, the proposed model is applied to the rule times of the Achaemenid dynasty to quantify the variability in the Persian Empire's political institutions. The Achaemenid dynasty was the royal house of the ancient Persians who ruled over the Persian kingdom, with authority passed to descendants of the same bloodline. The Persian Empire was built and expanded through military conquest to extend political control to a broader territory and people. They set out for wars to enlarge territories, resulting in an imbalance in the political institutions. Despite the governmental strategic techniques introduced by Cyrus the Great, the Persian dynasty suffered several recurring internal political conflicts, assassinations, and wars from internal and external entities, which shaped the political institutions over the years. Consequently, the conflicts influenced each emperor's tenure duration and the pattern of a new emperor's ascendancy. A stable government often has a long time interval between successive emperors, whereas an unstable government has a short tenure period. The induced patterns can be accounted for using statistical tools such as entropy. The more frequently new emperors ascend the throne, the higher the entropy and the more unstable the government.
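The entropy values reported in the applications below are Shannon entropies of a fitted gamma model, following the closed form used for $\tilde{H}$ in the simulation section. A minimal Python helper (our own sketch, assuming the shape/rate parametrization) computes this quantity:

```python
import numpy as np
from scipy.special import gammaln, psi

def gamma_entropy(alpha, beta):
    # Shannon entropy of a Gamma(alpha, beta) distribution (shape alpha, rate beta):
    # H = alpha - log(beta) + log Gamma(alpha) + (1 - alpha) * psi(alpha)
    return alpha - np.log(beta) + gammaln(alpha) + (1.0 - alpha) * psi(alpha)
```

For instance, `gamma_entropy(4, 2)` is approximately $1.33$ and `gamma_entropy(2, 0.5)` approximately $2.27$, matching the values used in the simulation study.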
For the sake of comparison, the proposed model was also applied to the Roman Empire timeline data, which was analyzed by Ramos et al. \cite{Ramos2020Power}. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{high2.jpg} \caption{Timeline (BC) containing the data, time series plot of the posterior distribution of the entropy and autocorrelation plot for the same distribution.}\label{fsimulation1} \end{figure} Figure \ref{fsimulation1} presents the timeline of the Achaemenid dynasty (top panel), the time series of the Bayesian estimate of the entropy $H$ (bottom left panel), and the autocorrelation plot (bottom right panel). The time series and the autocorrelation plot indicate the convergence of the chain, which was also confirmed by the Geweke test \cite{geweke1992evaluating}. It is worth mentioning that the Kolmogorov-Smirnov (KS) test was used (statistic $D = 0.21$) to assess whether the data can be assumed to follow a gamma distribution. Using the posterior distribution obtained from the matching prior, the Bayes estimate of the Achaemenid dynasty's entropy is $4.13$ with a $95\%$ credible interval of $(3.55;4.73)$. Moreover, with the same prior, the posterior estimate for the Roman Empire is $3.08$ with a $95\%$ credible interval of $(2.80;3.36)$. Although the estimated entropy obtained for the Roman Empire is high, the Achaemenid dynasty had a higher entropy, which implies that the Achaemenid dynasty's political institutions were more volatile than those of the Roman Empire. That is, the times between successive emperors are shorter and more irregular for the Achaemenid dynasty, which signifies instability in its political institutions. These results support the historians' claim that the Achaemenid Empire set out for wars and consequently was exposed to internal and external insurgencies, resulting in instability and high entropy.
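The Kolmogorov-Smirnov check reported above can be reproduced along the following lines. This is our own sketch on simulated stand-in data, since the dynasty rule times are not listed here; in practice, the observed times would replace `data`:

```python
import numpy as np
from scipy import stats

# Fit a gamma distribution by maximum likelihood (location fixed at zero)
# and compute the Kolmogorov-Smirnov statistic D for the fitted model.
rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=3.0, size=30)  # stand-in for the observed times

a_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)
D, p_value = stats.kstest(data, "gamma", args=(a_hat, loc_hat, scale_hat))
```

A small $D$ (equivalently, a large $p$-value) indicates no evidence against the gamma assumption; the authors report $D=0.21$ for the dynasty data and $D=0.12$ for the failure-time data of the next subsection.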
\subsection{Sugarcane harvester machine} Sugarcane farming is pertinent to Brazil's economic growth and has heavily contributed to its Gross Domestic Product (GDP). The production process involves an automated harvesting mechanism, and the interest of the sugarcane farmers is to sustain the harvesting machinery for an extended period. Moreover, the production chain must be kept in stable conditions to avoid fluctuations in production and to prevent wastage due to the deterioration of sugarcane. In this application, the entropy of the harvester machine failure times is modeled in a Bayesian framework, using the Tibshirani (matching) prior, to quantify the irregularity of the production process. A high entropy indicates a severe irregularity in the production process; otherwise, the production process is steady. The considered data were collected from January 2015 to August 2017, which corresponds to 2.5 harvests, and describe twenty-one failure times (in days) of the suspension system of a sugarcane harvester machine: 11, 19, 36, 4, 8, 11, 39, 74, 168, 27, 116, 3, 34, 1, 46, 12, 2, 56, 14, 52, 14. Figure \ref{fsimulation2} presents the time series of the Bayesian estimate of the entropy (left panel) and the autocorrelation plot (right panel). The convergence of the chain was tested using the Geweke test, and the KS test was used (statistic $D = 0.12$) to assess whether the data can be assumed to follow a gamma distribution. \begin{figure}[!h] \centering \includegraphics[scale=0.52]{application2.pdf} \caption{Time series plot of the posterior distribution of the entropy and autocorrelation plot for the same distribution.}\label{fsimulation2} \end{figure} The Bayes estimate of the entropy for the failure times of the sugarcane harvester machine is $4.55$ with a $95\%$ credible interval of $(4.04; 5.09)$. The estimated entropy is significantly high, which implies instability in the production process.
Moreover, the process is disrupted at irregular time intervals, which makes the harvest output unpredictable and unreliable. The harvesting machine must regularly pass a thorough maintenance check within the harvester's life cycle to keep a steady production flow. \section{Final Remarks}\label{conclusions} The entropy concept was first considered in statistical thermodynamics and was later extended to other fields. In information theory, the Shannon entropy measures the uncertainty of a random process. In statistical inference, the Shannon entropy is usually estimated by the maximum likelihood approach (MLE). However, under this approach, the results are biased for small samples, and the confidence intervals may not achieve the desired coverage probabilities, as the asymptotic assumptions are not fulfilled. In the present paper, we introduced a fully objective Bayesian analysis to obtain the posterior distribution of the Shannon entropy and overcome this limitation. We considered objective priors, for which the obtained posterior distributions are not overshadowed by prior information. The posterior distributions were derived assuming the Jeffreys prior, reference priors, and matching priors, all invariant under one-to-one transformations. Since the obtained priors are improper, they could lead to improper posteriors, which is undesirable. To address this issue, we proved that the obtained posteriors are proper distributions and thus can be used to conduct the Bayesian analysis. The posterior mean was adopted as the Bayes estimator; since posterior means may fail to exist or be infinite, we also proved that they are finite for any sample size. Hence, four posterior distributions were proposed to conduct inference. An intensive simulation study was conducted in order to compare the Bayesian estimators with the MLE.
The posterior distribution obtained with the matching prior returned better results in terms of bias, mean squared error, and coverage probabilities than the other methods, while the MLE returned the worst results. We analyzed the particular case of the gamma distribution, a model more flexible and general than the exponential distribution that has been applied to describe many real phenomena. Although we considered the particular case of the gamma distribution, our approach is general and can be further extended to any probability distribution function. The proposed Bayes estimator was implemented in the R language, available in the appendix, to estimate the Shannon entropy measure. We applied the implementation to estimate the entropy related to the rule times of the Achaemenid dynasty, which returned a higher value compared with the Roman Empire. This shows that changes in the throne in the Achaemenid dynasty were less predictable than those in the Roman Empire, and indicates a significant instability in its political institutions, which probably contributed to its fall. Further, we analyzed the times until failure of the suspension of the sugarcane harvester machine, whose entropy was estimated using the Bayesian approach. There are many possible extensions of the current work. Other distributions can be considered in the same context, and the Bayes estimator of the Shannon entropy can be derived. Different types of entropy measures, such as the Hartley, Rényi, and Tsallis entropies, could also be estimated under a Bayesian approach. We plan to explore this line of research in the future. \section*{Acknowledgements} Eduardo Ramos acknowledges financial support from S\~ao Paulo State Research Foundation (FAPESP Proc. 2019/27636-9). Pedro L. Ramos acknowledges support from S\~ao Paulo State Research Foundation (FAPESP Proc. 2017/25971-0). Francisco Rodrigues acknowledges financial support from CNPq (grant number 309266/2019-0).
Francisco Louzada is supported by the Brazilian agencies CNPq (grant number 301976/2017-1) and FAPESP (grant number 2013/07375-0). \bibliographystyle{plain}
\section{Introduction} In probability theory~\cite{Feller}, full counting statistics are generating functions for the cumulants of a random variable. After the recent groundbreaking advances with cold atom experiments~\cite{exprev, exp1, exp2, exp3}, their calculation has become of increasing relevance in one-dimensional quantum many-body systems. For such models, indeed, exact derivations, which are otherwise not possible, can be performed by relying on mathematical tools borrowed from asymptotics of block Toeplitz determinants~\cite{Toeplitz_rev}, random matrices~\cite{Metha} or field theory and integrability~\cite{Kbook, giuseppe_book}. The characterization of quantum fluctuations in one dimension beyond the first few cumulants is nowadays a recurrent theme of research both in equilibrium~\cite{Demler, Lamacraft, Ivanov, Abanov, KMT, NR, SP, AG, Najafi3, Bastianello, Calabrese2} and out of equilibrium~\cite{Eisler1, Eisler2, IT, LDDZ, Collura2, Groha, Collura, Gambassi, Gamayun2020, Gamayun2020-2}. Analytical calculations have been also performed for non-translation invariant systems such as Fermi gases in a trap~\cite{LD}. Through their Fourier transforms, full counting statistics allow accessing the probability distribution of a set of quantum measurements, whose outcomes are inherently random. In a real system at equilibrium, local observables are measurable on a finite interval of length $L$. The theoretical analysis is then focused on the estimation of the asymptotics for large $L$ of their cumulant generating functions. In the large-$L$ limit, full counting statistics depend crucially on the conserved quantities of the whole quantum system and might include universal contributions at a quantum phase transition~\cite{Stephan}. These features are inherited by the correlation functions.
Moreover, similarly to entanglement entropies~\cite{KL, Klich2, Klich3, CMV}, they can distinguish among different universality classes of quantum critical behavior at equilibrium~\cite{Stephan} and far from it~\cite{BD}. This paper is then devoted to a detailed study of full counting statistics in the XY spin chain at zero temperature, complementing the existing literature on the subject~\cite{Demler, Ivanov, Abanov, Franchini}. The XY spin chain is a paradigmatic model of statistical mechanics~\cite{Lieb}. Its phase diagram contains a quantum critical point with $\mathbb Z_2$ symmetry and a critical line where such a discrete group is enhanced to a global $U(1)$ symmetry, which preserves the total magnetization. It is, therefore, an ideal testbed to scrutinize universality conjectures and understand how symmetries can suppress quantum fluctuations. In particular, by exploiting asymptotic expansions for large sizes of block Toeplitz determinants, we will calculate analytically generating functions for the transverse and staggered magnetization and the domain walls. Whenever possible, comments will be made about the existence of universal terms and their comparison with field theory predictions. We will also obtain a universal formula in the scaling limit for the full counting statistics of the transverse magnetization and the domain walls by solving a Painlev\'e V equation~\cite{Gamayun, AV}. The latter result applies to any system close to a quantum critical point within the Ising universality class. Finally, in the large-coupling limit cumulant generating functions are proportional to the expectation value of the projector onto a given spin configuration. The best known example is the so-called emptiness formation probability introduced in a Bethe Ansatz context by~\cite{KI}. Our approach is also suitable to determine analytically variants thereof, such as the formation probabilities considered in~\cite{Najafi2, ARV}. 
In particular, we calculate exactly the probability of formation of ferromagnetic and antiferromagnetic domains in both the $\sigma^z$ and $\sigma^x$ bases in the ground state. The paper is organized as follows: in Sec.~\ref{s_fc}, we will review how generating functions can be obtained from the subsystem reduced density matrix; in Secs.~\ref{tmag},~\ref{smag}, and~\ref{sec:kinks} the formalism will be applied to the transverse magnetization, the staggered magnetization, and the domain walls, respectively. We conclude in Sec.~\ref{conc}. Most of the technical details for the interested reader are relegated to a series of Appendices. \section{Reduced density matrix and Full Counting Statistics} \label{s_fc} \textit{General formalism.---}We start by recalling~\cite{NR} how Full Counting Statistics (FCS) of fermionic quadratic forms on an interval $A$ of the real line can be calculated from the knowledge of the reduced density matrix. Consider then a free fermionic Hamiltonian \begin{equation} \label{hamf} H=\sum_{l,m=1}^N c^{\dagger}_{l}P_{lm}c_m+\frac{1}{2}\sum_{l,m=1}^N(c^{\dagger}_lQ_{lm}c^{\dagger}_m+H.c.), \end{equation} where the matrix $P$ is real and symmetric while $Q$ is real and antisymmetric; the operators $c_{l}$ and $c_{l}^{\dagger}$ are fermionic annihilation and creation operators and satisfy $\{c^{\dagger}_l,c_{m}\}=\delta_{lm}$. The state $|\Omega\rangle$ is defined by $c_{l}|\Omega\rangle=0$, $l=1,\dots,N$. Following Lieb, Schultz and Mattis~\cite{Lieb}, it is convenient to introduce Majorana fermions \begin{equation} a_{l}=c^{\dagger}_l+c_l,~b_l=c^{\dagger}_l-c_l, \end{equation} which obey \begin{equation} \{a_l,b_m\}=0,~\{a_l,a_m\}=-\{b_l,b_m\}=2\delta_{lm}. \end{equation} Suppose now that the correlation matrix $(G_{ba})_{lm}\equiv\langle \Psi| b_l a_m|\Psi\rangle$ is known for a certain quantum state $|\Psi\rangle$.
The reduced density matrix $\rho_A$ of an interval $A$ of length $L$ is then defined implicitly by \begin{equation} \label{redrho} (G_{ba})_{lm}=\mbox{Tr}[\rho_A b_la_m]. \end{equation} The matrix elements of $\rho_A$ could be obtained from those of $G_{ba}$~\cite{P}; however, for our analytical calculations it is more useful to represent $\rho_A$ on the basis of fermionic coherent states. These are eigenstates of the fermionic annihilation operators $c_{i}$ with eigenvalues $\xi_i$, defined as $|\xi\rangle=e^{-\sum_{l}\xi_l c^{\dagger}_l}|\Omega\rangle$, where $\xi^{T}=(\xi_1,\dots,\xi_L)$ is an $L$-dimensional vector of Grassmann numbers. Analogously, we define the dual coherent state $\langle\eta|=\langle\Omega|e^{\sum_l\eta_l^*c_l}$. There is no obvious way to extract the matrix elements of the reduced density matrix in the coherent state basis. However, taking a pragmatic approach, one can postulate an expression for $\rho_A$ that reproduces the correlation matrix $G_{ba}$ through Eq.~\eqref{redrho}. Such an expression is~\cite{Chung} \begin{equation} \label{rep_an} \langle\eta|\rho_A|\xi\rangle=K e^{-\frac{1}{2}(\eta^*-\xi)F(\eta^*+\xi)},~F=\frac{G_{ba}+1}{G_{ba}-1}, \end{equation} with an obvious notation for the matrix inverse. Notice that Eq.~\eqref{rep_an} can be valid only if $(G_{aa})_{lm}\equiv\langle\Psi|a_l a_m|\Psi\rangle=0$ and $(G_{bb})_{lm}\equiv\langle\Psi|b_l b_m|\Psi\rangle=0$. The normalization $K$ is obtained by requiring $\mbox{Tr}[\rho_A]=1$, yielding $K=\det[\frac{1}{2}(1-G_{ba})]$. Given Eq.~\eqref{rep_an}, the validity of Eq.~\eqref{redrho} can then be checked by standard manipulations with coherent state integrals. See for instance Appendix A in~\cite{CSS} for a neat presentation of the relevant formulas, which will not be repeated here. The coherent state representation of the reduced density matrix can then be exploited to obtain determinant representations for the FCS of fermionic quadratic forms.
Let us consider an operator $\mathfrak{O}$ with finite support in the interval $A$ and of the form \begin{equation} \label{loc_o} \mathfrak{O}=\sum_{l,m\in A}c^{\dagger}_{l}M_{lm}c_m+\frac{1}{2}\sum_{l,m\in A}(c^{\dagger}_lN_{lm}c^{\dagger}_m+H.c.)-\frac{1}{2}\text{Tr}[M]. \end{equation} Its FCS for a quantum state $|\Psi\rangle$ is \begin{equation} \label{fcsdef} \chi_{\mathfrak{O}}(\lambda)=\langle\Psi| e^{\lambda \mathfrak{O}}|\Psi\rangle, \end{equation} with $\lambda\in\mathbb R$. Since the support of $\mathfrak{O}$ is the interval $A$, we can also obtain $\chi_{\mathfrak{O}}$ as \begin{equation} \label{fcs_red} \chi_{\mathfrak{O}}(\lambda)=\text{Tr}[\rho_{A}e^{\lambda \mathfrak{O}}]. \end{equation} To derive an explicit expression for the FCS, one first recasts the exponential in Eq.~\eqref{fcsdef} in normal ordered form~\cite{BB} \begin{equation} \label{BB} e^{\lambda \mathfrak{O}}= e^{-\frac{1}{2}\text{Tr}Y} e^{\frac{1}{2}c^{\dagger}_lX_{lm}c^{\dagger}_m}~e^{c^{\dagger}_lY_{lm}c_m}~e^{\frac{1}{2}c_lZ_{lm}c_m}, \end{equation} with suitable matrices $X,Y,Z$ that will be specified later. For our purposes, it turns out that $X=-X^T=-Z$ and $Y=Y^T$. After inserting Eq.~\eqref{BB} into Eq.~\eqref{fcs_red}, the trace can be expanded in the coherent state basis, and its calculation boils down to the evaluation of Gaussian Grassmann integrals. It is then not difficult to show that~\cite{NR} \begin{equation} \label{det_fcs} \chi_{\mathfrak{O}}(\lambda)=\frac{1}{\sqrt{\det e^{Y}}}\det\left[\frac{1-G_{ba}}{2}+\frac{1+G_{ba}}{2}(-X+e^Y)\right], \end{equation} which is our final expression for the FCS. Finally, the matrices $X,Y$ are obtained as follows: given Eq.~\eqref{loc_o}, let \begin{equation} \label{tdef} T=e^{\lambda \begin{bmatrix} M & N\\ -N & -M \end{bmatrix}}\equiv\begin{bmatrix}T_{11}& T_{12} \\ T_{21}& T_{22}\end{bmatrix} \end{equation} then~\cite{BB} $X=T_{12}T_{22}^{-1}$ and $e^{-Y}=T_{22}^T$.
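As a small numerical illustration of Eq.~\eqref{tdef}, the sketch below (pure Python; a truncated Taylor series stands in for the matrix exponential, which is adequate only for the modest norms used here) recovers $X$ and $e^{-Y}$ in the simplest case $M=2I$, $N=0$, where the blocks can be checked against the closed form $X=0$, $e^{-Y}=e^{-2\lambda}I$ used in the next Section. The helper names are ours:

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm(A, terms=40):
    # Truncated Taylor series exp(A) = sum_k A^k / k!
    n = len(A)
    result, term = identity(n), identity(n)
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = mat_add(result, term)
    return result

# Transverse-magnetization data: M = 2*I, N = 0, on n sites.
lam, n = 0.3, 3
A = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    A[i][i] = 2 * lam           # lam * M block
    A[n + i][n + i] = -2 * lam  # -lam * M block
T = expm(A)
T12 = [row[n:] for row in T[:n]]
T22 = [row[n:] for row in T[n:]]
# Here T12 = 0, so X = T12 * T22^{-1} = 0, and e^{-Y} = T22^T = e^{-2 lam} I.
```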
\textit{The XY spin chain.---}The final formula in Eq.~\eqref{det_fcs} is useful to calculate the ground state FCS in the XY spin chain. The latter is defined by the Hamiltonian~\cite{Lieb} \begin{equation} \label{Ham_XY} H_{XY}=-\sum_{l=1}^N\left[\left(\frac{1+\gamma}{2}\right)\sigma_{l}^x\sigma_{l+1}^x+\left(\frac{1-\gamma}{2}\right)\sigma_{l}^y\sigma_{l+1}^y+h\sigma_{l}^z\right], \end{equation} where $\gamma>0$ is the anisotropy, $h$ is called the transverse field and $\sigma_{l}^{\alpha}$ are Pauli matrices. In short we will refer to the eigenvalues of $\sigma_l^z$ as the \textit{transverse} spins, while those of $\sigma_l^x$ will be dubbed the \textit{longitudinal} spins. In the thermodynamic limit $N\rightarrow\infty$ and for $h=\pm 1$, $\gamma\not=0$, the XY chain has a quantum phase transition, belonging to the Ising universality class~\cite{Onsager, LSM} whose order parameter is the longitudinal magnetization $\mathfrak{M}_x=\sum_{l}\sigma_{l}^x$. When $\gamma=0$ and $|h|<1$ instead, it is also critical but its large distance fluctuations are described by a two-dimensional Euclidean free bosonic action compactified on a circle. Eq.~\eqref{Ham_XY} can be mapped to a fermionic quadratic form of the type introduced in Eq.~\eqref{hamf} by a Jordan-Wigner transformation. Our conventions for the Jordan-Wigner mapping are the following \begin{align} \label{JW} c^{\dagger}_{l}=\left(\prod_{j=1}^{l-1}\sigma_{j}^z\right)\sigma_{l}^{+},\\ \sigma_{l}^z=2c^{\dagger}_{l}c_{l}-1. \end{align} Even if the formalism outlined in this Section allows keeping track of the finite-$N$ and finite temperature effects, we will only consider the thermodynamic limit at zero temperature from now on. 
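The Jordan-Wigner strings in Eq.~\eqref{JW} are what make the operators $c_l$ genuinely fermionic. A small self-contained check of the canonical anticommutation relations on a three-site chain (the matrix helpers are ours):

```python
def kron(A, B):
    # Kronecker product of two matrices given as lists of lists
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def anticomm(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

I2 = [[1, 0], [0, 1]]
sz = [[1, 0], [0, -1]]
sp = [[0, 1], [0, 0]]  # sigma^+ = (sigma^x + i sigma^y)/2

def jw_c_dag(l, n):
    """c_l^dagger = (prod_{j<l} sigma_j^z) sigma_l^+ on an n-site chain."""
    ops = [sz] * l + [sp] + [I2] * (n - l - 1)
    out = ops[0]
    for op in ops[1:]:
        out = kron(out, op)
    return out

n = 3
cd = [jw_c_dag(l, n) for l in range(n)]
c = [dagger(op) for op in cd]
# {c_l^dagger, c_m} = delta_{lm} and {c_l, c_m} = 0, as required for fermions.
```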
The ground state correlation matrix $G_{ba}$ in the thermodynamic limit is then~\cite{Lieb} \begin{equation} (G_{ba})_{lm}=\int_{0}^{2\pi}\frac{d\phi}{2\pi}e^{i\phi(l-m)} e^{i\theta(\phi)}, \end{equation} where we have introduced the shorthand notation \begin{equation} \label{thetadef} e^{i\theta(\phi)}\equiv \frac{h-\cos\phi-i\gamma\sin\phi}{\sqrt{(h-\cos\phi)^2+\gamma^2\sin^2\phi}}. \end{equation} \section{Example I: The transverse magnetization} \label{tmag} We start by analyzing the ground state FCS of the transverse magnetization. The results in Secs.~\ref{fcs_tm} and~\ref{p_tm} were already obtained (for $\gamma=1$) in~\cite{Demler, Franchini, Stephan}, but we rederive and supplement them in order to introduce some notation. The determination of the FCS in the scaling limit given in Sec.~\ref{fcs_pv} is instead new. The focus will be on the calculation of the asymptotic behaviour of the FCS when the length of the interval $A$, denoted by $L$, is large. \subsection{Full Counting Statistics} \label{fcs_tm} The transverse magnetization is the operator $\mathfrak{M}_z=\sum_{j\in A}\sigma_{j}^z$, which, according to Eqs.~\eqref{loc_o} and~\eqref{JW}, leads to $M=2I, N=0$ and therefore $X=0$ and $e^{Y}=e^{2\lambda}I$. From the determinant representation in Eq.~\eqref{det_fcs}, one obtains the FCS \begin{equation} \label{fcsmag} \chi_{\mathfrak{M}_z}(\lambda)=\det[\cosh\lambda I+\sinh\lambda~ G_{ba}]. \end{equation} The $L\times L$ matrix inside the determinant is a Toeplitz matrix. One can estimate the large-$L$ behavior of its determinant analytically by exploiting known---and sometimes less known---theorems or conjectures, which are collected in Appendix~\ref{app_asym}.
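Eq.~\eqref{fcsmag} is also convenient numerically: the sketch below (pure Python; the couplings, grid size and interval lengths are arbitrary choices of ours) evaluates the Toeplitz determinant at an off-critical point and compares the growth rate of $\log\chi_{\mathfrak{M}_z}$ in $L$ with the integral of the logarithm of the symbol, anticipating the Szeg\H{o} asymptotics discussed below:

```python
import cmath, math

h, gam, lam = 2.0, 1.0, 0.4  # off-critical point, |h| != 1

def e_itheta(phi):
    # e^{i theta(phi)} of Eq. (thetadef)
    num = h - math.cos(phi) - 1j * gam * math.sin(phi)
    return num / abs(num)

M = 2048
phis = [2 * math.pi * j / M for j in range(M)]
vals = [e_itheta(p) for p in phis]

def G_entry(k):
    # (G_ba)_{lm} with k = l - m, by a discrete Fourier integral
    return sum(cmath.exp(1j * p * k) * v for p, v in zip(phis, vals)).real / M

def log_det(A):
    # log|det A| by Gaussian elimination with partial pivoting
    n = len(A)
    A = [row[:] for row in A]
    s = 0.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        s += math.log(abs(A[i][i]))
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for cc in range(i, n):
                A[r][cc] -= f * A[i][cc]
    return s

def log_chi(L):
    g = {k: G_entry(k) for k in range(-(L - 1), L)}
    A = [[math.cosh(lam) * (l == m) + math.sinh(lam) * g[l - m]
          for m in range(L)] for l in range(L)]
    return log_det(A)

# Off criticality, log chi grows linearly in L with slope given by the
# integral of log(symbol); the symbol is defined in the next equation.
szego = sum(cmath.log(math.cosh(lam) + math.sinh(lam) * v)
            for v in vals).real / M
slope = (log_chi(30) - log_chi(20)) / 10
```

Away from the critical lines the boundary contribution saturates quickly, so the finite difference of $\log\chi_{\mathfrak{M}_z}$ already reproduces the extensive slope to high accuracy.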
In particular, the key quantity for the asymptotic analysis is the so-called symbol of the Toeplitz matrix, which for Eq.~\eqref{fcsmag} reads \begin{equation} \label{symbol_m} g_{\mathfrak{M}_z}(\phi)=\cosh\lambda+\sinh \lambda e^{i\theta(\phi)}, \end{equation} where $\theta(\phi)$ was defined in Eq.~\eqref{thetadef}. The large-$L$ limit of the FCS is then obtained by studying, for real values of $\lambda$, the zeros and jump discontinuities of Eq.~\eqref{symbol_m} in the interval $\phi\in[0,2\pi]$. Away from criticality (when $|h|\not=1$), the symbol is free of zeros and jump discontinuities; therefore, the Szeg\H o theorem~\cite{Szego} holds and for $L\gg 1$ \begin{multline} \label{mag_1} \log\chi_{\mathfrak{M}_z}(\lambda)=L\left[\int_{0}^{2\pi}\frac{d\phi}{2\pi}\log(g_{\mathfrak{M}_z}(\phi))\right]+O(1). \end{multline} The $O(1)$ term can be estimated numerically, see Eq.~\eqref{szego}. Along the critical lines $h=\pm 1$ instead, the argument of the function $g_{\mathfrak{M}_z}(\phi)$ is discontinuous at $\phi=0$ and $\phi=\pi$, respectively. As first noticed in~\cite{Fisher}, jump discontinuities in the symbol are responsible for the divergence of the $O(1)$ term in Eq.~\eqref{mag_1} and the appearance of subleading logarithmic corrections. Explicitly, one finds the asymptotic expansion for the critical FCS \begin{multline} \label{mag_2} \log\chi_{\mathfrak{M}_z}(\lambda)=L\left[\int_{0}^{2\pi}\frac{d\phi}{2\pi}\log(g_{\mathfrak{M}_z}(\phi))\right]\\ -\beta^2(\lambda)\log L+O(1), \end{multline} where $\beta(\lambda)=\frac{1}{\pi}\arctan\tanh(\lambda)$ and the $O(1)$ term is given in Eq.~\eqref{O1-FH}. \textit{Universal terms in the critical FCS.---} At criticality and in the limit $\lambda\rightarrow\pm\infty$, the $O(\log L)$ term in the asymptotic expansion of the FCS was argued to be universal~\cite{Stephan}.
Its prefactor, which is $\gamma$-independent, can be computed by relying only on (boundary) Conformal Field Theory (CFT) techniques, following the crucial assumption that the spin configuration in region $A$ renormalizes at large distances to a conformally invariant boundary condition~\cite{Cardy}. In the limit $\lambda\rightarrow \infty$, the operator $e^{\lambda\mathfrak{M}_z}$ is proportional to the projector onto a configuration of $L$ consecutive spins aligned in the positive $z$-direction. Ref.~\cite{Stephan} conjectured that the latter renormalizes to a free boundary condition for the longitudinal spins. In such a case, field theory predicts that $\beta^2(\infty)=c/8$, where $c=1/2$ is the central charge of the CFT associated with the quantum critical point of the XY spin chain for $\gamma\not=0$. As anticipated in Sec.~\ref{s_fc}, critical fluctuations are encoded into the action of a free massless Majorana fermion. The same holds in the $\lambda\rightarrow-\infty$ limit, when the FCS is proportional to the Emptiness Formation Probability (EFP) for the transverse magnetization~\cite{Franchini}. In this case, a configuration of $L$ consecutive spins aligned in the negative $z$-direction flows toward a linear combination of fixed boundary conditions for the longitudinal spins~\cite{ARV}. Within a field theoretical framework, however, the coefficient of the logarithmic term in Eq.~\eqref{mag_2} does not depend on the boundary conditions---provided they are conformal---and is always $-c/8=-1/16$. In the limit $|\lambda|\rightarrow\infty$, Ref.~\cite{Stephan} further conjectured the existence at criticality of a semi-universal $O(\log L/L)$ term in the expansion of Eq.~\eqref{mag_2}.
Such a subleading correction is produced by an irrelevant deformation of the CFT action localized on the interval $A$, and its explicit form for $h=1$ is~\cite{Stephan} \begin{align} \label{CFT_sub1} &\frac{ c\xi_{\text{free}}(\gamma) }{8\pi}\frac{\log L}{L},~\text{if $A$ flows to a free bc}\\ \label{CFT_sub2} &\frac{\xi_{\text{fixed}}(\gamma)}{8\pi}\frac{\log L}{L}(c-16 h_{\text{bcc}}),~\text{if $A$ flows to a fixed bc}. \end{align} For $\gamma>0$, in Eqs.~\eqref{CFT_sub1} and \eqref{CFT_sub2}, $c=1/2$ and $h_{\text{bcc}}=1/16$ is the dimension of the free-fixed boundary condition changing operator. The positive quantity $\xi_{\text{free}/\text{fixed}}(\gamma)$ is the so-called extrapolation length, and in principle it depends on the boundary conditions to which the longitudinal spins renormalize. Ref.~\cite{Stephan} argued through numerical lattice calculations that $\xi_{\text{free}}(\gamma)=\xi_{\text{fixed}}(\gamma)=\frac{1}{2\gamma}$ and verified the validity of Eqs.~\eqref{CFT_sub1} and \eqref{CFT_sub2} by a non-rigorous---but numerically backed---asymptotic analysis, see also Eq.~\eqref{logL/L}. In particular, the $O(\log L/L)$ contribution to the expansion of the Toeplitz determinant in Eq.~\eqref{mag_2} is~\cite{Stephan} \begin{equation} \label{sub_mag} \text{sign}(h)\frac{\tanh(2\lambda)\arctan^2(\tanh(\lambda))}{2\pi^3\gamma}L^{-1}\log L. \end{equation} As anticipated, for $\lambda\rightarrow\pm\infty$, Eq.~\eqref{sub_mag} is fully consistent with the CFT predictions, provided one adopts the identification of the extrapolation length proposed in~\cite{Stephan}. In Sec.~\ref{smag} and Sec.~\ref{sec:kinks}, we will repeat the calculation of the subleading $O(\log L/L)$ corrections to the FCS for the staggered magnetization and the domain walls to further test the validity of Eqs.~\eqref{CFT_sub1} and \eqref{CFT_sub2}.
\subsection{The probability distribution at criticality} \label{p_tm} The probability distribution for the transverse magnetization is \begin{equation} \label{p_d_mag} P_{\mathfrak{M}_z}(M)=\langle\delta(\mathfrak{M}_z-M)\rangle. \end{equation} Taking $L$ even for simplicity and exploiting $\chi_{\mathfrak{M}_z}(i\lambda)=\chi_{\mathfrak{M}_z}(i\lambda+\pi)$, it turns out that Eq.~\eqref{p_d_mag} can be rewritten as \begin{equation} \label{p_dist} P_{\mathfrak{M}_z}(M)=\mathcal{P}_z(M)~\sum_{s\in\mathbb Z}\delta(s-M/2). \end{equation} The function $\mathcal{P}_z(M)$ is the Fourier transform of the FCS analytically continued to imaginary values of $\lambda$, that is \begin{equation} \mathcal{P}_z(M)\equiv\int_{0}^{\pi}\frac{d\lambda}{2\pi}e^{-iM\lambda}\chi_{\mathfrak{M}_z}(i\lambda). \end{equation} It can be calculated exactly in the large-$L$ limit and turns out to be a Gaussian, see Appendix~\ref{app_prob}. The Gaussian behavior of the probability distribution of the transverse magnetization was first observed in~\cite{ERW} when the subsystem $A$ coincides with the full chain. A quick derivation is the following. The exponential decay of the critical FCS in Eq.~\eqref{mag_2} implies that all the cumulants of the transverse magnetization are $O(L)$. In particular, let us define \begin{equation} \mu\equiv\lim_{L\rightarrow\infty}\frac{\langle\mathfrak{M}_z\rangle}{L};~\sigma^2\equiv\lim_{L\rightarrow\infty}\frac{\langle(\mathfrak{M}_z-\langle\mathfrak{M}_z\rangle)^2\rangle}{L}, \end{equation} then the rescaled random variable \begin{equation} m_z\equiv\lim_{L\rightarrow\infty}\frac{\mathfrak{M}_z-\mu L}{\sqrt{\sigma^2 L}}, \end{equation} is Gaussian with zero mean and unit variance, since all its cumulants of order larger than two are zero. The same conclusion can be obtained more formally from a saddle point analysis~\cite{Demler} of Eq.~\eqref{p_dist}, which is discussed in Appendix~\ref{app_prob}.
One finds for large $L$ \begin{equation} \mathcal{P}_{z}(M)=\frac{e^{-\frac{(M-\mu L)^2}{2\sigma^2 L}}}{\sqrt{2\pi \sigma^2L}}\left[ 1+B\cos\frac{\pi(L-M)}{2}L^{-\frac{1}{4}}\right], \end{equation} valid for even $L$ and $M$, where $B$ is a constant. We then observe that the Shannon entropy of the probability distribution of the transverse magnetization scales as $O(\log(L))$ for large enough $L$. The values of the parameters $\mu, \sigma^2$ and $B$ can be explicitly determined from Eq.~\eqref{mag_2} and one has \begin{equation} \mu=\frac{2\log(\gamma+\sqrt{\gamma^2-1})}{\pi\sqrt{\gamma^2-1}},\quad \sigma^2=\frac{2\gamma}{\gamma+1}. \end{equation} The coefficient $B$ is the exponential of the $O(1)$ term in the expansion \eqref{mag_2} evaluated at $\lambda=i\pi/2$, which can be determined from Eq.~\eqref{O1-FH}. For arbitrary $\gamma$, $B$ has an involved expression, but it is particularly simple when $\gamma=1$: $B=2^{1/12}e^{1/4}\mathfrak{A}^{-3}$, where $\mathfrak{A}$ is the Glaisher constant \cite{Demler}. \subsection{Interpolation formula in the scaling limit and Painlev\'e V equation} \label{fcs_pv} Let us first consider the Ising spin chain, i.e., $\gamma=1$. By defining $z=e^{i\phi}$, we can rewrite the symbol in Eq.~\eqref{symbol_m} as \begin{equation} \label{ising_symb} \tilde{g}_{\mathfrak{M}_z}(z)=\cosh\lambda+\sinh\lambda\frac{h-z}{\sqrt{(h-z)(h-z^{-1})}}. \end{equation} Eq.~\eqref{ising_symb} shows explicitly that for $h>1$ (resp. $h<1$) the branch cuts of the symbol can be chosen along the segments $(0,1/h)$, $(h,\infty)$, (resp. $(0,h), (1/h,\infty)$). When $|h|\rightarrow 1$, and approaching the critical point, the branch points merge at $z=1$ on the unit circle, generating a discontinuity in the imaginary part of $\tilde{g}_{\mathfrak{M}_z}$. This is associated with the Fisher-Hartwig exponent $\beta(\lambda)$, which was discussed below Eq.~\eqref{mag_2}.
The emergence of a Fisher-Hartwig singularity in the scaling limit $|h|\rightarrow 1$ is described by a Painlev\'e V equation~\cite{Claeys}. By introducing the scaling variable $x=2L|\log|h||$, one has for $L\rightarrow\infty$ and $h\rightarrow 1^{\pm}$ \begin{multline} \label{pv} \log\chi_{\mathfrak{M}_z}(\lambda; x)=L\int_{0}^{2\pi}\frac{d\phi}{2\pi}\log(g_{\mathfrak{M}_z}(\phi))\\-\beta(\lambda)^2\log x+\log\tau_{V}(x)+O(1), \end{multline} where $O(1)$ denotes finite terms in the $L\rightarrow\infty$ limit given in Appendix~\ref{app_det} and $\tau_V(x)$ is the $\tau$-function of the Painlev\'e V equation~\cite{Jimbo} \begin{equation} (x\zeta'')^2=(\zeta-x\zeta'+2(\zeta')^2)^2-4(\zeta')^2(\zeta'+\beta(\lambda))(\zeta'-\beta(\lambda)), \end{equation} namely $\zeta(x)=x\frac{d\log\tau_V(x)}{dx}-\beta(\lambda)^2$. The function $\tau_V$ is constructed in such a way that for $x\rightarrow 0$, at short distances, $\log\chi_{\mathfrak{M}_z}(\lambda; x)$ coincides with Eq.~\eqref{mag_2}, while for $x\rightarrow\infty$, at large distances, $\log\chi_{\mathfrak{M}_z}(\lambda; x)$ is given by the Szeg\H o theorem in Eq.~\eqref{mag_1}. Following the ideas introduced in~\cite{Gamayun, AV}, to which we refer for any additional details, it is possible to write down an explicit power series expansion about $x=0$ of the function $\tau_V$. The latter could also be extended to $\gamma\not=0$, provided~\cite{AV} one considers the scaling variable $x=2L|\log|h||/\gamma$; the first few terms of such an expansion are \begin{multline} \label{tau_expansion_small} \log\tau_V(x)= -\beta^2(\log(x)-s_0-1)x\\ -\beta^4\left[\frac{1}{2} (\log x)^2-(s_0+1)\log(x) +\frac{s_0^2}{2}+s_0+\frac{5}{4} -\frac{1}{4\beta^2}\right]x^2\\ +O(x^3(\log x)^3), \end{multline} where $s_0=-\psi(1+\beta(\lambda)) -\psi(1-\beta(\lambda))+3\psi(1)+1,$ and $\psi(z)$ is the Digamma function. The expansion of $\tau_V$ up to order $O(x^4)$ can also be straightforwardly obtained from~\cite{AV}.
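A direct transcription of Eq.~\eqref{tau_expansion_small} in Python (the $1/(4\beta^2)$ piece is folded in so that the $\lambda\to 0$ limit is regular, and $\psi$ is approximated by a central difference of $\ln\Gamma$; the function name is ours):

```python
import math

def digamma(x, h=1e-6):
    # psi(x) approximated by a central difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def log_tau_V(x, lam):
    """Small-x expansion of log tau_V, Eq. (tau_expansion_small), to O(x^2)."""
    b = math.atan(math.tanh(lam)) / math.pi  # beta(lambda)
    b2 = b * b
    s0 = -digamma(1 + b) - digamma(1 - b) + 3 * digamma(1.0) + 1
    lx = math.log(x)
    # The -1/(4 beta^2) term inside the bracket is multiplied out to
    # + b2/4, so nothing diverges as lambda -> 0 (where beta -> 0).
    return (-b2 * (lx - s0 - 1) * x
            - (b2 * b2 * (0.5 * lx ** 2 - (s0 + 1) * lx
                          + s0 ** 2 / 2 + s0 + 1.25)
               - b2 / 4) * x ** 2)
```

Consistently with $\tau_V(0)=1$, the expansion vanishes as $x\to 0$ and vanishes identically at $\lambda=0$, where the FCS is trivial.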
For $x\to\infty$, $\log\tau_V(x)$ must behave as \begin{equation}\label{tau_expansion_large} \log\tau_V(x)\sim \beta^2(\lambda)\log(x) -\log[G(1+\beta(\lambda))G(1-\beta(\lambda))], \end{equation} where $G(z)$ is the Barnes double gamma function, in order to recover the asymptotics \eqref{mag_1} outside the critical lines; see also Appendix~\ref{app_det}. In physical terms, Eqs.~\eqref{pv} and \eqref{tau_expansion_small} provide a complete interpolation formula for the FCS of the transverse magnetization in the scaling limit $\xi\gg 1$, where $\xi=\frac{\gamma}{|h-1|}$ is the XY chain correlation length. In the limit $|h|\rightarrow 1$ the scaling variable $x$ is the ratio $2L/\xi$; the regime of small $x$ ($\xi\gg L$) describes the fluctuations of the transverse magnetization at the quantum phase transition, while in the regime of large $x$ ($\xi\ll L$) the chain is off-critical. The validity of Eq.~\eqref{pv} can be checked numerically, as described in the caption of Fig.~\ref{fig:painleve_m}. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{painleve_m.pdf} \caption{Numerical check of the interpolation formula of Eq.~(\ref{pv}) between the non-critical and the critical regimes of the FCS of the transverse magnetization $\chi_{\mathfrak{M}_z}(\lambda)$. We represent $\Delta_{\rm P}$, defined in Eq.~(\ref{Delta_painleve}), as a function of $x=2 L |\log |h||/\gamma$, for different fixed values of $\gamma$ and $L$, and varying $h$. The dots have been calculated numerically using Eq.~(\ref{fcsmag}) for $\chi_{\mathfrak{M}_z}(\lambda)$. As we explain in Appendix~\ref{app_det}, $\Delta_\text{P}\sim\log\tau_V(x)$ for large $L$. The solid curves represent the expansion (\ref{tau_expansion_small}) around $x=0$ up to order $O(x^4)$ of $\log\tau_V(x)$.
The dashed curves correspond to the asymptotic behaviour (\ref{tau_expansion_large}) for large $x$ of this $\tau$-function.} \label{fig:painleve_m} \end{figure} It is an interesting question to understand whether the subleading corrections discussed in the previous section (cf. Eqs.~ \eqref{CFT_sub1},\eqref{CFT_sub2}) are also encapsulated in the expansion of the $\tau$-function in Eq.~\eqref{tau_expansion_small}. \subsection{The case $\gamma=0$} \label{tmzero} At $\gamma=0$, the XY spin chain after a Jordan-Wigner transformation reduces to a model of spinless fermions hopping on a one dimensional lattice, that is---up to a constant--- \begin{equation} \label{ff} H_{XY}|_{\gamma=0}=\frac{1}{2}\sum_{l=1}^{N}(c^{\dagger}_lc_{l+1}+H.c.)-h\sum_{l=1}^Nc^{\dagger}_lc_l. \end{equation} By further assuming $0\leq h<1$, let us define $k_F\equiv\arccos h$, with $0<k_F\leq \pi/2$. The ground state of Eq.~\eqref{ff} is then a Fermi sea, where all the single-particle states with momenta $\phi\in[k_F, 2\pi-k_F]$ are filled. The average transverse magnetization per site is $\lim_{N\rightarrow\infty}N^{-1}\langle\mathfrak{M}_z\rangle=2\rho-1$, with $\rho=1-\frac{k_F}{\pi}$, the average ground state fermion density. In spin language, the model in Eq.~\eqref{ff} is also dubbed the XX chain and corresponds to the zero anisotropy limit of the XXZ spin chain~\cite{Kbook}. The FCS of the transverse magnetization at $\gamma=0$ can be determined directly, by studying the $\gamma\rightarrow 0$ limit of the symbol in Eq.~\eqref{symbol_m}. The resulting function is piecewise constant, with two jump discontinuities at $\phi=\{k_F, 2\pi-k_F\}$. An immediate application of Eq.~\eqref{f-h} leads to~\cite{Abanov} \begin{equation} \label{fcs_ff} \log\chi_{\mathfrak{M}_z}(\lambda)|_{\gamma=0}=(2\rho-1)\lambda L+\frac{2\lambda^2}{\pi^2}\log L+O(1), \end{equation} valid for large $L$. 
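The $O(\log L)$ variance implied by Eq.~\eqref{fcs_ff} can be checked directly. At $\gamma=0$ the correlation matrix takes the exact sine-kernel form $(G_{ba})_{lm}=-2\sin(k_F(l-m))/(\pi(l-m))$ for $l\neq m$, with $2\rho-1$ on the diagonal, and expanding Eq.~\eqref{fcsmag} to second order in $\lambda$ gives the second cumulant of $\mathfrak{M}_z$ as $\mathrm{Tr}(I-G_{ba}^2)$. A minimal sketch (the value of $h$ is an arbitrary choice of ours):

```python
import math

h = 0.0                  # gamma = 0, XX chain; k_F = arccos(h)
kF = math.acos(h)
rho = 1 - kF / math.pi   # average fermion density

def G(k):
    # (G_ba)_{lm} at gamma = 0, with k = l - m (sine-kernel form)
    if k == 0:
        return 2 * rho - 1
    return -2 * math.sin(kF * k) / (math.pi * k)

def kappa2(L):
    # Second cumulant of M_z on L sites: Tr(I - G^2)
    tr_G2 = sum(G(l - m) * G(m - l) for l in range(L) for m in range(L))
    return L - tr_G2

# kappa2(L) should grow as (4/pi^2) log L + const, per Eq. (fcs_ff).
growth = kappa2(200) - kappa2(50)
```

The finite difference of $\kappa_2$ between two interval lengths isolates the logarithmic term, since the non-universal constant cancels.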
By calculating the Fourier transform of the analytic continuation to imaginary $\lambda$ ($\lambda\to i\lambda$) of the FCS in Eq.~\eqref{fcs_ff}, one obtains the probability distribution of the transverse magnetization at $\gamma=0$. For $L\gg 1$, this is a Gaussian, with mean centered at $\mu=L(2\rho-1)$ and variance $\sigma^2=4\log L/\pi^2$. The Shannon entropy of the transverse magnetization at $\gamma=0$ then scales as $O(\log\log L)$. Its sublogarithmic behaviour (cf. Sec.~\ref{p_tm}) is a consequence of the conservation of the \textit{total} transverse magnetization $\lim_{N\rightarrow\infty}\sum_{l=1}^N\sigma_{l}^z$: when the interval $A$ is big, large fluctuations of the transverse magnetization are severely suppressed. Furthermore, the limits $\lambda\rightarrow\pm\infty$ and $L\rightarrow\infty$ cannot be interchanged at $\gamma=0$, and~\cite{STN, Franchini} the EFP of the transverse magnetization is, for $L\gg 1$, $O(e^{-L^2})$, contrary to what happens at $\gamma\not=0$, where it decays exponentially, cf. Sec.~\ref{fcs_tm}. The Gaussian decay can also be interpreted as an arctic phenomenon (see e.g. Refs.~\cite{Stephan, Allegra}): taking imaginary time, one can argue that, if the transverse magnetization is conserved, the ferromagnetic string generates an area of order $L^2$ in which all the degrees of freedom are frozen. The fluctuating degrees of freedom outside the frozen region are described by a massless, but not conformal, field theory. We will see how these conclusions change for the staggered magnetization in Sec.~\ref{gammazeros} and the domain walls in Sec.~\ref{gammazerok}. \section{Example II: The staggered transverse magnetization} Another observable whose ground state fluctuations can be fully characterized within our formalism is the transverse staggered magnetization, defined as \begin{equation} \label{smagdef} \mathfrak{M}_s=\sum_{l\in A} (-1)^{l+1}\sigma^{z}_l.
\end{equation} In this Section we present an exact and comprehensive study of its fluctuations for the XY chain. Our results also apply in the limit $\gamma\rightarrow 0$, which corresponds to the zero anisotropy case of the XXZ spin chain, Sec.~\ref{gammazeros}. Partial computations of the staggered magnetization FCS for this model have been carried out in~\cite{Groha} in the scaling limit, relying on field theoretical tools. \label{smag} \subsection{Full Counting Statistics} \label{fcs_sts} For the staggered magnetization, cf. Eq.~\eqref{det_fcs}, we have $M=2 I_{s}$, where ${I_s}=\text{diag}(1,-1,1,-1,\dots)$ and $N=0$. The length of the interval $A$ will be taken to be $2L$ for convenience. By applying the results in Sec.~\ref{s_fc}, we end up with the following determinant representation for the ground state FCS \begin{equation} \label{fcs_st} \chi_{\mathfrak{M}_s}(\lambda)=(-\sinh^2\lambda)^{L}\det[G_s], \end{equation} where $G_s$ is a block Toeplitz matrix, built from the $2\times 2$ blocks $g_{lm}$ ($l,m=1,\dots, L$), given by \begin{equation} g_{lm}=\begin{bmatrix}\coth(\lambda)+(G_{ba})_{2(l-m)}& (G_{ba})_{2(l-m)-1}\\ (G_{ba})_{2(l-m)+1} & -\coth(\lambda)+(G_{ba})_{2(l-m)} \end{bmatrix}. \end{equation} Generalizing the discussion of Sec.~\ref{fcs_tm}, the symbol of the block Toeplitz matrix $G_s$ is the Fourier transform $\tau(\phi)$ of the $2\times 2$ matrix $g_{lm}$, that is \begin{equation} \label{symb_s} \tau(\phi)=\begin{bmatrix} \coth(\lambda)+h_+(\phi) & e^{-i\phi/2}h_{-}(\phi)\\ e^{i\phi/2}h_{-}(\phi)& -\coth(\lambda)+h_+(\phi) \end{bmatrix}, \end{equation} with $h_{\pm}(\phi)=(e^{i\theta(\phi/2)}\pm e^{i\theta(\phi/2-\pi)})/2$. For $\lambda\in\mathbb R$, the matrix elements of $\tau$ have winding number zero and the large-$L$ limit of the FCS can then be computed by recalling a generalization of the Szeg\H{o} theorem due to H. Widom~\cite{Widom2, Widom3, Widom4} and a conjecture formulated in~\cite{Ares}, see again Appendix~\ref{app_asym}.
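The assembly of a block Toeplitz matrix such as $G_s$ from the Fourier coefficients of its $2\times 2$ symbol is mechanical, and the Szeg\H{o}-Widom leading term of Eq.~\eqref{swidom} can be verified on any smooth zero-winding symbol. A sketch (the toy symbol below is our own choice, not the physical $\tau(\phi)$ of the text):

```python
import numpy as np

def block_toeplitz(tau, L, nphi=2048):
    # Assemble the 2L x 2L block Toeplitz matrix whose (l, m) block is the
    # Fourier coefficient g_{l-m} = (1/2pi) \int tau(phi) e^{-i(l-m)phi} dphi.
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    samples = np.array([tau(p) for p in phi])      # shape (nphi, 2, 2)
    coeffs = np.fft.fft(samples, axis=0) / nphi    # g_n stored at index n mod nphi
    T = np.zeros((2 * L, 2 * L), dtype=complex)
    for l in range(L):
        for m in range(L):
            T[2*l:2*l+2, 2*m:2*m+2] = coeffs[(l - m) % nphi]
    return T

# Toy smooth Hermitian 2x2 symbol with zero winding (NOT the physical tau):
toy = lambda p: np.array([[3.0 + np.cos(p), 0.5],
                          [0.5, 3.0 - np.cos(p)]])
```

For such a symbol, $\log\det T_L/L$ should approach $\int_0^{2\pi}\frac{d\phi}{2\pi}\log\det\tau(\phi)$, with an $O(1)$ Widom correction.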
In particular, for $|h|\not=1$, the matrix elements of $\tau(\phi)$ do not have zeros or jump discontinuities for $\phi\in[0,2\pi]$. The Szeg\H{o}-Widom theorem~\cite{Widom2} then applies and one obtains \begin{multline} \label{swidom} \log \chi_{\mathfrak{M}_s}(\lambda)=L\left[\log(-\sinh^2\lambda)+\int_{0}^{2\pi}\frac{d\phi}{2\pi}\log(\det\tau(\phi))\right]\\ +O(1). \end{multline} The $O(1)$ term in Eq.~\eqref{swidom} can also be estimated numerically, see Eq.~\eqref{O1-SW}. At the quantum critical point, the matrix elements of the symbol $\tau(\phi)$ have a jump discontinuity at $\phi=0$. We can then apply a generalization of the Fisher-Hartwig theorem non-rigorously derived in~\cite{Ares}, see Eqs.~\eqref{block_toeplitz_asymp}~and~\eqref{b}, which yields \begin{multline} \label{mag_st} \log \chi_{\mathfrak{M}_s}(\lambda)=L\left[\log(-\sinh^2\lambda)+\int_{0}^{2\pi}\frac{d\phi}{2\pi}\log(\det\tau(\phi))\right]\\-\beta^2_s(\lambda)\log(L)+O(1), \end{multline} where $\beta_s(\lambda)=\frac{1}{\pi}\arctan(\tanh^2\lambda)$. \textit{Universal terms in the critical FCS.---} In the limit $|\lambda|\rightarrow\infty$, the FCS is proportional to the probability of observing an antiferromagnetic domain of length $2L$ in the ground state. More explicitly, from Eq.~\eqref{fcsdef} and Eq.~\eqref{smagdef}, it turns out that \begin{equation} \label{proj_def} \langle e^{\lambda\mathfrak{M}_s}\rangle\stackrel{\lambda\gg 1}{\rightarrow}e^{2\lambda L}\langle P_{\uparrow\downarrow\dots\uparrow\downarrow}\rangle, \end{equation} where $P_{\uparrow\downarrow\dots\uparrow\downarrow}$ is the projector onto an antiferromagnetic configuration of transverse spins of length $2L$. The ground state expectation value of such a projector will be denoted by $\mathcal{E}_s(h)$.
Therefore, in the limit $|\lambda|\rightarrow\infty$, the prefactor of the logarithmic term in Eq.~\eqref{mag_st} has a CFT interpretation, since a staggered sequence of spins in the $z$-direction renormalizes to a linear combination of fixed boundary conditions for the longitudinal spins~\cite{ARV}. Consistently, one has $\beta_s^2(\pm\infty)=c/8=1/16$ as for the transverse magnetization, cf. Sec.~\ref{fcs_tm}. At criticality, one expects~\cite{Stephan} also subleading $O(L^{-1}\log L)$ corrections to Eq.~\eqref{mag_st}. By generalizing the method of Ref.~\cite{Stephan}, see Appendix~\ref{app_asym} below Eq.~\eqref{b}, we are able to determine them for $\gamma>0$, obtaining \begin{equation}\label{logL/L_staggered} -\frac{\gamma^2+1}{2\pi^3\gamma}\frac{\tanh^2\lambda}{\tanh^4\lambda+1} \arctan^2(\tanh^2\lambda) L^{-1}\log L. \end{equation} Details of the derivation of Eq.~\eqref{logL/L_staggered}, together with a numerical check, are given in Appendix~\ref{app_det}. Notice that for $|\lambda|\rightarrow\infty$, Eq.~\eqref{logL/L_staggered} agrees with the CFT prediction given in Eq.~\eqref{CFT_sub2} provided $\xi_{\text{fixed}}(\gamma)=(\gamma^2+1)/(8\gamma)$. \subsection{The probability distribution at criticality} The probability distribution of the staggered magnetization is defined as in Eq.~\eqref{p_d_mag}. Since all the eigenvalues of $\mathfrak{M}_s$ are even integers, the relation $\chi_{\mathfrak{M}_s}(i\lambda)=\chi_{\mathfrak{M}_s}(i\lambda+\pi)$ still holds, and therefore so does Eq.~\eqref{p_dist}. We will then be interested in estimating the large-$L$ limit of the Fourier transform \begin{equation} \label{FCS_s} \mathcal{P}_s(M)=\int_0^{\pi}\frac{d\lambda}{2\pi}e^{-i M\lambda}\chi_{\mathfrak{M}_s}(i\lambda). \end{equation} The analytic continuation of the symbol in Eq.~\eqref{symb_s} to imaginary $\lambda$ introduces a winding when $|h|<1$ for $\lambda\in[\pi/4,3\pi/4]$.
At criticality instead, the Wick rotation $\lambda\to i\lambda$ is allowed and the large-$L$ limit of Eq.~\eqref{FCS_s} can be obtained in complete analogy to what is done in Sec.~\ref{p_tm}, see also Appendix~\ref{app_prob}. One finds \begin{equation} \label{p_staggered} \mathcal{P}_{s}(M)=\frac{~e^{-\frac{M^2}{2\sigma^2 L}}}{\sqrt{2\pi \sigma^2L}}\left[ 1+B\cos\frac{\pi M}{2}L^{-\frac{1}{4}}\right], \end{equation} with \begin{equation} \sigma^2=2-\frac{4(\gamma^2+1)^2\arccos(2\gamma/(\gamma^2+1))}{\pi|\gamma^4-1|}. \end{equation} In this case, $\mu=0$ and $\log B$ is the $O(1)$ term in the expansion \eqref{mag_st} of $\log\chi_{\mathfrak{M}_s}(i\pi/2)$. Note that $\chi_{\mathfrak{M}_s}(i\pi/2)=\det[G_{ba}]$, and the correlation matrix $G_{ba}$ is Toeplitz. Hence, in this particular case, one can use the Fisher-Hartwig conjecture \eqref{f-h} instead of \eqref{block_toeplitz_asymp}, which allows us to determine $B$. In fact, applying \eqref{f-h}, we obtain $\chi_{\mathfrak{M}_s}(i\pi/2)\sim B L^{-1/4}$, and $B$ is given by Eq.~\eqref{O1-FH}. For $\gamma=1$, its expression is especially compact, $B=2^{-1/6}e^{1/4}\mathfrak{A}^{-3}$. The Gaussian form of the probability distribution implies that the Shannon entropy of the staggered magnetization also scales as $O(\log L)$ for large $L$. \subsection{The case $\gamma=0$} \label{gammazeros} The fluctuations of the staggered magnetization at $\gamma=0$ are also interesting, since they depend on the value of the transverse field. Let us assume, without loss of generality, $0\leq h<1$ and define, as in Sec.~\ref{tmzero}, $k_F=\arccos h$; we will also refer to the case $h=0$ as \textit{half-filling}. At $\gamma=0$ and away from half-filling, the matrix elements in Eq.~\eqref{symb_s} are piecewise functions of $\phi$ with two jump discontinuities at $\phi=\{2k_F, 2\pi-2k_F\}$.
By applying the conjecture in Eq.~\eqref{block_toeplitz_asymp} and in particular Eq.~\eqref{b}, it is possible to calculate the large-$L$ limit for the FCS of the staggered magnetization as follows \begin{multline} \label{s01} \log\chi_{\mathfrak{M}_s}(\lambda)|_{\gamma=0}=\frac{2k_F L}{\pi}\log(\cosh2\lambda)\\+ \frac{1}{2\pi^2}\log^2(\cosh 2\lambda)\log L+O(1). \end{multline} For $h=0$ instead, the matrix elements of the symbol in Eq.~\eqref{symb_s} when evaluated at $\gamma=0$ are piecewise functions of $\phi$ but with only one jump discontinuity located at $\phi=2k_F$. The results in Eqs.~\eqref{block_toeplitz_asymp}-\eqref{b} are still valid but the large-$L$ asymptotics is now \begin{multline} \label{s02} \log\chi_{\mathfrak{M}_s}^{\text{h=0}}(\lambda)|_{\gamma=0}=L\log(\cosh2\lambda)\\ -2\beta^2(\lambda) \log L+O(1), \end{multline} with $\beta(\lambda)$ defined below Eq.~\eqref{mag_2}. As we already observed in~\eqref{proj_def}, in the limit $|\lambda|\to\infty$, the FCS is proportional to the probability of finding an antiferromagnetic string in the ground state. There is a qualitative difference between the two limits $|\lambda|\rightarrow\infty$ of Eq.~\eqref{proj_def} at half-filling and away from it. In the first case, the large-$L$ limit commutes with the large-$\lambda$ limit and the latter can be calculated directly from Eq.~\eqref{s02} with the result \begin{equation} \label{empt_0} \log\mathcal{E}_s(h=0)=-L\log 2-\frac{1}{8}\log L+O(1). \end{equation} At $h=0$, the probability of observing a region with antiferromagnetic order within the ground state of the XX spin chain is exponentially suppressed by its length, i.e. is $O(e^{-L})$. It also contains a logarithmic correction $O(\log L)$ whose prefactor is compatible with the CFT interpretation recalled at the end of Sec.~\ref{fcs_tm}. 
For $h=\gamma=0$, an antiferromagnetic string of transverse spins renormalizes~\cite{BS} at large distances to a Dirichlet boundary condition for a compactified boson with central charge $c=1$. The prefactor of the subleading logarithmic contribution in Eq.~\eqref{empt_0} is then $-c/8=-1/8$. It is natural to expect the same logarithmic correction with prefactor $-1/8$ also in the gapless phase~\cite{Kbook} of the XXZ spin chain. To the authors' best knowledge, this has not been verified yet. Away from half-filling, the large-$L$ limit and the large-$\lambda$ limit do not commute any longer. Mathematically, this is due to the fact that the determinant of the symbol in Eq.~\eqref{symb_s} in the limit $|\lambda|\rightarrow\infty$, and for $\gamma=0,~h>0$, vanishes along the interval $I\equiv [2k_F, 2\pi-2k_F]$. Then, Eqs.~\eqref{block_toeplitz_asymp} and \eqref{b} cannot be used to derive its large-$L$ asymptotics. For scalar symbols which vanish on an interval $I\subset[0,2\pi]$, the asymptotics of the corresponding Toeplitz determinant was worked out by H. Widom in~\cite{Widom}, see Eq.~\eqref{widom}. By generalizing such a result, see Eq.~\eqref{block_widom}, we conjecture that \begin{multline} \label{form_prob_kf_neq_pi_2} \log \mathcal{E}_{s}(h>0)= L^2\log\sin k_F-L\log 2\\-\frac{1}{4}\log L+ O(1). \end{multline} In Fig.~\ref{fig:form_prob_kf_neq_pi_2}, we check numerically the above conjecture. The points are the values of $\log\mathcal{E}_s(h>0)$ obtained from the numerical computation of Eq.~(\ref{fcs_st}) at $\gamma=0$ and $h>0$, while the curves correspond to the analytical prediction in Eq.~(\ref{form_prob_kf_neq_pi_2}). \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{form_prob_kf_neq_pi_2.pdf} \caption{Numerical check of the expansion conjectured in Eq.~(\ref{form_prob_kf_neq_pi_2}) for the formation probability of an antiferromagnetic domain in an XX spin chain with Fermi momentum $k_F\neq \pi/2$.
The dots have been obtained by calculating numerically the determinant in Eq.~(\ref{fcs_st}) in the limit $\lambda\to-\infty$. The curves correspond to the conjectured expansion (\ref{form_prob_kf_neq_pi_2}); the $O(1)$ term has been determined from $\log \mathcal{E}_s(L=500)- 500^2\log\sin k_F- 500\log 2-1/4\log(500)$, taking as $\mathcal{E}_s(L=500)$ the value obtained numerically for this quantity at $L=500$ for each Fermi momentum considered.} \label{fig:form_prob_kf_neq_pi_2} \end{figure} The physical explanation of Eq.~\eqref{form_prob_kf_neq_pi_2} mimics the discussion in Sec.~\ref{tmzero}. For $h>0$, the conservation law $[H_{XY}|_{\gamma=0},\mathfrak{M}_z]=0$ requires all the eigenstates of the XX Hamiltonian to be eigenstates of $N^{-1}\mathfrak{M}_z$ with eigenvalue different from zero in the thermodynamic limit $N\rightarrow\infty$. Therefore the probability of observing a large region with antiferromagnetic order at zero temperature must fall off more quickly, as $O(e^{-L^2})$, than at $h=0$. At half-filling, instead, the N\'eel state $|\uparrow\downarrow\dots\uparrow\downarrow\rangle$ is not orthogonal to the ground state and its overlap~\cite{ARV} decays exponentially as $O(e^{-N})$, leading to an exponential decay of $\mathcal{E}_s(h=0)$ in Eq.~\eqref{empt_0}. Finally, we note that at criticality Eqs.~\eqref{s01} and \eqref{s02} imply that the probability distribution of the staggered magnetization is Gaussian at $\gamma=0$, with variance of $O(L)$. Therefore its Shannon entropy is $O(\log L)$ for large $L$. There are also subleading power law corrections of the type $L^{-\alpha}$ as in Eq.~\eqref{p_staggered} but with exponent $\alpha=1/2$. \section{Example III: The domain walls} \label{sec:kinks} We characterize exactly at large $L$ the fluctuations of the number of domain walls in the XY spin chain, that is \begin{equation} \label{dw_def} \mathfrak{K}=\sum_{l\in A}(1-\sigma_l^x\sigma_{l+1}^x). 
\end{equation} At $\gamma=1$, this observable can be accessed by a Kramers-Wannier transformation applied to the transverse magnetization~\cite{Demler}. Our analysis will be more general and valid for any value of the anisotropy $\gamma>0$. Eventually, we will also show how our results for the FCS allow us to determine analytically the EFP of the longitudinal magnetization $\mathfrak{M}_x=\sum_{l\in A}\sigma_l^x$. The latter has been recently discussed in~\cite{Collura} by resorting to certain numerical approximations. \subsection{The Full Counting Statistics} By applying the Jordan-Wigner transformation in Eq.~(\ref{JW}), the operator $\mathfrak{K}$ of Eq.~\eqref{dw_def} can be put in the form of Eq.~\eqref{loc_o} with \begin{equation} \label{mat_kink} (M)_{lm}=\delta_{|l-m|,1},~(N)_{lm}=\delta_{|l-m|,1}\mathop{\mathrm{sign}}\nolimits(m-l), \end{equation} for $l,m=1,\dots, L+1$. Although the matrix $N$ in Eq.~\eqref{mat_kink} is non-zero, it is simple enough to carry out the calculation of the matrix exponential in Eq.~\eqref{tdef}. One finds, see Appendix~\ref{app_det}, the following determinant representation for the domain wall FCS \begin{equation} \label{kink_fcs} \chi_{\mathfrak{K}}(\lambda)=\det[G_{\mathfrak{K}}]. \end{equation} The $L\times L$ matrix $G_{\mathfrak{K}}$ is of Toeplitz type with symbol, cf. Eq.~\eqref{symbol_m}, \begin{equation} \label{symbol_kink} g_{\mathfrak{K}}(\phi)=\frac{e^{2\lambda}+1}{2} +\frac{e^{2\lambda}-1}{2} e^{-i\phi}e^{i\theta(\phi)}. \end{equation} The analysis of its large-$L$ behaviour is then straightforward. Indeed, the same Eqs.~\eqref{mag_1} and \eqref{mag_2} hold for the large-$L$ limit of the domain wall FCS upon the replacement of the symbols: $g_{\mathfrak{M}_z}(\phi)\to g_{\mathfrak{K}}(\phi)$.
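The passage from the symbol in Eq.~\eqref{symbol_kink} to the determinant in Eq.~\eqref{kink_fcs} is easy to reproduce numerically. The sketch below assembles the Toeplitz matrix for an arbitrary Bogoliubov angle $\theta(\phi)$ (here a toy choice, since the physical angle is defined earlier in the paper and not reproduced in this Section) and checks two limits that can be read off analytically: at $\lambda=0$ the symbol is identically $1$, so $\chi_{\mathfrak{K}}(0)=1$, while for the toy choice $\theta(\phi)=\phi$ the symbol collapses to the constant $e^{2\lambda}$, giving $\log\chi=2\lambda L$.

```python
import numpy as np

def chi_K_logdet(lam, L, theta, nphi=2048):
    # Toeplitz determinant with the domain-wall symbol of Eq. (symbol_kink):
    # g_K(phi) = (e^{2 lam}+1)/2 + (e^{2 lam}-1)/2 * e^{-i phi} e^{i theta(phi)}.
    # `theta` is a vectorized callable (a toy stand-in for the Bogoliubov angle).
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    g = 0.5 * (np.exp(2 * lam) + 1) \
        + 0.5 * (np.exp(2 * lam) - 1) * np.exp(-1j * phi) * np.exp(1j * theta(phi))
    coeffs = np.fft.fft(g) / nphi                  # Fourier coefficients g_n
    idx = np.arange(L)
    T = coeffs[(idx[:, None] - idx[None, :]) % nphi]
    _, logdet = np.linalg.slogdet(T)
    return logdet
```

With the physical $\theta(\phi)$ inserted, the same routine evaluates Eq.~\eqref{kink_fcs} directly.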
\textit{EFP of the order parameter.---}The exact expression derived in Eq.~\eqref{kink_fcs} for the domain wall FCS is in accordance with the Kramers-Wannier duality between longitudinal and transverse spin configurations analyzed in~\cite{ARV} for the Ising spin chain. In particular, by comparing Eq.~\eqref{symbol_kink} with Eq.~\eqref{symbol_m} at $\gamma=1$ and taking for simplicity $h>0$, one concludes that \begin{equation} \label{KW} \chi_{\mathfrak{K}}(\lambda)|_{h}= e^{\lambda L}\chi_{\mathfrak{M}_z}(-\lambda)|_{1/h},~~~h>0. \end{equation} In the limit $\lambda\rightarrow-\infty$, Eq.~\eqref{KW} implies \begin{equation} \label{proj} \lim_{\lambda\rightarrow-\infty}\chi_{\mathfrak{K}}(\lambda)=\langle P_{\rightarrow\dots\rightarrow}+P_{\leftarrow\dots\leftarrow}\rangle, \end{equation} where $P_{\rightarrow\dots\rightarrow}$ and $P_{\leftarrow\dots\leftarrow}$ are projectors onto states that contain a ferromagnetic region of length $L$ with spins aligned along the $x$-axis. The Kramers-Wannier duality maps both these configurations to one where all the spins are polarized along the positive $z$-axis~\cite{ARV}. Notice that such a conclusion follows directly from Eq.~\eqref{KW}. Analogously, an antiferromagnetic domain of longitudinal spins is the Kramers-Wannier dual of a ferromagnetic domain of negatively polarized transverse spins~\cite{ARV}. This last statement is implied by the limit $\lambda\rightarrow\infty$ of Eq.~\eqref{KW}. In the absence of a longitudinal field coupling to $\sigma_{l}^x$, the two projectors in Eq.~\eqref{proj} have the same ground state expectation value. The latter is the EFP, $\mathcal{E}_{x}(h)$, of the order parameter $\mathfrak{M}_x|_{A}=\sum_{l\in A}\sigma_l^x$ restricted to the subsystem $A$.
For the Ising spin chain, and positive transverse field, after some contour integral manipulations outlined in Appendix~\ref{app_det}, we obtain the following compact expressions for $\mathcal{E}_x(h)$ \begin{equation} \label{emptiness} \log\mathcal{E}_{x}(h)=\begin{cases} L\bigl[\int_{0}^{1/h}\frac{dy}{\pi}~K(y^2)-\log 2\bigr]+O(1),~~~h>1\\ L\left(\frac{2\mathfrak{C}}{\pi}-\log 2\right)-\frac{1}{16}\log L+O(1),~~~h=1\\ -L\int_0^h\frac{dy}{\pi}\frac{1}{y}\left(K(y^2)-\frac{\pi}{2}\right)+O(1),~~~h<1. \end{cases} \end{equation} The function $K(y)$ is the complete elliptic integral of the first kind, written in~\texttt{Mathematica} notation, while $\mathfrak{C}=\frac{1}{2}\int_{0}^{1}dy~K(y^2)$ is the Catalan constant. The first few terms of the series expansion about $h=0$ of Eq.~\eqref{emptiness} reproduce the approximate formula proposed recently in~\cite{Collura}. Of course, the calculation of the EFP for the order parameter could be extended to any $\gamma>0$, simply by considering the limit $\lambda\rightarrow-\infty$ of the symbol in Eq.~\eqref{symbol_kink}. The extension of Eq.~\eqref{emptiness} to $\gamma\not=1$, however, does not have a compact form. \textit{Universal terms in the critical FCS.---} As expected from the discussion at the end of Sec.~\ref{fcs_tm}, the prefactor of the $O(\log L)$ term of the critical domain wall FCS in the limit $\lambda\rightarrow\pm\infty$ is $\gamma$-independent and equal to $-\frac{1}{16}$. By pushing the asymptotic analysis of the Toeplitz determinant in Eq.~\eqref{kink_fcs} further, one can single out along the critical lines also an $O(\log L/L)$ term, see Eq.~\eqref{logL/L}, \begin{equation}\label{logL/L_dw_ising} -\frac{2\gamma-1}{2\pi^3\gamma} \tanh(2\lambda)\arctan^2(\tanh\lambda) L^{-1}\log L, \end{equation} whose presence can be checked numerically, see Fig.~\ref{fig:fcs_dw_subleading_terms} and Appendix~\ref{app_det}.
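The coefficients entering Eq.~\eqref{emptiness} are straightforward to evaluate numerically, e.g. with a pure-Python implementation of $K(m)$ (parameter convention, as in the text) via the arithmetic-geometric mean. The sketch below computes the constant $\mathfrak{C}$, which indeed reproduces the Catalan constant $0.91596\dots$, and the negative $h=1$ slope:

```python
import numpy as np

def ellipk(m):
    # Complete elliptic integral K(m), Mathematica parameter convention,
    # via the arithmetic-geometric mean: K(m) = pi / (2 * AGM(1, sqrt(1 - m))).
    a, b = np.ones_like(m, dtype=float), np.sqrt(1.0 - m)
    for _ in range(60):
        a, b = (a + b) / 2.0, np.sqrt(a * b)
    return np.pi / (2.0 * a)

# Constant entering the h = 1 line of Eq. (emptiness), via a midpoint rule
# (the integrand has only an integrable log singularity at y = 1):
y = (np.arange(200000) + 0.5) / 200000
C = 0.5 * np.mean(ellipk(y**2))            # (1/2) \int_0^1 K(y^2) dy
slope_h1 = 2.0 * C / np.pi - np.log(2.0)   # coefficient of L at h = 1
```

The negativity of `slope_h1` confirms that the EFP decays exponentially with $L$ at criticality.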
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{fcs_dw_xy_subleading_terms.pdf} \caption{Numerical check of the $O(L^{-1}\log L)$ term in the expansion of $\log\chi_{\mathfrak{K}}(\lambda)$ along the critical line $h=1$. We plot $\Delta_{\mathfrak{K}}$, defined in Eq.~(\ref{Delta_k}), as a function of $\log L$ for several fixed values of $\gamma$ and $\lambda$, and choosing $L_0=10^3$. The dots have been obtained by calculating numerically $\chi_{\mathfrak{K}}(\lambda)$ through Eq.~(\ref{kink_fcs}). The lines represent $d_\mathfrak{K}\log(L/L_0)$, taking for $d_{\mathfrak{K}}$ the coefficient of the $O(L^{-1}\log L)$ term predicted in Eq.~(\ref{logL/L_dw_ising}).} \label{fig:fcs_dw_subleading_terms} \end{figure} Notice, however, that the lattice result in Eq.~\eqref{logL/L_dw_ising} does not have a definite sign as a function of $\gamma>0$; it vanishes for $\gamma=1/2$. These considerations suggest that the limit $\lambda\rightarrow\pm\infty$ of Eq.~\eqref{logL/L_dw_ising} does not have an immediate CFT interpretation~\cite{Stephan} and deserves further study. \textit{Painlev\'e V equation in the scaling limit.---} An interesting consequence of Eq.~\eqref{symbol_kink} is that the same analysis of the scaling limit discussed in Sec.~\ref{fcs_pv} for the transverse magnetization also applies to the domain walls. In the limits $|h|\rightarrow 1$, $L\rightarrow\infty$ keeping $x=2L|\log|h||/\gamma$ finite, the domain wall FCS interpolates from the critical ($x\ll 1$) asymptotics given in Eq.~\eqref{f-h} to the off-critical asymptotics ($x\gg 1$) in Eq.~\eqref{szego}. The crossover is again analytically captured by Eq.~\eqref{pv} after replacing $g_{\mathfrak{M}_z}(\phi)$ with $g_{\mathfrak{K}}(\phi)$. In particular, since the Fisher-Hartwig exponent $\beta(\lambda)$ in Eq.~\eqref{pv} is the same for the transverse magnetization and the domain walls, the expansion of the Painlev\'e V $\tau$-function in Eq.~\eqref{tau_expansion_small} is also identical.
A numerical check of the interpolation formula in Eq.~\eqref{pv} adapted to the domain wall fluctuations is given in Fig.~\ref{fig:painleve_dw}. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{painleve_dw.pdf} \caption{Numerical analysis of the crossover between the non-critical and the critical behaviour of the FCS of the number of domain walls, $\chi_{\mathfrak{K}}(\lambda)$. We study the quantity $\Delta_{\rm P}$, defined in Eq.~(\ref{Delta_painleve}) but replacing $\chi_{\mathfrak{M}_z}$ and $g_{\mathfrak{M}_z}$ by $\chi_{\mathfrak{K}}$, $g_{\mathfrak{K}}$, versus the ratio $x=2L|\log|h||/\gamma$; we vary $h$, keeping $\gamma$, $\lambda$ and $L$ fixed. The dots correspond to the numerical calculation of $\chi_{\mathfrak{K}}(\lambda)$ using Eq.~(\ref{kink_fcs}). As we point out in Appendix~\ref{app_det}, we expect $\Delta_\text{P}\sim\log\tau_V(x)$ when $L$ is large enough. The solid curves represent the expansion (\ref{tau_expansion_small}) of the logarithm of the Painlev\'e V $\tau$-function around $x=0$ up to order $O(x^4)$. The dashed curves are the asymptotic behaviour (\ref{tau_expansion_large}) of this function for $x\to \infty$.} \label{fig:painleve_dw} \end{figure} \subsection{The probability distribution at criticality} The eigenvalues of the operator $\mathfrak{K}$ in Eq.~\eqref{dw_def} are the even integers $2n_w$, where $n_w=0,\dots,L$ is the number of domain walls present in the subsystem $A$. Therefore, the probability distribution $P_{\mathfrak{K}}(W)=\langle\delta(\mathfrak{K}-W)\rangle$ can be recast in the form of Eq.~\eqref{p_dist} and we will calculate here \begin{equation}\label{P_K} \mathcal{P}_K(W)=\int_{0}^{\pi}\frac{d\lambda}{2\pi}e^{-iW\lambda}\chi_{\mathfrak{K}}(i\lambda), \end{equation} at the quantum critical point $|h|=1$. The exponential decay for large $L$ of the domain wall FCS already implies that $\mathcal{P}_{K}(W)$ is a Gaussian, as pointed out in Sec.~\ref{p_tm}.
More formally, by applying the saddle point analysis of Appendix~\ref{app_prob} one finds \begin{equation} \label{kink_dist} \mathcal{P}_{K}(W)=\frac{~e^{-\frac{(W-\mu L)^2}{2\sigma^2 L}}}{\sqrt{2\pi \sigma^2L}}\left[ 1+B\cos\frac{\pi W}{2}L^{-\frac{1}{4}}\right], \end{equation} with parameters $\mu, \sigma$ given by \begin{equation} \mu=1-\frac{2\gamma}{\pi(\gamma+1)} \left(1+\frac{\text{arccosh}(\gamma)}{\sqrt{\gamma^2-1}}\right), \end{equation} and \begin{equation} \sigma^2=\frac{\gamma^2(\gamma-1)+7\gamma+1}{(\gamma+1)^3}. \end{equation} The coefficient $\log B$ is the $O(1)$ term in the expansion of $\log\chi_{\mathfrak{K}}(\lambda)$ at $\lambda=i\pi/2$. Therefore, it can be calculated from Eq.~\eqref{O1-FH}. We omit the explicit form of $B$ here since it is lengthy. In Fig.~\ref{fig:prob_dw}, we check numerically the probability distribution in Eq.~\eqref{kink_dist}. At the critical point $|h|=1$, $\gamma\not=0$, the Shannon entropy of the domain wall probability distribution scales as $O(\log L)$ for large $L$. In fact, the same conclusion also applies to the other critical point of the XY chain: $\gamma=0$ and $|h|<1$. This will be discussed in detail in the next Section. In the non-critical regions, $|h|\neq 1$, $\gamma>0$, $\mathcal{P}_K(W)$ is also a Gaussian for large $L$, but there is no $L^{-1/4}$ subleading correction, see Eqs.~\eqref{p_dw_l1} and \eqref{p_dw_s1} of Appendix~\ref{app_prob}. In that Appendix, we give the technical details to obtain those results. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{prob_dw.pdf} \caption{Probability distribution $\mathcal{P}_K(W)$ of domain walls in an interval of length $L$ along the critical line $h=1$. The dots correspond to the direct numerical integration of Eq.~(\ref{P_K}), considering for $\chi_{\mathfrak{K}}(\lambda)$ the asymptotic expansion obtained by applying the Fisher-Hartwig conjecture.
The solid and dashed curves represent the analytical approximation found in Eq.~(\ref{kink_dist}) for $W=4n$ and $W=4n+2$ respectively, with $n\in\mathbb{N}$. For the case $\gamma=0.2$ and $L=100$, $B=0.327387$ while, for $\gamma=1.3$ and $L=50$, $B=0.682844$.} \label{fig:prob_dw} \end{figure} \subsection{The case $\gamma=0$} \label{gammazerok} We investigate the FCS of the domain walls for the XX spin chain, cf. Eq.~\eqref{ff}. For simplicity, we assume $0\leq h<1$ and define $k_F=\arccos h$ as in Sec.~\ref{tmzero}. In the limit $\gamma\rightarrow 0$, the symbol in Eq.~\eqref{symbol_kink} is a piecewise function of $\phi$ with two jump discontinuities at $\phi=\{k_F, 2\pi-k_F\}$. We can then apply Eq.~\eqref{f-h} and obtain the following large-$L$ asymptotics of the Toeplitz determinant in Eq.~\eqref{kink_fcs} \begin{multline} \label{kinkXX} \log\chi_{\mathfrak{K}}(\lambda)|_{\gamma=0}=a(\lambda)L\\ +\frac{2}{\pi^2}\Re\left[\text{arctanh}^2\bigl(\tanh\lambda e^{-ik_F}\bigr)\right]\log L\\+O(1). \end{multline} Eq.~\eqref{kinkXX} shows that the domain wall FCS decays exponentially with the subsystem length $L$. The coefficient $a(\lambda)$ and the $O(1)$ term in Eq.~\eqref{kinkXX} can be explicitly determined but their expressions are rather lengthy. For $|h|<1$, the logarithm of the FCS also contains an $O(\log L/L)$ subleading contribution that can be calculated from Eq.~\eqref{logL/L} with the result \begin{multline}\label{logL/L_dw_xx} \frac{4}{\pi^3} \Re \left[ \frac{i e^{i k_F}\tanh\lambda} {e^{2i k_F}-\tanh^2\lambda} \text{arctanh}^2 \left(e^{-i k_F}\tanh\lambda \right)\right]\\\times L^{-1}\log L. \end{multline} Contrary to the case of the transverse magnetization discussed in Sec.~\ref{tmzero}, Eqs.~\eqref{kinkXX} and \eqref{logL/L_dw_xx} are finite in the limits $\lambda\rightarrow\pm\infty$. They can then be used to test the universality and semi-universality of the prefactors of the $O(\log L)$ and $O(\log L/L)$ terms. 
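The $\lambda\rightarrow\pm\infty$ limits of the $O(\log L)$ prefactor in Eq.~\eqref{kinkXX} can be evaluated in a few lines; at half-filling ($k_F=\pi/2$) and $\lambda\to-\infty$ one has $\mathrm{arctanh}(i)=i\pi/4$, so the prefactor reduces to $\frac{2}{\pi^2}(i\pi/4)^2=-\frac{1}{8}$, while for $k_F\neq\pi/2$ it deviates from this value. A minimal sketch (the function name is ours):

```python
import cmath

def logL_prefactor(lam, kF):
    # Prefactor of the O(log L) term in Eq. (kinkXX):
    # (2/pi^2) * Re[ arctanh^2( tanh(lam) * exp(-i kF) ) ]
    z = cmath.atanh(cmath.tanh(lam) * cmath.exp(-1j * kF))
    return (2.0 / cmath.pi**2) * (z * z).real
```

Evaluating at large negative $\lambda$ makes the half-filling agreement, and the mismatch away from it, explicit.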
In particular for $\lambda\rightarrow-\infty$, the FCS in Eq.~\eqref{kinkXX} is proportional to the formation probability of a ferromagnetic domain of longitudinal spins. This spin configuration flows toward a Neumann boundary condition for a free boson compactified on a circle~\cite{SMA}. In this case, as also pointed out in Sec.~\ref{gammazeros}, CFT~\cite{Stephan} predicts that the prefactor of the $O(\log L)$ term is $-1/8$. It is easy to realize that the exact lattice result in Eq.~\eqref{kinkXX} agrees with the field theory conjecture only at half-filling. A similar discrepancy for $k_F\not=\pi/2$ was also pointed out in other comparisons between CFT expectations and lattice calculations for free fermions, see for instance~\cite{ZA, SD}. At half-filling, the limit $\lambda\rightarrow-\infty$ in Eq.~\eqref{logL/L_dw_xx} is also consistent with Eq.~\eqref{CFT_sub1} at $c=1$ if the non-universal extrapolation length is $\xi=1$. In the limit $\lambda\rightarrow\infty$, the domain wall FCS is proportional to the formation probability of an antiferromagnetic domain of longitudinal spins. To the authors' best knowledge, it is not clear to which conformal boundary condition of a free bosonic theory this spin configuration should flow. Nevertheless, the limit $\lambda\rightarrow\infty$ of the prefactor of the $O(\log L)$ term in Eq.~\eqref{kinkXX} still admits a $c=1$ CFT interpretation, while Eq.~\eqref{logL/L_dw_xx} is in agreement with Eq.~\eqref{CFT_sub2}, provided one postulates the presence of a boundary condition changing operator~\cite{Cardy}~with conformal dimension $h_{\text{bcc}}=1/8$. We close this Section by highlighting an exact expression for the probability of formation of a ferromagnetic length-$L$ domain of longitudinal spins at half-filling and zero temperature.
By evaluating the limit $\lambda\rightarrow-\infty$ and $k_F\rightarrow\pi/2$ in Eqs.~\eqref{kinkXX}-\eqref{logL/L_dw_xx}, including the $O(1)$ term in Eq.~\eqref{O1-FH}, we obtain \begin{multline} \label{f_XX_kink} \log\mathcal{E}_{x}(h=0)|_{\gamma=0}= \left(\frac{2 \mathfrak{G}}{\pi}-\log 2\right)L -\frac{1}{8}\log L+\\ \frac{\mathfrak{G}}{\pi} - \frac{9\log 2}{8} + 2 \log[G(3/4)G(5/4)] - \frac{7 \zeta(3)} {8\pi^2}+\\ \frac{1}{8\pi}L^{-1}\log L+O(L^{-1}) \end{multline} where $\zeta(z)$ is the Riemann zeta function. We quote the result in Eq.~\eqref{f_XX_kink} as a mathematical curiosity: the appearance of the Riemann zeta function with odd argument in calculations of formation probabilities in the XXZ spin chain is the leitmotif of Ref.~\cite{BK}. \section{Conclusions} \label{conc} In this paper, we characterized exactly the quantum fluctuations of the transverse magnetization, the staggered magnetization, and the domain walls in the ground state of the XY spin chain. We also derived an analytic expression that captures the behavior of the full counting statistics for the transverse magnetization and the domain walls in the scaling limit, close to the quantum phase transition. The interpolation formula is built from the solution of a Painlev\'e V equation, for which it is possible to write down an explicit power series expansion. The lattice calculations allow a direct verification of the field theoretical conjectures formulated in~\cite{Stephan} for the $O(\log L)$ and $O(\log L/L)$ subleading contributions to the critical formation probabilities. These are extracted as large-coupling ($|\lambda|\rightarrow\infty$) limits of the cumulant generating functions. In particular, we showed that the field theory predictions for the semi-universal $O(\log L/L)$ term do not have an obvious application to the domain walls when $\gamma<1/2$. An analogous issue, already observed in~\cite{ZA, SD}, is found for their critical fluctuations in the XX spin chain away from half-filling.
By determining exactly the domain wall full counting statistics, we have also calculated the probability of observing in the ground state a ferromagnetic and antiferromagnetic domain of transverse and longitudinal spins. Fluctuations of the latter are harder to access, since the order parameter is not a quadratic fermionic form. Our results hinge on the asymptotic expansion of Toeplitz determinants, for which we have also formulated and checked numerically a new conjecture in Appendix~\ref{app_asym}, in particular Eq.~\eqref{block_widom}. The technique is suitable to detect any pattern of order~\cite{NRV} in the transverse direction, by properly modifying the observable $\mathfrak{O}$. \section*{Acknowledgements} MAR thanks CNPq and FAPERJ (grant number 210.354/2018) for partial support. FA and JV are partially supported by the Brazilian Ministries MEC and MCTC, the CNPq (grant number 306209/2019-5) and the Italian Ministry MIUR under the grant PRIN 2017 ``Low-dimensional quantum systems: theory, experiments and simulations''.
\section{Introduction}\label{sec1} The occurrence of neutrino flavour oscillations has been unambiguously demonstrated experimentally~\cite{deSalas:2017kay}. Of special interest here is that it has become clear in recent years that, for the analysis of neutrino oscillations in matter, e.g. the Mikheyev-Smirnov-Wolfenstein (MSW) effect~\cite{1978PhRvD..17.2369W,Mikheyev1986}, refractive effects of neutrinos on themselves, due to the neutrino self-interaction potential, are essential. Over the last two decades, the achievements of experimental neutrino physics and the constant development of observational astronomy have caused an increasing interest in the study of neutrino flavour oscillations in astrophysical sources. Although the bulk of astrophysical analyses has been limited to supernova (SN) neutrinos, flavour oscillations may also occur in other relativistic astrophysical sources. In particular, as we show here, this phenomenon is expected to occur in known scenarios of short- and long-duration GRBs. The emergent picture of GRBs is that both short-duration and long-duration GRBs originate in binary systems (see, e.g., \cite{2016ApJ...832..136R}). Short bursts are associated with mergers of NS-NS and/or NS-BH binaries. For this case, the role of neutrino-antineutrino ($\nu\bar\nu$) annihilation leading to the formation of an electron-positron plasma ($e^{-}e^{+}$) has been introduced in \cite{1992ApJ...395L..83N} (for general relativistic effects, see \cite{2002ApJ...578..310S}). For long bursts, a binary progenitor composed of a carbon-oxygen star (CO$_{\rm core}$) and a companion NS has been introduced~\cite{2012ApJ...758L...7R,2014ApJ...793L..36F}. These binaries can form in an evolutionary path including a first SN explosion, common-envelope phases, tidal interactions and mass loss \cite{2015PhRvL.115w1102F}. The GRB is expected to occur when the binary experiences the second SN, i.e. that of the CO$_{\rm core}$.
Part of the ejected matter produces a hypercritical accretion (i.e. highly super-Eddington) process onto the NS companion. The NS then reaches its critical mass for gravitational collapse, hence forming a rotating BH \cite{2015ApJ...812..100B,2016ApJ...833..107B}. These systems have been called binary-driven hypernovae (BdHNe), and they lead to a variety of observable emissions from the X-rays all the way to high-energy gamma-rays (see e.g. \cite{2019ApJ...886...82R,2019Univ....5..110R} for details). As we show below, a key ingredient in the above systems is a copious emission of neutrinos during the hypercritical accretion. The high neutrino and matter density involved suggests that a study of neutrino oscillations may lead to new neutrino physics in these sources. Our aim here is to compile the main results of neutrino oscillations in the physical conditions expected in the above scenarios of GRBs. \section{Neutrino Oscillations}\label{sec2} To study the flavour evolution of neutrinos within a particular system, a Hamiltonian governing neutrino oscillations must be set up. The relative strength of the potentials appearing in such a Hamiltonian depends on four elements: geometry, mass content, neutrino content and neutrino mass hierarchy. Geometry refers to the nature of net neutrino fluxes and possible gravitational effects. Mass and neutrino content refer to the distribution of leptons of each flavour $(e,\mu,\tau)$ present in the medium. Finally, mass hierarchy refers to the relative values of the masses $m_{1},m_{2},m_{3}$ of the neutrino mass eigenstates.
The equations that govern the evolution of an ensemble of mixed neutrinos are the quantum Liouville equations \begin{subequations} \begin{gather} i\dot{\rho}_{\mathbf{p}} = [H_{\mathbf{p}},\rho_{\mathbf{p}}]\\ i\dot{\bar{\rho}}_{\mathbf{p}} = [\bar{H}_{\mathbf{p}},\bar{\rho}_{\mathbf{p}}]\end{gather}\label{eq:Liouville}\end{subequations} The Hamiltonian is (see, e.g.,~\cite{2018ApJ...852..120B,2019arXiv190901841U} and references therein) \begin{widetext} \begin{subequations} \begin{align} \mathsf{H}_{\mathbf{p},t}&=\Omega_{\mathbf{p},t}+\sqrt{2}G_{F}\!\!\int\!\!\left( l_{\mathbf{q},t}-\bar{l}_{\mathbf{q},t}\right)\left( 1-\mathbf{v}_{\mathbf{q},t}\cdot\mathbf{v}_{\mathbf{p},t} \right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^3} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad + \sqrt{2}G_{F}\!\!\int\!\!\left( \rho_{\mathbf{q},t}-\bar{\rho}_{\mathbf{q},t}\right)\left( 1-\mathbf{v}_{\mathbf{q},t}\cdot\mathbf{v}_{\mathbf{p},t} \right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^{3}}\\ \mathsf{\bar{H}}_{\mathbf{p},t}&=-\Omega_{\mathbf{p},t}+\sqrt{2}G_{F}\!\!\int\!\!\left( l_{\mathbf{q},t}-\bar{l}_{\mathbf{q},t}\right)\left( 1-\mathbf{v}_{\mathbf{q},t}\cdot\mathbf{v}_{\mathbf{p},t} \right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^3} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad + \sqrt{2}G_{F}\!\!\int\!\!\left( \rho_{\mathbf{q},t}-\bar{\rho}_{\mathbf{q},t}\right)\left( 1-\mathbf{v}_{\mathbf{q},t}\cdot\mathbf{v}_{\mathbf{p},t} \right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^{3}} \end{align}\label{eq:FullHam}\end{subequations}\end{widetext} In these equations $\rho_{\mathbf{p}}$ ($\bar{\rho}_{\mathbf{p}}$) is the matrix of occupation numbers $(\rho_{\mathbf{p}})_{ij}=\langle a^{\dagger}_{j}a_{i}\rangle_\mathbf{p}$ for neutrinos ($(\bar{\rho}_{\mathbf{p}})_{ij}=\langle \bar{a}^{\dagger}_{i}\bar{a}_{j}\rangle_\mathbf{p}$ for antineutrinos), for each momentum $\mathbf{p}$ and flavours $i,j$. 
The diagonal elements are the distribution functions $f_{\nu_{i}\left(\bar{\nu}_{i}\right)}\left(\mathbf{p}\right)$ such that their integration over the momentum space gives the neutrino number density $n_{\nu_{i}}$ of a determined flavour $i$. The off-diagonal elements provide information about the \emph{overlapping} between the two neutrino flavours. $\Omega_{\mathbf{p}}$ is the matrix of vacuum oscillation frequencies, $l_{\mathbf{p}}$ and $\bar{l}_{\mathbf{p}}$ are matrices of occupation numbers for charged leptons built in a similar way to the neutrino matrices, and $\mathbf{v}_{\mathbf{p}}=\mathbf{p}/ p$ is the velocity of a particle with momentum $\mathbf{p}$ (either neutrino or charged lepton). Since the matter in the accretion zone is composed of protons, neutrons, electrons and positrons, $\nu_e$ and $\bar\nu_e$ interact with matter by both charged and neutral currents, while $\nu_\mu$, $\nu_\tau$, $\bar\nu_\mu$ and $\bar\nu_\tau$ interact only by neutral currents. Therefore, the behavior of these states can be clearly divided into electronic and non-electronic, allowing us to use the two-flavour approximation. Within this approximation, $\rho$ in Eq.~(\ref{eq:Liouville}) can be written in terms of Pauli matrices and the polarization vector $\mathsf{P}_\mathbf{p}$ as: \begin{equation} \small \rho_{\mathbf{p}}=\left( \begin{array}{cc} \rho_{ee} & \rho_{ex}\\ \rho_{xe} & \rho_{xx}\\ \end{array}\right)_{\mathbf{p}} = \frac{1}{2}\left(f_{\mathbf{p}}\mathbb{1} +\mathsf{P}_\mathbf{p} \cdot \vec \sigma\right), \label{eq:expansion of rho} \end{equation} where $f_{\mathbf{p}}={\rm Tr}[\rho_{\mathbf{p}}]=f_{\nu_e}(\mathbf{p})+f_{\nu_x}(\mathbf{p})$ is the sum of the distribution functions for $\nu_e$ and $\nu_x$. Note that the $z$ component of the polarization vector obeys \begin{equation} \mathsf{P}^{z}_{\mathbf{p}} =f_{\nu_e}(\mathbf{p})-f_{\nu_x}(\mathbf{p}).
\label{eq:pzeta1} \end{equation} Hence, this component tracks the fractional flavour composition of the system, and appropriately normalizing $\rho_{\mathbf{p}}$ allows us to define survival and mixing probabilities \begin{subequations} \begin{gather} P_{\nu_{e} \leftrightarrow \nu_{e}} = \frac{1}{2}\left( 1 + \mathsf{P}^{z}_{\mathbf{p}} \right),\\ P_{\nu_{e} \leftrightarrow \nu_{x}} = \frac{1}{2}\left( 1 - \mathsf{P}^{z}_{\mathbf{p}} \right). \end{gather}\label{eq:survprobability1} \end{subequations} On the other hand, the Hamiltonian can be written as a sum of three interaction terms: \begin{equation} \mathsf{H} = \mathsf{H}_{\mbox{\footnotesize{vac}}} + \mathsf{H}_{\mbox{\footnotesize{m}}} + \mathsf{H}_{\nu\nu}, \label{neutrinohamiltonian} \end{equation} where $\mathsf{H}$ is the two-flavour Hamiltonian. The first term is the Hamiltonian in vacuum~\cite{Qian:1994wh}: \begin{equation} \mathsf{H}_{\mbox{\footnotesize{vac}}} =\frac{\omega_\mathbf{p}}{2} \left( \begin{array}{cc} -\cos 2\theta & \sin 2\theta\\ \sin 2\theta & \cos 2\theta \\ \end{array}\right) =\frac{\omega_\mathbf{p}}{2} \mathbf{B}\cdot \vec{\sigma} \label{Hvacuum} \end{equation} where $\omega_\mathbf{p} = \Delta m^2/2p$, $\mathbf{B}=(\sin2\theta,0,-\cos 2 \theta)$ and $\theta$ is the smallest neutrino mixing angle in vacuum. The other two terms in Eqs.~(\ref{eq:FullHam}) are special since they make the evolution equations non-linear. Even though the two terms are very similar, the first one can be simplified: we consider that the electrons during the accretion form an isotropic gas; hence, the vector $\mathbf{v}_{\mathbf{q}}$ in the first integral is distributed uniformly on the unit sphere and the factor $\mathbf{v}_\mathbf{q}\cdot\mathbf{v}_\mathbf{p}$ averages to zero.
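As a consistency check of the formalism introduced so far, the vacuum term alone can be integrated numerically: precessing $\mathsf{P}_\mathbf{p}$ around $\mathbf{B}$ with $\mathsf{P}_\mathbf{p}(0)=(0,0,1)$ must reproduce the textbook survival probability $P_{\nu_{e} \leftrightarrow \nu_{e}} = 1-\sin^{2}2\theta\,\sin^{2}\left(\omega_\mathbf{p} t/2\right)$. A minimal sketch (the values of $\omega$ and $\theta$ are illustrative, not taken from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0   # vacuum oscillation frequency (arbitrary units; illustrative)
theta = 0.15  # vacuum mixing angle in radians (illustrative)
B = np.array([np.sin(2 * theta), 0.0, -np.cos(2 * theta)])  # vacuum "magnetic field"

# Precession equation dP/dt = omega * B x P for a single momentum mode
rhs = lambda t, P: omega * np.cross(B, P)
P0 = np.array([0.0, 0.0, 1.0])  # pure nu_e initial state, P^z = 1

t = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(rhs, (t[0], t[-1]), P0, t_eval=t, rtol=1e-9, atol=1e-12)

P_surv = 0.5 * (1.0 + sol.y[2])  # survival probability from P^z
P_analytic = 1.0 - np.sin(2 * theta) ** 2 * np.sin(omega * t / 2.0) ** 2
print(np.max(np.abs(P_surv - P_analytic)))  # agrees to integration accuracy
```

The agreement of the integrated $P^{z}$ with the closed form is a useful sanity check before the matter and self-interaction terms, which make the system non-linear, are switched on.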
After integrating, the matter Hamiltonian is given by: \begin{equation} \mathsf{H}_{\mbox{\footnotesize{m}}} = \frac{\lambda}{2}\left( \begin{array}{cc} 1 & 0\\ 0 & -1 \\ \end{array}\right) =\frac{\lambda}{2} \mathbf{L} \cdot \vec{\sigma} \label{Hmatter} \end{equation} where $\lambda = \sqrt{2}G_{F}\left(n_{e^-} - n_{e^+}\right)$ is the charged current matter potential and $\mathbf{L}=(0,0,1)$. Such a simplification cannot be made with the final term. Since neutrinos are responsible for the energy loss of the infalling material during accretion, they must be escaping the accretion zone and the net neutrino and anti-neutrino flux is non-zero. In this case the factor $\mathbf{v}_\mathbf{q}\cdot\mathbf{v}_\mathbf{p}$ cannot be averaged to zero. At any rate, we can still use Eq.~(\ref{eq:expansion of rho}) and obtain \cite{1992PhLB..287..128P,2016arXiv160704671Z,2016PhRvD..93d5021M}: \begin{equation} \mathsf{H}_{\nu\nu} = \sqrt{2}G_{F}\left[ \int\!\! \left(1- \mathbf{v}_{\mathbf{q}}\cdot\mathbf{v}_{\mathbf{p}}\right) \left(\mathsf{P}_\mathbf{q}-\bar{\mathsf{P}}_\mathbf{q}\right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^3}\right]\cdot \vec{\sigma} \label{Hnunu} \end{equation} Introducing every Hamiltonian term in Eqs.~(\ref{eq:Liouville}), and using the commutation relations of the Pauli matrices, we find the equations of oscillation for neutrinos and anti-neutrinos for each momentum mode $\mathbf{p}$: \begin{widetext} \begin{subequations} \begin{gather} \dot{\mathsf{P}}_\mathbf{p} = \left[ \omega_\mathbf{p} \mathbf{B} + \!\lambda \mathbf{L} + \!\! \sqrt{2}G_{F}\!\!\! \int\!\! \left(1- \mathbf{v}_{\mathbf{q}}\!\cdot\mathbf{v}_{\mathbf{p}}\right) \left(\mathsf{P}_\mathbf{q}-\bar{\mathsf{P}}_\mathbf{q}\right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^3} \right] \times \mathsf{P}_\mathbf{p}\\ \dot{\bar{\mathsf{P}}}_\mathbf{p} = \left[ -\omega_\mathbf{p} \mathbf{B} + \!\lambda \mathbf{L} + \!\! \sqrt{2}G_{F}\!\!\! \int\!\!
\left(1- \mathbf{v}_{\mathbf{q}}\!\cdot\mathbf{v}_{\mathbf{p}}\right) \left(\mathsf{P}_\mathbf{q}-\bar{\mathsf{P}}_\mathbf{q}\right)\frac{d^3\mathbf{q}}{\left(2\pi\right)^3}\right]\times \bar{\mathsf{P}}_\mathbf{p}.\end{gather}\label{eq:Hnu1}\end{subequations} \end{widetext} This set of equations is the starting point of any analysis of neutrino oscillations in an astrophysical system. In the next sections we apply it to the accretion processes expected in the BdHN scenario. \subsection{Neutrino Oscillation in Spherical Accretion}\label{sec2.1} In the BdHN scenario of GRBs, the SN material first reaches the gravitational capture region of the NS companion, namely the Bondi-Hoyle region. The infalling material shocks as it piles up onto the NS surface, forming an accretion zone where it compresses and eventually becomes sufficiently hot to trigger a highly efficient neutrino emission process. Neutrinos take away most of the infalling matter's gravitational energy gain, letting it reduce its entropy and be incorporated into the NS. It was shown in~\cite{2016ApJ...833..107B} that the matter in the accretion zone near the NS surface develops conditions of temperature and density such that it is in a non-degenerate, relativistic, hot plasma state. The most efficient neutrino emission channel under those conditions becomes the electron-positron pair annihilation process. The neutrino emissivity can be approximated with very good accuracy by~\cite{2001PhR...354....1Y}:
\begin{widetext} \begin{equation} \varepsilon^{m}_{i} \approx \frac{2G^{2}_{F}\left(T\right)^{8+m}}{9\pi^{5}}C^{2}_{+,i}\left[\mathcal{F}_{m+1,0}\left(\eta_{e^{+}}\right)\mathcal{F}_{1,0}\left(\eta_{e^{-}}\right) + \mathcal{F}_{m+1,0}\left(\eta_{e^{-}}\right)\mathcal{F}_{1,0}\left(\eta_{e^{+}}\right)\right]\label{eq:approximationyakovlev} \end{equation} \end{widetext} where $\mathcal{F}_{k,\ell}\left(y, \eta \right)$ are the generalized Fermi functions (see \cite{2018ApJ...852..120B} for details) and $\mathcal{F}_{k,\ell}\left( \eta \right) = \mathcal{F}_{k,\ell}\left( y=0, \eta \right)$. For $m=0$ and $m=1$ Eq. (\ref{eq:approximationyakovlev}) gives the neutrino and anti-neutrino number emissivity (neutrino production rate), and the neutrino and anti-neutrino energy emissivity (energy per unit volume per unit time) for a certain flavour $i$, respectively. Using Eq.~(\ref{eq:approximationyakovlev}) we find that the ratio of emission rates between electronic and non-electronic neutrino flavours obeys the relation \begin{equation} \frac{\varepsilon^{0}_{e}}{\varepsilon^{0}_{x}} \approx \frac{7}{3}, \label{eq:neutrinoratio} \end{equation} and because of the symmetry of the annihilation process, the neutrinos and anti-neutrinos are produced in equal quantities. We can also find an expression for the average neutrino energy \begin{equation} \langle E_{\nu} \rangle = \langle E_{\bar{\nu}} \rangle \approx 4.1\,T \label{eq:neutrinotwomoments} \end{equation} for all neutrino flavours. The neutrino energy emissivity in Eq.~(\ref{eq:approximationyakovlev}) can be written as \begin{equation}\label{eq:L_neutrinos} \epsilon_{e^{-}\!e^{+}} \approx 8.69\times 10^{30}\left(\frac{T}{1\,{\rm MeV}}\right)^9\,\, {\rm MeV}\,{\rm cm}^{-3}\,{\rm s}^{-1}, \end{equation} which allows us to define an effective neutrino emission region \cite{2018ApJ...852..120B} \begin{equation} \Delta r_{\nu} = \frac{\epsilon_{e^{-}\!e^{+}}}{\nabla \epsilon_{e^{-}\!e^{+}}} \approx 0.08R_{\rm NS},
\label{neutrinoshell} \end{equation} with $R_{\rm NS}$ the radius of the NS. Collecting these results, we can make another simplifying assumption~\cite{2018ApJ...852..120B}: since the neutrino emission region is thin, we will consider it as a spherical shell. This allows us to use the single-angle approximation~\cite{Duan:2006an,Dasgupta:2007ws} and simplify the last term in Eq.~(\ref{eq:Hnu1}), precisely the \emph{multi-angle} term, which is responsible for kinematic decoherence~\cite{Hannestad:2006nj,Raffelt:2007yz,Fogli:2007bk}. With the single-angle approximation and the inverse square law of flux dilution it is possible to find the explicit dependence on $r$ of each of the potentials in Eq.~(\ref{eq:Hnu1}), namely \begin{table*} \begin{adjustbox}{width=2\columnwidth,center} \begin{tabular}{c c c c c c c c c c c c c c c} \hline $\dot{M}$\T\B & $\rho$ & $T$ & $\eta_{e^{\mp}}$ & $n_{e^{-}}\!-n_{e^{+}}$ & $T_{\nu\bar{\nu}}$ & $\langle E_\nu \rangle$ & $F^{C}_{\nu_e,\bar{\nu}_e}$ & $F^{C}_{\nu_x,\bar{\nu}_x}$ & $n^{C}_{\nu_{e}\bar{\nu}_{e}}$ & $n^{C}_{\nu_{x}\bar{\nu}_{x}}$ & $\sum_{i}\,n^{C}_{\nu_{i}\bar{\nu}_{i}}$ \\ $(M_\odot$~s$^{-1}$)\B & (g~cm$^{-3})$ & (MeV) & & (cm$^{-3}$) & (MeV) & (MeV) & (cm$^{-2}$s$^{-1}$) & (cm$^{-2}$s$^{-1}$) & (cm$^{-3})$ & (cm$^{-3})$ & (cm$^{-3})$ \\ \hline $10^{-6}$\T & $1.12\times10^{7}$ & 2.59 & $\mp 0.193$ & $3.38\times10^{30}$ & 2.93 & 10.61 & $2.40\times 10^{38}$ & $1.03\times 10^{38}$ & $1.60\times10^{28}$ & $6.90\times10^{27}$ & $2.29\times10^{28}$ \\ $10^{-5}$ & $3.10\times10^{7}$ & 3.34 & $\mp 0.147$ & $9.56\times10^{30}$ & 3.78 & 13.69 & $1.84\times 10^{39}$ & $7.87\times 10^{38}$ & $1.23\times10^{29}$ & $5.20\times10^{28}$ & $1.75\times10^{29}$ \\ $10^{-4}$ & $8.66\times10^{7}$ & 4.30 & $\mp 0.111$ & $2.61\times10^{31}$ & 4.87 & 17.62 & $1.39\times 10^{40}$ & $5.94\times 10^{39}$ & $9.24\times10^{29}$ & $3.96\times10^{29}$ & $1.32\times10^{30}$ \\ $10^{-3}$ & $2.48\times10^{8}$ & 5.54 & $\mp 0.082$ & $7.65\times10^{31}$ &
6.28 & 22.70 & $1.04\times 10^{41}$ & $4.51\times 10^{40}$ & $7.00\times10^{30}$ & $3.00\times10^{30}$ & $1.00\times10^{31}$ \\ $10^{-2}$ & $7.54\times10^{8}$ & 7.13 & $\mp 0.057$ & $2.27\times10^{32}$ & 8.08 & 29.22 & $7.92\times 10^{41}$ & $3.39\times 10^{41}$ & $5.28\times10^{31}$ & $2.26\times10^{31}$ & $7.54\times10^{31}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Characteristics inside the neutrino emission zone for selected values of the accretion rate $\dot{M}$. The symbols $F^{C}$ and $n^{C}$ refer to the neutrino flux and neutrino density at the emission region. The electron fraction is $Y_{e}=0.5$ and the pinching parameter for the neutrino spectrum is $\eta_{\nu\bar{\nu}}=2.04$.} \label{tab:tab1} \end{table*} \begin{widetext} \begin{equation} \omega_{p,r} =\! \frac{\Delta m^{2}}{2p\langle v_{r} \rangle},\,\, \lambda_{r} \!=\! \sqrt{2}G_{F}\left(n_{e^{-}}\!-n_{e^{+}}\right)\frac{1}{\langle v_{r} \rangle}, \,\, \mu_{r} \!=\!\frac{ \sqrt{2}G_{F}}{2}\left(\sum_{i\in\{e,x\}}\!n^{C}_{\nu_{i}\bar{\nu}_{i}}\right)\left( \frac{R_{\rm NS}}{r} \right)^{2}\left( \frac{1 - \langle v_{r} \rangle^{2}}{\langle v_{r} \rangle} \right), \label{eq:potentials} \end{equation} \end{widetext} where \begin{equation} \langle v_{r} \rangle = \frac{1}{2}\left[ 1+\sqrt{1 - \left(\frac{R_{\rm NS}}{r}\right)^{2}} \right]. \label{eq:averageradialvelocity} \end{equation} Using Eq.~(\ref{eq:approximationyakovlev}) and the hydrodynamic simulations in~\cite{2016ApJ...833..107B} we can obtain the thermodynamic properties of the accreting matter at the NS surface (see Table~\ref{tab:tab1}), which in turn are the initial condition to solve Eq.~(\ref{eq:Hnu1}) and obtain an approximate behaviour of oscillations.
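To get a feeling for the hierarchy of the potentials in Eq.~(\ref{eq:potentials}), one can evaluate them at the NS surface ($r=R_{\rm NS}$, so $\langle v_{r}\rangle = 1/2$) with the $\dot{M}=10^{-2}\,M_{\odot}$~s$^{-1}$ row of Table~\ref{tab:tab1}. The sketch below assumes the atmospheric splitting $\Delta m^{2}\approx 2.5\times 10^{-3}$~eV$^{2}$ and estimates the bipolar oscillation length with the standard $\sim 2\pi/\sqrt{2\omega\mu}$ scaling; it is an order-of-magnitude illustration, not a result from the text:

```python
import math

hbar_c = 1.973e-11           # MeV*cm
G_F = 1.166e-11 * hbar_c**3  # Fermi constant: 1.166e-11 MeV^-2 expressed in MeV*cm^3

# Table 1, Mdot = 1e-2 Msun/s row
n_e   = 2.27e32              # n_{e-} - n_{e+}  [cm^-3]
n_nu  = 7.54e31              # total neutrino + anti-neutrino density [cm^-3]
E_avg = 29.22                # <E_nu> [MeV]

dm2 = 2.5e-15                # atmospheric Delta m^2 ~ 2.5e-3 eV^2 in MeV^2 (assumed)
v   = 0.5                    # <v_r> at r = R_NS

omega = dm2 / (2.0 * E_avg * v)                                 # vacuum term [MeV]
lam   = math.sqrt(2.0) * G_F * n_e / v                          # matter term [MeV]
mu    = (math.sqrt(2.0) * G_F / 2.0) * n_nu * (1.0 - v**2) / v  # self-interaction [MeV]

# Bipolar oscillation length ~ 2*pi*hbar_c/sqrt(2*omega*mu), converted to km
l_bipolar_km = 2.0 * math.pi * hbar_c / math.sqrt(2.0 * omega * mu) / 1.0e5
print(omega, lam, mu, l_bipolar_km)
```

With these numbers $\lambda_{r}$ and $\mu_{r}$ exceed $\omega_{p,r}$ by more than five orders of magnitude, and the bipolar length comes out at a few tens of meters, of the same order as the oscillation lengths obtained from the full numerical solutions.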
\begin{figure*} \centering \textbf{\large Inverted Hierarchy}\par\medskip \includegraphics[width=0.42\hsize,clip]{SurvIHm2real.pdf}\includegraphics[width=0.42\hsize,clip]{SurvIHm3real.pdf}\\ \includegraphics[width=0.42\hsize,clip]{SurvIHm4real.pdf}\includegraphics[width=0.42\hsize,clip]{SurvIHm55real.pdf} \caption{Neutrino flavour evolution for inverted hierarchy. Electron neutrino survival probability is shown as a function of the radial distance from the NS surface. The curves for the electron anti-neutrino match the ones for electron neutrinos.} \label{fig:singleangle} \end{figure*} \begin{figure*} \centering \textbf{\large Normal Hierarchy}\par\medskip \includegraphics[width=0.42\hsize,clip]{survnhm2ttotalreal.pdf}\includegraphics[width=0.42\hsize,clip]{survnhm3ttotalreal.pdf}\\ \includegraphics[width=0.42\hsize,clip]{SurvNHm4real.pdf}\includegraphics[width=0.42\hsize,clip]{SurvNHm55real.pdf} \caption{Electron neutrino and anti-neutrino flavour evolution for normal hierarchy. The survival probability is shown as a function of the radial distance from the NS surface.} \label{fig:singleanglesolutions} \end{figure*} In Figs.~\ref{fig:singleangle} and \ref{fig:singleanglesolutions} we show the solution of Eqs.~(\ref{eq:Hnu1}) for both normal and inverted hierarchies using a monochromatic spectrum dominated by the average neutrino energy for $\dot{M}=10^{-2},10^{-3},10^{-4}$ and $5 \times 10^{-5} M_{\odot}$~s$^{-1}$. For the inverted hierarchy, there is no difference between the neutrino and anti-neutrino survival probabilities. This should be expected since for these values of $r$ the matter and self-interaction potentials are much larger than the vacuum potential, and there is virtually no difference between Eqs.~(\ref{eq:Hnu1}). Also, note that the anti-neutrino flavour proportions in Tab.~\ref{tab:tab1} remain virtually unchanged for normal hierarchy while the neutrino flavour proportions change drastically around the point $\lambda_{r} \sim \omega_{r}$. 
From these solutions we can calculate the oscillation length to be \begin{equation} t_{\rm osc} \approx (0.05-1) \,\,{\rm km}, \label{length} \end{equation} which agrees with the algebraic estimates in~\cite{Hannestad:2006nj,Raffelt:2007yz}. Clearly, the full equations of oscillations are highly non-linear, so the solution may not reflect the real neutrino flavour evolution. Concerning the single-angle approximation, it is discussed in \cite{Hannestad:2006nj,Raffelt:2007yz,Fogli:2007bk} that in the more realistic multi-angle approach, kinematic decoherence happens for both mass hierarchies. In \cite{EstebanPretel:2007ec} the conditions for decoherence as a function of the neutrino flavour asymmetry have been discussed. It is concluded that if the symmetry of neutrinos and anti-neutrinos is broken beyond the limit of $O(25\%)$, i.e., if the difference between emitted neutrinos and anti-neutrinos is roughly larger than 25\% of the total number of neutrinos in the medium, decoherence becomes a sub-dominant effect. As a direct consequence of the peculiar symmetric situation we are dealing with, in which neutrinos and anti-neutrinos are produced in similar numbers, bipolar oscillations happen and, as we have already discussed, they present very small oscillation lengths, as shown in Eq.~(\ref{length}). Note also that the bipolar oscillation length depends on the neutrino energy. Therefore, the resulting process is equivalent to an averaging over the neutrino energy spectrum and an equipartition among different neutrino flavours is expected~\cite{Raffelt:2007yz}. Although, for simplicity, we are dealing with the two-neutrino hypothesis, this behavior is easily extended to the more realistic three-neutrino situation. We assume, therefore, that at a few kilometers from the emission region neutrino flavour equipartition is a reality: \begin{equation} \nu_e:\nu_\mu:\nu_\tau=1:1:1.
\label{eq:proportion} \end{equation} After leaving the emission region, beyond $r\approx R_{\rm NS}+\Delta r_{\nu}$, where $\Delta r_{\nu}$ is the width defined in Eq.~(\ref{neutrinoshell}), the effective neutrino density quickly falls with the asymptotic behaviour $\mu_{r} \propto 1/r^{4}$. The decay of $\lambda_{r}$ is slower. Hence, very soon the neutrino flavour evolution is determined by the matter potential. Matter suppresses neutrino oscillations and we do not expect significant changes in the neutrino flavour content along a large region. Nevertheless, the matter potential eventually becomes so small that there is a region along the neutrino trajectory in which it is comparable to the neutrino vacuum frequencies, and the higher and lower resonant density conditions are satisfied. Using the results in~\cite{Fogli:2003dw,2018ApJ...852..120B} we can include the matter effects and compare in Tab.~\ref{tab:tabfluxes} the flavour content at the emission region and after decoherence and the MSW resonance. Finally, we note that for accretion rates $\dot{M} < 5\times\!10^{-5}M_{\odot}$~s$^{-1}$, either the matter potential is close enough to the vacuum potential and the MSW condition is satisfied, or both the self-interaction and matter potentials are so low that the flavour oscillations are due only to the vacuum potential. In both cases, bipolar oscillations are not present~\cite{2018ApJ...852..120B}. Without bipolar oscillations, it is not possible to guarantee that decoherence will be complete and Eq.~(\ref{eq:proportion}) is no longer valid.
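The higher and lower resonant densities mentioned above follow from the MSW condition $\lambda \simeq \omega_{p}$, i.e. $n_{e^{-}}-n_{e^{+}} \simeq \Delta m^{2}\cos 2\theta /(2\sqrt{2}G_{F}E)$, evaluated with the atmospheric and solar mass splittings, respectively. A small numerical sketch (standard oscillation parameters and a typical energy are assumed; they are not values quoted in the text):

```python
import math

hbar_c = 1.973e-11           # MeV*cm
G_F = 1.166e-11 * hbar_c**3  # Fermi constant in MeV*cm^3

def n_resonance(dm2_MeV2, cos2theta, E_MeV):
    """Electron density (cm^-3) satisfying the MSW resonance condition."""
    return dm2_MeV2 * cos2theta / (2.0 * math.sqrt(2.0) * G_F * E_MeV)

E = 20.0  # typical neutrino energy [MeV] (illustrative)
n_H = n_resonance(2.5e-15, math.cos(2 * math.radians(8.6)), E)   # atmospheric, theta_13
n_L = n_resonance(7.4e-17, math.cos(2 * math.radians(33.6)), E)  # solar, theta_12

# Corresponding matter densities assuming Y_e = 0.5
rho_H = n_H / (0.5 * 6.022e23)  # g cm^-3
rho_L = n_L / (0.5 * 6.022e23)
print(n_H, n_L, rho_H, rho_L)
```

For $E \sim 20$~MeV this places the higher resonance at matter densities of order $10^{3}$~g~cm$^{-3}$ and the lower one at tens of g~cm$^{-3}$, both of which can be crossed along the neutrino trajectory as the density of the ejecta falls.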
\begin{table*} \begin{adjustbox}{width=2\columnwidth,center} \begin{tabular}{c c c c c c c c c c c c} \hline & $n^{0}_{\nu_{e}}/n$\T\B & $n^{0}_{\bar{\nu}_{e}}/n$\T\B & $n^{0}_{\nu_{x}}/n$\T\B & $n^{0}_{\bar{\nu}_{x}}/n$\T\B & $n_{\nu_{e}}/n$\T\B & $n_{\bar{\nu}_{e}}/n$\T\B & $n_{\nu_{x}}/n$\T\B & $n_{\bar{\nu}_{x}}/n$\T\B \\ \hline\hline Normal Hierarchy\T\B & $\frac{1}{6}$\T & $\frac{1}{6}$\T & $\frac{1}{3}$\T & $\frac{1}{3}$\T & $\frac{1}{3}$\T & $\frac{1}{6} + \frac{1}{6}\sin^{2}\theta_{12}$\T & $\frac{1}{6}$\T & $\frac{1}{3}-\frac{1}{6}\sin^{2}\theta_{12}$\T \\ \hline Inverted Hierarchy\T\B & $\frac{1}{6}$\T & $\frac{1}{6}$\T & $\frac{1}{3}$\T & $\frac{1}{3}$\T & $\frac{1}{6} + \frac{1}{6}\cos^{2}\theta_{12}$\T & $\frac{1}{3}$\T & $\frac{1}{3}-\frac{1}{6}\cos^{2}\theta_{12}$\T & $\frac{1}{6}$\T \\ \hline \end{tabular} \end{adjustbox} \caption{Fraction of neutrinos and anti-neutrinos of each flavour at the emission region (superscript $0$) and after decoherence and matter effects. $n=2\sum_{i}n_{\nu_{i}}$.} \label{tab:tabfluxes} \end{table*} \subsection{Neutrino Oscillations in Accretion Disks}\label{sec2.2} In the same BdHN scenario of Sec.~\ref{sec2}, part of the SN ejecta remains bound to the newborn Kerr BH, forming an accretion disk~\cite{2019ApJ...871...14B}. In order to study analytically the properties of accretion disks, different models make approximations that allow casting the physics of an accretion disk as a two- or even one-dimensional problem. Here, we will consider neutrino-cooled accretion disks (NCADs), which are steady-state~\cite{2019arXiv190901841U,1974ApJ...191..499P}, axisymmetric, thin, alpha-disk models with the following parameters: $\dot{M}$ the accretion rate, $\alpha$ the alpha-viscosity and $a$ the spin of the BH~\cite{2019arXiv190901841U,1973A&A....24..337S,1973blho.conf..343N,1974ApJ...191..499P,1999agnc.book.....K,1999tbha.book.....A,2007ApJ...657..383C,LIU20171}. The procedure to analyse the dynamics of oscillations is similar to the one in Sec.~\ref{sec2.1}.
The first step is to find the neutrino flavour distributions to establish the initial conditions for Eq.~(\ref{eq:Hnu1}), then we have to find each of the potentials and finally solve the equation. To do this we first solve the hydrodynamic model in the absence of oscillations. \begin{figure*} \centering \includegraphics[width=0.315\hsize,clip]{Densities.pdf}\includegraphics[width=0.315\hsize,clip]{ElectronDensities.pdf}\includegraphics[width=0.315\hsize,clip]{Temperatures.pdf}\\ \includegraphics[width=0.315\hsize,clip]{NeutrinoDensities1.pdf}\includegraphics[width=0.315\hsize,clip]{NeutrinoDensities2.pdf}\includegraphics[width=0.315\hsize,clip]{NeutrinoDensities3.pdf} \includegraphics[width=0.315\hsize,clip]{NeutrinoEnergies1.pdf}\includegraphics[width=0.315\hsize,clip]{NeutrinoEnergies2.pdf}\includegraphics[width=0.315\hsize,clip]{NeutrinoEnergies3.pdf}\caption{Properties of accretion disks in the absence of oscillations with $M=3M_{\odot}$, $\alpha = 0.01$, $a = 0.95$ for accretion rates $\dot{M} = 1M_{\odot}$ s$ ^{-1}$, $\dot{M} = 0.1M_{\odot}$ s$ ^{-1}$ and $\dot{M} = 0.01M_{\odot}$ s$ ^{-1}$, respectively.} \label{fig:Disks} \end{figure*} In Fig.~\ref{fig:Disks} we show the neutrino number densities and energies inside the disk. Note that the energies of the neutrinos are comparable to the ones in spherical accretion (Sec.~\ref{sec2.1}) and the numbers of neutrinos and anti-neutrinos are equal. There is also a significant excess of electron neutrinos over non-electron neutrinos. However, there are several key differences that make the analysis in accretion disks more complex. First, an accretion disk has an effective thickness $H$ and neutrinos can be produced at any point inside the disk. This means that it is not possible to set a \emph{surface of emission} as before, and the lack of spherical symmetry does not allow us to use the single-angle approximation.
Second, close to the BH the effects of curvature may not be negligible, implying that in Eq.~(\ref{eq:Liouville}), when applying the Liouville operator, a term proportional to the rate of change of the neutrino momentum $\dot{\mathbf{p}}$ may be present. To simplify the equations of oscillation we consider the local rest frame of the disk (see \cite{1972ApJ...178..347B,1974ApJ...191..507T} for details) and make a set of assumptions: \begin{figure*} \centering \includegraphics[width=0.9\hsize,clip]{SurvSev.pdf} \caption{Survival probability for electron neutrinos and anti-neutrinos for the accretion disk with $\dot{M}=0.1M_{\odot}$ s$ ^{-1}$ at $r=9r_{s},10r_{s},11r_{s},12r_{s}$.} \label{fig:SurvSev} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\hsize,clip]{CompD.pdf}\includegraphics[width=0.49\hsize,clip]{CompT.pdf} \caption{Comparison of density and temperature between thin disks with and without neutrino flavour equipartition for selected accretion rates.} \label{fig:comp} \end{figure*} \begin{enumerate}[i] \item Due to axial symmetry, the neutrino density is constant along the $\mathbf{z}$ direction. Moreover, since neutrinos follow null geodesics, we can set $\dot{p}_{z} \approx \dot{p}_{\phi}=0$. Also, within the thin disk approximation, the neutrino and matter densities are constant along the $\mathbf{y}$ direction and the momentum change due to curvature along this direction can be neglected, that is, $\dot{p}_{y} \approx 0$. \item In the local rest frame of the disk, the normalized radial momentum of a neutrino can be written as $p_{x} = \pm \frac{r}{\sqrt{r^2-2Mr+M^2a^2}}$ (see \cite{2019arXiv190901841U} for details). Hence, the typical scale of the change of momentum with radius is $\Delta r_{p_{x},\text{eff}} = \left\vert \frac{d\ln{p_{x}}}{dr} \right\vert^{-1} = \frac{r\left(r^2 -2Mr+M^2a^2\right)}{M\left(Ma^2 - r\right)}$, which obeys $\Delta r_{p_{x},\text{eff}} > r_{s}$ for $r > 2 r_{\text{in}}$.
This means that we can assume $\dot{p}_{x} \approx 0$ up to regions very close to the inner edge of the disk. \item We define an effective distance $\Delta r_{\rho,\text{eff}} = \left\vert\frac{d \ln\left(n_{e^{-}}-n_{e^{+}} \right)}{dr}\right\vert^{-1}$. For all the systems we evaluated, we found that it is comparable to the height of the disk $(\Delta r_{\rho,\text{eff}}\sim 2-5$~$r_{s}$). This means that at any point of the disk we can calculate neutrino oscillations in small regions, assuming that both the electron density and neutrino densities are constant. \item We neglect energy and momentum transport between different regions of the disk by neutrinos that are recaptured by the disk due to curvature. This assumption is reasonable except for regions very close to the BH, but it is consistent with the thin disk model (see, e.g.,~\cite{1974ApJ...191..499P}). We also assume initially that the neutrino content of neighbouring regions of the disk (different values of $r$) does not affect each other. As a consequence of the results discussed above, we assume that at any point inside the disk and at any instant of time an observer in the local rest frame can describe both the charged leptons and neutrinos as isotropic gases around small enough regions of the disk. \end{enumerate} \begin{figure*} \centering \includegraphics[width=0.49\hsize,clip]{testo1.pdf} \includegraphics[width=0.49\hsize,clip]{testo3.pdf}\caption{Total optical depth (left scale) and mean free path (right scale) for neutrinos and anti-neutrinos of both flavours for accretion disks with $\dot{M} = 1M_{\odot}$~s$ ^{-1}$ and $0.01M_{\odot}$~s$ ^{-1}$, between the inner radius and the ignition radius.} \label{fig:opticaldepth} \end{figure*} All assumptions are sensible except iv, which is considerably restrictive. However, we can build our analysis on top of it and use the same results of Sec.~\ref{sec2.1} to generalize the model.
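Assumption ii can be checked directly. In geometric units ($G=c=1$, $r_{s}=2M$) and for $a=0.95$, the scale $\Delta r_{p_{x},\text{eff}}$ stays above $r_{s}$ everywhere beyond $2r_{\text{in}}$; the sketch below takes $r_{\text{in}}\approx 1.94M$ (the ISCO radius for this spin, an assumed value) as the inner edge:

```python
def dr_eff(r, M=1.0, a=0.95):
    """Effective scale |d ln p_x / dr|^-1 from the local-rest-frame radial momentum."""
    return abs(r * (r**2 - 2.0 * M * r + (M * a)**2) / (M * (M * a**2 - r)))

M, a = 1.0, 0.95
r_s = 2.0 * M    # Schwarzschild radius in geometric units
r_in = 1.94 * M  # inner edge, taken at the ISCO for a = 0.95 (assumed)

# Sample radii from r = 2 r_in outward
radii = [2.0 * r_in + 0.1 * k for k in range(200)]
print(min(dr_eff(r, M, a) for r in radii))  # stays well above r_s
```

The minimum of $|\Delta r_{p_{x},\text{eff}}|$ over this range is several $r_{s}$, which justifies neglecting $\dot{p}_{x}$ except very close to the inner edge.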
Note that with our assumptions, the last term in Eq.~(\ref{eq:Hnu1}) is again simplified. When we calculate the oscillations at different points of the disk (see Fig.~\ref{fig:SurvSev}) we obtain fast flavour transformations with oscillation lengths of the order \begin{equation} t_{\rm osc} \approx 10^{-6}\, {\rm s}. \label{eq:timeosc2} \end{equation} Keeping this in mind, and given the symmetry between neutrinos and anti-neutrinos in Fig.~\ref{fig:Disks}, we note that in~\cite{EstebanPretel:2007ec} it was shown that if the symmetry between neutrinos and anti-neutrinos is not broken beyond the limit of 25\%, kinematic decoherence is still the main effect of neutrino oscillations. Additionally, in \cite{Raffelt:2007yz} it is shown that for an asymmetric $\nu\bar{\nu}$ gas, even an infinitesimal anisotropy triggers an exponential evolution towards equipartition. Decoherence happens within a few oscillation cycles, so we can expect a steady-state, thin disk model to achieve flavour equipartition. This decoherence is the result of a non-vanishing flux term (which is present in accretion disks due to the increasing density towards the BH), such that at any point (anti-)neutrinos travelling in different directions do not experience the same self-interaction potential, due to the multi-angle term in the integral of Eq.~(\ref{eq:FullHam}). This effect is independent of the neutrino mass hierarchy, and neutrino flavour equipartition is achieved for both hierarchies. Within the disk dynamics, this is equivalent to imposing the condition \begin{equation} \langle P_{\nu_e \to \nu_e} \rangle= \langle P_{\bar{\nu}_e \to \bar{\nu}_e} \rangle = 0.5. \label{eq:equipartition} \end{equation} With this condition, we can compare the behaviour of disks with and without flavour equipartition. Figure~\ref{fig:comp} shows that equipartition increases the disk density and reduces the temperature where the neutrino emission is important.
The effect is mild for low accretion rates while very pronounced for high ones. This result can be explained as follows: for low accretion rates the neutrino optical depth for all flavours is $\tau_{\nu\bar{\nu}} \lesssim 1$ (see Fig.~\ref{fig:opticaldepth}), hence neutrinos, regardless of their flavour, are free to leave the disk. When the initial neutrino content (mainly of electron flavour) is redistributed among both flavours, the total neutrino cooling remains virtually unchanged and the disk evolves as if equipartition had never occurred, save for the new emission flavour content. On the other hand, when accretion rates are high, the optical depth obeys $\tau_{\nu_{x}}\approx\tau_{\bar{\nu}_{x}}\lesssim\tau_{\bar{\nu}_{e}} < \tau_{\nu_{e}} \sim 10^3$. The $\nu_{e}$ cooling is more heavily suppressed than the others. When flavours are redistributed, the \textit{new} $\nu_{x}$ particles are free to escape, enhancing the total cooling with a consequent reduction of the temperature. As the temperature decreases, a lower internal energy allows for a higher matter density. The net impact of flavour equipartition is to make the disk evolution less sensitive to the $\nu_{e}$ opacity. It can be shown (see \cite{2019arXiv190901841U} for details) that it increases the total cooling efficiency by the precise factor \begin{equation} \frac{1}{2}\left(1 + \frac{\langle E_{\nu_{x}} \rangle}{\langle E_{\nu_{e}} \rangle}\frac{1 + \tau_{\nu_{e}}}{1 + \tau_{\nu_{x}}}\right). \label{eq:fluxcomp} \end{equation} The main difference with the previous system is that, for similar accretion rates, the density of the accretion disk can be high enough to impede the escape of neutrinos, or even to trap them within it. However, since electron and non-electron neutrinos have different cross sections, the flavour transformations affect not only the dynamics of the disk, but also the neutrino flavour content emerging from the disk. This, in turn, affects the energy deposition rate of the process $\nu+\bar{\nu}\to e^{-}+e^{+}$.
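The behaviour of the factor in Eq.~(\ref{eq:fluxcomp}) can be illustrated numerically: in the optically thin limit ($\tau_{\nu_{e}}\approx\tau_{\nu_{x}}\approx 0$ and $\langle E_{\nu_{x}}\rangle \approx \langle E_{\nu_{e}}\rangle$) it reduces to unity, so equipartition leaves the cooling unchanged, while for $\tau_{\nu_{e}}\gg\tau_{\nu_{x}}$ it gives a substantial enhancement. The optical depths and the energy ratio below are illustrative placeholders, not values from the text:

```python
def cooling_enhancement(E_ratio, tau_e, tau_x):
    """Factor by which flavour equipartition increases the cooling efficiency:
    (1/2) * (1 + <E_x>/<E_e> * (1 + tau_e)/(1 + tau_x))."""
    return 0.5 * (1.0 + E_ratio * (1.0 + tau_e) / (1.0 + tau_x))

# Optically thin limit: equipartition changes nothing
f_thin = cooling_enhancement(E_ratio=1.0, tau_e=0.0, tau_x=0.0)

# High accretion rate: nu_e heavily trapped, nu_x escapes (illustrative values)
f_thick = cooling_enhancement(E_ratio=1.2, tau_e=1.0e3, tau_x=1.0e2)
print(f_thin, f_thick)
```

This mirrors the discussion above: the correction is negligible at low accretion rates and large when the $\nu_{e}$ gas is trapped.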
In particular, the change in flavour content leads to a deficit of electron neutrinos and a smaller energy deposition rate with respect to previous estimates that do not account for flavour oscillations inside the disk. The exact value of the reduction factor depends on the $\nu_{e}$ and $\nu_{x}$ optical depths, but it can be as high as $\sim 5$. We refer the reader to \cite{2019arXiv190901841U} for further details on this subject. \section{Concluding Remarks}\label{sec3} We have outlined the implications of neutrino oscillations in two different accreting systems within the BdHN scenario of GRBs. In both spherical accretion and disk accretion, the emission of neutrinos is a crucial ingredient, since they act as the main cooling process that allows the accretion onto the NS (or onto the BH) to proceed at very high rates of up to $1~M_\odot$~s$^{-1}$. Also, the ambient conditions of density and temperature imply the occurrence of neutrino flavour oscillations, with a relevant role played by neutrino self-interactions. We have seen that in spherical accretion the density of neutrinos on top of the NS implies that neutrino self-interactions dominate the flavour evolution, leading to collective effects. The latter induce quick flavour conversions with an oscillation length as small as $(0.05$--$1)$~km. Far from the NS surface, the neutrino density decreases, and so the matter potential and MSW resonances dominate the flavour oscillations. Owing to the above, the neutrino flavour content emerging from the system is completely different from the one created at the bottom of it, namely on top of the NS accreting surface. Concerning disk accretion onto a BH, we saw that the number densities of electron neutrinos and anti-neutrinos are very similar. As a consequence of this particular environment, very fast pair conversions, $\nu_{e}\bar{\nu}_{e} \rightleftharpoons \nu_{x}\bar{\nu}_{x}$, induced by bipolar oscillations, are obtained for the inverted mass hierarchy case with high oscillation frequencies. 
However, due to the interaction between neighbouring regions of the disk, the onset of kinematic decoherence with a timescale comparable to the oscillation length induces flavour equipartition between electronic and non-electronic neutrinos throughout the disk. Therefore, the neutrino content emerging from the disk is very different from the one that is usually assumed (see, e.g., \cite{2012PhRvD..86h5015M,2016PhRvD..93l3004L}). Flavour equipartition, while leaving anti-neutrino cooling practically unchanged, enhances neutrino cooling by allowing the energy contained within the $\nu_{e}$ gas (and partially trapped inside the disk due to its high opacity) to escape in the form of $\nu_{x}$, rendering the disk insensitive to the electron neutrino opacity. The variation of the flavour content in the emission flux implies a loss in the electron neutrino luminosity and an increase in the non-electron neutrino luminosity and $L_{\bar{\nu}_{e}}$. As a consequence, the total energy deposition rate of the process $\nu+\bar{\nu}\to e^{-}+e^{+}$ is reduced. These results are only a first step towards the analysis of neutrino oscillations in a novel relativistic astrophysics context that can have an impact on a wide range of astrophysical phenomena: from $e^{-}e^{+}$ plasma production above BHs in GRB models, to r-process nucleosynthesis in disk winds and possible MeV neutrino detectability. \bibliographystyle{ieeetr}
\subsection{Graph Convolutional Networks on Unsigned Graphs} Graph convolutional network (GCN)~\citep{DBLP:conf/iclr/KipfW17} models the latent representation of a node by employing a convolutional operation on the features of its neighbors. Various GCN-based approaches~\citep{DBLP:conf/iclr/KipfW17,DBLP:conf/iclr/VelickovicCCRLB18,hamilton2017inductive} have attracted considerable attention since they enable diverse graph supervised tasks~\citep{DBLP:conf/iclr/KipfW17,yao2019graph,DBLP:conf/iclr/XuHLJ19} to be performed concisely under an end-to-end framework. However, the first generation of GCN models exhibits performance degradation due to the over-smoothing and vanishing gradient problems. Several works~\citep{li2018deeper,oono2020graph} have theoretically revealed the over-smoothing problem. Also, Li et al.~\citep{li2019deepgcns} have empirically shown that stacking more GCN layers leads to the vanishing gradient problem as in convolutional neural networks~\citep{he2016deep}. Consequently, most GCN-based models~\citep{DBLP:conf/iclr/KipfW17,DBLP:conf/iclr/VelickovicCCRLB18,hamilton2017inductive} are shallow; i.e., they do not use the feature information of faraway nodes when modeling node embeddings. A recent research direction aims at resolving this limitation. Klicpera et al.~\citep{DBLP:conf/iclr/KlicperaBG19} proposed APPNP, which exploits Personalized PageRank~\citep{jeh2003scaling} to not only propagate hidden node embeddings far but also preserve local features, thereby preventing aggregated features from being over-smoothed. Li et al.~\citep{li2019deepgcns} suggested ResGCN, which adds skip connections between GCN layers, as in ResNet~\citep{he2016deep}. However, none of these models provides a way to use signed edges since they are based on the homophily assumption~\citep{DBLP:conf/iclr/KipfW17}, i.e., users having connections are likely to be similar, which is not valid for negative edges. 
As opposed to homophily, negative edges have the semantics of heterophily~\citep{rogers2010diffusion}, i.e., users having connections are dissimilar. Although these methods can still be applied to signed graphs by ignoring the edge signs, their trained features have limited capacity. \subsection{Network Embedding and Graph Convolutional Networks on Signed Graphs} Traditional methods on network embedding extract latent node features specialized for signed graphs in an unsupervised manner. Kim et al.~\citep{KimPLK18} proposed SIDE, which optimizes a likelihood over direct and indirect signed connections on truncated random walks sampled from a signed graph. Xu et al.~\citep{xu2019link} developed SLF, which considers positive, negative, and non-linked relationships between nodes to learn non-negative node embeddings. However, such approaches are not end-to-end, i.e., they are not directly optimized for solving a supervised task such as link prediction. There has been recent progress on end-to-end learning on signed networks under the GCN framework. Derr et al.~\citep{derr2018signed} proposed SGCN, which extends the GCN mechanism to signed graphs considering balanced and unbalanced relationships supported by structural balance theory~\citep{holland1971transitivity}. Li et al.~\citep{li2020learning} developed SNEA, which uses attention techniques to reveal the importance of these relationships. However, such state-of-the-art models do not consider the over-smoothing problem since they are directly extended from GCN. \subsection{Signed Graph Diffusion Layer} \label{sec:method:sgdnet} Given a signed graph $\G$ and the node embeddings \smf{$\H{l-1}$} from the previous layer, the $l$-th SGD layer learns new embeddings \smf{$\H{l}$} as shown in Figure~\ref{fig:overview:single}. It first transforms \smf{$\H{l-1}$} into hidden features \smf{$\tH{l}$} as \smf{$\tH{l} = \H{l-1}\W{l}_{t}$} with a learnable parameter \smf{$\W{l}_{t} \in \mathbb{R}^{\d{l-1} \times \d{l}}$}. 
Then, it applies the signed random walk diffusion which is represented as the function \smf{$\mathcal{F}_{d}(\G, \tH{l})$} that returns \smf{$\dP{l} \in \mathbb{R}^{n \times \d{l}}$} and \smf{$\dM{l} \in \mathbb{R}^{n \times \d{l}}$} as the positive and the negative embeddings, respectively (details in Section~\ref{sec:method:sgdnet:srwdiff}). The embeddings are concatenated and transformed as follows: \begin{align} \label{eq:sgdnet:non_linear_trans} \H{l} &= \phi\left(\left[\dP{l} \vert\rvert \dM{l}\right]\W{l}_{n} + \H{l-1}\right) \end{align} \noindent where $\phi(\cdot)$ is a non-linear activation function such as $\texttt{tanh}$, \smf{$\vert\rvert$} denotes horizontal concatenation of two matrices, and \smf{$\W{l}_{n} \in \mathbb{R}^{2\d{l} \times \d{l}}$} is a trainable weight matrix that learns a relationship between \smf{$\dP{l}$} and \smf{$\dM{l}$}. We use the skip connection~\citep{he2016deep,li2019deepgcns} with \smf{$\H{l-1}$} in Equation~\eqref{eq:sgdnet:non_linear_trans} to avoid the vanishing gradient issue which frequently occurs when multiple layers are stacked. \begin{figure*}[t] \centering \vspace{-5mm} \hspace{2mm} \subfigure[Signed random walks]{ \hspace{2mm} \label{fig:example:srw} \includegraphics[width=0.26\linewidth]{FIG/METHOD_SRW.pdf} } \hspace{5mm} \subfigure[Feature diffusion for $\pvk{v}{k}$ and $\mvk{v}{k}$]{ \hspace{6mm} \label{fig:example:eq:sgdnet} \includegraphics[width=0.5\linewidth]{FIG/METHOD_SRWDIFF_EQ.pdf} } \caption{ Feature diffusion by signed random walks in \textsc{SGDNet}\xspace. (a) Signed random walks properly consider edge signs. (b) The positive and the negative feature vectors \smf{$\pvk{v}{k}$} and \smf{$\mvk{v}{k}$} are updated from the previous feature vectors and the local feature vector \smf{$\thl{v}{l}$} as described in Equation~\eqref{eq:srwdiff}. 
} \end{figure*} \subsection{Signed Random Walk Diffusion} \label{sec:method:sgdnet:srwdiff} We design the signed random walk diffusion operator $\mathcal{F}_{d}(\cdot)$ used in the $l$-th SGD layer. Given the signed graph $\G$ and the hidden node embeddings $\tH{l}$, the diffusion operator $\mathcal{F}_{d}(\cdot)$ diffuses the node features based on random walks considering edge signs so that it properly aggregates node features on signed edges and prevents the aggregated features from being over-smoothed. Signed random walks are performed by a signed random surfer~\citep{jung2016personalized} who has the $+$ or $-$ sign when moving around the graph. Figure~\ref{fig:example:srw} shows signed random walks on four cases according to edge signs: 1) a friend's friend, 2) a friend's enemy, 3) an enemy's friend, and 4) an enemy's enemy. The surfer starts from node $s$ with the $+$ sign. If it encounters a negative edge, the surfer flips its sign from $+$ to $-$, or vice versa. Otherwise, the sign is kept. The surfer determines whether a target node $t$ is a friend of node $s$ or not according to its sign. $\mathcal{F}_{d}(\cdot)$ exploits the signed random walk for diffusing node features on signed edges. Each node is represented by two feature vectors which represent the positive and negative signs, respectively. Let $k$ denote the number of diffusion steps or random walk steps. Then, \smf{$\pvk{v}{k} \in \mathbb{R}^{\d{l} \times 1}$} and \smf{$\mvk{v}{k} \in \mathbb{R}^{\d{l} \times 1}$} are aggregated at node $v$, respectively, where \smf{$\pvk{v}{k}$} (or \smf{$\mvk{v}{k}$}) is the feature vector visited by the positive (or negative) surfer at step $k$. 
These are recursively obtained by the following equations: \begin{align} \label{eq:srwdiff} \begin{split} \pvk{v}{k} &= (1-c) \Big(\sum_{u \in \INp{v}} \frac{1}{|\ON{u}|}\pvk{u}{k-1} + \sum_{t \in \INm{v}} \frac{1}{|\ON{t}|}\mvk{t}{k-1}\Big) + c\thvl{v}{l} \\ \mvk{v}{k} &= (1-c) \Big(\sum_{t \in \INm{v}} \frac{1}{|\ON{t}|}\pvk{t}{k-1} + \sum_{u \in \INp{v}} \frac{1}{|\ON{u}|}\mvk{u}{k-1}\Big) \end{split} \end{align} \noindent where $\overleftarrow{\set{N}}_{v}^{s}$ is the set of incoming neighbors to node $v$ connected with edges of sign $s$, \smf{$\ON{u}$} is the set of outgoing neighbors from node $u$ regardless of edge signs, \smf{$\thl{v}{l}$} is the local feature of node $v$ (i.e., the $v$-th row vector of \smf{$\tH{l}$}), and $0 < c < 1$ is a local feature injection ratio. That is, the features are computed by the signed random walk feature diffusion with weight $1-c$ and the local feature injection with weight $c$, as detailed below. \paragraph{Signed Random Walk Feature Diffusion.} Figure~\ref{fig:example:eq:sgdnet} illustrates how \smf{$\pvk{v}{k}$} and \smf{$\mvk{v}{k}$} are diffused by the signed random walks according to Equation~\eqref{eq:srwdiff}. Suppose the positive surfer visits node $v$ at step $k$. For this to happen, the positive surfer of an incoming neighbor $u$ at step $k-1$ should choose the edge $(u \rightarrow v, +)$ with probability \smf{$1/|\ON{u}|$}. This transition to node $v$ along the positive edge allows the surfer to keep its positive sign. At the same time, the negative surfer of an incoming neighbor $t$ at step $k-1$ should move along the edge $(t \rightarrow v, -)$ with probability \smf{$1/|\ON{t}|$}. In this case, the surfer flips its sign from $-$ to $+$. Considering these signed random walks, \smf{$\pvk{v}{k}$} is obtained by the weighted aggregation of \smf{$\pvk{u}{k-1}$} and \smf{$\mvk{t}{k-1}$}. Similarly, \smf{$\mvk{v}{k}$} is aggregated as shown in Figure~\ref{fig:example:eq:sgdnet}. 
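One step of the recursion above can be sketched in code. The toy graph, feature dimension, and the value of $c$ below are illustrative assumptions; this is a minimal per-node sketch of the update rule, not the paper's implementation.

```python
import numpy as np

n, d, c = 3, 2, 0.15                       # nodes, feature dim, injection ratio
edges = {(0, 1): '+', (0, 2): '-', (1, 2): '+', (2, 0): '-'}  # (u -> v): sign
out_deg = {u: sum(1 for (a, _) in edges if a == u) for u in range(n)}

rng = np.random.default_rng(0)
H = rng.standard_normal((n, d))            # local features of the current layer
P0 = H.copy()                              # initial positive embeddings
M0 = rng.uniform(-1, 1, (n, d))            # initial negative embeddings

def step(P, M):
    """One signed random walk diffusion step for every node."""
    Pn, Mn = np.zeros_like(P), np.zeros_like(M)
    for (u, v), s in edges.items():
        w = 1.0 / out_deg[u]               # transition probability 1/|N_out(u)|
        if s == '+':                       # positive edge: surfer keeps its sign
            Pn[v] += w * P[u]
            Mn[v] += w * M[u]
        else:                              # negative edge: surfer flips its sign
            Pn[v] += w * M[u]
            Mn[v] += w * P[u]
    # local features are injected into the positive embeddings only, weight c
    return (1 - c) * Pn + c * H, (1 - c) * Mn

P1, M1 = step(P0, M0)
```

For instance, node $0$'s only incoming edge is the negative edge from node $2$, so its new positive embedding mixes node $2$'s previous negative embedding with node $0$'s own local feature.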
\paragraph{Local Feature Injection.} Although the feature diffusion above properly considers edge signs, the generated features could be over-smoothed after many steps if we depend solely on the diffusion. In other words, it considers only the graph information explored by the signed random surfer, while the local information in the hidden feature \smf{$\thl{v}{l}$} is disregarded during the diffusion. Hence, as shown in Figure~\ref{fig:example:eq:sgdnet}, we explicitly inject the local feature \smf{$\thl{v}{l}$} into \smf{$\pvk{v}{k}$} with weight $c$ at each aggregation in Equation~\eqref{eq:srwdiff} so that the diffused features are not over-smoothed. The reason why local features are injected only into the $+$ embeddings is that we consider that a node should trust ($+$) its own information (i.e., its local feature). { \subsection{Convergence Guarantee of Signed Random Walk Diffusion} Suppose that \smf{$\P{k}=[\pvkt{1}{k}; \cdots; \pvkt{n}{k}]$} and \smf{$\M{k}=[\mvkt{1}{k}; \cdots; \mvkt{n}{k}]$} represent the positive and negative embeddings of all nodes, respectively, where $;$ denotes vertical concatenation. Let \smf{$\mat{A}_{s}$} be the adjacency matrix for sign $s$ such that \smf{$\mat{A}_{suv}$} is $1$ for signed edge $(u \rightarrow v, s)$, and $0$ otherwise. Then, Equation~\eqref{eq:srwdiff} is vectorized as follows: \begin{align} \label{eq:srwdiff:vectorized} \begin{rcases*} \text{ }\P{k} = (1-c) (\nApT\P{k-1} + \nAnT\M{k-1}) + c\tH{l} \\ \M{k} = (1-c) (\nAnT\P{k-1} + \nApT\M{k-1}) \end{rcases*} \Rightarrow \T{k} &= (1-c) \nB\T{k-1} + c\Q \end{align} \noindent where \smf{$\mat{\tilde{A}}_{s} = \mat{D}^{-1}\mat{A}_{s}$} is the normalized matrix for sign $s$, and \smf{$\mat{D}$} is a diagonal out-degree matrix (i.e., \smf{$\mat{D}_{ii} = |\ON{i}|$}). 
The left equation of Equation \eqref{eq:srwdiff:vectorized} is compactly represented as the right equation where \begin{equation*} \T{k} = \begin{bmatrix} \P{k} \\ \M{k} \end{bmatrix} \qquad \nB = \begin{bmatrix} \nApT & \nAnT \\ \nAnT & \nApT \end{bmatrix} \qquad \Q = \begin{bmatrix} \tH{l} \\ \mat{0} \end{bmatrix}. \end{equation*} Then, $\T{k}$ is guaranteed to converge as shown in the following theorem. \begin{theorem} \label{theorem:convergence} The diffused features in $\T{k}$ converge to equilibrium for $c \in (0, 1)$ as follows: \begin{align} \mat{T}^{*}=\lim_{k \rightarrow \infty}\T{k} &= \lim_{k \rightarrow \infty}\left( \sum_{i = 0}^{k-1} (1-c)^{i}\nB^{i} \right)\tQ = (\mat{I} - (1-c)\nB)^{-1}\tQ \;\;\;\;\;\;\;\;\;\; (\tQ \coloneqq c\Q) \label{eq:srwdiff:exact} \end{align} If we iterate Equation~\eqref{eq:srwdiff:vectorized} $K$ times for $1 \leq k \leq K$, the exact solution $\mat{T}^{*}$ is approximated as \begin{align} \mat{T}^{*} \approx \T{K} = \tQ + (1-c)\nB\tQ + \cdots + (1-c)^{K-1}\nB^{K-1}\tQ + (1-c)^{K}\nB^{K}\T{0} \label{eq:srwdiff:approx} \end{align} where $\lVert \mat{T}^{*} - \T{K} \rVert_{1} \leq (1-c)^{K}\lVert \mat{T}^{*} - \T{0} \rVert_{1}$, and $\T{0} = \begin{bmatrix} \P{0} \\ \M{0} \end{bmatrix}$ is the initial value of Equation~\eqref{eq:srwdiff:vectorized}. \QEDB \end{theorem} \begin{proof} A proof sketch is to show the spectral radius of \smf{$\nB$}\normalsize is less than or equal to $1$, which guarantees the convergence of the geometric series with \smf{$(1-c)\nB$}\normalsize. See the details in Appendix~\ref{sec:appendix:convergence_analysis}. \QEDB \end{proof} According to Theorem~\ref{theorem:convergence}, \smf{$\nB^{K}\tQ$} is the node features diffused by $K$-step signed random walks with \smf{$\tQ$} where \smf{$\nB^{K}$} is interpreted as the transition matrix of $K$-step signed random walks. 
Thus, the approximation is the sum of the diffused features from $1$ to $K$ steps with a decaying factor $1-c$, i.e., the effect of distant nodes gradually decreases while that of neighboring nodes is high. This is the reason why \textsc{SGDNet}\xspace prevents diffused features from being over-smoothed. Also, the approximation error \smf{$\lVert \mat{T}^{*} - \T{K} \rVert_{1}$} exponentially decreases as $K$ increases due to the term \smf{$(1-c)^{K}$}. Another point is that the iteration of Equation~\eqref{eq:srwdiff:vectorized} converges to the same solution no matter what \smf{$\P{0}$} and \smf{$\M{0}$} are given. In this work, we initialize \smf{$\P{0}$} with \smf{$\tH{l}$}, and randomly initialize \smf{$\M{0}$} in $[-1, 1]$. The signed random walk diffusion operator $\mathcal{F}_{d}(\cdot)$ iterates Equation~\eqref{eq:srwdiff:vectorized} $K$ times for $1 \leq k \leq K$ where $K$ is the number of diffusion steps, and it returns \smf{$\dP{l}\leftarrow \P{K}$} and \smf{$\dM{l}\leftarrow \M{K}$} as the outputs of the diffusion module at the $l$-th SGD layer. The detailed pseudocode of \textsc{SGDNet}\xspace is described in Appendix~\ref{sec:appendix:pseudocode}, and its time complexity is analyzed in Appendix~\ref{sec:appendix:time_complexity}. } \subsection{Loss Function for Link Sign Prediction} \label{sec:method:loss} The link sign prediction task is to predict the missing sign of a given edge. As shown in Figure~\ref{fig:overview:multiple}, \textsc{SGDNet}\xspace produces the final node embeddings \smf{$\H{L}$}. The embeddings are fed into a loss function \smf{$\mathcal{L}(\G, \H{L}; \mat{\Theta}) = \mathcal{L}_{sign}(\G, \H{L}) + \lambda\mathcal{L}_{reg}(\mat{\Theta})$} where $\mat{\Theta}$ is the set of model parameters, $\mathcal{L}_{sign}(\cdot)$ is the binary cross entropy loss, and $\mathcal{L}_{reg}(\cdot)$ is the $L_2$ regularization loss with weight decay $\lambda$. 
For a signed edge $(u\rightarrow v,s)$, the edge feature is \smf{$\vect{z}_{uv} \in \mathbb{R}^{1 \times 2\d{L}} \!\!= \vect{h}_{u}^{(L)} \vert\rvert \vect{h}_{v}^{(L)}$} where \smf{$\vect{h}_{u}^{(L)}$} is the $u$-th row vector of \smf{$\H{L}$}. Let $\set{E}$ be the set of signed edges. Then, $\mathcal{L}_{sign}(\cdot)$ is represented as follows: \begin{align*} \mathcal{L}_{sign}(\G, \X) = -\!\!\sum_{(u \rightarrow v, s) \in \set{E}} \; \sum_{t \in \{+,-\}} \;\; \mathbb{I}(t=s)\log\left(\texttt{softmax}_{t}\left(\vect{z}_{uv} \mat{W} \right)\right) \end{align*} \noindent where $\mat{W} \in \mathbb{R}^{2\d{L} \times 2}$ is a learnable weight matrix, $\texttt{softmax}_{t}(\cdot)$ is the probability for sign $t$ after softmax operation, and $\mathbb{I}(\cdot)$ returns $1$ if a given predicate is true, and $0$ otherwise. \subsection{Experimental Settings} \setlength{\tabcolsep}{12.5pt} \begin{table}[!t] \small \begin{threeparttable}[t] \centering \caption{ Dataset statistics of signed graphs. $\szV$ and $\szE$ are the number of nodes and edges, respectively. Given sign $s \in \{+,-\}$, $|\set{E}^{s}|$ and $\rho(s)$ are the number and percentage of edges with sign $s$, respectively. 
\vspace{-6mm} } \begin{tabular}{l|rrrrrr} \hline \toprule \textbf{Dataset} & $\szV$ & $\szE$ & $\szEp$ & $\szEn$ & $\rho(+)$ & $\rho(-)$ \\ \midrule Bitcoin-Alpha\tnote{1} & 3,783 & 24,186 & 22,650 & 1,536 & 93.65\% & 6.35\% \\ Bitcoin-OTC\tnote{1} & 5,881 & 35,592 & 32,029 & 3,563 & 89.99\% & 10.01\% \\ Slashdot\tnote{2} & 79,120 & 515,397 & 392,326 & 123,071 & 76.12\% & 23.88\% \\ Epinions\tnote{3} & 131,828 & 841,372 & 717,667 & 123,705 & 85.30\% & 14.70\% \\ \bottomrule \end{tabular} \label{tab:datasets} \begin{tablenotes} \scriptsize{ \item[1] {\url{https://snap.stanford.edu/data/soc-sign-bitcoin-otc.html}} \item[2] {\url{http://konect.uni-koblenz.de/networks/slashdot-zoo}} \item[3] {\url{http://www.trustlet.org/wiki/Extended\_Epinions\_dataset}} } \end{tablenotes} \end{threeparttable} \end{table} \textbf{Datasets.} We perform experiments on four standard signed graphs summarized in Table~\ref{tab:datasets}: Bitcoin-Alpha~\citep{kumar2016edge}, Bitcoin-OTC~\citep{kumar2016edge}, Slashdot~\citep{kunegis2009slashdot}, and Epinions~\citep{guha2004propagation}. We provide the detailed description of each dataset in Appendix~\ref{sec:appendix:datasets}. We also report additional experiments on Wikipedia dataset~\citep{leskovec2010signed} in Appendix~\ref{sec:appendix:wikipedia}. \textbf{Competitors.} We compare our proposed \textsc{SGDNet}\xspace with the following competitors: \begin{itemize} \item { \textbf{APPNP}~\citep{DBLP:conf/iclr/KlicperaBG19}: an unsigned GCN model based on Personalized PageRank. } \item { \textbf{ResGCN}~\citep{li2019deepgcns}: another unsigned GCN model exploiting skip connections to deeply stack multiple layers. } \item { \textbf{SIDE}~\citep{KimPLK18}: a network embedding model optimizing the likelihood over signed edges using random walk sequences to encode structural information into node embeddings. 
} \item { \textbf{SLF}~\citep{xu2019link}: another network embedding model considering positive, negative, and non-linked relationships to learn non-negative node embeddings. } \item { \textbf{SGCN}~\citep{derr2018signed}: a state-of-the-art signed GCN model considering balanced and unbalanced paths motivated from balance theory to propagate embeddings. } \item { \textbf{SNEA}~\citep{li2020learning}: another signed GCN model extending SGCN by learning attentions on the balanced and unbalanced paths. } \end{itemize} We use the absolute adjacency matrix for APPNP and ResGCN since they handle only unsigned edges. All methods are implemented by PyTorch and Numpy in Python. We use a machine with Intel E5-2630 v4 2.2GHz CPU and Geforce GTX 1080 Ti for the experiments. \textbf{Evaluation Metrics.} We randomly split the edges of a signed graph into training and test sets by the 8:2 ratio. As shown in Table~\ref{tab:datasets}, the sign ratio is highly skewed to the positive sign, i.e., the sampled datasets are naturally imbalanced. Considering the class imbalance, we measure the area under the curve (AUC) to evaluate predictive performance. We also report F1-macro measuring the average of the ratios of correct predictions for each sign since negative edges need to be treated as important as positive edges (i.e., it gives equal importance to each class). A higher value of AUC or F1-macro indicates better performance. We repeat each experiment $10$ times with different random seeds and report the average and standard deviation of test values. \textbf{Hyperparameter Settings.} We set the dimension of final node embeddings to $32$ for all methods so that their embeddings have the same learning capacity (see its effect in Appendix~\ref{sec:appendix:effect_dim}). We perform $5$-fold cross-validation for each method to find the best hyperparameters and measure the test accuracy with the selected ones. 
In the cross-validation for \textsc{SGDNet}\xspace, the number $L$ of SGD layers is sought between $1$ and $6$, and the restart probability $c$ is selected from $0.05$ to $0.95$ by step size $0.1$. We set the number $K$ of diffusion steps to $10$ and the feature dimension $\d{l}$ of each layer to $32$. We follow the range of each hyperparameter recommended in its corresponding paper for the cross-validation of other models. Our model is trained by the Adam optimizer \citep{DBLP:journals/corr/KingmaB14}, where the learning rate is $0.01$, the weight decay $\lambda$ is $0.001$, and the number of epochs is $100$. We summarize the hyperparameters used by \textsc{SGDNet}\xspace for each dataset in Appendix~\ref{sec:appendix:hyperparameter}. \def1.2{0.8} \setlength{\tabcolsep}{8.6pt} \begin{table*}[!t] \small \begin{threeparttable}[t] \caption{ \label{tab:link_sign:auc} \textsc{SGDNet}\xspace gives the best link sign prediction performance in terms of AUC. The best model is in bold, and the second best model is underlined. The \% increase measures the best accuracy against the second best accuracy. 
\vspace{-2mm} } \begin{tabular}{l|cccc} \hline \toprule \multicolumn{1}{c|}{\textbf{AUC}} & \textbf{Bitcoin-Alpha} & \textbf{Bitcoin-OTC} & \textbf{Slashdot} & \textbf{Epinions} \\ \midrule \textbf{APPNP}~\citep{DBLP:conf/iclr/KlicperaBG19} & 0.854$\pm$0.010 & 0.867$\pm$0.009 & \underline{0.837$\pm$0.003} & 0.870$\pm$0.002 \\ \textbf{ResGCN}~\citep{li2019deepgcns} & 0.853$\pm$0.017 & \underline{0.876$\pm$0.010} & 0.744$\pm$0.004 & 0.871$\pm$0.002 \\ \midrule \textbf{SIDE}~\citep{KimPLK18} & 0.801$\pm$0.020 & 0.839$\pm$0.013 & 0.814$\pm$0.003 & 0.880$\pm$0.003 \\ \textbf{SLF}~\citep{xu2019link} & 0.779$\pm$0.023 & 0.797$\pm$0.014 & 0.833$\pm$0.006 & 0.876$\pm$0.005 \\ \midrule \textbf{SGCN}~\citep{derr2018signed} & 0.824$\pm$0.018 & 0.857$\pm$0.008 & 0.827$\pm$0.004 & \underline{0.895$\pm$0.002} \\ \textbf{SNEA}~\citep{li2020learning} & \underline{0.855$\pm$0.006} & 0.858$\pm$0.008 & 0.754$\pm$0.005 & 0.771$\pm$0.004 \\ \midrule \textbf{SGDNet (proposed)} & \bf 0.911$\pm$0.007 & \bf 0.921$\pm$0.005 & \bf 0.886$\pm$0.001 & \bf 0.932$\pm$0.001 \\ \midrule \% increase & 6.4\% & 4.9\% & 5.9\% & 3.9\% \\ \bottomrule \end{tabular} \end{threeparttable} \vspace{-2mm} \end{table*} \def1.2{0.8} \setlength{\tabcolsep}{8.6pt} \begin{table*}[!t] \small \begin{threeparttable}[t] \caption{ \label{tab:link_sign:f1macro} \textsc{SGDNet}\xspace gives the best link sign prediction performance in terms of F1-macro. 
\vspace{-2mm} } \begin{tabular}{l|cccc} \hline \toprule \multicolumn{1}{c|}{\textbf{F1-macro}} & \textbf{Bitcoin-Alpha} & \textbf{Bitcoin-OTC} & \textbf{Slashdot} & \textbf{Epinions} \\ \midrule \textbf{APPNP}~\citep{DBLP:conf/iclr/KlicperaBG19} & 0.682$\pm$0.005 & 0.762$\pm$0.009 & 0.748$\pm$0.003 & 0.773$\pm$0.004 \\ \textbf{ResGCN}~\citep{li2019deepgcns} & 0.658$\pm$0.006 & 0.735$\pm$0.015 & 0.609$\pm$0.004 & 0.784$\pm$0.003 \\ \midrule \textbf{SIDE}~\citep{KimPLK18} & 0.663$\pm$0.008 & 0.709$\pm$0.008 & 0.685$\pm$0.009 & 0.785$\pm$0.006 \\ \textbf{SLF}~\citep{xu2019link} & 0.615$\pm$0.027 & 0.641$\pm$0.025 & 0.733$\pm$0.008 & 0.810$\pm$0.008 \\ \midrule \textbf{SGCN}~\citep{derr2018signed} & \underline{0.690$\pm$0.014} & \underline{0.776$\pm$0.008} & \underline{0.752$\pm$0.013} & \underline{0.844$\pm$0.002} \\ \textbf{SNEA}~\citep{li2020learning} & 0.670$\pm$0.005 & 0.742$\pm$0.011 & 0.690$\pm$0.005 & 0.805$\pm$0.005 \\ \midrule \textbf{SGDNet (proposed)} & \bf 0.757$\pm$0.012 & \bf 0.799$\pm$0.007 & \bf 0.778$\pm$0.002 & \bf 0.854$\pm$0.002 \\ \midrule \% increase & 7.4\% & 1.6\% & 3.5\% & 1.2\% \\ \bottomrule \end{tabular} \end{threeparttable} \end{table*} \subsection{Link Sign Prediction} \label{sec:experiments:link_sign} We evaluate the performance of each method on link sign prediction. Tables~\ref{tab:link_sign:auc}~and~\ref{tab:link_sign:f1macro} summarize the experimental results in terms of AUC and F1-macro, respectively. Note that our \textsc{SGDNet}\xspace shows the best performance in terms of AUC and F1-macro scores. \textsc{SGDNet}\xspace presents $3.9\sim6.4$\% and $1.2\sim 7.4$\% improvements over the second best models in terms of AUC and F1-macro, respectively. We have the following observations. \begin{itemize*} \item The unsigned GCN models APPNP and ResGCN show worse performance than \textsc{SGDNet}\xspace, which shows the importance of using sign information. 
\item { The performance of network embedding techniques such as SIDE and SLF is worse than that of other GCN-based models; this shows the importance of jointly learning feature extraction and link sign prediction for the performance. } \item { The performance of SGCN and SNEA which use limited features from nodes within $2 \sim 3$ hops is worse than that of \textsc{SGDNet}\xspace which exploits up to \smf{$K$}-hop neighbors' features where $K$ is set to $10$ in these experiments. It indicates that carefully exploiting features from distant nodes as well as neighboring ones is crucial for the performance. } \end{itemize*} \subsection{Effect of Diffusion Steps} \label{sec:experiments:effect} \begin{figure}[t] \centering \subfigure[Bitcoin-Alpha]{ \hspace{-2mm} \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIFF/BITCOIN_ALPHA_F1_MACRO.pdf} } \subfigure[Bitcoin-OTC]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIFF/BITCOIN_OTC_F1_MACRO.pdf} } \subfigure[Slashdot]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIFF/SLASHDOT_F1_MACRO.pdf} } \subfigure[Epinions]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIFF/EPINIONS_F1_MACRO.pdf} } \caption{ \label{fig:effect:diffusion} Effect of \textsc{SGDNet}\xspace's feature diffusion compared to state-of-the-art SGCN. The performance of \textsc{SGDNet}\xspace is boosted while that of SGCN degrades as the number $K$ of diffusion steps increases. } \vspace{-3mm} \end{figure} We investigate the effect of the feature diffusion in \textsc{SGDNet}\xspace for learning signed graphs. We use one SGD layer, and set the restart probability $c$ to $0.15$ to evaluate the pure effect of the diffusion module; we vary the number $K$ of diffusion steps from $1$ to $10$ and evaluate the performance of \textsc{SGDNet}\xspace in terms of F1-macro for each diffusion step. Also, we compare \textsc{SGDNet}\xspace to SGCN, a state-of-the-art model for learning signed graphs. 
The number of diffusion steps of SGCN is determined by its number of layers. Figure~\ref{fig:effect:diffusion} shows that the performance of \textsc{SGDNet}\xspace gradually improves as $K$ increases while that of SGCN dramatically decreases over all datasets. This indicates that SGCN suffers from the performance degradation problem when its network becomes deep, i.e., it is difficult to use more information beyond $3$ hops in SGCN. On the other hand, \textsc{SGDNet}\xspace utilizes features of farther nodes, and generates more expressive and stable features than SGCN does. Note that the performance of \textsc{SGDNet}\xspace converges in general after a sufficient number of diffusion steps, which is highly associated with Theorem~\ref{theorem:convergence}. \begin{figure*}[t] \centering \subfigure[Bitcoin-Alpha]{ \hspace{-2mm} \includegraphics[width=0.231\linewidth]{FIG/PLOT/EFFECT_RESTART/BITCOIN_ALPHA_F1_MACRO.pdf} } \subfigure[Bitcoin-OTC]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_RESTART/BITCOIN_OTC_F1_MACRO.pdf} } \subfigure[Slashdot]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_RESTART/SLASHDOT_F1_MACRO.pdf} } \subfigure[Epinions]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_RESTART/EPINIONS_F1_MACRO.pdf} } \caption{ \label{fig:effect:restart} Effect of local injection ratio $c$ of \textsc{SGDNet}\xspace. A relatively small value ($0.15 \sim 0.35$) of $c$ is the best for the Bitcoin-Alpha and Bitcoin-OTC (small) datasets while $c$ around $0.5$ shows better accuracy for the Slashdot and Epinions (large) datasets. } \end{figure*} \vspace{-2mm} \subsection{Effect of Local Injection Ratio} \label{sec:experiments:effect:local} We examine the effect of the local injection ratio $c$ in the diffusion module of \textsc{SGDNet}\xspace. We use one SGD layer, and set the number $K$ of diffusion steps to $10$; we vary $c$ from $0.05$ to $0.95$ by $0.1$, and measure the performance of the link sign prediction task in terms of F1-macro. 
Figure~\ref{fig:effect:restart} shows the effect of $c$ to the predictive performance of \textsc{SGDNet}\xspace. For small datasets such as Bitcoin-Alpha and Bitcoin-OTC, $c$ between $0.15$ and $0.35$ provides better performance. On the other hand, $c$ around $0.5$ shows higher accuracy for large datasets such as Slashdot and Epinions. For all datasets, a too low or too high value of $c$ (e.g., $0.05$ or $0.95$) results in a poor performance. \subsection{Convergence Analysis} \label{sec:appendix:convergence_analysis} \addtocounter{theorem}{-1} \addtocounter{proof}{-1} \begin{theorem}[Convergence of Signed Random Walk Diffusion] The diffused features in $\T{k}$ converge to equilibrium for $c \in (0, 1)$ as follows: \begin{align*} \mat{T}^{*}=\lim_{k \rightarrow \infty}\T{k} &= \lim_{k \rightarrow \infty}\left( \sum_{i = 0}^{k-1} (1-c)^{i}\nB^{i} \right)\tQ = (\mat{I} - (1-c)\nB)^{-1}\tQ \;\;\;\;\;\;\;\;\;\; (\tQ \coloneqq c\Q) \end{align*} If we iterate Equation~\eqref{eq:srwdiff:vectorized} $K$ times for $1 \leq k \leq K$, the exact solution $\mat{T}^{*}$ is approximated as \begin{align*} \mat{T}^{*} \approx \T{K} = \tQ + (1-c)\nB\tQ + \cdots + (1-c)^{K-1}\nB^{K-1}\tQ + (1-c)^{K}\nB^{K}\T{0} \end{align*} \normalsize where $\lVert \mat{T}^{*} - \T{K} \rVert_{1} \leq (1-c)^{K}\lVert \mat{T}^{*} - \T{0} \rVert_{1}$, and $\T{0} = \begin{bmatrix} \P{0} \\ \M{0} \end{bmatrix}$ is the initial value of Equation~\eqref{eq:srwdiff:vectorized}. \QEDB \end{theorem} \begin{proof} The iteration of Equation~\eqref{eq:srwdiff:vectorized} is written as follows: \begin{align} \T{k} &= (1-c) \nB\T{k-1} + c\Q \nonumber\\ &= \left((1-c) \nB\right)^{2} \T{k-2} + \left((1-c)\nB + \mat{I}\right)\tQ \nonumber\\ &= \cdots \nonumber\\ &= \left((1-c) \nB\right)^{k} \T{0} + \left(\sum_{i=0}^{k-1}\left((1-c)^{i}\nB^{i}\right)\right)\tQ. 
\label{eq:srwdiff_expanded} \end{align} Note that the spectral radius $\rho(\nB)$ is less than or equal to $1$ by Theorem~\ref{lemma:spectral}; thus, for $0 < c < 1$, the spectral radius of $(1-c)\nB$ is less than $1$, i.e., $\rho((1-c)\nB) = (1-c)\rho(\nB) \leq (1-c) < 1$. Hence, if $k \rightarrow \infty$, the power of $(1-c)\nB$ converges to $\mat{0}$, i.e., $\lim_{k \rightarrow \infty}(1-c)^{k}\nB^{k} = \mat{0}$. Also, the second term in Equation~\eqref{eq:srwdiff_expanded} becomes the infinite geometric series of $(1-c)\nB$, which converges as follows: \begin{align*} \mat{T}^{*} = \lim_{k \rightarrow \infty} \T{k} = \mat{0} + \lim_{k \rightarrow \infty} \left(\sum_{i=0}^{k-1}\left((1-c)^{i}\nB^{i}\right)\right)\tQ = (\mat{I} - (1-c)\nB)^{-1}\tQ \end{align*} where the convergence always holds if $\rho((1-c)\nB) < 1$. The converged solution $\mat{T}^{*}$ satisfies $\mat{T}^{*} = (1-c)\nB\mat{T}^{*} + c\Q$. Also, $\mat{T}^{*}$ is approximated as Equation~\eqref{eq:srwdiff:approx}. Then, the approximation error \smf{$\lVert \mat{T}^{*} - \T{K} \rVert_{1}$} is bounded as follows: \begin{align} \begin{split} \lVert \mat{T}^{*} - \T{K}\rVert_{1} &= \lVert (1-c)\nB\mat{T}^{*} - (1-c)\nB\T{K-1}\rVert_{1} \leq (1-c)\lVert \nB \rVert_{1}\lVert \mat{T}^{*} - \T{K-1}\rVert_{1}\\ &\leq (1-c)\lVert \mat{T}^{*} - \T{K-1}\rVert_{1} \leq \cdots \\ &\leq (1-c)^{K}\lVert \mat{T}^{*} - \T{0}\rVert_{1} \end{split} \end{align} where $\lVert \cdot \rVert_{1}$ is the $L_1$-norm of a matrix. Note that the bound $\lVert \nB \rVert_{1} \leq 1$ of Theorem~\ref{lemma:spectral} is used in the above derivation. \QEDB \end{proof} \begin{theorem}[Bound of Spectral Radius of $\nB$] \label{lemma:spectral} The spectral radius of $\nB$ in Equation~\eqref{eq:srwdiff:vectorized} is less than or equal to $1$, i.e., $\rho(\nB) \leq \lVert \nB \rVert_{1} \leq 1$.
\QEDB \end{theorem} \begin{proof} According to the spectral radius theorem~\citep{trefethen1997numerical}, \smf{$\rho(\nB) \leq \lVert \nB \rVert_{1}$} where $\lVert \cdot \rVert_{1}$ denotes the $L_{1}$-norm of a given matrix, indicating the maximum absolute column sum of the matrix. Note that the entries of \smf{$\nB$} are non-negative probabilities; thus, the absolute column sums of \smf{$\nB$} are equal to its column sums, which are obtained as follows: \begin{align} \begin{split} \vectt{1}_{2n} \nB &= \begin{bmatrix} \vectt{1}_{n} \nApT + \vectt{1}_{n} \nAnT & \vectt{1}_{n} \nAnT + \vectt{1}_{n} \nApT \end{bmatrix} = \begin{bmatrix} \vectt{1}_{n}\nAT & \vectt{1}_{n}\nAT \end{bmatrix} = \begin{bmatrix} \vectt{b} & \vectt{b} \end{bmatrix} \end{split} \end{align} where $\nAT = \nApT + \nAnT$, and $\vect{1}_{n}$ is the $n$-dimensional all-ones vector. Note that $\mattt{A}_{s} = \matt{A}_{s}\mati{D}$ for sign $s$ where $\mat{D}$ is a diagonal out-degree matrix (i.e., $\mat{D}_{uu} = |\ON{u}|$). Then, $\vectt{1}_{n}\nAT$ is represented as \begin{align*} \vectt{1}_{n}\nAT = \vectt{1}_{n}(\matt{A}_{+}+\matt{A}_{-})\mati{D} = \vectt{1}_{n}\matt{|A|}\mati{D} = (|\mat{A}|\vect{1}_{n})^{\top}\mati{D} = \vectt{b} \end{align*} where $\mat{|A|}=\mat{A}_{+}+\mat{A}_{-}$ is the absolute adjacency matrix. The $u$-th entry of $|\mat{A}|\vect{1}_{n}$ indicates the out-degree of node $u$, denoted by $|\ON{u}|$. Note that $\mati{D}_{uu}$ is $1/|\ON{u}|$ if $u$ is a non-deadend; otherwise, $\mati{D}_{uu} = 0$ (i.e., a deadend node has no outgoing edges). Hence, the $u$-th entry of $\vectt{b}$ is $1$ if node $u$ is not a deadend, or $0$ otherwise; thus, the maximum column sum is less than or equal to $1$. Therefore, $\rho(\nB) \leq \lVert \nB \rVert_{1} \leq 1$.
\QEDB \end{proof} \subsection{Time Complexity Analysis} \label{sec:appendix:time_complexity} \begin{theorem}[Time Complexity of \textsc{SGDNet}\xspace] The time complexity of the $l$-th SGD layer is $O(Km\d{l} + n\d{l-1}\d{l})$ where $K$ is the number of diffusion steps, $\d{l}$ is the feature dimension of the $l$-th layer, and $m$ and $n$ are the number of edges and nodes, respectively. Assuming all of $\d{l}$ are set to $d$, \textsc{SGDNet}\xspace with $L$ SGD layers takes $O(LKmd + Lnd^{2})$ time. \QEDB \end{theorem} \begin{proof} The feature transform operations require $O(n\d{l-1}\d{l})$ time due to their dense matrix multiplication. Each iteration of the signed random walk diffusion in Equation~\eqref{eq:srwdiff:vectorized} takes $O(m\d{l})$ time due to the sparse matrix multiplication $\nB\T{k-1}$ where the number of non-zeros of $\nB$ is $O(m)$. Thus, $O(Km\d{l})$ is required for $K$ iterations. Overall, the total time complexity of the $l$-th SGD layer is $O(Km\d{l} + n\d{l-1}\d{l})$. \QEDB \end{proof} \subsection{Pseudocode of \textsc{SGDNet}\xspace} \label{sec:appendix:pseudocode} Algorithm~\ref{alg:method} describes \textsc{SGDNet}\xspace's overall procedure which is depicted in Figure~\ref{fig:overview}. Given signed adjacency matrix $\A$ and related hyper-parameters (e.g., numbers $L$ and $K$ of SGD layers and diffusion steps, respectively), \textsc{SGDNet}\xspace produces the final hidden node features $\H{L}$ which are fed to a loss function as described in Section~\ref{sec:method:loss}. It first computes the normalized matrices $\nAp$ and $\nAn$ (line~\ref{alg:method:normalization}). Then, it performs the forward function of \textsc{SGDNet}\xspace (lines~\ref{alg:method:forward:start}~$\sim$ \ref{alg:method:forward:end}). 
The forward function repeats the signed random walk diffusion $K$ times (lines~\ref{alg:method:srwdiff:start}~$\sim$~\ref{alg:method:srwdiff:end}), and then performs the non-linear feature transformation skip-connected with $\H{l-1}$ (line~\ref{alg:method:non_linear_trans}). \renewcommand{\algorithmiccomment}[1]{#1} \begin{algorithm}[h!] \begin{algorithmic}[1] \caption{Pseudocode of \textsc{SGDNet}\xspace} \label{alg:method} \REQUIRE signed adjacency matrix $\A$, initial node feature matrix $\mat{X}$, number $K$ of diffusion steps, number $L$ of SGD layers, and local feature injection ratio $c$ \ENSURE hidden node feature matrix $\H{L}$ \STATE compute normalized matrices for each sign, i.e., $\nAp = \mati{D}\A_{+}$ and $\nAn = \mati{D}\A_{-}$ \label{alg:method:normalization} \STATE initialize $\H{0}$ with $\mat{X}$ \FOR[\hfill $\triangleright$ \textit{start the forward function of \small{\textsc{SGDNet}\xspace}}]{$l$ $\leftarrow$ $1$ to $L$} \label{alg:method:forward:start} \STATE perform the feature transformation as {$\tH{l} \leftarrow \H{l-1}\W{l}_{t}$} \STATE initialize $\P{0}$ with $\tH{l}$ and randomly initialized $\M{0}$ in $[-1, 1]$ \BlankLine \FOR[\hfill $\triangleright$ \textit{perform the signed random walk diffusion in Equation~\eqref{eq:srwdiff}}]{$k \leftarrow 1$ to $K$} \label{alg:method:srwdiff:start} \BlankLine \STATE $\P{k} \leftarrow (1-c) (\nApT\P{k-1} + \nAnT\M{k-1}) + c\tH{l}$ \STATE $\M{k} \leftarrow (1-c) (\nAnT\P{k-1} + \nApT\M{k-1})$ \BlankLine \ENDFOR \label{alg:method:srwdiff:end} \BlankLine \STATE set $\dP{l} \leftarrow \P{K}$ and $\dM{l} \leftarrow \M{K}$ \STATE compute $l$-th hidden node features $\H{l} \leftarrow \texttt{tanh}(\left[\dP{l} \vert\rvert \dM{l}\right]\W{l}_{n} + \H{l-1})$ \label{alg:method:non_linear_trans} \BlankLine \ENDFOR \label{alg:method:forward:end} \BlankLine \RETURN $\H{L}$ \end{algorithmic} \end{algorithm} \subsection{Detailed Description of Datasets} \label{sec:appendix:datasets} The Bitcoin-Alpha and 
Bitcoin-OTC datasets~\citep{kumar2016edge} are extracted from directed online trust networks served by Bitcoin Alpha and Bitcoin OTC, respectively. The Slashdot dataset~\citep{kunegis2009slashdot} is collected from Slashdot, a technology news site which allows a user to create positive or negative links to others. The Epinions dataset~\citep{guha2004propagation} is a directed signed graph scraped from Epinions, a product review site in which users mark their trust or distrust of others. The publicly available signed graphs do not contain initial node features even though they have been utilized as standard datasets in signed graph analysis. For this reason, many previous works~\citep{derr2018signed,li2020learning} on GCN for signed graphs have exploited singular value decomposition (SVD) to extract initial node features. Thus, we follow this setup, i.e., $\mat{X}=\mat{U}\mat{\Sigma}_{d_{i}}$ is the initial feature matrix for all GCN-based models where $\A\simeq\mat{U}\mat{\Sigma}_{d_{i}}\matt{V}$ is obtained by a truncated SVD method, called Randomized SVD~\citep{halko2011finding}, with target rank $d_{i}=128$. Note that the method is very efficient (i.e., its time complexity is $O(nd_{i}^2)$ where $n$ is the number of nodes) and is performed only once as a preprocessing step; thus, it does not affect the computational performance of training and inference.
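This preprocessing can be sketched in a few lines following the structure of the Halko et al.\ method~\citep{halko2011finding}. The sketch below is illustrative: the toy adjacency matrix, the rank ($16$ instead of $128$), and the function names are ours.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Basic randomized SVD: sketch the range of A with a random projection,
    orthonormalize it, then run an exact SVD in the small subspace."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                 # approximate range basis
    Ub, S, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :rank], S[:rank], Vt[:rank]

# Toy signed adjacency matrix with entries in {-1, 0, +1}; rank 16 here.
rng = np.random.default_rng(1)
A = rng.choice([-1.0, 0.0, 1.0], size=(200, 200), p=[0.05, 0.9, 0.05])

U, S, Vt = randomized_svd(A, rank=16)
X = U * S    # initial node features X = U * Sigma, one row per node
```

The cost is dominated by the thin projection and the small-subspace SVD, which is why this step is cheap relative to training.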
\subsection{Additional Experiments on Wikipedia Dataset} \label{sec:appendix:wikipedia} \begin{figure}[h] \centering \subfigure[Predictive performance in terms of AUC]{ \hspace{-2mm} \includegraphics[width=0.24\linewidth]{FIG/PLOT/WIKI/WIKIPEDIA_AUC.pdf} \label{fig:exp:wiki:auc} } \subfigure[Predictive performance in terms of F1-macro]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/WIKI/WIKIPEDIA_MACRO.pdf} \label{fig:exp:wiki:f1_macro} } \subfigure[Effect of the feature diffusion]{ \includegraphics[width=0.23\linewidth]{FIG/PLOT/EFFECT_DIFF/WIKIPEDIA_F1_MACRO.pdf} \label{fig:exp:wiki:diff} } \subfigure[Effect of the local feature injection ratio]{ \includegraphics[width=0.23\linewidth]{FIG/PLOT/EFFECT_RESTART/WIKIPEDIA_F1_MACRO.pdf} \label{fig:exp:wiki:inject} } \caption{ \label{fig:exp:wiki} Experimental results on Wikipedia dataset. } \end{figure} We perform additional experiments on the Wikipedia dataset~\citep{leskovec2010signed} which has also been frequently used in signed graph analysis. The Wikipedia dataset is a signed graph representing the administrator election procedure in Wikipedia where a user can vote for ($+$) or against ($-$) a candidate. The numbers of nodes and edges are $7,118$ and $103,675$, respectively. Figure~\ref{fig:exp:wiki} shows the experimental results on the dataset. As seen in Figures~\ref{fig:exp:wiki:auc}~and~\ref{fig:exp:wiki:f1_macro}, \textsc{SGDNet}\xspace outperforms other methods in terms of AUC and F1-macro, respectively. Figure~\ref{fig:exp:wiki:diff} indicates that our diffusion mechanism still works on the Wikipedia dataset. Figure~\ref{fig:exp:wiki:inject} shows the effect of the local feature injection ratio $c$, indicating that a properly selected $c$, such as $0.5$, is helpful for the performance.
\subsection{Effect of Embedding Dimension} \label{sec:appendix:effect_dim} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{FIG/EFFECT_DIM_LEGEND}\\ \subfigure[Bitcoin-Alpha]{ \hspace{-2mm} \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIM/EFFECT_BITCOIN_ALPHA_AUC_DIM} \label{fig:exp:effect_dim:bitcoin_alpha} } \subfigure[Bitcoin-OTC]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIM/EFFECT_BITCOIN_OTC_AUC_DIM.pdf} \label{fig:exp:effect_dim:bitcoin_otc} } \subfigure[Slashdot]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIM/EFFECT_SLASHDOT_AUC_DIM.pdf} \label{fig:exp:effect_dim:slashdot} } \subfigure[Epinions]{ \includegraphics[width=0.235\linewidth]{FIG/PLOT/EFFECT_DIM/EFFECT_EPINIONS_AUC_DIM.pdf} \label{fig:exp:effect_dim:epinions} } \caption{ \label{fig:exp:effect_dim} Effect of the embedding dimension of each model. } \end{figure} We investigate the effect of the node embedding dimension of each model in the datasets listed in Table~\ref{tab:datasets}. For this experiment, we vary the dimension of hidden and final node embeddings from $8$ to $128$, and observe the trend of AUC in the link sign prediction task. As shown in Figure~\ref{fig:exp:effect_dim}, \textsc{SGDNet}\xspace outperforms its competitors over all the tested dimensions, and it is relatively less sensitive to the embedding dimension than other models in all datasets except Bitcoin-Alpha. \subsection{Hyperparameter Configuration} \label{sec:appendix:hyperparameter} \def1.2{1.2} \setlength{\tabcolsep}{6pt} \begin{table*}[!h] \small \begin{threeparttable}[t] \caption{ \label{tab:hyper} We summarize the configurations of \textsc{SGDNet}\xspace's hyperparameters, which are used in the experiments of this paper. 
\vspace{-2mm} } \begin{tabular}{l | ccccc} \toprule \multicolumn{1}{c|}{\textbf{Hyperparameter}} & \textbf{Bitcoin-Alpha} & \textbf{Bitcoin-OTC} & \textbf{Slashdot} & \textbf{Epinions} & \textbf{Wikipedia} \\ \midrule Number $L$ of SGD layers & 1 & 2 & 2 & 2 & 2 \\ Local injection ratio $c$ & 0.35 & 0.25 & 0.55 & 0.55 & 0.5 \\ Number $K$ of diffusion steps & \multicolumn{5}{c}{10} \\ \midrule Input feature dimension $d_{i}$ & \multicolumn{5}{c}{128} \\ Hidden embedding dimension $d_{l}$ & \multicolumn{5}{c}{32} \\ Optimizer & \multicolumn{5}{c}{Adam (learning rate: 0.01, weight decay $\lambda$: 0.001)} \\ Number of epochs & \multicolumn{5}{c}{100} \\ \bottomrule \end{tabular} \end{threeparttable} \vspace{-2mm} \end{table*} \section{Introduction} \label{sec:introduction} \input{001intro} \section{Related Work} \label{sec:related} \input{002related} \section{Proposed Method} \label{sec:proposed} \input{003proposed} \section{Experiments} \label{sec:experiments} \input{004experiments} \section{Conclusion} \label{sec:conclusion} \input{006conclusion} \bibliographystyle{iclr2021_conference}
\subsection{Accuracy Analysis} \label{sec:eval-accuracy} In the first experiment, we aim to answer \textbf{RQ1.} \textit{What is the impact of different detector generation strategies on the misuse detection accuracy?} As explained in Section \ref{sec:approach}, API misuse detectors can be generated using different strategies. We evaluate the detection accuracy with respect to the following factors in detector generation strategies: \begin{itemize} \item \textbf{Detector generation strategies:} How to generate detectors after the clustering step. As mentioned in Section~\ref{sec:approach}, we have two strategies to exploit the clusters for the detector generation: (1) \textit{parallel evolution} or (2) \textit{global evolution}. \item \textbf{GROUM complexity:} \emph{groums} abstract the API usage and capture different aspects of the source code, which may lead to complex \emph{groums} and a significant overhead for detector generation and \emph{groum} comparison. We are interested in evaluating whether we can achieve the same detection accuracy with simpler \emph{groums} in which we do not consider data dependencies. \item \textbf{Clustering:} One of the enhanced features introduced into our approach is the clustering. It helps handle the variety of API usages and avoids generating a huge yet redundant number of detectors. We are interested in evaluating the impact of the clustering on the detection accuracy and what would be the effect of omitting the clustering from our approach. \end{itemize} \subsubsection{Analysis method} To answer \textbf{(RQ1)}, we performed a 10-fold cross-validation on each of the three APIs \textit{java.util.Iterator}, \textit{javax.crypto}, and \textit{javax.servlet.http}. We generated misuse detectors according to three generation configurations based on the previously mentioned variation factors: \textit{parallel vs. global evolution}, \textit{simple vs. complex \emph{groum}{}s} and \textit{with vs. without clustering}.
We then identified which configuration achieves the best accuracy. For the 10-fold cross-validation, we created 10 folds, each containing 10\% of the good API usages collected (as explained in Section \ref{sec:data}). Then, for each fold, we generate the detectors using the good usages from nine folds. The tenth fold, not used in the detector generation, is used as the test set. This test set is completed with the same number of bad usages to obtain a balanced test set with equal numbers of good and bad usages. The detectors are then applied to each good and bad use case contained in this test set to calculate their risk scores (cf. Section~\ref{sec:detection}). The cases are then sorted by their scores. This allows us to compute the accuracy on the top-ranked use cases, since we expect the API misuses to have the highest risk scores. We compute the accuracy as the number of true positives (misuses) over the number of considered top-$k$ use cases. We calculate the accuracy for both the top 10\% and the top 30\% of use cases. \subsubsection{Results and Analysis for RQ1} Tables \ref{tab:process5}, \ref{tab:process4}, and \ref{tab:process0} present the accuracy results for the three detector generation configurations.
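The ranking-based accuracy computation described above can be sketched as follows; the scores, labels, and function name are toy illustrations, not values from our experiments.

```python
def topk_accuracy(scores, is_misuse, fraction):
    """Sort use cases by risk score (descending) and return the share of
    true misuses among the top `fraction` of cases."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    k = max(1, int(len(scores) * fraction))
    return sum(is_misuse[i] for i in order[:k]) / k

# Toy balanced test set: 5 good usages and 5 misuses.
scores    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1, 0.05]
is_misuse = [1,   1,   1,   1,   1,   0,   0,   0,   0,   0]

print(topk_accuracy(scores, is_misuse, 0.30))  # 1.0: the top 3 are misuses
print(topk_accuracy(scores, is_misuse, 1.00))  # 0.5: balanced set overall
```

On a balanced set, a random ranking yields 0.5 on average, so values clearly above 0.5 in the top fraction indicate that the detectors rank misuses higher.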
\begin{table*}[htp] \centering \caption{Detection accuracy for \textit{Global evolution}.} \begin{tabular}{|>{\raggedright}m{2.7cm}|*{6}{m{0.5in}|}} \hline \multirow{4}{*}{}&\multicolumn{3}{c|}{Complex \emph{groums}} & \multicolumn{3}{c|}{Simple \emph{groums}} \\ \cline{2-7} & Iterator & http & crypto & Iterator & http & crypto \\ \hline Mean accuracy 10\% data (\%) & 52.7\% & \textbf{70\%} & 55\% & 54.3\% & \textbf{70\%} & \textbf{60\%} \\ \hline Mean accuracy 30\% data (\%) & 54.6\% & 57.8\% & 58.6\% & 55.7\% & 55.6\% & \textbf{65.7\%} \\ \hline \end{tabular} \label{tab:process5} \end{table*} \begin{table*}[htp] \centering \caption{Detection accuracy for \textit{Parallel evolution}.} \begin{tabular}{|>{\raggedright}m{2.7cm}|*{6}{m{0.5in}|}} \hline \multirow{4}{*}{}&\multicolumn{3}{c|}{Complex \emph{groums}} & \multicolumn{3}{c|}{Simple \emph{groums}} \\ \cline{2-7} & Iterator & http & crypto & Iterator & http & crypto \\ \hline Mean accuracy 10\% data (\%) & 52.7\% & \textbf{60\%} & 40\% & 54.3\% & 53.3\% & 50\% \\ \hline Mean accuracy 30\% data (\%) & 54.6\% & 57.8\% & 51.4\% & 55.7\% & 55.5\% & 50\% \\ \hline \end{tabular} \label{tab:process4} \end{table*} \begin{table}[htp] \centering \caption{Detection accuracy for the generation without clustering and \textit{simple \emph{groums}}.} \begin{tabular}{|l|r|r|r|} \hline & \multicolumn{1}{c|}{Iterator} & \multicolumn{1}{c|}{http} & \multicolumn{1}{c|}{crypto} \\ \hline Mean accuracy 10\% data & 54.3\% & 50\% & 50\% \\ \hline Mean accuracy 30\% data & 55.7\% & 53.3\% & 55.7\% \\ \hline \end{tabular} \label{tab:process0} \end{table} \textbf{\underline{Study 1.A: \textit{parallel vs. global evolution}.}} The global evolution (Table~\ref{tab:process5}) generally achieves better accuracy than the parallel evolution (Table~\ref{tab:process4}). We conjecture that the global evolution introduces more diversity during the generation.
Conversely, parallel evolution forces the individual generation of the detectors for clusters having different sizes, which in some cases limits the exploration possibilities. For example, we obtained 8 clusters of sizes 2, 2, 5, 6, 24, 44, 46, and 95 for \code{crypto}\xspace. In the small clusters, we do not have enough examples of good usage to generate accurate detectors. In other words, if we want to generate 50 different detectors from two groums, we have to find 50 different mutations of these two groums, which is difficult when the groums are small graphs with few nodes. The same observation holds for \code{http}\xspace, with a slightly smaller difference between the two strategies. \code{http}\xspace has 3 clusters of sizes 17, 51 and 61, and thus has fewer small clusters than \code{crypto}\xspace. Note that the results for \code{Iterator}\xspace are exactly the same for the two strategies for the simple reason that a single cluster was obtained (the API has only 4 methods), so both strategies behave identically. \textbf{\underline{Study 1.B: \textit{simple vs. complex \emph{groums}}.}} Using simple \emph{groums} is slightly better than using complex \emph{groums}. For \code{Iterator}\xspace, the accuracy increases by 2\% when we use simple \emph{groums}. For \code{crypto}\xspace, the use of simple \emph{groums} increases the accuracy for both the parallel and global processes. In particular, for the parallel process (\tabref{tab:process4}), the accuracy increases from 40\% to 50\% on the top 10\% of ranked methods. The exception is \code{http}\xspace, where the accuracy slightly decreases for the parallel evolution but stays high (70\%) with the global evolution. These results could be explained by the fact that the data dependency edges in the \emph{groums} are not essential for API usage comprehension, yet carry significant weight during the similarity computation between a method and a detector.
If we look at \figref{Groum2} and remove the data dependency edges in blue, we obtain a \emph{groum} with 7 elements instead of 10. Thus, simple \emph{groums} implicitly give more weight to the nodes and the remaining edges, which are more important for API usage comprehension. \\ \\ Note that the simplification of the \emph{groums} was done after the detector generation. During the generation process, we consider the data dependencies. \textbf{\underline{Study 1.C: \textit{with vs. without clustering}.}} The clustering allows achieving much better accuracy than no clustering. On the top 10\%, clustering increases the accuracy by 3.33\% with the parallel evolution and by 20\% with the global evolution on \code{http}\xspace, whereas \code{crypto}\xspace gains 10\% with the global evolution, reaching 60\% accuracy (\tabref{tab:process0}, \tabref{tab:process4} and \tabref{tab:process5}). As we conjectured, the clustering allows us to specifically target different usage scenarios during the detector generation. When the detectors are generated without clustering, some usage scenarios can be partially ignored in the random generation of detectors, especially those that are not very common, i.e., the probability of generating a detector from these rare usage scenarios is low. \\ \\ In conclusion, with this first study, we first show that global evolution is more beneficial than parallel evolution because of the presence of small clusters. Second, simple \emph{groums} are slightly more effective because they give more weight to important information about the API usages. Finally, not performing the clustering is detrimental to the detection accuracy, which confirms the intuition that grouping the API usages to generate the detectors is beneficial.
\section{BIS-based detection of API misuses} \label{sec:detection_algorithm} In this section, we discuss the formulation and rationale behind the choice of the biological immune system (BIS) as an inspiration to develop {\textsc{APImmune}\xspace}. We present the principles of the artificial immune system (AIS) algorithm, a simplified abstraction of the BIS, and its adaptation to the API-misuse detection problem. \subsection{Running Example} An API misuse is a specific issue which results from the violation of the API specifications. It is an unexpected utilization of the APIs. For example, in Figure~\ref{Example}, the resource \textit{BufferedReader} is initialized (line 2) and used to read each line (line 4), but not closed at the end. It is a classic example of \textit{BufferedReader} misuse. This misuse is an instance of one of the 13 types of common misuses identified by the authors of the benchmark \emph{MuBench}~\cite{MuBench}, i.e., a missing call. If the resource is not closed, no other process can access it. This misuse can cause exceptions and may corrupt the resources. Thus, we need to detect this risk early. \begin{figure} \includegraphics[width=3in]{fig/testGroum_java} \caption{\textit{BufferedReader} Misuse Example} \label{Example} \end{figure} In this work, we formulate an API usage as follows. \begin{Definition}[API Usage] An API usage is a fragment of code that involves the API classes and methods in certain libraries. \end{Definition} As an example, the code in Figure~\ref{Example} is an API usage containing several APIs in the JDK library, such as the classes \code{StringBuffer}, \code{BufferedReader}, \code{FileReader}, etc., and the method calls \code{BufferedReader.readLine()}, \code{StringBuffer.append(...)}, \code{StringBuffer.length()}, etc. We adopt GROUM, a graph-based representation for the API usages~\cite{Grouminer}.
\subsection{Artificial Immune System Detection} To protect the organism from potential pathogens, the immune system follows a three-step cycle: (1) discovery, (2) identification and (3) elimination. The discovery step detects potential pathogens, such as viruses and bacteria. When such an element is detected, the identification step is responsible for checking whether the identified element is known (immune memory). Finally, in the elimination step, the adequate response is selected depending on the identification step. As we are concerned with the detection of API misuses, we focus only on the discovery step and its abstraction by the AIS. One of the most important notions in an AIS is the self. It defines the boundaries of what is normal, and thus non-risky. For a human being, for example, the self is defined by all the normal cells of the body. In the context of API misuse detection, the self for an API can be defined by a set of client programs that correctly used the API in the past, i.e., by their API usages (including usages whose misuses have been fixed). The advantage of this definition is that the self for an API evolves, as new clients that correctly use the API can be added. A second important notion in an AIS is the maturation of T-cells that can be used to detect non-self elements. The T-cells are created randomly and exposed to normal cells. If a T-cell matches a normal cell, it is removed from the repertoire of immune cells to avoid the body attacking itself. This is called negative selection. For computational reasons, it is not possible to create a large number of T-cells. Hence, it is important to ensure a maximum coverage while keeping the number of T-cells minimal. When transposing the principle of the negative selection to the detection of API misuses, the random generation of detectors can be implemented by a multi-objective optimization process.
Indeed, if we fix the number of detectors, the goal is to find the set of detectors that are as different as possible from the normal usage situations in the selected clients, but also as different as possible from one another. The next important notion of an AIS is the affinity computation between a detector and an encountered cell. The affinity is a similarity function that allows us to assess whether the encountered cell belongs to the self or not. For the misuse detection, we are interested in the method level. The goal is then to define an abstract representation of a method that indicates how this method uses the targeted API. Such an abstraction should ignore elements not related to the API. \section{{\textsc{APImmune}\xspace}: BIS-inspired API Misuse Detection} \label{sec:approach} This section presents our algorithms to realize {\textsc{APImmune}\xspace}, a BIS-inspired API misuse detection tool. Our approach to detect API misuses is depicted in Figure~\ref{fig:Approach}. We begin with the extraction of usage signatures (groums) that represent the usage scenarios from the methods of a given client code corpus (the self). As the API can be used in different ways, the next step is to cluster the signatures depending on which API methods are involved. Then, starting from the clusters, a set of detectors is generated. The final step is the actual detection, in which all the generated detectors are used to assess the misuse risk of each new client method. Note that the obtained detectors have their own independent life cycles: they can be reused or shared, enhanced with new detectors when new safe clients are considered, and destroyed if they produce false positives.
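The negative-selection principle behind the detector generation can be sketched as follows. The signature encoding (tuples of call names), the Jaccard-style affinity, and the threshold are illustrative assumptions of ours, not the actual {\textsc{APImmune}\xspace} components.

```python
import random

def similarity(a, b):
    """Toy affinity: Jaccard overlap of the elements of two signatures
    (an illustrative stand-in for the groum similarity)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def negative_selection(self_set, alphabet, n_detectors, threshold, seed=0):
    """Randomly generate candidate detectors and discard any that match a
    safe ('self') signature too closely, mimicking T-cell maturation."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = tuple(rng.sample(alphabet, 4))
        if all(similarity(candidate, s) < threshold for s in self_set):
            detectors.append(candidate)
    return detectors

# Illustrative safe signatures and API call alphabet (not from the paper).
safe = [("new", "readLine", "append", "close"), ("new", "read", "close")]
ops = ["new", "read", "readLine", "append", "close", "flush", "skip", "reset"]

detectors = negative_selection(safe, ops, n_detectors=5, threshold=0.5)
```

Every surviving detector is guaranteed to be dissimilar from all safe signatures; the optimization process described in the text additionally pushes the detectors apart from one another.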
\begin{figure*}[h] \centering \includegraphics[width=5.25in]{fig/approach} \caption{{\textsc{APImmune}\xspace} Architecture: BIS-inspired API Misuse Detection} \label{fig:Approach} \end{figure*} \subsection{Usage Signature Extraction} The goal of this step is to produce, for each method in the safe clients' code, a signature that captures the API usage independently of the client behavior. To this end, we use the tool \emph{GrouMiner}~\cite{Grouminer} to extract a \emph{groum}{} as defined in Section~\ref{section:formulation}. The initial \emph{groum}{} contains all the elements in the considered method's body. Then, this \emph{groum}{} is pruned by removing all the nodes and edges that are not concerned with the API calls. Let us consider again the code fragment of Listing~\ref{example}; its initial \emph{groum}{} is the one of Figure~\ref{Groum1}. Now, if only the \code{BufferedReader} class is considered as part of the API, the pruning process produces the \emph{groum}{} depicted in Figure~\ref{Groum2}. \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{fig/TestGroum2} \caption{Usage Signature Example} \label{Groum2} \end{figure} \subsection{Signature Clustering} An API can offer different functionalities and thus expose different sets of methods to use them. It is therefore important to target different families of misuses without enumerating them explicitly. A good way to handle this variety of API usages is to identify similar usage scenarios. In this context, the second step of our approach is to derive clusters of usage signatures, which define families of API usage scenarios. The clustering allows targeting different usage scenarios during the detector generation and taking into account those that are not very common. Moreover, if the detectors are generated without clustering, redundant detectors will be derived for similar usage scenarios that were not clustered. Figure~\ref{fig:clustering} shows the clustering process.
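A minimal sketch of this clustering of co-used API methods, under the assumption of a Jaccard-style co-occurrence distance (the toy data and names are ours): with a minimum cluster size of two, DBSCAN reduces to the connected components of the $\epsilon$-neighborhood graph, which is what this illustrative code computes.

```python
import numpy as np

def cooccurrence_distance(usage):
    """usage[i]: set of client methods that call API method i. Distance is
    1 - Jaccard overlap of the client sets (illustrative stand-in)."""
    n = len(usage)
    D = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            union = usage[i] | usage[j]
            if union:
                D[i, j] = 1.0 - len(usage[i] & usage[j]) / len(union)
    return D

def cluster(D, eps=0.8, min_cluster=2):
    """With a minimum cluster size of 2, DBSCAN reduces to the connected
    components of the eps-neighborhood graph; isolated points are noise."""
    n = len(D)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if D[i, j] <= eps:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_cluster]

# Toy data: API methods 0-2 share client callers; method 3 is isolated.
usage = [{"m1", "m2", "m3"}, {"m2", "m3"}, {"m1", "m3"}, {"m9"}]
clusters = cluster(cooccurrence_distance(usage))
print(clusters)  # [[0, 1, 2]]
```

In practice, a library implementation of DBSCAN with a precomputed distance matrix would be used instead of this hand-rolled version.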
In the first step, we cluster API methods that are co-used by the client methods. To this end, we use DBSCAN, a density-based clustering algorithm~\cite{Ester96adensity-based}. DBSCAN constructs clusters of API methods by grouping those that are close to each other (i.e., similar methods) into dense regions. Two API methods are close to each other (short distance) if they have a high co-occurrence frequency; thus, they share a large set of common usages. Moreover, with DBSCAN we do not need to specify the number of clusters, and DBSCAN is also very robust to outliers, which in our case occur for utility methods that are frequently co-used with domain-specific methods. The algorithm has two parameters. The first parameter is the minimum number of methods in a cluster. We set it at two so that a cluster includes at least two methods of the API. The second parameter is epsilon, the maximum distance within which two points can be considered neighbors of each other. In other words, the epsilon value controls the minimal density that a clustered region can have: the shorter the distance between methods within a cluster, the denser the cluster. We set it at 0.8 to minimize the noisy points, i.e., two methods are clustered together if they share at least 20\% of their client calling methods. In the second step, we derive the families of API usage scenarios. For each API-method cluster inferred in the first step, we identify its corresponding client methods, i.e., the client methods using the API methods in the cluster. \begin{figure*}[h] \centering \includegraphics[height=2.75in, width=6in]{fig/clustering.png} \caption{Signature clustering} \label{fig:clustering} \end{figure*} \begin{figure*}[p] \centering \includegraphics[height=3.5in, width=6in]{fig/Process4.png} \caption{Detectors generation process with clustering.
Merging after evolution.} \label{fig:evolutionTop} \end{figure*} \begin{figure*}[p] \centering \includegraphics[height=3.5in, width=6in]{fig/Process5.png} \caption{Detectors generation process with clustering. Merging before evolution.} \label{fig:evolutionbottom} \end{figure*} \subsection{Detector Generation} To allow the detection of API misuses, we use a genetic algorithm to generate detectors mimicking the T-cells. The objective is to generate a fixed number of detectors that represent artificial signatures different from those of the safe code (random alterations of the good-usage signatures). The genetic-based generation algorithm starts from a population of randomly generated detector sets, each having a fixed size. Each set is a candidate solution. Then the algorithm evolves these sets through a given number of generations. In each generation, the algorithm creates a new population of candidate detector sets by modifying the detectors in the sets (production of new genetic material by the mutation operator) and/or by combining detectors coming from different sets (recombination of the existing genetic material by the crossover operator). After the clustering step, we experimented with two alternative generation processes. The first, called \textit{parallel evolution}, consists in having a separate detector generation process per cluster (Figure~\ref{fig:evolutionTop}). The second, called \textit{global evolution}, uses the clusters to seed the generation process by producing an initial set of detectors that is globally refined later on (Figure~\ref{fig:evolutionbottom}). \textit{Parallel evolution:} The detector generation is performed specifically for each cluster in a parallel mode. Detectors are generated by mutating client-method groums. We evolve detectors separately for each cluster, using a genetic algorithm, and we merge the best final solutions (sets of detectors) at the end of the process.
If the number of clusters is high, merging all the detectors may result in a large set of detectors. Thus, we use another optimization process to reduce the merged detectors to a minimal set. To this end, we use proportionate selection, also known as roulette wheel selection, to generate a population of random combinations of detectors, with each combination having a fixed size. The probability that a detector is included in any of the combinations is proportional to its fitness (see below for the fitness calculation). Then the generated population evolves through genetic recombination. Note that this last process does not generate new detectors; it only searches for the best combination of a fixed number of detectors. \textit{Global evolution:} In this alternative, the genetic algorithm is applied only once. We seed the initial population of detector candidate sets with detectors coming from each cluster, using the roulette wheel selection so that each cluster is represented proportionally to its size. The more client-method groums a cluster contains, the more candidate detectors are likely to be generated from it. Then, the genetic-based generation process runs on this initial population regardless of the clusters that served to generate it. For both alternatives, the evolution is guided by the two objectives of having a set of detectors that is different from the normal signatures while being diverse. The evolution of solutions is performed using genetic operators, i.e., elitism, crossover, and mutation. The details of the most important elements of the algorithm are as follows. \textbf{Elitism}: When creating the next generation of candidate detector sets, a small subset of the current-generation sets having the highest fitness values is automatically added. Elitism ensures that the best solutions are kept during the evolution.
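The roulette wheel selection used in both alternatives can be sketched as follows. This is a minimal illustration in Python; the function name and data layout are ours and do not reflect the actual implementation of \textsc{APImmune}\xspace{}.

```python
import random

def roulette_wheel_select(detectors, fitnesses, k):
    """Draw k detectors, each draw with probability proportional to fitness.

    `detectors` and `fitnesses` are parallel lists: a detector with twice
    the fitness of another is twice as likely to be picked at each draw.
    """
    total = sum(fitnesses)
    selected = []
    for _ in range(k):
        r = random.uniform(0, total)   # spin the wheel
        acc = 0.0
        for d, f in zip(detectors, fitnesses):
            acc += f
            if acc >= r:               # the slice containing r wins
                selected.append(d)
                break
    return selected
```

Selection is done with replacement, so fit detectors can appear in several combinations, which matches the goal of seeding combinations proportionally to fitness.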
\textbf{Crossover}: After performing the elitism, the remaining slots for the next generation are filled using the crossover between detector sets of the current generation. The crossover consists of selecting two sets and producing two offspring by exchanging subsets of detectors, as illustrated in Figure~\ref{fig:crossover}. The selection favors the fittest detector sets while keeping a certain degree of randomness. When two detector sets are selected, the crossover is performed with a given probability (set to 0.9). \begin{figure*}[h] \centering \includegraphics[width=0.8\linewidth]{fig/crossover.png} \caption{Crossover Process} \label{fig:crossover} \end{figure*} \textbf{Mutation}: After each crossover, the offspring (or the parents if the crossover is not performed) are candidates for mutation with a certain probability (set to 0.2). When a decision is made to mutate a detector set, a subset of its detectors is randomly selected and one of the nine types of mutations is randomly chosen to apply to each of these detectors. {\textsc{APImmune}\xspace} considers the types of mutation operators explained in Section~\ref{section:formulation}. Figure~\ref{fig:mutation} shows the \emph{add node} mutation. \begin{figure*}[h] \centering \includegraphics[width=0.7\linewidth]{fig/mutation.png} \caption{Mutation Process} \label{fig:mutation} \end{figure*} \textbf{Fitness function}: Both the elitism and the selection for the crossover use a fitness function to favor the fittest detector sets. For a detector set $S$, the fitness function is the average of the fitness scores of the detectors $d_i \in S$. To ensure a maximum coverage with a limited set of detectors, the fitness function of each detector should consider two aspects: the dissimilarity with the safe usage signatures $C$ and the dissimilarity with the other detectors of $S$.
Consequently, the fitness score of a detector $d_i$ is a linear combination of two dissimilarity functions: \begin{equation} fitness(d_i) = \alpha \cdot clientDis(d_i) + \beta \cdot detectorDis(d_i) \end{equation} Both the client distance $clientDis(d_i)$ and the detector distance $detectorDis(d_i)$ are based on the similarity function $sim(d_i,y)$ between two \emph{groums}: $d_i$ for the detector and $y$ for either another detector or a Self API usage. It is defined as the proportion of shared elements (nodes, edges, exceptions, and control structures) between the compared \emph{groums}. To derive $clientDis(d_i)$, we start by calculating $minDis(d_i)$, the minimal distance between $d_i$ and any of the API usages in the Self $C$: \begin{equation} minDis(d_i)=1-\max_{s_j \in C} (sim(d_i, s_j)) \end{equation} To capture the fact that deviations from the normal usages are in general different but not that distant from the normal usages, we give a perfect distance score $clientDis(d_i)=1$ when $minDis(d_i)$ is in a certain interval $[l,h]$, where $l$ is close to 0 and $h$ is a maximum tolerated deviation. We considered the interval $[0.01, 0.33]$ in our experiments. For values outside this interval, we assign to $clientDis(d_i)$ a value between $0$ and $0.75$, proportionally to how far the value is from the interval. This value is 0 if $minDis(d_i)$ equals 0 or 1. The distance with the other detectors, $detectorDis(d_i)$, is calculated as follows: \begin{equation} detectorDis(d_i)=1-\frac{\sum_{d_k \in S, k \neq i} sim(d_i, d_k)}{|S|-1} \end{equation} Regardless of the detector generation alternative, the best detector set $R$ is used in the future to assess new client code. \subsection{Misuse Detection} \label{sec:detection} The actual detection consists in measuring the similarity between the signature $a_t$ of each new client method $m_t$ and each detector $d_j \in R$ using the $sim(a_t,d_j)$ function.
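The similarity function and the fitness components defined above can be sketched as follows. This is an illustrative Python sketch under our own assumptions: groums are modeled as flat element sets (so $sim$ becomes a Jaccard index), and the exact ramp used outside the $[l,h]$ interval, as well as the weights $\alpha=\beta=0.5$, are our choices rather than values taken from the paper.

```python
def sim(g1, g2):
    """Similarity between two groums, modeled here as the proportion of
    shared elements (a Jaccard index over the element sets)."""
    g1, g2 = set(g1), set(g2)
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 0.0

def min_dis(d, self_usages):
    """minDis(d): distance to the closest Self API usage."""
    return 1.0 - max(sim(d, s) for s in self_usages)

def client_dis(d, self_usages, l=0.01, h=0.33):
    """Perfect score inside [l, h]; a value in [0, 0.75] outside,
    decreasing to 0 as minDis approaches 0 or 1 (one possible ramp)."""
    m = min_dis(d, self_usages)
    if l <= m <= h:
        return 1.0
    if m < l:                               # too close to a normal usage
        return 0.75 * m / l
    return 0.75 * (1.0 - m) / (1.0 - h)     # too far from every normal usage

def detector_dis(d, others):
    """Average dissimilarity with the other detectors of the set."""
    if not others:
        return 1.0
    return 1.0 - sum(sim(d, o) for o in others) / len(others)

def fitness(d, self_usages, others, alpha=0.5, beta=0.5):
    return alpha * client_dis(d, self_usages) + beta * detector_dis(d, others)
```

A detector identical to a Self usage gets $clientDis = 0$, while one deviating moderately (e.g., $minDis = 0.2$) gets the perfect score of 1, rewarding detectors that are "different but not that distant" from normal usages.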
For a given $m_t$, the risk score is derived by aggregating the similarities with the individual detectors. The obvious strategy is to assign to the risk score the maximum similarity found between $m_t$ and the detectors in $R$. Alternatively, rather than looking at the detector that best matches the method being evaluated, we assign higher risk scores to methods that are close to multiple detectors. The closer the method is to different detectors with high similarities, the more it will be considered at risk. In our metaphor with the immune system, this would mean that a cell that tends to match several T-cells would be qualified as pathogenic. To implement this idea, we use the logical function of \emph{Noisy or}. The risk score according to the \emph{Noisy or} aggregation is calculated as follows and as illustrated in Figure~\ref{score}: \begin{equation} risk(m_t)=1-\prod_{d_k \in R} \left(1- sim(d_k, a_t)\right) \end{equation} \begin{figure*}[h] \centering \includegraphics[width=0.9\linewidth]{fig/score.png} \caption{Risk score computation} \label{score} \end{figure*} \section{Background} \label{background} In this section, we present the background on API usages and misuses as well as the principles of the artificial immune system (AIS), a simplified abstraction of the biological immune system (BIS). \subsection{API Misuses} Developers use the functionality of libraries via Application Programming Interfaces~(APIs) to access the classes, methods, and fields that make up the APIs. Software libraries can be used in different ways. API specifications are the conditions on the usages of API elements that a program needs to follow for the libraries to work properly \cite{saied2018improving,huppe2017mining}. For example, in the Java Development Kit (JDK), one could instantiate a \code{BufferedReader} object for reading the data from a buffer, and then close the resource to guarantee data integrity.
However, not all of the usages are well documented in the official documentation and programming guides~\cite{DBLP:conf/icse/Duala-EkokoR12}. That leads to misunderstandings and, thus, incorrect usages of the APIs that violate their specifications. Those violations are called {\em API misuses}. For example, in Listing~\ref{example}, the resource \textit{BufferedReader} is instantiated (line 2) and used to read each line (line 4); if it is not closed at the end, this is a classic example of \textit{BufferedReader} misuse. This misuse is an instance of one of the 13 types of common misuses identified by the authors of the benchmark \emph{MuBench}~\cite{MuBench}, i.e., a missing call. \input{codeExample} \subsection{Artificial Immune System Detection} A detailed presentation of the biological immune system and artificial immune systems can be found in \cite{immunology2002richard,castro2002artificial}. Let us summarize the principles of the artificial immune system (AIS) algorithm that are of interest for our work. To protect the organism from potential pathogens, the immune system follows a three-step cycle: (1) discovery, (2) identification and (3) elimination. The discovery step detects potential pathogens, such as viruses and bacteria. When such an element is detected, the identification step is responsible for checking whether the identified element is known (immune memory). Finally, in the elimination step, the adequate response is selected depending on the identification step. Discovery is the phase that interests us in particular, as we are concerned with the detection of API misuses. Therefore, we explain its principle in the following paragraphs. There is no central organ that fully controls the immune system. Instead, detectors wander in the body searching for harmful elements. Any element that can be recognised by the immune system is called an antigen.
The cells that originally belong to our body and are harmless to its functioning are termed self (for self antigens), while the disease-causing elements are termed non-self (for non-self antigens). The immune system classifies the cells that are present in the body as self and non-self cells. The immune system produces a large number of randomly created detectors, the T-cells, that can be used to detect non-self elements. The T-cells are created randomly and exposed to normal cells. If a T-cell matches a normal cell, it is removed from the repertoire of immune cells to avoid the body attacking itself. This is called negative selection. When using the AIS metaphor, it is not possible to create a very large number of T-cells. Hence, it is important to ensure a maximum coverage while keeping the number of T-cells minimal. The next important notion of an AIS is the affinity computation between a detector and an encountered cell. The affinity is a similarity function that assesses whether the encountered cell belongs to the self or not. Figure~\ref{AIS} gives a simplified overview of how the presented AIS concepts will be used in our approach. The normal API usages are considered as the normal body cells (equivalent of the self). We collect normal usages of the given APIs from a set of client programs correctly using the APIs. Artificial detectors (equivalent of the T-cells) are randomly generated with the objective of being different from the normal usage of the APIs. The detection is performed on tested methods using the APIs, which are compared against the detectors to estimate the misuse risks. The risk index is used to identify the API misuses (equivalent of the non-self).
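The negative selection principle can be sketched as follows. This is a minimal, generic Python sketch of the AIS idea (random candidates that match any self sample above an affinity threshold are discarded); the encoding, function names, and threshold are ours, purely for illustration.

```python
import random

def negative_selection(self_set, candidate_factory, n_detectors,
                       affinity, threshold=0.8, max_tries=10000):
    """Generate detectors by negative selection.

    Random candidates are produced by `candidate_factory`; any candidate
    whose affinity with some self sample reaches `threshold` is removed,
    mimicking T-cells that would attack the body's own cells.
    """
    detectors = []
    for _ in range(max_tries):
        if len(detectors) >= n_detectors:
            break
        c = candidate_factory()
        if all(affinity(c, s) < threshold for s in self_set):
            detectors.append(c)
    return detectors
```

For instance, with strings as "cells" and character overlap as affinity, a candidate identical to a self string is rejected while sufficiently different ones are kept as detectors.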
\begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{fig/AISconcepts} \caption{AIS concepts applied to the API misuse detection} \label{AIS} \end{figure} \subsection{Comparative Analysis} \label{sec:comparison} In this experiment, we aim to answer \textbf{RQ3.} \textit{How well does our approach perform compared to existing approaches?} \subsubsection{Analysis method} To answer this research question, we run \textsc{APImmune}\xspace to detect misuses in the MuBench dataset~\cite{MuBench} and compare the results with those of the misuse detection tools in MuBenchPipe~\cite{MuBenchPipe}. MuBench provides a benchmark of API misuses from real-world projects, which have been manually verified. MuBenchPipe provides a pipeline for running existing detectors on the misuses in MuBench. In its current version, MuBenchPipe supports four Java API misuse detectors: DMMC~\cite{MBM10}, GROUMiner~\cite{Grouminer}, Jadet~\cite{WZL07}, and Tikanga~\cite{WZ11}, all of which mine patterns from each subject project and detect misuses as violations of the patterns at the same time. MuBench contains misuses from a wide range of APIs, including \code{Iterator} and \code{javax.crypto}. However, it does not contain misuses from \code{javax.servlet.http}. Therefore, in this experiment, we compare the detectors only on misuses of \code{Iterator} and \code{javax.crypto}. MuBench has 13 and 7 misuses for \code{Iterator} and \code{javax.crypto}, respectively. Using the detectors trained from our dataset collected from GitHub, we detected misuses in the methods of the projects in the benchmark and ranked the analyzed methods according to their risk scores. For \code{Iterator} uses, we only considered the top-13 ranked methods, since MuBench flags only 13 misuses of \code{Iterator}. For the same reason, we considered the top-7 ranked methods for \code{javax.crypto}.
To compare the tools, we calculate the recall as the number of misuses detected by a tool over the total number of known misuses in MuBench. We also went through the identified misuses to qualitatively and conceptually compare our approach with the considered tools and show how they could complement each other. \subsubsection{Results for RQ3} As shown in Table~\ref{tab:recall}, \textsc{APImmune}\xspace{} performs better than the other four tools for \code{crypto}\xspace and worse than two of the four tools for \code{Iterator}\xspace. What is interesting in the results of our approach is the diversity of the misuses revealed. For \code{crypto}\xspace we detect three misuses of different types: a missing call, a missing exception handling, and a missing condition on a value or state. For \code{Iterator}\xspace, all detected misuses are missing conditions on a value or state. Three considerations may explain the good performance for \code{crypto}\xspace. The first important point is that we do not mine and use usage patterns. Using patterns restricts the detection to specific scenarios: it focuses on specific usages and does not take into account the diversity of API utilization, especially rare usages. Second, our groums capture various control structures, including exceptions, which allows us to better represent the API good usages and the potential deviances. This is why we were able to detect a missing exception. The last benefit of our approach is the edge typing of \emph{groums}. This allows, among others, defining the scopes in which the API methods are called. For example, with the loop inclusion edge, it is possible to distinguish between an API method call inside a loop and the same call after a loop. This brings more precise information about the usage scenario. In conclusion, \textsc{APImmune}\xspace{} can complement the pattern-based detection tools. These are good at detecting specific cases of misuses, as was the case for \code{Iterator}\xspace.
However, for diverse misuses, as for \code{crypto}\xspace, \textsc{APImmune}\xspace{} is better suited as it learns various deviances from normal uses rather than encoding a fixed set of patterns. \begin{table}[htp] \centering \caption{Detectors' recall (number of flagged misuses)} \begin{tabular}{|l|c|c|} \hline Tool & Iterator & crypto \\ \hline \textsc{APImmune}\xspace & 8\% (1) & 43\% (3) \\ \hline DMMC & 14\% (2) & 0\% (0) \\ \hline GROUMiner & 0\% (0) & 0\% (0) \\ \hline Jadet & 0\% (0) & 0\% (0) \\ \hline Tikanga & 23\% (3) & 0\% (0) \\ \hline \end{tabular} \label{tab:recall} \end{table} \section{Conclusion} \label{sec:conclusion} In this work, we propose \textsc{APImmune}\xspace{}, an approach to detect API misuses using the immune-system metaphor. The normal API usages are considered as the self (normal body cells), whereas API misuses are considered as non-self antigens. We use a genetic algorithm to generate detectors mimicking the T-cells that can be used to detect non-self API misuses. This approach has the advantage of generating the detectors once; they can then be used and enhanced without the need to disclose the clients' code or to abstract good-usage patterns. Moreover, the detectors can be produced for different versions of the programming interface, which brings more flexibility to the detection process. The evaluation of \textsc{APImmune}\xspace{} shows that it can detect various types of misuses; however, our approach does not outperform the others in all cases. We have rather developed an approach that complements existing work. Existing misuse detection techniques are mainly based on usage patterns. This strategy is effective in detecting specific cases of misuses, in particular on APIs where the uses are easy to characterize and patterns are easy to infer. Moreover, pattern-based approaches are heavily dependent on the frequency of good usages in the learning data set.
\textsc{APImmune}\xspace{} can complement the pattern-based detection tools that are limited to fixed sets of misuse patterns. For diverse misuses, \textsc{APImmune}\xspace{} is better suited as it learns various deviances from normal uses rather than encoding a fixed set of patterns. \textsc{APImmune}\xspace{} may require some time to generate the detectors, but this is done only once. It requires much less time to evaluate client programs and assign risk scores to their methods. This short time encourages potential integration into IDEs. Despite the encouraging results, there is still some exploration to be done with \textsc{APImmune}\xspace{}: the approach has multiple parameters that had to be set. It could be interesting to explore the results with different threshold parameters to see their impact on the detection efficiency. For instance, we could increase the number of generated detectors while taking into account their potential redundancy. In the future, we plan to experiment with larger datasets involving many APIs. This will help us customize the detection process to the characteristics of these APIs (number of public methods, number of distinct functionalities, etc.). Another area for improvement is to explore the combination of our approach with pattern-based detection. \subsection{Training Data Collection} \label{sec:data} As explained in \secref{sec:approach}, our approach generates misuse detectors for a specific API of interest. In our experiments, we evaluate our approach on APIs providing different functionality from three different packages of the JDK: \code{java.util.Iterator}\footnote{https://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html}, \code{javax.crypto.*}\footnote{https://docs.oracle.com/javase/8/docs/api/javax/crypto/package-summary.html} and \code{javax.servlet.http.*}\footnote{https://docs.oracle.com/javaee/7/api/javax/servlet/http/package-summary.html}.
To train the detectors, we need to collect a large sample of good usages of these APIs in their client code. We did so by looking at changes to client methods that fixed the uses of these APIs in the histories of open-source projects hosted on GitHub. We consider the client methods before the changes (buggy ones) as misuses and those after the changes (fixed ones) as good uses of these APIs. To do that, we first collected high-quality repositories. To eliminate toy or experimental repositories, we used the GitHub APIs to search for Java repositories that had been given at least 5 stars by individual GitHub users. This gave us 21,745 repositories. Then, for each repository, we identified the commits that had potentially fixed bugs by applying string pattern matching on the contents of the commit messages against the keywords \textit{fix, bug, issue, error, problem, exception and fail}. For each potential fixing commit, we mapped the methods before and after the change from the set of changed files. For each modified method, we used the abstract syntax tree (AST) differencing algorithm in~\cite{jsync-tse11} to identify the changed AST nodes. If a changed method does not contain any changed AST nodes whose resolved type is \code{java.util.Iterator} or belongs to the package \code{javax.crypto} or \code{javax.servlet.http}, we disregard it. To make sure that a changed method actually fixes the usage of the APIs of interest, we construct and compare the \emph{groum}{}s, with respect to the three APIs of interest, of the methods before and after the change. If the \emph{groum}{}s are different, we add the method before the fix to the set of misuses and the method after the fix to the set of good uses of the corresponding API. We keep the misuses for the validation experiment. The numbers of projects, fixing commits, pairs of misuses and good uses, and API \emph{groum}{}s generated for each API in our dataset are shown in \tabref{tab:data}.
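The commit filtering and groum-comparison steps above can be sketched as follows. This is an illustrative Python sketch: the actual pipeline operates on Java ASTs and groum graphs, and the function names and the set-based groum model here are ours.

```python
# Keywords used to flag potentially bug-fixing commits (from the paper).
FIX_KEYWORDS = ("fix", "bug", "issue", "error", "problem", "exception", "fail")

def is_potential_fix(commit_message):
    """String pattern matching of the commit message against fix keywords."""
    msg = commit_message.lower()
    return any(kw in msg for kw in FIX_KEYWORDS)

def fixes_api_usage(groum_before, groum_after):
    """A changed method is kept only if its API groum actually changed,
    i.e., the fix touched the usage of the API of interest.
    (Groums are modeled as flat element collections for illustration.)"""
    return set(groum_before) != set(groum_after)
```

A commit that passes `is_potential_fix` contributes a (before, after) method pair; the pair is kept as a (misuse, good use) example only when `fixes_api_usage` holds for the groums of interest.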
\begin{table} \centering \caption{Statistics of the Training Dataset.} \begin{tabular}{|l|r|r|r|r|} \hline API & Projects & Commits & Uses & API \emph{groum}{}s\\ \hline \code{Iterator} & 832 & 1,560 & 1,833 & 3,641 \\ \code{javax.crypto} & 74 & 132 & 201 & 271 \\ \code{javax.servlet.http} & 968 & 4,672 & 6,607 & 957 \\ \hline \end{tabular} \label{tab:data} \end{table} \section{Discussion and Threats to Validity} \label{sec:validity} It is widely recognized that several factors can bias the validity of empirical studies. We discuss below the different threats that may affect our study. External validity concerns the possible biases related to the choice of experimental subjects/objects. Although we analyzed a large-scale dataset of 4,869 API groums with 8,641 API uses, collected from 6,364 commits of 1,874 projects selected from an initial set of 21,745 repositories, we cannot claim that our results generalize beyond the three APIs for which the dataset was collected. Regarding internal validity, we did not fine-tune the other detectors ourselves; we rather used the results reported in MuBench with the configurations reported in the detectors' respective publications. A better fine-tuning would probably have led to different results. Another threat to internal validity can be related to the knowledge and expertise of the human evaluators who compared the groums of the methods before and after the bug-fixing commits in order to add the methods before the fix to the set of misuses and the methods after the fix to the set of good uses of the corresponding API. Indeed, only the expertise of the human evaluators guarantees that the result of the fix itself is a good use of the API. Threats to construct validity can be related to the measurements performed to address the research questions. In our study, we measured the risk score for each evaluated method.
The misuse detectors are applied to each good and bad use case contained in the test set. The cases are then sorted by their risk scores. This allows computing the accuracy on the top-ranked cases. We computed the accuracy as the number of true positives (misuses) over the number of considered top-k use cases. While the adoption of these measurements is popular to assess the efficiency of algorithms and to conduct comparative studies, we cannot neglect the existence of a slight bias related to the ground truth used to calculate the accuracy, as the good and bad use cases were manually validated. Moreover, the experimenter expectancy effect is another possible threat; indeed, the manual inspection was performed by two of the authors. Our approach does not outperform the others in all cases. We have rather developed an approach that complements existing work. Existing misuse detection techniques are mainly based on usage patterns. This strategy is effective in detecting specific cases of misuses, in particular on APIs similar to \code{Iterator} where the uses are easy to characterize and patterns are easy to infer. Our approach brings a new way of addressing the API misuse problem. While the vast majority of existing detectors consider the deviance from perfection as the criterion to identify API misuses, we rather decided to opt for the closeness to evil as the criterion to detect API misuses. Instead of measuring how far the evaluated method is from the right behavior (i.e., the patterns), we look at how similar the evaluated method is to bad behaviors (i.e., the detectors). Moreover, in our metaphor with the immune system, a cell that tends to match several T-cells would be qualified as pathogenic. Thus, we made the choice to assign higher risk scores to methods that are close to multiple detectors through the logical function of \emph{Noisy or}, rather than looking at the detector that best matches the method being evaluated.
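The two aggregation strategies discussed here can be contrasted with a small sketch (Python, our own illustration; `similarities` is the list of $sim$ values between the evaluated method and each detector):

```python
def risk_max(similarities):
    """Risk as the single best-matching detector only."""
    return max(similarities)

def risk_noisy_or(similarities):
    """Noisy-or aggregation: risk(m) = 1 - prod(1 - sim_k).
    A method close to several detectors gets a higher risk than one
    that matches a single detector equally well."""
    risk = 1.0
    for s in similarities:
        risk *= (1.0 - s)
    return 1.0 - risk
```

For example, three detectors at similarity 0.5 yield a max-based risk of 0.5 but a noisy-or risk of 0.875, which is exactly the "matching several T-cells" intuition described above.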
To improve \textsc{APImmune}\xspace{} in detecting specific cases of misuses, as was the case for \code{Iterator}\xspace, a different strategy to measure the risk score could be investigated, for instance the maximum similarity between the evaluated method and each of the detectors. However, for diverse misuses, as for \code{crypto}\xspace, \textsc{APImmune}\xspace{} can complement the pattern-based detection tools, as it learns various deviances from normal uses rather than encoding a fixed set of patterns. In our approach, we generate only fifty detectors for each API. This could be considered a low number, and we could consider producing a larger number of detectors. However, generating a high number of detectors brings into the picture the detector diversity issue. When the detectors are not as diverse as expected, the misuse detection step is impacted: if a method (a cell) tends to match several detectors (T-cells), it is qualified as a misuse (pathogenic). Thus, we assign higher risk scores to methods that are close to multiple detectors through the logical function of \emph{Noisy or}. As a consequence, if the detectors are not different from each other, we will assign higher risk scores to methods that are similar to the same redundant detectors. Moreover, our choice to limit the number of detectors is also motivated by performance concerns. The genetic-based generation of detectors is a heavy process during which thousands of detectors are generated and compared to the training data set as well as to each other, to finally come up with a limited set of the best and most diverse detectors. Despite all those reasons, we have to admit that fixing the number of detectors to the same low number for all the evaluated APIs is a strategy that needs to be improved. In our future work, we will investigate the impact of correlating the number of detectors to some characteristics of the API, such as the number of public methods or the number of inferred clusters (usage scenarios of the APIs).
Our approach generates a fixed number of detectors that represent artificial signatures different from those of the safe code. However, the fact that a detector deviates from the normal API usages in the learning dataset does not guarantee that the detector actually represents by itself a misuse of the API. Moreover, in a very extreme case, we may consider the risk that the detector generation could end up producing signatures that are good or efficient usages of the API. Even though this case is highly unlikely, our approach has features that help avoid it. What actually makes a detector deviate from the good usage of an API is the mutation operator: each detector undergoes one or multiple mutations among nine types of mutations. Moreover, we replace the boolean detection result (Self/non-Self) with a risk score that allows ranking the evaluated methods according to the estimated risk. We assign higher risk scores to methods that are close to multiple detectors through the logical function of \emph{Noisy or}, rather than looking at the detector that best matches the method being evaluated. Thus, even if a bad detector ends up in the list of used detectors, its impact will be reduced through the comparison with the other detectors. In addition, the obtained detectors have their own independent life cycles. They can be reused/shared, enhanced with new detectors when new safe clients are considered, and destroyed if they produce false positive(s). \subsection{Efficiency Analysis} \label{sec:eval-efficiency} In this experiment, we aim to answer \textbf{RQ2.} \textit{What is the execution cost of our approach in terms of required execution time and storage?} \subsubsection{Analysis method} We measure the execution times for the different steps of the approach as well as the average required storage for the 3 APIs when serializing the detectors' \emph{groums}.
We run the best detector generation strategy, which uses simple \emph{groums} (without data dependencies), clustering, and global evolution of detectors, for each API to compute the average performances. For each execution, we compute the time of each step of the detector generation and risk-score computation. For memory usage, we use JVM Monitor\footnote{http://jvmmonitor.org/index.html} to obtain realtime data, and for storage, we look at the Windows explorer properties of the output folders. To measure the performance of our approach, we use the benchmark \emph{MuBench}~\cite{MuBench}. \textsc{APImmune}\xspace{} works in two steps: the detector generation and the risk evaluation. The first step takes a relatively long time, with on average 33 minutes to generate the detectors for the 3 APIs. The generation of detectors consists of 3 steps: extracting API \emph{groums} from source code, clustering usages, and generating the detectors. The extraction of API \emph{groums} takes less than 10 minutes and varies among APIs depending on the number of methods provided as input. Clustering takes less than 15 seconds. As for the detectors, the generation exceeds 15 minutes to produce 50 detectors per API. Note that the detector generation process is performed once and the generated detectors can be used several times to evaluate different clients using the API for which the detectors were generated. This order of magnitude (minutes) is therefore acceptable. We measured the execution time of the risk estimation only for \code{Iterator}\xspace and \code{crypto}\xspace because MuBench has no client code and misuses for \code{http}\xspace. The risk estimation is also divided into 3 steps, which are listed in Table~\ref{tab:ExecTime}. The first step is the extraction of API \emph{groums} from the client code under evaluation. This step takes less than 25s for \code{Iterator}\xspace and less than 10s for \code{crypto}\xspace.
This difference in time is explained by the fact that the \code{Iterator}\xspace clients compose a corpus of more than 21,000 methods and produce 2,175 uses. In contrast, \code{crypto}\xspace has only 20 different uses in 120 methods. The second step is to load the trained detectors to use them for the detection. This takes about 10s for 50 detectors for each of the two APIs. Finally, there is the sorting time, which is less than 9 seconds for \code{Iterator}\xspace and about a tenth of a second for \code{crypto}\xspace. This is explained by the difference in the number of methods to sort, 100 times more for \code{Iterator}\xspace. For the memory measurement, we ran the experiment for the 3 APIs with clustering and global evolution on a laptop under Windows 10 with an amd64 OS architecture, 8 processors, and 6 GB of RAM. On average, an API \emph{groum}{} is stored in 26 MB of memory and a detector in 2.5 MB. It is, therefore, necessary to provide more than 125 MB of memory to save 50 detectors. In terms of heap memory, the maximum is 3,695 MB and the average used is 1,109 MB. In conclusion, \textsc{APImmune}\xspace{} requires more time to generate the detectors, but this is done only once. It requires much less time to evaluate client programs and assign risk scores to their methods. This short time allows potential integration into IDEs. \begin{table}[htp] \centering \caption{Evaluation time in seconds on MuBench clients.} \begin{tabular}{|l|r|r|} \hline & Iterator & crypto \\ \hline API groums building & 24.64 & 9.41 \\ Detectors deserialization & 10.34 & 10.15 \\ Ranking time & 8.81 & 0.16 \\ \hline Total time & 43.80 & 10.31 \\ \hline \end{tabular} \label{tab:ExecTime} \end{table} \section{Preliminary evaluation} \label{sec:evaluation} \input{data} \subsection{Results} To conduct the experiments, we created 10 folds, each having 183 cases of good uses and the equivalent 183 misuse cases.
Then, for each fold, we generate the detectors using the good uses in the nine other folds. The detectors are then applied to each good or bad use case in the fold (366 = 183 + 183) to calculate their risk scores (cf. Section~\ref{sec:detection}). The cases are then sorted by their scores. This allows us to compute the precision and recall at k (the k cases with the highest risk scores) for each fold. Figure~\ref{fig:precision_recall} shows the distribution of the precision and recall at 250 for the ten folds. With an average precision of 63.67\% (and a maximum of 72.5\%), our approach produces results better than a random guess (50\% for a balanced testing sample). The recall is also high, with an average of 87.32\% (maximum of 99.45\%). To show how the precision and recall evolve as the risk score decreases, we selected a representative fold (Figure~\ref{fig:fold7}). \begin{table}[htp] \centering \begin{tabular}{|>{\raggedright}m{2cm}|*{6}{m{0.25in}|}} \hline \multirow{4}{*}{}&\multicolumn{3}{c|}{Max} & \multicolumn{3}{c|}{Noisy or} \\ \cline{2-7} & Iterator & http & crypto & Iterator & http & crypto \\ \hline Mean score good usages & 0.2808 & 0.1505 & 0.2104 & 0.9246 & 0.4127 & 0.3763 \\ \hline Mean score bad usages & 0.2865 & 0.984 & 0.1947 & 0.9483 & 0.5583 & 0.3841 \\ \hline Mean precision 10\% data (\%) & 53.78 & \textbf{70.0} & 50.0 & 52.70 & \textbf{66.67} & 45.0 \\ \hline Mean precision 30\% data (\%) & 50.18 & 58.89 & 57.14 & 54.74 & \textbf{66.67} & 54.29 \\ \hline \end{tabular} \caption{Results of the generation process without clustering} \label{tab:process0} \end{table} \begin{table}[htpb] \centering \begin{tabular}{|*{5}{c|}} \hline \multirow{3}{*}{}&\multicolumn{2}{c|}{Max} & \multicolumn{2}{c|}{Noisy or} \\ \cline{2-5} & http & crypto & http & crypto \\ \hline Mean score good usages & 0.1866 & 0.1841 & 0.4653 & 0.4227 \\ \hline Mean score bad usages & 0.2286 & 0.1520 & 0.6312 & 0.3830 \\ \hline Mean precision 10\% data (\%) & 53.33 & 40.0 & \textbf{60.0} &
40.0 \\ \hline Mean precision 30\% data (\%) & 57.78 & 48.57 & 57.78 & 51.43 \\ \hline \end{tabular} \caption{Results of the generation process with clustering and merging after evolution} \label{tab:process4} \end{table} \begin{table}[htpb] \centering \begin{tabular}{|*{5}{c|}} \hline \multirow{3}{*}{}&\multicolumn{2}{c|}{Max} & \multicolumn{2}{c|}{Noisy or} \\ \cline{2-5} & http & crypto & http & crypto \\ \hline Mean score good usages & 0.1641 & 0.2064 & 0.4515 & 0.4654 \\ \hline Mean score bad usages & 0.1947 & 0.2129 & 0.5303 & 0.5754 \\ \hline Mean precision 10\% data (\%) & 46.67 & 45.0 & \textbf{70.0} & 55.0 \\ \hline Mean precision 30\% data (\%) & 56.67 & 52.86 & 57.78 & 58.57 \\ \hline \end{tabular} \caption{Results of the generation process with clustering and merging before evolution} \label{tab:process5} \end{table} \begin{figure} \includegraphics[width=3.25in]{fig/MUBenchCrypto.png} \caption{Precision - Recall for javax.crypto clients on MUBench} \label{fig:MUBenchCrypto} \end{figure} \begin{figure} \includegraphics[width=3.25in]{fig/MUBenchIterator} \caption{Precision - Recall for java.util.Iterator clients on MUBench} \label{fig:MUBenchIterator} \end{figure} \section{Empirical Evaluation} \label{sec:experiments} The objective of this section is to evaluate the performance of our approach in detecting API misuses in practice and in comparison with existing
techniques. We formulated the research questions of our evaluation as follows: \begin{enumerate}[] \item [\textbf{RQ1.}] \textit{What is the impact of different detector generation strategies on the misuse detection accuracy?} \item [\textbf{RQ2.}] \textit{What is the execution cost of our approach in terms of required execution time and storage?} \item [\textbf{RQ3.}] \textit{How well does our approach perform compared to existing approaches?} \end{enumerate} For each experiment in this section, we present the research question to answer, the methodology used to address it, and the obtained results. \input{data} \input{accuracy} \input{efficiency} \input{comparison} \section{BIS-inspired API Misuse Detection Formulation} \label{section:formulation} In this section, we present our formulation of the API misuse detection problem as an adaptation of the BIS. \begin{Definition}[API Usage] {\em An API usage is a fragment of client code that involves the API classes and/or methods for a given library.} \end{Definition} The code fragment in Listing~\ref{example} is an example of usage of APIs from the JDK library, with classes such as \code{StringBuffer}, \code{BufferedReader}, and \code{FileReader}, and method calls such as \code{BufferedReader.readLine()}, \code{StringBuffer.append(...)}, and \code{StringBuffer.length()}. \begin{figure*}[h] \includegraphics[height=4.75in, width=6.25in]{fig/TestGroum1} \caption{Graph-based API Usage Representation for Listing~\ref{example}} \label{Groum1} \end{figure*} When a client method is considered an API usage, not all of its statements are relevant to this usage. To capture the specific usage, we extract a \emph{groum}{}, a graph-based representation for the API usage~\cite{Grouminer}.
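As a minimal illustration of this graph-based representation (a toy sketch, not our extraction pipeline; node names are simplified), the usage of Listing~\ref{example} can be abstracted as sets of call nodes and temporal edges:

```python
# Toy groum for the usage in the listing: nodes are API actions and edges
# are temporal usage orders (data-dependency edges are omitted here).
nodes = {
    "FileReader.<init>", "BufferedReader.<init>", "StringBuffer.<init>",
    "BufferedReader.readLine", "StringBuffer.append",
    "StringBuffer.length", "BufferedReader.close",
}
edges = {
    ("FileReader.<init>", "BufferedReader.<init>"),
    ("BufferedReader.<init>", "BufferedReader.readLine"),
    ("StringBuffer.<init>", "StringBuffer.append"),
    ("BufferedReader.readLine", "StringBuffer.append"),
    ("StringBuffer.append", "StringBuffer.length"),
    ("StringBuffer.append", "BufferedReader.close"),
}

def successors(node):
    """API actions that may follow `node` in the temporal order."""
    return {dst for (src, dst) in edges if src == node}

print(sorted(successors("StringBuffer.append")))
```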
\begin{Definition}[API Usage Graph] {\em An API Usage Graph (GROUM) is a directed graph representing an API usage in which nodes represent API object constructor calls, method calls, field accesses, and branching points of the control structures, and edges represent temporal usage orders and data dependencies among them.} \end{Definition} For example, the API usage of Listing~\ref{example} will lead to the \emph{groum}{} shown in Figure~\ref{Groum1}. As mentioned in Section~\ref{background}, in a BIS, the notion of Self refers to what is normal, in contrast to what is risky. In the API misuse detection problem, the Self is the correct use of the API. \begin{Definition}[API Usage as Self] {\em The self for an API is a set of API usage methods that are known to be correct.} \end{Definition} The Self for an API can be extracted from the set of client programs that correctly use the APIs. Identifying correct usages can be done manually based on historical data ({e.g.,}\xspace issue tracking systems). In our experiments (see Section~\ref{sec:experiments}), we used an automated approach to extract the Self. We included in this set versions of methods using the API that were fixed after a bug was declared in them. We retain a fixed version if its \emph{groum}{} differs from that of the corresponding buggy version. Another important concept in BIS is the mutation of T-cells to detect anomalous cells. Roughly speaking, the immune system generates T-cells from the stem cells and keeps only those that do not match body cells (negative selection). When transposing the principle of negative selection to the detection of API misuses, it is necessary to generate detectors (equivalent to T-cells) by randomly mutating the Self API usages. As the goal is to generate a limited number of detectors with a maximum coverage of the deviations from the Self, the generation process can be viewed as a {\em multi-objective optimization} problem.
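The negative-selection loop just outlined can be sketched as follows. This is a deliberate simplification (usages reduced to sets of call-order edges, only two mutation operators, and all names such as \code{mutate\_usage} hypothetical), not the actual \textsc{APImmune}\xspace{} algorithm:

```python
import random

# A usage is abstracted as a frozenset of (earlier, later) call-order edges,
# a drastic simplification of a full groum, for illustration only.
SELF = {
    frozenset({("FileReader.<init>", "BufferedReader.<init>"),
               ("BufferedReader.<init>", "readLine"),
               ("readLine", "close")}),
}

API_CALLS = ["FileReader.<init>", "BufferedReader.<init>",
             "readLine", "append", "close"]

def mutate_usage(usage, rng):
    """Apply one random mutation operator: remove or add an edge."""
    edges = set(usage)
    if edges and rng.random() < 0.5:
        edges.remove(rng.choice(sorted(edges)))                    # remove an edge
    else:
        edges.add((rng.choice(API_CALLS), rng.choice(API_CALLS)))  # add an edge
    return frozenset(edges)

def generate_detectors(self_set, n_detectors, rng):
    """Negative selection: keep only mutants that match no Self usage."""
    detectors = set()
    seeds = list(self_set)
    while len(detectors) < n_detectors:
        mutant = mutate_usage(rng.choice(seeds), rng)
        if mutant not in self_set:    # discard anything matching the Self
            detectors.add(mutant)     # set semantics also avoids duplicates
    return detectors

rng = random.Random(0)
detectors = generate_detectors(SELF, 10, rng)
print(len(detectors))
```

The two competing objectives of the real generation process (deviating from the Self while staying mutually diverse) are reduced here to the deduplication performed by the set.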
If we fix the number of detectors for performance considerations, the goal is, therefore, to find the set of detectors that deviate from the normal use situations in the Self, but are also as different as possible from one another to avoid duplications. \begin{Definition}[Detector Generation] {\em The generation process aims to produce a fixed number of detectors that deviate from the normal API usages in correct client programs, and that are as different as possible from one another.} \end{Definition} \begin{Definition}[A Mutation] {\em A mutation of an API usage $u$ is the \emph{groum}{} of $u$ after the application of a mutation operation.} \end{Definition} \begin{Definition}[Mutation Operators] {\em The mutation operators are: adding an edge to or removing an edge from the \emph{groum}{}, adding a node (a random API method call), removing a node, replacing a node (replacing a method call with another), moving (changing the position of) a node in the \emph{groum}{}, and adding, removing and moving an exception.} \end{Definition} For example, a mutation of the correct usage from Figure~\ref{Groum1} is the graph without the node \code{BufferedReader.close} and the edges incident to that node. The next important notion to instantiate in our formulation is the affinity computation, which assesses the similarity between a T-cell (detector) and an encountered cell (an evaluated client method) to determine whether the cell is part of the Self or not. \begin{Definition}[Affinity of APIs] {\em In the API misuse detection context, the similarity between a detector graph and an API usage in a client program is measured by the graph edit distance between the respective groums.} \end{Definition} The basic mapping between a BIS and our detection problem is not enough to fully tackle the complexity of the API misuse problem. In this section, we present the additional features that complement the BIS-inspired detection.
The first feature is the clustering of the API usages before the generation of the detectors. Many APIs can be used in different ways by different clients. To ensure that all these usage variations are addressed, we apply a clustering process to the API usages in the Self to group similar API usages in clusters. Each cluster corresponds to a usage scenario. The generation process then takes into account the representativeness of the generated detectors with respect to the obtained clusters. The second feature is to replace the boolean detection results (Self/non-Self) with a risk score that allows ranking the evaluated methods according to the estimated risk. This helps client-program developers dedicate their available effort to the methods with the highest risk scores. \section{Introduction} Nowadays, software libraries and their Application Programming Interfaces (APIs) are essential ingredients for the development of complex software systems \cite{vayghan2018deploying, almarimi2019web, vayghan2019kubernetes, vayghan2019microservice, saidani2019towards}. They are supposed to provide tested and proven reusable functionality at a low cost \cite{shatnawi2018identifying}. Yet, using APIs is not always easy due to their complexity and often incomplete documentation \cite{saied2015observational,benomar2015detection,shatnawi2018identifying2}. Developers may misuse them, which results in faults that are difficult to debug. Preventing API misuses is not always possible due to their complexity and the many different ways they can be used. An alternative approach is to detect API misuses. The existing detection approaches for API misuses can be broadly classified into two categories: explicit and implicit detection. Several approaches fall into the explicit detection category, in which the specifications of correct API usages are determined either by the API designers or developers.
The specifications are then used in the detection process, where a detector assesses whether a given usage is valid with respect to the API specifications. However, not all libraries are well-documented for correct usages, because the number of correct usages can be large. Manually writing and maintaining specifications over time is then challenging~\cite{dagenais-icse12,dagenais-fse10}. To avoid the need to write specifications or explicitly enumerate all the potential usages, several approaches follow implicit detection, in which the correct API usages are not specified before the detection. These approaches rely on the {\em consensus principle}, in which an instance of an API usage is considered a misuse if it deviates far from the frequently used ones (often called API usage {\em patterns}) \cite{uddin2012temporal,saied2015mining, saied2015could,zhong2009mapo, saied2015visualization, saied2016cooperative, saied2015could2, saied2016automated}. While several researchers have been proposing a large number of mining techniques for API misuse detection, API misuses remain a problem in practice~\cite{LHXRM16,ABFKMS16}. These studies show that existing API misuse detectors suffer from high false-positive rates due to an important issue with the use of thresholds on frequent usages (patterns) and on the deviations from those patterns. Specifically, the detectors often fail to detect misuses when they cannot relate misuses to the respective patterns because the differences between them exceed pre-defined thresholds. To address the challenges of distinguishing correct usages from misuses without spending effort on writing specifications, we explore an idea from biology. A similar detection phenomenon exists in the biological immune system (BIS).
The goal of this detection is to decide if an element is a normal body cell (\emph{self}) or a threat that can be an antigen, e.g., bacteria, viruses, and parasites, or a malfunctioning cell, e.g., cancerous cells (\emph{non-self}). The detection relies on, among others, T-cells, which the BIS creates randomly and keeps only if they do not match normal body cells. The number of self-elements (body cells) is very high, and the number of non-self elements is potentially infinite. Still, except for very few cases, the detection with the T-cells works very well. A key benefit of this detection mechanism in the BIS is to minimize the false-positive rates, i.e., to minimize the mis-identification of normal cells as non-selfs. In this work, we explore that BIS-inspired idea to build {\textsc{APImmune}\xspace}, a novel API misuse detector. The normal API usages are considered as the normal body cells (self). We collect normal usages of the given APIs from the set of client programs correctly using the APIs. For those API usages, {\textsc{APImmune}\xspace} extracts features to be used as usage signatures. As in the BIS, artificial detectors (the equivalent of T-cells) are randomly generated with the objective of being different from the normal usage signatures. The detection is performed when {\textsc{APImmune}\xspace} is used to detect API misuses in a given client program using the APIs. The API usages in the client program are extracted and compared against the detectors. Matches represent misuse risks. With this BIS-inspired mechanism, {\textsc{APImmune}\xspace} avoids establishing the thresholds and frequency cut-offs of the consensus-based approaches, and avoids the manual writing of API specifications of the explicit approaches. Importantly, because {\textsc{APImmune}\xspace} creates the detectors by mutating the normal usages, we expect it to improve false-positive rates over the existing approaches.
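To make the detection step concrete, the following sketch reduces groums to node/edge sets, approximates the graph edit distance by symmetric set differences, and aggregates the per-detector matches into a risk score either by a maximum or by a noisy-or combination (the two aggregations compared in our experiments). All names and the score mapping are illustrative, not the actual \textsc{APImmune}\xspace{} implementation:

```python
def groum_distance(g1, g2):
    """Approximate graph edit distance between two groums, each reduced to
    (nodes, edges): count node and edge insertions/deletions (a crude
    stand-in for a full graph edit distance with substitutions)."""
    n1, e1 = g1
    n2, e2 = g2
    return len(n1 ^ n2) + len(e1 ^ e2)

def match_score(detector, usage):
    """Detector 'affinity' mapped into [0, 1]: 1 for identical graphs,
    decaying with edit distance."""
    return 1.0 / (1.0 + groum_distance(detector, usage))

def risk_max(usage, detectors):
    """Risk of a client method = strongest single detector match."""
    return max(match_score(d, usage) for d in detectors)

def risk_noisy_or(usage, detectors):
    """Noisy-or: chance that at least one detector 'fires', treating the
    per-detector match scores as independent probabilities."""
    prod = 1.0
    for d in detectors:
        prod *= 1.0 - match_score(d, usage)
    return 1.0 - prod

# Toy data: one detector encodes "readLine without close".
detector = ({"readLine"}, frozenset())
good_usage = ({"readLine", "close"}, frozenset({("readLine", "close")}))
bad_usage = ({"readLine"}, frozenset())

detectors = [detector]
print(risk_max(bad_usage, detectors))   # 1.0: exact match with a detector
print(risk_max(good_usage, detectors))  # lower score for the correct usage
```

The resulting scores are then used to rank the evaluated client methods from most to least risky.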
Moreover, once the detectors are generated, they can be used and enhanced without disclosing the clients' code that served for the generation. To evaluate the viability of our approach, we performed a preliminary study with three APIs. Our results show that {\textsc{APImmune}\xspace} has good detection accuracy and performance, and it can complement pattern-based tools for uncommon misuses. The rest of the paper is organized as follows. In Sections~\ref{background} and~\ref{section:formulation}, we introduce the principles of the artificial immune system algorithm (AIS) and discuss the parallel between the immune system and the detection of API misuses. The different steps of our approach are described in Section~\ref{sec:approach}. Section~\ref{sec:experiments} presents the results of the preliminary evaluation, and Section~\ref{sec:validity} provides a discussion. Section~\ref{sec:related} presents the closest related work and the novelty of our approach with respect to it. Finally, we conclude and suggest future work in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related} In this section, we focus on presenting the related work on API misuse detection approaches that use the consensus principle. Those approaches typically follow two steps: mining the (good) usage patterns from a given code corpus and considering uses deviating from the patterns as misuses. Amann {\em et al.} \cite{MuBenchPipe} have recently conceptually compared the capabilities of existing API-misuse detection approaches, and conducted several empirical studies to evaluate and compare four well-known detectors using a benchmark infrastructure called MUBench~\cite{MuBench}. The authors reported that the existing API misuse detectors suffer from high false-positive rates due to an important issue with the use of thresholds on frequent usages (patterns) and on the deviations from those patterns.
JADET is a misuse detector for Java~\cite{WZL07} that focuses on API call order and call receivers in usages. It derives a pair of calls for each call order. The sets of these pairs are the inputs to the mining that identifies API patterns, in terms of {\em sets of pairs of API calls}. JADET is capable of detecting missing method calls and missing loops, the latter as a missing call-order relation from a method call in the loop header to itself. Tikanga~\cite{WZ11} builds on JADET and extends it with general Computation Tree Logic formulae on object usages, which it verifies by model checking. Tikanga applies Formal Concept Analysis~\cite{GW99} to mine patterns and misuses at the same time. Tapir \cite{saied2020towards, saied2018towards} considers the recovery of temporal API usage patterns as an optimization problem and solves it using a genetic-programming algorithm. API temporal constraints are mined from execution traces of client programs using the API, and Linear Temporal Logic (LTL) formulas representing candidate usage patterns are recovered. Tapir alerts the user to a potential API misuse when these LTL formulas are violated. Nguyen {\em et al.}~\cite{Grouminer} propose a graph-based object usage model (groum) for their misuse detector in GrouMiner. It uses sub-graph mining to mine patterns as frequent usage sub-graphs. GrouMiner can detect misuses with missing API elements compared to patterns. In our work, we do not abstract or learn usage patterns from historical data to use them for the detection. The novelty of our approach is in generating new artificial data, i.e., detectors, that diverge from the historical data, following the negative selection principle. By doing this, we create new artifacts (detectors) that can be shared among development groups without disclosing the clients' code. DMMC~\cite{MBM10,MM13} aims to detect misuses in API method calls with exactly one missing call.
It focuses on type usages, i.e., sets of methods called on a given receiver type in a given method. The assumption is that violations should have only a few exactly similar usages, but many near-similar ones. Alattin~\cite{TX09b} mines alternative patterns for condition checking. It applies frequent-itemset mining on the set of rules on pre- and post-condition checks on the receiver, the arguments, and the return value of a method call. It detects missing \code{null}-checks and missing value or state conditions that are ensured by checks. CAR-Miner~\cite{TX09} aims to detect API misuses in error handling. To detect a misuse, it extracts the normal call sequence and the exception call sequence for a method call. It learns association rules, then determines the expected exception handling, and reports a violation if the actual sequence does not include it. CAR-Miner can detect missing exception handling as well as missing method calls among error-handling functions. AX09~\cite{AX09} also detects incorrect error handling. It uses model checking to generate error handling paths as sequences of method calls and applies frequent-subsequence mining to detect patterns. It then uses push-down model checking to verify consistency with these patterns and identify the respective misuses. PR-Miner~\cite{LZ05} encodes API usages as the set of all function names in a C function, and uses frequent-itemset mining to detect patterns. Misuses are the subsets that occurred at least 10 times less frequently than the pattern. It focuses on detecting missing method calls. Chronicler~\cite{RGJ07b} aims to detect patterns as frequent call orders in inter-procedural control-flow graphs. Call orders that hold in at least 80\% of all execution paths are patterns; otherwise, they are violations. Chronicler detects missing method calls. Since loops are unrolled exactly once, it cannot detect missing iterations. DroidAssist~\cite{NPVN16} detects misuses in Java bytecode.
It learns the call sequences to build a Hidden Markov Model. If the likelihood of a given call sequence is too small, the sequence is considered a misuse. MUDETECT \cite{rendemystify} increases the level of detail found in identified patterns, in order to increase the accuracy and recall of API misuse detection. MUDETECT uses a new graph representation of API usages that captures different types of API misuses, and it devises a systematically designed ranking strategy that effectively improves precision. More recently, Ren et al. \cite{sven2019investigating} devised a text mining technique that extracts Android API misuses and patches from StackOverflow answers. The method produces a natural language report from these code fragments, explaining how to use the API. \begin{table*}[t] \centering \caption{Comparison of different misuse detection tools} \begin{tabular}{|l|l|l|l|} \hline Tools & Mono API detection & Input to learning step & Type of detected misuse \\ \hline Alattin~\cite{TX09b} &No & Code examples extracted & Missing null checks \\ & & using code search engines & Missing condition value or state \\ \hline AX09~\cite{AX09} &No & Client systems source code & Incorrect error handling \\ \hline CAR-Miner~\cite{TX09} &No & Code examples extracted & Incorrect error handling \\ & & using code search engines & \\ \hline Chronicler~\cite{RGJ07b} &No & Client systems source code & Missing method call \\ \hline DMMC~\cite{MBM10,MM13} &No & Java bytecode of client systems & Missing method call \\ \hline DroidAssist~\cite{NPVN16} &Yes & Java bytecode of client systems & Missing method call \\ \hline Jadet~\cite{WZL07} &No & Client systems source code & Missing method calls \\ &&& Missing loops\\ \hline PR-Miner~\cite{LZ05} &Yes & Client systems source code & Missing method calls \\ \hline Tikanga~\cite{WZ11} &No & Client systems source code & Missing condition value or state \\ \hline GrouMiner~\cite{Grouminer} &No & Client systems source code & Missing API elements \\
\hline \end{tabular} \label{tab:related} \end{table*} As we can see in Table~\ref{tab:related}, in the majority of cases, the misuse detection is not specific to a single API, and it thus fails to identify the different types of misuses related to that API. Moreover, the learning step depends on the considered client systems, which generally focus on specific usages and do not take into account the diversity of API utilization, especially rare usages.
\section{Introduction} Synchronization of oscillators, especially synchronization in complex networks \cite{Arenas2008}, has been recognized as one of the important phenomena in nature. Among the different models of oscillator dynamics, the Kuramoto model \cite{Kuramoto1987}, and its various generalizations \cite{Rodrigues2016}, are some of the most popular models. Within this class of generalized Kuramoto models, second-order oscillator models, that is, oscillators with inertias, have been used for describing the dynamics of fireflies \cite{Ermentrout1991}, Josephson junction arrays \cite{Levi1978, Watanabe1994, Trees2005}, goods markets \cite{Ikeda2012}, dendritic neurons \cite{Sakyte2011}, and power grids \cite{Filatrella2008}. Due to the effect of inertias, phenomena such as hysteresis, bi-stability and abrupt transitions are found for these second-order oscillators \cite{Tanaka1997, Tanaka1997a, Gao2017}. In \cite{Tanaka1997, Gao2017} the changes from continuous to abrupt phase transition for second-order oscillators have been studied in detail using the self-consistent method. As a natural generalization, oscillators with both inertias and phase shifts, namely the second-order Kuramoto-Sakaguchi model, are considered in \cite{Barre2016}. It has been found that due to the effect of inertias the synchronization transition of oscillators can be changed from continuous to abrupt and vice versa. In this paper, we generalize the self-consistent method presented in \cite{Gao2017} to the second-order Kuramoto-Sakaguchi model. We find that the inertias introduce effective phase shifts and that the type of synchronization transition is affected by the mixture of these inertia-induced phase shifts and the ones built into the model. Moreover, we find a new type of synchronization process with increasing coupling strength. 
In this process, oscillators converge to an oscillating state by forming several synchronization clusters, which cannot be further synchronized by increasing the coupling strength. This process is quite different from the common belief that, with sufficiently large coupling strengths, coupled Kuramoto-like oscillators are typically synchronized to a highly coherent steady state, except for some specific choices of parameters, such as a phase shift of $\pm\pi/2$. Through the self-consistent method and a dynamical analysis of the synchronized clusters, we show that this process is due to the cross-effect of inertias and phase shifts, and is not limited to the case of all-to-all connected oscillators. Through numerical simulations, this new type of synchronization process is also found in oscillators connected in complex networks. Our paper is organized as follows. In Section \ref{section_one}, we generalize the self-consistent method to oscillators with inertias and phase shifts. The mixture of effective (inertia-induced) and intrinsic phase shifts is associated with the change of properties of the synchronization transition. Using the self-consistent method, in Section \ref{section_two}, we identify the new synchronization process toward oscillating states and study it through the self-consistent method and dynamical analysis. Using numerical simulations, this process is also observed for oscillators on complex networks. We conclude this paper in Section \ref{section_four}. \section{Effective Phase Shifts}\label{section_one} To focus on the effect of phase shifts, we assume that all the oscillators have the same inertia $m$ and damping constant $D$. The dynamics of the second-order Kuramoto-Sakaguchi model reads \begin{equation}\label{eq_dynamics_two} m\ddot{\varphi}_i+D\dot{\varphi}_i=\Omega_i+\frac{K}{N}\sum_{j=1}^{N}\sin(\varphi_j-\varphi_i-\alpha), \ i=1,2,\dots,N \end{equation} where $N$ is the number of oscillators and $K$ is the uniform coupling strength.
Each oscillator is described by its phase $\varphi_i\in\mathbb{S}$ with $\Omega_i$ as its natural frequency. The intrinsic phase shift $\alpha$ is added in the coupling term $\sin(\varphi_j-\varphi_i-\alpha)$. The standard second-order model corresponds to $\alpha = 0$. Following the work by Kuramoto \cite{Kuramoto1987}, we define the order parameter \begin{equation}\label{eq_order_definition} re^{i\phi}=\frac{1}{N}\sum_{j=1}^{N}e^{i\varphi_j}, \end{equation} where $r$ and $\phi$ represent the coherence and mean phase of the oscillators. If all the oscillators run independently, their phases will be distributed almost uniformly along the unit circle. As a result, we have $r\approx 0$, and the state is called incoherence. On the other hand, if all of the oscillators are synchronized and have the same phase $\varphi_i(t)\equiv\varphi(t)$, we have $r=1$. This is called the complete synchronization state of the system. Using $r$ and $\phi$, the model \eqref{eq_dynamics_two} can also be rewritten in a mean-field form as \begin{equation}\label{eq_dynamics_three} m\ddot{\varphi}+D\dot{\varphi}=\Omega + Kr(t)\sin(\phi(t)-\varphi-\alpha), \end{equation} where the subscripts have been dropped. In Eq.~\eqref{eq_dynamics_three} each oscillator interacts with the other oscillators only through the mean-field terms $r$ and $\phi$. Therefore, the dynamics of the system can be obtained through the analysis of a single oscillator with a presupposed mean-field. For simplicity, in this paper we assume an infinite number of oscillators $N\rightarrow\infty$, and that the distribution of natural frequencies of the oscillators $g_\Omega(\Omega)$ is symmetric, $g_\Omega(\Omega)=g_\Omega(-\Omega)$, and unimodal. The essential states of the system are the steady states defined as \begin{equation}\label{def_steady_eq} r(t) = r, \ \ \phi(t)=\Omega^r t +\Psi, \end{equation} where the order parameter $r(t)$ is independent of time, and the phase $\phi(t)$ has a constant rotation velocity.
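As a numerical illustration of the relaxation toward such a steady state, Eq.~\eqref{eq_dynamics_two} can be integrated directly; the following pure-Python Euler sketch uses identical natural frequencies, zero initial velocities, and arbitrary illustrative parameter values:

```python
import math, random

def step(phi, v, omega, m, D, K, alpha, dt):
    """One Euler step of m*phi_i'' + D*phi_i' = Omega_i + (K/N) sum_j sin(phi_j - phi_i - alpha)."""
    N = len(phi)
    acc = [(omega[i] - D * v[i]
            + K / N * sum(math.sin(phi[j] - phi[i] - alpha) for j in range(N))) / m
           for i in range(N)]
    return ([p + dt * vi for p, vi in zip(phi, v)],
            [vi + dt * a for vi, a in zip(v, acc)])

def coherence(phi):
    """r from the order parameter definition r*e^{i*phi} = mean of e^{i*varphi_j}."""
    re = sum(math.cos(p) for p in phi) / len(phi)
    im = sum(math.sin(p) for p in phi) / len(phi)
    return math.hypot(re, im)

rng = random.Random(0)
N, m, D, K, alpha, dt = 10, 1.0, 1.0, 5.0, 0.3, 0.02
phi = [rng.uniform(-1.0, 1.0) for _ in range(N)]   # phases within a half circle
v = [0.0] * N
omega = [0.0] * N                                  # identical natural frequencies

for _ in range(5000):
    phi, v = step(phi, v, omega, m, D, K, alpha, dt)

print(round(coherence(phi), 2))  # r is close to 1 once the oscillators synchronize
```

Note that the synchronized group still drifts with a constant velocity set by the phase shift, consistent with the rotating steady states of Eq.~\eqref{def_steady_eq}.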
Without loss of generality, we set $\Psi\equiv0$. Define the phases $\theta$ of each oscillator in a rotating coordinate frame through the transformation $\theta = \varphi -\phi(t)$. Substitution of Eq.~\eqref{def_steady_eq} into Eq.~\eqref{eq_dynamics_three} yields \begin{equation}\label{eq_phase_differece} m\ddot{\theta}+D\dot{\theta}=(\Omega-D\Omega^r)-Kr\sin(\theta+\alpha).\ \ \end{equation} For $\alpha=0$, Eq.~\eqref{eq_phase_differece} is exactly the same as the one for a single second-order oscillator without intrinsic phase shift \cite{Gao2017}. Following \cite{Gao2017}, Eq.~\eqref{eq_phase_differece} can be rewritten in the standard form as \begin{equation}\label{eq_single_dynamic} \ddot{\theta}+a\dot{\theta}=b-\sin(\theta+\alpha), \end{equation} with rescaled time $\tau=t/\sqrt{m/Kr}$ and \begin{equation}\label{def_transformation} a=\frac{D}{\sqrt{Krm} }, \ \ b=\frac{\Omega-D\Omega^r}{Kr}. \end{equation} Because of its dependence on $\Omega$, the parameter $b$ follows the distribution $g_b(b)=Krg_\Omega(Krb+D\Omega^r)$. It is known from the earlier studies \cite{Tanaka1997,Tanaka1997a,Gao2017} that the system Eq.~\eqref{eq_single_dynamic} has two stable states, one fixed point and one limit cycle \cite{Strogatz2014,Tanaka1997a}. The rotation frequency of oscillators is defined as $\omega=\dot{\theta}$. Taking $a>0$, the stable fixed point reads \begin{equation} \label{estimation of fixed point} \theta_0=\arcsin(b)-\alpha,\ \ \omega_0=0, \end{equation} with the existence condition $b\leq b_L(a)=1$. 
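The locked state Eq.~\eqref{estimation of fixed point} is easy to verify by integrating Eq.~\eqref{eq_single_dynamic} directly; in this sketch $a > 1.193$ so that for $b < 1$ the fixed point is the only attractor (step size and parameter values are illustrative):

```python
import math

def integrate(a, b, alpha, theta=0.0, omega=0.0, dt=0.005, steps=40000):
    """Euler integration of theta'' + a*theta' = b - sin(theta + alpha)."""
    for _ in range(steps):
        domega = b - math.sin(theta + alpha) - a * omega
        omega += dt * domega
        theta += dt * omega
    return theta, omega

a, b, alpha = 1.5, 0.5, 0.3
theta, omega = integrate(a, b, alpha)
print(round(theta, 4), round(omega, 6))   # settles at the locked state
print(round(math.asin(b) - alpha, 4))     # theta_0 = arcsin(b) - alpha
```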
On the other hand, for the limit cycle, using the same estimation method as in \cite{Gao2017} we have the approximate expression $\dot\theta(\tau)$ for the limit cycle, given by \begin{equation}\label{limit cycle} \dot\theta(\tau) = \frac{b}{a} - \frac{1}{a}\sigma \sin(\theta(\tau)+\Delta+\alpha), \end{equation} where the coupling factor $\sigma$ and phase shift term $\Delta$ are \begin{equation}\label{def_transformation2a} \sigma=\frac{a^2}{\sqrt{b^2+a^4}},\ \ \Delta=\arcsin \left(\frac{-b}{\sqrt{b^2+a^4}} \right). \end{equation} The existence condition of the limit cycle can be calculated through Melnikov's method \cite{Guckenheimer2013} or Lyapunov’s direct method \cite{Risken1996fokker} and numerical simulations \cite{Gao2017} as \begin{equation}\label{bs} b\geq b_S= \begin{cases} (4/\pi)\,a-0.305a^3, \ \ &a\leq 1.193,\\ 1, \ \ &a>1.193. \end{cases} \end{equation} Eq.~\eqref{limit cycle} shows that running oscillators have the same dynamics as Kuramoto-Sakaguchi oscillators with coupling factor $\sigma$ and \emph{effective phase shift} $\alpha+\Delta$ as the combination of intrinsic phase shift $\alpha$ and inertia-induced phase shift $\Delta\in(-\pi/2,\pi/2)$. Even a small inertia value can introduce the mixture effect of phase shifts $\alpha$ and $\Delta$. As a result, several non-trivial transitions of Kuramoto-Sakaguchi oscillators that depend on the specific choice of phase shifts will be undermined by inertias. These include the non-universal transition processes in \cite{Omel2012,Omel2013}, shown in Fig.~\ref{fig_three}(a-b), and the discontinuous transition Fig.~\ref{fig_three}(c-d). Note that with the introduction of inertias, the transition processes are not always changed from continuous to abrupt. The opposite also happens when there are phase shifts, as pointed out in \cite{Barre2016} using the stability analysis around the critical point. 
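The rescaled quantities of Eqs.~\eqref{def_transformation}, \eqref{def_transformation2a} and \eqref{bs} are straightforward to evaluate numerically; a small sketch with arbitrary parameter values:

```python
import math

def rescaled_params(D, K, r, m, Omega, Omega_r):
    """a = D / sqrt(K*r*m) and b = (Omega - D*Omega^r) / (K*r)."""
    a = D / math.sqrt(K * r * m)
    b = (Omega - D * Omega_r) / (K * r)
    return a, b

def sigma_delta(a, b):
    """Coupling factor sigma and inertia-induced phase shift Delta."""
    denom = math.sqrt(b**2 + a**4)
    return a**2 / denom, math.asin(-b / denom)

def b_S(a):
    """Lower existence boundary of the limit cycle."""
    return (4 / math.pi) * a - 0.305 * a**3 if a <= 1.193 else 1.0

a, b = rescaled_params(D=1.0, K=10.0, r=0.5, m=0.2, Omega=3.0, Omega_r=0.0)
sigma, delta = sigma_delta(a, b)
print(round(a, 3), round(b, 3))           # a = 1.0, b = 0.6
print(round(sigma, 3), round(delta, 3))   # sigma < 1, Delta < 0 for b > 0
print(round(b_S(a), 3))
```

Here $b < b_S(a)$ while $b < b_L(a) = 1$, so at this operating point only the locked state exists.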
\begin{figure} \centering \includegraphics[width=0.9\textwidth]{nf1} \caption{Synchronization for oscillators with a double Gaussian distribution $g_\Omega(\Omega)=0.6\times\frac{1}{\sqrt{2\pi}}e^{-\Omega^2/2}+0.4\times\frac{1}{\sqrt{2\pi}\times0.1}e^{-\Omega^2/(2\times0.1^2)}$ with $\alpha=1.07$ and different inertias $m=0$ (a), $m=0.1$ (b); or a double Lorentz distribution $g_\Omega(\Omega)=0.8\times\frac{1}{\pi}\frac{1}{\Omega^2+1}+0.2\times\frac{1}{\pi}\frac{0.075}{\Omega^2+0.075^2}$ with $\alpha=0.8$ and different inertias, $m=0$ (c), $m=0.1$(d). The solid (dashed) lines are solutions of self-consistent equations, and correspond to stable (unstable) steady states. Circles are from the numerical simulations of $10000$ oscillators with proper initial states.}\label{fig_three} \end{figure} When the inertia is not zero, there is a bistable parameter region, where the system has both a fixed point and a limit cycle, given by $b_L\geq b \geq b_S$. Each oscillator is either \emph{locked} at the fixed point or \emph{running} along the limit cycle. Taking $N\rightarrow\infty$, the order parameter defined in Eq.~\eqref{eq_order_definition} can be rewritten as \begin{equation}\label{Eq.model.3} r=\int_{\mathbb R} \int_{\mathbb S} \int_{\mathbb R} e^{i\theta(t)} p(\Omega,\theta_0,\omega_0) d\omega_0 d\theta_0 d\Omega, \end{equation} where $p(\Omega,\theta_0,\omega_0)$ represents the distribution of initial conditions and natural frequencies, and the dynamics $\theta(t)$ for each oscillator depends on its initial conditions and $\Omega$. Note that $\int_{\mathbb S}\int_{\mathbb R} p(\Omega,\theta_0,\omega_0) d\theta_0 d\omega_0 = g_\Omega(\Omega)$. If we know the ratio of locked and running oscillators in the system then the last expression can be simplified. 
Substituting the solutions for locked and running oscillators, Eq.~\eqref{estimation of fixed point} and Eq.~\eqref{limit cycle}, into Eq.~\eqref{Eq.model.3}, together with their existence conditions, we obtain the self-consistent equations \begin{subequations}\label{eq_self_consistent} \begin{align} r&=\int_{\mathbb R} g_\Omega(\Omega)\rho_{l}(a,b)\left(\sqrt{1-b^2}\cos\alpha-b\sin\alpha\right) \notag \\ &-g_\Omega(\Omega)\rho_{r}(a,b)\left(\frac{b}{\sigma}+\sqrt{\frac{b^2}{\sigma^2}-1}\right)\sin(\Delta+\alpha)d\Omega,\\ 0&=\int_{\mathbb R} g_\Omega(\Omega)\rho_{l}(a,b)\left(b\cos\alpha-\sqrt{1-b^2}\sin\alpha\right) \notag \\ &+g_\Omega(\Omega)\rho_{r}(a,b)\left(\frac{b}{\sigma}-\sqrt{\frac{b^2}{\sigma^2}-1}\right)\cos(\Delta+\alpha)d\Omega. \end{align} \end{subequations} The fraction functions $\rho_{l}(a,b)$ and $\rho_{r}(a,b)$ are the fractions of locked and running oscillators respectively, satisfying the normalization condition $\rho_{l}(a,b)+\rho_{r}(a,b)=1$ and the bounds $\mathbf{1}_S(a,b)\leq\rho_{l}(a,b)\leq\mathbf{1}_L(a,b)$. The indicator function $\mathbf{1}_{S}$ (respectively $\mathbf{1}_{L}$) takes the value $1$ if $|b| < b_S(a)$ (respectively $|b| < b_L(a)$) and $0$ otherwise. The most commonly used fraction functions are the two indicator functions themselves: for $\rho_{l}(a,b)=\mathbf{1}_S(a,b)$ all the oscillators are in the limit-cycle state whenever that state exists, while for $\rho_{l}(a,b)=\mathbf{1}_L(a,b)$ they are all in the fixed-point state whenever that state exists. These two choices correspond to the so-called forward and backward processes. In the forward process, the initial state at small coupling strength is the incoherent state, and the coupling strength is then progressively increased. In the backward process, the initial state at large coupling strength is the synchronized state, and the coupling strength is then progressively decreased.
For second-order oscillators, these two processes in general do not coincide with each other, a phenomenon known as \emph{hysteresis} \cite{Tanaka1997,Gao2017}. It is easy to verify that for $\alpha=0$, Eq.~\eqref{eq_self_consistent} reduces to the self-consistent equations for second-order oscillators without phase shifts in \cite{Gao2017}, using the approximation $b/\sigma-\sqrt{b^2/\sigma^2-1}\approx \sigma/(2b)$, which is valid for small $\sigma$. On the other hand, in the limit $m\rightarrow0$, we have $b_{S,L}(a)\rightarrow1, \sigma\rightarrow1, \Delta\rightarrow 0$. The self-consistent equations \eqref{eq_self_consistent} in this case are the same as the ones obtained for Kuramoto-Sakaguchi models in \cite{Omel2012,Omel2013}. Following \cite{Omel2013,Gao2017}, by defining $q=Kr$ and correspondingly $a=D/\sqrt{qm}$ and $g_b(b)=qg_\Omega(qb+D\Omega^r)$, the self-consistent equations \eqref{eq_self_consistent} can be rewritten as \begin{subequations}\label{eq_self_consistent3} \begin{align} &\begin{aligned} \frac{\cos\alpha}{K}=&F_1(q,\Omega^r)\equiv \int_{-\infty}^{\infty}g_\Omega(qb+D\Omega^r)\\ &\left[\rho_l\sqrt{1-b^2}+\rho_r\left(\frac{b}{\sigma}-\sqrt{\frac{b^2}{\sigma^2}-1}\right)\sin\Delta\right]db, \end{aligned}\\ &\begin{aligned} \frac{\sin\alpha}{K}=&F_2(q,\Omega^r)\equiv \int_{-\infty}^{\infty}g_\Omega(qb+D\Omega^r)\\ &\left[\rho_lb+\rho_r\left(\frac{b}{\sigma}-\sqrt{\frac{b^2}{\sigma^2}-1}\right)\cos\Delta\right]db, \end{aligned}\label{eq_self_consistent3b} \end{align} \end{subequations} Eq.~\eqref{eq_self_consistent3} defines a map from $(q,\Omega^r)$ to $(\alpha, K)$. The solutions of the self-consistent equations can thus be denoted by quadruples $(q,\Omega^r,K,\alpha)$, corresponding to points on the graph of this map. From a quadruple $(q,\Omega^r,K,\alpha)$, it is straightforward to obtain the solutions for the order parameter as the triplets $(K,\alpha,r)$ and $(K,\alpha,\Omega^r)$.
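The map $(q,\Omega^r)\mapsto(\alpha,K)$ can be evaluated numerically. The sketch below is our own: the values of $D$ and $m$, the Gaussian $g_\Omega$, and the use of the small-$\sigma$ approximation $b/\sigma-\sqrt{b^2/\sigma^2-1}\approx\sigma/(2b)$ for the running branch are all assumptions. It computes $F_1$ and $F_2$ on the backward branch $\rho_l=\mathbf{1}_L$ and inverts for $K$ and $\alpha$:

```python
import numpy as np

D, m = 1.0, 0.1
g = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # assumed Gaussian g_Omega

def F12(q, Omega_r):
    a = D / np.sqrt(q * m)
    b = np.linspace(-30.0, 30.0, 200001)
    db = b[1] - b[0]
    gb = g(q * b + D * Omega_r)
    locked = np.abs(b) < 1.0                    # rho_l = 1_L (backward process)
    s = a**2 / np.sqrt(b**2 + a**4)             # sigma(a, b)
    sinD = -b / np.sqrt(b**2 + a**4)            # sin(Delta); cos(Delta) = s
    bsafe = np.where(locked, 1.0, b)            # avoid division by zero at b = 0
    f1 = np.where(locked, np.sqrt(np.clip(1 - b**2, 0, None)),
                  (s / (2 * bsafe)) * sinD)     # small-sigma running approximation
    f2 = np.where(locked, b, (s / (2 * bsafe)) * s)
    return np.sum(gb * f1) * db, np.sum(gb * f2) * db

F1, F2 = F12(q=1.5, Omega_r=-0.2)
K = 1.0 / np.hypot(F1, F2)        # since cos(alpha)/K = F1, sin(alpha)/K = F2
alpha = np.arctan2(F2, F1)
```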
The results of the numerical simulations that demonstrate the mixture effect of intrinsic and inertia-induced phase shifts are shown in Fig.~\ref{fig_three}. Here, we consider $N=10000$ oscillators with either no inertia or small inertias $m=0.1$. The natural frequencies are chosen from the double Gaussian or double Lorentz distributions considered in \cite{Omel2012,Omel2013}. The coupling strength is increased from $K=0$ to $K=4$ with increment $dK=0.1$. To obtain the stable states at each coupling strength $K$, two initial states of the oscillators are considered: the incoherent state and the synchronized state. From these two initial states, after a sufficiently long transient time $t=100$, we obtain the stable states at each coupling strength $K$, shown as circles in Fig.~\ref{fig_three}. The theoretical results are obtained from the self-consistent method in \cite{Omel2012,Omel2013} for $m=0$ and from Eq.~\eqref{eq_self_consistent3} for $m=0.1$. Because the inertia $m=0.1$ is quite small, the difference between $b_S$ and $b_L$ is negligible. The synchronization transitions can be read off directly from the stable states: if there is only one stable state for each $K$, the transition is continuous; if, on the contrary, there are multiple, discontinuous branches of stable states, the transition is abrupt. Comparing the numerical simulations and theoretical results, we first find that the theoretical predictions of the self-consistent method coincide well with the results of the numerical simulations. Secondly, even with a small value of the inertia, such as $m=0.1$, the stable states of the oscillators change dramatically, resulting in corresponding changes in the synchronization transitions. This phenomenon was found in \cite{Barre2016} through a stability analysis around the critical point.
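The forward-process protocol described above can be sketched as follows. This is our own minimal sketch: $N$, $dt$, and the integration times are much smaller than in the paper's $N=10000$, $t=100$ runs, and the explicit mean-field form of the equations of motion is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, D, alpha = 500, 0.1, 1.0, 1.07
# assumed double-Gaussian natural frequencies (weights 0.6 / 0.4 as in Fig. 1(a,b))
Omega = np.where(rng.random(N) < 0.6, rng.normal(0, 1, N), rng.normal(0, 0.1, N))

theta = rng.uniform(0, 2 * np.pi, N)          # incoherent initial state
omega = np.zeros(N)
dt = 0.01
rs = []
for K in np.arange(0.0, 4.0 + 1e-9, 0.5):     # forward sweep (coarse steps here)
    for _ in range(3000):                     # transient at this K (warm start)
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        # mean-field form: m th'' + D th' = Omega + K r sin(psi - th - alpha)
        acc = (Omega - D * omega + K * r * np.sin(psi - theta - alpha)) / m
        theta, omega = theta + dt * omega, omega + dt * acc
    rs.append(np.abs(np.exp(1j * theta).mean()))
```

The backward process is the same loop with the $K$ values reversed and a synchronized initial state.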
In this paper, through Eq.~\eqref{limit cycle} and the self-consistent method, we show that the physical mechanism behind these transitions is the inertia-induced phase shift $\Delta$ and its direct mixture with the intrinsic phase shift $\alpha$. This mixture results in a cancellation of the effect of the phase shift and consequently leads to continuous synchronization transitions for oscillators with unimodal distributions. Interestingly, this analysis can also be applied to second-order oscillators with $\alpha=0$, where the phase shift $\Delta$ in general introduces abrupt transitions \cite{Gao2018}. \section{Oscillating Synchronization Process}\label{section_two} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{nf2} \caption{(a) Synchronization transitions for oscillators with a Gaussian distribution $g_\Omega(\Omega)=\frac{1}{\sqrt{2\pi}}e^{-\Omega^2/2}$ and $m=2, \alpha=0.5$. The dotted and dash-dotted lines are the solutions of the self-consistent equations in the backward and forward processes, respectively. Squares and circles with error bars are from the numerical simulations of $10000$ oscillators in the forward and backward processes, where the error bar is the standard deviation of $r(t)$. For the oscillating state at $K=16$ in the forward process we show (b) the order parameter $r(t)$ and (c) the mean frequencies of the oscillators versus their natural frequencies, with the distribution of the mean frequencies in the inset. (d) The mean frequencies of the two largest synchronized clusters in the forward process.} \label{Fig_oscillating} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f5} \caption{(a) Phase diagram of $1000$ oscillators in the forward processes up to $K=40$ with inertias $m$ and phase shifts $\alpha$. The natural frequencies of the oscillators are chosen randomly from a Gaussian distribution $g_\Omega(\Omega)=\frac{1}{\sqrt{2\pi}}e^{-\Omega^2/2}$.
(b) The basin of attraction of oscillating states at $K=20$ with $m=2, \alpha=0.5$. The oscillators are sorted into two groups by their natural frequencies, with fractions $n_1$ and $n_2$ where $n_1+n_2=1$. The oscillators' initial frequencies are chosen randomly from $[\Omega_1-\delta\omega,\Omega_1+\delta\omega]$ and $[\Omega_2-\delta\omega,\Omega_2+\delta\omega]$ respectively. The initial phases of all oscillators are chosen randomly from $[0,2\pi]$. We set $\Omega_1=-1$ and $\delta\omega=0.1$. The separation between oscillating states and synchronization states is determined by the standard deviation $\sigma_r=0.1$ of $r(t)$.}\label{Fig_oscillating00 } \end{figure} In the previous section we saw how the cross-effect of phase shifts and inertias changes the synchronization transitions from abrupt to continuous or vice versa through the direct mixture of $\alpha$ and $\Delta$. Here, we show that the same cross-effect leads to a different synchronization transition, in which the oscillators do not reach a steady state with increasing coupling strength but instead reach an oscillating state for arbitrarily large coupling strength. This phenomenon is due to the formation of several synchronized clusters and appears in the parameter region of relatively large inertias and phase shifts. First, we check the existence of the complete synchronization state in the limit $K\rightarrow\infty$. Suppose that for sufficiently large coupling strength the system converges to complete synchronization, with $r\rightarrow1$ and all the oscillators locked; then $q\approx K \gg 1$. In this case Eq.~\eqref{eq_self_consistent3b} reads \begin{equation}\label{eq_limit} r\sin\alpha\approx\int_{-qb_{S,L}(a)+D\Omega^r}^{qb_{S,L}(a)+D\Omega^r}g_\Omega(\Omega) \frac{\Omega-D\Omega^r}{q}d\Omega \end{equation} where $qb_{S,L}(a)+D\Omega^r\gg0$ and $-qb_{S,L}(a)+D\Omega^r\ll0$ are the self-consistent conditions for complete synchronization.
From the fact that $g_\Omega(\Omega)$ is a normalized distribution, we obtain the solution \begin{equation}\label{eq_solution} D\Omega^r\approx\bar{\Omega}-K\sin\alpha\approx\bar{\Omega}-q\sin\alpha, \end{equation} where $\bar{\Omega}$ is the mean of the natural frequencies $\Omega$. From the assumed symmetry of $g_\Omega(\Omega)$, we have $\bar{\Omega}=0$. Note that the collective frequency $\Omega^r$ depends on the coupling strength. Such a complete synchronization state exists only if the self-consistent conditions \begin{equation}\label{Condition} qb_{S,L}(a)+D\Omega^r\gg 0, \ -qb_{S,L}(a)+D\Omega^r\ll 0, \end{equation} are satisfied. When $q$ is sufficiently large, we have $qb_L= q$ and $qb_S\approx C\sqrt{q}$ with the constant $C=4D/(\pi\sqrt{m})$. From the solution $D\Omega^r=-q\sin\alpha$, we deduce that only in the backward process, with $qb_L=q$, are the self-consistent conditions Eq.~\eqref{Condition} satisfied (provided $\alpha\neq\pm\pi/2$), and hence the complete synchronization states exist. On the contrary, in the forward process both expressions in Eq.~\eqref{Condition} have the same sign, either positive or negative depending on the value of $\alpha$. In both cases, the self-consistent conditions are not satisfied and the oscillators cannot converge to the complete synchronization states. The critical coupling strength $K_n$ for this new synchronization process can be estimated from $qb_{S,L}(a)=|D\Omega^r|$, which gives \begin{equation} K_n=\frac{16D^2}{\pi^2m\sin^2\alpha}. \end{equation} When $\alpha\rightarrow0$ or $m\rightarrow0$, we have $K_n\rightarrow\infty$. In this case, all the oscillators are already synchronized with each other and therefore this new synchronization process does not manifest itself. The new synchronization process only appears in the forward process, when $K_n$ is smaller than the critical point for the appearance of complete synchronization states.
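The threshold $K_n$ is straightforward to evaluate; the sketch below (our own, with illustrative parameter values) also illustrates its divergence for small $\alpha$:

```python
import numpy as np

def K_n(D, m, alpha):
    # K_n = 16 D^2 / (pi^2 m sin^2 alpha)
    return 16 * D**2 / (np.pi**2 * m * np.sin(alpha)**2)

K_example = K_n(D=1.0, m=2.0, alpha=0.5)       # finite threshold
K_small_alpha = K_n(D=1.0, m=2.0, alpha=0.01)  # K_n diverges as alpha -> 0
```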
Hence one recovers the usual synchronization processes for either small inertias or small phase shifts. In addition, when both $\alpha$ and $m$ are large enough, another effect of inertias should also be included, namely the appearance of additional synchronized clusters. As discussed in \cite{Olmi2014,Gao2018}, for second-order oscillators with large enough inertias, the steady states with only one cluster are not stable and several additional clusters can form besides the central cluster. In this case, the amplitude of the order parameter $r(t)$ exhibits a periodic oscillation. This kind of state is called an \emph{oscillating state}, and is a direct result of inertias \cite{Gao2018}. Hence, as shown in Fig.~\ref{Fig_oscillating}, numerical simulations reveal that the synchronization process converges to oscillating states and not to the steady states calculated with the self-consistent method. In this case, the oscillators form two major synchronized clusters. We name this synchronization process the oscillating synchronization process, to distinguish it from the classic synchronization process that leads to the complete synchronization state. The numerical results for $N=10000$ oscillators are shown in Fig.~\ref{Fig_oscillating}. The natural frequencies of the oscillators are chosen randomly from a Gaussian distribution. The inertia and phase shift of the oscillators are $m=2$, $\alpha=0.5$. Both the forward and backward processes are considered in the region $K\in[0,20]$ with $dK=0.1$. The numerical result in the backward process coincides well with the theoretical result from the self-consistent equations Eq.~\eqref{eq_self_consistent}, as shown in Fig.~\ref{Fig_oscillating}(a). However, in the forward process, the order parameter exhibits a large oscillation.
For a specific state in the forward process at $K=16$, we show the order parameter $r(t)$, and mean-frequency $\bar\omega$ in Fig.~\ref{Fig_oscillating}(b,c). We see the periodic oscillation of $r(t)$ and correspondingly the multi-synchronization clusters shown as the stairs in Fig.~\ref{Fig_oscillating}(c). To check the properties of the oscillating state, we show the mean-frequency of the largest two clusters in the forward process. As shown in Fig.~\ref{Fig_oscillating}(d), the two mean-frequencies of these clusters depend linearly on the coupling strength $K$. Though the results in Fig.~\ref{Fig_oscillating} are shown up to $K=20$, we have checked that these non-synchronized oscillating states are still stable up to $K=500$. Recall that when there is no phase shift, these synchronized clusters will merge into a single one for sufficiently large coupling strength \cite{Gao2018}. However, due to the phase shift $\alpha$ the separation of such clusters is strengthened. The frequency of these two clusters depends on the coupling strength $K$ approximately linearly with a slope proportional to the fraction of oscillators in it, as shown in Fig.~\ref{Fig_oscillating}(d). As a matter of fact, these two clusters cannot be synchronized by increasing the coupling strength. As a simple model exhibiting the same behaviour, consider a special system with only two values of natural frequencies, i.e. $N_1$ oscillators with $\Omega_1$ and $N_2$ oscillators with $\Omega_2$, following \begin{equation} m\ddot{\theta_i}+D\dot{\theta_i}=\Omega_1+\frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j-\theta_i-\alpha), \end{equation} when $i=1,\dots,N_1$, and \begin{equation} m\ddot{\theta_i}+D\dot{\theta_i}=\Omega_2+\frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j-\theta_i-\alpha), \end{equation} when $i=N_1+1,\dots,N=N_1+N_2.$ The oscillators are naturally divided into two groups and synchronized within each group. 
The dimension of the system can be reduced and one finds \begin{subequations} \begin{align} &m\ddot{\theta_1}+D\dot{\theta_1}=\Omega_1-Kn_1\sin\alpha+Kn_2\sin(\theta_2-\theta_1-\alpha),\\ &m\ddot{\theta_2}+D\dot{\theta_2}=\Omega_2-Kn_2\sin\alpha+Kn_1\sin(\theta_1-\theta_2-\alpha), \end{align} \end{subequations} where $\theta_1$ and $\theta_2$ are the common phases of the oscillators in the first and second group respectively, and $n_1=N_1/N, n_2=N_2/N$ with $n_1+n_2=1$. With the definition of the phase difference $\varphi=\theta_1-\theta_2$ we have \begin{equation}\label{eq_two_1} \begin{aligned} m\ddot{\varphi}+D\dot{\varphi}=&\Omega_1-\Omega_2-K(n_1-n_2)\sin\alpha\\ &-K[n_2\sin(\varphi+\alpha)+n_1\sin(\varphi-\alpha)]. \end{aligned} \end{equation} Without loss of generality, taking $n_1>n_2$, Eq.~\eqref{eq_two_1} can be rewritten as \begin{equation}\label{eq_two_2} m\ddot{\varphi}+D\dot{\varphi}=\Delta\Omega-\bar{K}\sin(\varphi+\bar{\alpha}), \end{equation} where \begin{subequations} \begin{align} \Delta\Omega&=\Omega_1-\Omega_2-K(n_1-n_2)\sin\alpha,\\ \bar{K}&=K\sqrt{\cos^2\alpha+(n_1-n_2)^2\sin^2\alpha}\equiv Kq(\alpha),\\ \bar{\alpha}&=\arcsin\left(\frac{(n_1-n_2)\sin\alpha}{\sqrt{\cos^2\alpha+(n_1-n_2)^2\sin^2\alpha}}\right). \end{align} \end{subequations} The simplified dynamics in Eq.~\eqref{eq_two_2} is the same as the dynamics for second-order oscillators in the mean field, Eq.~\eqref{eq_single_dynamic}. Hence the synchronization condition for the two clusters is determined by the two parameters \begin{equation}\label{dynamics_2} a=\frac{D}{\sqrt{Kq(\alpha)m}},\ \ b=\frac{\Omega_1-\Omega_2}{Kq(\alpha)}-\frac{(n_1-n_2)\sin\alpha}{q(\alpha)}. \end{equation} As a result, with $K\rightarrow\infty$, we have $a\rightarrow0$ and $|b|\rightarrow(n_1-n_2)\sin\alpha/q(\alpha)>0$. From the fact that $b_S(a)\rightarrow0$ in the limit $a\rightarrow0$, the synchronization condition $|b|<b_S(a)$ cannot be satisfied with increasing $K$.
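The limiting behavior of $a$ and $b$ from Eq.~\eqref{dynamics_2} can be checked directly (our own sketch; all parameter values are illustrative):

```python
import numpy as np

D, m, alpha = 1.0, 2.0, 0.5
O1, O2, n1, n2 = 1.0, -1.0, 0.7, 0.3   # two-cluster toy parameters, n1 > n2
q_a = np.sqrt(np.cos(alpha)**2 + (n1 - n2)**2 * np.sin(alpha)**2)

def ab(K):
    # parameters a, b of the reduced two-cluster dynamics
    a = D / np.sqrt(K * q_a * m)
    b = (O1 - O2) / (K * q_a) - (n1 - n2) * np.sin(alpha) / q_a
    return a, b

a_inf, b_inf = ab(1e8)                 # probe the K -> infinity limit
b_limit = (n1 - n2) * np.sin(alpha) / q_a
# a -> 0 while |b| -> (n1 - n2) sin(alpha)/q(alpha) > 0, so |b| < b_S(a) fails
```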
In this case, we have the non-synchronized process, where the two clusters cannot be synchronized. It is clear that the non-synchronized process is due to the cross-effect of inertia and phase shift. If $\alpha=0$, we have $q(\alpha)=1$. Substitution of $\alpha$ and $q(\alpha)$ into Eq.~\eqref{dynamics_2} yields \begin{equation} a=\frac{D}{\sqrt{Km}},\ \ b=\frac{\Omega_1-\Omega_2}{K}. \end{equation} In the limit $K\rightarrow\infty$, we have $b_S\approx 4D/(\pi\sqrt{Km})$. Hence, no matter how large $|\Omega_1-\Omega_2|$ is, there is always a finite critical coupling $K_c=\pi^2(\Omega_1-\Omega_2)^2m/(16D^2)$ above which the two clusters become synchronized. On the other hand, if $m=0$ one gets $b_S=1$. From the fact that $(n_1-n_2)\sin\alpha/q(\alpha)<1$ we have $|b|<1=b_S$ in the limit $K\rightarrow\infty$. As a result, these two clusters will be synchronized when $K$ is large enough. In addition, similarly to the analysis for $m=0$, in the backward processes with $b_L\equiv 1$ the synchronization states are not affected by the inertias and phase shifts. As a result, from the quite different properties of $b_L$ and $b_S(a)$, we have the non-trivial bi-stability of complete synchronization and oscillating states. To test the conclusion above, we calculate the phase diagram of the non-synchronized oscillating states. $N=10000$ oscillators are considered with a Gaussian distribution of their natural frequencies. With different inertias $m\in[0.1,4]$ and phase shifts $\alpha\in[0.01,1]$, we follow the oscillators in the forward process up to a sufficiently large coupling $K=40$. The boundary between oscillating states and partial synchronization states is determined by whether the standard deviation of the order parameter $r(t)$ exceeds $0.1$. The result is shown in Fig.~\ref{Fig_oscillating00 }(a). The oscillating states exist when both the inertia and the phase shift are relatively large. The fitted boundary line reads $\alpha=(\pi/2)/(1+2.96m)$.
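The $\alpha=0$ critical coupling can be verified against the criterion $|b|<b_S(a)$ (our own sketch with illustrative values; the large-$K$ form $b_S\approx 4D/(\pi\sqrt{Km})$ is used):

```python
import numpy as np

D, m = 1.0, 2.0
dO = 3.0                                    # Omega_1 - Omega_2, illustrative
K_c = np.pi**2 * dO**2 * m / (16 * D**2)    # predicted critical coupling

def synchronizable(K):
    # alpha = 0 two-cluster criterion |b| < b_S(a), large-K form of b_S
    b = dO / K
    b_S = 4 * D / (np.pi * np.sqrt(K * m))
    return abs(b) < b_S
```

Just below $K_c$ the criterion fails, just above it holds, so the two clusters merge once $K>K_c$.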
In Fig.~\ref{Fig_oscillating00 }(b), we check the basin of attraction of the oscillating state at $K=20$. The oscillators are sorted into two groups according to their natural frequencies. The fractions of the two groups are denoted by $n_1$ and $n_2$ with $n_1+n_2=1$. The initial frequencies of the oscillators are chosen randomly from a small region around $\Omega_1$ and $\Omega_2$, and their initial phases are chosen randomly from $[0,2\pi]$. Without loss of generality, we take $\Omega_1=-1$. From the numerical simulations, we see that there is a large basin of attraction of the oscillating state, as shown in Fig.~\ref{Fig_oscillating00 }(b). As we can see from the expressions for the parameters $a,b$ in Eq.~\eqref{dynamics_2}, the basin of attraction of oscillating states is closely related to the frequency and fraction separations $|\Omega_1-\Omega_2|$ and $|n_1-n_2|$ of the two groups. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f4} \caption{Synchronization transitions for $1000$ oscillators with a Gaussian distribution $g_\Omega(\Omega)=\frac{1}{\sqrt{2\pi}}e^{-\Omega^2/2}$ and $m=2, \alpha=0.5$ in the backward process on (a) Erd\H{o}s-R\'{e}nyi random networks \cite{Erdos1960evolution} with $p=0.3$ the probability for edge creation, (b) Watts-Strogatz small-world networks \cite{Watts1998collective} with $k=100$ nearest neighbors in a ring and $p=0.3$ the rewiring probability, (c) Barab\'{a}si-Albert scale-free networks \cite{Barabasi1999emergence} with minimum degree $k_0=50$, (d) Barab\'{a}si-Albert scale-free networks with minimum degree $k_0=3$.}\label{Fig_oscillating0} \end{figure} To check the generality of the oscillating synchronization process, we considered various systems of second-order Kuramoto-Sakaguchi oscillators.
For all-to-all coupled oscillators, this new oscillating synchronization process is found in all the cases we considered, including uniform, Lorentz and double-Gaussian/Lorentz distributions of the natural frequencies. For oscillators in complex networks, we consider scale-free, Erd\H{o}s-R\'{e}nyi random, and small-world networks. The non-synchronized processes are found in all these systems, as shown in Fig.~\ref{Fig_oscillating0}. The oscillating states appear in these processes when the mean degree of the network is large. On the other hand, with a smaller mean degree, the second synchronized cluster is suppressed by the topology of the network. We find that the non-synchronized steady states converge to $r=0$ in the limit $K\rightarrow\infty$, as shown in Fig.~\ref{Fig_oscillating0}(d). The oscillating synchronization process depends on the mean degree of the networks, not on their densities. This fact is closely related to the conditions determining whether the mean-field assumption works for random networks. The suppression of the oscillating states is beyond the scope of this paper, and will be considered in a forthcoming work. \section{Conclusion}\label{section_four} In this paper we analyse second-order oscillators with phase shifts, namely the second-order Kuramoto-Sakaguchi model. The self-consistent method is generalized and used to study the steady states of the oscillators. Owing to the inertia-induced phase shifts, the non-universal transitions of Kuramoto-Sakaguchi oscillators \cite{Omel2012} are canceled out by even a small value of $m$. The change from abrupt to continuous transitions under the effect of inertias proposed in \cite{Barre2016} is also recovered and studied by the self-consistent method. In addition, the cross-effect of inertias and phase shifts also results in the oscillating synchronization forward processes. Instead of reaching a synchronization state, the system stays in the oscillating state and cannot be synchronized by increasing the coupling strength.
This interesting phenomenon is due to the combination of additional synchronized clusters, as an effect of inertias, and the dependence of $\Omega^r$ on $K$, as an effect of phase shifts. Numerical simulations show that such non-synchronized processes also arise for different distributions of natural frequencies and different network topologies. \begin{acknowledgments} We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. J. Gao would like to acknowledge scholarship support from the China Scholarship Council (CSC). \end{acknowledgments} \bibliographystyle{elsarticle-num-names}
\section{Introduction} The system characterized by the $2$ \textit{nonlinearly-coupled} ODEs \begin{subequations} \label{1} \begin{eqnarray} \dot{x}_{n}\left( t\right) =c_{n1}\left[ x_{1}\left( t\right) \right] ^{2}+c_{n2}x_{1}\left( t\right) x_{2}\left( t\right) +c_{n3}\left[ x_{2}\left( t\right) \right] ^{2} && \notag \\ +c_{n4}x_{1}\left( t\right) +c_{n5}x_{2}\left( t\right) +c_{n6}~,~~~n=1,2~, && \label{1a} \end{eqnarray} namely \begin{equation} \dot{x}_{1}\left( t\right) =c_{11}\left[ x_{1}\left( t\right) \right] ^{2}+c_{12}x_{1}\left( t\right) x_{2}\left( t\right) +c_{13}\left[ x_{2}\left( t\right) \right] ^{2}+c_{14}x_{1}\left( t\right) +c_{15}x_{2}\left( t\right) +c_{16}~, \label{1b} \end{equation} \begin{equation} \dot{x}_{2}\left( t\right) =c_{21}\left[ x_{1}\left( t\right) \right] ^{2}+c_{22}x_{1}\left( t\right) x_{2}\left( t\right) +c_{23}\left[ x_{2}\left( t\right) \right] ^{2}+c_{24}x_{1}\left( t\right) +c_{25}x_{2}\left( t\right) +c_{26}~, \label{1c} \end{equation} is a prototypical example of \textit{autonomous} dynamical systems. It features the $12$ \textit{a priori arbitrary} time-independent parameters $c_{nj}$ ($n=1,2$; $j=1,2,3,4,5,6$). The main finding of the present paper is to show that the \textit{initial-values} problem of this dynamical system can be \textit{explicitly} solved, provided the $12$ coefficients $c_{nj}$ ($n=1,2$; $j=1,2,3,4,5,6$) satisfy $4$ \textit{simple constraints}: for a neat version of these \textit{constraints} see \textbf{Section 4} below, where the solution of the \textit{initial-values} problem of the system (\ref{1}) is displayed. \textbf{Notation 1-1}.
The $2$ (possibly \textit{complex}) numbers $x_{n}\equiv x_{n}\left( t\right)$, $n=1,2,$ are the dependent variables; $t$ is the independent variable (``time''; but the treatment remains valid when $t$ is considered a \textit{complex} number); superimposed dots denote $t$-differentiations; the $12$ time-independent (possibly \textit{complex}) numbers $c_{nj},$ $n=1,2,$ $j=1,2,3,4,5,6,$ are parameters. Hereafter the time-dependence of variables is often not \textit{explicitly} indicated, when this omission is unlikely to cause misunderstandings. The indices $n$, $m$, $j$, $k$, $\ell$---as indeed generally clear from the context---run respectively over the integers from $1$ to $2$ ($n=1,2$), from $1$ to $2$ ($m=1,2$), from $1$ to $6$ ($j=1,2,3,4,5,6$), from $1$ to $3$ ($k=1,2,3$) and from $2$ to $0$ ($\ell =2,1,0$). $\blacksquare$ \textbf{Remark 1-1}. The system (\ref{1}) has been investigated over time in an enormous number of mainly \textit{mathematical}, or mainly \textit{applicable}, contexts; too many to allow any attempt to provide a list of references that would do justice to the multitude of relevant papers. We limit ourselves here to quote only $3$ very recent papers: \cite{CCL2020} (where the case with \textit{homogeneous} right-hand sides has been treated, i.e. the system (\ref{1}) with $c_{nj}=0$ for $n=1,2$ and $j>3$), and \cite{CP2019}, \cite{CP2020} (which treat a multiplicity of analogous models); because these papers have motivated this research and also because from them relevant previous references can be traced. $\blacksquare$ \textbf{Remark 1-2}. The system (\ref{1}) is clearly \textit{invariant} under the symmetry transformation \end{subequations} \begin{eqnarray} &&c_{11}\Leftrightarrow c_{23}~,~c_{12}\Leftrightarrow c_{22}~,~c_{13}\Leftrightarrow c_{21}~,~c_{14}\Leftrightarrow c_{25}~,~c_{15}\Leftrightarrow c_{24}~,~c_{16}\Leftrightarrow c_{26}~; \notag \\ &&x_{1}\left( t\right) \Leftrightarrow x_{2}\left( t\right) ~.
\label{1Trans} \end{eqnarray} $\blacksquare$ In \textbf{Section 2} the technique used in this paper to solve the \textit{initial-values} problem of system (\ref{1}) is described: it involves the introduction of $10$ parameters $A_{nm}$ and $a_{n\ell }$, in terms of which the $12$ parameters $c_{nj}$ are expressed; the \textit{inverse} problem to express these $10$ parameters $A_{nm}$ and $a_{n\ell }$ in terms of the $12$ \textit{a priori arbitrary} parameters $c_{nj}$ is then solved in \textbf{Section 3}, and it is shown there that this entails---as it were, \textit{a posteriori}---that the $12$ parameters $c_{nj}$ are thereby required to satisfy $4$ rather simple \textit{constraints}, which are \textit{explicitly} exhibited. The reader who is only interested in the main results may jump over these $2$ sections and go directly to \textbf{Section 4}, where a \textit{summary} of the main results of this paper is presented. The subsequent \textbf{Section 5} is devoted to two special cases of the system (\ref{1}) which deserve a separate treatment; and an extremely terse \textbf{Section 6} completes the main body of the paper; which also includes $3$ short Appendices. \section{The technique to solve the system (\protect\ref{1})} \textbf{First position}: \begin{subequations} \label{xnyn} \begin{equation} x_{n}\left( t\right) =A_{n1}y_{1}\left( t\right) +A_{n2}y_{2}\left( t\right) ~, \label{xnyna} \end{equation} namely \begin{equation} x_{1}\left( t\right) =A_{11}y_{1}\left( t\right) +A_{12}y_{2}\left( t\right) ~, \label{xnynb} \end{equation} \begin{equation} x_{2}\left( t\right) =A_{21}y_{1}\left( t\right) +A_{22}y_{2}\left( t\right) ~. \label{xnync} \end{equation} This assignment implies the introduction of the $4$, \textit{a priori arbitrary}, time-independent parameters $A_{nm}$ ($n=1,2$; $m=1,2$) and of the $2$ \textit{auxiliary} variables $y_{1}\left( t\right)$ and $y_{2}\left( t\right)$. \textbf{Remark 2-1}.
This assignment is clearly invariant under the transformation \end{subequations} \begin{equation} x_{1}\left( t\right) \Leftrightarrow x_{2}\left( t\right) ~,~~y_{1}\left( t\right) \Leftrightarrow y_{2}\left( t\right) ~,~~A_{11}\Leftrightarrow A_{22}~,~~A_{12}\Leftrightarrow A_{21}~. \label{2Transx1x2} \end{equation} $\blacksquare$ \textbf{Remark 2-2}. In the following (except in \textbf{Subsection 5.1}) we generally assume that none of the $4$ parameters $A_{nm}$ vanishes, $A_{nm}\neq 0$. $\blacksquare$ \textbf{Remark 2-3}. The addition of $2$ \textit{a priori arbitrary} additional parameters---say $A_{1}$ respectively $A_{2}$---to the right-hand sides of the $2$ eqs. (\ref{xnyn}) would only complicate the following developments without providing any significant additional generality to our treatment. $\blacksquare$ \textbf{Evolution of the auxiliary variables }$y_{n}\left( t\right)$. Let us hereafter assume that $y_{1}\left( t\right)$ and $y_{2}\left( t\right)$ evolve according to the following system of $2$ \textit{decoupled} ODEs: \begin{subequations} \label{yndot} \begin{equation} \dot{y}_{n}\left( t\right) =a_{n2}\left[ y_{n}\left( t\right) \right] ^{2}+a_{n1}y_{n}\left( t\right) +a_{n0}~, \label{yndota} \end{equation} namely \begin{equation} \dot{y}_{1}\left( t\right) =a_{12}\left[ y_{1}\left( t\right) \right] ^{2}+a_{11}y_{1}\left( t\right) +a_{10}~, \label{yndotb} \end{equation} \begin{equation} \dot{y}_{2}\left( t\right) =a_{22}\left[ y_{2}\left( t\right) \right] ^{2}+a_{21}y_{2}\left( t\right) +a_{20}~.
\label{yndotc} \end{equation} The \textit{initial-values} problem of this system of $2$ (decoupled) ODEs---which involve the $6$ \textit{a priori arbitrary} time-independent parameters $a_{n\ell }$ ($n=1,2;$ $\ell =2,1,0$)---is easily seen to be \textit{explicitly solvable} (see \textbf{Appendix A}): \end{subequations} \begin{subequations} \label{ynt} \begin{equation} y_{n}\left( t\right) =\frac{y_{n}\left( 0\right) \left[ y_{n+}-y_{n-}\exp \left( \beta _{n}t\right) \right] -y_{n+}y_{n-}\left[ 1-\exp \left( \beta _{n}t\right) \right] }{y_{n+}\exp \left( \beta _{n}t\right) -y_{n-}+y_{n}\left( 0\right) \left[ 1-\exp \left( \beta _{n}t\right) \right] }~,~~~n=1,2~, \label{2ynt} \end{equation} with the $6$ (time-independent) parameters $y_{n\pm }$ and $\beta _{n}$ defined (above and hereafter) in terms of the $6$ parameters $a_{n\ell }$ as follows: \begin{equation} y_{n\pm }=\left( -a_{n1}\pm \beta _{n}\right) /\left( 2a_{n2}\right) ~,~~~\beta _{n}=\sqrt{\left( a_{n1}\right) ^{2}-4a_{n0}a_{n2}}~,~~~n=1,2~. \label{ynplusminusbetan} \end{equation} \textbf{Remark 2-4}. The system of $2$ decoupled ODEs (\ref{yndot}) is clearly invariant under the transformation \end{subequations} \begin{equation} y_{1}\left( t\right) \Leftrightarrow y_{2}\left( t\right) ~,~~~a_{1\ell }\Leftrightarrow a_{2\ell }~,~~~\ell =2,1,0~.
\label{Transy1y2a1la2l} \end{equation} Hence the combination of this invariance property with that reported above---see \textbf{Remark 2-1}---clearly entails the overall invariance property of the $2$ systems of ODEs (\ref{1}) and (\ref{yndot}), as well as the change of variables (\ref{xnyn}), under the following transformations \begin{eqnarray} x_{1}\left( t\right) &\Leftrightarrow &x_{2}\left( t\right) ~,~~y_{1}\left( t\right) \Leftrightarrow y_{2}\left( t\right) ~,~~A_{11}\Leftrightarrow A_{22}~,~~A_{12}\Leftrightarrow A_{21}~, \notag \\ a_{11} &\Leftrightarrow &a_{21}~,~~a_{12}\Leftrightarrow a_{22}~,~~a_{10}\Leftrightarrow a_{20}~.~~~\blacksquare \label{23Trasns} \end{eqnarray} Let us emphasize that, while the solutions (\ref{ynt}) are \textit{rather simple}, their behaviors as functions of $t$---even for \textit{real} $t$ ("time")---can be \textit{fairly complicated} if the parameters $\beta _{n}$ are themselves \textit{not real}; as may well be the case even if \textit{all} the parameters $a_{n\ell }$ are \textit{real} numbers: see (\ref{ynplusminusbetan}). On the other hand if both $\beta _{n}=\mathbf{i}\rho _{n}\omega $ with $n=1,2$, $\omega $ an \textit{arbitrary (nonvanishing) real} number and with \textit{both} parameters $\rho _{n}$ \textit{rational (nonvanishing) real} numbers, then clearly (or, if need be, see for instance \cite{C2008}) the evolution of both the pairs $y_{n}\left( t\right) $ and $x_{n}\left( t\right) $---as functions of \textit{real} $t$ ("time")---is \textit{completely periodic} with a period \textit{independent} of the respective initial data, which is an \textit{integer} multiple of the basic period $2\pi /\left\vert \omega \right\vert $ ("isochrony").
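The solution formula (\ref{2ynt}) lends itself to an immediate numerical check. The following Python sketch is an editorial addition; the parameter values $a_{2}=1$, $a_{1}=-3$, $a_{0}=2$, $y\left( 0\right) =1.5$ are merely illustrative (not taken from this paper), chosen so that $\beta =1$ and $\left( y_{+},y_{-}\right) =\left( 2,1\right) $ are real.

```python
import cmath

def riccati_solution(y0, a2, a1, a0, t):
    """Closed-form solution (2ynt) of ydot = a2*y^2 + a1*y + a0, y(0) = y0."""
    beta = cmath.sqrt(a1 * a1 - 4 * a0 * a2)
    yp = (-a1 + beta) / (2 * a2)   # equilibrium y_{n+}
    ym = (-a1 - beta) / (2 * a2)   # equilibrium y_{n-}
    e = cmath.exp(beta * t)
    num = y0 * (yp - ym * e) - yp * ym * (1 - e)
    den = yp * e - ym + y0 * (1 - e)
    return num / den

# Illustrative (hypothetical) data: ydot = (y - 1)(y - 2), so beta = 1 and
# (y+, y-) = (2, 1); the datum y(0) = 1.5 lies between the two equilibria.
a2, a1, a0, y0 = 1.0, -3.0, 2.0, 1.5

def ode_residual(t, h=1e-6):
    """|dy/dt - (a2*y^2 + a1*y + a0)|, with dy/dt from a central difference."""
    y = riccati_solution(y0, a2, a1, a0, t)
    dy = (riccati_solution(y0, a2, a1, a0, t + h)
          - riccati_solution(y0, a2, a1, a0, t - h)) / (2 * h)
    return abs(dy - (a2 * y * y + a1 * y + a0))
```

Since $Re\left[ \beta \right] =1>0$ for this choice, the solution should also approach the equilibrium $y_{-}=1$ as $t\rightarrow +\infty $ (see below).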
Let us also note that $y_{n\pm }$ (see (\ref{ynplusminusbetan})) are the \textit{equilibrium} positions of the system (\ref{yndot}), while the \textit{asymptotic} behavior of $y_{n}\left( t\right) $ as the (\textit{real}) time $t$ \textit{diverges} is always rather simple: indeed if $Re\left[ \beta _{n}\right] <0$, then \begin{subequations} \begin{equation} \underset{t\rightarrow +\infty }{\lim }\left[ y_{n}\left( t\right) \right] =y_{n+}~; \label{yn+} \end{equation} while if $Re\left[ \beta _{n}\right] >0$ then \begin{equation} \underset{t\rightarrow +\infty }{\lim }\left[ y_{n}\left( t\right) \right] =y_{n-}~. \label{yn-} \end{equation} And clearly the corresponding \textit{equilibrium} configurations and \textit{asymptotic} behaviors of the variables $x_{n}\left( t\right) $ are as well rather simple (see (\ref{xnyn})): but note that the asymptotic behavior of these variables $x_{n}\left( t\right) $ is \textit{asymptotically isochronous} (see \cite{CGU2008}), when one and only one of the $2$ quantities $\beta _{n}$ is \textit{purely imaginary}. It is of course evident that the \textit{coupled} system of $2$ ODEs implied by the relations (\ref{xnyn}) and by the $2$ ODEs (\ref{yndot}) satisfied by the $2$ variables $y_{n}\left( t\right) $ is \textit{identical} to the system of $2$ ODEs (\ref{1}) satisfied by the $2$ variables $x_{n}\left( t\right) $, of course provided the $12$ parameters $c_{nj}$ are \textit{appropriately} expressed in terms of the $10=4+6$ parameters $A_{nm}$ in (\ref{xnyn}) and $a_{n\ell }$ in (\ref{yndot}).
The corresponding computation of the explicit expressions of the $12$ parameters $c_{nj}$ in terms of the $10$ parameters $A_{nm}$, $a_{n\ell }$ is a standard---if tedious---task (see \textbf{Appendix B}), yielding (for the $6$ parameters $c_{1j}$) the following results: \end{subequations} \begin{subequations} \label{c1j} \begin{equation} c_{11}=\left[ a_{12}A_{11}\left( A_{22}\right) ^{2}+a_{22}A_{12}\left( A_{21}\right) ^{2}\right] /D^{2}~, \end{equation} \begin{equation} c_{12}=-2A_{11}A_{12}\left[ a_{12}A_{22}+a_{22}A_{21}\right] /D^{2}~, \end{equation} \begin{equation} c_{13}=A_{11}A_{12}\left[ a_{12}A_{12}+a_{22}A_{11}\right] /D^{2}~, \end{equation} \begin{equation} c_{14}=\left( a_{11}A_{11}A_{22}-a_{21}A_{12}A_{21}\right) /D~, \end{equation} \begin{equation} c_{15}=-\left( a_{11}-a_{21}\right) A_{11}A_{12}/D~, \end{equation} \begin{equation} c_{16}=a_{10}A_{11}+a_{20}A_{12}~, \end{equation} where the quantity $D$ is defined, above and hereafter, as follows: \end{subequations} \begin{equation} D=A_{11}A_{22}-A_{12}A_{21}~. \label{AA} \end{equation} The analogous formulas for the $6$ parameters $c_{2j}$ can be obtained from those written above via the transformations (see \textbf{Remarks 1-2}, \textbf{2-1} and \textbf{2-2}) \begin{eqnarray} c_{11} &\Leftrightarrow &c_{23}~,~c_{12}\Leftrightarrow c_{22}~,~c_{13}\Leftrightarrow c_{21}~,~c_{14}\Leftrightarrow c_{25}~,~c_{15}\Leftrightarrow c_{24}~,~c_{16}\Leftrightarrow c_{26}~; \notag \\ A_{11} &\Leftrightarrow &A_{22}~,~~~A_{12}\Leftrightarrow A_{21}~;~~~a_{1\ell }\Leftrightarrow a_{2\ell }~, \label{2TransfcnjAnmanl} \end{eqnarray} under which the quantity $D$, see (\ref{AA}), is clearly invariant.
Hence they read as follows: \begin{subequations} \label{c2j} \begin{equation} c_{23}=\left[ a_{22}A_{22}\left( A_{11}\right) ^{2}+a_{12}A_{21}\left( A_{12}\right) ^{2}\right] /D^{2}~, \end{equation} \begin{equation} c_{22}=-2A_{22}A_{21}\left[ a_{22}A_{11}+a_{12}A_{12}\right] /D^{2}~, \end{equation} \begin{equation} c_{21}=A_{22}A_{21}\left[ a_{22}A_{21}+a_{12}A_{22}\right] /D^{2}~, \end{equation} \begin{equation} c_{25}=\left( a_{21}A_{11}A_{22}-a_{11}A_{12}A_{21}\right) /D~, \end{equation} \begin{equation} c_{24}=-\left( a_{21}-a_{11}\right) A_{22}A_{21}/D~, \end{equation} \begin{equation} c_{26}=a_{20}A_{22}+a_{10}A_{21}~. \end{equation} These findings of course imply that the solution of the \textit{initial-values} problem for the system (\ref{1}) is provided via the formulas (\ref{xnyn}) from the \textit{explicit} solutions (\ref{ynt}) of the \textit{initial-values} problem for the system (\ref{yndot}), hence essentially via rather simple, quite \textit{explicit}, \textit{algebraic} operations; this can be done for any assignment of the $12$ parameters $c_{nj}$, such that the \textit{explicit} formulas (\ref{c1j}) and (\ref{c2j})---expressing the $12$ parameters $c_{nj}$ in terms of the $10$ parameters $A_{nm}$ and $a_{n\ell }$---can be \textit{inverted}: in the following \textbf{Section 3} we show how this can be done, provided the $12$ parameters $c_{nj}$ satisfy $4$ \textit{constraints}. \section{The inverse problem: expressing the $10$ parameters $A_{nm}$ and $a_{n\ell }$ in terms of the $12$ coefficients $c_{nj}$} Our main task in this \textbf{Section 3} is to discuss the \textit{inversion} of the $12$ relations obtained above---see (\ref{c1j}) and (\ref{c2j})---expressing the $12$ coefficients $c_{nj}$ in terms of the $10$ parameters $A_{nm}$ and $a_{n\ell }$; namely to show how, given $12$ \textit{a priori arbitrary} coefficients $c_{nj}$, the $10$ corresponding parameters $A_{nm}$ and $a_{n\ell }$ can be computed.
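Before turning to the inversion, the direct formulas (\ref{c1j}) and (\ref{c2j}) can be spot-checked numerically: for any choice of $A_{nm}$ and $a_{n\ell }$, the quadratic right-hand sides built from the resulting $c_{nj}$ must reproduce $\dot{x}_{n}=A_{n1}\dot{y}_{1}+A_{n2}\dot{y}_{2}$ at every point. The following Python sketch is an editorial addition; the values $A=\left[ \left[ 1,2\right] ,\left[ 3,5\right] \right] $ and the two triples $a_{1\ell }$, $a_{2\ell }$ are merely illustrative.

```python
def c_from_Aa(A, a):
    """Coefficients c_{nj} from A_{nm} and a_{nl}, eqs. (c1j) and (c2j).

    A = [[A11, A12], [A21, A22]]; a[n-1] = (a_{n2}, a_{n1}, a_{n0}).
    Returns ([c11..c16], [c21..c26]).
    """
    (A11, A12), (A21, A22) = A
    (a12, a11, a10), (a22, a21, a20) = a
    D = A11 * A22 - A12 * A21
    c1 = [(a12 * A11 * A22**2 + a22 * A12 * A21**2) / D**2,
          -2 * A11 * A12 * (a12 * A22 + a22 * A21) / D**2,
          A11 * A12 * (a12 * A12 + a22 * A11) / D**2,
          (a11 * A11 * A22 - a21 * A12 * A21) / D,
          -(a11 - a21) * A11 * A12 / D,
          a10 * A11 + a20 * A12]
    c2 = [A22 * A21 * (a22 * A21 + a12 * A22) / D**2,
          -2 * A22 * A21 * (a22 * A11 + a12 * A12) / D**2,
          (a22 * A22 * A11**2 + a12 * A21 * A12**2) / D**2,
          -(a21 - a11) * A22 * A21 / D,
          (a21 * A11 * A22 - a11 * A12 * A21) / D,
          a20 * A22 + a10 * A21]
    return c1, c2

def quad(c, x1, x2):
    """One component of the quadratic right-hand side of the system (1)."""
    return c[0]*x1*x1 + c[1]*x1*x2 + c[2]*x2*x2 + c[3]*x1 + c[4]*x2 + c[5]

# Illustrative (hypothetical) data: D = -1, all A_{nm} nonzero.
A = [[1.0, 2.0], [3.0, 5.0]]
a = [(1.0, -3.0, 2.0), (-1.0, 0.0, 1.0)]
c1, c2 = c_from_Aa(A, a)

def mismatch(y1, y2):
    """Compare xdot = A*ydot with the quadratic right-hand sides at (y1, y2)."""
    x1 = A[0][0]*y1 + A[0][1]*y2
    x2 = A[1][0]*y1 + A[1][1]*y2
    yd = [a[0][0]*y1*y1 + a[0][1]*y1 + a[0][2],
          a[1][0]*y2*y2 + a[1][1]*y2 + a[1][2]]
    xd = [A[0][0]*yd[0] + A[0][1]*yd[1], A[1][0]*yd[0] + A[1][1]*yd[1]]
    return max(abs(xd[0] - quad(c1, x1, x2)), abs(xd[1] - quad(c2, x1, x2)))
```

For this choice the construction yields the integer coefficients $\left( c_{11},\ldots ,c_{16}\right) =\left( 7,-8,2,15,-6,4\right) $ and $\left( c_{21},\ldots ,c_{26}\right) =\left( 30,-30,7,45,-18,11\right) $, which are reused in the checks below.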
We show below how this task can be \textit{explicitly} accomplished; but also that it entails that the $12$ coefficients $c_{nj}$ ($n=1,2$; $j=1,2,3,4,5,6$) must---as it were, \textit{a posteriori}---satisfy $4$ \textit{algebraic} conditions (\textit{explicitly} obtained below). An obvious route to achieve our main task is to try and solve the $12$ \textit{algebraic} equations (\ref{c1j}) and (\ref{c2j}) for the $10$ parameters $A_{nm}$ and $a_{n\ell }$; but given the fairly \textit{large} number of these \textit{algebraic} relations and their \textit{nonlinear} character this is a nontrivial job (beyond the power of standard algebraic manipulation packages such as \textbf{Mathematica}, used on a modern PC). More progress in this direction can be made via the following alternative procedure. Let us note that the relation (\ref{y1xn}) in \textbf{Appendix B} implies that the variable $y_{1}\left( t\right) $ clearly satisfies the ODE \end{subequations} \begin{subequations} \begin{equation} \dot{y}_{1}=\left( A_{22}\dot{x}_{1}-A_{12}\dot{x}_{2}\right) /D~, \end{equation} hence, via (\ref{1}), \begin{eqnarray} &&\dot{y}_{1}=\left\{ A_{22}\left[ c_{11}\left( x_{1}\right) ^{2}+c_{12}x_{1}x_{2}+c_{13}\left( x_{2}\right) ^{2}+c_{14}x_{1}+c_{15}x_{2}+c_{16}\right] \right. \nonumber \\ &&\left. -A_{12}\left[ c_{21}\left( x_{1}\right) ^{2}+c_{22}x_{1}x_{2}+c_{23}\left( x_{2}\right) ^{2}+c_{24}x_{1}+c_{25}x_{2}+c_{26}\right] \right\} /D~, \nonumber \\ && \end{eqnarray} yielding, via (\ref{xnyn}) and some trivial if tedious algebra, \end{subequations} \begin{equation} \dot{y}_{1}=b_{11}\left( y_{1}\right) ^{2}+b_{12}y_{1}y_{2}+b_{13}\left( y_{2}\right) ^{2}+b_{14}y_{1}+b_{15}y_{2}+b_{16}~, \label{3y1dot} \end{equation} with \begin{subequations} \label{3b1j} \begin{eqnarray} b_{11} &=&\left[ \left( A_{11}\right) ^{2}\left( -A_{12}c_{21}+A_{22}c_{11}\right) +A_{11}A_{21}\left( -A_{12}c_{22}+A_{22}c_{12}\right) \right. \nonumber \\ &&\left.
+\left( A_{21}\right) ^{2}\left( -A_{12}c_{23}+A_{22}c_{13}\right) \right] /D~, \end{eqnarray} \begin{eqnarray} &&b_{12}=\left\{ A_{11}\left[ -2\left( A_{12}\right) ^{2}c_{21}+A_{12}A_{22}\left( 2c_{11}-c_{22}\right) +\left( A_{22}\right) ^{2}c_{12}\right] \right. \nonumber \\ &&\left. +A_{21}\left[ -\left( A_{12}\right) ^{2}c_{22}+A_{12}A_{22}\left( c_{12}-2c_{23}\right) +2\left( A_{22}\right) ^{2}c_{13}\right] \right\} /D~, \end{eqnarray} \begin{eqnarray} &&b_{13}=\left\{ \left( A_{12}\right) ^{2}\left[ -A_{12}c_{21}+A_{22}\left( c_{11}-c_{22}\right) \right] \right. \nonumber \\ &&\left. +\left( A_{22}\right) ^{2}\left[ A_{12}\left( c_{12}-c_{23}\right) +A_{22}c_{13}\right] \right\} /D~, \end{eqnarray} \begin{equation} b_{14}=\left[ A_{11}\left( -A_{12}c_{24}+A_{22}c_{14}\right) +A_{21}\left( -A_{12}c_{25}+A_{22}c_{15}\right) \right] /D~, \end{equation} \begin{equation} b_{15}=\left[ -\left( A_{12}\right) ^{2}c_{24}+A_{12}A_{22}\left( c_{14}-c_{25}\right) +\left( A_{22}\right) ^{2}c_{15}\right] /D~, \end{equation} \begin{equation} b_{16}=\left( -A_{12}c_{26}+A_{22}c_{16}\right) /D~. \end{equation} And now a comparison of (\ref{yndotb}) with (\ref{3y1dot}) yields, via (\ref{3b1j}), the following $6$ relations: \end{subequations} \begin{subequations} \label{3a1L} \begin{eqnarray} a_{12} &=&\left[ \left( A_{11}\right) ^{2}\left( -A_{12}c_{21}+A_{22}c_{11}\right) +A_{11}A_{21}\left( -A_{12}c_{22}+A_{22}c_{12}\right) \right. \nonumber \\ &&\left.
+\left( A_{21}\right) ^{2}\left( -A_{12}c_{23}+A_{22}c_{13}\right) \right] /D~, \end{eqnarray} \begin{equation} a_{11}=\left[ A_{11}\left( -A_{12}c_{24}+A_{22}c_{14}\right) +A_{21}\left( -A_{12}c_{25}+A_{22}c_{15}\right) \right] /D~, \end{equation} \begin{equation} a_{10}=\left( -A_{12}c_{26}+A_{22}c_{16}\right) /D~; \end{equation} \end{subequations} \begin{subequations} \label{3Anmc1L} \begin{eqnarray} &&A_{11}\left[ -2\left( A_{12}\right) ^{2}c_{21}+A_{12}A_{22}\left( 2c_{11}-c_{22}\right) +\left( A_{22}\right) ^{2}c_{12}\right] \nonumber \\ &&+A_{21}\left[ -\left( A_{12}\right) ^{2}c_{22}+A_{12}A_{22}\left( c_{12}-2c_{23}\right) +2\left( A_{22}\right) ^{2}c_{13}\right] =0~, \label{3Anm1} \end{eqnarray} \begin{eqnarray} &&\left( A_{12}\right) ^{2}\left[ -A_{12}c_{21}+A_{22}\left( c_{11}-c_{22}\right) \right] \nonumber \\ &&+\left( A_{22}\right) ^{2}\left[ A_{12}\left( c_{12}-c_{23}\right) +A_{22}c_{13}\right] =0~, \label{3Anm2} \end{eqnarray} \begin{equation} -\left( A_{12}\right) ^{2}c_{24}+A_{12}A_{22}\left( c_{14}-c_{25}\right) +\left( A_{22}\right) ^{2}c_{15}=0~. \label{FirstEqA1A2} \end{equation} By a completely analogous development, based on the ODE (\ref{yndotc}) satisfied by $y_{2}\left( t\right) $ rather than (\ref{yndotb}) satisfied by $y_{1}\left( t\right) $---or, more easily, via the symmetry properties associated to the transformation (\ref{2TransfcnjAnmanl})---one gets the following $6$ additional relations: \end{subequations} \begin{subequations} \label{3a2L} \begin{eqnarray} a_{22} &=&\left[ \left( A_{22}\right) ^{2}\left( -A_{21}c_{13}+A_{11}c_{23}\right) +A_{22}A_{12}\left( -A_{21}c_{12}+A_{11}c_{22}\right) \right. \nonumber \\ &&\left.
+\left( A_{12}\right) ^{2}\left( -A_{21}c_{11}+A_{11}c_{21}\right) \right] /D~, \end{eqnarray} \begin{equation} a_{21}=\left[ A_{22}\left( -A_{21}c_{15}+A_{11}c_{25}\right) +A_{12}\left( -A_{21}c_{14}+A_{11}c_{24}\right) \right] /D~, \end{equation} \begin{equation} a_{20}=\left( -A_{21}c_{16}+A_{11}c_{26}\right) /D~; \end{equation} \end{subequations} \begin{subequations} \label{Anm4} \begin{eqnarray} &&A_{22}\left[ -2\left( A_{21}\right) ^{2}c_{13}+A_{21}A_{11}\left( 2c_{23}-c_{12}\right) +\left( A_{11}\right) ^{2}c_{22}\right] \nonumber \\ &&+A_{12}\left[ -\left( A_{21}\right) ^{2}c_{12}+A_{21}A_{11}\left( c_{22}-2c_{11}\right) +2\left( A_{11}\right) ^{2}c_{21}\right] =0~, \label{3Anm3} \end{eqnarray} \begin{eqnarray} &&\left( A_{21}\right) ^{2}\left[ -A_{21}c_{13}+A_{11}\left( c_{23}-c_{12}\right) \right] \nonumber \\ &&+\left( A_{11}\right) ^{2}\left[ A_{21}\left( c_{22}-c_{11}\right) +A_{11}c_{21}\right] =0~, \label{3Anm4} \end{eqnarray} \begin{equation} -\left( A_{21}\right) ^{2}c_{15}+A_{21}A_{11}\left( c_{25}-c_{14}\right) +\left( A_{11}\right) ^{2}c_{24}=0~. \label{SecondEqA1A2} \end{equation} It is thus seen that the $6$ parameters $a_{n\ell }$ ($n=1,2;$ $\ell =2,1,0$) are given \textit{explicitly} by the $6$ formulas (\ref{3a1L}) and (\ref{3a2L}) in terms of the $12$ coefficients $c_{nj}$ and the $4$ parameters $A_{nm}$. The remaining task is to extract the expressions of the $4$ parameters $A_{nm}$ in terms of the $6$ parameters $c_{nk}$ ($n=1,2;$ $k=1,2,3$), the only ones featured in the remaining $6$ \textit{algebraic} equations (\ref{3Anmc1L}) and (\ref{Anm4}). To this end, let us now introduce the $2$ \textit{auxiliary} parameters $z_{1}$ and $z_{2}$: \end{subequations} \begin{equation} z_{1}=A_{11}/A_{21}~,~~~z_{2}=A_{12}/A_{22}~; \label{3z1z2} \end{equation} it is then easily seen that the $2$ eqs.
(\ref{3Anm2}) and (\ref{3Anm4}) yield---recall \textbf{Remark 2-2}---the \textit{same} cubic equation for these $2$ quantities: \begin{subequations} \label{3EqzCubic} \begin{equation} c_{21}\left( z_{n}\right) ^{3}+\left( c_{22}-c_{11}\right) \left( z_{n}\right) ^{2}+\left( c_{23}-c_{12}\right) z_{n}-c_{13}=0~,~~~n=1,2~, \label{3Eqz} \end{equation} namely \begin{equation} c_{21}\left( z_{1}\right) ^{3}+\left( c_{22}-c_{11}\right) \left( z_{1}\right) ^{2}+\left( c_{23}-c_{12}\right) z_{1}-c_{13}=0~, \label{3Eqz1} \end{equation} \begin{equation} c_{21}\left( z_{2}\right) ^{3}+\left( c_{22}-c_{11}\right) \left( z_{2}\right) ^{2}+\left( c_{23}-c_{12}\right) z_{2}-c_{13}=0~. \label{3Eqz2} \end{equation} \textbf{Remark 3-1}. Note that, consistently with the transformations (\ref{1Trans}) and (\ref{23Trasns}), the corresponding transformations of the $2$ auxiliary parameters $z_{n}$ are $z_{1}\Leftrightarrow 1/z_{2}$ and (of course) $z_{2}\Leftrightarrow 1/z_{1}$, implying the invariance under all these transformations of the eqs. (\ref{3EqzCubic}). $\blacksquare $ The $2$ algebraic equations (\ref{3EqzCubic}) allow one to compute (explicitly, via the Cardano formulas) the $2$ quantities $z_{n}$ in terms of the $6$ coefficients $c_{nk}$; of course, they do \textit{not} imply that the $2$ quantities $z_{1}$ and $z_{2}$ coincide; indeed we \textit{exclude} hereafter this possibility because it would imply the vanishing of $D$ (see (\ref{AA}) and (\ref{3z1z2})). But more progress is possible. Indeed, let us take advantage of the definitions (\ref{3z1z2}) to rewrite the $2$ eqs.
(\ref{3Anm1}) and (\ref{3Anm3}), getting thereby (again recalling \textbf{Remark 2-2}) \end{subequations} \begin{subequations} \label{3Eqszc} \begin{eqnarray} &&\left[ -2c_{21}\left( z_{n+1}\right) ^{2}+\left( 2c_{11}-c_{22}\right) z_{n+1}+c_{12}\right] z_{n} \nonumber \\ &&-c_{22}\left( z_{n+1}\right) ^{2}+\left( c_{12}-2c_{23}\right) z_{n+1}+2c_{13}=0~,~~n=1,2~\mod \left[ 2\right] ~, \end{eqnarray} namely (also dividing by $2$) \begin{eqnarray} &&\left[ -c_{21}\left( z_{2}\right) ^{2}+\left( c_{11}-c_{22}/2\right) z_{2}+c_{12}/2\right] z_{1} \nonumber \\ &&-c_{22}\left( z_{2}\right) ^{2}/2+\left( c_{12}/2-c_{23}\right) z_{2}+c_{13}=0~, \label{3FirstConstraint} \end{eqnarray} \begin{eqnarray} &&\left[ -c_{21}\left( z_{1}\right) ^{2}+\left( c_{11}-c_{22}/2\right) z_{1}+c_{12}/2\right] z_{2} \nonumber \\ &&-c_{22}\left( z_{1}\right) ^{2}/2+\left( c_{12}/2-c_{23}\right) z_{1}+c_{13}=0~. \label{3SecondConstraint} \end{eqnarray} Let us now sum the $2$ eqs. (\ref{3Eqz1}) and (\ref{3SecondConstraint}), and likewise the $2$ eqs. (\ref{3Eqz2}) and (\ref{3FirstConstraint}). We thus obtain (using the fact that $z_{1}-z_{2}\neq 0$; see above) the same \textit{quadratic} equation for the $2$ quantities $z_{1}$ and $z_{2}$: \end{subequations} \begin{subequations} \begin{equation} 2c_{21}\left( z_{n}\right) ^{2}-\left( 2c_{11}-c_{22}\right) z_{n}-c_{12}=0~,~~~n=1,2~, \label{3Eqzn} \end{equation} implying \begin{equation} z_{n}=\left[ 2c_{11}-c_{22}+\left( -1\right) ^{n}\sqrt{\left( 2c_{11}-c_{22}\right) ^{2}+8c_{12}c_{21}}\right] /\left( 4c_{21}\right) ~,~~~n=1,2~. \label{3zn} \end{equation} These formulas (\ref{3zn}) feature of course only \textit{square-roots}---rather than the \textit{cubic-roots} that would be featured by the Cardano solutions of the \textit{cubic} equations (\ref{3EqzCubic})---and moreover they yield the \textit{explicit} expressions (\ref{3zn}) of the $2$ auxiliary parameters $z_{n}$ in terms of (only!) the $4$ parameters $c_{nm}$ ($n=1,2$; $m=1,2$).
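These algebraic relations can be verified numerically on the illustrative (hypothetical) data used earlier: for coefficients $c_{nj}$ generated from $A=\left[ \left[ 1,2\right] ,\left[ 3,5\right] \right] $ and the $a$-triples $\left( 1,-3,2\right) $, $\left( -1,0,1\right) $, the quantities $z_{1}=A_{11}/A_{21}=1/3$ and $z_{2}=A_{12}/A_{22}=2/5$ should satisfy both the cubic (\ref{3Eqz}) and the quadratic (\ref{3Eqzn}), the roots (\ref{3zn}) should reproduce exactly these two values, and the formulas (\ref{3a1L}) should recover the original $a_{1\ell }$. A minimal Python sketch (editorial addition):

```python
import math

# Coefficients c_{nj} generated from the illustrative (hypothetical) data
# A = [[1,2],[3,5]], (a12,a11,a10) = (1,-3,2), (a22,a21,a20) = (-1,0,1),
# via the formulas (c1j) and (c2j).
c11, c12, c13, c14, c15, c16 = 7.0, -8.0, 2.0, 15.0, -6.0, 4.0
c21, c22, c23, c24, c25, c26 = 30.0, -30.0, 7.0, 45.0, -18.0, 11.0

def cubic(z):       # left-hand side of eq. (3Eqz)
    return c21*z**3 + (c22 - c11)*z**2 + (c23 - c12)*z - c13

def quadratic(z):   # left-hand side of eq. (3Eqzn)
    return 2*c21*z**2 - (2*c11 - c22)*z - c12

# Explicit square-root roots, eq. (3zn).
disc = math.sqrt((2*c11 - c22)**2 + 8*c12*c21)
z = [((2*c11 - c22) + (-1)**n * disc) / (4*c21) for n in (1, 2)]

# Recovery of a_{1l} from the formulas (3a1L).
A11, A12, A21, A22 = 1.0, 2.0, 3.0, 5.0
D = A11*A22 - A12*A21
a12 = (A11**2*(-A12*c21 + A22*c11) + A11*A21*(-A12*c22 + A22*c12)
       + A21**2*(-A12*c23 + A22*c13)) / D
a11 = (A11*(-A12*c24 + A22*c14) + A21*(-A12*c25 + A22*c15)) / D
a10 = (-A12*c26 + A22*c16) / D
```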
Hence by inserting these expressions of $z_{1}$ and $z_{2}$ in any $2$ of the $4$ eqs. (\ref{3EqzCubic}) and (\ref{3Eqszc}), we get a system of $2$ \textit{algebraic} equations satisfied by the $6$ parameters $c_{nk}$ ($n=1,2$; $k=1,2,3$) which features the $2$ parameters $c_{13}$ and $c_{23}$ (only!) \textit{linearly} and therefore allows one to express both these coefficients \textit{explicitly} in terms of the other $4$ coefficients $c_{nm}$ ($n=1,2;m=1,2$). For instance the $2$ eqs. (\ref{3FirstConstraint}) and (\ref{3SecondConstraint}) yield (by subtracting the second multiplied by $z_{1}$ from the first multiplied by $z_{2},$ and by subtracting the second from the first) \end{subequations} \begin{subequations} \begin{equation} c_{13}=-c_{11}z_{1}z_{2}-c_{12}\left( z_{1}+z_{2}\right) /2~, \end{equation} \begin{equation} c_{23}=-c_{21}z_{1}z_{2}-c_{22}\left( z_{1}+z_{2}\right) /2~, \end{equation} hence, via (\ref{3zn}), \end{subequations} \begin{subequations} \label{3ConstraintsFinal} \begin{equation} 4c_{13}c_{21}-c_{12}c_{22}=0~, \end{equation} \begin{equation} 2\left( -c_{12}+2c_{23}\right) c_{21}+\left( 2c_{11}-c_{22}\right) c_{22}=0~. \end{equation} \textbf{Remark 3-2}. Since throughout our treatment we have assumed that $z_{1}$ is different from $z_{2},$ clearly the formula (\ref{3zn}) implies that the $4$ parameters $c_{11},$ $c_{12},$ $c_{21},$ $c_{22}$ must satisfy the \textit{inequality} \end{subequations} \begin{equation} \left( 2c_{11}-c_{22}\right) ^{2}+8c_{12}c_{21}\neq 0~. \label{3Ineq1} \end{equation} $\blacksquare $ Two additional relations can be obtained by inserting in the $2$ eqs.
(\ref{FirstEqA1A2}) and (\ref{SecondEqA1A2}) the expressions \begin{equation} A_{11}=z_{1}A_{21}~,~~~A_{12}=z_{2}A_{22}~, \end{equation} implied by (\ref{3z1z2}), obtaining thereby (again) $2$ identical \textit{second-degree} equations for the $2$ parameters $z_{1}$ and $z_{2}$: \begin{subequations} \label{3cc1245} \begin{equation} c_{24}\left( z_{n}\right) ^{2}+\left( c_{25}-c_{14}\right) z_{n}-c_{15}=0~,~~~n=1,2~, \label{3SecondEqzn} \end{equation} implying of course \begin{equation} z_{n}=\left[ c_{14}-c_{25}+\left( -1\right) ^{n}\sqrt{\left( c_{14}-c_{25}\right) ^{2}+4c_{15}c_{24}}\right] /\left( 2c_{24}\right) ~,~~~n=1,2~. \label{3znSecond} \end{equation} \textbf{Remark 3-3}. Note that these equations (\ref{3cc1245}) only involve the $4$ parameters $c_{14},$ $c_{24},$ $c_{15},$ $c_{25};$ and that the condition $z_{1}\neq z_{2}$ entails the \textit{inequality} \end{subequations} \begin{equation} \left( c_{25}-c_{14}\right) ^{2}+4c_{15}c_{24}\neq 0~. \label{3Ineq2} \end{equation} $\blacksquare $ Since from the $4$ eqs. (\ref{3Anm1}), (\ref{3Anm2}), (\ref{3Anm3}), (\ref{3Anm4}) we extracted the $2$ \textit{constraints} (\ref{3ConstraintsFinal}), clearly from these $4$ equations we can only obtain $2$ additional relations constraining the parameters $c_{nj}$. A convenient way to get such relations is to subtract the eq. (\ref{3SecondEqzn}) multiplied by $2c_{21}$ from the eq. (\ref{3Eqzn}) itself multiplied by $c_{24},$ getting thereby the following $2$ identical \textit{first-degree} equations for the parameters $z_{1}$ and $z_{2}$ \begin{eqnarray} &&-\left[ c_{24}\left( 2c_{11}-c_{22}\right) +2c_{21}\left( c_{25}-c_{14}\right) \right] z_{n} \nonumber \\ &=&c_{12}c_{24}-2c_{15}c_{21}~,~~~n=1,2~.
\label{3FirstDegreeEq} \end{eqnarray} But these $2$ first-degree equations seem to imply that $z_{1}=z_{2},$ while we know that this is \textit{not} the case (at least, provided the two inequalities (\ref{3Ineq1}) and (\ref{3Ineq2}) hold true; which is generally the case for any \textit{generic} assignment of the $12$ parameters $c_{nj}$). Hence these \textit{first-degree} eqs. (\ref{3FirstDegreeEq}) satisfied by $z_{1}$ and $z_{2}$ must have the property to be \textit{identically} satisfied for any arbitrary value of $z_{1}$ and $z_{2},$ which is of course the case provided the $8$ parameters $c_{11},$ $c_{12},$ $c_{14},$ $c_{15},$ $c_{21},$ $c_{22},$ $c_{24},$ $c_{25}$ satisfy the following $2$ \textit{constraints}: \begin{subequations} \label{3ThirdFourthCon} \begin{equation} c_{24}\left( 2c_{11}-c_{22}\right) +2c_{21}\left( c_{25}-c_{14}\right) =0~, \end{equation} \begin{equation} c_{12}c_{24}-2c_{15}c_{21}=0 \end{equation} (to obtain the first of these $2$ equations we assumed $c_{24}\neq 0$, consistently with our assumption that the $12$ parameters $c_{nj}$ have \textit{generic} values). In conclusion, we have obtained $4$ \textit{constraints} on the $10$ parameters $c_{np}$ ($n=1,2$; $p=1,2,3,4,5$): see the $2$ eqs. (\ref{3ConstraintsFinal}) and the $2$ eqs. (\ref{3ThirdFourthCon}); note that the $2$ parameters $c_{n6}$ are \textit{not} involved at all in these \textit{constraints}. There remains to compute the $4$ parameters $A_{nm}$. To compute the $4$ parameters $A_{nm}$, rather than using the $6$ eqs.
(\ref{3Anmc1L}) and (\ref{Anm4})---out of which we already extracted the $4$ \textit{constraints} (\ref{3ConstraintsFinal}) and (\ref{3ThirdFourthCon}); so that we can expect to be able to compute only $2$ of the $4$ parameters $A_{nm}$ in terms of the other $2$ (and of course the coefficients $c_{nj}$)---the simplest way is to use the relations implied by the definitions (\ref{3z1z2}): \end{subequations} \begin{equation} A_{11}=z_{1}A_{21}~,~~~A_{12}=z_{2}A_{22}~; \label{3AA} \end{equation} here of course $z_{1}$ and $z_{2}$ are defined by their expressions (\ref{3zn}) or, equivalently, (\ref{3znSecond}), and the $2$ parameters $A_{21}$ and $A_{22}$ can be considered as \textit{free} parameters; so that these relations can be rewritten as follows \begin{equation} A_{21}=\lambda _{1}~,~~~A_{22}=\lambda _{2}~,~~~A_{11}=z_{1}\lambda _{1}~,~~~A_{12}=z_{2}\lambda _{2}~, \label{3Alanda} \end{equation} with $\lambda _{1}$ and $\lambda _{2}$ two \textit{arbitrary} (nonvanishing) parameters. This concludes both the identification of the \textit{subclass} of the dynamical system (\ref{1}) which is treated in this paper and the solution of its \textit{initial-values} problem; except for the further step of inserting in all the relevant formulas---in addition to the expression (\ref{3Alanda})---the following rather simple expressions, say, of $c_{13}$ and $c_{23}$, \begin{subequations} \begin{equation} c_{13}=c_{12}c_{22}/\left( 4c_{21}\right) ~, \end{equation} \begin{equation} c_{23}=\left[ 2c_{12}c_{21}-c_{22}\left( 2c_{11}-c_{22}\right) \right] /\left( 4c_{21}\right) ~, \end{equation} implied by (\ref{3ConstraintsFinal}), and likewise, say, of $c_{24}$ and $c_{25}$, \end{subequations} \begin{subequations} \begin{equation} c_{24}=2c_{15}c_{21}/c_{12}~, \end{equation} \begin{equation} c_{25}=c_{14}-c_{24}\left( 2c_{11}-c_{22}\right) /\left( 2c_{21}\right) ~, \end{equation} implied by (\ref{3ThirdFourthCon}) (we ignore the \textit{nongeneric} cases with $c_{21}=c_{24}=0$).
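These four elimination formulas can be checked on the same illustrative (hypothetical) data used earlier: for the coefficients generated from $A=\left[ \left[ 1,2\right] ,\left[ 3,5\right] \right] $ and the $a$-triples $\left( 1,-3,2\right) $, $\left( -1,0,1\right) $, they must return the values $c_{13}=2$, $c_{23}=7$, $c_{24}=45$, $c_{25}=-18$ of the direct construction. A minimal Python sketch (editorial addition):

```python
# The same illustrative coefficients as before (generated from the formulas
# (c1j)/(c2j) with A = [[1,2],[3,5]] and a-triples (1,-3,2), (-1,0,1)):
c11, c12, c14, c15 = 7.0, -8.0, 15.0, -6.0
c21, c22 = 30.0, -30.0

# Expressions for c13, c23 implied by the constraints (3ConstraintsFinal),
# and for c24, c25 implied by the constraints (3ThirdFourthCon).
c13 = c12 * c22 / (4 * c21)
c23 = (2 * c12 * c21 - c22 * (2 * c11 - c22)) / (4 * c21)
c24 = 2 * c15 * c21 / c12
c25 = c14 - c24 * (2 * c11 - c22) / (2 * c21)
```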
An essential compendium of all the relevant formulas is displayed in the following \textbf{Section 4}, for the convenience of the reader who is more interested in using these findings than in following their derivation. \textbf{Remark 3-4}. A final observation. The reader who has followed our derivation up to this point might justifiably be puzzled by the fact that our solution seems to feature the $2$ \textit{free} parameters $\lambda _{1}$ and $\lambda _{2}$. But in fact these $2$ \textit{free} parameters are \textit{not} present at all in the solution $x_{n}\left( t\right) $ ($n=1,2$) of the dynamical system (\ref{1}) obtained above. This is proven in \textbf{Appendix C}; the developments reported there are also useful to get the final formulas reported in the following \textbf{Section 4} (which indeed do \textit{not} feature the $2$ parameters $\lambda _{n}$). $\blacksquare $ \section{A summary of the solution of the system (\protect\ref{1})} In this \textbf{Section 4} we summarize the main results obtained in this paper so far. For the convenience of the reader who is only interested in these results and not in their derivation, we report these findings in a self-consistent fashion, even at the cost of the repetition of some key formulas already displayed above and in the \textbf{Appendices}. Let us recall that our focus is on the system of $2$ nonlinearly-coupled first-order ODEs (\ref{1}), i.e. \end{subequations} \begin{equation} \dot{x}_{n}=c_{n1}\left( x_{1}\right) ^{2}+c_{n2}x_{1}x_{2}+c_{n3}\left( x_{2}\right) ^{2}+c_{n4}x_{1}+c_{n5}x_{2}+c_{n6}~,~~~n=1,2~.
\label{41} \end{equation} Our main finding is the solution in \textit{explicit} form of the \textit{initial-values} problem for this system, which is however achieved only provided its $12$ \textit{a priori arbitrary} parameters $c_{nj}$ ($n=1,2;$ $j=1,2,3,4,5,6$) do satisfy---as it were, \textit{a posteriori}---the following $4$ \textit{algebraic constraints}: \begin{subequations} \label{4ConstraintsFinal} \begin{equation} 4c_{13}c_{21}-c_{12}c_{22}=0~, \end{equation} \begin{equation} 2\left( -c_{12}+2c_{23}\right) c_{21}+\left( 2c_{11}-c_{22}\right) c_{22}=0~, \end{equation} \begin{equation} c_{24}\left( 2c_{11}-c_{22}\right) +2c_{21}\left( c_{25}-c_{14}\right) =0~, \end{equation} \begin{equation} c_{12}c_{24}-2c_{15}c_{21}=0~. \end{equation} Note that the first $2$ of these $4$ \textit{constraints} only involve the $6$ parameters $c_{nk}$ ($n=1,2;$ $k=1,2,3$), and the last $2$ only involve the $8$ parameters $c_{11},$ $c_{12},$ $c_{14},$ $c_{15},$ $c_{21},$ $c_{22},$ $c_{24},$ $c_{25};$ while the $2$ parameters $c_{n6}$ are \textit{unconstrained} and only influence (see below) the $2$ parameters $\alpha _{n0}$.
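These $4$ constraints can be spot-checked numerically: coefficients $c_{nj}$ generated from \textit{any} choice of $A_{nm}$ and $a_{n\ell }$ via the formulas (\ref{c1j}) and (\ref{c2j}) must satisfy them identically. A minimal Python sketch (editorial addition; the two parameter sets tried in the accompanying check are merely illustrative):

```python
def c_from_Aa(A, a):
    """Coefficients c_{nj} built from A_{nm}, a_{nl} via eqs. (c1j)/(c2j)."""
    (A11, A12), (A21, A22) = A
    (a12, a11, a10), (a22, a21, a20) = a
    D = A11*A22 - A12*A21
    c1 = [(a12*A11*A22**2 + a22*A12*A21**2)/D**2,
          -2*A11*A12*(a12*A22 + a22*A21)/D**2,
          A11*A12*(a12*A12 + a22*A11)/D**2,
          (a11*A11*A22 - a21*A12*A21)/D,
          -(a11 - a21)*A11*A12/D,
          a10*A11 + a20*A12]
    c2 = [A22*A21*(a22*A21 + a12*A22)/D**2,
          -2*A22*A21*(a22*A11 + a12*A12)/D**2,
          (a22*A22*A11**2 + a12*A21*A12**2)/D**2,
          -(a21 - a11)*A22*A21/D,
          (a21*A11*A22 - a11*A12*A21)/D,
          a20*A22 + a10*A21]
    return c1, c2

def constraint_residuals(c1, c2):
    """Left-hand sides of the 4 constraints (4ConstraintsFinal)."""
    c11, c12, c13, c14, c15, _ = c1
    c21, c22, c23, c24, c25, _ = c2
    return (4*c13*c21 - c12*c22,
            2*(-c12 + 2*c23)*c21 + (2*c11 - c22)*c22,
            c24*(2*c11 - c22) + 2*c21*(c25 - c14),
            c12*c24 - 2*c15*c21)
```

All four residuals vanish (up to rounding) for generic parameter choices, consistently with the derivation of \textbf{Section 3}.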
The \textit{explicit} solution of the \textit{initial-values} problem for the system (\ref{41}) with (\ref{4ConstraintsFinal}) is then provided by the following formulas, for whose derivation the interested reader should go through the developments reported in the rest of this paper (for some guidance see below \textbf{Remark 4-1}): \end{subequations} \begin{subequations} \label{4xnt} \begin{equation} x_{1}\left( t\right) =z_{1}w_{1}\left( t\right) +z_{2}w_{2}\left( t\right) ~, \label{4xnta} \end{equation} \begin{equation} x_{2}\left( t\right) =w_{1}\left( t\right) +w_{2}\left( t\right) ~; \label{4xntb} \end{equation} \end{subequations} \begin{equation} z_{n}=\left[ 2c_{11}-c_{22}+\left( -1\right) ^{n}\sqrt{\left( 2c_{11}-c_{22}\right) ^{2}+8c_{12}c_{21}}\right] /\left( 4c_{21}\right) ~,~~~n=1,2~; \label{4zn} \end{equation} \begin{eqnarray} &&w_{n}\left( t\right) \equiv w_{n}\left( C,t\right) ~, \nonumber \\ &=&\frac{w_{n}\left( 0\right) \left[ w_{n+}-w_{n-}\exp \left( \beta _{n}t\right) \right] -w_{n+}w_{n-}\left[ 1-\exp \left( \beta _{n}t\right) \right] }{w_{n+}\exp \left( \beta _{n}t\right) -w_{n-}+w_{n}\left( 0\right) \left[ 1-\exp \left( \beta _{n}t\right) \right] }~, \nonumber \\ n &=&1,2~; \label{4wnt} \end{eqnarray} \begin{subequations} \label{4w1w20} \begin{equation} w_{1}\left( 0\right) \equiv w_{1}\left( C,0\right) =\left[ x_{1}\left( 0\right) -z_{2}x_{2}\left( 0\right) \right] /\left( z_{1}-z_{2}\right) ~, \label{4w10} \end{equation} \begin{equation} w_{2}\left( 0\right) \equiv w_{2}\left( C,0\right) =-\left[ x_{1}\left( 0\right) -z_{1}x_{2}\left( 0\right) \right] /\left( z_{1}-z_{2}\right) ~; \label{4w20} \end{equation} \end{subequations} \begin{equation} w_{n\pm }=\left( -\alpha _{n1}\pm \beta _{n}\right) /\left( 2\alpha _{n2}\right) ~,~~~\beta _{n}=\sqrt{\left( \alpha _{n1}\right) ^{2}-4\alpha _{n0}\alpha _{n2}}~,~~n=1,2~; \label{4wnpm} \end{equation} \begin{subequations} \label{4alphanl} \begin{equation} \alpha _{12}=\left[ \left( z_{1}\right) ^{2}\left(
c_{11}-z_{2}c_{21}\right) +z_{1}\left( c_{12}-z_{2}c_{22}\right) +c_{13}-z_{2}c_{23}\right] /\left( z_{1}-z_{2}\right) ~, \end{equation} \begin{equation} \alpha _{11}=\left[ z_{1}\left( c_{14}-z_{2}c_{24}\right) +c_{15}-z_{2}c_{25}\right] /\left( z_{1}-z_{2}\right) ~, \end{equation} \begin{equation} \alpha _{10}=\left( c_{16}-z_{2}c_{26}\right) /\left( z_{1}-z_{2}\right) ~; \end{equation} \begin{eqnarray} \alpha _{22} &=&\left[ \left( -c_{13}+z_{1}c_{23}\right) +z_{2}\left( -c_{12}+z_{1}c_{22}\right) \right. \nonumber \\ &&\left. +\left( z_{2}\right) ^{2}\left( -c_{11}+z_{1}c_{21}\right) \right] /\left( z_{1}-z_{2}\right) ~, \end{eqnarray} \begin{equation} \alpha _{21}=\left[ -c_{15}+z_{1}c_{25}+z_{2}\left( -c_{14}+z_{1}c_{24}\right) \right] /\left( z_{1}-z_{2}\right) ~, \end{equation} \begin{equation} \alpha _{20}=\left( -c_{16}+z_{1}c_{26}\right) /\left( z_{1}-z_{2}\right) ~. \end{equation} \textbf{Remark 4-1}. The $4$ \textit{constraints} (\ref{4ConstraintsFinal}) coincide with the formulas (\ref{3ConstraintsFinal}) and (\ref{3ThirdFourthCon}); the formulas (\ref{4xnt}) come from the eqs. (\ref{xnyn}), (\ref{3Alanda}), and (\ref{Cynt}); the formulas (\ref{4zn}) coincide with the eqs. (\ref{3zn}); the formulas (\ref{4wnt}) come from the eqs. (\ref{Cynt}), (\ref{Cynpm}) and (\ref{2ynt}); the formulas (\ref{4w1w20}) come from the eqs. (\ref{Cynt}), (\ref{y12x12}) and (\ref{3Alanda}); the formulas (\ref{4wnpm}) come from the eqs. (\ref{Cynt}), (\ref{AalphaA}), (\ref{ynplusminusbetan}) and (\ref{3Alanda}); the $6$ formulas (\ref{4alphanl}) come from the eqs. (\ref{3a1L}), (\ref{3a2L}), (\ref{Calpha}), and (\ref{3Alanda}). $\blacksquare $ \textbf{Remark 4-2}.
The special subcases of this system which feature the remarkable property to be \textit{isochronous} are clearly those characterized by the requirement that the $2$ parameters $\beta _{n}$---see (\ref{4wnpm}) with (\ref{4alphanl}) and (\ref{4zn})---be both \textit{rational} multiples of the \textit{same imaginary} number: \end{subequations} \begin{equation} \beta _{n}=\mathbf{i}\rho _{n}\omega ~,~~~n=1,2~, \end{equation} where $\mathbf{i}\omega $ is an \textit{arbitrary imaginary} number and $\rho _{n}$ are $2$ \textit{real} (positive or negative) \textit{rational} numbers (this is rather obvious, but in case of doubt see, for instance, \cite{C2008}). If, on the other hand, one of the $2$ parameters $\beta _{n}$ is an \textit{arbitrary purely imaginary} number and the other is \textit{not} a \textit{purely imaginary number}, then the system (\ref{41}) is \emph{asymptotically isochronous} (see \cite{CGU2008}). $\blacksquare$ \section{Two special cases of the system (\protect\ref{1})} The treatment reported up to this point has assumed that the $12$ coefficients $c_{nj}$ in (\ref{1}) take \textit{generic} values (except, of course, for satisfying the $4$ \textit{constraints} (\ref{4ConstraintsFinal})). However, in several contexts this is \textit{not} the case; for instance the subclass of systems (\ref{1}) characterized by the restrictions $c_{13}=c_{15}=c_{21}=c_{24}=0$ is relevant in many \textit{applicable} contexts; and the system (\ref{1}) with \textit{homogeneous} second-degree polynomial right-hand sides---i.e., with $c_{nj}=0$ for $n=1,2$ and $j=4,5,6$---also deserves a special treatment, in order to compare the findings presented in the present paper with those reported in the recent paper \cite{CCL2020}. These $2$ special cases are treated in the following $2$ subsections of this \textbf{Section 5}.
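As a quick numerical illustration of this isochrony criterion (our own addition; the values of $\omega$ and $\rho _{n}$ below are arbitrary choices), one can check that when $\beta _{n}=\mathbf{i}\rho _{n}\omega$ with \textit{rational} $\rho _{n}$, the exponentials $\exp(\beta _{n}t)$ which drive the solution (\ref{4wnt}) share a common period, so that every solution is periodic:

```python
import cmath

# Illustrative sample values: omega an arbitrary real, rho_n rationals with common denominator 2.
omega = 1.7
rho = [0.5, 1.5]                      # rho_n = p_n / q with q = 2
beta = [1j * r * omega for r in rho]  # beta_n = i * rho_n * omega

# Common period: T = q * (2*pi/omega) makes every rho_n*omega*T an integer multiple of 2*pi.
T = 2 * (2 * cmath.pi / omega)

# exp(beta_n * (t + T)) == exp(beta_n * t) for every t, hence isochronous solutions.
t = 0.37
shift_error = max(abs(cmath.exp(b * (t + T)) - cmath.exp(b * t)) for b in beta)
```

Here the common period $T=q\,(2\pi /\omega )$, with $q$ the common denominator of the $\rho _{n}$, makes each $\rho _{n}\omega T$ an integer multiple of $2\pi$.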
\subsection{The subcase of (\protect\ref{1}) with $c_{13}=c_{15}=c_{21}=c_{24}=0$} In many \textit{applicative} contexts it is unreasonable to assume that the time-evolution of the dependent variable $x_{n}\left( t\right) $ is influenced by agents (represented by terms in the right-hand sides of the ODEs (\ref{1})) which depend \textit{only} on the other variable $x_{n+1}\left( t\right) $ (with $n=1,2~\mod[2]$). Hence the subcase of the system (\ref{1}) characterized by the restrictions \begin{subequations} \label{cfgh} \begin{equation} c_{13}=c_{15}=c_{21}=c_{24}=0~ \label{vanishingCs} \end{equation} deserves special attention, featuring indeed in many \textit{applicative} contexts. For this reason in this \textbf{Subsection 5.1} we focus on the special case of (\ref{1}) characterized by these limitations (\ref{vanishingCs}), introducing moreover---for notational simplicity---the following new notation for the $8$ remaining coefficients $c_{nj}$ in (\ref{1}): \begin{eqnarray} c_{11} &=&f_{11}~,~~~c_{12}=f_{12}~,~~~c_{14}=g_{1}~,~~~c_{16}=h_{1}~, \nonumber \\ c_{22} &=&f_{21}~,~~~c_{23}=f_{22}~,~~~c_{25}=g_{2}~,~~~c_{26}=h_{2}~; \label{4cf} \end{eqnarray} so that the system (\ref{1}) reads hereafter (in this \textbf{Subsection 5.1}) as follows: \end{subequations} \begin{subequations} \label{4Syst1} \begin{equation} \dot{x}_{n}=x_{n}\left( f_{n1}x_{1}+f_{n2}x_{2}+g_{n}\right) +h_{n}~,~~~n=1,2 \label{4xndot} \end{equation} namely \begin{equation} \dot{x}_{1}=x_{1}\left( f_{11}x_{1}+f_{12}x_{2}+g_{1}\right) +h_{1}~, \label{4x1dot} \end{equation} \begin{equation} \dot{x}_{2}=x_{2}\left( f_{21}x_{1}+f_{22}x_{2}+g_{2}\right) +h_{2}~. \label{4x2dot} \end{equation} \textbf{Remark 5.1-1}.
Clearly this system of $2$ coupled nonlinear ODEs is invariant under the following transformation: \end{subequations} \begin{equation} x_{1}\left( t\right) \Leftrightarrow x_{2}\left( t\right) ~,~~f_{11}\Leftrightarrow f_{22}~,~~f_{12}\Leftrightarrow f_{21}~,~~g_{1}\Leftrightarrow g_{2}~,~~h_{1}\Leftrightarrow h_{2}~; \label{4Transx1x2} \end{equation} which clearly replaces the analogous transformation reported in \textbf{Remark 1-2}. $\blacksquare$ The interested reader will easily verify that a direct adaptation to the present case of the final findings reported above (see \textbf{Section 4}) is \textit{a priori unjustified}, because the conditions (\ref{cfgh}) render illegitimate some of the steps performed in that section---where the treatment was indeed based on the assumption that the coefficients $c_{nj}$ in (\ref{1}) have \textit{generic} values (except for satisfying the constraints (\ref{4ConstraintsFinal})). So below (in this \textbf{Subsection 5.1}) we review the above treatment, adapting it to the new situation. On the other hand, for the convenience of the reader who is only interested in the solution of the dynamical system (\ref{4Syst1}) and not in the details of how that solution has been obtained, we report in the following \textbf{Subsubsection 5.1.1}---in analogy to what we did in \textbf{Section 4} for the dynamical system (\ref{1})---the explicit solution of the system (\ref{4Syst1}); even at the cost of some repetitions.
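The invariance stated in \textbf{Remark 5.1-1} can be confirmed by an elementary pointwise computation; the following sketch (ours, with arbitrarily chosen numerical values) checks that the right-hand sides of (\ref{4xndot}), evaluated at the swapped point with the swapped parameters, reproduce the swapped right-hand sides:

```python
# Pointwise check (ours; all numerical values arbitrary) of the invariance (4Transx1x2):
# if (x1, x2) solves (4Syst1) with parameters (f, g, h), then (x2, x1) solves it
# with the swapped parameters.
f11, f12, f21, f22 = 0.3, -0.2, 0.7, 0.1
g1, g2, h1, h2 = 0.4, -0.5, 0.2, 0.6

def rhs(x1, x2, f11, f12, f21, f22, g1, g2, h1, h2):
    """Right-hand sides of the two ODEs (4xndot)."""
    return (x1 * (f11 * x1 + f12 * x2 + g1) + h1,
            x2 * (f21 * x1 + f22 * x2 + g2) + h2)

# Evaluate at an arbitrary point, then at the swapped point with swapped parameters.
x1, x2 = 1.3, -0.8
d1, d2 = rhs(x1, x2, f11, f12, f21, f22, g1, g2, h1, h2)
s_a, s_b = rhs(x2, x1, f22, f21, f12, f11, g2, g1, h2, h1)

# s_a must equal d2 (it is dot{x}_2 seen as the first swapped variable), s_b must equal d1.
sym_error = max(abs(d2 - s_a), abs(d1 - s_b))
```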
The $12$ equations (\ref{c1j}) and (\ref{c2j}) read now (via (\ref{vanishingCs}) and (\ref{4cf})) as follows: \begin{subequations} \label{4fgh} \begin{equation} f_{11}=\left[ a_{12}A_{11}\left( A_{22}\right) ^{2}+a_{22}A_{12}\left( A_{21}\right) ^{2}\right] /D^{2}~, \label{4fgha} \end{equation} \begin{equation} f_{12}=-2A_{11}A_{12}\left[ a_{12}A_{22}+a_{22}A_{21}\right] /D^{2}~, \label{4fghb} \end{equation} \begin{equation} A_{11}A_{12}\left( a_{12}A_{12}+a_{22}A_{11}\right) =0~, \label{4fgh3} \end{equation} \begin{equation} g_{1}=\left( a_{11}A_{11}A_{22}-a_{21}A_{12}A_{21}\right) /D~, \label{4fghd} \end{equation} \begin{equation} a_{11}=a_{21}~, \label{4fghe} \end{equation} \begin{equation} h_{1}=a_{10}A_{11}+a_{20}A_{12}~; \label{4fghf} \end{equation} \begin{equation} f_{22}=\left[ a_{22}A_{22}\left( A_{11}\right) ^{2}+a_{12}A_{21}\left( A_{12}\right) ^{2}\right] /D^{2}~, \label{4fghg} \end{equation} \begin{equation} f_{21}=-2A_{22}A_{21}\left( a_{22}A_{11}+a_{12}A_{12}\right) /D^{2}~, \label{h4fgh} \end{equation} \begin{equation} A_{22}A_{21}\left( a_{22}A_{21}+a_{12}A_{22}\right) =0~, \label{4fghi} \end{equation} \begin{equation} g_{2}=\left( a_{21}A_{11}A_{22}-a_{11}A_{12}A_{21}\right) /D~, \label{4fghj} \end{equation} \begin{equation} h_{2}=a_{10}A_{21}+a_{20}A_{22}~, \label{4fghk} \end{equation} of course with $D$ defined as above, see (\ref{AA}). \textbf{Remark 5.1-2}. In this \textbf{Subsection 5.1}, as mentioned above, we assume that the parameter $D$ does \textit{not} vanish, but we do not exclude the possibility that one of the $4$ parameters $A_{nm}$ vanishes (in contrast with \textbf{Remark 2-2}). $\blacksquare$ It is now again convenient to introduce the $2$ auxiliary parameters $z_{1}=A_{11}/A_{21}$ and $z_{2}=A_{12}/A_{22}$ (see (\ref{3z1z2})). Then the $2$ eqs.
(\ref{3Anm2}) and (\ref{3Anm1}) yield (via (\ref{cfgh})) \end{subequations} \begin{subequations} \begin{equation} \left[ \left( f_{21}-f_{11}\right) z_{n}+f_{22}-f_{12}\right] z_{n}=0~,~~~n=1,2~, \end{equation} implying \end{subequations} \begin{subequations} \label{4z1z2} \begin{equation} z_{1}=0\,,~~~z_{2}=\frac{f_{22}-f_{12}}{f_{11}-f_{21}}~, \label{4z1z2a} \end{equation} or \begin{equation} z_{2}=0\,,~~~z_{1}=\frac{f_{22}-f_{12}}{f_{11}-f_{21}}~, \label{4z1z2b} \end{equation} since (see \textbf{Remark 5.1-2}) we exclude the solution $z_{1}=z_{2}$ which implies $D=0$ (see (\ref{AA}) and (\ref{3z1z2})). Note that---via (\ref{3z1z2})---$z_{1}=0$ implies $A_{11}=0$ and likewise $z_{2}=0$ implies $A_{12}=0$, each of these $2$ equalities reducing eq. (\ref{4fgh3}) to the \textit{identity} $0=0$. Let us now see how the remaining $11$ eqs. (\ref{4fgh}) simplify in the $z_{1}=0$ case, when clearly \end{subequations} \begin{subequations} \label{4aAfgh} \begin{equation} A_{11}=0 \label{4A11D} \end{equation} implying $D=-A_{12}A_{21}$; for analogous results in the $z_{2}=0$ case see below \textbf{Remark 5.1.1-1}. As already mentioned above, this eq. (\ref{4A11D}) implies that eq. (\ref{4fgh3}) holds identically ($0=0$); hence only the following $10$ equations remain (note that they are reported below in a somewhat different order than in (\ref{4fgh})) \begin{equation} a_{22}A_{21}+a_{12}A_{22}=0~, \label{4aaAA} \end{equation} \begin{equation} a_{11}=a_{21}~; \end{equation} \begin{equation} f_{11}=a_{22}/A_{12}~, \label{4f11} \end{equation} \begin{equation} f_{12}=0~, \label{4f12EqZero} \end{equation} \begin{equation} f_{21}=-2a_{12}A_{22}/\left( A_{12}A_{21}\right) ~, \label{4f21} \end{equation} \begin{equation} f_{22}=a_{12}/A_{21}~; \end{equation} \begin{equation} g_{1}=a_{21}~, \end{equation} \begin{equation} g_{2}=a_{11}~; \end{equation} \begin{equation} h_{1}=a_{20}A_{12}~, \end{equation} \begin{equation} h_{2}=a_{10}A_{21}+a_{20}A_{22}~.
\end{equation} Clearly the first $3$ of these $11$ equations (\ref{4aAfgh}) provide $3$ \textit{constraints} on the $8$ parameters $A_{nm}$ and $a_{nm}$; while the last $8$ of these $11$ eqs. (\ref{4aAfgh}) express \textit{explicitly}---in terms of the $8$ parameters $A_{nm}$ and $a_{nm}$---the $8$ parameters $f_{nm}$, $g_{n}$, $h_{n}$ which characterize the system of ODEs (\ref{4Syst1}). The next task is to invert the last $8$ formulas (\ref{4aAfgh}), namely to express in terms of the $8$ parameters $f_{nm}$, $g_{n}$, $h_{n}$ featured by the system (\ref{4Syst1}), the $8$ parameters $A_{nm}$ and $a_{nm}$ which characterize the explicit solution of this system (\ref{4Syst1}) via the formulas of \textbf{Section 1} complemented by the restrictions and redefinitions (\ref{cfgh}); and as well to identify---most importantly---the \textit{constraints} implied by our treatment on the $8$ parameters $f_{nm}$, $g_{n}$, $h_{n}$. These findings can be obtained by appropriately specializing the formulas obtained in the preceding \textbf{Section 3}, i.e. by inserting in them the formulas (\ref{cfgh}) as well as the findings reported above in this \textbf{Subsection 5.1}. In this manner from the $3$ eqs. (\ref{3a1L}) we get (using the \textit{constraint} $f_{12}=0$ already obtained above, see (\ref{4f12EqZero})) \end{subequations} \begin{subequations} \label{4a1L} \begin{equation} a_{12}=A_{21}f_{22}~, \end{equation} \begin{equation} a_{11}=g_{2}~, \end{equation} \begin{equation} a_{10}=\left( A_{12}h_{2}-A_{22}h_{1}\right) /\left( A_{12}A_{21}\right) ~; \end{equation} and likewise from the $3$ eqs. (\ref{3a2L}) we get \end{subequations} \begin{subequations} \label{4a2L} \begin{equation} a_{22}=A_{12}f_{11}~, \end{equation} \begin{equation} a_{21}=g_{1}~, \end{equation} \begin{equation} a_{20}=h_{1}/A_{12}~. \end{equation} Next, let us look at the $6$ eqs.
(\ref{3Anmc1L}) and (\ref{Anm4}), using again the \textit{constraint} $f_{12}=0$ (see (\ref{4f12EqZero})) to simplify some of them. The $3$ eqs. (\ref{3Anmc1L}) read then as follows: \end{subequations} \begin{subequations} \begin{equation} A_{12}f_{21}+2A_{22}f_{22}=0~, \end{equation} \begin{equation} A_{12}\left( f_{11}-f_{21}\right) -A_{22}f_{22}=0~, \end{equation} \begin{equation} g_{1}-g_{2}=0~. \label{g1g2} \end{equation} It is now easily seen that the first $2$ of these $3$ equations imply (since we assume that $A_{12}$ and $A_{22}$ do \textit{not} vanish) the following \textit{second constraint} on the $2$ parameters $f_{11}$ and $f_{21}$: \end{subequations} \begin{equation} f_{21}=2f_{11}~; \label{4f21f11} \end{equation} and we moreover find from eq. (\ref{g1g2}) the following \textit{third constraint} \begin{equation} g_{1}=g_{2}~. \label{5g1g2} \end{equation} On the other hand the $3$ eqs. (\ref{Anm4}) are \textit{identically satisfied}---i.e., $0=0$---thanks to (\ref{4A11D}), to the conditions (\ref{cfgh}) and, again, to the constraint $f_{12}=0$, see (\ref{4f12EqZero}). So, let us summarize the findings in this $z_{1}=0$ case. There are the $2$ \textit{constraints} $f_{12}=0$ (see (\ref{4f12EqZero})) and $f_{21}=2f_{11}$ (see (\ref{4f21f11})) on the parameters $f_{nm}$ of the system (\ref{4Syst1}), and the third \textit{constraint} $g_{1}=g_{2}$; note that the first $2$ of these $3$ \textit{constraints} imply $z_{2}=-f_{22}/f_{11}$.
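The derivation of the second constraint can be double-checked numerically; in the following sketch (ours; the nonvanishing values of $A_{12}$, $A_{22}$, $f_{22}$ are arbitrary) $f_{21}$ and $f_{11}$ are obtained from the first two equations of (\ref{3Anmc1L}) and are seen to satisfy $f_{21}=2f_{11}$:

```python
# Numerical confirmation (ours; A_12, A_22, f_22 arbitrary and nonvanishing) that
# A_12 f_21 + 2 A_22 f_22 = 0 and A_12 (f_11 - f_21) - A_22 f_22 = 0 force f_21 = 2 f_11.
A12, A22, f22 = 1.7, -0.6, 0.9

f21 = -2.0 * A22 * f22 / A12     # solves the first equation for f_21
f11 = f21 + A22 * f22 / A12      # solves the second equation for f_11
constraint_error = abs(f21 - 2.0 * f11)

# Residuals of the two original equations (should both be ~machine precision):
eq1 = A12 * f21 + 2.0 * A22 * f22
eq2 = A12 * (f11 - f21) - A22 * f22
```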
Provided these $3$ \textit{constraints} are satisfied, the \textit{explicit} solution of the \textit{initial-values} problem for this system (\ref{4Syst1}) is provided by the treatment of \textbf{Section 2}, of course with the parameters $c_{nj}$ replaced by the parameters $f_{nm}$, $g_{n}$, $h_{n}$ as implied by the relations (\ref{cfgh}), and with the parameters $a_{n\ell }$ expressed by the formulas (\ref{4a1L}) and (\ref{4a2L}) in terms of the $8$ parameters $f_{nm}$, $g_{n}$, $h_{n}$, and also of the $4$ parameters $A_{nm}$. As for these latter parameters, they are themselves determined in terms of the parameters $f_{nm}$ as follows (the last of these formulas is of course implied by $A_{12}=z_{2}A_{22}$ with $z_{2}=-f_{22}/f_{11}$, see above): \begin{equation} A_{21}=\lambda _{1}~,~~~A_{22}=\lambda _{2}~,~~~A_{11}=0~,~~~A_{12}=-\left( f_{22}/f_{11}\right) \lambda _{2}~. \end{equation} Here $\lambda _{1}$ and $\lambda _{2}$ are again $2$ \textit{arbitrary} (nonvanishing) parameters, which can be \textit{freely} assigned because their values do not influence the solution of the problem (as explained at the end of \textbf{Section 3} and in \textbf{Appendix C} in the context of the more general case of the system (\ref{1}); and see also \textbf{Subsubsection 5.1.1}). The corresponding solution of the \textit{initial-values} problem for the system (\ref{4Syst1}) is reported in the following \textbf{Subsubsection 5.1.1}. Finally, let us conclude this \textbf{Subsection 5.1} by emphasizing that the $3$ \textit{constraints} (\ref{4f12EqZero}), (\ref{4f21f11}) and (\ref{5g1g2}) entail a significant limitation on the generality of the system (\ref{4Syst1}) treated in this section; for instance, they exclude models of the Lotka-Volterra type, which require (at least!) an arbitrary (\textit{nonvanishing}) assignment of the parameter $f_{12}$.
Nevertheless the fact that the \textit{initial-values} problem for the system (\ref{4Syst1}) can be explicitly solved provided only the $3$ restrictions indicated above hold seems a significant new finding. \subsubsection{Solution of the initial-values problem for the system (\protect\ref{4Syst1})} In this \textbf{Subsubsection 5.1.1} we report the solution of the \textit{initial-values problem} for the system characterized by the following $2$ equations of motion: \begin{subequations} \label{511System} \begin{equation} \dot{x}_{1}=x_{1}\left( f_{1}x_{1}+g\right) +h_{1}~, \end{equation} \begin{equation} \dot{x}_{2}=x_{2}\left( 2f_{1}x_{1}+f_{2}x_{2}+g\right) +h_{2}~, \end{equation} which correspond to the system (\ref{4Syst1}) treated in this \textbf{Subsection 5.1} by taking into account the following \textit{constraints} and \textit{simplified} notation: \end{subequations} \begin{subequations} \label{511ConSimp} \begin{equation} c_{12}=c_{13}=c_{15}=c_{21}=c_{24}=0~, \label{511Con} \end{equation} \begin{eqnarray} c_{11} &=&f_{11}=f_{1}~,~~c_{14}=c_{25}=g_{1}=g_{2}=g~,~~c_{16}=h_{1}~, \nonumber \\ c_{22} &=&f_{21}=2f_{1}~,~~c_{23}=f_{22}=f_{2}~,~~c_{26}=h_{2}~, \label{511Simp} \end{eqnarray} implied by our treatment, see above.
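The solvability of this system rests on the fact that the combinations $\xi _{1}=x_{2}+\left( f_{1}/f_{2}\right) x_{1}$ and $\xi _{2}=-\left( f_{1}/f_{2}\right) x_{1}$ evolve according to two \textit{decoupled} Riccati equations; the following pointwise check (our own sketch, with arbitrary numerical values) verifies this decoupling along the vector field of the two equations of motion (\ref{511System}):

```python
import random

# Pointwise check (ours; all numerical values below are arbitrary) that
# xi_1 = x2 + (f1/f2) x1 and xi_2 = -(f1/f2) x1 turn the system (511System)
# into two decoupled Riccati equations:
#   xi_1' = f2 xi_1^2 + g xi_1 + (f1/f2) h1 + h2,
#   xi_2' = -f2 xi_2^2 + g xi_2 - (f1/f2) h1.
random.seed(0)
f1, f2, g, h1, h2 = 1.0, 2.0, 0.5, -0.3, -0.2

max_error = 0.0
for _ in range(100):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    # Right-hand sides of (511System) at this point:
    dx1 = x1 * (f1 * x1 + g) + h1
    dx2 = x2 * (2 * f1 * x1 + f2 * x2 + g) + h2
    # Induced time-derivatives of xi_1, xi_2 by the chain rule:
    xi1, xi2 = x2 + (f1 / f2) * x1, -(f1 / f2) * x1
    dxi1, dxi2 = dx2 + (f1 / f2) * dx1, -(f1 / f2) * dx1
    # Compare with the decoupled Riccati right-hand sides:
    r1 = f2 * xi1 ** 2 + g * xi1 + (f1 / f2) * h1 + h2
    r2 = -f2 * xi2 ** 2 + g * xi2 - (f1 / f2) * h1
    max_error = max(max_error, abs(dxi1 - r1), abs(dxi2 - r2))
```

Since each Riccati equation can be solved in closed form (see \textbf{Appendix A}), this decoupling is what yields the explicit formulas reported next.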
The solution is then provided by the following formulas (obtained via (\ref{511ConSimp}) from the corresponding solution reported in \textbf{Section 4}): \end{subequations} \begin{equation} x_{1}\left( t\right) =-\left( f_{2}/f_{1}\right) \xi _{2}\left( t\right) ~,~~~x_{2}\left( t\right) =\xi _{1}\left( t\right) +\xi _{2}\left( t\right) ~; \end{equation} \begin{subequations} \begin{eqnarray} \xi _{n}\left( t\right) &=&\frac{\xi _{n}\left( 0\right) \left[ \xi _{n+}-\xi _{n-}\exp \left( \gamma _{n}t\right) \right] -\xi _{n+}\xi _{n-}\left[ 1-\exp \left( \gamma _{n}t\right) \right] }{\xi _{n+}\exp \left( \gamma _{n}t\right) -\xi _{n-}+\xi _{n}\left( 0\right) \left[ 1-\exp \left( \gamma _{n}t\right) \right] }~, \nonumber \\ n &=&1,2~; \label{511ksin} \end{eqnarray} \begin{equation} \xi _{1}\left( 0\right) =x_{2}\left( 0\right) +\left( f_{1}/f_{2}\right) x_{1}\left( 0\right) ~,~~~\xi _{2}\left( 0\right) =-\left( f_{1}/f_{2}\right) x_{1}\left( 0\right) ~, \end{equation} \begin{equation} \xi _{n\pm }=\left( -\eta _{n1}\pm \gamma _{n}\right) /\left( 2\eta _{n2}\right) ~,~~~\gamma _{n}=\sqrt{\left( \eta _{n1}\right) ^{2}-4\eta _{n0}\eta _{n2}}~,~~n=1,2~; \end{equation} \end{subequations} \begin{subequations} \begin{equation} \eta _{12}=f_{2}~,~~~\eta _{11}=g~,~~~\eta _{10}=\left( f_{1}/f_{2}\right) h_{1}+h_{2}~, \end{equation} \begin{equation} \eta _{22}=-f_{2}~,~~~\eta _{21}=g~,~~~\eta _{20}=-\left( f_{1}/f_{2}\right) h_{1}~; \end{equation} implying \end{subequations} \begin{equation} \gamma _{1}=\sqrt{g^{2}-4f_{2}\eta _{10}}~,~~~\gamma _{2}=\sqrt{g^{2}+4f_{2}\eta _{20}}~. \end{equation} \textbf{Remark 5.1.1-1}. The diligent reader will check that the \textit{same} result is obtained in the alternative case with $z_{2}=0$ (i.e.
with (\ref{4z1z2b}) replacing (\ref{4z1z2a})); of course provided all the corresponding notational changes are made, consisting essentially in an exchange of the roles of the auxiliary variables $\xi _{1}\left( t\right) $ and $\xi _{2}\left( t\right) $---themselves corresponding, of course up to an appropriate rescaling, to the variables $y_{1}\left( t\right) $ and $y_{2}\left( t\right) $, in analogy to the treatment detailed, for the more general system (\ref{1}), in \textbf{Appendix C}. $\blacksquare$ \subsection{The subcase of (\protect\ref{1}) with homogeneous second-degree polynomial right-hand sides, i.e. $c_{nj}=0$ for $n=1,2$ and $j=4,5,6$} The special case of the dynamical system (\ref{1}) treated in this \textbf{Subsection 5.2} is characterized by the following ODEs: \begin{subequations} \label{3CCL} \begin{equation} \dot{x}_{n}\left( t\right) =c_{n1}\left[ x_{1}\left( t\right) \right] ^{2}+c_{n2}x_{1}\left( t\right) x_{2}\left( t\right) +c_{n3}\left[ x_{2}\left( t\right) \right] ^{2}~,~~~n=1,2~, \label{3CCLa} \end{equation} namely \begin{equation} \dot{x}_{1}\left( t\right) =c_{11}\left[ x_{1}\left( t\right) \right] ^{2}+c_{12}x_{1}\left( t\right) x_{2}\left( t\right) +c_{13}\left[ x_{2}\left( t\right) \right] ^{2}~, \label{3CCLb} \end{equation} \begin{equation} \dot{x}_{2}\left( t\right) =c_{21}\left[ x_{1}\left( t\right) \right] ^{2}+c_{22}x_{1}\left( t\right) x_{2}\left( t\right) +c_{23}\left[ x_{2}\left( t\right) \right] ^{2}~. \label{3CCLc} \end{equation} A large class of subcases of this system---identified by explicit restrictions on the $6$ parameters $c_{nk}$ ($n=1,2$; $k=1,2,3$) and characterized by the property to be \textit{solvable} by \textit{algebraic} operations---has been identified in the recent paper \cite{CCL2020}. In this \textbf{Subsection 5.2} we compare the results obtained in that paper \cite{CCL2020} with those obtained in the present paper.
The main conclusion of this comparison is that there is, of course, a certain overlap among the cases treated in the present paper and those treated in \cite{CCL2020}; however, subcases of (\ref{1}) identified as \textit{solvable} in \cite{CCL2020} are \textit{not} included in the treatment provided in the present paper; and likewise subcases of (\ref{1}) identified as \textit{solvable} in the present paper are \textit{not} included in the treatment provided by \cite{CCL2020}. Hence the $2$ approaches---with their similarities and their differences---are in some sense \textit{complementary}. This is explained in detail in the following $2$ Subsubsections. \textbf{Remark 5.2-1}. There is a significant difference between the treatments of \cite{CCL2020} and of the present paper. In \cite{CCL2020} the systems identified are \textit{algebraically solvable} in the following sense: the computation of their $t$-evolution is reduced to the evaluation of the zeros of a $t$-dependent polynomial $P_{N}\left( x;t\right) $ of (finite) order $N$ in its argument $x$ ($N$ being a \textit{positive integer} whose value depends on the particular model under consideration), the $t$-evolution of which is \textit{explicitly} known (while the expressions of the solutions $x_{n}\left( t\right) $ of the system (\ref{3CCL}) can of course only be obtained \textit{explicitly} if $N\leq 4$). In the present paper the systems identified as \textit{solvable} allow the \textit{explicit} display of the $t$-evolution $x_{n}\left( t\right) $ of the solutions of their initial-values problem, as reported in \textbf{Section 4}.~$\blacksquare$ \textbf{Remark 5.2-2}.
Another significant difference between the class of systems treated in \cite{CCL2020} and that treated in the present paper is that---while the system (\ref{1}) features the $12$ \textit{a priori arbitrary} coefficients $c_{nj}$---the solutions obtained in \cite{CCL2020} generally feature $9$ ($9=2+6+1$: see below \textbf{Subsubsection 5.2.2}) \textit{freely assigned} parameters, while those obtained in the present paper feature $8$ ($8=12-4$: see above \textbf{Section 4}) \textit{freely assigned} parameters (in addition of course, in both cases, to the $2$ initial data $x_{n}\left( 0\right) $). But, as shown in the following $2$ \textbf{Subsubsections} (see their titles), neither one of these two subclasses of the system (\ref{1})---that treated in \cite{CCL2020} and that treated in the present paper---includes the other subclass: a confirmation that these $2$ papers are actually \textit{complementary}. $\blacksquare$ \subsubsection{Subcases of (\protect\ref{1}) shown to be \textit{solvable} in \protect\cite{CCL2020} which are \textit{not} included among those shown to be \textit{solvable} in the present paper} In \cite{CCL2020} it is noted that essentially the entire class of systems (\ref{3CCL}) can be reduced to the following simpler system (see eq. (6) of \cite{CCL2020}): \end{subequations} \begin{equation} \dot{x}_{1}=x_{1}x_{2}~,~~~\dot{x}_{2}=A\left[ \left( x_{1}\right) ^{2}+\left( x_{2}\right) ^{2}\right] +Bx_{1}x_{2}~, \label{31CCL} \end{equation} and that this system can be solved by \textit{algebraic} operations if the $2$ parameters $A$ and $B$ are suitably restricted; for instance \textit{sufficient} conditions are that \begin{equation} A=\frac{n+q-1}{n+q}~,~~~B=\pm \frac{n-q}{n+q}\sqrt{\frac{n+q-1}{nq}} \label{3ABnq} \end{equation} with $n$ an \textit{arbitrary positive integer}, and $q$ an \textit{arbitrary}, possibly \textit{complex}, \textit{rational} number (see eqs. (17) of \cite{CCL2020}; there also are other possibilities, see eqs.
(18-20) of \cite{CCL2020}, but we do not need to evoke them to make our point). On the other hand it is easily seen that the system (\ref{31CCL}), which corresponds to the system (\ref{1}) only if \textit{all} the $12$ parameters $c_{nj}$ vanish except for the following $4$ of them \begin{equation} c_{12}=1~,~~~c_{21}=A~,~~~c_{22}=B~,~~~c_{23}=A~, \label{3cAB} \end{equation} entails, via the $4$ constraints (\ref{4ConstraintsFinal}), either \begin{equation} A=B=0~, \end{equation} which is also consistent with (\ref{3ABnq}) (say, with $q=1-n$), or \begin{equation} B=0~,~~~A=1/2~, \end{equation} which is consistent (say, with $n=q=1$); both assignments, of course, much less general than (\ref{3ABnq}) with $n$ an \textit{arbitrary positive integer}, and $q$ an \textit{arbitrary}, possibly \textit{complex}, \textit{rational} number. This shows that there are some special subcases of (\ref{1}) which are \textit{solvable} both via the technique of \cite{CCL2020} and via the technique of the present paper; and many more which are \textit{solvable} via the technique of \cite{CCL2020} but are \textit{not solvable} via the technique of the present paper. Q. E. D. \subsubsection{Subcases of (\protect\ref{1}) shown to be \textit{solvable} in the present paper which are \textit{not} included among those shown to be \textit{solvable} in \protect\cite{CCL2020}} Since the system (\ref{1})---even with the \textit{constraints} (\ref{4ConstraintsFinal})---is clearly more general than the system (\ref{3CCL}) treated in \cite{CCL2020}---because of the additional $6$ terms featuring the coefficients $c_{nj}$ with $n=1,2$ and $j=4,5,6$---it might seem that what we want to prove in this \textbf{Subsubsection 5.2.2} (see its title) is altogether obvious. But the subclass of the system (\ref{1}) treated in \cite{CCL2020} includes the \textit{additional} possibility to perform a \textit{linear} transformation---with \textit{arbitrary time-independent} coefficients---of the dependent variables.
So this argument is \textit{not} cogent. But such a transformation---which generally features $6$ \textit{free} parameters---cannot change the dependence on the independent variable $t$ from \textit{algebraic} to \textit{exponential}; while the dependence on the variable $t$ of the solutions reported in the present paper is indeed generally \textit{exponential} (see for instance above eq. (\ref{4wnt})). However this argument is still not entirely conclusive because of the possibility to extend the system (\ref{3CCL}) via the simple \textit{invertible} change of dependent and independent variables \begin{equation} x_{n}\left( t\right) =\exp \left( \lambda t\right) \zeta _{n}\left( \tau \right) ~,~~~\tau =\left[ \exp \left( \lambda t\right) -1\right] /\lambda ~, \label{3transttau} \end{equation} which transforms the following system for $\zeta _{n}\left( \tau \right) $, reading \begin{equation} d\zeta _{n}\left( \tau \right) /d\tau =c_{n1}\left[ \zeta _{1}\left( \tau \right) \right] ^{2}+c_{n2}\zeta _{1}\left( \tau \right) \zeta _{2}\left( \tau \right) +c_{n3}\left[ \zeta _{2}\left( \tau \right) \right] ^{2}~,~~~n=1,2~, \end{equation} hence being included among those treated in \cite{CCL2020}, into the, also \textit{autonomous}---and as well \textit{solvable} (via (\ref{3transttau}))---system \begin{equation} \dot{x}_{n}\left( t\right) =\lambda x_{n}\left( t\right) +c_{n1}\left[ x_{1}\left( t\right) \right] ^{2}+c_{n2}x_{1}\left( t\right) x_{2}\left( t\right) +c_{n3}\left[ x_{2}\left( t\right) \right] ^{2}~,~~~n=1,2~. \end{equation} This new system features the \textit{additional free} parameter $\lambda $ and---most importantly with respect to the previous argument---its solutions clearly feature now an \textit{exponential} dependence on the independent variable $t$ (see (\ref{3transttau})).
But this implies that the solutions of this model---even after a linear reshuffling of the dependent variables---can only feature a dependence on the single exponential $\exp \left( \lambda t\right) $; while the solutions obtained in the present paper feature the $2$, generally \textit{different}, exponentials $\exp \left( \beta _{n}t\right) $, $n=1,2$, see above eqs. (\ref{4wnt}) and (\ref{4wnpm}). It is thereby shown that there indeed are subcases of (\ref{1}) shown to be \textit{solvable} in the present paper which are \textit{not} included among those shown to be \textit{solvable} in \cite{CCL2020}. Q. E. D. \section{Conclusions and outlook} The prototypical system (\ref{1}) and its subcases treated above in \textbf{Section 5} have been investigated over time by top mathematicians and subtend an enormous number of applied-mathematics models in several scientific fields. It is our hope to obtain analogous results for analogous models in the future; for one such result see \cite{CP2020}. \section{Acknowledgements} The results reported in this paper have been mainly obtained by a collaboration at a distance between its two authors (essentially via e-mails). We would like to acknowledge with thanks $2$ grants, which shall facilitate our future collaboration by allowing FP to visit (hopefully more than once) in 2021 the Department of Physics of the University of Rome "La Sapienza": one granted by that University, and one granted jointly by the Istituto Nazionale di Alta Matematica (INdAM) of that University and by the International Centre for Theoretical Physics (ICTP) in Trieste in the framework of the ICTP-INdAM "Research in Pairs" Programme. Finally, we gratefully acknowledge a special contribution by Fran\c{c}ois Leyvraz, who pointed out a serious flaw in a preliminary version of this paper, the elimination of which also entailed a substantial simplification of its presentation.
\section{Appendix A} In this \textbf{Appendix A} we tersely demonstrate the following elementary fact, which clearly implies the result (\ref{ynt}): that the solution of the \textit{initial-values} problem for the ODE \begin{subequations} \begin{equation} \dot{y}\left( t\right) =a_{2}\left[ y\left( t\right) \right] ^{2}+a_{1}y\left( t\right) +a_{0}~, \label{Aydot} \end{equation} is provided by the formula \begin{equation} y\left( t\right) =\frac{y_{+}\left[ y\left( 0\right) -y_{-}\right] -y_{-}\left[ y\left( 0\right) -y_{+}\right] \exp \left( \beta t\right) }{y\left( 0\right) -y_{-}-\left[ y\left( 0\right) -y_{+}\right] \exp \left( \beta t\right) }~, \label{Ayt} \end{equation} with $y_{\pm }$ defined as follows: \begin{equation} y_{\pm }=\left( -a_{1}\pm \beta \right) /\left( 2a_{2}\right) ~,~~~\beta =\sqrt{\left( a_{1}\right) ^{2}-4a_{0}a_{2}}~. \label{Aypluminusbeta} \end{equation} Indeed the ODE (\ref{Aydot}) can clearly be reformulated as follows: \end{subequations} \begin{subequations} \begin{equation} \dot{y}=a_{2}\left( y-y_{+}\right) \left( y-y_{-}\right) \end{equation} with $y_{\pm }$ defined by (\ref{Aypluminusbeta}); and then (again, via (\ref{Aypluminusbeta})) this ODE can be rewritten as follows: \begin{equation} \dot{y}\left[ \left( y-y_{+}\right) ^{-1}-\left( y-y_{-}\right) ^{-1}\right] =\beta ~. \end{equation} The integration of this ODE for the dependent variable $y\left( t^{\prime }\right) $ over the independent variable $t^{\prime }$---from $t^{\prime }=0$ to $t^{\prime }=t$---clearly yields \end{subequations} \begin{equation} \ln \left[ \frac{y\left( t\right) -y_{+}}{y\left( 0\right) -y_{+}}\right] -\ln \left[ \frac{y\left( t\right) -y_{-}}{y\left( 0\right) -y_{-}}\right] =\beta t~, \end{equation} which coincides---after exponentiation---with (\ref{Ayt}). Q. E. D.
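This elementary fact is easily verified numerically; the following sketch (ours; the coefficients are arbitrary values for which $\beta$ is real and no finite-time blow-up occurs for $t\geq 0$) evaluates the closed-form solution (\ref{Ayt}) and checks by central finite differences that it satisfies the Riccati ODE (\ref{Aydot}) with the correct initial datum:

```python
import math

# Arbitrary illustrative coefficients with real beta and a bounded solution for t >= 0.
a2, a1, a0 = -1.0, 0.5, 0.3
y0 = 0.1
beta = math.sqrt(a1 * a1 - 4.0 * a0 * a2)    # eq. (Aypluminusbeta)
yp = (-a1 + beta) / (2.0 * a2)
ym = (-a1 - beta) / (2.0 * a2)

def y(t):
    """Closed-form solution (Ayt) of the Riccati ODE (Aydot)."""
    E = math.exp(beta * t)
    return (yp * (y0 - ym) - ym * (y0 - yp) * E) / ((y0 - ym) - (y0 - yp) * E)

# Central finite-difference verification that ydot = a2*y^2 + a1*y + a0.
h = 1e-6
residual = max(abs((y(t + h) - y(t - h)) / (2 * h) - (a2 * y(t) ** 2 + a1 * y(t) + a0))
               for t in (0.2, 0.7, 1.5))
init_error = abs(y(0.0) - y0)
```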
\section{Appendix B} In this \textbf{Appendix B} we tersely outline the derivation of the expressions (\ref{c1j}) and (\ref{c2j}) of the $12$ parameters $c_{nj}$ in terms of the $10$ parameters $A_{nm}$ and $a_{n\ell }$. The \textbf{first step} is to invert the relations (\ref{xnyn}), getting \begin{subequations} \label{y12x12} \begin{equation} y_{1}\left( t\right) =\left[ A_{22}x_{1}\left( t\right) -A_{12}x_{2}\left( t\right) \right] /D~, \label{y1xn} \end{equation} \begin{equation} y_{2}\left( t\right) =\left[ A_{11}x_{2}\left( t\right) -A_{21}x_{1}\left( t\right) \right] /D~, \label{y2xn} \end{equation} where the quantity $D$ is defined as above, see (\ref{AA}). The \textbf{second step} is to note that the relations (\ref{xnyn}) imply \end{subequations} \begin{subequations} \begin{equation} \dot{x}_{n}=A_{n1}\dot{y}_{1}+A_{n2}\dot{y}_{2}~,~~~n=1,2~, \end{equation} hence, via the ODEs (\ref{yndot}), \begin{eqnarray} \dot{x}_{n}=A_{n1}\left[ a_{12}\left( y_{1}\right) ^{2}+a_{11}y_{1}+a_{10}\right] && \nonumber \\ +A_{n2}\left[ a_{22}\left( y_{2}\right) ^{2}+a_{21}y_{2}+a_{20}\right] ~,~~~n=1,2~. && \end{eqnarray} The \textbf{third and last step} is to insert the expressions (\ref{y12x12}) of $y_{1}$ and $y_{2}$ in terms of $x_{1}$ and $x_{2}$ in the right-hand sides of these ODEs. Then, via a bit of trivial if tedious algebra, one obtains the system (\ref{1}) with the definitions (\ref{c1j}) and (\ref{c2j}) of the $12$ coefficients $c_{nj}$. Q. E. D. \section{Appendix C} In this \textbf{Appendix C} we show that the solutions $x_{n}\left( t\right) $ of the dynamical system (\ref{1})---as treated above, see \textbf{Sections 2} and \textbf{3}---do \textit{not} depend on the \textit{free} parameters $\lambda _{n}$ introduced via the positions (\ref{3Alanda}).
To this end we insert the expressions (\ref{3Alanda}) of the parameters $A_{nm}$ in terms of the free parameters $\lambda _{n}$ in the formulas (\ref{3a1L}) and (\ref{3a2L}) expressing the $6$ parameters $a_{nk}$, in order to display the very simple dependence of these parameters on the $2$ free parameters $\lambda _{n}$. We thus easily find the following formulas: \end{subequations} \begin{subequations} \label{AalphaA} \begin{equation} a_{n\ell }\equiv \left( \lambda _{n}\right) ^{\ell -1}\alpha _{n\ell }\left( C\right) ~,~~~n=1,2~,~~~\ell =0,1,2~, \label{Calpha} \end{equation} where the notation $C$ indicates---above and hereafter---the set of the $12$ parameters $c_{nj}$, and the $6$ functions $\alpha _{n\ell }\left( C\right) $ are \textit{explicitly} displayed in \textbf{Section 4}, see (\ref{4alphanl}); of course to this end we also used the definitions (\ref{3zn}) of the $2$ auxiliary parameters $z_{n}$ in terms of the $4$ coefficients $c_{nm}$ ($n=1,2$; $m=1,2$). The next step is to insert the formulas (\ref{Calpha}) in the expressions (\ref{ynt}), getting thereby \begin{equation} y_{n\pm }\equiv \left( \lambda _{n}\right) ^{-1}w_{n\pm }\left( C\right) ~,~~~\beta _{n}\equiv \beta _{n}\left( C\right) ~,~~~n=1,2~, \label{Cynpm} \end{equation} again with the functions $w_{n\pm }\left( C\right) $ and $\beta _{n}\left( C\right) $ \textit{explicitly} displayed in \textbf{Section 4}, see (\ref{4wnt}) and (\ref{4wnpm}).
The insertion of these formulas in the expressions (\ref{2ynt}) of the solutions $y_{n}\left( t\right) $ of the auxiliary dynamical system (\re {yndot}) evidences the following, very simple, dependence of these functions from the \textit{free} parameters $\lambda _{n}$: \end{subequations} \begin{equation} y_{n}\left( t\right) \equiv \left( \lambda _{n}\right) ^{-1}w_{n}\left( C,t\right) ~,~~~n=1,2~, \label{Cynt} \end{equation where again the $2$ functions $w_{n}\left( C,t\right) $ are \textit explicitly} displayed in \textbf{Section 4}, see (\ref{4wnt}). And via the insertion in the expressions (\ref{xnyn}) of $x_{n}\left( t\right) $ of these formulas (\ref{Cynt}), together with the expressions \ref{3Alanda}) of $A_{nm}$, we conclude that the solutions $x_{n}\left( t\right) $ are independent of the \textit{free} parameters $\lambda _{n}$; as indeed displayed in \textbf{Section 4}, see the set of eqs. from (\re {4xnt}) to (\ref{4alphanl}). Q. E. D.
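The first step of \textbf{Appendix B} can also be checked numerically. The sketch below is outside the paper's own derivation; it assumes, as the displayed formulas suggest, that the relations (\ref{xnyn}) are linear, $x_{n}=A_{n1}y_{1}+A_{n2}y_{2}$, and that $D=A_{11}A_{22}-A_{12}A_{21}$:

```python
# Numerical check of the inversion (y12x12), assuming the relations (xnyn)
# are linear, x_n = A_{n1} y_1 + A_{n2} y_2, and D = A_{11}A_{22} - A_{12}A_{21}.
A = {(1, 1): 2.0, (1, 2): 3.0, (2, 1): 5.0, (2, 2): 7.0}  # arbitrary, D != 0
D = A[(1, 1)] * A[(2, 2)] - A[(1, 2)] * A[(2, 1)]

y1, y2 = 1.3, -0.4                      # arbitrary test values
x1 = A[(1, 1)] * y1 + A[(1, 2)] * y2    # relations (xnyn)
x2 = A[(2, 1)] * y1 + A[(2, 2)] * y2

# inverse relations (y1xn) and (y2xn)
y1_rec = (A[(2, 2)] * x1 - A[(1, 2)] * x2) / D
y2_rec = (A[(1, 1)] * x2 - A[(2, 1)] * x1) / D

assert abs(y1_rec - y1) < 1e-12 and abs(y2_rec - y2) < 1e-12
```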
\section{Introduction} Knowledge distillation (KD) \citep{bucilua2006model,hintonkd} is a commonly used technique to reduce the size of large neural networks \citep{Sanh2019DistilBERTAD}. Apart from this, we also consider it a complementary and generic add-on to enrich the training process of any neural model \cite{furlanello2018born}. In KD, a student network ($\mathcal{S}$) is glued to a powerful teacher ($\mathcal{T}$) during training. These two networks can be trained simultaneously, or $\mathcal{T}$ can be a pre-trained model. Usually, $\mathcal{T}$ uses more parameters than $\mathcal{S}$ for the same task; it therefore has a higher learning capacity and is expected to provide reliable predictions. On the other side, $\mathcal{S}$ follows its teacher with a simpler architecture. For a given input, both models provide predictions; those of the student are penalized both by an ordinary loss function (using \textit{hard} labels) and by the predictions received from $\mathcal{T}$ (also known as \textit{soft} labels). Training a (student) model for a natural language processing (NLP) task can be formalized as a multi-class classification problem that minimizes a cross-entropy (\textit{ce}) loss function, as shown in Equation \ref{eq:1}: \begin{multline} \label{eq:1} \mathcal{L}_{ce} = - \sum_{i=1}^{N} \sum_{w \in V} [\mathbbm{1}(y_i=w) \times\\\log p_{_\mathcal{S}}(y_i=w | x_i, \theta_{_\mathcal{S}})] \end{multline} where $\mathbbm{1}(.)$ is the indicator function, $V$ is a vocabulary set (or different classes in a multi-class problem), $N$ is the number of tokens in an input sequence, and $y$ is a prediction of the network $\mathcal{S}$ with a parameter set $\theta_{_\mathcal{S}}$ given an input $x$.
To incorporate the teacher's supervision, KD accompanies $\mathcal{L}_{ce}$ with an auxiliary loss term, $\mathcal{L}_{_{KD}}$, as shown in Equation \ref{eq:2}: \begin{multline}\label{eq:2} \mathcal{L}_{_{KD}} = - \sum_{i=1}^{N} \sum_{w \in V} [p_{_\mathcal{T}}(y_i=w | x_i, \theta_{_\mathcal{T}}) \times\\\log p_{_\mathcal{S}}(y_i=w | x_i, \theta_{_\mathcal{S}})] \end{multline} Since $\mathcal{S}$ is trained to behave identically to $\mathcal{T}$, model compression can be achieved if it uses a simpler architecture than its teacher. However, even if these two models are the same size, KD is still beneficial. What $\mathcal{L}_{_{KD}}$ proposes is an ensemble technique by which the student is informed about the teacher's predictions. The teacher has better judgement, which helps the student learn how much it deviates from true labels. This form of KD, referred to as Regular KD (RKD) throughout this paper, only provides $\mathcal{S}$ with external supervision for final predictions, but this can be extended to other components such as intermediate layers too. The student needs to be aware of the information flow inside the teacher's layers, and this becomes even more crucial when distilling from deep teachers. Different alternatives have been proposed to this end, which compare networks' internal layers in addition to final predictions \cite{jiao2019tinybert,sun2020mobilebert,pkd}, but they suffer from other types of problems. The main goal in this paper is to study such models and address their shortcomings. \subsection{Problem Definition} To utilize intermediate layers' information (and other components in general), a family of models exists that defines a dedicated loss function to measure how much a student diverges from its teacher in terms of internal representations.
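For concreteness, the two output-level losses of Equations \ref{eq:1} and \ref{eq:2} can be sketched for a single token with toy probability distributions; the temperature often applied to soft labels, and the sums over $N$ tokens, are omitted here:

```python
import math

def cross_entropy(student_probs, hard_label):
    # Equation (1) for one token: negative log-probability of the gold class.
    return -math.log(student_probs[hard_label])

def kd_loss(student_probs, teacher_probs):
    # Equation (2) for one token: cross-entropy between the teacher's
    # soft labels and the student's distribution.
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# toy distributions over a 3-class vocabulary for a single token
student = [0.2, 0.7, 0.1]
teacher = [0.1, 0.8, 0.1]

l_ce = cross_entropy(student, hard_label=1)   # -log(0.7)
l_kd = kd_loss(student, teacher)
```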
In particular, if the goal is to distill from an $n$-layer teacher into an $m$-layer student, a subset of $m$ (out of $n$) teacher layers is selected whose outputs are compared to those of student layers (see Equation \ref{eq:3} for more details). Figure \ref{fig:0} illustrates this concept. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{st} \end{center} \caption{\label{fig:0} Student and teacher models have $m$ and $n$ layers, respectively. Each node is an intermediate layer and links are cross-model connections. In this example, every other layer of the teacher is skipped in order to match the size of the student. The outputs of nodes connected to each other are compared via a loss function (shown with $\leftrightarrow$) to ensure that the student model has internal representations similar to its teacher's.} \end{figure} As the figure shows, each student layer is connected to a single, dedicated peer on the teacher side, e.g. the $n$-\textit{th} teacher layer corresponds to the $m$-\textit{th} student layer. Since the outputs of these two layers are compared to each other, we hope that both models generate outputs that are as similar as possible at points $n$ and $m$. With this simple technique, the teacher's knowledge can be used to supervise the student's intermediate layers. Experimental results show that intermediate layer matching can be quite effective, but in our study we realized that it may suffer from two shortcomings: \begin{itemize} \item If $n \gg m$, multiple layers in $\mathcal{T}$ have to be ignored for distillation, even though those layers contain valuable information that was learned at considerable expense. This issue is referred to as the \textit{skip} problem in this paper. \item Moreover, the way teacher layers are kept/skipped is somewhat arbitrary, as there is no particular strategy behind it.
Before training, we lack enough knowledge to judge which subset of teacher layers contributes more to the distillation process, so there is a good chance of skipping significant layers if we pick them in an arbitrary fashion. Finding the best subset of layers to distill from requires an exhaustive search or an expert in the field to specify connections. We refer to this issue as the \textit{search} problem. \end{itemize} In order to resolve the aforementioned issues we propose an alternative, which is the main contribution of this paper. Our solution does not skip any layer but utilizes \textit{all} information stored inside $\mathcal{T}$. Furthermore, it combines teacher layers through an attention mechanism, so there is no need to deal with the search problem. We believe that the new notion of combination defined in this paper is as important as our novel KD architecture and can be adapted to other tasks too. The remainder of this paper is organized as follows: First, we briefly review KD techniques used in similar NLP applications; then we introduce our methodology and explain how it addresses existing shortcomings. We accompany our methodology with experimental results to show whether the proposed technique is useful. Finally, we conclude the paper and discuss future directions. \section{Related Work} \label{background} KD was originally proposed for tasks other than NLP \cite{bucilua2006model,hintonkd}. \citet{kim2016sequence} adapted the idea and proposed a sequence-level extension for machine translation. \citet{freitag2017ensemble} took a step further and expanded it to a multi-task scenario. Recently, with the emergence of large NLP and natural language understanding (NLU) models such as ELMo \citep{Peters:2018} and BERT \citep{Devlin2019BERTPO}, KD has gained extra attention. Deep models can be trained in a better fashion and compressed via KD, which is favorable in many ways.
Therefore, a large body of work in the field, such as Patient KD (PKD) \citep{pkd}, has been devoted to compressing/distilling BERT (and similar) models. PKD is directly related to this work, so we discuss it in more detail. It proposes a mechanism to match teacher and student models' intermediate layers by defining a third loss function, $\mathcal{L}_{P}$, in addition to $\mathcal{L}_{ce}$ and $\mathcal{L}_{KD}$, as shown in Equation \ref{eq:3}: \begin{equation}\label{eq:3} \mathcal{L}_{P} = \sum_{i=1}^{N} \sum_{j=1}^{m} || \frac{h_{\mathcal{S}}^{i,j}}{||h_{\mathcal{S}}^{i,j}||_2} - \frac{\mathcal{A}(j)^i}{||\mathcal{A}(j)^i||_2}||^2_2 \end{equation} where $h^{i,j}_\mathcal{S}$ is the output\footnote{By the output, we mean the output of the layer for the {\fontfamily{pcr}\selectfont CLS} token. For more details about {\fontfamily{pcr}\selectfont CLS} see \citet{Devlin2019BERTPO}.} of the $j$-th student layer for the $i$-th input. A subset of teacher layers selected for distillation is denoted with an alignment function $\mathcal{A}$, e.g. $\mathcal{A}(j) = h^{l}_\mathcal{T}$ implies that the output of the $j$-th student layer should be compared to the output of the $l$-th teacher layer ($h_{\mathcal{S}}^{i,j} \leftrightarrow h_{\mathcal{T}}^{i,l}$). PKD is not the only model that utilizes internal layers' information. Other models such as TinyBERT \cite{jiao2019tinybert} and MobileBERT \citep{sun2020mobilebert} also found it crucial for training competitive student models. However, as Equation \ref{eq:3} shows, in these models only $m$ teacher layers (the number of teacher layers returned by $\mathcal{A}$) can contribute to distillation. In the presence of deep teachers and small students, this limitation can introduce a significant amount of information loss. Furthermore, the mapping denoted by $\mathcal{A}$ directly impacts quality. If $\mathcal{A}$ skips an important layer, the student model may fail to provide high-quality results.
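A minimal sketch of the patient loss of Equation \ref{eq:3} on toy CLS vectors follows; the alignment function is a plain index list here, and the loss is written so that minimizing it pulls the L2-normalized representations together:

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def pkd_loss(student_layers, teacher_layers, align):
    # Equation (3) for one input: squared distance between L2-normalized
    # layer outputs, summed over student layers; align[j] gives the index
    # of the teacher layer matched to the j-th student layer.
    loss = 0.0
    for j, h_s in enumerate(student_layers):
        h_t = teacher_layers[align[j]]
        s, t = l2_normalize(h_s), l2_normalize(h_t)
        loss += sum((a - b) ** 2 for a, b in zip(s, t))
    return loss

# toy CLS outputs: a 2-layer student against a 4-layer teacher,
# with alignment A(1)=1, A(2)=4 (0-based indices below)
student = [[1.0, 0.0], [0.5, 0.5]]
teacher = [[0.9, 0.1], [1.0, 0.0], [0.2, 0.8], [0.5, 0.5]]
loss = pkd_loss(student, teacher, align=[0, 3])
```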
To tackle this problem, \citet{wu-etal-2020-skip} proposed a combinatorial technique, called CKD. In their model, $\mathcal{A}(j)$ returns a subset of teacher layers instead of a single layer. Those layers are combined, and distillation happens between the combination result and the $j$-th student layer, as shown in Equation \ref{eq:ckd}: \begin{equation}\label{eq:ckd} \begin{split} \hat{\mathcal{C}}^j =& \mathcal{F}_{c}( h^k_\mathcal{T}) ;\hspace{1mm}{h^k_\mathcal{T} \in \mathcal{A}(j)} \\ \mathcal{C}^j =& \mathcal{F}_{r}(\hat{\mathcal{C}}^j) \\ \cup_{j=1}^m \mathcal{A}(j) =& \{h_\mathcal{T}^1, ..., h_\mathcal{T}^n\} \end{split} \end{equation} where $\hat{\mathcal{C}}^j$ is the result of a combination produced by the function $\mathcal{F}_c$ given a subset of teacher layers indicated by $\mathcal{A}(j)$. In \citet{wu-etal-2020-skip}, $\mathcal{F}_c$ is implemented via a simple concatenation. Depending on the form of combination used in Equation \ref{eq:ckd}, there might be a dimension mismatch between $\hat{\mathcal{C}}^j$ and the student layer $h_{\mathcal{S}}^j$. Accordingly, there is another function, $\mathcal{F}_r$, to reform the combination result into a shape comparable to the student layer. CKD uses a single projection layer to control the dimension mismatch. With the combination technique (concatenation+projection), CKD could solve the skip problem, but the search problem remains unsolved. Similar to PKD, CKD also requires a search process, but it looks for the best subset of teacher layers instead of the best single layer. These two models are directly related to this research, so we consider them as baselines in our experiments. The application of KD in NLP and NLU is not limited to the aforementioned models. \citet{aguilar2020knowledge} followed the same architecture as PKD but introduced a new training regime, called progressive training.
In their method, lower layers are trained first and training is progressively shifted to upper layers. They claim that the way internal layers are trained during KD can play a significant role. \citet{Liu2019ImprovingMD} investigated KD from another perspective. Instead of focusing on the compression aspect, they kept the size of student models equal to their teachers and showed how KD could be treated as a complementary training ingredient. \citet{tan2019multilingual} squeezed multiple translation engines into one transformer \citep{vaswani2017attention} and showed that knowledge can be distilled from multiple teachers. \citet{wei2019online} introduced a novel training procedure where there is no need for an external teacher; a student model can learn from its own checkpoints. At each validation step, if the current checkpoint is better than the best existing checkpoint, the student learns from it; otherwise, the best stored checkpoint is considered as the teacher. \section{Methodology} For a given student model $\mathcal{S}$ and a teacher model $\mathcal{T}$, we denote all intermediate layers with the sets $H_\mathcal{S} = \{h_\mathcal{S}^1, ..., h_\mathcal{S}^m\}$ and $H_\mathcal{T} = \{h_\mathcal{T}^1, ..., h_\mathcal{T}^n\}$, respectively. Based on the pipeline designed by current models for intermediate layer KD, there must be a connection between $H_\mathcal{S}$ and $H_\mathcal{T}$ during training, and each student layer can only correspond to a single peer on the teacher side. As previously mentioned, layer connections are denoted by $\mathcal{A}$. A common heuristic to devise $\mathcal{A}$ is to divide teacher layers into $m$ buckets of approximately the same size and pick only one layer from each \citep{jiao2019tinybert,pkd}. Therefore, for the $j$-th layer of the student model, $\mathcal{A}(j)$ returns a single teacher layer among those that reside in the $j$-th bucket. Figure \ref{fig:1}a illustrates this setting.
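The bucketing heuristic can be sketched as follows; which layer to pick inside each bucket (the pick argument below, a name introduced only for this illustration) is exactly the arbitrary part of the heuristic:

```python
def buckets(n, m):
    # Split teacher layers 1..n into m contiguous buckets of near-equal size.
    base, extra = divmod(n, m)
    out, start = [], 1
    for j in range(m):
        size = base + (1 if j < extra else 0)
        out.append(list(range(start, start + size)))
        start += size
    return out

def align(j, n, m, pick=0):
    # Heuristic A(j): pick one layer (here the first) from the j-th bucket.
    return buckets(n, m)[j - 1][pick]

# 12-layer teacher, 3-layer student: buckets [1..4], [5..8], [9..12]
assert buckets(12, 3) == [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
assert [align(j, 12, 3) for j in (1, 2, 3)] == [1, 5, 9]
```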
Clearly, this is not the best way of connecting layers, because they are picked in a relatively arbitrary manner. More importantly, no matter what heuristic is used there still remain $n\!-\!m$ layers in this approach whose information is not used in distillation. To address this issue, we simply propose a combinatorial alternative whereby all layers inside buckets are taken into consideration. Our technique is formulated in Equation \ref{eq:4}: \begin{equation}\label{eq:4} \begin{split} \mathcal{C}^j =& \sum_{h^k_\mathcal{T} \in \mathcal{A}(j)} \alpha_{jk} \hspace{1mm} h^k_\mathcal{T} \\ \alpha_{jk} =& \frac{\exp({h^j_\mathcal{S} \hspace{0.5mm} . \hspace{0.5mm} h^k_\mathcal{T}})}{\sum_{h^{k'}_\mathcal{T} \in \mathcal{A}(j)} \exp({h^j_\mathcal{S} \hspace{0.5mm} . \hspace{0.5mm}h^{k'}_\mathcal{T}})} \\ \cup_{j=1}^m \mathcal{A}(j) &= H_\mathcal{T} = \{h_\mathcal{T}^1, ..., h_\mathcal{T}^n\} \end{split} \end{equation} This idea is similar to that of CKD, but we use an attention mechanism \cite{bahdanau2014neural} instead of a concatenation for layer combination. Experimental results demonstrate that this form of combination is more useful. We refer to this idea as \textbf{A}ttention-based \textbf{L}ayer \textbf{P}rojection for \textbf{KD} or ALP-KD in short. According to the equation, if a student layer associates with a particular bucket, all layers inside that bucket are combined/used for distillation and $\mathcal{C}^j$ is a vector representation of such a combination. Our model benefits from all $n$ teacher layers and skips none as there is a dedicated $\mathcal{C}$ vector for each student layer. Figure \ref{fig:1}b visualizes this setting. \begin{figure}[th] \begin{center} \includegraphics[scale=0.44]{mappings-small} \end{center} \caption{\label{fig:1} Three pairs of $\mathcal{S}$ and $\mathcal{T}$ networks with different forms of layer connections. 
In Figure \ref{fig:1}a, teacher layers are divided into $3$ buckets and only one layer from each bucket is connected to the student side, e.g. $h_\mathcal{T}^5$ is the source of distillation for $h_\mathcal{S}^2$ ($h_\mathcal{T}^5 \leftrightarrow h_\mathcal{S}^2$). In Figure \ref{fig:1}b, a weighted average of teacher layers from each bucket is considered for distillation, e.g. $\mathcal{A}(2)$ = $\{h_\mathcal{T}^4,h_\mathcal{T}^5\}$ and $\mathcal{C}^2=\alpha_{_{24}}h_\mathcal{T}^4 + \alpha_{_{25}}h_\mathcal{T}^5$ ($\mathcal{C}^2 \leftrightarrow h_\mathcal{S}^2$). In Figure \ref{fig:1}c, there is no bucketing and all teacher layers are considered for projection. Links with higher color intensities have higher attention weights.} \end{figure} Weights ($\alpha$ values) assigned to teacher layers are learnable parameters whose values are optimized during training. They show the contribution of each layer to the distillation process. They also reflect the correlation between student and teacher layers, i.e. if a student layer correlates more with a set of teacher layers, the weights connecting them should receive higher values. In other words, that specific layer is playing the role of its teacher peers on the student side. To measure the correlation, we use the \textit{dot product} in our experiments, but any other function for similarity estimation could be used in this regard. Equation \ref{eq:4} addresses the \textit{skip} problem with a better combination mechanism and is able to provide state-of-the-art results. However, it still suffers from the \textit{search} problem as it relies on buckets, and we are not sure which bucketing strategy works best. For example, in Figure \ref{fig:1}b the first bucket consists of the first three layers of the teacher, but this does not mean that we cannot append a fourth layer. In fact, a bucket with four layers might perform better.
Buckets can also share layers; namely, a teacher layer can belong to multiple buckets and can be used numerous times in distillation. These constraints make it challenging to decide about buckets and their boundaries, but it is possible to resolve this dilemma through a simple modification of our proposed model. To avoid bucketing, we span the attention mask over all teacher layers rather than over buckets. To implement this extension, $\mathcal{A}(j)$ needs to be replaced with $H_\mathcal{T}$ in Equation \ref{eq:4}. Therefore, for any student layer such as $h_\mathcal{S}^j$ there would be a unique set of $n$ attention weights, and $\mathcal{C}^j$ would be a weighted average of \textit{all} teacher layers, as shown in Equation \ref{eq:5}: \begin{equation}\label{eq:5} \begin{split} \mathcal{C}^j =& \sum_{h^k_\mathcal{T} \in \mathcal{A}(j)} \alpha_{jk} \hspace{1mm} h^k_\mathcal{T}\\ \mathcal{A}(j) =& H_\mathcal{T} \hspace{2mm} \forall j\in \{1,2,...,m\} \end{split} \end{equation} This new configuration, which is illustrated in Figure \ref{fig:1}c, proposes a straightforward way of combining teacher layers and addresses both the \textit{skip} and \textit{search} problems at the same time. To train our student models, we use a loss function composed of $\mathcal{L}_{ce}$, $\mathcal{L}_{KD}$, and a dedicated loss defined for ALP-KD, as shown in Equation \ref{eq:6}: \begin{equation}\label{eq:6} \begin{split} \mathcal{L} = \beta \mathcal{L}_{ce} + \eta \mathcal{L}_{KD} + \lambda \mathcal{L}_{_{ALP}} \\ \mathcal{L}_{_{ALP}} = \sum_{i=1}^{N}\sum_{j=1}^{m} \textrm{\textit{MSE}}(h_{\mathcal{S}}^{i,j},\mathcal{C}^{i,j}) \end{split} \end{equation} where \textit{MSE()} is the mean-square error and $\mathcal{C}^{i,j}$ is the value of $\mathcal{C}^j$ when the teacher is fed with the $i$-th input. $\beta$, $\eta$, and $\lambda$ are hyper-parameters that balance the contribution of each term to the final loss.
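The full-span variant of Equations \ref{eq:4}--\ref{eq:6} can be sketched as follows: attention weights come from dot products between a student layer and every teacher layer, and the ALP term is a per-layer MSE. The weighted combination with $\mathcal{L}_{ce}$ and $\mathcal{L}_{KD}$ via $\beta$, $\eta$, and $\lambda$ is omitted:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def alp_combine(h_s, teacher_layers):
    # Equations (4)/(5): attention weights from dot products between the
    # student layer and every teacher layer, then a weighted average C^j.
    alphas = softmax([sum(a * b for a, b in zip(h_s, h_t))
                      for h_t in teacher_layers])
    dim = len(h_s)
    c = [sum(al * h_t[d] for al, h_t in zip(alphas, teacher_layers))
         for d in range(dim)]
    return c, alphas

def alp_loss(student_layers, teacher_layers):
    # MSE part of Equation (6) for a single input.
    total = 0.0
    for h_s in student_layers:
        c, _ = alp_combine(h_s, teacher_layers)
        total += sum((a - b) ** 2 for a, b in zip(h_s, c)) / len(h_s)
    return total

# toy layer outputs: a 2-layer student against a 4-layer teacher
student = [[0.5, 0.5], [1.0, 0.0]]
teacher = [[0.9, 0.1], [1.0, 0.0], [0.2, 0.8], [0.5, 0.5]]
c1, alphas = alp_combine(student[0], teacher)
total = alp_loss(student, teacher)
```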
\section{Experimental Study}\label{exp} A common practice in our field to evaluate the quality of a KD technique is to feed $\mathcal{T}$ and $\mathcal{S}$ models with instances of standard datasets and measure how they perform. We followed the same tradition in this paper and selected a set of eight GLUE tasks \cite{Wang2018GLUEAM} including CoLA, MNLI, MRPC, QNLI, QQP, RTE, SST-2, and STS-B datasets to benchmark our models. Detailed information about datasets is available in the appendix section. \begin{table*}[hbt!] \centering \begin{tabular}{l l l l l l l l l l l} \toprule Problem &Model & CoLA & MNLI & MRPC & QNLI & QQP & RTE & SST-2 & STS-B & Average\\ \hline N/A & $\mathcal{T}_{_{\textrm{BERT}}}$ & 57.31 & 83.39 & 86.76 & 91.25 & 90.96 & 68.23 & 92.67 & 88.82 & 82.42\\\hline N/A & $\mathcal{S}_{_{\textrm{NKD}}}$ & 31.05 & 76.83 & 77.70 & 85.13 & 88.97 & 61.73 & 88.19 & 87.29 & 74.61\\ \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{RKD}}}$ &29.22 &79.31 &79.41 &86.77 &90.25 &65.34 &90.37 &87.45 &76.02 \\\hline \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{PKD}}}$ &32.13 &79.26 &80.15 &86.64 &90.23 &65.70 &90.14 &87.26 &76.44 \\\hline \textit{search} & $\mathcal{S}_{_{\textrm{CKD-NO}}}$ & 31.23 & 79.42 &80.64 &86.93 &88.70 &66.06 &90.37 &87.62 &76.37 \\ \textit{search} & $\mathcal{S}_{_{\textrm{CKD-PO}}}$ &31.95& 79.53& 80.39 &86.75& 89.89& \textbf{67.51}& 90.25& 87.55& 76.73 \\\hline \textit{search} & $\mathcal{S}_{_{\textrm{ALP-NO}}}$ & \textbf{34.21}& 79.26 &79.66 &\textbf{87.11}& \textbf{90.72}& 65.70& 90.37& 87.52& 76.82 \\ \textit{search} & $\mathcal{S}_{_{\textrm{ALP-PO}}}$ & 33.86& \textbf{79.74}& 79.90& 86.95& 90.25& 66.43 &\textbf{90.48} &87.52& 76.89 \\\hline \textit{none} & $\mathcal{S}_{_{\textrm{ALP}}}$& 33.07& {79.62}& \textbf{80.72}& 87.02& 90.54& 67.15& 90.37& \textbf{87.62}& \textbf{77.01} \\ \bottomrule \end{tabular} \caption{\label{t:1} Except the teacher ($\mathcal{T}_{_{\textrm{BERT}}}$) which is a $12$-layer model, 
all other models have $4$ layers. Apart from the number of layers, all students have the same architecture as the teacher. The first column shows what sort of problems each model suffers from. NKD stands for \textit{No KD}, which means no KD technique was involved in training this student model. \textit{NO} and \textit{PO} are different configurations for mapping internal layers. Boldfaced numbers show the best student score for each column over the validation set. CoLA scores are Matthews correlations, STS-B scores are Pearson correlations, and the rest are accuracy scores.} \end{table*} In NLP/NLU settings, $\mathcal{T}$ is usually a pre-trained model whose parameters are only fine-tuned during training. On the other side, $\mathcal{S}$ can be connected to $\mathcal{T}$ to be trained thoroughly or can alternatively be initialized with $\mathcal{T}$'s parameters to be fine-tuned similar to its teacher. This helps the student network generate better results and converge faster. {Fine-tuning} is more common than {training} in our context, and we thus fine-tune our models rather than training them. This concept is comprehensively discussed by \citet{Devlin2019BERTPO}, so we skip its details and refer the reader to their paper. We have the same fine-tuning pipeline in this work. In our experiments, we chose the original BERT model\footnote{\url{https://github.com/google-research/bert}} (also known as BERT$_{\textrm{Base}}$) as our teacher. We are faithful to the configuration proposed by \citet{Devlin2019BERTPO} for it. Therefore, our in-house version also has $12$ layers with $12$ attention heads, and the hidden and feed-forward dimensions are $768$ and $3072$, respectively. Our students are also BERT models, only with fewer layers ($|H_\mathcal{S}| = m\hspace{1mm};m<12$). We use the teacher BERT to initialize students, but because the number of layers is different ($12 \ne m$) we only consider its first $m$ layers.
We borrowed this idea from PKD \cite{pkd} in the interest of fair comparisons. In order to maximize each student's performance, we need to decide on the learning rate, batch size, the number of fine-tuning iterations, and $\beta$, $\eta$, and $\lambda$. To this end, we run a grid search similar to \citet{pkd} and \citet{wu-etal-2020-skip}. In our setting, the batch size is set to $32$ and the learning rate is selected from $\{1e-5, 2e-5, 5e-5\}$. $\eta$ and $\lambda$ take values from $\{0, 0.2, 0.5, 0.7\}$ and $\beta = 1-\eta-\lambda$. Details of the grid search and the values of all hyper-parameters are reported in the appendix section. We trained multiple models with different configurations and compared our results to RKD- and PKD-based students. To the best of our knowledge, these are the only alternatives that use BERT as a teacher and whose students' architecture relies on ordinary Transformer blocks \citep{vaswani2017attention} of the same size as ours, so a comparison to any other model with different settings would not be fair. Due to CKD's similarity to our approach, we also re-implemented it in our experiments. The original CKD model was proposed for machine translation, and we evaluate it on NLU tasks for the first time. Table \ref{t:1} summarizes our experiments. The teacher model with $12$ layers and $109$M parameters has the best performance on all datasets.\footnote{Similar to other papers, we evaluate our models on validation sets. Test-set labels of GLUE datasets are not publicly available, and researchers need to participate in leaderboard competitions to evaluate their models on test sets.} This model can be compressed, so we reduce the number of layers to $4$ and train another model ($\mathcal{S}_{_{\textrm{NKD}}}$). The rest of the configuration (attention heads, hidden dimension, etc.) remains untouched. There is no connection between the teacher and $\mathcal{S}_{_{\textrm{NKD}}}$, and it is trained separately with no KD technique.
Because of the reduced number of layers, performance drops in this case, but we still gain a lot in terms of memory as this new model only has $53$M parameters. To bridge the performance gap between the teacher and $\mathcal{S}_{_{\textrm{NKD}}}$, we involve KD in the training process and train new models, $\mathcal{S}_{_{\textrm{RKD}}}$ and $\mathcal{S}_{_{\textrm{PKD}}}$, with the RKD and PKD techniques, respectively. $\mathcal{S}_{_{\textrm{RKD}}}$ is equivalent to a configuration known as DistilBERT in the literature \cite{Sanh2019DistilBERTAD}. To have precise results and a better comparison, we trained/fine-tuned all models in the same experimental environment. Accordingly, we do not borrow any results from the literature but reproduce them. This is the reason we use the term equivalent for these two models. Furthermore, DistilBERT has an extra cosine embedding loss in addition to those of $\mathcal{S}_{_{\textrm{RKD}}}$. When investigating the impact of intermediate layers in the context of KD, we wanted $\mathcal{L}_{P}$ to be the only difference between RKD and PKD; incorporating any other factor could hurt our investigation, and we thus avoided the cosine embedding loss in our implementation. PKD outperforms RKD by an acceptable margin in Table \ref{t:1}, and that is because of the engagement of intermediate layers. For $\mathcal{S}_{_{\textrm{PKD}}}$, we divided teacher layers into $3$ buckets ($4$ layers in each) and picked the first layer of each bucket to connect to student layers, i.e. $\mathcal{A}(1)=h_{\mathcal{T}}^1$, $\mathcal{A}(2)=h_{\mathcal{T}}^5$, and $\mathcal{A}(3)=h_{\mathcal{T}}^9$. There is no teacher layer assigned to the last layer of the student. This form of mapping maximizes PKD's performance, which we figured out via an empirical study. \begin{table*}[ht!]
\centering \begin{tabular}{l l l l l l l l l l l} \toprule Problem &Model & CoLA & MNLI & MRPC & QNLI & QQP & RTE & SST-2 & STS-B & Average\\ \hline N/A & $\mathcal{T}_{_{\textrm{BERT}}}$ & 57.31& 83.39& 86.76& 91.25& 90.96& 68.23& 92.67& 88.82& 82.42 \\\hline N/A & $\mathcal{S}_{_{\textrm{NKD}}}$ & 40.33& 79.91& 81.86& 87.57 &90.21 &65.34 &90.02& 88.49& 77.97 \\ \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{RKD}}}$ & 45.51 &81.41& 83.82& 88.21& 90.56& 67.51& 91.51& 88.70& 79.65 \\\hline \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{PKD}}}$ & 45.78& \textbf{82.18}& 85.05& 89.31& 90.73& 68.23& 91.51& 88.56& 80.17 \\\hline \textit{search} & $\mathcal{S}_{_{\textrm{CKD-NO}}}$ & \textbf{48.49}& 81.91& 83.82& 89.53& 90.64& 67.51& 91.40& 88.73& 80.25 \\ \textit{search} & $\mathcal{S}_{_{\textrm{CKD-PO}}}$ & 46.99 &81.99& 83.82& 89.44 &\textbf{90.82}& 67.51& 91.17& 88.62& 80.05 \\\hline \textit{search} & $\mathcal{S}_{_{\textrm{ALP-NO}}}$ & 46.40& 81.99&\textbf{85.78}& \textbf{89.71}& 90.64& \textbf{68.95}& \textbf{91.86}& \textbf{88.81}& \textbf{80.52} \\ \textit{search} & $\mathcal{S}_{_{\textrm{ALP-PO}}}$ & 46.02& 82.04 &84.07& 89.16& 90.56& 68.23& 91.74& 88.72 &80.07 \\\hline \textit{none} & $\mathcal{S}_{_{\textrm{ALP}}}$& 46.81& 81.86& 85.05& 89.67& 90.73& 68.59& \textbf{91.86}& 88.68& 80.41 \\ \bottomrule \end{tabular} \caption{\label{t:2} The teacher model $\mathcal{T}_{_{\textrm{BERT}}}$ has $12$ and all other student models have $6$ layers.} \end{table*} Results discussed so far demonstrate that cross-model layer mapping is effective, but it can be improved even more if the skip issue is settled. Therefore, we trained two other students using CKD. The setting for these models is identical to PKD, namely teacher layers are divided into $3$ buckets. The first $4$ teacher layers reside in the first bucket. The fifth to eighth layers are in the second bucket and the rest are covered by the third bucket. 
Layers inside the first bucket are concatenated and passed through a projection layer to match the student layers' dimension. The combination result for the first bucket is assigned to the first student layer ($\mathcal{C}^1 \leftrightarrow h_{\mathcal{S}}^1$). The same procedure is repeated with the second and third buckets for $h_{\mathcal{S}}^2$ and $h_{\mathcal{S}}^3$. Similar to PKD, there is no teacher layer connected to the last student layer. This configuration is referred to as \textbf{N}o \textbf{O}verlap (\textbf{NO}), which indicates that buckets share no layers with each other. In addition to \textbf{NO}, we designed a second configuration, \textbf{PO}, which stands for \textbf{P}artial \textbf{O}verlap. In \textbf{PO}, each bucket shares its first layer with the preceding bucket, so the first bucket includes the first to fifth layers, the second bucket includes the fifth to ninth layers, and the ninth layer onward resides in the third bucket. We explored this additional configuration to see the impact of different bucketing strategies on CKD. Comparing $\mathcal{S}_{_{\textrm{CKD}}}$ to $\mathcal{S}_{_{\textrm{PKD}}}$ shows that the combination (concatenation+projection) idea is useful in some cases, but for others the simple skip idea is still better. Even defining different bucketing strategies did not change this drastically, which leads us to believe that a better form of combination, such as an attention-based model, is required. In the $\mathcal{S}_{_{\textrm{ALP}}}$ extensions, we replace CKD's concatenation with attention and results improve. ALP-KD is consistently better than all other RKD, PKD, and CKD variations, and this justifies the necessity of using attention for combination. $\mathcal{S}_{_{\textrm{ALP-NO}}}$ and $\mathcal{S}_{_{\textrm{ALP-PO}}}$ also directly support this claim. In $\mathcal{S}_{_{\textrm{ALP}}}$, we followed Equation \ref{eq:5} and spanned the attention mask over all teacher layers.
This setting provides a model that requires no engineering adjustment to deal with the \textit{skip} and \textit{search} problems and yet delivers the best result on average. \subsection{Training Deeper/Shallower Models Than $4$-Layer Students} So far we compared $4$-layer ALP-KD models to others and observed superior results. In this section, we design additional experiments to study our technique's behaviour from the size perspective. The original idea of PKD was proposed to distill from a $12$-layer BERT into a $6$-layer student \cite{pkd}. In such a scenario, only every other layer of the teacher is skipped, and it seems the student model should not suffer dramatically from the skip problem. We repeated this experiment to understand whether our combination idea is still useful or its impact diminishes when student and teacher models have closer architectures. Table \ref{t:2} summarizes the findings of this experiment. Among $6$-layer students, $\mathcal{S}_{_{\textrm{ALP-NO}}}$ has the best average score, which demonstrates that the combinatorial approach is still useful. Moreover, the supremacy of attention-based combination over simple concatenation holds for this setting too. $\mathcal{S}_{_{\textrm{ALP}}}$ is the second best and yet our favorite model, as it requires no layer alignment before training. The gap between PKD and ALP-KD is narrower for $6$-layer models than for $4$-layer students, and this might be due to an implicit relation between the model size and the need for combining intermediate layers. We focused on this hypothesis in another experiment and this time used the same teacher to train $2$-layer students. In this scenario, student models are considerably smaller with only $39$M parameters. Results of this experiment are reported in Table \ref{t:3}. \begin{table*}[ht!]
\centering \begin{tabular}{l l l l l l l l l l l} \toprule Problem &Model & CoLA & MNLI & MRPC & QNLI & QQP & RTE & SST-2 & STS-B & Average\\ \hline N/A & $\mathcal{T}_{_{\textrm{BERT}}}$ & 57.31& 83.39& 86.76& 91.25& 90.96& 68.23& 92.67& 88.82& 82.42 \\\hline N/A & $\mathcal{S}_{_{\textrm{NKD}}}$ & 14.50& 72.73& 72.06& 79.61 &86.89& 57.04& 85.89& 40.80& 63.69 \\ \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{RKD}}}$ & 24.50& 74.90& 73.53& 81.04& 87.40& 59.21 &87.39& 41.87& 66.23 \\\hline \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{PKD-1}}}$ & 23.09& 74.65& 72.55& 81.27 &87.68& 57.40& 88.76& 43.37& 66.1 \\ \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{PKD-6}}}$ &22.48& 74.57& 73.04& 80.74& 87.70& 57.40& 88.65& 42.92& 65.94 \\ \textit{skip}, \textit{search} & $\mathcal{S}_{_{\textrm{PKD-12}}}$ & 22.46 &74.33& 72.79& 81.22& 87.88& 57.40& 88.76& 45.39& 66.28 \\\hline \textit{search} & $\mathcal{S}_{_{\textrm{CKD}}}$ & \textbf{24.69}& 74.67& 73.04& \textbf{81.60}& 87.10 &58.84& 88.65& 43.71& 66.54 \\\hline \textit{none} & $\mathcal{S}_{_{\textrm{ALP}}}$& 24.61 &\textbf{74.78}& \textbf{73.53}& 81.24& \textbf{88.01} &\textbf{59.57}& \textbf{88.88}& \textbf{46.04}& \textbf{67.08} \\ \bottomrule \end{tabular} \caption{\label{t:3} The teacher model $\mathcal{T}_{_{\textrm{BERT}}}$ has $12$ and all other student models have $2$ layers. $\mathcal{S}_{_{\textrm{PKD-\textit{l}}}}$ indicates that $h_\mathcal{T}^l$ is used for distillation.} \end{table*} For CKD and ALP-KD, we combine all teacher layers and distill into the first layer of the student. Similar to previous experiments, there is no connection between the last layer of $2$-layer students and the teacher model and KD happens between $h_\mathcal{S}^1$ and $H_\mathcal{T}$. 
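The concatenation+projection combination used by CKD in this setting can be sketched as follows; the sizes and the random matrix `W` are illustrative stand-ins for the trained projection layer:

```python
import numpy as np

rng = np.random.default_rng(1)
hidden, n_teacher = 8, 12      # illustrative sizes, not the paper's

# Hidden states of all teacher layers (CKD with a single bucket H_T).
H_t = rng.normal(size=(n_teacher, hidden))

# Concatenate the bucket, then project back down to the student width.
concat = H_t.reshape(-1)                     # (n_teacher * hidden,)
W = rng.normal(size=(hidden, n_teacher * hidden))
target = W @ concat                          # distillation target for h_S^1

assert concat.shape == (n_teacher * hidden,)
assert target.shape == (hidden,)
```

Unlike the attention-based variant, the projection here weights every teacher layer with fixed learned parameters, independently of the input.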
For PKD, we need to decide which teacher layers should be involved in distillation, for which we assessed three configurations with the first ($h_\mathcal{S}^1 \leftrightarrow h_\mathcal{T}^1$), sixth ($h_\mathcal{S}^1 \leftrightarrow h_\mathcal{T}^6$), and twelfth ($h_\mathcal{S}^1 \leftrightarrow h_\mathcal{T}^{12}$) layers. $\mathcal{S}_{_{\textrm{ALP}}}$ outperforms the other students in this case too, and this time the gap between PKD and ALP-KD is even more visible. This result points to the fact that when teacher and student models differ significantly, intermediate layer combination becomes crucial. \subsection{Qualitative Analysis} We visualized attention weights to understand what happens during training and why ALP-KD leads to better performance. Figure \ref{fig:3} illustrates the results of this experiment. From the SST-2 dataset, we randomly selected $10$ examples and stimulated both teacher and student models to emit attention weights between the first layer of the student ($h_\mathcal{S}^1$) and all teacher layers ($H_\mathcal{T}$). We carried out this experiment with $2$-, $4$-, and $6$-layer $\mathcal{S}_{_{\textrm{ALP}}}$ models. The \textit{x} and \textit{y} axes in the figure show the attention weights and the $10$ examples, respectively. \begin{figure}[th] \includegraphics[scale=0.23]{visu} \caption{\label{fig:3} Visualizing attention weights between the first layer of the student model and all teacher layers for $10$ samples from SST-2. Weights belong to $\mathcal{S}_{_{\textrm{ALP}}}$ with $2$ (a), $4$ (b), and $6$ (c) layers.} \end{figure} As seen in Figure \ref{fig:3}a, the first half of the teacher model is more active, which is expected since we distill into the first layer of the student. However, $h_\mathcal{S}^{1}$ receives strong signals from layers in the second half too, e.g. in {\fontfamily{pcr}\selectfont Example-10} there is a strong connection between $h_\mathcal{T}^{11}$ and $h_\mathcal{S}^{1}$.
This visualization demonstrates that all teacher layers participate in distillation, and that defining buckets or skipping layers might not be the best approach. A similar situation arises when distilling into the $4$-layer model in Figure \ref{fig:3}b, as the first half is still more active. For the $6$-layer model, we see a different pattern, with attention weights concentrated around the middle layers of the teacher: $h_\mathcal{S}^1$ is mainly fed by layers $h_\mathcal{T}^4$ to $h_\mathcal{T}^7$. Considering the distribution of attention weights, any skip- or even concatenation-based approach would fail to reveal the maximum capacity of KD. Such approaches assume that a single teacher layer or a subset of adjacent layers affects the student model, whereas almost all of them participate in the process. Apart from the previously reported results, this visualization again justifies the need for an attention-based combination in KD. Our technique emphasizes intermediate layers and the necessity of having similar internal representations between student and teacher models, so in addition to attention weights we also visualized the output of intermediate layers. The main idea behind this analysis is to show the information flow inside student models and how ALP-KD helps them mimic their teacher. Figures \ref{fig:internal-layers}a and \ref{fig:internal-layers}b illustrate this experiment. We randomly selected $100$ samples from the SST-2 dataset and visualized what the hidden representations of the $\mathcal{S}_{_{\textrm{ALP}}}$, $\mathcal{S}_{_{\textrm{PKD}}}$, and $\mathcal{T}$ models (from Table \ref{t:1}) look like when stimulated with these inputs. Student models have $4$ layers, but due to space limitations we only show the middle layers' outputs, namely $h_\mathcal{S}^2$ (Figure \ref{fig:internal-layers}a) and $h_\mathcal{S}^3$ (Figure \ref{fig:internal-layers}b). $h_\mathcal{S}^1$ and $h_\mathcal{S}^4$ exhibited very similar behaviour.
The output of each intermediate layer is a $768$-dimensional vector, but for visualization purposes we consider the first two principal components extracted via PCA \cite{wold1987principal}. During training, $h_\mathcal{T}^5$ and $h_\mathcal{T}^9$ are connected to $h_\mathcal{S}^2$ and $h_\mathcal{S}^3$ as the source of distillation in PKD, so we also include those teacher layers' outputs in our visualization. As the figure shows, ALP-KD's representations are closer to the teacher's, which demonstrates that our technique helps train better students whose characteristics are closer to their teachers'. \begin{figure} \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.92\linewidth]{layer-2} \caption{} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.92\linewidth]{layer-3} \caption{} \label{fig:sub2} \end{subfigure} \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.93\linewidth]{dist2} \caption{} \label{fig:sub3} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.93\linewidth]{dist3} \caption{} \label{fig:sub4} \end{subfigure} \caption{Visualizing intermediate layers' outputs and their distance from the teacher in ALP-KD and PKD students. Teacher-, ALP-KD-, and PKD-related information is visualized with {green}, {red}, and {blue} colors, respectively. Figures 4a and 4c provide information about $h^{2}_{\textrm{ALP}}$, $h^{2}_{\textrm{PKD}}$, and $h_\mathcal{T}^5$, and Figures 4b and 4d report information about $h^{3}_{\textrm{ALP}}$, $h^{3}_{\textrm{PKD}}$, and $h_\mathcal{T}^9$. In the bottom figures, the \textit{x} axis shows samples and the \textit{y} axis is the Cosine distance from the teacher.} \label{fig:internal-layers} \end{figure} We conducted another complementary analysis where we used the output of the same teacher and student layers from the previous experiment and measured their distance for all $100$ examples.
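The 2-D projection and the per-sample distance measure behind these figures can be sketched as below; the layer outputs here are synthetic stand-ins (the student tracks the teacher up to noise), not the actual model activations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 100, 768   # 100 SST-2 samples, BERT hidden width

# Synthetic stand-ins for intermediate-layer outputs (illustration only).
H_teacher = rng.normal(size=(n, dim))
H_student = H_teacher + 0.1 * rng.normal(size=(n, dim))

# First two principal components of the pooled, centered outputs via SVD.
X = np.vstack([H_teacher, H_student])
X = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T            # 2-D points to scatter, as in the figure

# Per-sample Cosine distance between student and teacher outputs.
sim = np.sum(H_student * H_teacher, axis=1) / (
    np.linalg.norm(H_student, axis=1) * np.linalg.norm(H_teacher, axis=1))
cos_dist = 1.0 - sim

assert coords.shape == (2 * n, 2)
assert np.all(cos_dist >= 0.0)
```

With real activations, lower `cos_dist` values for the ALP-KD student correspond to representations that track the teacher more closely.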
Results of this experiment are illustrated in Figures \ref{fig:internal-layers}c and \ref{fig:internal-layers}d for the second and third student layers, respectively. Internal representations generated by PKD are more distant from those of the teacher than ALP-KD's representations, e.g. the distance between $h^{20,2}_{\textrm{PKD}}$ (the output of the second PKD layer for the $20$-\textit{th} example in Figure \ref{fig:internal-layers}c) and $h^{20,5}_{\mathcal{T}}$ is around $0.20$, whereas this number is only $0.05$ for ALP-KD. This is an indication that the ALP-KD student follows its teacher better than the PKD student. Distances in this experiment were measured with the Cosine distance. \section{Conclusion and Future Work} In this paper, we discussed the importance of distilling from intermediate layers and proposed an attention-based technique to combine teacher layers without skipping them. Experimental results show that the combination idea is effective. Our findings in this research can be summarized as follows: \begin{itemize} \item It seems that, in order to distill from deep teachers with multiple internal components, combination is essential. \item The more teacher and student models differ in terms of the number of layers, the more crucial intermediate layer combination becomes. \item Although a simple concatenation of layers is still better than skipping in many cases, an attention-based combination is required to obtain competitive results. \item ALP-KD can be tuned to combine layers inside buckets, and this approach is likely to yield state-of-the-art results; but if there is not enough knowledge to decide about buckets, a simple attention mask over all teacher layers should solve the problem. \end{itemize} As our future direction, we are interested in applying ALP-KD to other tasks to distill from extremely deep teachers into compact students. Moreover, we will work on designing better attention modules.
Techniques that are able to handle sparse structures could be more useful in our architecture. Finally, we would like to adapt our model to combine other internal components such as attention heads. \section*{Acknowledgement} We would like to thank our anonymous reviewers as well as Chao Xing and David Alfonso Hermelo from Huawei Noah's Ark Lab for their valuable feedback.
\section{Introduction}\label{int} This article is a further step in a program of investigation of the infrared problems in electrodynamics. Among them, the long-time asymptotic behavior of the charged matter fields is one of the key issues. Such questions as understanding what a charged particle is, or how to define the scattering operator in quantum electrodynamics, are correlated to this problem. It is well-known that the long-time asymptotics poses problems in theories with long-range interactions. The standard way to deal with this is to modify the asymptotic dynamics of charged particles or fields, by Dollard or similar methods, augmented by some `dressing' of charged particles. The cost is the loss of a clear interpretation of the asymptotic dynamics. Moreover, in physically realistic quantum field theory models this procedure has not resulted, up to now, in a non-perturbational understanding of the issue.\footnote{For (generalized) Dollard methods see the monograph \cite{der97}. Recent examples of the use of `dressing' in the construction of simplified quantum field models include \cite{ms14} and \cite{dyb17}. A recent attempt at a precise implementation of the Dollard idea in general quantum field theory (in the form proposed much earlier in rather imprecise terms by Kulish and Faddeev) may be found in \cite{duc19}.} In a series of articles I have put forward the idea that the long-time asymptotics problem may be relieved by an appropriate choice of gauge of the electromagnetic potential. In a recent article \cite{her19} a Schr\"{o}dinger (nonrelativistic) particle was considered in a time-dependent electromagnetic field, of the form typical for scattering situations. It was found that an appropriate choice of gauge allows the existence of the asymptotic dynamics, with no need for dressing.\footnote{What remains an open question for this system is the asymptotic completeness.
In the article we expressed the view that the clash between the symmetry groups of the two parts of the system: Lorentz for the Maxwell and Galilean for the Schr\"{o}dinger equations, might make the issue more problematic. Our present discussion seems to confirm this.} We also refer the reader to this article for a more extensive description of the context and our motivation. Here we implement the idea in the case of the classical Dirac field evolving in an external electromagnetic field. The system is not fully self-interacting, but the form of the external electromagnetic field mimics the expected properties of this field in a fully interacting system. We consider the evolution of the Dirac field as a unitary evolution in a Hilbert space. However, as in the previously considered nonrelativistic case, the evolution does not take place in standard time, over flat pure space sheets. It turns out that for our purposes it is convenient to consider a Cauchy foliation supplied by constant $\tau$ surfaces, where $(\tau,\z)$ form the coordinate system\footnote{I first proposed the use of these coordinates in the present context in 1999, as a natural extension of the hyperbolic foliation of the inside of the lightcone, to a Cauchy foliation of the whole spacetime. The evolution of the Dirac field over the hyperbolic foliation, and its large hyperbolic time asymptotics, was analysed in \cite{her95} in terms of a Fourier-like transformation, which transformed this evolution to a unitary evolution in the Hilbert space of spinors on the hyperboloid of $4$-velocities. The existence of the wave operators for the Dirac field in an external electromagnetic field was established (in an appropriate gauge, see below), but no results on asymptotic completeness were obtained. The idea now was to formulate the evolution over the foliation \eqref{intvar} in similar Fourier terms.
Piotr Marecki, my student at that time, carried out in his MSc Thesis \cite{mar00} calculations of this programme for the free Dirac field (scattering was only briefly mentioned in this thesis). This method has its assets, but it is inconvenient for the analysis of many further questions like the self-adjointness of the generators, the analysis of the domain of validity of the Dirac equation in the differential form, or the asymptotic completeness of the interacting case, questions that had not been considered. Here we use another method, which can be developed much further, and which enabled the solution of all these questions.} defined by: \begin{equation}\label{intvar} x^0=\tau(|\z|^2+1)^\frac{1}{2}\,,\qquad \x=(\tau^2+1)^\frac{1}{2}\z\,. \end{equation} This poses a technical complication even at the level of the free Dirac equation, as the unitary evolution is not a unitary one-parameter group (the Hamiltonian depends on $\tau$).\footnote{ One should also mention that the use of hypersurfaces more general than the flat equal-time ones places our problem as a special case of the problem of hyperbolic equations on Lorentzian manifolds, a question intensively studied in the past, see the recent monograph \cite{bgp07}. However, our more specific system allows us to apply more specific Hilbert space methods, and obtain stronger results.} This is to be contrasted with the nonrelativistic case considered earlier, where the Niederer transformation introducing suitable coordinates leads from the free particle Hamiltonian to that of the harmonic oscillator (a~kind of beautiful miracle). Nevertheless, the Schr\"{o}dinger operator case becomes steeply more difficult when a potential with a space part is introduced, as then the term $\mathbf{A}\cdot\mathbf{p}$ prevents the application of the Dyson series method. To deal with that case we applied the Kato theorem. Here, for the free Dirac equation we proceed differently, and our method may be of independent interest.
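Returning to the coordinates \eqref{intvar}: as a quick consistency check (our addition, not part of the original derivation), they give a simple expression for the Minkowski square of $x$,

```latex
x\cdot x=(x^0)^2-|\x|^2
        =\tau^2\,(|\z|^2+1)-(\tau^2+1)\,|\z|^2
        =\tau^2-|\z|^2\,,
```

so each surface of constant $\tau$ crosses the lightcone at $|\z|=|\tau|$, interpolating between the hyperboloidal region inside the cone and the spacelike region outside it.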
The addition of the interaction with the electromagnetic field may then be treated, in contrast to the nonrelativistic Schr\"{o}dinger case, by a variation of the Dyson method combined with the relativistic causality. For this system we show that an appropriate choice of gauge removes the asymptotic problem. We show the existence and asymptotic completeness of the wave operators, with no need for any modification of the asymptotic free Dirac evolution.\footnote{The Cauchy problem and the asymptotic completeness of the interacting Maxwell-Dirac system were considered by Flato et al.\ in \cite{fst95}. However, their analysis needs strong smoothness and smallness assumptions, the latter not under well-determined control, and uses methods rather ill-suited for application in the quantum case (`nonlinear representation of the Poincar\'e group'). Also, the authors modify the asymptotic dynamics by a variation of the Dollard method. Our aims are different, as explained above.} This result will show that the choice of a gauge $\cA(x)$ in this classical field setting is of decisive importance for the asymptotic identification of the incoming/outgoing charged fields. The main qualitative feature of gauges in this appropriate class is that the product $x\cdot\cA(x)$ vanishes in timelike directions, see the remarks in Section \ref{disc} (a property already identified in \cite{her95}). We would like to stress that the theory is formulated from the outset in such a chosen gauge. Whether such a formulation may be carried over to quantum field theory is a subject for future research.\footnote{We postpone to such a prospective publication a comparison with the existing discussions of the relevance of gauge choice in QED. Relevant here is the idea, originated by Dirac \cite{dir55}, of an \emph{a posteriori} transformation to a `gauge-independent gauge', developed by many authors, most exhaustively by Steinmann, see his monograph \cite{sta00}, Chapter 12.
Also, the expectation (not shared by everyone) that formulations of QED in differing gauges need not be equivalent received recent support from a mathematical analysis of a simplified model \cite{dyb19} (where a literature account may also be found).} This prospective investigation should also find contact with the asymptotic algebra of fields in quantum electrodynamics postulated by the author \cite{her98} (see also \cite{her17}). Here let us only mention that the use of a gauge in the class anticipated above could lead to some broadening of the scope of external classical, time-dependent electromagnetic fields for which the scattering operator for the classical Dirac field may be lifted to the case of the respective quantum field. As is well known, the mixing of electrons/positrons leads to rather severe restrictions in this respect\footnote{This is a standard example of the fact that classical external interaction problems for quantum fields create problems of their own, which are not expected to propagate into the full closed quantum theory. An even more restrictive question in the case of the quantum Dirac field in an external classical field is whether a unitary evolution operator exists for this system. As shown in \cite{rui76} (for the standard evolution over the flat foliation of the spacetime), this may be possible only if the magnetic field vanishes. This brings to sharp light the rather restricted physical relevance of such questions and models (note that this condition is not even inertial observer-independent; anyway, this problem is not related to infrared questions).} (see e.g.\ the monograph \cite{sha95}, Sections 2.4 and~2.5). Precisely what gain would be possible is an open question. The outline of the article is as follows. In Section \ref{free} we formulate the Dirac evolution as a unitary evolution, with time-dependent self-adjoint generators, over a rather general family of Cauchy surfaces.
Section \ref{evolelmg} gives the formulation of the external field problem with the Dyson series method, but with the important use of the relativistic causality. In both the free and the interacting case the self-adjointness is achieved with the use of the commutator theorem (see Appendix \ref{open}) and the harmonic oscillator Hamiltonian. In Section \ref{sfnpfa} we specify our choice of coordinates to those mentioned above and transform the evolution to a new `picture'. In this picture the free Dirac evolution on our foliation has well-defined limits as unitary operators. Section \ref{elmintgau} specifies the general external problem to the electromagnetic case and discusses the gauge transformation. Section \ref{scattering} contains our central results, formulated for a wide class of electromagnetic fields: the existence and asymptotic completeness of the wave operators. Section \ref{typspec} gives a theorem showing that the electromagnetic fields typical for the scattering contexts described above admit potentials in gauges satisfying the demands of the main theorems of Section \ref{scattering}. Section \ref{disc} offers some remarks on the implications of our results and on a further physically motivated restriction of the class of the obtained electromagnetic gauges. Large parts of the material are shifted to the appendices. In Appendix \ref{trK} we discuss a spinor transformation needed in Section~\ref{free}. Appendix \ref{open} describes our method to deal with a class of time-dependent Hamiltonians. A lemma needed for the application of this method to the system considered here is discussed in Appendix \ref{lemma}. In Appendix \ref{spvar} we gather geometrical facts and relations in our special coordinate system. Appendices \ref{fDe} and \ref{fwe} recapitulate some properties of the solutions of the free Dirac and wave equations, respectively.
Appendix~\ref{decay} contains some estimates of the decay of the advanced and retarded solutions of the inhomogeneous wave equation and their differences (radiation fields). The results of Appendices \ref{fDe}--\ref{decay} are applied next in Appendix \ref{elmfields} to the case of the electromagnetic fields typical for scattering contexts. Finally, the necessary decay properties of the special gauge introduced in Section~\ref{typspec} are obtained in Appendix \ref{spgava}. \section{Free Dirac evolution}\label{free} Throughout the article we set $\hbar=1$, $c=1$. We choose a reference point, and then the flat spacetime is identified with the Minkowski vector space. Let $m$ be the mass parameter in the free Dirac equation. To simplify notation we rescale Minkowski vectors by multiplying them by $m$, and denote the resulting space by $M$, and its dimensionless vectors by $x^a$. The flat (covariant) derivative in the rescaled Minkowski space is denoted by $\n_a$. Also, the electromagnetic interaction to appear later will be introduced by the interaction term $\cA_a\ov{\psi}\ga^a\psi$, so to recover the physical units and quantities one should replace $\cA\rightarrow (e/m)\cA$, with $e$ the elementary charge. Let $\tau:M\mapsto\mR$ be a smooth surjective function such that the hypersurfaces $\Sigma_\tau$ of constant $\tau$ form a Cauchy foliation of the Minkowski spacetime, with $\tau$ increasing into the future. Consider the Dirac equation, which we write in the form \begin{equation}\label{fDirac} (\tfrac{1}{2}[\ga^a,i\n_a]_+-1)\psi=0 \end{equation} in our dimensionless coordinates, where $\gamma^a$ are the Dirac matrices and $[.,.]_+$ symbolizes the anticommutator. 
As is well-known, the Cauchy problem for this equation with the initial data $\psi|_{\Sigma_{\tau_0}}=f$ is explicitly solved by the formula \begin{equation} \psi(x)=i^{-1}\int_{\Sigma_{\tau_0}} S(x-y)\ga^a f(y)\, d\si_a(y)\,, \end{equation} where $d\si_a$ is the dual integration element on $\Sigma_{\tau_0}$ and $S(x)$ is the standard Green function of the free Dirac field, in the dimensionless coordinates \begin{equation} S(x)=(i\ga\cdot\n+1)D_1(x)\,,\quad D_1(x)=\frac{i}{(2\pi)^3} \int\sgn(v^0)\delta(v^2-1)e^{-iv\cdot x}dv\,. \end{equation} If $f$ is a smooth bi-spinor function on $\Sigma_{\tau_0}$, with a compact support, then $\psi(x)$ is a~smooth function in $M$, with compact support on each Cauchy surface $\Sigma_{\tau}$. Therefore, we obtain a bijective evolution mapping between the spaces of smooth, compactly supported bi-spinor functions on our family of Cauchy surfaces. Moreover, if on each $\Sigma_\tau$ one defines the scalar product \begin{equation}\label{prtau} (\psi_1,\psi_2)_\tau=\int_{\Sigma_\tau} \ov{\psi_1}\ga^a\psi_2\,d\si_a\,, \end{equation} then the evolution is isometric. Let now $(\nv^\mu)=(\nv^0,\nv^i)=(\tau,z^i)\equiv(\tau,\z )\in\mR^4$ ($i=1,2,3$), with $\tau$ des\-cribed above, be a smooth curvilinear coordinate system, mapping $M$ diffeomorphically onto $\mR^4$. Denote by $\eta_{ab}$ and $C^{a\ldots}{}_{b\ldots}$ the Minkowski spacetime metric tensor, and any other tensor, respectively. 
Then the geometrical components in the coordinate system $(\nv^\mu)$ will be denoted by\pagebreak[1] \begin{equation}\label{compxi} \begin{gathered} \p_\mu=\frac{\p}{\p\nv^\mu}=\frac{\p x^a}{\p\nv^\mu}\n_a\,,\quad \p_\tau=\frac{\p}{\p\tau}\,,\quad \p_i=\frac{\p}{\p z^i}\,,\\[1ex] \gax^\mu=(\n_a\nv^\mu)\ga^a\,,\quad \hat{C}^{\mu\ldots}{}_{\nu\ldots} =(\n_a\nv^\mu)\ldots C^{a\ldots}{}_{b\ldots}\frac{\p x^b}{\p\nv^\nu}\ldots \,,\\ g_{\mu\nu}=\frac{\p x^a}{\p\nv^\mu}\frac{\p x^b}{\p\nv^\nu}\eta_{ab}\,,\quad g=\det(g_{\mu\nu})\,,\quad \gz=\det[(g_{ij})_{i,j\leq3}]\,. \end{gathered} \end{equation} Components with indices restricted to $1,2,3$ will be written as $\gax^i$, $g_{ij}$ and $\hat{C}^{i\ldots}{}_{j\ldots}$, and the zeroth index will be indicated by $\tau$; thus for instance: $\cA_a\ga^a=\hA_\mu \gax^\mu=\hA_\tau\gax^\tau+\hA_i\gax^i$. In~these coordinates the Dirac equation~\eqref{fDirac} and the product \eqref{prtau} take the form \begin{gather} \Big(\tfrac{i}{2}\Big[|g|^{\frac{1}{2}}\gax^\mu,|g|^{-\frac{1}{2}}\p_\mu\Big]_+ -1\Big)\psi=0\,,\label{dc}\\[1ex] (\psi_1,\psi_2)_\tau=\int\ov{\psi_1}\gax^\tau\psi_2\, |g|^{\frac{1}{2}}d^3z\,.\label{pc} \end{gather} We now choose a Minkowski reference system $(e_0,\ldots,e_3)$. Let us denote \begin{equation}\label{n} n=[g^{\tau\tau}]^{-\frac{1}{2}}\n\tau \end{equation} and let $K$ be the Lorentz rotation of Dirac spinors in the hyperplane spanned by the pair of timelike unit vectors $(e_0,n)$, such that \begin{equation}\label{defK} K^{-1}\ga^a n_a\,K=\ga^0\,,\qquad K^\hc=\ga^0K^{-1}\ga^0\,, \end{equation} where the dagger $\hc$ denotes the hermitian conjugation of matrices. The form and further properties of $K$ are discussed in Appendix \ref{trK}. We define the following transformation: \begin{equation}\label{psich} \psi=T_\tau\chi\,,\qquad T_\tau=(|g|g^{\tau\tau})^{-\frac{1}{4}}K=|\gz|^{-\frac{1}{4}}K\,.
\end{equation} The product \eqref{pc} then takes the standard $\mC^4\otimes L^2(\mR^3)$ form \begin{equation} (\psi_1,\psi_2)_\tau=\int \chi_1^\hc\chi_2 d^3z\,. \end{equation} Setting $\psi$ into \eqref{dc} and applying from the left the transformation $(g^{\tau\tau})^{-\frac{1}{2}}T_\tau^{-1}$ we obtain an equivalent form of the Dirac equation \begin{multline}\label{dcT} \Big(\tfrac{i}{2}\Big[(g^{\tau\tau})^{-\frac{1}{2}}\gax_K^\mu,\p_\mu\Big]_+ -(g^{\tau\tau})^{-\frac{1}{2}}\Big)\chi\\ +\tfrac{i}{2}(g^{\tau\tau})^{-\frac{1}{2}} \Big[\gax_K^\mu,(K^{-1}\p_\mu K)\Big]_+\chi=0\,, \end{multline} where we have introduced notation \begin{equation}\label{gaK} \ga_K=K^{-1}\ga K\,. \end{equation} We now make an additional assumption: $\Sigma_\tau$ is rotationally symmetric in the chosen Minkowski system, that is $\tau=\tau(x^0,|\x|)$. We show in Appendix \ref{trK} that in this case the anticommutator in the second line of \eqref{dcT} vanishes. We also note that $(g^{\tau\tau})^{-\frac{1}{2}}\gax_K^\tau=\ga^0\equiv\beta$. This allows us to write the Dirac equation in the form \begin{equation}\label{dcTs} i\p_\tau\chi =\Big(-\tfrac{i}{2}\big[(g^{\tau\tau})^{-\frac{1}{2}}\beta\gax_K^i,\p_i\big]_+ +(g^{\tau\tau})^{-\frac{1}{2}}\beta\Big)\chi\,. \end{equation} We summarize the above discussion. \begin{thm}\label{freeDirac} (i) Let the smooth function on the Minkowski space $\tau=\tau(x^0,|\x|)$ determine its foliation by Cauchy surfaces $\Sigma_\tau$, with $\tau$ increasing into the future. Denote by $C_0^\infty(\Sigma_\tau, \mC^4)$ the space of smooth, compactly supported functions on $\Sigma_\tau$ with values in $\mC^4$. Then the free Dirac evolution of initial data in $C_0^\infty(\Sigma_\sigma, \mC^4)$ is determined by a family of bijective linear evolution operators \begin{equation}\label{fsec} U^\Sigma_0(\tau,\sigma): C_0^\infty(\Sigma_\sigma, \mC^4)\mapsto C_0^\infty(\Sigma_\tau, \mC^4) \end{equation} isometric with respect to the products \eqref{prtau}. 
The size of the support in $\Sigma_\tau$ is restricted by the size of the support in $\Sigma_\si$ and the relativistic causality. By continuity, $U_0^\Sigma(\tau,\sigma)$ extend to unitary operators \begin{equation}\label{fseh} U^\Sigma_0(\tau,\sigma): \Hc(\Sigma_\sigma)\mapsto\Hc(\Sigma_\tau)\,, \end{equation} where $\Hc(\Sigma_\tau)$ is the Hilbert space of spinor functions on $\Sigma_\tau$ with the pro\-duct~\eqref{prtau}. (ii) Let $(\tau,z^i):M\mapsto\mR^4$ be a smooth diffeomorphism, with notation \eqref{compxi}, such that $\tau$ satisfies the assumptions of (i). Denote \begin{equation} \Hc=\mC^4\otimes L^2(\mR^3,d^3z)\,, \end{equation} \begin{equation}\label{H0} H_0(\tau)=\tfrac{1}{2}[\la^i(\tau),p_i]_+ +\mu(\tau)\,,\quad h_0=\tfrac{1}{2}(\pv^2+\z^2)\,, \end{equation} \begin{equation}\label{mula} p_i=-i\p/\p z^i\,,\qquad \mu(\tau)=(g^{\tau\tau})^{-\frac{1}{2}}\beta\,,\qquad \la^i(\tau)=\mu(\tau)\gax_K^i\,. \end{equation} Then the transformation \begin{equation} T_\tau:\Hc\mapsto \Hc(\Sigma_\tau)\,,\qquad T_\tau=|\gz|^{-1/4}K\,, \end{equation} with the operator $K$ discussed in Appendix \ref{trK}, is a unitary operator and the family \begin{equation}\label{Uzero} U_0(\tau,\sigma)=T_\tau^{-1}U_0^\Sigma(\tau,\sigma)T_\sigma\,, \qquad U_0(\tau,\si):\Hc\mapsto\Hc \end{equation} forms a unitary, strongly continuous evolution system, such that the following holds: \begin{itemize} \item[(A)] $U_0(\tau,\si)\Dh=\Dh$ and the relativistic causality is respected. \item[(B)] For $\vph\in\Dh$ the maps $(\tau,\si)\mapsto H_0(\tau)U_0(\tau,\si)\vph$ and $(\tau,\si)\mapsto h_0U_0(\tau,\si)\vph$ are strongly continuous, the map $(\tau,\si)\mapsto U_0(\tau,\si)\vph$ is in the class $C^1$ in the strong sense, and the following equations are satisfied \begin{equation}\label{freeDH} \begin{split} i\p_\tau U_0(\tau,\si)\vph&=H_0(\tau)U_0(\tau,\si)\vph\,,\\ i\p_\si U_0(\tau,\si)\vph&=-U_0(\tau,\si)H_0(\si)\vph\,.
\end{split} \end{equation} \item[(C)] The operators $H_0(\tau)$ are symmetric on $\Dh$. \end{itemize} \end{thm} \begin{proof} The symmetry of $H_0(\tau)$ follows from the symmetry of the operators $\la^i(\tau)$, $\mu(\tau)$ and~$p_i$, and the invariance of $\Dh$ with respect to them. The other statements of the theorem follow easily from the discussion preceding the theorem. \end{proof} We now add assumptions which will be satisfied in the context of our application, and which allow us to significantly extend the domains of validity of the results in the last theorem. We use the abbreviation \begin{equation}\label{zm} \zm=(|\z|^2+1)^\frac{1}{2}\,. \end{equation} Also, the usual multi-index notation will be used. \begin{thm}\label{freeDiracHil} Let all the assumptions of Theorem \ref{freeDirac} be satisfied. Suppose in addition that the following bounds of matrix norms hold: \begin{gather} |\la^i(\tau,\z)|\leq C(\tau)\zm\,,\quad |\p_z^\al\la^i(\tau,\z)|\leq C(\tau)\,,\quad 1\leq|\al|\leq3\,,\label{labound}\\ |\p_z^\beta\mu(\tau,\z)|\leq C(\tau)\zm^{2-|\beta|}\,,\quad |\beta|\leq2\,, \end{gather} where $C(\tau)$ is a continuous function. Denote by $h_0=\tfrac{1}{2}(\pv^2+\z^2)$ the self-adjoint harmonic oscillator Hamiltonian. Then the following holds: \begin{itemize} \item[(A)] $H_0(\tau)$ are essentially self-adjoint on $\Dh$ and on each core of $h_0$, and $\Dc(h_0)\subseteq \Dc(H_0(\tau))$. Moreover, $h_0^{-1}H_0(\tau)$ and $H_0(\tau)h_0^{-1}$ extend to bounded operators, and as functions of $\tau$ are strongly continuous. \item[(B)] The operator $h_0U_0(\tau,\si)h_0^{-1}$ is bounded, with the norm \begin{equation} \|h_0U_0(\tau,\si)h_0^{-1}\| \leq \exp\Big[\con\int_\si^\tau C(\rho)d\rho\Big]\,, \end{equation} so in particular \begin{equation} U_0(\tau,\si)\Dc(h_0)=\Dc(h_0)\,, \end{equation} and the map $(\tau,\si)\mapsto h_0U_0(\tau,\si)h_0^{-1}$ is strongly continuous.
\item[(C)] For $\vph\in\Dc(h_0)$ the vector functions $h_0U_0(\tau,\si)\vph$, $H_0(\tau)U_0(\tau,\si)\vph$ and $U_0(\tau,\si)H_0(\si)\vph $ are strongly continuous with respect to parameters, the vector function $U_0(\tau,\si)\vph$ is of the class $C^1$ with respect to parameters in the strong sense and the following equations hold \begin{align} i\p_\tau U_0(\tau,\si)\vph&=H_0(\tau) U_0(\tau,\si)\vph\,,\\ i\p_\si U_0(\tau,\si)\vph&=-U_0(\tau,\si)H_0(\si)\vph\,. \end{align} \item[(D)] Relativistic causality is respected by $U_0(\tau,\si)$. Therefore, if we denote \begin{gather} \Hc\supset\Hc_c\ \text{--the subspace of functions with compact essential support}\,,\\ \Dc_c(\pv^2)=\Hc_c\cap\Dc(\pv^2)\,, \end{gather} then \begin{equation} U_0(\tau,\si)\Hc_c=\Hc_c\,,\quad U_0(\tau,\si)\Dc_c(\pv^2)=\Dc_c(\pv^2)\,. \end{equation} \end{itemize} \end{thm} \begin{proof} It follows from the assumptions on $\la^i$ and $\mu$ that $H_0(\tau)$ fulfill all the conditions imposed on the operator $h$ in Lemma \ref{lembound} in Appendix \ref{lemma}. In~Theorems \ref{op-ham} and \ref{op-ev} in Appendix~\ref{open} we now put \begin{equation} \Dc=\Dh\,,\quad h(\tau)=H_0(\tau)\,,\quad u(\tau,\si)=U_0(\tau,\si) \end{equation} and $h_0$ as defined in the present assumptions. Then by the results of Theorem~\ref{freeDirac} all the assumptions of these theorems are satisfied and the present thesis follows. \end{proof} \section{Evolution in external field}\label{evolelmg} We now consider the Dirac equation in an external field, of the form \begin{equation} (\tfrac{1}{2}[\ga^a,i\n_a]_+-1-V)\psi=0\,, \end{equation} where $V(x)$ is a matrix function satisfying the condition \begin{equation}\label{Vhc} V(x)^\hc=\ga^0V(x)\ga^0\,, \end{equation} which guarantees the reality of the interaction term $-\ov{\psi}V\psi$ in the lagrangian and the conservation of the current $\ov{\psi}\gamma^a\psi$. 
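For the reader's convenience we verify this reality directly. Writing $\ov{\psi}V\psi=\psi^\hc\ga^0V\psi$ and using only the standard properties $(\ga^0)^\hc=\ga^0$ and $(\ga^0)^2=\1$, together with \eqref{Vhc}, we obtain
\begin{equation*}
(\ov{\psi}V\psi)^\hc=\psi^\hc V^\hc\ga^0\psi =\psi^\hc\ga^0V\ga^0\ga^0\psi=\ov{\psi}V\psi\,.
\end{equation*}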
The addition of this interaction term leads to the modification of the free curvilinear version to the equation of the form
\begin{equation}\label{intW}
i\p_\tau\vph=(\tfrac{1}{2}[\la^i,p_i]_++\mu +W)\vph\,,
\end{equation}
where
\begin{equation}
W(\tau,\z)=\mu K^{-1}V(x)K=W(\tau,\z)^\hc\,,
\end{equation}
hermiticity being equivalent to the property \eqref{Vhc} (see \eqref{defK}). Using the interaction picture technique we shall obtain the evolution operators for which equation \eqref{intW} is satisfied. However, thanks to relativistic causality we can extend the applicability of the technique to the following setting, in which the operators $W(\tau)$ need not be bounded.
\begin{thm}\label{interDirac}
Let all the assumptions of Theorems \ref{freeDirac} and \ref{freeDiracHil} be satisfied. Suppose in addition that $W(\tau,\z)$ is a hermitian matrix function such that the mappings
\begin{equation}\label{propW1}
\big\{\mR\ni\tau\mapsto \p_z^\al W(\tau,\z)\big\} \in C^0(\mR, L_\mathrm{loc}^\infty(\mR^3))\,,\qquad |\al|\leq2\,,
\end{equation}
and
\begin{equation}\label{propW2}
\|\zm^{-2+|\al|}\p_z^\al W(\tau,\z)\|_\infty\leq C(\tau)\,,\qquad |\al|\leq1\,,
\end{equation}
where $C(\tau)$ is a continuous function. Denote
\begin{equation}
H(\tau)=H_0(\tau)+W(\tau)\,,
\end{equation}
with the initial domain $\Dh$, and define the formal series\pagebreak[2]
\begin{equation}\label{U}
\begin{aligned}
U(\tau,\si)&=\sum_{n=0}^\infty U^{(n)}(\tau,\si)\,,\qquad U^{(0)}(\tau,\si)=\1\,,\\
U^{(n)}(\tau,\si) &=(-i)^n \int\limits_{\tau\geq\tau_{n}\geq\ldots\geq\tau_1\geq\si} U_0(\tau,\tau_n)W(\tau_n)U_0(\tau_n,\tau_{n-1})\ldots\\
&\hspace{10em}\ldots W(\tau_1)U_0(\tau_1,\si)d\tau_n\ldots d\tau_1\\
&=-i\int\limits_\si^\tau U_0(\tau,\rho)W(\rho)U^{(n-1)}(\rho,\si)d\rho\\
&=-i\int\limits_\si^\tau U^{(n-1)}(\tau,\rho)W(\rho)U_0(\rho,\si)d\rho\,,\qquad n\geq 1\,.
\end{aligned}
\end{equation}
Then the following is true:
\begin{itemize}
\item[(A)] $H(\tau)$ are essentially self-adjoint on $\Dh$ and on each core of $h_0$, and $\Dc(h_0)\subseteq \Dc(H(\tau))$.
\item[(B)] The series $U(\tau,\si)$ and its conjugate converge strongly on $\Hc_c$, and the limit operator extends to a unitary propagator on $\Hc$, strongly continuous in its parameters.
\item[(C)] $U(\tau,\si)$ respects relativistic causality and
\begin{equation}
U(\tau,\si)\Dc_c(\pv^2)=\Dc_c(\pv^2)\,.
\end{equation}
\item[(D)] For $\vph\in\Dc_c(\pv^2)$ the vector functions $h_0U(\tau,\si)\vph$, $H(\tau)U(\tau,\si)\vph$ and $U(\tau,\si)H(\si)\vph$ are strongly continuous with respect to parameters, the function $U(\tau,\si)\vph$ is of the class $C^1$ with respect to parameters in the strong sense and the following equations hold
\begin{equation}\label{extDirac}
\begin{aligned}
i\p_\tau U(\tau,\si)\vph&=H(\tau) U(\tau,\si)\vph\,,\\
i\p_\si U(\tau,\si)\vph&=-U(\tau,\si)H(\si)\vph\,.
\end{aligned}
\end{equation}
\item[(E)] Unitary operators satisfying (D), with $U(\si,\si)=\1$, are unique.
\end{itemize}
\end{thm}
\begin{proof}
The proof of (A) is similar to the proof in the free case, statement (A) in Theorem \ref{freeDiracHil}. One has to note that the assumption \eqref{propW2} ensures the validity of items (i) and (ii) in Lemma \ref{lembound}; the result of item (iii) is not needed.
Consider the series \eqref{U}. As each of the operators $U_0(\tau,\tau_n)$, $U_0(\tau_k,\tau_{k-1})$ and $U_0(\tau_1,\si)$ in the definition of $U^{(n)}(\tau,\si)$ respects causality, and multiplication by $W(\tau_k,\z)$ does not enlarge the support of the function, each of the operators $U^{(n)}(\tau,\si)$ respects causality. Let, for chosen $T$ and $r$,
\begin{equation}
\tau,\si\in[-T,T]\,,\quad \vph\in\Hc\,,\quad \essupp\vph\subseteq\{\z\mid |\z|\leq r\}\,,
\end{equation}
which is assumed for the rest of this proof.
Then there exists $R(T,r)$ such that the essential support of all functions $U^{(n)}(\tau,\si)\vph$ is contained in $|\z|\leq R(T,r)$ (uniformly with respect to $\tau$, $\si$ and $\vph$ in the assumed range). Let $J(\z)$ be a smooth function such that $J(\z)=1$ for $|\z|\leq R$ and $J(\z)=0$ for $|\z|\geq R+1$. Denote by $U_J(\tau,\si)$ and $U_J^{(n)}(\tau,\si)$ the series \eqref{U} and its terms in which $W(\rho,\z)$ have been replaced by $W_J(\rho,\z)=J(\z)W(\rho,\z)$; note that by assumption \eqref{propW1} $W_J(\rho)$ is a bounded, strongly continuous operator, with $\|W_J(\rho)\|\leq d_1$ for some constant $d_1$, uniformly on $\rho\in[-T,T]$. Using the first of the recursive relations in \eqref{U} it is now easy to see that in the given setting \begin{equation} U^{(n)}(\tau,\si)\vph=U_J^{(n)}(\tau,\si)\vph\,. \end{equation} Arguing as in the proof of the Dyson expansion (see Thm.\, X.69 in \cite{rs79II}) one shows that $U_J(\tau,\si)$ converges uniformly to a strongly continuous unitary propagator. As the space $\Hc_c$ is dense in $\Hc$, the statement (B) is proved. Also, the causality is respected. To prove (C) (the invariance of the subspace) and (D) we write \begin{equation} h_0W_J(\rho) h_0^{-1}=[h_0,W_J(\rho)]h_0^{-1}+W_J(\rho) \end{equation} and note that by Theorem \ref{op-ham} applied to $h(\tau)=W_J(\tau)$ (with Lemma \ref{lembound}) the first term on the rhs is a bounded operator, strongly continuous by the assumption \eqref{propW1}. Therefore, the operator $h_0W_J(\rho)h_0^{-1}$ is also bounded and strongly continuous, and $\|h_0W_J(\rho)h_0^{-1}\|\leq d_2$ for some constant $d_2$, uniformly on $\rho\in[-T,T]$. 
Taking also into account statement (B) in Theorem \ref{freeDiracHil} we conclude that $h_0U_J^{(n)}(\tau,\si)h_0^{-1}$ is bounded, strongly continuous, with the norm estimated by
\begin{equation}
\|h_0 U_J^{(n)}(\tau,\si)h_0^{-1}\| \leq (n!)^{-1}\big((d_1+d_2)|\tau-\si|\big)^n \exp\Big[\con\int_\si^\tau C(\rho)d\rho\Big]\,.
\end{equation}
This implies that the series $\sum_nh_0U_J^{(n)}(\tau,\si)h_0^{-1}$ converges uniformly to the strongly continuous function $h_0U_J(\tau,\si)h_0^{-1}$.
Let us now impose on $\vph$ the stronger assumption that $\vph\in \Dc_c(\pv^2)$. Then
\begin{equation}
h_0U(\tau,\si)\vph=h_0U_J(\tau,\si)\vph=h_0U_J(\tau,\si)h_0^{-1}h_0\vph\,.
\end{equation}
This proves (C), and also the strong continuity of $h_0U(\tau,\si)\vph$ in (D). The strong continuity of $H(\tau)U(\tau,\si)\vph$ and $U(\tau,\si)H(\si)\vph$ now follows from the strong continuity of $H(\tau)h_0^{-1}$, as in Theorem \ref{freeDiracHil}, based on Theorem \ref{op-ham} and the assumptions \eqref{propW1} and \eqref{propW2}.
Next, we note that
\begin{equation}
U_0(0,\tau)U^{(n)}(\tau,\si)\vph =-i\int_\si^\tau U_0(0,\rho)W(\rho)U^{(n-1)}(\rho,\si)\vph\, d\rho\,,
\end{equation}
so this vector function is continuously differentiable in $\tau$ in the strong sense and
\begin{equation}
i\p_\tau[U_0(0,\tau)U^{(n)}(\tau,\si)]\vph = U_0(0,\tau)W(\tau)U^{(n-1)}(\tau,\si)\vph
\end{equation}
(remember that on the rhs $W$ may be replaced by $W_J$). Again, using the uniform convergence of $U_J(\tau,\si)$ we obtain
\begin{equation}
i\p_\tau[U_0(0,\tau)U(\tau,\si)]\vph=U_0(0,\tau)W(\tau)U(\tau,\si)\vph\,,
\end{equation}
and the rhs is strongly continuous in $(\tau,\si)$. Finally, differentiating
\begin{equation}
U(\tau,\si)\vph=U_0(\tau,0)[U_0(0,\tau)U(\tau,\si)]\vph
\end{equation}
by the Leibniz rule and using (C) from Theorem \ref{freeDiracHil} we arrive at the first equation in \eqref{extDirac}. The uniqueness (E) follows easily from this equation.
The proof of the second equation in \eqref{extDirac} is similar, with the use of the second of the recursive relations in \eqref{U}; we omit the details. \end{proof} The most general matrix field $V(x)$ satisfying condition \eqref{Vhc} may be concisely represented in the form \begin{equation} V=\sum_{k=0}^4 i^{\frac{1}{2}k(k-1)}C^k_{a_1\ldots a_k}\ga^{a_1}\ldots\ga^{a_k}\,, \end{equation} where the tensor fields $C^k_{a_1\ldots a_k}$ are real and antisymmetric. The corresponding form of the hermitian matrix function $W$ is \begin{equation} W=\mu\sum_{k=0}^4 i^{\frac{1}{2}k(k-1)} \hat{C}^k_{\mu_1\ldots \mu_k}\gax^{\mu_1}_K\ldots\gax^{\mu_k}_K\,. \end{equation} This form encompasses the scalar ($k=0$) and pseudoscalar ($k=4$) potentials, the electromagnetic vector potential ($k=1$) and the pseudovector potential ($k=3$), the interactions characteristic for anomalous magnetic and electric moments, as well as the linearized gravitation ($k=2$). \section{Special foliation, new picture and free asymptotics}\label{sfnpfa} We now choose the foliation $\tau$ and the variables $\z$ by \begin{equation}\label{tauzed} x^0=\tau\zm\equiv\tau(|\z|^2+1)^{\frac{1}{2}}\,,\qquad \x=\tm\z\equiv(\tau^2+1)^{\frac{1}{2}}\z\,, \end{equation} and we also denote \begin{equation} \tmz=(\tau^2+|\z|^2+1)^{\frac{1}{2}}\,. \end{equation} The idea behind this choice is that for $\tau$ tending to $\pm\infty$ the Cauchy surfaces should tend, for timelike directions in the spacetime, to the hyperboloids $x^2=\tau^2$. All geometrical facts on these curvilinear coordinates needed for our purposes are gathered in Appendix~\ref{spvar}. From now on we adopt the coordinate system \eqref{tauzed}. Using the properties of this system it is then easy to see that all the conditions of Theorems \ref{freeDirac} and \ref{freeDiracHil} are satisfied. Also, a~convenient property of these coordinates is that $g_{\tau i}=0$. 
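This last property, as well as the approach to the hyperboloids mentioned above, may be checked directly. From \eqref{tauzed} we have $\p_\tau x^0=\zm$, $\p_i x^0=\tau z^i/\zm$, $\p_\tau x^j=(\tau/\tm)z^j$ and $\p_i x^j=\tm\delta^j_i$, so in the flat metric
\begin{equation*}
g_{\tau i}=\p_\tau x^0\,\p_i x^0-\p_\tau x^j\,\p_i x^j =\zm\,\frac{\tau z^i}{\zm}-\frac{\tau}{\tm}z^j\,\tm\delta^j_i =\tau z^i-\tau z^i=0\,.
\end{equation*}
Moreover, $x\cdot x=\tau^2\zm^2-\tm^2|\z|^2=\tau^2-|\z|^2$, which makes precise the statement that for $|\tau|\to\infty$, with $\z$ fixed, the surfaces of constant $\tau$ approach the hyperboloids $x^2=\tau^2$.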
Therefore, we find \begin{equation} [\gax_K^i,\beta]_+=(g^{\tau\tau})^{-\frac{1}{2}}K^{-1}[\gax^i,\gax^\tau]_+K =2(g^{\tau\tau})^{-\frac{1}{2}}g^{i\tau}=0\,, \end{equation} and then \begin{gather} [\la^i,\beta]_+=0=[\la^i,\mu]_+\,,\label{comlabe}\\[1ex] \tfrac{1}{2}[\la^i,\la^j]_+=-g_{\tau\tau}g^{ij}\1\equiv \rho^{ij}\1\,;\label{defro} \end{gather} the symmetric form $\rho$ is positive definite. The free evolution $U_0(\tau,\si)$ may be easily expressed in the coordinate system \eqref{tauzed}. For this purpose it is sufficient to consider $U_0(\tau,0)$. Using the representation of the Dirac solution \eqref{Dirini} in Appendix \ref{fDe} and the definition~\eqref{Uzero}, for $\vph\in\Sc(\mR^3,\mC^4)$ we obtain \begin{multline}\label{asUfi} [U_0(\tau,0)\vph](\z) =\Big(\frac{\tmz}{\tm\zm}\Big)^{\frac{1}{2}}K(\tau,\z)^{-1}\times\\ \times\Big(\frac{\tm}{2\pi}\Big)^{\frac{3}{2}}\int\big[e^{-ix(\tau,\z)\cdot v}P_+(v) -e^{ix(\tau,\z)\cdot v}P_-(v)\big][\mathcal{F}^{-1}\vph](v)d\mu(v)\,, \end{multline} where $x(\tau,\z)=(\tau\zm,\tm\z)$. We consider the asymptotic form of this evolution for $\tau\to\pm\infty$. Let $\vph\in \mathcal{F}\Dh$ and restrict $\z$ to a compact set. Then both $\vv$ (the space part of $v$) and $\z$ are restricted to bounded sets and by the stationary phase method (see e.g.\ \cite{vai89}) the leading asymptotic behavior of the second line in \eqref{asUfi} is given by \begin{equation}\label{asint} \mp i\big[e^{-i(\tau\pm\frac{\pi}{4})}P_+(z^0,\pm\z) +e^{i(\tau\pm\frac{\pi}{4})}P_-(z^0,\pm\z)\big][\mathcal{F}^{-1}\vph](z^0,\pm\z) +O(\tm^{-1})\,, \end{equation} where $z^0=\zm$, the upper/lower signs $\mp$ and $\pm$ correspond to the limits $\tau\to\pm\infty$, respectively, and the rest is bounded uniformly for $\z$ in the given set. 
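The location of the stationary points may be sketched as follows. The phase in the second line of \eqref{asUfi} is $\pm x(\tau,\z)\cdot v$ with $x(\tau,\z)\cdot v=\tau\zm v^0-\tm\,\z\cdot\vv$, where $v^0=(|\vv|^2+1)^{\frac{1}{2}}$, and stationarity in $\vv$ requires
\begin{equation*}
\tau\zm\,\frac{\vv}{v^0}=\tm\z\,,
\end{equation*}
so for $\tau\to\pm\infty$ (where $\tm/\tau\to\pm1$) the stationary point tends to $\vv=\pm\z$, $v^0=\zm=z^0$, while the value of the phase at this point, $\tau\zm^2\mp\tm|\z|^2$, differs from $\tau$ only by $O(\tm^{-1})$. This accounts for the arguments $(z^0,\pm\z)$ and the oscillating factors $e^{\mp i\tau}$ in \eqref{asint}; the phases $e^{\mp i\frac{\pi}{4}}$ and the compensation of the factor $(\tm/2\pi)^{\frac{3}{2}}$ arise from the Gaussian integration around the stationary point.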
The limit of the term on the rhs of \eqref{asUfi} in the first line is equal to $\zm^{-\frac{1}{2}}K(\pm\infty,\z)^{-1}$, with
\begin{equation}\label{asK}
K(\pm\infty,\z)=2^{-\frac{1}{2}}\big[(1+\zm)^{\frac{1}{2}} \pm(1+\zm)^{-\frac{1}{2}}\ga^0\z\cdot\gab\big]=K(\infty,\pm\z)\,,
\end{equation}
and again the rest is bounded by $\con\tm^{-1}$, uniformly in the given set. We now note two facts:
\begin{gather}
K(\infty,\vv)^{-1} v\cdot\ga\, K(\infty,\vv)=\beta\,,\\
e^{-i\si}\tfrac{1}{2}(\1+\beta) +e^{i\si}\tfrac{1}{2}(\1-\beta) =e^{-i\si\beta}\,.
\end{gather}
The first identity is the limit form of the first of the relations in \eqref{defK}, but it may also be checked directly. The second identity is most easily evaluated on the two complementary eigenspaces of $\beta$ (with eigenvalues $\pm1$). Inserting the asymptotic forms \eqref{asint} and \eqref{asK} into formula \eqref{asUfi}, and using the above identities (the second one with $\si=\tau\pm\frac{\pi}{4}$), we obtain
\begin{multline}\label{asU0fi}
[U_0(\tau,0)\vph](\z) = \mp ie^{-i(\tau\pm\frac{\pi}{4})\beta}\zm^{-\frac{1}{2}}K(\infty,\pm\z)^{-1} [\mathcal{F}^{-1}\vph](z^0,\pm\z)\\
+O(\tm^{-1})\,.
\end{multline}
The last formula suggests the definition of the following unitary transformation:
\begin{equation}
\Phi(\tau)=\exp[-i\tau\beta]\,,\label{fi1}
\end{equation}
and the associated change of the evolution `picture' (also in the interacting case):
\begin{equation}\label{HUPhi}
\begin{gathered}
U_{0\Phi}(\tau,\si)=\Phi(\tau)^*U_0(\tau,\si)\Phi(\si)\,,\quad U_\Phi(\tau,\si)=\Phi(\tau)^*U(\tau,\si)\Phi(\si)\,,\\[1ex]
H_{0\Phi}(\tau)=\Phi(\tau)^*H_0(\tau)\Phi(\tau)-\beta\,,\\[1ex]
H_{\Phi}(\tau)=H_{0\Phi}(\tau)+W_\Phi(\tau)\,,\quad W_\Phi(\tau)=\Phi(\tau)^* W(\tau)\Phi(\tau)\,.
\end{gathered}
\end{equation}
\begin{rem}\label{interDiracPhi}
Under the conditions of Theorem~\ref{interDirac} all statements of its thesis remain valid with the replacements defined by the equations \eqref{HUPhi}.
\end{rem}
\noindent This follows quite trivially, as $\Phi(\tau)$ acts only on the factor $\mC^4$ in the Hilbert space $\Hc=\mC^4\otimes L^2(\mR^3)$.
We now return to the asymptotics of the free evolution, for which we shall need the parity operator
\begin{equation}
[\mathcal{P}\vph](\z)=\vph(-\z)\,.
\end{equation}
Moreover, we observe that the map
\begin{equation}
[\mathcal{K}\vph](\z)=\zm^{\frac{1}{2}}K(\infty,\z)\vph(\z)
\end{equation}
is a unitary operator $\Hc\mapsto L^2_\ga(H)$ (with the latter space defined in Appendix~\ref{fDe}).
\begin{thm}\label{UF0}
The following strong limits exist as unitary operators in $\Hc$:
\begin{gather}
U_{0\Phi}(+\infty,0)=\slim_{\tau\to+\infty}U_{0\Phi}(\tau,0) =-ie^{-i\frac{\pi}{4}\beta}\mathcal{K}^{-1}\mathcal{F}^{-1}\,,\\
U_{0\Phi}(-\infty,0)=\slim_{\tau\to-\infty}U_{0\Phi}(\tau,0) =ie^{+i\frac{\pi}{4}\beta}\mathcal{P} \mathcal{K}^{-1}\mathcal{F}^{-1}\,,
\end{gather}
\begin{gather}
U_{0\Phi}(0,+\infty)=\slim_{\tau\to+\infty}U_{0\Phi}(0,\tau) =i\mathcal{F}\mathcal{K}e^{+i\frac{\pi}{4}\beta}\,,\\
U_{0\Phi}(0,-\infty)=\slim_{\tau\to-\infty}U_{0\Phi}(0,\tau) =-i\mathcal{F}\mathcal{K}\mathcal{P}e^{-i\frac{\pi}{4}\beta}\,,\\
U_{0\Phi}(0,\pm\infty)=U_{0\Phi}(\pm\infty,0)^*\,.
\end{gather}
Therefore,\footnote{Note an analogy with the nonrelativistic case, see Section 2 in \cite{her19}.}
\begin{equation}
U_{0\Phi}(+\infty,-\infty)=i\beta\mathcal{P}\,.
\end{equation}
\end{thm}
\begin{proof}
For $\psi\in \Dh$ and $\vph\in \mathcal{F}\Dh$ the formula \eqref{asU0fi} gives
\begin{gather}
\lim_{\tau\to+\infty}(\psi,U_{0\Phi}(\tau,0)\vph) =(\psi,(-i)e^{-i\frac{\pi}{4}\beta}\mathcal{K}^{-1}\mathcal{F}^{-1}\vph)\,,\\
\lim_{\tau\to-\infty}(\psi,U_{0\Phi}(\tau,0)\vph) =(\psi,ie^{+i\frac{\pi}{4}\beta}\mathcal{P} \mathcal{K}^{-1}\mathcal{F}^{-1}\vph)\,.
\end{gather}
Both subspaces are dense in $\Hc$, so the weak operator limits result.
But the limit operators are evidently unitary, so the weak limits imply the strong limits of $U_{0\Phi}(\tau,0)$, as well as of its conjugate.
\end{proof}
\section{Electromagnetic interaction and gauge transformation}\label{elmintgau}
In the rest of this article we are interested in the standard, minimal coupling electromagnetic interaction. For the electromagnetic field $F_{ab}$, we reserve the notation $A_a$ for the Lorenz gauge potential (fully specified in what follows). We write $\cA_a$ for the potential in a general gauge to be used in the Dirac equation. Also, we recall our conventions defined in formulas \eqref{compxi} and the following remarks, so $\hF_{\mu\nu}$, $\hat{A}_\mu$ and $\hA_\mu$ are components of these fields in our coordinate system, and for $\mu,\nu=i,j$ etc.\ the range is restricted to the values $1,2,3$. Therefore, in the electromagnetic case the field $V$ and its transformed version $W$ are
\begin{equation}\label{WA}
V(x)=\cA_a(x)\ga^a\,,\qquad W(\tau,\z)=\hA_\tau(\tau,\z)+\hA_i(\tau,\z)\la^i(\tau,\z)\,,
\end{equation}
and we write the Hamiltonian as
\begin{equation}
H=\tfrac{1}{2}[\la^i,\pi_i]_++\mu+\hA_\tau\,,\qquad \pi_i=p_i+\hA_i\,.
\end{equation}
\begin{thm}\label{gengau}
The coordinate system $(\tau,\z)$ given by \eqref{tauzed} is assumed.
(i) Let the electromagnetic potential $\cA(x)$ have components $\hA_\mu(\tau,\z)$ such that for all indices $\mu=\tau,1,2,3$ the mappings
\begin{equation}\label{pot}
\big\{\mR\ni\tau\mapsto \p_z^\al\hA_\mu(\tau,\z)\big\} \in C^0(\mR,L^\infty_\mathrm{loc}(\mR^3))\,,\quad |\al|\leq2\,,
\end{equation}
and
\begin{equation}\label{pot2}
\left.\begin{aligned}
&\|\zm^{-2+|\al|}\p_z^\al\hA_\tau(\tau,\z)\|_\infty\\
&\|\zm^{-1+|\al|}\p_z^\al\hA_i(\tau,\z)\|_\infty
\end{aligned} \right\}\leq C(\tau)\,,\quad |\al|\leq1\,,
\end{equation}
where $C(\tau)$ is a continuous function. Then the conditions of Theorem \ref{interDirac} are fulfilled and the unitary propagator $U(\tau,\si)$ with the listed properties is obtained.
(ii) Let the potential $\cA(x)$ and the corresponding propagator $U(\tau,\si)$ be as in (i). Define a new gauge \begin{equation} \cA^\gau=\cA-\n\gau\,, \end{equation} where $\gau(x)$ is a gauge function such that for all indices $\mu=\tau,1,2,3$ the mappings \begin{equation}\label{potgau} \big\{\mR\ni\tau\mapsto \p_z^\al\p_\mu\gau(\tau,\z)\big\} \in C^0(\mR,L^\infty_\mathrm{loc}(\mR^3))\,,\quad |\al|\leq1\,, \end{equation} and \begin{equation}\label{potgau2} \left.\begin{aligned} &\|\zm^{-2+|\al|}\p_z^\al\p_\tau\gau(\tau,\z)\|_\infty\\ &\|\zm^{-1+|\al|}\p_z^\al\p_i\gau(\tau,\z)\|_\infty \end{aligned} \right\}\leq C(\tau)\,,\quad |\al|\leq1\,, \end{equation} where $C(\tau)$ is a continuous function. Denote by $H^\gau(\tau)$ the interacting Hamiltonian with $\hA$ replaced by $\hA^\gau$, and \begin{equation}\label{US} U^\gau(\tau,\si)= e^{i \gau(\tau)}U(\tau,\si)e^{-i \gau(\si)}\,. \end{equation} Then for such modified operators the statements (A) and (C)--(E) of Theorem~\ref{interDirac} are satisfied. \end{thm} \begin{proof} (i) The bounds \eqref{labound} are satisfied in our coordinate system. Together with the assumed properties of $\hA_\mu$ this ensures that the interaction term \eqref{WA} satisfies the assumptions of Theorem \ref{interDirac}, so the thesis follows. (ii) For (A) we note that the assumption \eqref{potgau2} implies that the interaction term $W^\gau=\hA^\gau_\tau+\hA^\gau_i\la^i$ satisfies assumptions imposed on $M$ in Lemma \ref{lembound} (i) and (ii) (not necessarily (iii)). This is sufficient for the conclusion on self-adjointness of $H^\gau$ obtained as in Theorem \ref{interDirac}. Moreover, it follows from \eqref{potgau} that $e^{i\gau(\tau)}\Dc_c(\pv^2)=\Dc_c(\pv^2)$, so (C) is satisfied. 
Finally, for $\vph\in\Dc_c(\pv^2)$ we have
\begin{gather}
i\p_\tau e^{i\gau(\tau)}\vph=-e^{i\gau(\tau)}\p_\tau \gau(\tau)\vph\,,\\
e^{i\gau(\tau)}H(\tau)e^{-i\gau(\tau)}\vph=[H(\tau)-\la^i(\tau)\p_i\gau(\tau)]\vph\,,
\end{gather}
so the remaining statements easily follow with the use of \eqref{potgau}.
\end{proof}
\section{Scattering}\label{scattering}
We come here to our main objective in this article, the scattering of the Dirac field in an external time-dependent electromagnetic field. With the assumptions of Theorem \ref{gengau}, augmented by the decay conditions formulated in the two assumptions below, we shall obtain a complete description of scattering in terms of the Cauchy surfaces of constant $\tau$, as formulated in Theorem~\ref{scat}. The existence of the wave operators requires only the rather simple additional Assumption \ref{asum0}, but their unitarity and completeness are more demanding and follow from Assumption \ref{asum}. It is with regard to this latter question that we need to discuss some further notation and properties.
It is easy to see that the operators
\begin{equation}\label{hatH}
\mH=\tfrac{1}{2}[\la^i,\pi_i]_+=H-\mu-\hA_\tau\,,\quad \mH_0=\tfrac{1}{2}[\la^i,p_i]_+=H_0-\mu
\end{equation}
have properties similar to those of $H$ and $H_0$; in particular, they are essentially self-adjoint on $\Dh$, so $\Dc_c(\pv)=\Hc_c\cap\Dc(\pv)$ is contained in their domains. Therefore, $\Dc_c(\pv^2)$ is contained in the domains of $\mH^2$ and $\mH_0^2$ (as well as $H^2$ and $H_0^2$). Moreover, we note for later use that
\begin{equation}\label{antHbe}
[\mH,\beta]_+=[\mH_0,\beta]_+=0\,,
\end{equation}
which is a consequence of \eqref{comlabe}. We shall need a more explicit form of $\mH^2$ below.
We calculate (with $\rho$ defined in \eqref{defro})
\begin{multline}\label{Hti2}
\mH^2=(\pi_i\la^i+\tfrac{i}{2}\p\cdot\la)(\la^j\pi_j-\tfrac{i}{2}\p\cdot\la)\\
=\pi_i\rho^{ij}\pi_j+\tfrac{1}{2}\pi_i[\la^i,\la^j]\pi_j +\tfrac{i}{2}(\p\cdot\la)\la^j\pi_j-\tfrac{i}{2}\pi_j\la^j(\p\cdot\la) +\tfrac{1}{4}(\p\cdot\la)^2\,.
\end{multline}
The second term on the rhs above may be written in two alternative ways:
\begin{equation}\label{pillpi}
\begin{aligned}
\tfrac{1}{2}\pi_i[\la^i,\la^j]\pi_j&=-\tfrac{i}{2}(\p_i[\la^i,\la^j])\pi_j +\tfrac{1}{2}[\la^i,\la^j]\pi_i\pi_j\\
&=\tfrac{i}{2}\pi_i(\p_j[\la^i,\la^j])+ \tfrac{1}{2}\pi_i\pi_j[\la^i,\la^j]\,.
\end{aligned}
\end{equation}
Taking into account that $[\la^i,\la^j]$ is antisymmetric in the indices, one can replace the product $\pi_i\pi_j$ by $\frac{1}{2}[\pi_i,\pi_j]=-\frac{i}{2}\hF_{ij}$, and then, since $\hF_{ij}$ is antisymmetric as well, replace the factor $[\la^i,\la^j]$ multiplying this expression by $2\la^i\la^j$. Taking now one half of the sum of the two expressions in \eqref{pillpi}, we find
\begin{equation}\label{pillpiF}
\tfrac{1}{2}\pi_i[\la^i,\la^j]\pi_j=\tfrac{i}{4}\big[\pi_j,\p_i[\la^j,\la^i]\big]_+ -\tfrac{i}{2}\la^i\la^j\hF_{ij}\,.
\end{equation}
For the next two terms on the rhs of \eqref{Hti2} we note
\begin{equation}\label{dll}
\tfrac{i}{2}(\p\cdot\la)\la^j\pi_j-\tfrac{i}{2}\pi_j\la^j(\p\cdot\la) =-\tfrac{i}{4}[\pi_j,[\la^j,\p\cdot\la]]_+-\tfrac{1}{4}\p_j[\p\cdot\la,\la^j]_+\,,
\end{equation}
which is shown in a way similar to the former identity. The sum of the first terms on the rhs of \eqref{pillpiF} and \eqref{dll} gives $\tfrac{i}{4}[\pi_j,[\p_i\la^j,\la^i]]_+$. In this way we obtain
\begin{multline}
\mH^2=\pi_i\rho^{ij}\pi_j+\tfrac{i}{4}[\p_i\la^j,\la^i]\pi_j +\tfrac{i}{4}\pi_j[\p_i\la^j,\la^i]\\
-\tfrac{1}{4}\p_j([\la^j,\p\cdot\la]_+)+\tfrac{1}{4}(\p\cdot\la)^2 -\tfrac{i}{2}\la^i\la^j\hF_{ij}\,.
\end{multline}
With the use of further notation
\begin{gather}
\La_j=\tfrac{i}{4}(\rho^{-1})_{jk}[\p_i\la^k,\la^i]\,,\quad \pi_{\La i}=\pi_i+\La_i\,,\label{lapila}\\[1ex]
Q=\tfrac{1}{4}\p_j([\la^j,\p\cdot\la]_+)-\tfrac{1}{4}(\p\cdot\la)^2\,,\quad N=Q+\La_i\rho^{ij}\La_j\,,\label{QN}\\[1ex]
\Bc=\tfrac{i}{2}\la^i\la^j\hF_{ij}\,,\label{B}
\end{gather}
we can write
\begin{equation}\label{H2}
\mH^2=\pi_{\La}\rho\pi_{\La}-N-\Bc\,,
\end{equation}
where in the first term on the rhs a symbolic notation for summation over indices is used. A straightforward calculation shows that both $Q$ and $N$ are positive numerical functions (times the unit matrix; see Appendix \ref{spvar}, formula \eqref{estQ} for $Q$, and then for $N$ this is obvious). Therefore, if we further denote
\begin{equation}\label{defs}
s=\sqrt{\rho}\,,
\end{equation}
then for $\vph\in\Dc_c(\pv^2)$ we have
\begin{equation}\label{hatHbo}
\|\mH\vph\|\leq\big(\|s\pi_\La\vph\|^2-(\vph,\Bc\vph)\big)^{\frac{1}{2}} \leq \|s\pi_\La\vph\|+|(\vph,\Bc\vph)|^{\frac{1}{2}}\,.
\end{equation}
Next, denote
\begin{equation}\label{Xa}
X=\tfrac{1}{2}\mu^{-1} \hA_i\la^i=\tfrac{1}{2}\beta a_i\la^i\,,\qquad a_i=(g^{\tau\tau})^{\frac{1}{2}}\hA_i\,.
\end{equation}
Below we shall need the following identity valid, with our assumptions on $\hA_i$, on $\Dc_c(\pv)$:
\begin{equation}\label{XhatH}
\beta[X,\mH] =a_i\rho^{ij}\pi_{\La j}-\tfrac{i}{2}\la^i\la^j\p_i a_j -\tfrac{i}{2}(\p_i\rho^{ij})a_j\,.
\end{equation} To show this we note that $[X,\mH]=\tfrac{1}{2}\beta[\la^ia_i,\mH]_+$, write $\mH=\la^j\pi_j-\tfrac{i}{2}\p\cdot\la$, and then the lhs of \eqref{XhatH} takes the form \begin{multline} \tfrac{1}{2}a_i[\la^i,\la^j]_+\pi_j-\tfrac{i}{2}\la^j\p_j(\la^i a_i) -\tfrac{i}{4}a_i [\la^i,\p_j\la^j]_+\\ =a_i\rho^{ij}\pi_{\La j}-\tfrac{i}{2}\la^j\la^i\p_j a_i -\tfrac{i}{2}a_i\big(\la^j\p_j\la^i+\tfrac{1}{2}[\la^i,\p_j\la^j]_+ +\tfrac{1}{2}[\p_j\la^i,\la^j]\big)\,, \end{multline} where after the equality sign we have added and subtracted the term $a_i\rho^{ij}\La_j$. It is now easy to show that the terms in parentheses multiplying $a_i$ sum up to $\tfrac{1}{2}\p_j[\la^j,\la^i]_+$, which ends the proof of \eqref{XhatH}. The spreading of the past and future is characterized in our coordinate system by Lemma \ref{pastfut} in Appendix \ref{spvar}. Denote \begin{equation} \Dc(r,\pv^2)=\{\psi\in \Dc(\pv^2)\mid \psi(\z)=0\ \text{for}\ |\z|\geq r\}\,, \end{equation} so that \begin{equation} \Dc_c(\pv^2)=\bigcup_{r>0}\Dc(r,\pv^2)\,. \end{equation} We set $\tau_0=0$ in this lemma, and replace $r_0$ and $r$ by $r$ and $r(\tau)$, respectively, so that \begin{equation} r(\tau)=\rrm|\tau|+r\tm\,. \end{equation} Then according to this lemma, and statement (C) of Theorem \ref{interDirac}, we have \begin{equation}\label{spread} \psi\in\Dc(r,\pv^2)\quad \Longrightarrow\quad U(\tau,0)\psi\in \Dc(r(\tau),\pv^2)\,. \end{equation} For a measurable function $f(\tau,\z)$ we define a semi-norm function \begin{equation} \tau\mapsto\|f\|_{r,\tau}=\essup_{|\z|\leq r(\tau)}|f(\tau,\z)|\,, \end{equation} and then for $\psi$ as in \eqref{spread} we find \begin{equation}\label{estsemi} \|f(\tau,.)U(\tau,0)\psi\|\leq\|f\|_{r,\tau}\|\psi\|\,. \end{equation} Note that according to Lemma \ref{pastfut} we have \begin{equation} |\z|\leq r(\tau)\ \Longrightarrow\ \zm\leq \rrm\tm+r|\tau|\,, \end{equation} so in that case \begin{equation} |\z|\leq\zm\leq 2\rrm\tm\,. 
\end{equation}
For any measurable function $k(\z)$ and $r>0$ we shall denote
\begin{equation}
\|k\|_r=\essup_{|\z|\leq r}|k(\z)|\,.
\end{equation}
For a function $f(\tau,\z)$ we obviously have $\|f(\tau,.)\|_r\leq\|f(\tau,.)\|_{r,\tau}$. Our scattering theorem will apply to potentials satisfying the following conditions of increasing restrictiveness.
\begin{asu}\label{asum0}
The potential $\hA_\mu(\tau,\z)$ satisfies the assumptions of Theorem \ref{gengau}, and in addition the mapping $ \tau\mapsto \hA_i(\tau,\z) $ is in $C^1(\mR,L^\infty_\mathrm{loc}(\mR^3))$. For each $r>0$ the following expressions are integrable on $\mR$ with respect to $\tau$:
\begin{equation}
\|\hA_\tau\|_r\,,\quad \tm^{-2}\|\hA_i\|_r^2\,,\quad \tm^{-2}\|\p_i\hA_j\|_r\,,\quad \tm^{-1}\|\p_\tau\hA_i\|_r\,.
\end{equation}
\end{asu}
\begin{asu}\label{asum}
The potential $\hA_\mu(\tau,\z)$ satisfies the assumptions of Theorem \ref{gengau}, and in addition the mapping $ \tau\mapsto \hA_i(\tau,\z) $ is in $C^1(\mR,L^\infty_\mathrm{loc}(\mR^3))$. For each $r>0$ the following expressions are integrable on~$\mR$ with respect to $\tau$:
\begin{gather}
\|\hA_\tau\|_{r,\tau}\,,\\
\tm^{-2}\big(\|\hA_i\|_{r,\tau}^2+\|z^i\hA_i\|_{r,\tau}^2\big)\,,\\
\tm^{-2}\big(\|\p_i\hA_j\|_{r,\tau}+\|z^i\p_i\hA_j\|_{r,\tau} +\|z^j\p_i\hA_j\|_{r,\tau}+\|z^iz^j\p_i\hA_j\|_{r,\tau}\big)\,,\\
\tm^{-1}\big(\|\p_\tau\hA_i\|_{r,\tau}+\|z^i\p_\tau\hA_i\|_{r,\tau}\big)\,.
\end{gather}
Moreover, let $\xi:[1,\infty)\mapsto \mR$ be an appropriately chosen smooth, positive, nondecreasing function, such that $\xi(1)=1$ and
\begin{equation}\label{xib}
\xi'(u)\leq \frac{\kappa}{u}\xi(u)\,,\ \kappa\in(0,\tfrac{1}{2})\,,\qquad |\xi''(u)|\leq\frac{\con}{u}\xi(u)\,,
\end{equation}
where in the second bound the constant is arbitrary.
The following bounds are satisfied:
\begin{align}
\|\hF_{ij}\|_{r,\tau}+\|z^i\hF_{ij}\|_{r,\tau} &\leq \con(r)\frac{\tm}{\xi(\tm)}\,,\\
\|\hF_{i\tau}\|_{r,\tau}+\|z^i\hF_{i\tau}\|_{r,\tau} &\leq \frac{\con(r)}{\xi(\tm)}\,,
\end{align}
and for each $r>0$ the following expressions are integrable on $\mR$:
\begin{equation}
\frac{\|\xi(\zm)\hA_i\|_{r,\tau}}{\tm\xi(\tm)}\,,\qquad \frac{\|\xi(\zm)z^i\hA_i\|_{r,\tau}}{\tm\xi(\tm)}\,.
\end{equation}
\end{asu}
Before stating the theorem we make a comment on the function $\xi$ and consider some additional consequences of Assumption \ref{asum}. If one sets $\xi(u)=u^\kappa\xi_0(u)$, then the first bound in \eqref{xib} is equivalent to $\xi_0'(u)\leq0$. It follows that
\begin{equation}\label{xiukap}
\xi(u)\leq u^\kappa\,.
\end{equation}
Therefore, $\xi$ is a slowly increasing, non-oscillating function. Examples include:
\begin{align}
&\xi_1(u)=u^\kappa\,,\label{uep}\\
&\xi_2(u)=\Big[1+\frac{\kappa}{m}\log (u)\Big]^m\,,\quad m>0\,,\label{logep}
\end{align}
where $\kappa$ is as assumed in \eqref{xib}. A particular choice of $\xi$ must guarantee the validity of the assumptions (if this is possible).
By the Schwarz inequality, the following integral is also finite:
\begin{equation}
\int_{\mR}\frac{\|\hA_i\|_{r,\tau}}{\tm^2}d\tau \leq \sqrt{\pi}\Big(\int_{\mR}\frac{\|\hA_i\|_{r,\tau}^2}{\tm^2}d\tau\Big)^{\frac{1}{2}}<\infty\,,
\end{equation}
and similarly for $z^i\hA_i$. Moreover, for $\tau_2\geq\tau_1$ we have
\begin{equation}
\frac{\hA_i(\tau_2,\z)}{\langle\tau_2\rangle} -\frac{\hA_i(\tau_1,\z)}{\langle\tau_1\rangle}= \int_{\tau_1}^{\tau_2}\Big(\frac{\p_\si\hA_i(\si,\z)}{\sm} -\frac{\si\hA_i(\si,\z)}{\sm^3}\Big)d\si\,,
\end{equation}
and similarly for $z^i\hA_i$. Therefore, by Assumption \ref{asum} the functions
\begin{equation}
\tm^{-1}\|\hA_i\|_{r,\tau}\quad \text{and}\quad \tm^{-1}\|z^i\hA_i\|_{r,\tau}
\end{equation}
have limits for $\tau\to\pm\infty$.
Limits different from zero would contradict other assumptions, so
\begin{equation}
\lim_{\tau\to\pm\infty}\frac{\|\hA_i\|_{r,\tau}}{\tm} =\lim_{\tau\to\pm\infty}\frac{\|z^i\hA_i\|_{r,\tau}}{\tm}=0\,.
\end{equation}
For the proof of the theorem below we note the following estimates, easily obtained with the use of formula \eqref{laspec}:
\begin{equation}\label{laA}
|\la^i\hA_i|\leq \frac{\con}{\tm}\bigg(\frac{\tmz}{\tm}|\hA_i|+|z^i\hA_i|\bigg)\,,
\end{equation}
\begin{equation}\label{laladA}
|\la^i\la^j\p_i\hA_j| \leq\frac{\con}{\tm^2}\Big[\frac{\tmz^2}{\tm^2}|\p_i\hA_j| +\frac{\tmz}{\tm}\big(|z^i\p_i\hA_j|+|z^j\p_i\hA_j|\big) +|z^iz^j\p_i\hA_j|\Big]\,.
\end{equation}
\begin{thm}\label{scat}
(i) Let $\hA_\mu$ satisfy Assumption \ref{asum0}. Then the following strong limits exist:
\begin{gather}
\slim_{\tau\to\pm\infty}U(0,\tau)U_0(\tau,0)=\W_\mp\,,\label{waveop}\\
\slim_{\tau\to\pm\infty}U_\Phi(0,\tau)=U_\Phi(0,\pm\infty) =\W_{\mp}U_{0\Phi}(0,\pm\infty)\,.\label{waveopphi}
\end{gather}
(ii) Let $\hA_\mu$ satisfy Assumption \ref{asum}. Then the operators $\W_\mp$ are unitary and also the following strong limits exist:
\begin{gather}
\slim_{\tau\to\pm\infty}U_0(0,\tau)U(\tau,0)=\W_\mp^*\,, \label{waveopad}\\
\slim_{\tau\to\pm\infty}U_\Phi(\tau,0)=U_\Phi(\pm\infty,0) =U_{0\Phi}(\pm\infty,0)\W^*_{\mp}\,.\label{waveopphiad}
\end{gather}
\end{thm}
\begin{proof}
Throughout the whole proof we assume that $\psi\in\Dc(r,\pv^2)$ and $\|\psi\|=1$.
(i) For $\tau\to\infty$, we prove the existence of the limit \eqref{waveopphi}, from which the limit \eqref{waveop} follows with the use of Theorem \ref{UF0}. The case $\tau\to-\infty$ is analogous.
We note the identity \begin{equation} \big(1-\tfrac{1}{2}\beta\mH\big)\beta-H\big(1-\tfrac{1}{2}\beta\mH\big) =(\mu-\beta+\hA_\tau)\big(\tfrac{1}{2}\beta\mH-1\big)-\tfrac{1}{2}\beta\mH^2\,, \end{equation} where $\mH$ is as defined in \eqref{hatH} (to show this one eliminates $H$ with the use of \eqref{hatH} and takes into account the anticommutation relation \eqref{antHbe}). Using it, we obtain the evolution equation \begin{multline} i\p_\tau\big[U(0,\tau)\big(1-\tfrac{1}{2}\beta\mH\big)\Phi(\tau)\big]\psi\\ =U(0,\tau)\big[-\tfrac{i}{2}\beta(\p_\tau\mH) +(\mu-\beta+\hA_\tau)\big(\tfrac{1}{2}\beta\mH-1\big) -\tfrac{1}{2}\beta\mH^2\big]\Phi(\tau)\psi\,. \end{multline} Taking into account the anticommutation relation \eqref{antHbe} we can write this in the form \begin{equation}\label{UU0} \begin{aligned} i\p_\tau\big[U_\Phi(0,\tau)-\tfrac{1}{2}&U(0,\tau)\Phi(\tau)^*\beta\mH\big]\psi\\ =-&U(0,\tau)\Phi(\tau)\big[\mu-\beta+\hA_\tau+\tfrac{1}{2}\beta\mH^2\big]\psi\\ +\tfrac{1}{2}&U(0,\tau)\Phi(\tau)^*\beta\big[-i\dot{\mH} +(\mu-\beta+\hA_\tau)\mH\big]\psi\,. \end{aligned} \end{equation} We estimate the terms in this equation, starting with the second term in brackets on the lhs, and then going to the successive terms on the rhs. 
As $|\z|\leq r$ on the support of $\psi$, each of $\la^i$, $\p_i\la^j$ and $\p_i\p_j\la^k$ gives a bounding factor $\con(r)\tm^{-1}$, and each $\p_\tau\la^i$ a factor $\con(r)\tm^{-2}$, which leads to a straightforward estimation: \begin{gather} \|\mH\psi\|\leq\frac{\con(r)}{\tm}\big(\|p_i\psi\|+\|\hA_i\|_r+1\big)\,,\\[1ex] \|(\mu-\beta)\psi\| \leq\frac{r^2}{2\tm^2}\,,\qquad \|\hA_\tau\psi\|\leq\|\hA_\tau\|_r\,,\\[1ex] \|\mH^2\psi\|\leq \frac{\con(r)}{\tm^2}\Big[\|p_ip_j\psi\| +\big(\|\hA_i\|_r+1\big)\big(\|\hA_i\|_r+1+\|p_i\psi\|\big)\Big]\,, \end{gather} \begin{gather} \|\dot{\mH}\psi\| \leq\frac{\con(r)}{\tm^2}\big(\|p_i\psi\|+\|\hA_i\|_r+1\big) +\frac{\con(r)}{\tm}\|\p_\tau\hA_i\|_r\,,\\[1ex] \|(\mu-\beta+\hA_\tau)\mH\psi\| \leq\frac{\con(r)}{\tm}\Big(\frac{r^2}{\tm^2}+\|\hA_\tau\|_r\Big) \big(\|p_i\psi\|+\|\hA_i\|_r+1\big)\,. \end{gather} Therefore, under the conditions of Assumption \ref{asum0} all the terms on the rhs of \eqref{UU0} are integrable on $\mR$, so the strong limit of $U_\Phi(0,\tau)\psi- \tfrac{1}{2}U(0,\tau)\Phi(\tau)^*\beta\mH\psi$ exists. But the second term vanishes in the limit, so the thesis follows for $\psi\in\Dc_c(\pv^2)$, and then by isometry for all $\psi\in\Hc$. (For the integrability of $\tm^{-2}\|\hA_i\|_r$ and for the vanishing of $\tm^{-1}\|\hA_i\|_r$ one argues as in the remarks following Assumption \ref{asum}.) (ii) We prove the existence of the limit \eqref{waveopad} for $\tau\to\infty$, from which the limit \eqref{waveopphiad} follows. Combined with the existence of the limit \eqref{waveop}, this also leads to unitarity and the conjugation relation. The case $\tau\to-\infty$ is similar. We note the identity \begin{equation} (1+X)H-H_0(1+X)=\hA_\tau+WX +[X,\mH]\,, \end{equation} where $W$ and $X$ are defined in \eqref{WA} and \eqref{Xa}, respectively, and we used the fact that \begin{equation} [X,H-\mH]=[X,\mu]=-2\mu X=-\hA_i\la^i\,.
\end{equation} Thus \begin{multline}\label{U0U} i\p_\tau\big[U_0(0,\tau)(1+X)U(\tau,0)\big]\psi\\ =U_0(0,\tau)\Big(i\p_\tau X+\hA_\tau+WX +[X,\mH]\Big)U(\tau,0)\psi\,. \end{multline} If we can show that the norm of the rhs of \eqref{U0U} is integrable over $[0,+\infty)$, then the strong limit of $U_0(0,\tau)(1+X)U(\tau,0)\psi$ for $\tau\to\infty$ exists. But with the use of formula \eqref{estsemi}, and taking into account \eqref{laA} and the value of $g^{\tau\tau}$ to be found in Appendix \ref{spvar}, we obtain \begin{equation}\label{Xnorm} \|X(\tau)U(\tau,0)\psi\|\leq\|X\|_{r,\tau} \leq \frac{\con}{\tm}\Big(\|\hA_i\|_{r,\tau}+\|z^i\hA_i\|_{r,\tau}\Big)\,, \end{equation} so this norm vanishes in the limit, which then implies the desired result. We estimate the norms of the successive terms on the rhs of \eqref{U0U}, again with the use of \eqref{estsemi}. Differentiating formula \eqref{gla} with respect to $\tau$ we find \begin{multline}\label{dXnorm} \|(\p_\tau X)U(\tau,0)\psi\|\leq\|\p_\tau X\|_{r,\tau} \leq \frac{\con}{\tm^2}\Big(\|\hA_i\|_{r,\tau}+\|z^i\hA_i\|_{r,\tau}\Big)\\ +\frac{\con}{\tm}\Big(\|\p_\tau\hA_i\|_{r,\tau}+\|z^i\p_\tau\hA_i\|_{r,\tau}\Big)\,, \end{multline} which is integrable. The norm of the second term is bounded by $\|\hA_\tau(\tau)\|_{r,\tau}$, which is integrable by assumption. Next we note that \begin{equation} \beta WX=\hA_\tau\beta X-\tfrac{1}{2}(g^{\tau\tau})^{\frac{1}{2}}\hA_i\rho^{ij}\hA_j\,, \end{equation} so using the explicit form of $(g^{\tau\tau})^{\frac{1}{2}}\rho^{ij}$, see \eqref{rhoex}, we estimate the norm of the third term on the rhs of \eqref{U0U} by \begin{equation} \|WX\|_{r,\tau}\leq \con\|\hA_\tau\|_{r,\tau}\|X\|_{r,\tau} +\frac{\con\rrm}{\tm^2}\Big(\|\hA_i\|^2_{r,\tau}+\|z^i\hA_i\|^2_{r,\tau}\Big)\,, \end{equation} which ensures integrability. 
To estimate the norm of the fourth term we use \eqref{XhatH}, \eqref{laladA} and \eqref{divro} to find \begin{multline} \|\la^i\la^j\p_i a_j+(\p_i\rho^{ij})a_j\|_{r,\tau} \leq \con\frac{\rrm}{\tm^2}\|\p_i\hA_j\|_{r,\tau}\\ +\frac{\con}{\tm^2}\Big(\|\hA_i\|_{r,\tau}+\|z^i\hA_i\|_{r,\tau} +\|z^i\p_i\hA_j\|_{r,\tau}+\|z^j\p_i\hA_j\|_{r,\tau} +\|z^iz^j\p_i\hA_j\|_{r,\tau}\Big)\,, \end{multline} which again is integrable. We are now left with the single term $a\rho\pi_\La U(\tau,0)\psi$. As it turns out, in this case the methods applied up to now are insufficient and it is here that we make use of the function $\xi$. We write $\psi_\tau= U(\tau,0)\psi$ and note that \begin{align} \|a\rho\pi_{\La}\psi_\tau\| &\leq \|sa\xi(\zm)\|_{r,\tau}\|\xi(\zm)^{-1}s\pi_\La\psi_\tau\|\\ &\leq \tm^{-1}\Big(\|\xi(\zm)\hA_i\|_{r,\tau} +\|\xi(\zm) z^i\hA_i\|_{r,\tau}\Big)\|\xi(\zm)^{-1}s\pi_\La\psi_\tau\|\,. \end{align} Now, the estimation of $\|\xi(\zm)^{-1}s\pi_\La\psi_\tau\|$ is the most difficult part of the proof and we shift it to the lemma below. Substituting its result in the above estimate and using Assumption \ref{asum} one completes the proof of the existence of the limit $\tau\to+\infty$. \end{proof} \begin{lem}\label{scatlem} Under the conditions of Assumption \ref{asum}, for $\psi\in \Dc_c(\pv^2)$ the following estimate holds \begin{equation} \|\xi(\zm)^{-1}s\pi_\La U(\tau,0)\psi\|\leq \con(\psi)\xi(\tm)^{-1}\,. \end{equation} \end{lem} \begin{proof} We assume again that $\psi\in\Dc(r,\pv^2)$ and $\|\psi\|=1$. We observe that the lhs of the inequality may be equivalently replaced by $\|s\pi_\La \xi(\zm)^{-1}U(\tau,0)\psi\|$. Indeed, we have \begin{equation} \big\|\big[s\pi_\La,\xi(\zm)^{-1}\big]\big\|_\infty = \Big\|\frac{\z\xi'(\zm)}{\tm\xi(\zm)^2}\Big\|_\infty \leq\Big\|\frac{\kappa\,\z}{\tm\zm\xi(\zm)}\Big\|_\infty \leq\frac{\kappa}{\tm}\,. 
\end{equation} Now we shall estimate the norm squared \begin{equation} \|s\pi_\La\xi(\zm)^{-1}\psi_\tau\|^2 =(\pi_\La\xi(\zm)^{-1}\psi_\tau,\rho\pi_\La\xi(\zm)^{-1}\psi_\tau) \end{equation} by first finding a differential inequality, and then integrating. Preparing for that, we denote \begin{equation}\label{defnu} \nu=i\xi(\zm)^{-1}[H,\xi(\zm)]=\frac{\la^i\p_i\xi(\zm)}{\xi(\zm)} =\frac{\xi'(\zm)}{\tm\xi(\zm)}z^i\al^i\,, \end{equation} with the standard notation $\al^i=\beta\gamma^i$, and observe that \begin{equation}\label{xiH} \xi(\zm)^{-1} H=\mH\xi(\zm)^{-1}+(\hA_\tau+\mu-i\nu)\xi(\zm)^{-1}\,. \end{equation} To shorten notation we shall write $\psi_\tau^\xi=\xi(\zm)^{-1}\psi_\tau$. Looking at the explicit form of $\Lambda_i$ \eqref{Laspec} we note that \begin{equation} [\Lambda_i,\mu]=0\,,\qquad [\Lambda_i,\nu]_+=0\,. \end{equation} Now calculate \begin{multline} \p_\tau[\pi_\La\psi_\tau^\xi] =-i\pi_\La\xi(\zm)^{-1} H\psi_\tau+(\p_\tau\La+\p_\tau\hA)\psi_\tau^\xi\\ =-i\pi_\La \mH\psi_\tau^\xi +(\dot{\La}+\hF_{\tau .}+\p(i\nu-\mu)+2\nu\La)\psi_\tau^\xi -(\nu+i\hA_\tau+i\mu)\pi_\La\psi_\tau^\xi\,, \end{multline} and \begin{equation}\label{difpixipsi} \begin{aligned} \p_\tau(\pi_\La\psi_\tau^\xi,\rho\pi_\La\psi_\tau^\xi) &-(\pi_\La\psi_\tau^\xi,\dot{\rho}\pi_\La\psi_\tau^\xi) =-2(\pi_\La\psi_\tau^\xi,\rho\nu\pi_\La\psi_\tau^\xi)\\ &-i(\pi_\La\rho\pi_\La\psi_\tau^\xi,\mH\psi_\tau^\xi) +i(\mH\psi_\tau^\xi,\pi_\La\rho\pi_\La\psi_\tau^\xi)\\ &+2\Rp(\pi_\La\psi_\tau^\xi,\rho[\dot{\La} +\hF_{\tau .}+\p(i\nu-\mu)+2\nu\La]\psi_\tau^\xi)\,, \end{aligned} \end{equation} where in both identities $\hF_{\tau .}$ denotes $\hF_{\tau i}$ with the index $i$ suppressed (in the second identity summation over this index is implied). In the first identity we have used \eqref{xiH} and commuted $\pi_\La$ with the term $(\hA_\tau+\mu-i\nu)$. 
From now on we continue the proof for $\tau\geq0$; for $\tau\leq0$ the proof is analogous, but equation \eqref{difpixipsi} has to be multiplied by $-1$ before continuing. The first term on the rhs of \eqref{difpixipsi} is bounded in absolute value by \begin{equation} 2\|\nu\|_\infty(\pi_\La\psi_\tau^\xi,\rho\pi_\La\psi_\tau^\xi) \leq 2\kappa\tm^{-1}\|s\pi_\La\psi_\tau^\xi\|^2\,, \end{equation} where we used the estimate given in \eqref{nusnu}. With the use of formula \eqref{H2} the second line of equation \eqref{difpixipsi} takes the form $2\Rp i(\mH\psi_\tau^\xi,(N+\Bc)\psi_\tau^\xi)$, and thanks to the estimate \eqref{hatHbo} is bounded in absolute value by \begin{equation} 2\big(\|s\pi_\La\psi_\tau^\xi\|+|(\psi_\tau^\xi,\Bc\psi_\tau^\xi)|^{\frac{1}{2}}\big)\|(N+\Bc)\psi_\tau^\xi\|\,. \end{equation} The third line in \eqref{difpixipsi} is bounded by \begin{equation} 2\|s\pi_\La\psi_\tau^\xi\| \|s(\dot{\La}+\hF_{\tau\,.}+\p(i\nu-\mu)+2\nu\La)\psi_\tau^\xi\|\,. \end{equation} Finally, we observe that for $\tau>0$ we have (see \eqref{dtro}) \begin{equation} \dot{\rho}\leq-\frac{2\tau}{\tm^2}\rho\,, \end{equation} which allows us to use \eqref{difpixipsi} for the following estimate: \begin{equation}\label{eqtri} \p_\tau\|s\pi_\La\psi_\tau^\xi\|^2\leq -b\|s\pi_\La\psi_\tau^\xi\|^2+2c\|s\pi_\La\psi_\tau^\xi\|+d\,, \end{equation} where \begin{gather} b=\frac{2\tau}{\tm^2}-\frac{2\kappa}{\tm}\,,\label{b}\\[1ex] c=\|(N+\Bc)\psi_\tau^\xi\|+\|s(\dot{\La}+\hF_{\tau\,.}+\p(i\nu-\mu) +2\nu\La)\psi_\tau^\xi\|\,,\label{c}\\[2ex] d=2|(\psi_\tau^\xi,\Bc\psi_\tau^\xi)|^{\frac{1}{2}}\|(N+\Bc)\psi_\tau^\xi\|\,.\label{d} \end{gather} The second term on the rhs of \eqref{eqtri} may be estimated as follows \begin{align} 2c\|s\pi_\La\psi_\tau^\xi\| &=2\Big[\frac{1-2\kappa}{\tm}\Big]^{\frac{1}{2}} \|s\pi_\La\psi_\tau^\xi\|\ \Big[\frac{\tm}{1-2\kappa}\Big]^{\frac{1}{2}}c\\ &\leq\frac{1-2\kappa}{\tm}\|s\pi_\La\psi_\tau^\xi\|^2+\frac{\tm c^2}{1-2\kappa}\,, \end{align} which results in the 
inequality \begin{equation}\label{eqtri1} \p_\tau\|s\pi_\La\psi_\tau^\xi\|^2\leq -b_0\|s\pi_\La\psi_\tau^\xi\|^2+d_0\,, \end{equation} where \begin{equation}\label{b0d0} b_0=\frac{2\tau}{\tm^2}-\frac{1}{\tm}\,,\quad d_0=\frac{\tm c^2}{1-2\kappa}+d\,. \end{equation} We set \begin{equation} \|s\pi_\La\psi_\tau^\xi\|^2=\exp\Big(-\int_0^\tau b_0(\si)d\si\Big)f(\tau) =\frac{\tau+\tm}{\tm^2}f(\tau) \end{equation} and then \eqref{eqtri1} takes the form \begin{equation} \p_\tau f\leq \frac{\tm^2}{\tau+\tm}d_0(\tau)\leq \tm d_0(\tau)\,. \end{equation} We note that $f(0)=\|[s\pi_\La\psi_\tau^\xi]_{\tau=0}\|^2$ and find \begin{equation}\label{spb} \|s\pi_\La\psi_\tau^\xi\|^2 \leq \frac{2}{\tm}\bigg[\|[s\pi_\La\psi_\tau^\xi]_{\tau=0}\|^2 +\int_0^\tau \sm d_0(\si)d\si\bigg] \end{equation} We have to estimate $d_0$ defined in \eqref{b0d0}. We start by estimating $c$. For the terms depending on the electromagnetic field we have (we use the form of $s$ and estimates of $\la^i$ given in Appendix \ref{spvar}) \begin{equation}\label{Best} \|\Bc\|_{r,\tau}\leq\frac{\con\rrm^2}{\tm^2} \big(\|\hF_{ij}\|_{r,\tau}+\|z^i\hF_{ij}\|_{r,\tau}\big) \leq \frac{\con(r)}{\tm\xi(\tm)}\,, \end{equation} \begin{equation} \|s\hF_{\tau\,.}\|_{r,\tau} \leq \frac{\con\rrm}{\tm} \big(\|\hF_{\tau i}\|_{r,\tau}+\|z^i\hF_{\tau i}\|_{r,\tau}\big) \leq \frac{\con(r)}{\tm\xi(\tm)}\,. \end{equation} For the term $s^{ij}\p_j(i\nu-\mu)$, using the estimates \eqref{esdemu} and \eqref{esdenu} in Appendix \ref{spvar} we find\footnote{The problem of estimation of the term $s^{ij}\p_j\mu$ is the ultimate reason for our introduction of the function $\xi$. 
Without it, the bound in \eqref{estpmu} would have the form $\con(r)\tm^{-1}$, which would be insufficient for our application.} \begin{equation}\label{estpmu} \|s\p(i\nu-\mu)\psi_\tau^\xi\| \leq\Big\|\frac{s\p(i\nu-\mu)}{\xi(\zm)}\Big\|_{r,\tau} \leq\frac{\con}{\tm^2}\Big\|\frac{\zm}{\xi(\zm)}\Big\|_{r,\tau}\leq\frac{\con(r)}{\tm\xi(\tm)}\,, \end{equation} where we used the fact, that both $u/\xi(u)$ as well as $\xi(u)$ are increasing, so \begin{equation} \frac{\zm}{\xi(\zm)}\leq\frac{2\rrm\tm}{\xi(2\rrm\tm)}\leq2\rrm\frac{\tm}{\xi(\tm)}\,. \end{equation} The estimation of the other terms in \eqref{c} uses the bounds \eqref{LaLa}, \eqref{estN} and \eqref{nusnu} and gives \begin{equation}\label{NLaLa} \|N\psi_\tau^\xi\|+\|s\dot{\La}\psi_\tau^\xi\| +\|2\nu\La\psi_\tau^\xi\|\leq\frac{\con}{\tm^2}\,, \end{equation} so summing up we have \begin{equation} c\leq\frac{\con(r)}{\tm\xi(\tm)}\,,\qquad \tm\, c^2\leq\frac{\con(r)}{\tm\xi(\tm)^2}\,. \end{equation} The use of \eqref{Best} and \eqref{NLaLa} shows that also \begin{equation} d(\tau)\leq \frac{\con(r)}{[\tm\xi(\tm)]^{\frac{3}{2}}} \leq\frac{\con(r)}{\tm\xi(\tm)^2}\,, \end{equation} where we used the bound $u^\frac{1}{2}\geq\xi(u)\geq1$, see \eqref{xiukap}. Summing up, we obtain \begin{equation} d_0(\tau)\leq \frac{\con(r)}{\tm\xi(\tm)^2}\,. \end{equation} Now, it follows from \eqref{xib} that $u^\kappa/\xi(u)$ is an increasing function. Therefore, \begin{align} \int_0^\tau \sm d_0(\si)d\si &\leq \con(r)\int_0^\tau \frac{d\si}{\xi(\sm)^2} \leq \con(r) \frac{\tm^{2\kappa}}{\xi(\tm)^2}\int_0^\tau\frac{d\si}{\sm^{2\kappa}}\\ &\leq \con(r)\frac{\tm}{\xi(\tm)^2}\,. \end{align} This, when used in \eqref{spb}, gives \begin{equation} \|s\pi_\La\psi_\tau^\xi\|^2 \leq\frac{2\|[s\pi_\La\psi_\tau^\xi]_{\tau=0}\|^2}{\tm} +\frac{\con(r)}{\xi(\tm)^2} \leq \frac{\con(\psi)}{\xi(\tm)^2}\,, \end{equation} where for the second inequality we used \eqref{xiukap}. 
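As an independent sanity check on the integration step above: the integrating factor satisfies $\exp\big({-\int_0^\tau b_0(\si)\,d\si}\big)=(\tau+\tm)/\tm^2$, since $\int_0^\tau 2\si\,\sm^{-2}d\si=\log\tm^2$ and $\int_0^\tau\sm^{-1}d\si=\log(\tau+\tm)$. This closed form can also be confirmed numerically (a Python sketch with our own function names; not part of the proof):

```python
import math

def b0(t):
    """b_0(tau) = 2*tau/<tau>^2 - 1/<tau>, where <tau> = sqrt(1 + tau^2)."""
    bracket = math.hypot(1.0, t)
    return 2.0 * t / bracket**2 - 1.0 / bracket

def integrating_factor(tau, n=100_000):
    """exp(-int_0^tau b0(s) ds), computed by the midpoint rule."""
    h = tau / n
    integral = sum(b0((k + 0.5) * h) for k in range(n)) * h
    return math.exp(-integral)

def closed_form(tau):
    """(tau + <tau>)/<tau>^2, the closed form used in the proof."""
    bracket = math.hypot(1.0, tau)
    return (tau + bracket) / bracket**2

for tau in (0.5, 2.0, 10.0):
    assert abs(integrating_factor(tau) - closed_form(tau)) < 1e-6
```

For instance, at $\tau=2$ both expressions equal $(2+\sqrt5)/5$.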
\end{proof} \section{Typical electromagnetic field and its special gauges}\label{typspec} Assumption \ref{asum}, on which our main scattering theorem, Theorem \ref{scat}, is based, is rather technical and not easy to interpret. Here we formulate a rather typical situation met in scattering processes.\footnote{Possible oscillating terms in the asymptotic behavior of charged currents are not taken into account. One can expect that fields produced by such terms decay more rapidly than those considered here; see also Discussion.} We show that the electromagnetic field thus identified admits a gauge in which Assumption \ref{asum} is satisfied. Moreover, there is a class of gauges which need not satisfy this assumption, but still assure a similar asymptotic structure. The retarded and advanced potentials are defined in terms of the source current $J$ in the standard way (as $\varphi_{\ret/\adv}$ is defined in terms of $\rho$ in \eqref{adv} in Appendix \ref{decay}). Also, the radiated field of the current $J$ is defined in the standard way, $A_\rad=A_\ret-A_\adv$. The Heaviside step function is denoted by $\theta$. \begin{asu}\label{FAJ} The Lorenz potential $A_a$ of the electromagnetic field $F_{ab}$ is given by \begin{equation} A=A_\ret+A_\inc=A_\adv+A_\out\,, \end{equation} where $A_\ret$ is the retarded potential of a current $J$ satisfying the assumptions listed below, and $A_\inc$ is the radiated potential of another current $J_\inc$ with properties similar to those of $J$. Then also $A_\adv$ is the advanced potential of the current~$J$, and $A_\out$ is the radiated potential of the current $J_\out=J+J_\inc$, which has properties similar to those of $J$ and $J_\inc$.
The conserved current $J(x)$ is of class $C^3$ and for some $0<\vep<\frac{1}{2}$ satisfies the following estimates: \begin{align} &|\n^\al J(x)| \leq\frac{\con}{(|x|+1)^{3+|\al|}}\bigg[\theta(x^2)+\frac{1}{(|x|+1)^\vep}\bigg]\,, & &\text{for}\quad |\al|\leq3,\label{estJ}\\[1ex] &|\n^\al(x\cdot\n+3) J(x)|\leq\frac{\con}{(|x|+1)^{3+|\al|+\vep}}\,,\label{esthomJ} & &\text{for}\quad |\al|\leq2\,. \end{align} The same is assumed for $J_\inc$, and then the same follows for $J_\out$. The potential $A$ is then of class $C^3$ on the Minkowski spacetime. \end{asu} Note that $A$ is a linear combination of retarded and advanced potentials of currents satisfying \eqref{estJ}. Therefore, the last statement of Assumption \ref{FAJ} is a consequence of Lemma \ref{estlemrv} (i) in Appendix \ref{decay}. \begin{thm}\label{totelmga} Let the electromagnetic field $F$ and its Lorenz potential $A$ satisfy Assumption \ref{FAJ}. Define a new gauge by \begin{equation}\label{gg} \cA(x)=A(x)-\n S(x)\,. \end{equation} Then the following holds.\\ (i) For the choice \begin{equation} S(x)=\gf(x)\equiv\log(\tm\zm)\,x\cdot A(x)\,,\label{specgauge} \end{equation} the potential \eqref{gg} fulfills Assumption \ref{asum}, so Theorems \ref{gengau} and \ref{scat} are satisfied. \\ (ii) Let $S(x)$ be another gauge function, such that the difference \begin{equation} \gau(x)=S(x)-\gf(x) \end{equation} satisfies the assumptions of Theorem \ref{gengau} (ii), so that the thesis of this theorem is true. Suppose, in addition, that there exist point-wise limits \begin{equation}\label{limgau} \lim_{\tau\to\pm\infty}\gau(\tau,\z)\equiv \gau_\pm(\z)\,. 
\end{equation} Then the potential \eqref{gg} satisfies the thesis of Theorem \ref{scat} (but not necessarily the assumption of this theorem).\\ (iii) In particular, the gauges defined by: \begin{align} &\text{(a)}\qquad \glo(x)=\log\tm \, x\cdot A(x)\,,\label{consprop}\\ &\text{(b)}\qquad \gtr(x)=\int_0^\tau \p_\si x(\si,\z)\cdot A(\si,\z)d\si\,,\label{spacegau} \end{align} are in the class defined in (ii). In case (b) one has $\hA_\tau=0$ and \begin{equation}\label{Aperp} \hA_i(\tau,\z)=\hat{A}_i(0,\z)-\int_0^\tau\hF_{i\si}(\si,\z)d\si\,. \end{equation} \end{thm} \begin{rem}\label{lorinv} Assumption \ref{FAJ}, and consequently the validity of Theorem~\ref{totelmga}, is independent of the choice of the time axis for the definition of the foliation~\eqref{tauzed}. \end{rem} \begin{proof}[Proof of Thm.\ \ref{totelmga}] (i) This potential satisfies the estimates of Theorem \ref{elmbo} in Appendix \ref{spgava}. As $\cA$ is of class $C^2$ on the Minkowski spacetime, the assumption \eqref{pot} of Theorem \ref{gengau} and the assumption of continuous differentiability of the mapping $\tau\mapsto \hA_i(\tau,\z)$ in Assumption \ref{asum} are clearly satisfied. Denote \begin{equation} \xi(u)=u^\kappa\,,\qquad \kappa<\vep<\tfrac{1}{2}\,.
\end{equation} The estimates of Theorem \ref{elmbo} imply then the following norm bounds: \begin{gather} \|\hF_{i\tau}\|_\infty\,,\ \|z^i\hF_{i\tau}\|_\infty\leq\frac{\con}{\tm}\,,\\[1ex] \|\hF_{ij}\|_\infty\,,\ \|z^i\hF_{ij}\|_\infty\leq\con\,,\\[1ex] \|\hA_\tau\|_{r,\tau}\leq\con\Big(\frac{1+\log\tm}{\tm^{1+\vep}} +\frac{\log\rrm}{\tm^3}\Big)\,,\\[1ex] \|(1+\log\zm)^{-1}\hA_\tau\|_\infty+\|\p_i\hA_\tau\|_\infty+\|z^i\p_i\hA_\tau\|_\infty \leq\con\frac{1+\log\tm}{\tm^{1+\vep}}\,,\\ \|\p_i\p_j\hA_\tau\|_\infty\leq\con(1+\log\tm)\,,\\[1ex] \|\hA_i\|_\infty+\|z^i\hA_i\|_\infty \leq\|\xi(\zm)\hA_i\|_\infty+\|\xi(\zm)z^i\hA_i\|_\infty \leq\con(1+\log\tm)\,, \end{gather} \begin{gather} \|\p_\tau\hA_i\|_\infty+\|z^i\p_\tau\hA_i\|_\infty \leq\frac{\con}{\tm}\,,\\[1ex] \|\p_i\hA_j\|_\infty+\|z^i\p_i\hA_j\|_\infty +\|z^j\p_i\hA_j\|_\infty+\|z^iz^j\p_i\hA_j\|_\infty \leq\con(1+\log\tm)\,. \end{gather} It is now easily checked that the potential satisfies assumption \eqref{pot2} in Theorem \ref{gengau} and all the remaining bounds in Assumption \ref{asum}, from which the thesis follows. (Note that in this case all expressions are bounded in $L^\infty$-norm, except for $\hA_\tau$.) (ii) Let the evolution operator $U(\tau,\si)$ refer to the potential defined in~(i), and denote the new potential now considered by $\hA^\gau$. Following further the notation used in Theorem \ref{gengau}, we have \begin{multline} \lim_{\tau\to\pm\infty}U^\gau_\Phi(\tau,0) =\lim_{\tau\to\pm\infty}\Phi^*(\tau)e^{i \gau(\tau)}U(\tau,0)e^{-i\gau(0)}\\ =\lim_{\tau\to\pm\infty}e^{i\gau(\tau)}U_\Phi(\tau,0)e^{-i\gau(0)} =e^{i\gau_\pm}U_\Phi(\pm\infty,0)e^{-i\gau(0)}\,, \end{multline} so \begin{multline} \slim_{\tau\to\pm\infty}U_0(0,\tau)U^\gau(\tau,0) =U_{0\Phi}(0,\pm\infty)U^\gau_\Phi(\pm\infty,0)\\ =U_{0\Phi}(0,\pm\infty)e^{i\gau_\pm}U_\Phi(\pm\infty,0)\W_\mp e^{-i\gau(0)} \equiv \W^\gau_\mp\,. \end{multline} Similarly for the limits of the conjugated operators.
(iii) Both gauges are easily seen to satisfy condition \eqref{potgau} of Theorem \ref{gengau}; we turn to the estimates \eqref{potgau2}. In case (a), with $C(x)=x\cdot A(x)$, we have \begin{equation} \gau(\tau,\z)=-\log\zm\,C(\tau,\z)\,, \end{equation} and the estimates are easily checked with the use of the results of the proof of Theorem \ref{elmbo}. Also, it follows from the estimate of $|\p_\tau C|$ given there that $C(\tau,\z)$ has limits for $\tau\to\pm\infty$. In case (b), it is sufficient to investigate the difference between the gauge function $\gtr(x)$ and that of case (a): \begin{equation} \gau'(x)=\gtr(x)-\glo(x)\,. \end{equation} Differentiating and using the form of $\p_\tau x$ given in the proof of Theorem \ref{elmbo} we find \begin{gather} \p_\tau\gau'(\tau,\z)=\tm^{-2}\zm A_0-\log\tm\p_\tau C\,,\\ \p_i\p_\tau\gau'(\tau,\z)=\frac{1}{\tm^2}\Big[\frac{z^i}{\zm}A_0+\zm\p_i A_0\Big] -\log\tm\p_i\p_\tau C\,. \end{gather} Now noting that $\p_i\gau'(0,\z)=0$, integrating (the last term by parts) and applying the derivative $\p_j$ we obtain (fields in the integrand depend on $(\si,\z)$) \begin{multline} \p_i\p_j\gau'(\tau,\z) =\int_0^\tau\Big[\frac{d^{ij}}{\zm}A_0+\frac{1}{\zm}(z^i\p_j A_0+z^j\p_i A_0) +\zm\p_i\p_jA_0+\si\p_i\p_j C\Big]\frac{d\si}{\sm^2}\\ -\log\tm\p_i\p_j C\,, \end{multline} with $d^{ij}$ defined in \eqref{dd}. With the use of the estimates listed in the proof of Theorem \ref{elmbo} one finds \begin{gather} |\p_\tau\gau'|\leq\con\frac{1+\log\tm}{\tm^{1+\vep}}\,,\qquad |\p_i\p_\tau\gau'|\leq\con\frac{1+\log\tm}{\tm^{1+\vep}}\,,\\ |\p_i\p_j\gau'|\leq\con(1+\log\tm)\,,\qquad |\p_i\gau'|\leq\con\,, \end{gather} the fourth estimate following by integration of the second one. Thus the estimates \eqref{potgau2} are satisfied. Finally, $\p_\tau\gau'$ is integrable on $\mR$, so the thesis follows.
\end{proof} \section{Discussion}\label{disc} There are three questions we want to address in this section: \begin{itemize} \item[(i)] How far is the present analysis from a complete treatment of the Max\-well-Dirac system? \item[(ii)] Is there a further physical selection criterion to choose a gauge from the class of gauges obtained in Theorem \ref{totelmga}? \item[(iii)] Open problems. \end{itemize} With regard to the first of these questions we note that the form of the charged currents producing the electromagnetic fields in Assumption \ref{FAJ} mimics what one should expect in the fully interacting theory. A possible shortcoming of this assumption lies in the estimates of the derivatives of the currents, which would not be satisfied by oscillating terms in the asymptotic behavior. The Dirac field current does have such asymptotic terms, but it is quite plausible that the oscillations damp the asymptotic behavior of the fields they produce. What would be needed is a sufficiently fast vanishing of the leading oscillating asymptotic terms in the neighborhood of the lightcone (which is a quite reasonable prediction). This would presumably lead to behavior of the electromagnetic potentials similar to that following from Assumption \ref{FAJ}. Moreover, we note that Assumption \ref{asum}, on which our analysis is based, leaves much more room for the types of potentials than Assumption \ref{FAJ} does. Let us at this point once more recall the work by Flato et al.\ \cite{fst95}. These authors do have a theorem on the evolution of the complete system, but in a rather restricted setting and with not much control over the range of validity. The theorem states that in the space of smooth (i.e.\ $C^\infty$) initial data there exists a neighborhood of zero which gives rise to Cauchy evolution and completeness. Our analysis does not extend to the fully interacting case, but it gives complete results for the Dirac part of the system, with the electromagnetic fields plausibly guessed.
Our main motivation for the present work was the expectation that the choice of gauge does matter for the asymptotic behavior and its interpretation. We have shown that for a class of gauges the asymptotic behavior of the Dirac field approaches that of a free field, without the need for any dynamical corrections. This brings us to the second question mentioned at the beginning. Theorem \ref{totelmga} identifies a class of gauges for which the situation described above takes place. The concrete gauge defined by \eqref{specgauge} was used for technical reasons: this gauge satisfies Assumption \ref{asum}, and it is the only gauge with that property among the gauges explicitly mentioned in Theorem \ref{totelmga}. However, we want to argue now, in less precise terms than those of the preceding sections, that a~different choice of gauge within the given class has a definite physical interpretation when an extension, as mentioned above, to the fully interacting system is considered. Such an extension, partly based on conjectures, was considered in \cite{her95}, and we briefly recall a few results of this analysis. It was found that if one uses inside the lightcone the gauge related to the Lorenz gauge $A$ (with properties similar to those of Assumption \ref{FAJ}) by \begin{equation} \cA_\mathrm{cone}(x)=A(x)-\n\gco(x)\,,\qquad \gco(x)=\log\sqrt{x^2}\,x\cdot A(x)\,, \end{equation} or another gauge with similar timelike asymptotic behavior, then the asymptotic total four-mo\-men\-tum and angular momentum taken away into timelike infinity have the same functional form as those for the free Dirac field. Moreover, the energy-momentum radiated into null infinity is fully due to the free outgoing electromagnetic field. However, the angular momentum going out into null infinity, in addition to the free radiated contribution, contains mixed adv-out electromagnetic terms.
These latter terms may be incorporated into the free Dirac field by a change of the asymptotic limit of the addition to the gauge function. As a result, both the energy-momentum and the angular momentum are clearly separated into electromagnetic and Dirac parts.\footnote{For explicit expressions and a more extensive discussion we refer the reader to \cite{her95}, Section~V. Here we would only like to reassure the reader that the problems met for angular momentum in electrodynamics are accounted for in that discussion.}\pagebreak[2] Now we would like to identify in our class of global gauges those for which this separation may be expected. It is not difficult to show that for points inside the future lightcone, represented by $\la v$, with $\la>0$ and $v$ on the future unit hyperboloid, we have \begin{equation} \lim_{\la\to\infty}[\glo(\la v)-\gco(\la v)]=0 \end{equation} for the gauge function $S_{\log}$ defined in Theorem \ref{totelmga} (iii). Therefore, we put forward the following selection criterion for the gauge functions $S$ to be used in \eqref{gg} for the definition of $\cA$: \begin{equation} S(\tau,\z)=S_{\log}(\tau,\z)+\Delta S(\tau,\z)\,, \end{equation} where $\Delta S$ has the limit \begin{equation} \Delta S(\infty,\z)=\lim_{\tau\to\infty}\Delta S(\tau,\z)\,, \end{equation} which satisfies the condition of the separation of angular momentum as described above (it is easy to see that for $\Delta S$ sufficiently regular the rhs is equal to $\dsp\lim_{\la\to\infty}\Delta S(\la(\zm,\z))$). Similar conditions should be applied at past infinity. Gauges in the class thus selected have the property announced in the introduction: $x\cdot\cA(x)$ vanishes asymptotically in timelike directions. This may be checked easily for $S_{\log}$, and if $\Delta S(\tau,\z)$ does not oscillate in $\tau$, the same is true for this part.
The existence and completeness of the wave operators, as indicated in Remark \ref{lorinv}, do not depend on the choice of the time axis for the definition of our spacetime foliation. On the other hand, and this is the first of the open problems, the precise transformation law from one inertial observer to another needs further investigation. More generally, one can ask what the general class of Cauchy foliations is for which the results could be repeated, and how the results would depend on the choice within this class. Let us end this discussion by expressing the belief that our scheme could be applied to analogous problems on at least some smooth curved spacetime backgrounds. Whether, as inquired by one of the Referees, a further extension to black-hole-type spacetimes would be possible is a more speculative question.
\section{Introduction} Manifold learning is broadly concerned with analyzing high-dimensional data sets that have a low intrinsic dimensionality. The standard assumption is that the input data set $\mathcal{X} = \{{\bf x}_1, \ldots, {\bf x}_n\} \subseteq \mathbb{R}^D$ lies on or near a $d$-dimensional submanifold ${\mathcal M} \subseteq \mathbb{R}^D$ where $ d \ll D$. The key tasks are dimensionality reduction \cite{TenenbaumDesilvaLangford2000,RoweisSaul2000,DonohoGrimes2003,BelkinNiyogi2003,ZhangZha2004,CoifmanLafon2006,VandermaatenHinton2008,McinnesEtal2018,ZhangMoscovichSinger2021}, function representation and approximation \cite{GavishNadlerCoifman2010,ChengWu2013,LiaoMaggioniVigogna2016,SoberAizenbudLevin2021} and semi-supervised learning \cite{BelkinNiyogi2004,GoldbergEtal2009,MoscovichJaffeNadler2017}. Most data analysis methods in this setting rely on pairwise Euclidean distances between the data points. In this paper, we focus on manifold learning methods that use a graph Laplacian. These include the popular spectral embedding methods Laplacian eigenmaps \cite{BelkinNiyogi2003,BelkinNiyogi2004} and diffusion maps \cite{CoifmanLafon2006}. Both methods map the input points to the eigenvectors of a graph Laplacian operator $\mathcal{L}_n$ (or weighted variant thereof). By definition, $\mathcal{L}_n$ acts on a function $f: \mathcal{X} \rightarrow \mathbb{R}$ \nolinebreak via \begin{align} \label{def:Ln} \left(\mathcal{L}_n f \right)({\bf x}_i) := \sum_{j=1}^n W_{ij} \left( f({\bf x}_j) - f({\bf x}_i) \right), \quad&& W_{ij} := \exp\left(-\frac{\|{\bf x}_j-{\bf x}_i\|_2^2}{\sigma_n^2}\right). \end{align} Under suitable conditions, as $n \to \infty$ the discrete graph Laplacian operator $\mathcal{L}_n$ converges to the continuous Laplace-Beltrami operator $\Delta_{\mathcal{M}}$ on the manifold~\cite{BelkinNiyogi2008}, and its eigenvectors converge to the Laplacian eigenfunctions \cite{BelkinNiyogi2007}. 
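To make the definition concrete, in matrix form $\mathcal{L}_n = W - D$ with $D=\operatorname{diag}(W\mathbf{1})$. Below is a minimal dense NumPy sketch of this construction (the function name and toy data are our own):

```python
import numpy as np

def graph_laplacian(X, sigma):
    """Matrix of the discrete graph Laplacian defined in the text:
    (L_n f)(x_i) = sum_j W_ij (f(x_j) - f(x_i)), i.e. L_n = W - D,
    with W_ij = exp(-||x_j - x_i||^2 / sigma^2) and D = diag(row sums)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / sigma**2)
    return W - np.diag(W.sum(axis=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))       # n = 100 points in R^D with D = 3
f = np.sin(X[:, 0])                     # a function sampled at the points
Lf = graph_laplacian(X, sigma=0.5) @ f  # (L_n f)(x_i) for every i
```

A quick correctness check is that constant functions are annihilated, $\mathcal{L}_n\mathbf{1}=0$; spectral embeddings then use the low eigenvectors of this matrix (or of a weighted variant).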
While the convergence results can be extended to more general affinity kernels of the form $W_{ij} = K_{\sigma_n}(\|{\bf x}_i-{\bf x}_j\|_2)$, the role of the Euclidean norm here is essential. This poses a potential limitation for graph Laplacian methods, since Euclidean metrics are not the best choice for every application domain \cite{BelletHabrardSebban2015}. Furthermore, some non-Euclidean metrics use compressed representations, which can have practical benefits in terms of runtime and memory requirements. Following this line of reasoning leads to the following questions: can the machinery of discrete Laplacian operators be generalized to non-Euclidean metrics? Does doing so yield any practical benefits? If so, what is the underlying theory? This paper is an initial step in answering these questions. The main contribution of the paper is the derivation of the continuum limit of discrete Laplacian operators similar to \eqref{def:Ln} but with an affinity kernel based on an arbitrary norm. Our key result (Theorem~\ref{thm:limit}) is a proof that for \emph{any norm}, graph Laplacians converge to an explicit second-order differential operator on $\mathcal{M}$. In contrast to the Euclidean case, in the general case the limiting operator is not intrinsic to the manifold, i.e., it depends on the embedding of $\mathcal M$ in $\mathbb{R}^D$. Furthermore, it has non-vanishing and possibly discontinuous first-order terms. The second-order coefficients of the limiting differential operator at a point $\textup{\bf p} \in {\mathcal M}$ are given by the second moments of the intersection of the tangent plane to $\mathcal{M}$ at $\textup{\bf p}$ with the given norm's unit ball. The first-order terms depend on the second fundamental form of $\mathcal{M}$ at $\textup{\bf p}$ and the tangent cones to the norm's unit sphere, through a function we call $\operatorname{tilt}$, defined in Section \ref{subsec:tilt-const}.
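The second-moment matrices that enter these second-order coefficients can be estimated numerically for any given norm. The sketch below (our own illustration; it assumes the unit ball fits inside $[-1,1]^d$, which holds whenever $\|\mathbf{e}_i\|\geq 1$ for every coordinate direction) uses rejection sampling. For the $\ell^1$ ball in the plane the exact matrix is $\operatorname{diag}(1/6,1/6)$, versus $\operatorname{diag}(1/4,1/4)$ for the Euclidean disc, so different norms already induce different second-order terms:

```python
import numpy as np

def second_moment_matrix(norm, dim, n=100_000, seed=0):
    """Monte Carlo estimate of E[u u^T] for u uniform on the unit ball
    {u : norm(u) <= 1}, by rejection sampling from the cube [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, size=(n, dim))
    keep = np.array([norm(row) <= 1.0 for row in u])
    v = u[keep]
    return v.T @ v / len(v)

M_l1 = second_moment_matrix(lambda x: np.abs(x).sum(), 2)  # l1 unit ball
M_l2 = second_moment_matrix(np.linalg.norm, 2)             # Euclidean disc
# Exact values: diag(1/6, 1/6) for the l1 ball, diag(1/4, 1/4) for the disc.
```

The off-diagonal entries vanish here by symmetry of both balls; for a generic norm (or a rotated tangent plane) they do not, which is the source of the cross-terms discussed in the related-work section.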
In a second contribution, which was the original motivation for this work, we present in Section~\ref{sec:experiments} a variant of Laplacian eigenmaps based on a norm that approximates the Earthmover's distance (EMD), also known as the Wasserstein-1 metric, for learning volumetric shape spaces. This is motivated by an important problem in structural biology: learning the conformation space of flexible proteins and other macromolecules with continuous variability from cryo-electron microscopy images. Empirically, we demonstrate that classical (Euclidean) Laplacian eigenmaps are at a disadvantage compared to Laplacian eigenmaps based on this approximate EMD, as the latter requires far fewer sample points to recover the intrinsic manifold of motion. Furthermore, as we show in Section \ref{sec:sparsity}, the proposed method can achieve faster runtime and a smaller memory footprint through an intermediate compressed representation. This demonstrates that, at least for certain data sets, the use of non-Euclidean norms in Laplacian-based manifold learning is desirable from a practical standpoint. \subsection{\textbf{Related work}} Our convergence proof builds on Belkin and Niyogi's well-known proof for the case of the Euclidean norm \cite{BelkinNiyogi2008}. However, the argument for the Euclidean case is \textit{not} directly adaptable to other norms. It relies on a special property of the Euclidean norm: Euclidean distances provide a second-order approximation to manifold geodesic distances (see [6, Figure 1]). This fails for general norms, which do not even give a first-order approximation to geodesic distances. This difference introduces a first-order derivative term in the limit in the general case. Another technical difference is that, in the standard case, the intersection of an embedded tangent space with the Euclidean unit ball is rotationally symmetric.
This gives rise to the Laplace-Beltrami operator, which is the only second-order rotationally symmetric differential operator (up to scale). This property fails for general norms, thereby introducing ``cross-terms'' in the second-order part of the general limit. In \cite{TingHuangJordanICML2010}, a different extension of the convergence proof for graph Laplacian methods appeared. That work analyzed $k$-nearest neighbor graphs and other constructions, but still based on the Euclidean norm; we do not pursue this direction. To the best of our knowledge, most Laplacian-based manifold learning works employ the standard Euclidean norm. Two notable exceptions are the works of Mishne and collaborators \cite{MishneEtal2016,MishneEtal2017}, where tree-based metrics \cite{CoifmanLeeb2013} were used as a basis for diffusion maps. These metrics can be interpreted as hierarchical Earthmover's distances. However, since the trees are data-dependent, our main theorem does not apply, as it requires a data-independent norm. The application section of this paper is an extension of \cite{ZeleskoMoscovichKileelSinger2020}, where we first proposed to use a variant of diffusion maps based on an approximate Earthmover's distance. In \cite{RaoMoscovichSinger2020}, the same approximate Earthmover's distance was used for the clustering of cryo-EM images. The work of Lieu and Saito \cite{LieuSaito2011} also combines diffusion maps and the Earthmover's distance. However, the order of operations is different: they first use Euclidean diffusion maps and only then apply the Earthmover's distance to the resulting point clouds.
\begin{table} \begin{center} \renewcommand{\arraystretch}{1.55} \normalsize \begin{tabular}{ll} \bf Symbol & \bf Description\\\hline ${\mathcal M} \subseteq \mathbb{R}^D$ & Compact embedded Riemannian submanifold \\ $d = \dim(\mathcal{M})$ & Dimension of $\mathcal{M}$ \\ $\textup{\bf p} \in \mathcal{M}$ & Point on $\mathcal{M}$\\ $T_\textup{\bf p} {\mathcal M}$ & Tangent space to ${\mathcal M}$ at $\textup{\bf p}$ \\ $\textup{exp}_{\textup{\bf p}}: T_{\textup{\bf p}} {\mathcal M} \rightarrow {\mathcal M}$ & Exponential map for $\mathcal{M}$ at $\textup{\bf p}$ \\ $\textup{\bf s} \in \mathbb{R}^d \cong T_{\textup{\bf p}}\mathcal{M}$ & Geodesic normal coordinates for $\mathcal{M}$ around $\textup{\bf p}$ \\ $f : \mathcal{M} \rightarrow \mathbb{R}$ & Function on $\mathcal{M}$ \\ $\widetilde f = f \circ \operatorname{exp}_{\textup{\bf p}}$ & Function pulled-back to tangent space\\ $\operatorname{grad}\widetilde f: \mathbb{R}^d \to \mathbb{R}^d$ & Gradient of $\widetilde{f} $\\ $\operatorname{hess}\widetilde f: \mathbb{R}^d \to \mathbb{R}^{d \times d}$ & Hessian of $\widetilde{f}$\\ $L_\textup{\bf p}: T_\textup{\bf p} {\mathcal M} \rightarrow \mathbb{R}^D$ & Differential of exponential map at $\textup{\bf p}$ (Eq. \eqref{eq:expp-taylor}) \\ $Q_\textup{\bf p}:T_\textup{\bf p}{\mathcal M} \rightarrow \mathbb{R}^D$ & Second fundamental form of $\mathcal{M}$ at $\textbf{p}$ (Eq. \eqref{eq:expp-taylor}) \\ $\mathcal{B} \subseteq \mathbb{R}^D$ & Origin-symmetric convex body \\ $\| \cdot \|_{\mathcal{B}} : \mathbb{R}^D \to \mathbb{R}_{\ge 0} $ & Norm with unit ball ${\mathcal{B}}$ \\ $\| \cdot \|_2: \mathbb{R}^D \to \mathbb{R}_{\ge 0}$ & Euclidean norm \\ $\| \cdot \|_{\textbf{w},1}: \mathbb{R}^D \to \mathbb{R}_{\ge 0}$ & Weighted $\ell_1$-norm \\ $K_{\sigma} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ & Affinity kernel with width parameter $\sigma > 0$ \\ $\mathcal{L}_{n}$ & Point-cloud Laplacian based on $\| \cdot \|_2$ (Eq. 
\eqref{eq:discrete-Lap})\\ $\mathcal{L}_{n, {\mathcal{B}}}$ & Point-cloud Laplacian based on $\| \cdot \|_{\mathcal{B}}$ (Definition \ref{discrete_laplacian_B})\\ $\Delta_{\mathcal M}$ & Laplace-Beltrami operator on $\mathcal{M}$ \\ $\Delta_{\mathcal{M}, \mathcal{B}}$ & Laplacian-like differential operator (Definition \ref{def:Laplace-like})\\ ${\operatorname{tilt}_{\M, \B, \p}}$ & Tilt function at $\textup{\bf p}$ (Proposition/Definition \ref{def:tilt-const}) \\ $\overline{S}, S^{\circ}, \partial S$ & Closure, interior, boundary of a set \\ $TC_{\textbf{y}}(\mathcal{Y}) \subseteq \mathbb{R}^D$ & Tangent cone to $\mathcal{Y} \subseteq \mathbb{R}^D$ at $\textup{\textbf{y}} \in \mathcal{Y}$ (Eq. \eqref{eq:def-tan-cone}) \\ $\mathcal W$ & Wavelet transform \\ $\langle \cdot, \cdot \rangle$ & Inner product \\ $\mathbb{R}_{\ge 0}, \mathbb{R}_{>0}$ & Non-negative/strictly positive real numbers \end{tabular} \end{center} \caption{\small List of notation.} \label{table:notation} \end{table} \addcontentsline{toc}{subsection}{List of symbols} \section{\textbf{Background: graph Laplacian methods}} \label{subsec:graphLaplacian} In this section, we review graph Laplacian methods in more detail than in the introduction. Given a subset $\mathcal{X} =\{{\bf x}_1, \ldots, {\bf x}_n\} \subseteq \mathbb{R}^D$, and an affinity function $K_{\sigma_n} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$, consider the symmetric matrix of pairwise affinities: \begin{align} \label{def:W_ij} W_{ij} := K_{\sigma_n}(\|{\bf x}_i - {\bf x}_j\|_2). \end{align} The canonical choice for $K_{\sigma_n}$ is the Gaussian kernel, $K_{\sigma_n}(t) = \exp(-t^2/\sigma_n^2)$ (up to normalization conventions) where the width parameters $\sigma_n$ decay to zero at an appropriate rate. Another possibility is the 0/1 kernel, $K_{\sigma_n}(t) = \mathbb{1}(t \leq \sigma_n)$. 
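The two kernels behave rather differently in practice. A small Python comparison (ours, with arbitrary toy parameters): the Gaussian kernel produces a dense affinity matrix with strictly positive, rapidly decaying entries, while the 0/1 kernel produces a sparse binary one.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((100, 2))                       # 100 points in the unit square
dist = np.sqrt(np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))

sigma = 0.1
W_gauss = np.exp(-dist**2 / sigma**2)          # Gaussian kernel: dense
W_01 = (dist <= sigma).astype(float)           # 0/1 kernel: sparse, binary

assert np.all(W_gauss > 0)                     # every Gaussian affinity is positive
assert set(np.unique(W_01)) <= {0.0, 1.0}      # the 0/1 kernel is binary
print("fraction of zero entries in W_01:", 1.0 - W_01.mean())
```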
The matrix $W$ defines a weighted graph $G=(\mathcal{X},E,W)$, where the set of edges $E$ consists of all the pairs $(i,j)$ for which $W_{ij} > 0$. Define the diagonal degree matrix by $D_{ij} = \delta_{ij}\sum_k W_{ik}$. The (unnormalized, negative semi-definite) Laplacian matrix of $G$, or the \emph{graph Laplacian}, is defined to be \begin{align} \label{def:LG} \mathcal{L}_G := W - D. \end{align} \vspace{-0.5em} \begin{Remark} \label{remark:negsemidefinite} As a warning, several other authors use the \textit{positive} semi-definite graph Laplacian convention, $\mathcal{L}^{\textup{psd}}_G := D - W$. In this paper, we choose the negative semi-definite convention for both the discrete and continuous Laplacians. \end{Remark} The graph Laplacian acts on vectors $f \in \mathbb{R}^n$. We think of $f$ as a real-valued function on the vertex set $\mathcal{X}$. Then the graph Laplacian forms, at each vertex, a weighted sum of the differences between the function's value at that vertex and its values at the neighboring vertices: \begin{align}\label{eq:rightmost} \left(\mathcal{L}_G f \right)({\bf x}_i) = \sum_{j=1}^n W_{ij}(f({\bf x}_j) - f({\bf x}_i)) = \sum_{j=1}^n K_{\sigma_n}(\|{\bf x}_j - {\bf x}_i\|_2) \left( f({\bf x}_j) - f({\bf x}_i)\right). \end{align} Note that $\mathcal L_G$ is an $n \times n$ symmetric negative semi-definite matrix. We list its eigenvalues in descending order \begin{align*} 0 = \lambda_0 \geq \lambda_1 \geq \ldots \geq \lambda_{n-1}, \end{align*} and choose corresponding real orthonormal eigenvectors \begin{align*} \phi_0, \ldots, \phi_{n-1} \in \mathbb{R}^n \end{align*} where $\phi_0 = n^{-1/2} \mathbf{1} $. These eigenvectors give an orthonormal basis of functions on $\mathcal{X}$.
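These spectral facts are easy to verify numerically. In the sketch below (ours, not part of the paper), the graph is complete with strictly positive Gaussian weights, hence connected, so the eigenvalue $0$ is simple with eigenvector $\mathbf{1}$ and all remaining eigenvalues are strictly negative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((40, 2))                        # 40 points in the unit square
sq_dists = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
W = np.exp(-sq_dists)                          # Gaussian affinities (sigma = 1)
D = np.diag(W.sum(axis=1))                     # degree matrix
L = W - D                                      # negative semi-definite convention

evals = np.linalg.eigvalsh(L)                  # ascending order
assert np.allclose(L @ np.ones(40), 0.0)       # constant vector: eigenvalue 0
assert abs(evals[-1]) < 1e-8                   # largest eigenvalue is 0 ...
assert evals[-2] < -1.0                        # ... and is simple (graph connected)
```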
Two common uses for the Laplacian eigenvectors are: \begin{enumerate} \item As a basis for function representation and approximation of real-valued functions $g$ defined on $\mathcal X$ \cite{BelkinNiyogi2004,CoifmanEtal2005,LeeIzbicki2016}, \begin{align*} g({\bf x}_i) = \sum_{j=0}^{n-1} \alpha_j \phi_j({\bf x}_i). \end{align*} \item As a method for dimensionality reduction of the input set $\mathcal{X}$ \cite{BelkinNiyogi2003,CoifmanLafon2006}, \begin{align*} {\bf x}_i \mapsto \left( \phi_1({\bf x}_i), \ldots, \phi_m ({\bf x}_i) \right). \end{align*} Here, each $\textbf{x}_i$ is mapped into $\mathbb{R}^m$ via the $i$-th coordinates of the first $m$ nontrivial Laplacian eigenvectors. This usage of eigenvectors is motivated by the fact that any closed connected Riemannian manifold is smoothly embedded into $\mathbb{R}^m$ by its first $m$ Laplacian eigenfunctions for some $m$ \cite{Bates2014}. \end{enumerate} The Laplacian matrices $\mathcal{L}_{G}$ are often analyzed as points are added to $\mathcal{X}$ and the graph $G$ grows. In this context, the \textit{manifold assumption} is standard, namely that the $\textbf{x}_i$ are drawn i.i.d. from some embedded submanifold $\mathcal{M} \subseteq \mathbb{R}^D$. Other works, for example in the study of clustering, have analyzed the limit of graph Laplacians without making the manifold assumption (see \cite{VonLuxburg2007,TrillosSlepcev2018}). It is convenient to work with an extension of the graph Laplacian that acts on any function $f$ whose domain is a superset of $\mathcal{X}$.
Specifically we define the (unnormalized, negative semi-definite) \textit{point-cloud Laplacian} $\mathcal{L}_n$ computed using $\mathcal{X}$ as follows: for each $f : \mathcal{Y} \rightarrow \mathbb{R}$ where $\mathcal{X} \subseteq \mathcal{Y}$, define $\mathcal{L}_n f : \mathcal{Y} \rightarrow \mathbb{R}$ by \begin{align} \label{eq:discrete-Lap} \mathcal{L}_nf(\textup{\bf p}) := \frac{1}{n} \sum_{j=1}^n K_{\sigma_n}(\| \textbf{x}_j - \textup{\bf p} \|_2)(f(\textbf{x}_j) - f(\textup{\bf p})). \end{align} After rescaling by $n$ the point-cloud Laplacian \eqref{eq:discrete-Lap} extends the graph Laplacian \eqref{eq:rightmost}, because for each $f : \mathcal{Y} \rightarrow \mathbb{R}$ where $\mathcal{X} \subseteq \mathcal{Y}$ it holds $\left( n\mathcal{L}_n f \right)\!|_{\mathcal{X}} = \mathcal{L}_G(f|_{\mathcal{X}})$. \subsection{\textbf{Existing theory: graph Laplacians using the Euclidean norm}} It is known that using Euclidean norms to compute affinities, the point-cloud Laplacian converges to the Laplace-Beltrami operator under the manifold assumption (see \cite{HeinAudibertLuxburg2005,Singer2006,BelkinNiyogi2008,GineKoltchinskii2006}). Here is a precise statement. \begin{Theorem}[\normalfont {\cite[Th.~3.1]{BelkinNiyogi2008}}: Convergence of the point-cloud Laplacian based on the Euclidean norm] \label{thm:classical} Let ${\mathcal M}$ be a compact $d$-dimensional embedded Riemannian submanifold of $\mathbb{R}^D$ with Laplace-Beltrami operator $\Delta_{\mathcal M}$. Let $\textup{\textbf{x}}_1, \ldots, \textup{\textbf{x}}_n$ be i.i.d. draws from the uniform measure on $\mathcal{M}$. Fix any constant $\alpha > 0$, and set $\sigma_n = 2 n^{-1/(2d+4+\alpha)}$ and $c_n = \frac{\pi^{d/2}}{4} \sigma_n^{d+2} $. 
Let $\mathcal{L}_n$ be the point-cloud Laplacian defined in Eq.~\eqref{eq:discrete-Lap} using the Gaussian affinity based on the Euclidean norm, $K_{\sigma_n}(\|\textup{\textbf{x}}_{j} - \textup{\bf p}\|_2) = \exp(-\| \textup{\textbf{x}}_j - \textup{\bf p} \|_2^2/\sigma_n^2)$. Then, given a three-times continuously differentiable function $f : \mathcal{M} \rightarrow \mathbb{R}$ and a point $\textup{\bf p} \in \mathcal{M}$, we have the following convergence in probability: \begin{align*} \frac{1}{c_n} \mathcal{L}_n f(\textup{\bf p}) \xrightarrow{\,\,\,\,\, p \,\,\,\,} \frac{1}{\operatorname{vol}(\mathcal{M})} \Delta_{\mathcal{M}}f(\textup{\bf p}). \end{align*} \end{Theorem} \noindent Our Theorem~\ref{thm:limit} extends Theorem~\ref{thm:classical} to the case of non-Euclidean norms. Meanwhile, for a non-uniform sampling distribution, variants of the point-cloud Laplacian are known to converge pointwise to a weighted Laplacian (or Fokker--Planck) operator, which has an additional density-dependent drift term \cite{CoifmanLafon2006,NadlerLafonCoifmanKevrekidis2005,TingHuangJordanICML2010}. Theorem~\ref{thm:nonuniform} extends this to the case of non-Euclidean norms. In addition to pointwise consistency, spectral consistency has been proven when the norm is Euclidean \cite{BelkinNiyogi2007,HeinAudibertVonLuxburg2007,VonluxburgBelkinBousquet2008,RosascoBelkinDevito2010,TrillosSlepcev2018,TrillosGerlachHeinSlepcev2020,WormellReich2021}. This is a stronger mode of convergence than pointwise convergence: the eigenvalues/eigenvectors of the graph Laplacians converge to the eigenvalues/eigenfunctions of the limiting operator. We leave such considerations for arbitrary norms to future work. \section{Ingredients, main theorem statement, first properties} In this section, the primary goal is to formulate our main result, Theorem~\ref{thm:limit}, in Section~\ref{subsec:thm-statement}. Before this, we collect tools from differential geometry and convex geometry.
Then in Section~\ref{subsec:tilt-const}, we define a particular function that depends on the second-order geometry of a manifold and the unit ball of a given norm; this turns out to give the correct first-order derivative term in Theorem~\ref{thm:limit}. After the main statement, we explain how it adapts to non-uniform sampling of the manifold (Theorem~\ref{thm:nonuniform}). Then we discuss first properties of the limiting differential operator and show that it reduces to the Laplace-Beltrami operator in the Euclidean case. As a non-Euclidean example, we calculate the limit explicitly for a circle in the plane where the ambient norm is weighted~$\ell_1$. \subsection[\textbf{Preliminaries from Riemannian geometry}]{\textbf{Preliminaries from Riemannian geometry}} \label{subsec:prelim-riem} We start by reviewing some basics from Riemannian geometry. These notions are later used for our theorem statement and its proof. Textbook accounts of differential geometry are abundant; we particularly like Lee's books \cite{Lee-Riem-Book,LeeBook2012}. Throughout the paper, $\mathcal{M} \subseteq \mathbb{R}^D$ denotes a $d$-dimensional compact embedded \textit{Riemannian submanifold} of $\mathbb{R}^D$. We let $\textbf{p} \in \mathcal{M}$ denote a point (typically fixed in our considerations). We write $T_{\textbf{p}}\mathcal{M}$ for the abstract \textit{tangent space} to $\mathcal{M}$ at $\textbf{p}$ (defined in \cite[Ch.~3]{LeeBook2012}). In particular, $T_{\textbf{p}}\mathcal{M}$ is a $d$-dimensional real vector space equipped with an inner product $\langle \cdot, \cdot \rangle = \langle \cdot , \cdot \rangle_{\textbf{p}}$. Further, $0 \in T_{\textbf{p}}\mathcal{M} $ and we do \textit{not} consider $T_{\textbf{p}} \mathcal{M}$ to be embedded in $\mathbb{R}^D$. 
The canonical mapping from the tangent space into the manifold is the \textit{exponential map} at $\textbf{p}$ \cite[Ch.~5]{Lee-Riem-Book}, denoted $\textup{exp}_{\textbf{p}} : T_{\textbf{p}} \mathcal{M} \rightarrow \mathcal{M}$.\footnote{Compactness of $\mathcal{M}$ and the Hopf-Rinow theorem imply that $\textup{exp}_{\textbf{p}}$ is defined on the entire tangent space $T_{\textbf{p}} \mathcal{M}$.} This is a $C^{\infty}$-map that carries straight lines on $T_{\textbf{p}} \mathcal{M}$ through the origin to \textit{geodesics} on $\mathcal{M}$ through the point $\textbf{p}$. By the inverse function theorem, there exist open neighborhoods $\mathcal{U} \subseteq T_{\textbf{p}}\mathcal{M}$ of $0$ and $\mathcal{V} \subseteq \mathcal{M}$ of $\textbf{p}$ (which we fix once and for all) such that the exponential map restricts to a \textit{diffeomorphism} between these neighborhoods, \begin{align*} \textup{exp}_{{\bf p}}: \mathcal{U} \xrightarrow{\sim} \mathcal{V}. \end{align*} Further, let us fix once and for all an orthonormal basis on $T_{\textbf{p}}\mathcal{M}$ with respect to $\langle \cdot, \cdot \rangle_{\textbf{p}}$, and write $\textbf{s} = (s_1, \ldots, s_d)^{\top}$ for coordinates on $\mathcal{U}$ with respect to this basis; these are \textit{geodesic normal coordinates} for $\mathcal{M}$ around $\textbf{p}$ with the chart given by the exponential map. Identifying $\textup{exp}_{\textbf{p}}$ with $\iota \circ \textup{exp}_{\textbf{p}}$, where $\iota : \mathcal{M} \hookrightarrow \mathbb{R}^D$ is inclusion, $\textup{exp}_{\textbf{p}}$ is a smooth mapping from an open subset of Euclidean space $\mathbb{R}^d$ into Euclidean space $\mathbb{R}^D$, and thus it admits a Taylor expansion around $\textbf{s}=0$: \begin{align} \label{eq:expp-taylor} \textup{exp}_{\textbf{p}}(\textbf{s}) = \textbf{p} + L_{\textbf{p}}(\textbf{s}) + \frac{1}{2} Q_{\textbf{p}}(\textbf{s}) + O(\|\textbf{s}\|_2^3).
\end{align} Equation \eqref{eq:expp-taylor} links \textit{intrinsic coordinates} to \textit{extrinsic coordinates} for $\mathcal{M}$. Here: \begin{itemize} \item $L_{\textbf{p}} : T_{\textbf{p}}\mathcal{M} \rightarrow \mathbb{R}^D$ is a homogeneous linear function, the \textit{differential} of the exponential map at $\textbf{p}$, namely $L_{\textbf{p}} = D\textup{exp}_{\textbf{p}}(0)$; and \item $Q_{\textbf{p}} : T_{\textbf{p}}\mathcal{M} \rightarrow \mathbb{R}^D$ is a homogeneous quadratic function (equivalently a linear function of $\textbf{s}\textbf{s}^{\top}$) called the \textit{second fundamental form} of $\mathcal{M}$ at $\textbf{p}$~\cite{Monera2014}. \end{itemize} Consider the image (respectively, translated image) of $L_{\textbf{p}}$: \begin{align} L_{\textbf{p}}(T_{\textbf{p}} \mathcal{M}) & = \{ L_{\textbf{p}}(\textbf{s}) : \textbf{s} \in T_{\textbf{p}}\mathcal{M} \} \subseteq \mathbb{R}^D, \text{ and } \label{eq:emd-linear-tang} \\ \textbf{p} + L_{\textbf{p}}(T_{\textbf{p}} \mathcal{M}) &= \{ \textbf{p} + L_{\textbf{p}}(\textbf{s}) : \textbf{s} \in T_{\textbf{p}}\mathcal{M} \} \subseteq \mathbb{R}^D. \nonumber \end{align} We call these the \textit{linear (respectively, affine) embedded tangent space} of $\mathcal{M}$ at $\textbf{p}$. It is well-known that $L_{\textbf{p}}$ provides an \textit{isometric embedding} of $T_{\textbf{p}}\mathcal{M}$ into $\mathbb{R}^D$, \begin{equation} \label{eq:isometric} \| L_{\textbf{p}}(\textbf{s}) \|_2 = \| \textbf{s} \|_2 \,\, \text{ for all } \textbf{s} \in T_{\textbf{p}}\mathcal{M}. 
\end{equation} Another important fact is that the second fundamental form $Q_{\textbf{p}}$ takes values in the \textit{normal space} to $\mathcal{M}$ at $\textbf{p}$, that is $Q_{\textbf{p}}(T_{\textbf{p}} \mathcal{M}) \subseteq L_{\textbf{p}}(T_{\textbf{p}}\mathcal{M})^{\perp} \subseteq \mathbb{R}^D$, i.e., \begin{equation} \label{eq:perp} \langle L_{\textbf{p}}(\textbf{s}), Q_{\textbf{p}}(\textbf{s}') \rangle_{\mathbb{R}^D} = 0 \,\, \text{ for all } \textbf{s}, \textbf{s}' \in T_{\textbf{p}}\mathcal{M}. \end{equation} Finally, let $\mu$ denote the \textit{density} on $\mathcal{M}$ uniquely determined by the Riemannian structure on $\mathcal{M}$ as in \cite[Prop.~16.45]{LeeBook2012}. The density determines a measure on $\mathcal{M}$, which we refer to as the \textit{uniform measure}. The measure enables integration of measurable functions $f : \mathcal{M} \rightarrow \mathbb{R}$, which we write as $ \int_{\mathcal{M}} f(\textbf{x}) \, d\mu(\textbf{x})$. Then the Riemannian \textit{volume} of $\mathcal{M}$ is $\operatorname{vol}(\mathcal{M}) = \int_{\mathcal{M}} 1 \, d\mu(\textbf{x}).$ \subsection[\textbf{Preliminaries from convex geometry}]{\textbf{Preliminaries from convex geometry}}\label{subsec:convex-prelim} We next give a quick reminder on general norms in finite-dimensional vector spaces, and their equivalence with certain convex bodies. A few facts about tangent cones that we will need are also recorded. The only (possibly) novel content here is Lemma~\ref{prop:tan-con-boundary}. A nice textbook on convex geometry is \cite{convex-book}. Let $\| \cdot \|: \mathbb{R}^D \rightarrow \mathbb{R}$ denote an arbitrary vector space \textit{norm} on $\mathbb{R}^D$.
This means: \begin{itemize} \setlength\itemsep{0.4em} \item $\| \textbf{v} \| \geq 0$ for all $\textbf{v} \in \mathbb{R}^D$, with $\| \textbf{v} \| = 0$ if and only if $\textbf{v} = \textbf{0}$; \item $\| \lambda \textbf{v} \| = | \lambda | \| \textbf{v} \| $ for all $\lambda \in \mathbb{R}$ and $\textbf{v} \in \mathbb{R}^D$; \item $\| \textbf{u} + \textbf{v} \| \leq \| \textbf{u} \| + \| \textbf{v} \|$ for all $\textbf{u}, \textbf{v} \in \mathbb{R}^D$. \end{itemize} Recall that $\| \cdot \|$ is necessarily a continuous function on $\mathbb{R}^D$. Also standard is that all norms on $\mathbb{R}^D$ are \textit{equivalent}, that is, if $|\!|\!| \cdot |\!|\!|$ is another norm on $\mathbb{R}^D$, then there exist finite positive constants $c, C$ (depending only on $\| \cdot \|$, $|\!|\!| \cdot |\!|\!|$) such \nolinebreak that \begin{equation} \label{eq:equiv-norms} c |\!|\!| \textbf{v} |\!|\!| \leq \| \textbf{v} \| \leq C |\!|\!| \textbf{v}|\!|\!| \textup{ for all $\textbf{v} \in \mathbb{R}^D$}. \end{equation} We write $\mathcal{B} \subseteq \mathbb{R}^D$ for the \textit{unit ball} with respect to the norm $\| \cdot \|$, \begin{equation} \label{eq:unit-ball} \mathcal{B} = \{ \textbf{v} \in \mathbb{R}^D : \| \textbf{v} \| \leq 1 \}. \end{equation} Then, $\mathcal{B}$ is a \textit{convex body} in $\mathbb{R}^D$, that is, a compact convex subset of $\mathbb{R}^D$ with non-empty interior. Furthermore, the unit ball is \textit{origin-symmetric}, i.e., $\textbf{v} \in \mathcal{B}$ implies $-\textbf{v} \in \mathcal{B}$. Conversely, it is well-known that any origin-symmetric convex body in $\mathbb{R}^D$ occurs as the unit ball for some norm on $\mathbb{R}^D$. Thus, there is a one-to-one correspondence (see \cite[Chapter~2]{convex-book}): \begin{equation*} \{ \text{norms } \| \cdot \| \text{ on } \mathbb{R}^D \} \quad \longleftrightarrow \quad \{\text{origin-symmetric convex bodies } \mathcal{B} \subseteq \mathbb{R}^D \}.
\end{equation*} To emphasize this bijection, we shall let $\| \cdot \|_{\mathcal{B}}$ stand for the norm on $\mathbb{R}^D$ with unit ball $\mathcal{B}$ (except in the case of the $\ell_p$-norm, where we write $\| \cdot \|_p$), i.e., \begin{equation*} \| \cdot \|_{\mathcal{B}} \, \longleftrightarrow \, \mathcal{B}. \end{equation*} A few general topological remarks follow. Given any subset $\mathcal{Y} \subseteq \mathbb{R}^D$, the (relative) \textit{topological boundary} of $\mathcal{Y}$ is the closure of $\mathcal{Y}$ minus the relative interior of $\mathcal{Y}$, written $\partial \mathcal{Y} := \overline{\mathcal{Y}} \, \setminus \, \operatorname{relint}(\mathcal{Y})$. In the case of the unit ball \eqref{eq:unit-ball}, the boundary is the \textit{unit sphere}: \begin{align*} \partial \mathcal{B} = \{ \textbf{v} \in \mathbb{R}^D : \| \textbf{v} \|_{\mathcal{B}} = 1\}. \end{align*} Given any point $\textbf{y} \in \mathcal{Y}$, the \textit{tangent cone} to $\mathcal{Y}$ at $\textup{\textbf{y}}$ is defined to be \begin{align} \label{eq:def-tan-cone} TC_{\textbf{y}}(\mathcal{Y}) := \left\{ \!\textbf{d} \in \mathbb{R}^D \!: \exists ({\bf y}_{k})_{k=1}^{\infty} \subseteq \mathcal{Y}, (\tau_k)_{k=1}^{\infty} \subseteq \mathbb{R}_{>0} \textup{ s.t. } \tau_k \rightarrow 0, \frac{{\bf y}_k - {\bf y}}{\tau_k} \rightarrow \textbf{d}\! \right\}\!. \end{align} Note that, unlike abstract tangent spaces to manifolds, tangent cones to sets reside in $\mathbb{R}^D$ by definition. We now give a few quick examples of tangent cones. \begin{Example}[Tangent cones in familiar cases] For a submanifold $\mathcal{Y} \subseteq \mathbb{R}^D$ and any point $\textbf{y} \in \mathcal{Y}$, the tangent cone and linear embedded tangent space always agree: $TC_{\textbf{y}}(\mathcal{Y}) = L_{\textbf{y}}(T_{\textbf{y}}\mathcal{Y})$.
If $\mathcal{Y} = \{ (x_1, x_2) : x_2^2 = x_1^3 + x_1^2\} \subseteq \mathbb{R}^2$ is the nodal cubic plane curve, and $\textbf{y} = 0$ is the node, the tangent cone is the union of two lines: $TC_{\textbf{y}}(\mathcal{Y}) = \mathbb{R} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \cup \mathbb{R} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. If $\mathcal{Y} = \{(x_1, x_2) : x_2^2 = x_1^3\} \subseteq \mathbb{R}^2$ is the cuspidal cubic plane curve, and $\textbf{y} = 0$ is the cusp, the tangent cone is a half-line: $TC_{\textbf{y}}(\mathcal{Y}) = \mathbb{R}_{\geq 0} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Finally if $\mathcal{Y} = \{ \textbf{x} : \sum_{i=1}^D |x_i| \leq 1 \} \subseteq \mathbb{R}^D$ is the $\ell_1$ unit ball, the tangent cone $TC_{\textbf{y}}(\mathcal{Y})$ is all of $\mathbb{R}^D$, a half-space in $\mathbb{R}^D$ or a polyhedron in $\mathbb{R}^D$ depending on whether $\textbf{y}$ lies in the interior of the unit ball, the relative interior of a facet of the unit sphere or elsewhere on the boundary. \end{Example} We will use the following easy and well-known facts about tangent cones. \begin{Lemma} \label{lem:basic_tangent_cone} \begin{enumerate} \item \cite[Lem.~3.12]{nonlinear-optim-book} For all sets $\mathcal{Y} \subseteq \mathbb{R}^D$ and points $\textup{\textbf{y}} \in \mathcal{Y}$, we have $TC_{\textup{\textbf{y}}}(\mathcal{Y})$ is a closed cone. \item For all sets $\mathcal{Y} \subseteq \mathbb{R}^D$, points $\textup{\textbf{y}} \in \mathcal{Y}$ and linear subspaces $\mathcal{S} \subseteq \mathbb{R}^D$, we have $TC_{\textup{\textbf{y}}}(\mathcal{Y}) \cap \mathcal{S} = TC_{\textup{\textbf{y}}}(\mathcal{Y} \cap \mathcal{S})$. 
\item \cite[Lem.~3.13]{nonlinear-optim-book} For all convex sets $\mathcal{Y} \subseteq \mathbb{R}^D$ and points $\textup{\textbf{y}} \in \mathcal{Y}$, we have the explicit description (overline denotes closure in the Euclidean topology): \begin{equation} \label{eq:nice-tan-cone} TC_{\textup{\textbf{y}}}(\mathcal{Y}) \, = \, \overline{\mathbb{R}_{\geq 0}\left(\mathcal{Y} - \textup{\textbf{y}}\right)} \, := \, \overline{\left\{ \beta (\widetilde{\textup{\textbf{y}}} - \textup{\textbf{y}}) \in \mathbb{R}^D: \, \beta \in \mathbb{R}_{>0}, \widetilde{\textup{\textbf{y}}} \in \mathcal{Y} \right\}}. \end{equation} In particular, if $\mathcal{Y}$ is convex (respectively, convex with non-empty interior), then $TC_{\textup{\textbf{y}}}(\mathcal{Y})$ is convex (respectively, convex with non-empty interior). \end{enumerate} \end{Lemma} In light of the third item, we know all the possibilities in the plane for the tangent cone to convex sets with non-empty interior. \begin{Example} \label{rem:coneR2} The closed convex cones in $\mathbb{R}^2$ with non-empty interior are precisely $\mathbb{R}^2$, closed half-spaces and the \textit{conical hull} of two linearly independent vectors: \begin{align*} \operatorname{coni}\{\textbf{d}_1, \textbf{d}_2\} := \{\beta_1 \textbf{d}_1 + \beta_2 \textbf{d}_2 : \beta_1, \beta_2 \in \mathbb{R}_{\geq 0} \} \subseteq \mathbb{R}^2, \quad \textbf{d}_1, \textbf{d}_2 \in \mathbb{R}^2. \end{align*} In the latter case, the pair $\textbf{d}_1, \textbf{d}_2$ is unique up to positive scales, and one says that they generate the cone's \textit{extremal rays}, $\operatorname{coni}\{\textbf{d}_1\}$ and $\operatorname{coni}\{\textbf{d}_2\}$. \end{Example} Finally, for technical purposes of the ``tilt construction" developed in the next section, we need to observe that the topological boundary and tangent cone operations commute, at least in the case of our interest. 
\begin{Lemma} \label{prop:tan-con-boundary} For $\mathcal{B} \subseteq \mathbb{R}^D$ the unit ball of a norm $\| \cdot \|_{\mathcal{B}}$ and a boundary point $\textup{\textbf{y}} \in \partial \mathcal{B}$, the boundary of the tangent cone is the tangent cone of the boundary: \begin{equation} \label{eq:annoying} \partial \left( TC_{\textup{\textbf{y}}}(\mathcal{B}) \right) \, = \, TC_{\textup{\textbf{y}}}(\partial \mathcal{B}). \end{equation} \end{Lemma} We include a proof of Lemma~\ref{prop:tan-con-boundary} in Appendix~\ref{app:tan-con-boundary}, since we could not readily find this statement in the literature. \subsection[\textbf{Tilt construction}]{\textbf{Tilt construction}} \label{subsec:tilt-const} In this section, we present a construction that relates the second-order geometry of a submanifold $\mathcal{M} \subseteq \mathbb{R}^D$ around a point $\textbf{p} \in \mathcal{M}$ to tangent cones to the unit sphere $\partial \mathcal{B} \subseteq \mathbb{R}^D$ of a norm $\| \cdot \|_{\mathcal{B}}$. We name this construction the \textit{tilt function}, and denote it by ${\operatorname{tilt}_{\M, \B, \p}}$. Though not apparent initially, the relevance is that this function is required to define the limiting differential operator for point-cloud Laplacians formed by sampling $\mathcal{M}$ and computing affinities using $\| \cdot \|_{\mathcal{B}}$. Specifically, it appears in the first-order derivative term \nolinebreak in \nolinebreak Eq.~\eqref{eq:def-DeltaMB}. 
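As a quick sanity check, which is our addition and not part of the original text, one can verify directly from the defining relation \eqref{eq:tilt-def} below that the tilt vanishes identically for the Euclidean norm, consistent with the limiting operator reducing to the Laplace-Beltrami operator in that case:

```latex
% Euclidean case: tilt is identically zero.
Let $\mathbf{a} = L_{\mathbf{p}}(\widehat{\mathbf{s}})$ and
$\mathbf{b} = \tfrac{1}{2} Q_{\mathbf{p}}(\widehat{\mathbf{s}})$, and let
$\mathcal{B}$ be the Euclidean unit ball. By Eq.~\eqref{eq:isometric},
$\|\mathbf{a}\|_2 = \|\widehat{\mathbf{s}}\|_2 = 1$, and the tangent cone to the
Euclidean unit sphere at $\mathbf{a}$ is the hyperplane $\mathbf{a}^{\perp}$.
Since $\langle \mathbf{a}, \mathbf{b} \rangle = 0$ by Eq.~\eqref{eq:perp},
\[
  \Big\langle \mathbf{a},\;
      \frac{\mathbf{b}}{\|\mathbf{a}\|_{2}^{2}} + \eta\,\mathbf{a} \Big\rangle
  \;=\; \eta\,\|\mathbf{a}\|_{2}^{2} \;=\; \eta,
\]
so membership in $\mathbf{a}^{\perp}$ forces $\eta = 0$.
```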
\begin{figure}[ht] \def0{0} \def-2{-2} \def3{3} \def1.5{1.5} \def105{105} \def\Cx+\Rx*cos(\angle){0+3*cos(105)} \def\Cy+\Ry*sin(\angle){-2+1.5*sin(105)} \def2{2} \begin{tikzpicture}[scale=1] \clip(-4.7,-2.6) rectangle (0.5,1); \draw [name path=ellipse,fill=blue!20,draw=none] ({0},{-2}) ellipse ({3} and {1.5}); \coordinate (p) at ({\Cx+\Rx*cos(\angle)}, {\Cy+\Ry*sin(\angle)}); \coordinate (avector) at ({\Cx+\Rx*cos(\angle) + 3*cos(105)},{\Cy+\Ry*sin(\angle) + 1.5*sin(105)}); \coordinate (bvector) at ({\Cx+\Rx*cos(\angle) - 1.5*1.5*sin(105)}, {\Cy+\Ry*sin(\angle) + 1.5*3*cos(105)}); \draw[-latex,line width=1.5pt] (p) -- node[sloped, anchor=center, above] {${\bf a}$} (avector); \draw[-latex,line width=1.5pt] (p) -- node[sloped, anchor=center, above] {${\bf b}$} (bvector); \coordinate (tangentstart) at ({\Cx+\Rx*cos(\angle) - 2*3*sin(105)}, {\Cy+\Ry*sin(\angle) + 2*1.5*cos(105)}); \coordinate (tangentend) at ({\Cx+\Rx*cos(\angle) + 2*3*sin(105)}, {\Cy+\Ry*sin(\angle) - 2*1.5*cos(105)}); \draw[color=red,line width=1pt, name path={tangent}] (tangentstart) -- (tangentend); \path[name path={tilt}] (bvector) -- ({\Cx+\Rx*cos(\angle) - 1.5*1.5*sin(105) + 100*3*cos(105)}, {\Cy+\Ry*sin(\angle) +1.5*3*cos(105) + 100*1.5*sin(105)}); \draw [name intersections={of=tilt and tangent, by={intersect}}] (bvector) -- node[sloped,left,rotate=90]{{$ \text{tilt}(\widehat{\textbf{s}}) \Bigg\{$}} (intersect); \end{tikzpicture} \quad \begin{tikzpicture}[scale=1] \clip(-5.3,2) rectangle (0.9,6); \fill[color=blue!20] (-1,-6) .. controls (-4,2) .. (-2,4) .. controls (2,2) .. 
(-1,-6) -- (-3,-4); \draw[-latex,line width=1.5pt] (-2,4) -- node[sloped, anchor=center, above] {${\bf a}$} (-3,6); \draw[-latex,line width=1.5pt] (-2,4) -- node[sloped, anchor=center, above] {$\textbf{b}$} (-4,3); \draw[color=red,line width=1pt] (-2,4) -- (-4,2.1); \draw[color=red,line width=1pt] (-2,4) -- (1,2.5); \draw (-4,3) -- node[sloped,left,rotate=90] {$ \text{tilt}(\widehat{\textbf{s}}) \bigg\{$}(-3.7,2.4); \end{tikzpicture} \qquad \caption{\textit{Tilt construction}. These diagrams take place in the 2D linear subspace $\mathcal{S} := \textup{Span}\{\textbf{a}, \textbf{b}\} \subseteq \mathbb{R}^D$, where $\textbf{a}:=L_{\textbf{p}}(\widehat{\textbf{s}})$ and $\textbf{b}:=\tfrac12Q_{\textbf{p}}(\widehat{\textbf{s}})$ are tangent and normal vectors to $\mathcal{M}$ at $\textbf{p}$ respectively. Blue indicates $\widetilde{\mathcal{B}} = \mathcal{B} \cap \mathcal{S}$ (2D linear section of the unit ball $\mathcal{B}$). Red indicates the tangents to $\partial \widetilde{\mathcal{B}}$ at $\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}$. By definition, ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\bf s}})$ equals the signed $\ell_2$-length of the braced line segment. (left) An example where $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \widetilde{\mathcal{B}})$ consists of one well-defined tangent line. Here tilt is positive; (right) An example where $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \widetilde{\mathcal{B}})$ consists of two tangent rays due to a singularity of $\partial \widetilde{\mathcal{B}}$ at $\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}$. Here tilt is negative. }\label{fig:understand-tilt}
\end{figure} \begin{Propdef}[\normalfont Tilt function] \label{def:tilt-const} Let $\mathcal{M} \subseteq \mathbb{R}^D$ be a compact embedded Riemannian submanifold, let $\textup{\textbf{p}} \in \mathcal{M}$ be a point, and let $\widehat{\textup{\textbf{s}}} \in T_{\textup{\textbf{p}}}\mathcal{M}$ be a tangent vector to $\mathcal{M}$ at $\textup{\textbf{p}}$ with $\| \widehat{\textup{\textbf{s}}} \|_{2}=1$. Following Eq.~\eqref{eq:expp-taylor}, consider the differential of the exponential map at $\textup{\textbf{p}}$ and the second fundamental form at $\textup{\textbf{p}}$ both evaluated at $\widehat{\textup{\textbf{s}}}$, and write \begin{align*} \textup{\textbf{a}} := L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}), \qquad \textup{\textbf{b}} := \frac{1}{2} Q_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}). \end{align*} Further, let $\| \cdot \|_{\mathcal{B}}$ denote a norm on $\mathbb{R}^D$ with unit ball $\mathcal{B} \subseteq \mathbb{R}^D$ and unit sphere $\partial \mathcal{B} \subseteq \mathbb{R}^D$. Also write $TC$ to denote tangent cones as defined by Eq.~\eqref{eq:def-tan-cone}. Then, there exists a unique scalar $\eta \in \mathbb{R}$ such that \begin{equation} \label{eq:tilt-def} \frac{\textup{\textbf{b}}}{\|\textup{\textbf{a}}\|_{\mathcal{B}}^2} + \eta \textup{\textbf{a}} \, \in \, TC_{\textup{\textbf{a}}/\|\textup{\textbf{a}}\|_\mathcal{B}}(\partial \mathcal{B}). \end{equation} We define the tilt function by \begin{align} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\bf s}}) := \eta. \end{align} Hence ${\operatorname{tilt}_{\M, \B, \p}}$ is a well-defined function from Euclidean-normalized tangent vectors to $\mathcal{M}$ at $\textup{\textbf{p}}$ into the real numbers. 
\end{Propdef} \begin{Remark} In the course of proving Proposition/Definition~\ref{def:tilt-const} below, we shall show that the tilt function ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$ only depends on the norm $\| \cdot \|_{\mathcal{B}}$ through the following (typically) two-dimensional central slice of the unit ball: \begin{equation*} \operatorname{Span}\{L_{\textbf{p}}(\widehat{\textbf{s}}), Q_{\textbf{p}}(\widehat{\textbf{s}}) \} \cap \mathcal{B}. \end{equation*} This is a two-dimensional origin-symmetric convex body (unless $Q_{\textbf{p}}(\widehat{\textbf{s}})=0$, in which case it is an origin-symmetric line segment). We make two remarks. First, as a consequence, we can visualize the tilt function using two-dimensional figures on the page (see Figure~\ref{fig:understand-tilt}). Second, in general, the central planar sections of the unit ball of a norm can vary significantly, and indeed qualitatively, across different slices. For example, for the $\ell_1$-ball in $\mathbb{R}^3$, there is not a unique combinatorial type for a central planar section: instead either a quadrilateral or a hexagon can occur depending on the specific \nolinebreak slice. \end{Remark} \begin{proof} In the defining equation~\eqref{eq:tilt-def} for ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$, note that it is equivalent to require that the left-hand side lie in the linear slice of the tangent cone: \begin{align} \label{eq:simpler-tilt} &\mathcal{S} \cap TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \mathcal{B}) \\[0.2em] &\textup{where } \mathcal{S} := \operatorname{Span}\{L_{\textup{\textbf{p}}}(\widehat{\textbf{s}}), Q_{\textup{\textbf{p}}}(\widehat{\textbf{s}})\} \subseteq \mathbb{R}^D, \nonumber \end{align} since membership in the linear space $\mathcal{S}$ is guaranteed by definition. We shall rewrite the set \eqref{eq:simpler-tilt} using basic properties relating tangent cones, boundaries, and intersection by linear spaces. 
Firstly, we have \begin{equation} \label{eq:tilt-proof-1} \mathcal{S} \cap TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \mathcal{B}) = TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}\left(\mathcal{S} \cap \partial \mathcal{B} \right) \end{equation} by Lemma~\ref{lem:basic_tangent_cone}, item~2. Next, let \begin{equation*} \widetilde{\mathcal{B}} := \mathcal{S} \cap \mathcal{B} \subseteq \mathcal{S}, \end{equation*} and note this is the unit ball of the restriction of the norm $\| \cdot \|_{\mathcal{B}}$ to the subspace $\mathcal{S}$, which is a norm in its own right on $\mathcal{S}$ (in our notation, $\| \cdot \|_{\widetilde{\mathcal{B}}}$). Then, \begin{equation*} \mathcal{S} \cap \partial \mathcal{B} = \{ \textbf{t} \in \mathcal{S} : \| \textbf{t} \|_{\mathcal{B}} = 1 \} = \{ \textbf{t} \in \mathcal{S} : \| \textbf{t} \|_{\widetilde{\mathcal{B}}} = 1 \} = \partial \widetilde{\mathcal{B}}, \end{equation*} from which it follows \begin{equation} \label{eq:tilt-proof-2} TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\mathcal{S} \cap \partial \mathcal{B}) = TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \widetilde{\mathcal{B}}). \end{equation} Now by Lemma~\ref{prop:tan-con-boundary}, \begin{equation} \label{eq:tilt-proof-3} TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \widetilde{\mathcal{B}}) = \partial TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}). \end{equation} Combining Eq.~\eqref{eq:tilt-proof-1}, \eqref{eq:tilt-proof-2} and \eqref{eq:tilt-proof-3}, we get that \begin{equation} \label{eq:tilt-proof-4} \mathcal{S} \cap TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\partial \mathcal{B}) = \partial TC_{\textbf{a} / \| \textbf{a} \|_{\mathcal{B}}}(\widetilde{\mathcal{B}}). \end{equation} The upshot is that in the defining equation for ${\operatorname{tilt}_{\M, \B, \p}}$ it is equivalent to require membership in the right-hand side of \eqref{eq:tilt-proof-4}. 
We shall now obtain a more explicit description of the set \eqref{eq:tilt-proof-4}. Firstly, note that $\textbf{a} = L_{\textbf{p}}(\widehat{\textbf{s}}) \neq 0$, since $\| L_{\textbf{p}}(\widehat{\textbf{s}}) \|_{2} = \|\widehat{\textbf{s}}\|_2 = 1$ (Eq.~\eqref{eq:isometric}). If $\textbf{b} = \frac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}) =0$, then the subspace $\mathcal{S}$ is one-dimensional. In this case, the existence and uniqueness of $\eta$ in Eq.~\eqref{eq:tilt-def} is clear: $\widetilde{\mathcal{B}}$ is a line segment, $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}})$ is a ray (half-line), and its boundary $\partial TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}})$ is the origin. Thus, in light of Eq.~\eqref{eq:tilt-proof-4}, we must take $\eta = 0$, so that ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}}) = 0$ when the second fundamental form vanishes. Therefore, assume $\textbf{b} \neq 0$. Since $\langle \textbf{a}, \textbf{b} \rangle = 0$ (Eq.~\eqref{eq:perp}), it follows that $\mathcal{S} \cong \mathbb{R}^2$ is two-dimensional and $\widetilde{\mathcal{B}}$ is a convex body in $\mathbb{R}^2$. By Lemma~\ref{lem:basic_tangent_cone}, items 1 and 3, we know the tangent cone $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}})$ is a closed convex cone in $\mathbb{R}^2$ with non-empty interior. Then by Example~\ref{rem:coneR2}, the tangent cone is either all of $\mathbb{R}^2$, a half-plane in $\mathbb{R}^2$, or conically spanned by two linearly independent vectors. We claim $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}) \neq \mathbb{R}^2$. Indeed since $\widetilde{\mathcal{B}}$ is convex and $\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}$ lies in the boundary of $\widetilde{\mathcal{B}}$, the supporting hyperplane \nolinebreak theorem \nolinebreak implies: \begin{align*} \exists \, \textbf{v} \in \mathcal{S} \setminus \! \{0\} \,\, \exists \, \gamma \in \mathbb{R} \, \textup{ s.t. 
} \langle \textbf{v}, \textbf{a}/\| \textbf{a} \|_{\mathcal{B}} \rangle = \gamma \,\, \wedge \,\, \forall \,\textbf{u} \in \widetilde{\mathcal{B}}, \, \langle\textbf{v}, \textbf{u} \rangle \geq \gamma. \end{align*} Combining this with Eq.~\eqref{eq:nice-tan-cone}, it follows that \begin{align*} TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}) = \overline{\mathbb{R}_{\geq 0}(\widetilde{\mathcal{B}} - \textbf{a}/\|\textbf{a}\|_{\mathcal{B}})} \subseteq \{\textbf{u} \in \mathcal{S} : \langle \textbf{v}, \textbf{u} \rangle \geq 0 \}. \end{align*} In particular, $TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}) \neq \mathbb{R}^2$, so by Example~\ref{rem:coneR2} the tangent cone is either a half-plane or is conically spanned by two linearly independent vectors. For now, assume the latter case: there exist linearly independent vectors $\textbf{d}_1, \textbf{d}_2 \in \mathcal{S} \cong \mathbb{R}^2$ such that \begin{equation} \label{eq:tilt-proof-5} TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}) = \operatorname{coni}\{\textbf{d}_1, \textbf{d}_2\} \subseteq \mathbb{R}^2. \end{equation} The set \eqref{eq:tilt-proof-3} is thus the union of two rays: \begin{equation} \label{eq:tilt-proof-membership} \partial TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}}) = \mathbb{R}_{\geq 0} \textbf{d}_1 \, \cup \, \mathbb{R}_{\geq 0} \textbf{d}_2. \end{equation} We shall now finish by proving the existence and uniqueness of $\eta \in \mathbb{R}$ such that \begin{equation} \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2} + \eta \textbf{a} \,\, \in \,\, \mathbb{R}_{\geq 0} \textbf{d}_1 \, \cup \, \mathbb{R}_{\geq 0} \textbf{d}_2. 
\end{equation} To this end, first note that \begin{align*} -\textbf{a} \, = \, 0 - \|\textbf{a}\|_{\mathcal{B}}\left( \textbf{a} / \| \textbf{a} \|_{\mathcal{B}}\right) \, &\in \, \operatorname{relint}( \overline{\mathbb{R}_{\geq 0} (\widetilde{\mathcal{B}} - \textbf{a}/\|\textbf{a}\|_{\mathcal{B}})}) \\[0.2em] &= \, \operatorname{relint}(TC_{\textbf{a}/\|\textbf{a}\|_{\mathcal{B}}}(\widetilde{\mathcal{B}})) \\[0.2em] &= \, \mathbb{R}_{>0} \textbf{d}_1 + \mathbb{R}_{>0} \textbf{d}_2, \end{align*} where the penultimate equality is again by Eq.~\eqref{eq:nice-tan-cone} and the last equality is by Eq.~\eqref{eq:tilt-proof-5}. Thus there exist positive scalars $\beta_1, \beta_2 \in \mathbb{R}_{>0}$ such that $-\textbf{a} = \beta_1 \textbf{d}_1 + \beta_2 \textbf{d}_2$. Substituting this into $\langle \textbf{a}, \textbf{b} \rangle = 0$ (Eq.~\eqref{eq:perp}), we get \begin{equation} \label{eq:tilt-proof-6} \beta_1 \langle \textbf{d}_1, \textbf{b} \rangle + \beta_2 \langle \textbf{d}_2, \textbf{b} \rangle = 0. \end{equation} Since $\textbf{d}_1, \textbf{d}_2$ form a basis for $\mathcal{S}$ and $\textbf{b} \in \mathcal{S}$ and we are presently assuming $\textbf{b}\neq 0$, it cannot be that $\langle \textbf{d}_1, \textbf{b} \rangle = \langle \textbf{d}_2, \textbf{b} \rangle = 0$. Instead, Eq.~\eqref{eq:tilt-proof-6} combined with $\beta_1, \beta_2 >0$ imply that exactly one of the inner products $\langle \textbf{d}_1, \textbf{b} \rangle, \langle \textbf{d}_2, \textbf{b} \rangle$ is strictly positive while the other is strictly negative. Relabeling if necessary, we can assume that $\langle \textbf{d}_1, \textbf{b} \rangle > 0 > \langle \textbf{d}_2, \textbf{b} \rangle$. With this in hand, let us examine the membership \eqref{eq:tilt-proof-membership}. Notice that for each $\eta \in \mathbb{R}$, it holds that \begin{equation} \label{eq:tilt-proof-7} \frac{\textbf{b}}{\| \textbf{a} \|_{\mathcal{B}}^2} + \eta \textbf{a} \, \notin \, \mathbb{R}_{\geq 0} \textbf{d}_2. 
\end{equation} This is by taking inner products with $\textbf{b}$: all vectors on the right-hand side of \eqref{eq:tilt-proof-7} have a non-positive inner product with $\textbf{b}$ using $\langle \textbf{d}_2, \textbf{b} \rangle < 0$. Meanwhile on the left-hand side of \eqref{eq:tilt-proof-7}, we have $\langle \textbf{b}, \frac{\textbf{b}}{\| \textbf{a} \|_{\mathcal{B}}^2} + \eta \textbf{a} \rangle = \frac{\|\textbf{b}\|_2^2}{\| \textbf{a} \|_{\mathcal{B}}^2} > 0$ (the equality is from $\langle \textbf{a}, \textbf{b} \rangle = 0$ and the strict inequality is from the assumption $\textbf{b} \neq 0$). On the other hand, there do exist scalars $\eta \in \mathbb{R}$ and $\beta \in \mathbb{R}_{\geq 0}$ satisfying \begin{align} \label{eq:tilt-proof-8} \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2} + \eta \textbf{a} =\beta \textbf{d}_1. \end{align} Indeed using that $\textbf{a}, \textbf{b}$ are an orthogonal basis for $\mathcal{S}$, $\| \textbf{a} \|_2^2 = 1$ and $\textbf{d}_1 \in \mathcal{S}$, \nolinebreak note \begin{align}\label{eq:expand-d1} \textbf{d}_1 = \langle \textbf{d}_1, \textbf{a} \rangle \textbf{a} + \frac{\langle \textbf{d}_1, \textbf{b} \rangle}{\| \textbf{b} \|_2^2} \textbf{b}. \end{align} Then substituting Eq.~\eqref{eq:expand-d1} into \eqref{eq:tilt-proof-8} and equating coefficients, we compute the following \textit{unique} solution to Eq.~\eqref{eq:tilt-proof-8}: \begin{align} & \beta = \|\textbf{b}\|_2^2 \Big{/} \! \left( \|\textbf{a}\|_{\mathcal{B}}^2 \langle \textbf{d}_1, \textbf{b} \rangle \right), \nonumber \\[1pt] & \eta = \| \textbf{b} \|_2^2 \langle \textbf{d}_1, \textbf{a} \rangle \Big{/} \!\! \left( \| \textbf{a} \|_{\mathcal{B}}^2 \langle \textbf{d}_1, \textbf{b} \rangle \right). \label{eq:crazy-eta} \end{align} This completes the case when $TC_{\textbf{a}/\| \textbf{a} \|_{\mathcal{B}}}(\widetilde{\mathcal{B}})$ is conically spanned by independent vectors. 
As for the third case afforded by Example~\ref{rem:coneR2}, when the tangent cone is a half-plane, let the boundary of the half-plane be spanned by $\textbf{d}_1 \in \mathcal{S} \cong \mathbb{R}^2$. Again we arrive at Eq.~\eqref{eq:tilt-proof-8} but without the constraint that $\beta \geq 0$. Solving as before, $\eta$ is uniquely determined and given by Eq.~\eqref{eq:crazy-eta}. This completes the proof that $\eta$ exists and is unique. In sum: if $Q_{\textbf{p}}(\widehat{\textbf{s}})=0$ then ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}}) =0$, and otherwise ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$ is given by Eq.~\eqref{eq:crazy-eta}. \hfill \qed \end{proof} In the next statement, we assume the local continuous differentiability of the norm to get a more explicit expression for the tilt function. The proof is in Appendix~\ref{app:simple-title-C1}. \begin{Proposition}[\normalfont{Simplifications for tilt in the case of a $C^1$-norm}] \label{prop:simple-title-C1} Regard the norm $\| \cdot \|_{\mathcal{B}}$ as a function from $\mathbb{R}^D$ to $\mathbb{R}$. \begin{enumerate} \item Let $\widehat{\textup{\textbf{a}}}$ be a point in $\mathbb{R}^D$ with $\| \widehat{\textup{\textbf{a}}} \|_{\mathcal{B}} = 1$. If $\| \cdot \|_{\mathcal{B}}$ is continuously differentiable in a neighborhood of $\widehat{\textup{\textbf{a}}}$, then the tangent cone to the $\| \cdot \|_{\mathcal{B}}$-unit sphere at $\widehat{\textup{\textbf{a}}}$ is the hyperplane: \begin{align}\label{eq:simpler-TC} TC_{\widehat{\textup{\textbf{a}}}}(\partial \mathcal{B}) = \left\{ \textup{\textbf{v}} \in \mathbb{R}^D : \left\langle \textup{\textbf{v}}, \, \operatorname{grad}\| \cdot \|_{\mathcal{B}} (\widehat{\textup{\textbf{a}}})\right\rangle = 0 \right\}. 
\end{align} \item Assume the setup of Proposition/Definition~\ref{def:tilt-const}, and further that $\| \cdot \|_{\mathcal{B}}$ is continuously differentiable in a neighborhood of the point $L_{\textup{\bf p}}({\widehat{\textup{\bf s}}})$. Then, the tilt function equals \begin{align} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\bf s}}) = \frac{-\left\langle \operatorname{grad} \| \cdot \|_{\mathcal{B}} (L_{\textup{\bf p}}(\widehat{\textup{\bf s}})), \, \tfrac{1}{2} Q_{\textup{\bf p}}(\widehat{\textup{\bf s}}) \right\rangle}{\left\langle \operatorname{grad} \| \cdot \|_{\mathcal{B}} (L_{\textup{\bf p}}(\widehat{\textup{\bf s}})), \, L_{\textup{\bf p}}(\widehat{\textup{\bf s}}) \right\rangle} \, \frac{1}{\left\| L_{\textup{\bf p}}(\widehat{\textup{\bf s}}) \right\|_{\mathcal{B}}^2}. \end{align} \end{enumerate} \end{Proposition} \subsection[\textbf{Laplacian-like operator and main theorem statement}]{\textbf{Laplacian-like operator and main theorem statement}} \label{subsec:thm-statement} Here we state our main result in Theorem~\ref{thm:limit}. We first need to define both sides of Eq.~\eqref{eq:main-limit}, in particular the differential operator $\Delta_{\mathcal{M}, \mathcal{B}}$ (Definition~\ref{def:Laplace-like}). For simplicity, we consider only the standard \textup{Gaussian kernel} with width $\sigma$: \begin{align*} K_{\sigma} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{> 0} \quad \textup{ defined by } \quad K_{\sigma}(t) = \textup{exp}(-t^2/\sigma^2). \end{align*} \begin{Definition}[\normalfont Point-cloud Laplacian with respect to an arbitrary norm] \label{discrete_laplacian_B} Let $\| \cdot \|_{\mathcal{B}}$ be a norm on $\mathbb{R}^D$. Let $\mathcal{X} = \{ {\bf x}_1, \ldots, {\bf x}_n \} \subseteq \mathbb{R}^D$ be a set of points. 
Then we define the \textit{point-cloud Laplacian} computed using the norm $\| \cdot \|_{\mathcal{B}}$ and the point set $\mathcal{X} \subseteq \mathbb{R}^D$ to act on functions whose domains contain $\mathcal{X}$ as follows: for each $f : \mathcal{Y} \rightarrow \mathbb{R}$ where $\mathcal{X} \subseteq \mathcal{Y}$, define $\mathcal{L}_{n, \mathcal{B}}f : \mathcal{Y} \rightarrow \mathbb{R}$ by \begin{equation} \label{eq:discrete-Lap-1} \mathcal{L}_{n, {\mathcal{B}}} f(\textup{\bf p}) := \frac{1}{n} \sum_{i=1}^n K_{\sigma_n}(\| {\bf x}_i - {\bf p} \|_{\mathcal{B}})(f({\bf x}_i) - f({\bf p})). \end{equation} \end{Definition} Compare Eq.~\eqref{eq:discrete-Lap} with Eq.~\eqref{eq:discrete-Lap-1}. In what follows, $d\textup{\textbf{s}}$ is the Lebesgue measure on $T_{\textup{\textbf{p}}}\mathcal{M}$, and $d\widehat{\textup{\textbf{s}}}$ is the uniform measure on the sphere $\{\widehat{\textup{\textbf{s}}} \in T_{\textup{\textbf{p}}}\mathcal{M} : \|\widehat{\textup{\textbf{s}}}\|_2 =1 \}$. 
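Eq.~\eqref{eq:discrete-Lap-1} is straightforward to evaluate on data. The following is a minimal numpy sketch of that formula; the function name, the two-point data set, and the weighted $\ell_1$-norm in the usage lines are our own illustrative choices, not notation from the text.

```python
import numpy as np

def point_cloud_laplacian(f_vals, f_p, X, p, sigma, norm):
    # L_{n,B} f(p) = (1/n) * sum_i K_sigma(||x_i - p||_B) * (f(x_i) - f(p)),
    # with the Gaussian kernel K_sigma(t) = exp(-t^2 / sigma^2).
    dists = np.array([norm(x - p) for x in X])
    weights = np.exp(-(dists / sigma) ** 2)
    return np.mean(weights * (f_vals - f_p))

# Usage: two sample points, a linear f, and a weighted l1-norm ||x||_{w,1}.
w = np.array([1.0, 2.0])
norm_w1 = lambda x: float(w @ np.abs(x))
X = np.array([[1.0, 0.0], [0.0, 1.0]])
f = lambda x: x[0] + 2.0 * x[1]
val = point_cloud_laplacian(np.array([f(x) for x in X]), f(np.zeros(2)),
                            X, np.zeros(2), 1.0, norm_w1)
```

Any norm can be passed in as a callable; the choice only enters through the kernel weights.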
\begin{Definition}[\normalfont Laplacian-like differential operator with respect to an arbitrary norm] \label{def:Laplace-like} The \textit{Laplacian-like operator} on a submanifold $\mathcal{M} \subseteq \mathbb{R}^D$ with respect to a norm $\| \cdot \|_{\mathcal{B}}$ is defined to act on functions $f : \mathcal{M} \rightarrow \mathbb{R}$ according to \begin{align} \label{eq:def-DeltaMB} (\Delta_{\mathcal{M}, \mathcal{B}} f)(\textbf{p}) := &\left\langle (\operatorname{hess} \widetilde{f})(0) , \, \tfrac{1}{2} \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| L_{\textup{\textbf{p}}}(\textup{\textbf{s}}) \|_{\mathcal{B}} \leq 1\}} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d \textup{\textbf{s}} \right\rangle \nonumber \\[2pt] & +\left\langle (\operatorname{grad} \widetilde{f})(0), \, \int_{\{\widehat{\textup{\textbf{s}}} \in T_{\textup{\textbf{p}}}\mathcal{M} : \| \widehat{\textup{\textbf{s}}} \|_2 =1 \}} \widehat{\textup{\textbf{s}}} \, \|L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}})\|_{\mathcal{B}}^{-d} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\textbf{s}}}) \, d\widehat{\textup{\textbf{s}}} \right\rangle \end{align} where $\widetilde{f} = f \circ \textup{exp}_{\textup{\textbf{p}}} : T_{{\bf p}}\mathcal{M} \rightarrow \mathbb{R}$ and $L_{\textup{\textbf{p}}} = D\textup{exp}_{\textup{\textbf{p}}}(0): T_{\textup{\textbf{p}}}\mathcal{M} \rightarrow \mathbb{R}^D$. \end{Definition} \begin{Remark}[\normalfont{Extrinsic interpretation of the integration domains and integrands in Eq.~\eqref{eq:def-DeltaMB}}] \label{rem:extrinsic-nice} Both domains of integration in Definition~\ref{def:Laplace-like} are subsets of the abstract tangent space $T_{\textbf{p}}\mathcal{M}$, since we have written the integrals in parameterized form. 
Using the isometry $L_{\textbf{p}}$, we can identify the first domain with the $d$-dimensional intersection of the embedded tangent space and the unit ball: \begin{align*} L_{\textbf{p}}(T_{\textbf{p}}\mathcal{M}) \cap \mathcal{B}, \end{align*} and the first integral in Eq.~\eqref{eq:def-DeltaMB} with the second-moment of this convex body: \begin{align*} \int_{\textbf{t} \in L_{\textbf{p}}(T_{\textbf{p}}\mathcal{M}) \cap \mathcal{B}} \textbf{t} \textbf{t}^{\top} d\textbf{t}. \end{align*} Meanwhile, under the mapping $\textbf{s} \mapsto L_{\textbf{p}}(\textbf{s})/\|L_{\textbf{p}}(\textbf{s})\|_{\mathcal{B}}$, the second domain of integration in Definition~\ref{def:Laplace-like} identifies with the $(d-1)$-dimensional intersection of the embedded tangent space with the unit sphere: \begin{align*} L_{\textbf{p}}(T_{\textbf{p}}\mathcal{M}) \cap \partial \mathcal{B}, \end{align*} and the second integral in Eq.~\eqref{eq:def-DeltaMB} is a weighted first-moment of this boundary. \end{Remark} \begin{Theorem}[\normalfont Main result: Convergence of the point-cloud Laplacian based on an arbitrary norm] \label{thm:limit} Let $\| \cdot \|_{\mathcal{B}}$ be a norm on $\mathbb{R}^D$ with unit ball $\mathcal{B}$. Let $\mathcal{M}$ be a compact $d$-dimensional embedded Riemannian submanifold of $\mathbb{R}^D$. Let ${\bf x}_1, \ldots, {\bf x}_n$ be i.i.d. draws from the uniform measure on $\mathcal{M}$. Fix any constant $\alpha>0$, and set $\sigma_n = n^{-1/(2d + 4 + \alpha)}$ and $c_n := \Gamma(\frac{d+4}{2}) \sigma_n^{d+2}$. Then given a three-times continuously differentiable function $f: \mathcal{M} \rightarrow \mathbb{R}$ and a point $\textup{\textbf{p}} \in \mathcal{M}$, we have the following almost sure convergence: \begin{align} \label{eq:main-limit} \frac{1}{c_n} \mathcal{L}_{n, \mathcal{B}}f(\textup{\textbf{p}}) \xrightarrow{\,\,\,\,\, \textup{a.s.} \,\,\,\,} \frac{1}{\textup{vol}(\mathcal{M})} \Delta_{\mathcal{M}, \mathcal{B}} f(\textup{\textbf{p}}). 
\end{align} \end{Theorem} \noindent Theorem~\ref{thm:limit} is proved in Section~\ref{sec:main-proof}. \begin{Remark}[Comparing $\Delta_{\mathcal{M}, \mathcal{B}}$ and $\Delta_{\mathcal{M}}$] Note two key features that distinguish the continuum limit for general norms from the Laplace-Beltrami operator: \begin{itemize}\setlength\itemsep{0.5em} \item The operator $\Delta_{\mathcal{M}, \mathcal{B}}$ is typically \textbf{extrinsic}. By Remark~\ref{rem:extrinsic-nice}, both terms in \eqref{eq:def-DeltaMB} vary with the orientation of the embedded tangent space $L_{\textbf{p}}(T_{\textbf{p}}\mathcal{M}) \subseteq \mathbb{R}^D$ in relation to the ball $\mathcal{B} \subseteq \mathbb{R}^D$. (For a concrete example, see Section~\ref{subsec:circle-example}.) \item The operator $\Delta_{\mathcal{M},\mathcal{B}}$ has a \textbf{first-order derivative term}. \end{itemize} The Euclidean norm is special on both counts. However, despite the added complexity of general norms, there can be \textbf{practical advantages} to using $\Delta_{\mathcal{M}, \mathcal{B}}$ over $\Delta_{\mathcal{M}}$, at least for a well-chosen norm when reducing the dimension of certain data sets. We illustrate this numerically in Section~\ref{sec:experiments}. \end{Remark} We finish this section with an easy extension of Theorem~\ref{thm:limit} to the case where the sampling of $\mathcal{M}$ is non-uniform. \begin{Theorem}[\normalfont{Convergence of the point-cloud Laplacian based on an arbitrary norm with non-uniform sampling}] \label{thm:nonuniform} Assume the same setup as Theorem~\textup{\ref{thm:limit}} above, except $\textup{\textbf{x}}_1, \ldots, \textup{\textbf{x}}_n$ are i.i.d. draws from a probability distribution on $\mathcal{M}$ described by a $C^3$ probability density function, $dP(\textup{\textbf{x}}) = P(\textup{\textbf{x}}) d\mu(\textup{\textbf{x}})$. 
Then, the almost sure limit of the LHS of \eqref{eq:main-limit} exists and equals $\Delta_{\mathcal{M}, \mathcal{B}, P} f(\textup{\bf p})$, where \begin{equation} \Delta_{\mathcal{M}, \mathcal{B}, P} := \operatorname{vol}(\mathcal{M})P \Delta_{\mathcal{M}, \mathcal{B}} + \delta_{\mathcal{M}, \mathcal{B}, P}. \end{equation} Here $\delta_{\mathcal{M}, \mathcal{B}, P}$ only modifies the first-order derivative term, and is defined by \begin{align} \label{eq:additional-term} (\delta_{\mathcal{M},\mathcal{B}, P} f)(\textup{\bf p}) & := \left\langle ( \operatorname{grad} \widetilde{f}(0) ) (\operatorname{grad} \widetilde{P}(0))^{\top} , \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| L_{\textup{\textbf{p}}}(\textup{\textbf{s}}) \|_{\mathcal{B}} \leq 1\}} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d \textup{\textbf{s}} \right\rangle \nonumber \\ & = \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| L_{\textup{\textbf{p}}}(\textup{\textbf{s}}) \|_{\mathcal{B}} \leq 1\}} \langle \operatorname{grad} \widetilde{f}(0), \textup{\bf s} \rangle \langle \operatorname{grad} \widetilde{P}(0), \textup{\bf s} \rangle d\textup{\bf s}, \end{align} where $\widetilde{f} = f \circ \operatorname{exp}_{\textup{\bf p}}$ and $\widetilde{P} = P \circ \operatorname{exp}_{\textup{\bf p}}$. \end{Theorem} \begin{proof} Using the reduction in \textup{\cite[Sec.~5]{BelkinNiyogi2008}} and then Theorem~\ref{thm:limit}, the LHS of \eqref{eq:main-limit} tends to $\Delta_{\mathcal{M}, \mathcal{B}} h(\textup{\bf p})$ where $h : \mathcal{M} \rightarrow \mathbb{R}$ is defined by $h(\textup{\textbf{x}}) := \left( f(\textup{\textbf{x}}) - f(\textup{\textbf{p}}) \right)P(\textup{\textbf{x}})$. Let $\widetilde{h} = h \circ \operatorname{exp}_{\textup{\bf p}}$, so $\widetilde{h} = \widetilde{f} \, \widetilde{P} - \widetilde{f}(0) \widetilde{P}$. 
Then $\operatorname{grad} \widetilde{h} (0) = \widetilde{P}(0) \operatorname{grad} \widetilde{f}(0)$ and $\operatorname{hess} \widetilde{h}(0) = \widetilde{P}(0) \operatorname{hess} \widetilde{f}(0) + ( \operatorname{grad} \widetilde{f}(0) ) (\operatorname{grad} \widetilde{P}(0))^{\top} + ( \operatorname{grad} \widetilde{P}(0) ) (\operatorname{grad} \widetilde{f}(0))^{\top}$. Inserting these formulas into Definition~\ref{def:Laplace-like} and rearranging gives the result. \hfill \qed \end{proof} \subsection[First properties of the Laplacian-like differential operator]{\textbf{First properties of $\Delta_{\mathcal{M}, \mathcal{B}}$}} \label{sec:first_properties} We give a few basic properties of the limit in Theorem~\ref{thm:limit}. Firstly, it is elliptic. \begin{Lemma} [\normalfont{$\Delta_{\mathcal{M}, \mathcal{B}}$ is elliptic}] \label{lem:unif-elliptic} For all compact embedded Riemannian submanifolds $\mathcal{M} \subseteq \mathbb{R}^D$ and all norms $\| \cdot \|_{\mathcal{B}}$ on $\mathbb{R}^D$, the Laplacian-like operator $\Delta_{\mathcal{M}, \mathcal{B}}$ is a uniformly elliptic differential operator on $\mathcal{M}$. \end{Lemma} The proof of this lemma is in Appendix~\ref{app:unif-elliptic}. Next, we investigate the regularity properties of the coefficients of $\Delta_{\mathcal{M}, \mathcal{B}}$. Surprisingly, the first-order coefficient function need not even be continuous everywhere. Below, $T\mathcal{M} = \bigsqcup_{\textbf{p} \in \mathcal{M}} T_{\textbf{p}} \mathcal{M}$ is the \textup{tangent bundle} of $\mathcal{M}$, and $\operatorname{Sym}^2(T\mathcal{M}) = \bigsqcup_{\textbf{p} \in \mathcal{M}} \operatorname{Sym}^2(T_{\textbf{p}} \mathcal{M})$ is its \textup{symmetric square} bundle \cite[Ch.~10]{LeeBook2012}. 
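The gradient and Hessian identities invoked in the proof of Theorem~\ref{thm:nonuniform} can be sanity-checked by finite differences. The sketch below is our own test harness; it uses quadratic stand-ins for $\widetilde{f}$ and $\widetilde{P}$, which suffice because only derivatives up to second order at $0$ enter the identities.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
E = np.eye(d)

# Quadratic stand-ins for f~ and P~ with known derivatives at 0.
f0, a, A = 0.7, rng.standard_normal(d), rng.standard_normal((d, d))
p0, c, C = 1.3, rng.standard_normal(d), rng.standard_normal((d, d))
A, C = (A + A.T) / 2, (C + C.T) / 2            # symmetric Hessians

f = lambda s: f0 + a @ s + 0.5 * s @ A @ s      # grad f(0) = a, hess f(0) = A
P = lambda s: p0 + c @ s + 0.5 * s @ C @ s      # grad P(0) = c, hess P(0) = C
h = lambda s: (f(s) - f0) * P(s)                # h~ = f~ P~ - f~(0) P~

# Central finite differences for grad h(0) and hess h(0).
eps = 1e-5
grad_h = np.array([(h(eps * e) - h(-eps * e)) / (2 * eps) for e in E])
eps2 = 1e-4
hess_h = np.array([[(h(eps2 * (E[i] + E[j])) - h(eps2 * (E[i] - E[j]))
                     - h(eps2 * (E[j] - E[i])) + h(-eps2 * (E[i] + E[j])))
                    / (4 * eps2 ** 2) for j in range(d)] for i in range(d)])

# The identities used in the proof above:
assert np.allclose(grad_h, p0 * a, atol=1e-8)
assert np.allclose(hess_h, p0 * A + np.outer(a, c) + np.outer(c, a), atol=1e-5)
```

The symmetric cross-difference stencil keeps the truncation error at $O(\varepsilon^2)$, which is why loose tolerances suffice.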
\begin{Proposition} [\normalfont{Continuity properties of $\Delta_{\mathcal{M}, \mathcal{B}}$}] \label{prop:continuity-properties} For all compact embedded Riemannian submanifolds $\mathcal{M} \subseteq \mathbb{R}^D$ and all norms $\| \cdot \|_{\mathcal{B}}$ on $\mathbb{R}^D$, the Laplacian-like operator $\Delta_{\mathcal{M}, \mathcal{B}}$ has the following continuity properties. \begin{enumerate} \item As a section of $\operatorname{Sym}^2(T \mathcal{M})$, the coefficient of the second-order term, \begin{align}\label{eq:cont-pf-state-1} \frac{1}{2} \int_{\textup{\bf s} \in T_{\textup{\bf p}}\mathcal{M} : \| L_{\textup{\bf p}}(\textup{\bf s})\|_{\mathcal{B}} \leq 1} \textup{\bf s} \textup{\bf s}^{\top} d\textup{\bf s}, \end{align} is continuous at all points $\textup{\bf p} \in \mathcal{M}$. \item As a section of $T \mathcal{M}$, the coefficient of the first-order term, \begin{align}\label{eq:cont-pf-state-2} \int_{\widehat{\textup{\bf s}} \in T_{\textup{\bf p}}\mathcal{M} : \|\widehat{\textup{\bf s}}\|_2=1} \widehat{\textup{\bf s}} \| L_{\textup{\bf p}}(\widehat{\textup{\bf s}}) \|^{-d}_{\mathcal{B}} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\bf s}}) d\widehat{\textup{\bf s}}, \end{align} is continuous at all points $\textup{\bf p} \in \mathcal{M}$ such that the norm $\| \cdot \|_{\mathcal{B}}$ is continuously differentiable in a neighborhood of $L_{\textup{\bf p}}(T_{\textup{\bf p}}\mathcal{M}) \cap \mathbb{S}^{D-1}$. The first-order coefficient can be discontinuous at other points $\textup{\bf p} \in \mathcal{M}$. \end{enumerate} \end{Proposition} The proof of the second item relies on the expression for the tilt function in Proposition~\ref{prop:simple-title-C1}, where the norm is locally continuously differentiable. Details are in Appendix~\ref{app:continuity-properties}. 
\subsection[\textbf{Example: any manifold, Euclidean norm}]{\textbf{Example: any manifold, Euclidean norm}} \label{sec:example_euclidean} Let us first check that Theorem~\ref{thm:limit} agrees with the standard Euclidean theory. Let $\mathcal{M} \subseteq \mathbb{R}^D$ be any $d$-dimensional compact smooth embedded Riemannian manifold, $\textbf{p} \in \mathcal{M}$, and consider the \textup{Euclidean norm} $\| \cdot \|_2$ with Euclidean unit ball $\mathcal{B} = \{{\bf x}: \|{\bf x}\|_2 \le 1 \} \subseteq \mathbb{R}^D$. We first argue that ${\operatorname{tilt}_{\M, \B, \p}} \equiv 0$. Let $\widehat{\textbf{s}} \in T_{\textbf{p}}\mathcal{M}$ with $\| \widehat{\textbf{s}} \|_2 =1$. Set $\textbf{a} = L_{\textbf{p}}(\widehat{\textbf{s}})$, $\textbf{b} = \frac{1}{2} Q_{\textbf{p}}(\widehat{\textbf{s}})$ and $\mathcal{S} = \textup{Span}\{\textbf{a}, \textbf{b}\}$. If $\textbf{b} = 0$, then ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}}) = 0$. Else, put $\widetilde{\mathcal{B}} := \mathcal{B} \cap \mathcal{S}$. By construction, ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}}) = \eta$ for $\eta \in \mathbb{R}$ uniquely determined \nolinebreak by \begin{align*} \textbf{b} + \eta \textbf{a} \in TC_{\textbf{a}}(\partial \widetilde{\mathcal{B}}), \end{align*} using $\| \textbf{a} \|_2 = 1$ since $L_{\textbf{p}}$ is an isometry (Eq. \eqref{eq:isometric}). However, $\widetilde{\mathcal{B}}$ is a Euclidean unit disk in $\mathcal{S} \cong \mathbb{R}^2$, and $\partial \widetilde{\mathcal{B}}$ is a Euclidean unit circle in $\mathbb{R}^2$. So, $TC_{\textbf{a}}(\partial \widetilde{\mathcal{B}})$ is the orthogonal complement of $\mathbb{R} \textbf{a}$ inside $\mathcal{S}$. This gives $TC_{\textbf{a}}(\partial \widetilde{\mathcal{B}}) = \mathbb{R} \textbf{b}$ since $Q_{\textbf{p}}$ takes values in the normal space (Eq. \eqref{eq:perp}). Clearly then, $\eta = 0$ and ${\operatorname{tilt}_{\M, \B, \p}} \equiv 0$. 
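The same conclusion can be read off from Proposition~\ref{prop:simple-title-C1}: for the Euclidean norm, $\operatorname{grad}\| \cdot \|_2(\textbf{a}) = \textbf{a}/\|\textbf{a}\|_2$, so the numerator $\langle \operatorname{grad}\| \cdot \|_2(\textbf{a}), \textbf{b} \rangle$ vanishes by orthogonality (Eq.~\eqref{eq:perp}). A small numerical sketch of this check (the helper names are ours):

```python
import numpy as np

def tilt_c1(a, b, norm, norm_grad):
    # Tilt via the C^1 formula of Proposition (simple-title-C1):
    # tilt = -<grad||.||(a), b> / <grad||.||(a), a> * 1 / ||a||_B^2.
    g = norm_grad(a)
    return -(g @ b) / (g @ a) / norm(a) ** 2

rng = np.random.default_rng(1)
a = rng.standard_normal(3)
a /= np.linalg.norm(a)            # ||a||_2 = 1, since L_p is a Euclidean isometry
b = rng.standard_normal(3)
b -= (b @ a) * a                  # enforce <a, b> = 0, as in Eq. (perp)

# Euclidean norm: grad ||x||_2 = x/||x||_2 is orthogonal to b, so tilt = 0.
euclid_tilt = tilt_c1(a, b, np.linalg.norm, lambda x: x / np.linalg.norm(x))
```

Replacing the two norm callables with a non-Euclidean norm and its gradient generally produces a nonzero tilt.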
We have verified that the first-order term vanishes in $\Delta_{{\mathcal M}, \mathcal{B}}$. As for the second-order term, we compute the following second moment: \begin{align} \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| L_{\textup{\textbf{p}}}(\textup{\textbf{s}}) \|_2 \leq 1\}} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d \textup{\textbf{s}} & = \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| \textup{\textbf{s}} \|_2 \leq 1\}} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d \textup{\textbf{s}} \quad & \!\! [\textup{by Eq.}~\eqref{eq:isometric}]\nonumber \\ & = \left( \int_{\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| \textup{\textbf{s}} \|_2 \leq 1\}} s_1^2 d\textbf{s} \! \right) \! I_{d} & \!\![\textup{oddness, symmetry}] \nonumber \\ & \propto I_d. \label{eq:prop-constant} \end{align} Here the second equality used that the integration domain $\{\textup{\textbf{s}} \in T_{\textup{\textbf{p}}}\mathcal{M}: \| \textup{\textbf{s}} \|_2 \leq 1\}$ is preserved under sign flips of individual coordinates of $\textbf{s}$, hence the off-diagonal terms $s_i s_j$ ($i \neq j$) integrate to 0. Plugging into Definition~\ref{def:Laplace-like}, we obtain \begin{align*} \Delta_{\mathcal{M}, \mathcal{B}}f(\textbf{p}) & \propto \left\langle \operatorname{hess} \widetilde{f}(0), I_d \right\rangle \\ &= \textup{trace}\left({\text{hess} \widetilde{f}}(0) \right) \\ & =\Delta_{\mathcal{M}}f(\textbf{p}). \end{align*} This is the usual Laplace-Beltrami operator on $\mathcal{M}$ applied to $f$ and evaluated at $\textup{\bf p}$. Thus, we have checked that Theorem~\ref{thm:limit} indeed recovers Theorem~\ref{thm:classical}. \begin{Remark} Due to the lack of normalization in Definition~\ref{def:Laplace-like}, our Laplacian-like operator $\Delta_{\mathcal{M},\mathcal{B}}$ in the Euclidean case differs from the usual Laplace-Beltrami operator $\Delta_{\mathcal{M}}$ by a multiplicative constant. 
From Eq.~\eqref{eq:prop-constant}, the constant is \begin{align*} & \frac{1}{2}\int_{\{\textbf{s} \in T_{\textbf{p}} \mathcal{M} : \| \textbf{s} \|_2 \leq 1\}} s_1^2 d\textbf{s} \,\, = \,\, \frac{1}{2d} \int_{\{\textbf{s} \in T_{\textbf{p}} \mathcal{M} : \| \textbf{s} \|_2 \leq 1\}} \| \textbf{s} \|_2^2 d\textbf{s} \\[0.5em] & = \frac{1}{2d}\int_{0}^{1} r^{d+1}\operatorname{vol}(\mathbb{S}^{d-1}) dr \,\, = \,\, \frac{1}{2d} \int_{0}^{1} r^{d+1} \frac{2 \pi^{d/2}}{\Gamma(\frac{d}{2})} dr \,\, = \,\, \frac{\pi^{d/2}}{4\Gamma(\frac{d+4}{2})}. \end{align*} This is simply the ratio of prefactors in the scales $c_n$ in Theorems~\ref{thm:classical} and \ref{thm:limit}. \end{Remark} \subsection[\textbf{Example: circle in the plane, weighted Manhattan norm}]{\textbf{Example: circle in the plane, weighted $\ell_1$-norm}} \label{subsec:circle-example} Next, we look at a non-Euclidean example in full detail. Consider the (Euclidean) unit circle in $\mathbb{R}^2$ where the ambient norm is a weighted $\ell_1$-norm. That is, let $\mathcal{M} = S^1 = \{\textbf{x} = (x_1, x_2)^{\top} \in \mathbb{R}^2 : x_1^2 + x_2^2 = 1 \}$ and use the norm $\| \cdot \|_{\textbf{w},1}$ defined by $\| \textbf{x} \|_{\textbf{w},1} = w_1 |x_1| + w_2 |x_2|$ where $\textbf{w} = (w_1, w_2)^{\top} \in \left(\mathbb{R}_{>0}\right)^2$. The unit ball of $\| \cdot \|_{\textbf{w},1}$ is the region \begin{align*} \mathcal{B} = \left\{ \textbf{x} = (x_1, x_2)^{\top} \in \mathbb{R}^2 : w_1 |x_1| + w_2 |x_2| \leq 1 \right\}, \end{align*} while the unit sphere of $\| \cdot \|_{\textbf{w},1}$ is \begin{align*} \partial \mathcal{B} = \left\{ \textbf{x} = (x_1, x_2)^{\top} \in \mathbb{R}^2 : w_1 |x_1| + w_2 |x_2| = 1 \right\}, \end{align*} a rhombus with vertices $(\pm (1/w_1), 0), (0, \pm (1/w_2))$. Let $\textbf{p} = (\textup{cos} (\theta), \textup{sin} (\theta))^{\top} \in S^1$. Parameterize $T_{\textbf{p}}S^1$ (with respect to a fixed unit basis vector) using $\psi \in \mathbb{R}$. 
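Before developing this example, we note that the constant in the preceding remark is easy to confirm numerically (a standalone sketch; the function names are ours):

```python
import math

def constant_via_moment(d: int) -> float:
    # (1/2) * integral of s_1^2 over the unit d-ball, computed as
    # (1/(2d)) * vol(S^{d-1}) * int_0^1 r^{d+1} dr
    vol_sphere = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    return vol_sphere / (2 * d * (d + 2))

def constant_closed_form(d: int) -> float:
    # pi^{d/2} / (4 * Gamma((d+4)/2))
    return math.pi ** (d / 2) / (4 * math.gamma((d + 4) / 2))

for d in range(1, 11):
    assert abs(constant_via_moment(d) - constant_closed_form(d)) < 1e-12
```

The agreement for $d = 1, \ldots, 10$ reflects the identity $\Gamma(\tfrac{d+4}{2}) = \tfrac{d(d+2)}{4}\Gamma(\tfrac{d}{2})$.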
The exponential map is \begin{align*} \textup{exp}_{\textbf{p}} : T_{\textbf{p}}S^1 \rightarrow S^1, \quad \psi \mapsto \left(\textup{cos}(\theta + \psi), \textup{sin}(\theta + \psi) \right)^{\top}. \end{align*} The differential of this is \begin{align*} L_{\textbf{p}}(\psi) = \psi \left(-\textup{sin}(\theta), \textup{cos}(\theta) \right)^{\top}\!\!. \end{align*} The second fundamental form is \begin{align*} Q_{\textbf{p}}(\psi) = -\psi^2\left(\textup{cos}(\theta), \textup{sin}(\theta) \right)^{\top}\!\!. \end{align*} We treat the terms in the limiting operator in Definition~\ref{def:Laplace-like} in turn. For the second-order term, we need the second moment of a line segment: \begin{align*} \tfrac{1}{2} \int_{\{\psi : \|L_{\textbf{p}}(\psi)\|_{\textbf{w},1} \leq 1\}} \psi^2 d\psi &= \tfrac{1}{2} \int_{|\psi| \, \leq \, \| (-\textup{sin}(\theta), \textup{cos}(\theta))^{\top} \|_{\textbf{w},1}^{-1}} \psi^2 d\psi \nonumber \\[3.5pt] &= \frac{1}{3\left( w_1 | \textup{sin}(\theta)| + w_2 |\textup{cos}(\theta)|\right)^3}. \end{align*} As for the first-order coefficient, this becomes a sum over the two endpoints of the line segment. We shall show the first-order coefficient equals \begin{equation} \label{eq:toughy} \textup{sign}(\textup{cos}(\theta)\textup{sin}(\theta)) \frac{-w_1 | \textup{cos}(\theta) | + w_2 |\textup{sin}(\theta)|}{(w_1|\textup{sin}(\theta)| + w_2 |\textup{cos}(\theta)|)^4}, \end{equation} where $\textup{sign} : \mathbb{R} \rightarrow \{-1,0,1\}$ is given by $\textup{sign}(t) := 1$ if $t>0$; $\textup{sign}(t) := -1$ if $t < 0$; and $\textup{sign}(0) := 0$. By the symmetry of the rhombus $\partial\mathcal{B}$ with respect to individual coordinate sign flips in $\mathbb{R}^2$, one easily sees that formula \eqref{eq:toughy} is correct for $\theta$ an integer multiple of $\frac{\pi}{2}$ (the first-order coefficient is zero). Otherwise, we may reduce to verifying correctness of the expression \eqref{eq:toughy} when $\theta \in (0, \frac{\pi}{2})$.
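These symmetry reductions can be double-checked directly on formula \eqref{eq:toughy} (a standalone numerical sketch; the function names are ours, and only $\theta = 0$ is tested exactly among the multiples of $\frac{\pi}{2}$, to avoid floating-point artifacts):

```python
import math

# Sanity checks on the first-order coefficient, Eq. (toughy); names are ours.
def sign(t: float) -> int:
    return (t > 0) - (t < 0)

def first_order_coeff(theta: float, w1: float, w2: float) -> float:
    c, s = math.cos(theta), math.sin(theta)
    num = -w1 * abs(c) + w2 * abs(s)
    den = (w1 * abs(s) + w2 * abs(c)) ** 4
    return sign(c * s) * num / den

w1, w2 = 1.0, 1.5
assert first_order_coeff(0.0, w1, w2) == 0.0   # vanishes at theta = 0
for k in range(1, 10):
    theta = k * math.pi / 20
    val = first_order_coeff(theta, w1, w2)
    # the coordinate sign flips theta -> -theta and theta -> pi - theta
    # reverse the drift ...
    assert abs(first_order_coeff(-theta, w1, w2) + val) < 1e-12
    assert abs(first_order_coeff(math.pi - theta, w1, w2) + val) < 1e-12
    # ... while the antipodal map theta -> theta + pi preserves it
    assert abs(first_order_coeff(theta + math.pi, w1, w2) - val) < 1e-12
```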
Then in this case, it is enough to show \begin{equation} \label{eq:toughy-2} {\operatorname{tilt}_{\M, \B, \p}}(1) = \frac{1}{2} \frac{-w_1 \textup{cos}(\theta) + w_2 \textup{sin}(\theta)}{(w_1 \textup{sin}(\theta) + w_2 \textup{cos}(\theta))^3}. \end{equation} To this end, let $\alpha \in (0, \frac{\pi}{2})$ be half the angle $\partial \mathcal{B}$ makes at $(\frac{1}{w_1}, 0)^{\top}$\!, so $\textup{tan}(\alpha) = w_1/w_2$. Let $\textbf{a} = L_{\textbf{p}}(1)$ and $\textbf{b} = \frac{1}{2} Q_{\textbf{p}}(1)$. Let $\omega$ be the signed angle at $\textbf{a} / \|\textbf{a}\|_{\textbf{w},1} \in \partial \mathcal{B}$ from $\textbf{a} / \|\textbf{a}\|_{\textbf{w},1} \, + \, \mathbb{R}_{\geq 0} (\frac{-1}{w_1}, \frac{-1}{w_2})^{\top}$ to $\textbf{a} / \|\textbf{a}\|_{\textbf{w},1} \, + \, \mathbb{R}_{\geq 0} \textbf{b}$, where counterclockwise counts as positive. By elementary angle chasing, \begin{align*} \omega = \theta - \alpha. \end{align*} Thus (see Figure~\ref{fig:understand-tilt}), \begin{align*} {\operatorname{tilt}_{\M, \B, \p}}(1) &= \left\| \frac{\textbf{b}}{\|\textbf{a} \|_{\textbf{w},1}^2} \right\|_{2} \! \textup{tan}(\omega) = \frac{1}{2 \|\textbf{a} \|_{\textbf{w},1}^2} \textup{tan}(\theta - \alpha) \\ &= \frac{1}{2\left( w_1 \textup{sin}(\theta) + w_2 \textup{cos}(\theta) \right)^2} \frac{\textup{tan}(\theta) - (w_1/w_2)}{1 + \textup{tan}(\theta) (w_1/w_2)}, \end{align*} which indeed simplifies to Eq.~\eqref{eq:toughy-2}. Summarizing: for each angle $\theta \in [0,2\pi]$, Theorem~\ref{thm:limit} implies the Laplacian-like differential operator $\Delta_{\mathcal{M}, \mathcal{B}}$ is given by \begin{equation} \label{eq:limit-op-circle} \textup{sign}(\cos \theta \, \sin \theta) \, \frac{- w_1 |\cos \theta| + w_2 |\sin \theta| }{\left(w_1 |\sin \theta| + w_2 |\cos \theta| \right)^4} \, \frac{d}{d \theta} \,\, + \,\, \frac{1}{3 \left(w_1 |\sin \theta| + w_2 |\cos \theta| \right)^3} \, \frac{d^2}{d \theta^2}. 
\end{equation} As an independent numerical verification of this formula, we performed the following experiment. We fixed a particular function $f : S^1 \rightarrow \mathbb{R}$ (namely a certain trigonometric polynomial). We drew $n$ points uniformly i.i.d.\ from the circle. We computed the empirical point-cloud Laplacian applied to $f$, using Eq.~\eqref{eq:discrete-Lap-1} and evaluating $\mathcal{L}_{n,\mathcal{B}} f$ along a dense regular grid. For comparison, we evaluated the Laplacian-like operator applied to $f$, using Eq.~\eqref{eq:limit-op-circle} and evaluating $\Delta_{\mathcal{M}, \mathcal{B}}f$ along the same grid. Figure~\ref{fig:empirical-v-theoretical} shows a convincing match: as the sample size $n$ grows, the empirical and theoretical plots match up increasingly well. \begin{figure} \includegraphics[width=\linewidth]{check_L1_laplacian_n=4000} \includegraphics[width=\linewidth]{check_L1_laplacian_n=40000} \caption{\textit{Empirical vs. theoretical weighted $\ell_1$ Laplacian on the circle} ($w_1 = 1, w_2 = 1.5$) applied to the function $f(\theta) = \sin(\theta) + \cos(2 \theta) + \cos(5\theta)$. For the empirical Laplacian, the samples were drawn from the uniform distribution on the unit circle. (top panel) $n = 4,000$ samples; (bottom panel) $n = 40,000$ samples. Here $\mathcal{L}_{n, \mathcal{B}}f$ is scaled by $\operatorname{vol}(S^1)/(\Gamma(\tfrac{d+4}{2})\sigma_n^{d+2})$.} \label{fig:empirical-v-theoretical} \end{figure} Appendix~\ref{sec:numerical_eigenfunction_computation} presents numerical results on the \textup{eigenfunctions} of \eqref{eq:limit-op-circle}. \begin{Remark} \label{rem:discontinuous} The coefficient of $\frac{d}{d \theta}$ in Eq.~\eqref{eq:limit-op-circle} is discontinuous at $\theta = 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$. Thus, this example confirms the second sentence of Proposition~\ref{prop:continuity-properties}, item 2.
\end{Remark} \section{\textbf{Proof of Theorem \ref{thm:limit}}} \label{sec:main-proof} To improve readability, we split the proof of Theorem~\ref{thm:limit} into several steps. First, we reduce to the population limit ($n=\infty$), replacing sums by integrals, via concentration of measure. The integrals are then parameterized by geodesic normal coordinates on the manifold. Both of these are standard steps in the analysis of empirical Laplacians based on the $\ell_2$-norm. We then replace the Gaussian kernel by the 0/1 kernel, so all considerations are local. The domain of integration becomes the intersection of the manifold $\mathcal{M}$ with the convex body $\sigma \mathcal{B}$ (for $\sigma \rightarrow 0$). This being a potentially unwieldy domain, we substitute the Taylor series expansion of the exponential map to replace the manifold $\mathcal{M}$ by first- and second-order approximations around $\textbf{p}$. For the term involving the second fundamental form, we switch to spherical coordinates. Then we study the radial domain of integration. We consider this step (Section~\ref{subsec:boundary}) to be the most technical part of the proof. Following this analysis, the tilt function emerges (Proposition~\ref{prop:obtain-tilt}), and dominated convergence is used to finish. \subsection[Step 1: reduce to the population limit]{\textbf{Step~1: reduce to the population limit ($n \to \infty$)}} This is a standard application of concentration of measure. Let \begin{align*} S_n^{(i)} &:= \operatorname{vol}(\mathcal{M})\frac{1}{nc_n} K_{\sigma_n}(\| {\bf x}_i - {\bf p} \|_{\mathcal{B}})(f({\bf x}_i) - f({\bf p})),\\ \quad S_n &:= (\operatorname{vol}(\mathcal{M})/c_n) \mathcal{L}_{n,{\mathcal{B}}} f(\textup{\bf p}) = \sum_{i=1}^n S_n^{(i)}. \end{align*} \noindent For a fixed sample size $n$, the values $S_n^{(1)}, \ldots, S_n^{(n)}$ are i.i.d. random variables.
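In code, the estimator $S_n$ is only a few lines (a standalone sketch of our own, written in the setting of Section~\ref{subsec:circle-example}, with $K_\sigma(t) = \textup{exp}(-t^2/\sigma^2)$ as follows from the identity used in Step~2 and $c_n = \Gamma(\frac{d+4}{2})\sigma_n^{d+2}$; all names are ours):

```python
import math
import random

# Minimal sketch (not the paper's code) of the Step-1 estimator S_n on
# M = S^1 with the weighted l1-norm; K_sigma(t) = exp(-t^2/sigma^2),
# c_n = Gamma((d+4)/2) * sigma^{d+2} with d = 1.

def estimator_S_n(f, p_angle, sample_angles, sigma, w1, w2):
    d = 1
    c_n = math.gamma((d + 4) / 2) * sigma ** (d + 2)
    vol = 2 * math.pi                      # vol(S^1)
    px, py = math.cos(p_angle), math.sin(p_angle)
    total = 0.0
    for t in sample_angles:
        dist = w1 * abs(math.cos(t) - px) + w2 * abs(math.sin(t) - py)
        total += math.exp(-dist ** 2 / sigma ** 2) * (f(t) - f(p_angle))
    return vol * total / (len(sample_angles) * c_n)

random.seed(0)
angles = [random.uniform(0.0, 2.0 * math.pi) for _ in range(2000)]
# S_n annihilates constant functions exactly ...
assert estimator_S_n(lambda t: 3.0, 1.0, angles, 0.1, 1.0, 1.5) == 0.0
# ... and, since K <= 1, obeys the uniform bound |S_n| <= 2 C_0 vol(M) / c_n
C_0, c_n = 1.0, math.gamma(2.5) * 0.1 ** 3
assert abs(estimator_S_n(math.sin, 1.0, angles, 0.1, 1.0, 1.5)) <= 2 * C_0 * 2 * math.pi / c_n
```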
By the continuity of $f$ and compactness of $\mathcal{M}$, there is a constant $C_0 > 0$ such that $|f(\textbf{x})| \leq C_0$ for all $\mathbf{x} \in \mathcal{M}$. Recalling $K_{\sigma_n} \le 1$, it follows that \begin{equation} \label{eq:Sin-bound} |S_n^{(i)}| \leq (2C_0\operatorname{vol}(\mathcal{M}))/(nc_n). \end{equation} Let $\epsilon > 0$. By inequality~\eqref{eq:Sin-bound} together with Hoeffding's inequality, \begin{equation} \label{eq:hoeff} \mathbb{P}\left( \lvert S_n - \mathbb{E}[S_n] \rvert \geq \frac{\epsilon}{2} \right) \! \leq \! 2 \textup{exp}\left( \frac{-\epsilon^2 n c_n^2}{32C_0^2\operatorname{vol}(\mathcal{M})^2} \right) \! = \! 2 \textup{exp}\left( \frac{-\epsilon^2 \Gamma(\frac{d+4}{2})^2 n^{\frac{\alpha}{2d+4+\alpha}}}{32C_0^2\operatorname{vol}(\mathcal{M})^2} \right) \end{equation} where we substituted $c_n = \Gamma(\frac{d+4}{2}) \sigma_n^{d+2}$ and $\sigma_n = n^{-1/(2d+4+\alpha)}$. Here \begin{align*} \mathbb{E}[S_n] \,\, = \,\, \frac{1}{\Gamma(\frac{d+4}{2}) \sigma_n^{d+2}} \int_{\textbf{x} \in \mathcal{M}} K_{\sigma_n}(\| \textbf{x} - \textbf{p} \|_{\mathcal{B}}) \left(f(\textbf{x}) - f(\textbf{p})\right) d\mu(\textbf{x}), \end{align*} where $d\mu$ is the Riemannian volume density on $\mathcal{M}$. Since $\sigma_n \rightarrow 0$ as $n \rightarrow \infty$, \textbf{assuming we proved} \begin{equation} \label{eq:pop-limit-gamma} \lim_{\sigma \rightarrow 0} \frac{1}{\Gamma(\frac{d+4}{2}) \sigma^{d+2}} \int_{\textbf{x} \in \mathcal{M}} K_{\sigma}(\| \textbf{x} - \textbf{p} \|_{\mathcal{B}}) \left(f(\textbf{x}) - f(\textbf{p})\right) d\mu(\textbf{x}) \, = \, \Delta_{\mathcal{M},\mathcal{B}} f(\textbf{p}), \end{equation} \textbf{\textup{then it would follow}} there exists $n_0 = n_0(\epsilon)$ such that for all $n > n_0$, \begin{equation} \label{eq:if-would} \bigl\lvert \mathbb{E}[S_n] - \Delta_{\mathcal{M}, \mathcal{B}}f(\textbf{p}) \bigr\rvert \leq \frac{\epsilon}{2}.
\end{equation} Combining inequalities \eqref{eq:hoeff} and \eqref{eq:if-would} gives, for all $n > n_0$, \begin{align} \label{eq:almost-there} \mathbb{P}(\bigl\lvert S_n - \Delta_{\mathcal{M}, \mathcal{B}}f(\textup{\bf p})\bigr\rvert \geq \epsilon) & \leq \mathbb{P}(\bigl\lvert S_n - \mathbb{E}[S_n]\bigr\rvert \geq \tfrac{\epsilon}{2}) \\[0.4em] & \leq 2 \textup{exp}\!\left( \frac{-\epsilon^2 \Gamma(\tfrac{d+4}{2})^2 n^{\frac{\alpha}{2d+4+\alpha}}}{32C_0^2\operatorname{vol}(\mathcal{M})^2} \right)\!. \end{align} Because $\alpha > 0$, the RHS of \eqref{eq:almost-there} converges to $0$ as $n \rightarrow \infty$. Since $\epsilon$ was arbitrary, this shows that $S_n$ converges to $\Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p})$ in probability. Dividing by $\operatorname{vol}({\mathcal M})$ gives $\mathcal{L}_{n,{\mathcal{B}}}f(\textbf{p}) \xrightarrow{\, p \,} (1/\operatorname{vol}(\mathcal{M})) \Delta_{\mathcal{M}, \mathcal{B}}f(\textup{\bf p})$. We can upgrade this to almost sure convergence, simply by noting that the series \[ \sum_{n=1}^{\infty} 2 \, \textup{exp}\!\left( \frac{-\epsilon^2 \Gamma(\tfrac{d+4}{2})^2 n^{\frac{\alpha}{2d+4+\alpha}}}{32C_0^2 \operatorname{vol}(\mathcal{M})^2} \right)\! \] converges and citing the Borel-Cantelli lemma. \textup{It remains to actually prove~\eqref{eq:pop-limit-gamma}}. \subsection[\textbf{Step 2: reduce to the indicator function kernel}]{\textbf{Step~2: reduce to the indicator function kernel}} \label{subsec:switch-indicator} In this step, we replace the Gaussian kernel $K_{\sigma_n}$ by the indicator function kernel $\mathbb{1}_{\sigma_n}$, defined by \begin{align*} \mathbb{1}_{\sigma_n} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0} \textup{ where } \mathbb{1}_{\sigma_n}(t) := 1 \textup{ if } t \in [0, \sigma_n] \textup{ and } \mathbb{1}_{\sigma_n}(t) := 0 \textup{ if } t > \sigma_n.
\end{align*} Precisely, we show \begin{equation} \label{eq:pop-limit} \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^{d+2}} \int_{\textbf{x} \in \mathcal{M}} \! \mathbb{1}_{\sigma}(\| \textbf{x} - \textbf{p} \|_{\mathcal{B}}) \left(f(\textbf{x}) - f(\textbf{p})\right) d\mu(\textbf{x}) \, = \, \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) \end{equation} implies the required formula \eqref{eq:pop-limit-gamma}, and thereby we will reduce to proving \eqref{eq:pop-limit}. To achieve this reduction, we now assume \eqref{eq:pop-limit}. Write the Gaussian kernel $K_{\sigma}$ as a superposition of indicator functions: \begin{equation} \label{eq:kappa-def} K_{\sigma}(t) = \int_{s=0}^{\infty} \kappa_{\sigma}(s) \mathbb{1}_{s}(t) ds = \int_{s=t}^{\infty} \kappa_{\sigma}(s) ds, \end{equation} where $\kappa_{\sigma} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ is given by $\kappa_{\sigma}(s) := (2s/\sigma^2) \textup{exp}(-s^2/\sigma^2)$. Then, \begin{align} & \int_{\textbf{x} \in \mathcal{M}} K_{\sigma}(\| \textbf{x} - \textbf{p}\|_{\mathcal{B}}) (f(\textbf{x}) - f(\textbf{p})) d\mu(\textbf{x}) & \nonumber \\ &= \int_{\textbf{x} \in \mathcal{M}} \left( \int_{s = \| \textbf{x} - \textbf{p} \|_{\mathcal{B}}}^{\infty} \kappa_{\sigma}(s) ds \right) (f(\textbf{x}) - f(\textbf{p})) d\mu(\textbf{x}) \,\, \quad &[\textup{substituting } \eqref{eq:kappa-def}]\nonumber \\ &= \int_{s=0}^{\infty} \kappa_{\sigma}(s) \left( \int_{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq s} f(\textbf{x}) - f(\textbf{p}) d\mu(\textbf{x}) \right)\!ds \,\, \quad &[\textup{Fubini's theorem}]. \label{eq:fubini} \end{align} Define \begin{align*} e(s) := \left(\int_{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq s} f(\textbf{x}) - f(\textbf{p}) d\mu(\textbf{x})\right) \, - \, s^{d+2} \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}).
\end{align*} In light of \eqref{eq:pop-limit}, we have \begin{equation} \label{eq:little-o} e(s) = o(s^{d+2}) \,\, \textup{ as } s \rightarrow 0. \end{equation} Fix $\epsilon > 0$. By \eqref{eq:little-o}, we can fix $\delta > 0$ such that \begin{equation} \label{eq:little-o-explicit} 0 \leq s \leq \delta \, \Rightarrow \, |e(s)| \leq \epsilon s^{d+2}. \end{equation} Returning to Eq.~\eqref{eq:fubini}, we may change the upper limit of integration with the following control on the approximation error: \begin{align} \label{eq:drop-to-delta} & \int_{s=0}^{\infty} \kappa_{\sigma}(s) \left( \int_{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq s} f(\textbf{x}) - f(\textbf{p}) d\mu(\textbf{x}) \right)ds \\ & = \int_{s=0}^{\delta} \kappa_{\sigma}(s) \left( \int_{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq s} f(\textbf{x}) - f(\textbf{p}) d\mu(\textbf{x}) \right)ds \,\, + \,\, \textup{exp}(-\delta^2 / \sigma^2)\textup{poly}(\sigma). \nonumber \end{align} To justify Eq.~\eqref{eq:drop-to-delta}, we note the parenthesized integral has absolute value bounded by $2C_0$ for all $s \in [0, \infty)$, by the compactness of $\mathcal{M}$. Thus, a tail bound for the Gaussian kernel implies \eqref{eq:drop-to-delta} (set $k=0$ in Eq.~\eqref{eq:gaussian-even-tail} in Appendix~\ref{app:gaussian}). Now the main term in Eq.~\eqref{eq:drop-to-delta} is \begin{align*} \int_{s=0}^{\delta} \kappa_{\sigma}(s) \big(s^{d+2} \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) \, + \, e(s) \big) ds. \end{align*} From \eqref{eq:little-o-explicit}, this is bounded above by \begin{equation} \label{eq:plus-eps} \int_{s=0}^{\delta} \kappa_{\sigma}(s) s^{d+2} \left( \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) + \epsilon\right) ds, \end{equation} and below by \begin{equation} \label{eq:neg-eps} \int_{s=0}^{\delta} \kappa_{\sigma}(s) s^{d+2} \left( \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) - \epsilon\right) ds.
\end{equation} By additional bounds for the Gaussian (Appendix~\ref{app:gaussian}), the upper and lower bounds \eqref{eq:plus-eps} and \eqref{eq:neg-eps} are equal to \begin{align*} \int_{s=0}^{\infty} \kappa_{\sigma}(s) s^{d+2} \left( \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) \pm \epsilon \right) ds \, + \, \textup{exp}(-\delta^2/\sigma^2)\textup{poly}(\sigma). \end{align*} But, the main term is \begin{align*} \sigma^{d+2} \Gamma(\tfrac{d+4}{2}) \left( \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}) \pm \epsilon \right)\!, \end{align*} by the formula for half the $k$-th absolute moment of $\kappa_{\sigma}$ (Eq.~\eqref{eq:gaussian-central-moment}, Appendix~\ref{app:gaussian}). Using $\lim_{\sigma \rightarrow 0} \textup{exp}(-\delta^2/\sigma^2) \textup{poly}(\sigma) = 0$, and the fact that $\epsilon$ is arbitrary, we achieve what we wanted: \begin{align*} \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^{d+2}} \int_{\textbf{x} \in \mathcal{M}} K_{\sigma}(\| \textbf{x} - \textbf{p}\|_{\mathcal{B}}) (f(\textbf{x}) - f(\textbf{p})) d\mu(\textbf{x}) \,\, = \,\, \Gamma(\tfrac{d+4}{2}) \Delta_{{\mathcal M},{\mathcal{B}}}f(\textbf{p}). \end{align*} To sum up, Eq.~\eqref{eq:pop-limit} implies Eq.~\eqref{eq:pop-limit-gamma}. It remains to prove Eq.~\eqref{eq:pop-limit}. \subsection[\textbf{Step 3: use geodesic normal coordinates and Taylor expand}]{\textbf{Step~3: use geodesic normal coordinates and Taylor expand}} In this step, we express the integral in the LHS of \eqref{eq:pop-limit} in normal coordinates, \begin{equation} \label{eq:key-int-again} \int_{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq \sigma} f(\textbf{x}) - f(\textbf{p}) d\mu(\textbf{x}).
\end{equation} We parameterize it using the exponential map (Section~\ref{subsec:prelim-riem}), \begin{align*} \textup{exp}_{{\bf p}}: U \xrightarrow{\sim} V, \end{align*} where $U \subseteq T_{\textbf{p}}\mathcal{M}$ and $V \subseteq \mathcal{M}$ are open neighborhoods of $0$ and $\textbf{p}$ respectively. Note that there exists some constant $\sigma_0 > 0$ such that for all $\sigma \leq \sigma_0$ the domain of integration in \eqref{eq:key-int-again} is contained in $V$, \begin{equation} \label{eq:exp-UV} \{\textbf{x} \in \mathcal{M} : \| \textbf{x} - \textbf{p} \|_{\mathcal{B}} \leq \sigma \} \subseteq V. \end{equation} This follows from the fact that $\mathcal{M}$ is an embedded submanifold of $\mathbb{R}^D$, hence $V$ can be written as an open set of $\mathbb{R}^D$ intersected with $\mathcal{M}$, and the fact $\| \cdot \|_{\mathcal{B}}$ is equivalent to the Euclidean norm on $\mathbb{R}^D$ and so induces the same open sets. Therefore, by a change of variables, for each $\sigma \leq \sigma_0$, the integral \eqref{eq:key-int-again} equals \begin{equation} \label{eq:switch-geo} \int_{\textbf{s} \in U : \| \textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p}\|_{\mathcal{B}} \leq \sigma} \left( \widetilde{f}(\textbf{s}) - \widetilde{f}(0) \right) \bigl\lvert \textup{det} D \textup{exp}_{\textbf{p}}(\textbf{s}) \bigr\rvert d\textbf{s}, \end{equation} where $\textbf{s} = (s_1, \ldots, s_d)^{\top}$ denotes coordinates for $T_{\textbf{p}}\mathcal{M}$ with respect to an orthonormal basis and $d\textbf{s}$ denotes the Lebesgue measure on $(T_{\textbf{p}}\mathcal{M}, \langle \cdot , \cdot \rangle_{\textbf{p}})$. Our goal is to approximate the integral \eqref{eq:switch-geo} up to order $\sigma^{d+2}$. 
To this end, we will consider three Taylor expansions: \begin{align} & \widetilde{f}(\textbf{s}) \, = \, \widetilde{f}(0) + \text{grad}\widetilde{f}(0)^{\top} \textbf{s} + \tfrac{1}{2} \textbf{s}^{\top} \text{hess} \widetilde{f}(0) \textbf{s} + O(\| \textbf{s} \|_2^3), \label{eq:taylor-f}\\[0.8pt] & \textup{det} D\textup{exp}_{\textbf{p}}(\textbf{s}) \, = \, 1 - \tfrac{1}{6} \textbf{s}^{\top} \textup{Ric}(\textbf{p})\textbf{s} + O(\| \textbf{s} \|_2^3) \, = \, 1 + O(\| \textbf{s} \|_2^2), \label{eq:taylor-Dexp}\\[0.8pt] & \textup{exp}_{\textbf{p}}(\textbf{s}) \,\, = \,\, \textbf{p} \, + \, L_{\textbf{p}}(\textbf{s}) \, + \, \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \, + \, O(\| \textbf{s} \|_2^3). \label{eq:taylor-exp} \end{align} Here $\textup{Ric}(\textbf{p}) \in \mathbb{R}^{d \times d}$ stands for the \textup{Ricci curvature} of $\mathcal{M}$ at $\textbf{p}$ (see \cite[Ch.~7]{Lee-Riem-Book}). Also, see Section~\ref{subsec:prelim-riem} for discussion on $L_{\textbf{p}}$ and $Q_{\textbf{p}}$. Substituting equations \eqref{eq:taylor-f} and \eqref{eq:taylor-Dexp} into the integral~\eqref{eq:switch-geo} leads to \begin{equation} \label{eq:taylor-int} \int_{\textbf{s} \in U : \| \textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p}\|_{\mathcal{B}} \leq \sigma} \text{grad} \widetilde{f}(0)^{\top} \textbf{s} \, + \, \tfrac{1}{2} \textbf{s}^{\top} \text{hess} \widetilde{f}(0) \textbf{s} \, + \, O(\| \textbf{s} \|_2^3) \,\, d\textbf{s}. \end{equation} \subsection[\textbf{Step 4: approximate the domain of integration}]{\textbf{Step~4: approximate the domain of integration}} In this step, we approximate the \textup{domain} of integration in \eqref{eq:taylor-int} using the Taylor expansion of the exponential map (Definition~\ref{def:domain-approx}). Then we assess the quality of our approximations (Proposition~\ref{prop:domain-bounds}). \begin{Definition} \label{def:domain-approx} For each $\sigma >0$, we define three subsets of $T_\textup{\bf p} {\mathcal M}$ as follows. 
\begin{align*} & \operatorname{Exact}(\sigma) := \{ \textup{\bf s} \in U : \left\| \textup{exp}_\textup{\bf p}(\textup{\bf s}) - \textup{\bf p} \right\|_{\mathcal{B}} \leq \sigma \}, \\[0.5pt] & \operatorname{Approx}^{(1)}(\sigma) := \{ \textup{\bf s} \in T_\textup{\bf p} {\mathcal M} : \left\| L_\textup{\bf p}(\textup{\bf s}) \right\|_{\mathcal{B}} \leq \sigma \}, \\[0.5pt] & \operatorname{Approx}^{(2)}(\sigma) := \{ \textup{\bf s} \in T_\textup{\bf p} {\mathcal M} : \left\| L_\textup{\bf p}(\textup{\bf s}) + \tfrac12 Q_\textup{\bf p}(\textup{\bf s}) \right\|_{\mathcal{B}} \leq \sigma \}. \end{align*} The set $\operatorname{Exact}(\sigma) \subseteq T_\textup{\bf p} {\mathcal M}$ is the exact domain of integration, parameterized on the tangent space. The sets $\operatorname{Approx}^{(1)}(\sigma)$ and $\operatorname{Approx}^{(2)}(\sigma)$ are approximations to $\operatorname{Exact}(\sigma)$, where the manifold around $\textup{\bf p}$ is approximated to first and second order, respectively. \end{Definition} \smallskip \begin{Proposition} \label{prop:domain-bounds} \begin{enumerate}[i.] \item \!There exist constants $c_1, \sigma_1 > 0$ such that for all $\sigma \leq \sigma_1$, \begin{equation} \label{eq:nice-exactU} \operatorname{Exact}(\sigma) \subseteq \{\textup{\bf s} \in T_\textup{\bf p} {\mathcal M} : \left\|\textup{\bf s} \right\|_2 \leq c_1 \sigma\} . \end{equation} \item \!There exist constants $c_2, c_3, \sigma_2 > 0$ such that for all $\sigma \leq \sigma_2$, \begin{equation} \label{eq:lem-inc} (1-c_2 \sigma) \operatorname{Approx}^{(1)}(\sigma) \subseteq \operatorname{Exact}(\sigma) \subseteq (1+c_3 \sigma) \operatorname{Approx}^{(1)}(\sigma). 
\end{equation} \label{eq:domain-approx1} \item \!There exist constants $c_4, c_5, \sigma_3 > 0$ such that for all $\sigma \leq \sigma_3$, \begin{equation} \label{eq:lem-inc-approx2-nice} (1-c_5 \sigma^2) \operatorname{Approx}^{(2)}(\sigma) \subseteq \operatorname{Exact}(\sigma) \subseteq (1+c_4 \sigma^2) \operatorname{Approx}^{(2)}(\sigma). \end{equation} \end{enumerate} \end{Proposition} \begin{proof} \underline{\textup{part~i.}} First, note that for any $\epsilon > 0$, we can shrink $U$ and $V$ in \eqref{eq:exp-UV} so as to guarantee that for all $\textup{\bf s} \in U$ we have \begin{equation} \label{eq:assume-U} \left\| \textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} - L_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \leq \epsilon \| \textbf{s} \|_2. \end{equation} Let $\textbf{s} \in \operatorname{Exact}(\sigma)$. The result follows from: \begin{align*} \sigma &\geq \left\|\textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} && \text{[definition of $\operatorname{Exact}(\sigma)$]} \\ &= \left\| L_{\textbf{p}}(\textbf{s}) - (L_{\textbf{p}}(\textbf{s}) -\textup{exp}_{\textbf{p}}(\textbf{s}) + \textbf{p}) \right\|_{\mathcal{B}} \\ &\ge \| L_{\textbf{p}}(\textbf{s})\|_{\mathcal{B}} - \| L_{\textbf{p}}(\textbf{s}) -\textup{exp}_{\textbf{p}}(\textbf{s}) + \textbf{p}\|_{\mathcal{B}} && \text{[reverse triangle inequality]} \\ &\ge \| L_{\textbf{p}}(\textbf{s})\|_{\mathcal{B}} - \epsilon \| \textbf{s} \|_2 && \text{[using \eqref{eq:assume-U}]} \\ & \ge c \| L_{\textbf{p}}(\textbf{s})\|_2 - \epsilon \| \textbf{s} \|_2 && \text{[norm equivalence, see \eqref{eq:equiv-norms}]} \\ &= (c-\epsilon) \| \textup{\bf s} \|_2. && \text{[$L_\textup{\bf p}$ is an isometry]} \end{align*} Choosing $\epsilon < c$ and setting $c_1 := (c - \epsilon)^{-1}$ yields \eqref{eq:nice-exactU}. \medskip \underline{\textup{part~ii.}} For the right inclusion, assume $\textbf{s} \in \operatorname{Exact}(\sigma)$.
Then, \begin{align*} \sigma &\ge \left\|\textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} && \quad \,\,\,\,\,\,\, \text{[definition of $\operatorname{Exact}(\sigma)$]}\\ &= \left\| L_\textup{\bf p}(\textup{\bf s}) + O(\|\textup{\bf s}\|^2)\right\|_{\mathcal{B}} && \quad \,\,\,\,\,\,\, \text{[from \eqref{eq:expp-taylor}]} \\ &= \left\| L_\textup{\bf p}(\textup{\bf s}) + O(\sigma^2)\right\|_{\mathcal{B}} && \quad \,\,\,\,\,\,\, \text{[shown in part i]} \\ &= \left\| L_\textup{\bf p}(\textup{\bf s}) \right\|_{\mathcal{B}} + O(\sigma^2). && \quad \,\,\,\,\,\,\, \text{[triangle inequality]} \end{align*} Taking $c_3$ to be the implicit constant inside the $O(\sigma^2)$ term, it follows that $\| L_\textup{\bf p}(\textup{\bf s}) \|_{\mathcal{B}} \le \sigma + c_3 \sigma^2$ and therefore $\textup{\bf s} \in (1+c_3 \sigma) \operatorname{Approx}^{(1)}(\sigma)$. For the left inclusion, let $\textup{\bf s} \in (1-c_2 \sigma) \operatorname{Approx}^{(1)}(\sigma)$. It follows by definition that $\|L_\textup{\bf p}(\textup{\bf s})\|_{\mathcal{B}} \le \sigma - c_2 \sigma^2$. We have just shown that \begin{align*} \left\|\textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} &= \left\| L_\textup{\bf p}(\textup{\bf s}) \right\|_{\mathcal{B}} + O(\sigma^2), \end{align*} and the same expansion applies here, since $\| \textbf{s} \|_2 = \| L_{\textbf{p}}(\textbf{s}) \|_2 \lesssim \| L_{\textbf{p}}(\textbf{s}) \|_{\mathcal{B}} \leq \sigma$. Therefore, \begin{align*} \left\|\textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} &\le \sigma - c_2 \sigma^2 + O(\sigma^2). \end{align*} Picking $c_2$ to be the implicit constant inside the $O(\sigma^2)$ term guarantees that $\left\|\textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} \le \sigma$ and hence that $\textbf{s} \in \operatorname{Exact}(\sigma)$. \medskip \underline{\textup{part~iii.}} From Eq.
\eqref{eq:expp-taylor} we have that \begin{align} \label{eq:thing} \left\| \textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} &= \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) + O(\|\textbf{s}\|^3) \right\|_{\mathcal{B}} \\ &= \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s})\right\|_{\mathcal{B}} + O(\|\textbf{s}\|^3). \end{align} For the right inclusion, assume $\textbf{s} \in \operatorname{Exact}(\sigma)$. By part i, $\|\textbf{s}\|_2 = O(\sigma)$. From \eqref{eq:thing}, \begin{equation} \label{eq:c4'} \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \, \leq \, \left\| \textup{exp}_{\textbf{p}}(\textbf{s}) - \textbf{p} \right\|_{\mathcal{B}} + c_4' \sigma^3 \, \leq \, \sigma + c_4' \sigma^3, \end{equation} for some constant $c_4' > 0$ and all sufficiently small $\sigma$. We want to find a constant $c_4 > 0$ such that $ \textbf{s} / (1 + c_4 \sigma^2) \in \operatorname{Approx}^{(2)}(\sigma)$. To this end, compute \begin{align} \label{eq:approx2-bound} &\left\| \frac{1}{1 + c_4 \sigma^2} L_{\textbf{p}}(\textbf{s}) \, + \, \frac{1}{(1 + c_4 \sigma^2)^2 } \frac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \nonumber \\[0.2pt] &\leq \,\,\, \frac{1}{1 + c_4 \sigma^2} \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \,\, + \, \left(\frac{1}{1+c_4 \sigma^2} - \frac{1}{(1+c_4 \sigma^2)^2} \right) \left\| \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \nonumber \\[1.5pt] &\leq \,\,\, \frac{1}{1+c_4\sigma^2} \left( \sigma + c_4' \sigma^3 \right) \,\, + \,\, \left(\frac{1}{1+c_4 \sigma^2} - \frac{1}{(1+c_4 \sigma^2)^2} \right) O(\sigma^2) \nonumber \\[3pt] &= \,\,\, \left(1 - c_4 \sigma^2 + O(\sigma^4) \right) \left(\sigma + c_4' \sigma^3 \right) \,\, + \,\, \left( c_4 \sigma^2 + O(\sigma^4) \right) O(\sigma^2) \nonumber \\[3pt] &= \,\,\, \sigma \,\, + \,\, (c_4' - c_4) \sigma^3 \,\, + \,\, O(\sigma^4).
\end{align} Here we used the triangle inequality, the bound \eqref{eq:c4'}, part i, and Taylor expansions in $\sigma$ for $(1 + c_4 \sigma^2)^{-1}$ and $(1 + c_4 \sigma^2)^{-2}$. Thus, take $c_4 = 2c_4'$. For small enough $\sigma$, the RHS of Eq.~\eqref{eq:approx2-bound} is at most $\sigma$, so $ \textbf{s} / (1 + c_4 \sigma^2) \in \operatorname{Approx}^{(2)}(\sigma)$. Now consider the leftmost inclusion in part iii. Assume $\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)$. We first prove that $\| \textbf{s} \|_2 = O(\sigma)$. Indeed, \begin{align} \label{eq:approx2-Osigma} & \sigma \,\, \geq \,\, \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2}Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} \,\, \gtrsim \,\, \left\| L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_2 \nonumber \\[0.5pt] & = \, \sqrt{\left\| L_{\textbf{p}}(\textbf{s}) \right\|_2^2 \, + \, \tfrac{1}{4} \left\| Q_{\textbf{p}}(\textbf{s}) \right\|_2^2} \,\,\, \geq \, \left\| L_{\textbf{p}}(\textbf{s}) \right\|_2 \, = \, \left\| \textbf{s} \right\|_2. \end{align} The first equality comes from orthogonality between the images of $Q_{\textbf{p}}$ and $L_{\textbf{p}}$ \eqref{eq:perp}. Since $\| \textbf{s} \|_2 = O(\sigma)$ by \eqref{eq:approx2-Osigma}, the cubic remainder in the expansion \eqref{eq:taylor-exp} is bounded by $c_5' \sigma^3$ for some constant $c_5' > 0$.
Similarly to above, let us set $c_5 = 2c_5'$ and compute (for sufficiently small $\sigma$): \begin{align*} & \, \left\| \textup{exp}_{\textbf{p}}\!\left((1-c_5 \sigma^2)\textbf{s}\right) - \textbf{p} \right\|_{\mathcal{B}} \leq \left\| (1 - c_5 \sigma^2 ) L_{\textbf{p}}(\textbf{s}) + (1 - c_5 \sigma^2)^2 \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} + c_5' \sigma^3 \nonumber \\[2pt] & \leq (1 - c_5 \sigma^2) \left\|L_{\textbf{p}}(\textbf{s}) + \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} + \left( (1 - c_5 \sigma^2 ) - (1 - c_5 \sigma^2)^2 \right) \left\| \tfrac{1}{2} Q_{\textbf{p}}(\textbf{s}) \right\|_{\mathcal{B}} + c_5' \sigma^3 \nonumber \\[2pt] & \leq (1 - c_5 \sigma^2)\sigma + O(\sigma^4) + c_5' \sigma^3 = \sigma - c_5' \sigma^3 + O(\sigma^4) \leq \sigma. \end{align*} We used the Taylor expansion \eqref{eq:taylor-exp}, then the triangle inequality, then the definition of $\operatorname{Approx}^{(2)}(\sigma)$ together with the bound $\| Q_{\textbf{p}}(\textbf{s}) \|_{\mathcal{B}} = O(\sigma^2)$ from \eqref{eq:approx2-Osigma}. Hence $(1 - c_5 \sigma^2) \textbf{s} \in \operatorname{Exact}(\sigma)$. \hfill \qed \end{proof} \subsection[Step 5: drop the cubic error term and obtain the second-order term]{\textbf{Step~5: drop $O(\|\textbf{s}\|_2^3)$ and obtain the second-order term}} \label{subsec:drop-stuff} In this step we prove that each of the terms in the integral \eqref{eq:taylor-int} can be approximated up to an additive error of $O(\sigma^{d+3})$ by switching from the exact domain $\operatorname{Exact}(\sigma)$ to the approximate domains $\operatorname{Approx}^{(1)}(\sigma)$ and $\operatorname{Approx}^{(2)}(\sigma)$ defined in Definition \ref{def:domain-approx}. \begin{Proposition} \label{prop:int-bounds} The following bounds hold: \begin{enumerate}[i.]
\setlength\itemsep{0.5em} \item $\int_{\textup{\bf s} \in \operatorname{Exact}(\sigma)} O(\| \textup{\textbf{s}} \|_2^3) d\textup{\textbf{s}} \, = \, O(\sigma^{d+3})$, \item $ \int_{\textup{\bf s} \in \operatorname{Exact}(\sigma)} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d\textup{\textbf{s}} \,\, = \int_{\textup{\textbf{s}} \in \operatorname{Approx}^{(1)}(\sigma)} \textup{\textbf{s}} \textup{\textbf{s}}^{\top} d\textup{\textbf{s}} \, + \, O(\sigma^{d+3})$, \item $\int_{\textup{\textbf{s}} \in \operatorname{Exact}(\sigma)} \textup{\textbf{s}} d\textup{\textbf{s}} \, = \int_{\textup{\textbf{s}} \in \operatorname{Approx}^{(2)}(\sigma)} \textup{\textbf{s}} d\textup{\textbf{s}} \, + \, O(\sigma^{d+3})$. \end{enumerate} \end{Proposition} \begin{proof} Let $\ominus$ denote the symmetric difference of sets. \noindent \underline{\textup{part~i.}} Let $\sigma \leq \sigma_1$. By Proposition~\ref{prop:domain-bounds}, part~i, we have \begin{align*} & \int_{\textbf{s} \in \operatorname{Exact}(\sigma)} O(\| \textbf{s} \|_2^3) \, d\textbf{s} \,\, \lesssim \,\, \int_{\|\textbf{s}\|_2 \leq c_1 \sigma} \| \textbf{s} \|_2^3 \, d\textbf{s} \\[0.4em] & \quad \le (c_1 \sigma)^3 \text{vol}\{\textup{\bf s} \in \mathbb{R}^d:\|\textup{\bf s}\|_2 \le c_1 \sigma\} = O(\sigma^{d+3}). \end{align*} \noindent \underline{\textup{part~ii.}} Let $\sigma \leq \sigma_2$. By Proposition~\ref{prop:domain-bounds}, part~ii, we see \begin{align*} \operatorname{Exact}(\sigma) \ominus \operatorname{Approx}^{(1)}(\sigma) \, \subseteq \, (1+c_3 \sigma) \operatorname{Approx}^{(1)}(\sigma) \setminus (1 - c_2 \sigma) \operatorname{Approx}^{(1)}(\sigma).
\end{align*} Then we have \vspace{-0.5em} \begin{align*} & \left\| \int_{\textbf{s} \in \operatorname{Exact}(\sigma)} \textbf{s} \textbf{s}^{\top} d\textbf{s} - \int_{\textbf{s} \in \operatorname{Approx}^{(1)}(\sigma)} \textbf{s} \textbf{s}^{\top} d\textbf{s} \right\|_F \nonumber \\[-0.5pt] &= \left\| \int_{\textbf{s} \in \operatorname{Exact}(\sigma) \setminus \operatorname{Approx}^{(1)}(\sigma)} \textbf{s} \textbf{s}^{\top} d\textbf{s}\right\|_F \nonumber \\[-0.5pt] &\le \int_{\textbf{s} \in \operatorname{Exact}(\sigma) \setminus \operatorname{Approx}^{(1)}(\sigma)} \left\| \textup{\bf s} \textup{\bf s}^\top \right\|_F d\textup{\bf s} \nonumber \\[-0.5pt] & \leq \, \int_{\textbf{s} \in \operatorname{Exact}(\sigma) \ominus \operatorname{Approx}^{(1)}(\sigma)} \left\| \textbf{s} \textbf{s}^{\top} \right\|_F d\textbf{s} \nonumber \\[-0.5pt] & \leq \, \int_{\textbf{s} \in (1 + c_3 \sigma) \operatorname{Approx}^{(1)}(\sigma) \setminus (1 - c_2 \sigma) \operatorname{Approx}^{(1)}(\sigma)} \| \textbf{s} \|_2^2 d\textbf{s} \nonumber \\[-0.5pt] & = \, \int_{\textbf{s} \in (\sigma + c_3 \sigma^2) \operatorname{Approx}^{(1)}(1) \setminus (\sigma - c_2 \sigma^2) \operatorname{Approx}^{(1)}(1)} \| \textbf{s} \|_2^2 d\textbf{s} \nonumber \\[-0.5pt] & = \, \left( (\sigma + c_3 \sigma^2)^{d+2} - (\sigma - c_2 \sigma^2)^{d+2}\right) \int_{\| L_{\textbf{p}}(\textbf{s}) \|_{\mathcal{B}} \leq 1} \| \textbf{s} \|_2^2 d\textbf{s}. \nonumber \end{align*} \vspace{-0.2em} This last quantity is $O(\sigma^{d+3})$, because ${\mathcal{B}}$ is bounded and $L_\textup{\bf p}$ is an isometry. \medskip \medskip \noindent \underline{\textup{part~iii.}} Let $\sigma \leq \sigma_3$. By Proposition~\ref{prop:domain-bounds}, part~iii, \begin{align*} &\operatorname{Exact}(\sigma) \, \ominus \, \operatorname{Approx}^{(2)}(\sigma) \\ &\subseteq \, (1 + c_4 \sigma^2) \operatorname{Approx}^{(2)}(\sigma) \setminus (1-c_5\sigma^2) \operatorname{Approx}^{(2)}(\sigma). 
\end{align*} Then, \begin{align*} &\left\| \int_{\textbf{s} \in \operatorname{Exact}(\sigma)} \textbf{s} d\textbf{s} \, - \, \int_{\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s} \right\|_2 = \left\| \int_{\textbf{s} \in \operatorname{Exact}(\sigma) \setminus \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s}\right\|_2 \\[-0.5pt] &\le \int_{\textbf{s} \in \operatorname{Exact}(\sigma) \setminus \operatorname{Approx}^{(2)}(\sigma)} \left\|\textup{\bf s}\right\|_2 d\textup{\bf s} \,\, \leq \,\, \int_{\textup{\bf s} \in \operatorname{Exact}(\sigma) \ominus \operatorname{Approx}^{(2)}(\sigma)} \| \textup{\bf s} \|_2 d\textup{\bf s} \nonumber \\[-0.5pt ] & \leq \, \int_{\textbf{s} \in (1+c_4 \sigma^2)\operatorname{Approx}^{(2)}(\sigma) \setminus (1-c_5 \sigma^2) \operatorname{Approx}^{(2)}(\sigma)} \| \textbf{s} \|_2 d\textbf{s}. \end{align*} The upper bound equals \begin{align*} & \left( (1 + c_4 \sigma^2)^{d+1} - (1 - c_5 \sigma^2)^{d+1} \right) \int_{\textup{\bf s} \in \operatorname{Approx}^{(2)}(\sigma)} \| \textup{\bf s} \|_2 d\textup{\bf s} \nonumber \\[-0.5pt] & = O(\sigma^2) \int_{\textup{\bf s}' \in \operatorname{Approx}^{(2)}(1)} \| \sigma \textup{\bf s}' \|_2 \sigma^d d \textup{\bf s}' \nonumber = O(\sigma^{d+3}) \int_{\textup{\bf s}' \in \operatorname{Approx}^{(2)}(1)} \| \textup{\bf s}' \|_2 d \textup{\bf s}' \end{align*} The last quantity is $O(\sigma^{d+3})$, because $\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)$ implies $\|\textbf{s}\|_2 = O(\sigma)$, as shown in the argument for Proposition~\ref{prop:domain-bounds}, part~iii. 
\hfill \qed \end{proof} \noindent Now, plug Proposition~\ref{prop:int-bounds} into the integral \eqref{eq:taylor-int}: \begin{align*} & \int_{\textbf{s} \in \operatorname{Exact}(\sigma)} \text{grad} \widetilde{f}(0)^{\top} \textbf{s} \, + \, \tfrac{1}{2} \textbf{s}^{\top} \text{hess} \widetilde{f}(0) \textbf{s} \, + \, O(\| \textbf{s} \|_2^3) \,\, d\textbf{s} \nonumber \\ &=\left\langle \text{grad} \widetilde{f}(0), \int_{\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s}\right\rangle \nonumber \\ &\ \ \ \ + \left\langle \text{hess} \widetilde{f}(0), \tfrac{1}{2} \int_{\textbf{s} \in \operatorname{Approx}^{(1)}(\sigma)} \textbf{s} \textbf{s}^{\top} d\textbf{s}\right\rangle_F + O(\sigma^{d+3}), \end{align*} where linearity of $L_{\textbf{p}}$ gives \begin{align*} \int_{\textbf{s} \in \operatorname{Approx}^{(1)}(\sigma)} \textbf{s} \textbf{s}^{\top} d\textbf{s} = \sigma^{d+2} \int_{\textbf{s} : \| L_{\textbf{p}}(\textbf{s}) \|_{\mathcal{B}} \leq 1} \textbf{s} \textbf{s}^{\top} d\textbf{s}. \end{align*} Thus, \eqref{eq:taylor-int} divided by $\sigma^{d+2}$ tends to $\Delta_{\mathcal{M}, {\mathcal{B}}}f(\textbf{p})$ as $\sigma \rightarrow 0$, as desired, \textbf{provided we can show} \begin{equation} \label{eq:I1-integral} \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^{d+2}} \int_{\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s} \, = \int_{ \| \widehat{\textup{\textbf{s}}} \|_2 =1 } \widehat{\textup{\textbf{s}}} \, \|L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}})\|_{\mathcal{B}}^{-d} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\textbf{s}}}) \, d\widehat{\textup{\textbf{s}}}. \end{equation} \subsection[\textbf{Step 6: use spherical coordinates}]{\textbf{Step~6: use spherical coordinates}} \label{subsec:spherical-coords} It remains to estimate \begin{equation} \label{eq:hard-int} \int_{\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s}. 
\end{equation} First, we provide intuition for why this integral should scale like $\sigma^{d+2}$. Let $\mathbb{S}^{d-1}\subseteq\mathbb{R}^{d} \cong T_{\textbf{p}}\mathcal{M}$ denote the $\ell_2$-unit sphere, with density $d\widehat{\textbf{s}}$, where $\widehat{\textbf{s}} \in \mathbb{S}^{d-1}$. Let $ r \in \mathbb{R}_{\geq 0}$ be a radial variable with density $dr$. Consider the integral \eqref{eq:hard-int} in these spherical coordinates. Substituting $\textbf{s} = r\widehat{\textbf{s}}$ and $d \textbf{s} = r^{d-1} dr d\widehat{\textbf{s}}$, \begin{equation} \label{eq:hard-int-sph} \int_{\textbf{s} : \| L_{\textbf{p}}(\textbf{s}) + \frac{1}{2} Q_{\textbf{p}}(\textbf{s}) \|_{\mathcal{B}} \leq \sigma} \textbf{s} d\textbf{s} = \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \int_{r \in \operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma)} r^{d} dr d\widehat{\textbf{s}}, \end{equation} where we define \begin{equation} \label{eq:rad-dom-1} \operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma) := \left\{ r \geq 0: \left\|r L_{\textbf{p}}(\widehat{\textbf{s}}) + \tfrac{r^2}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}) \right\|_{\mathcal{B}} \leq \sigma \right\}. \end{equation} This is the second-order approximation set $\operatorname{Approx}^{(2)}(\sigma)$ intersected with the ray in the direction of $\widehat{\textbf{s}}$. Note that by definition, \begin{align} \label{eq:radialdomain_vs_approxtwo} \operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma) = \{ r \ge 0 : r \widehat{\textup{\bf s}} \in \operatorname{Approx}^{(2)}(\sigma)\}.
\end{align} Compare this domain of integration against the domain for $-\widehat{\textbf{s}}$: \begin{align} \label{eq:rad-dom-2} \operatorname{RadialDomain}(-\widehat{\textbf{s}}, \sigma) & = \left\{ r \geq 0: \left\|r L_{\textbf{p}}(-\widehat{\textbf{s}}) + \tfrac{r^2}{2}Q_{\textbf{p}}(-\widehat{\textbf{s}}) \right\|_{\mathcal{B}} \leq \sigma \right\} \nonumber \\[3pt] & \,= \left\{ r \geq 0: \left\|r L_{\textbf{p}}(\widehat{\textbf{s}}) - \tfrac{r^2}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}) \right\|_{\mathcal{B}} \leq \sigma \right\}. \end{align} Speaking roughly, the condition determining membership in \eqref{eq:rad-dom-1} differs from that in \eqref{eq:rad-dom-2} at the $O(r^2)$ term; the conditions would be the same without the $Q_{\textbf{p}}$ term. On the other hand, the integrand in \eqref{eq:hard-int-sph} is \textup{odd} (that is, it flips sign upon inversion in the origin). Therefore, we should expect ``near cancellation'' between the inner integrals: \begin{equation} \label{eq:sum-hard-ints} \widehat{\textbf{s}} \int_{r \in \operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma)} r^{d} dr \,\,\, - \,\,\, \widehat{\textbf{s}} \int_{r \in \operatorname{RadialDomain}(-\widehat{\textbf{s}}, \sigma)} r^{d} dr. \end{equation} Supposing $r = O(\sigma)$ for $r$ in each radial domain (justified in Lemma \ref{lem:radial-dom-bound}), each of the two terms in \eqref{eq:sum-hard-ints} is $O(\sigma^{d+1})$. Thus, after ``near, but not complete, cancellation'' the difference \eqref{eq:sum-hard-ints} is expected to be $O(\sigma^{d+2})$. Then, integrating over the unit sphere gives $O(\sigma^{d+2})$ in \eqref{eq:hard-int-sph}. This informal discussion explains why we expect the integral \eqref{eq:hard-int} to be $O(\sigma^{d+2})$, the mechanism being cancellation due to an approximate equality between $\operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma)$ and $\operatorname{RadialDomain}(-\widehat{\textbf{s}}, \sigma)$. We shall now make this claim rigorous.
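This heuristic can be checked numerically in a toy case. The sketch below is an illustration only, not part of the proof; it assumes $d = 1$, the non-smooth norm $\|(x, y)\|_{\mathcal{B}} = \max(|x|, |y|, |x + y|)$ on $\mathbb{R}^2$, $L_{\textbf{p}}(s) = (s, 0)$ and $Q_{\textbf{p}}(s) = (0, q s^2)$, all of which are hypothetical choices consistent with the setup (orthogonal images, $L_{\textbf{p}}$ an isometry).

```python
import numpy as np

# Toy check of the near-cancellation heuristic in d = 1. All concrete
# choices here are illustrative assumptions, not objects from the paper:
#   norm   ||(x, y)||_B = max(|x|, |y|, |x + y|)  (a valid, non-smooth norm)
#   L_p(s) = (s, 0),  Q_p(s) = (0, q * s**2)      (orthogonal images)
q = 0.8

def r_star(sign, sigma, grid=1_000_000):
    # Largest r >= 0 with ||r L_p(sign) + (r^2/2) Q_p(sign)||_B <= sigma,
    # found by scanning a fine grid (the radial domain is an interval here).
    rs = np.linspace(0.0, 2.0 * sigma, grid)
    vals = np.maximum.reduce([np.abs(sign * rs),
                              np.abs(q * rs**2 / 2),
                              np.abs(sign * rs + q * rs**2 / 2)])
    return rs[vals <= sigma].max()

for sigma in [1e-2, 1e-3]:
    # Inner integrals  int_0^{r*} r^d dr  with d = 1.
    plus = r_star(+1, sigma)**2 / 2
    minus = r_star(-1, sigma)**2 / 2
    # Each term alone is Theta(sigma^2); the difference is Theta(sigma^3).
    print(sigma, plus / sigma**2, (plus - minus) / sigma**3)
```

Each inner integral scales like $\sigma^2$, while the difference scales like $\sigma^3 = \sigma^{d+2}$; in this toy example the printed ratio tends to $-q/2$ as $\sigma \to 0$, the kind of first-order coefficient that Step~8 below identifies with the tilt.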
Beyond proving that the integral \eqref{eq:hard-int-sph} is $O(\sigma^{d+2})$, we will prove that dividing \eqref{eq:hard-int-sph} by $\sigma^{d+2}$ produces a well-defined limit as $\sigma \rightarrow 0$, namely the RHS of \eqref{eq:I1-integral}. \begin{Remark} Steps 7-8 below are complicated by the fact that we do not assume smoothness of the norm $\| \cdot \|_{\mathcal{B}}$. As a consequence, \textit{a priori} we cannot Taylor-expand the boundary \nolinebreak points \nolinebreak of the radial domain in the variable $\sigma$. \end{Remark} \begin{Lemma} \label{lem:radial-dom-bound} There exist constants $c_6, \sigma_4 > 0$ such that for all $\sigma \leq \sigma_4$ and all $\widehat{\textup{\bf s}} \in \mathbb{S}^{d-1}$, we have \begin{align*} \operatorname{RadialDomain}(\widehat{\textup{\bf s}}, \sigma) \, \subseteq \, c_6[0, \sigma]. \end{align*} \end{Lemma} \begin{proof} Set $c_6 := 2c_1$ and $\sigma_4 := \min\left(\sigma_1, \sigma_3, \frac{1}{\sqrt{2c_5}} \right)$. For $\sigma \leq \sigma_{4}$, we have \begin{align*} \operatorname{Approx}^{(2)}(\sigma) &\subseteq \frac{1}{1-c_5\sigma^2} \operatorname{Exact}(\sigma) && \text{[by \eqref{eq:lem-inc-approx2-nice}]} \\ &\subseteq \frac{c_1}{1-c_5\sigma^2} \{\textbf{s} \in \mathbb{R}^d: \| \textbf{s} \|_2 \leq \sigma \}. && \text{[by \eqref{eq:nice-exactU}]} \end{align*} Note that $\sigma \leq \frac{1}{\sqrt{2c_5}}$ implies $\frac{c_1}{1-c_5\sigma^2} \leq 2c_1 = c_6$. Therefore, \begin{align*} \operatorname{Approx}^{(2)}(\sigma) \subseteq c_6 \{\textbf{s} \in \mathbb{R}^d : \|\textbf{s} \|_2 \leq \sigma \}. \end{align*} By \eqref{eq:radialdomain_vs_approxtwo} it follows that $\operatorname{RadialDomain}(\widehat{\textbf{s}},\sigma) \subseteq c_6[0,\sigma]$ for all $\widehat{\textbf{s}} \in \mathbb{S}^{d-1}$. 
\hfill \qed \end{proof} \subsection[Step 7: study the boundary of the radial domain]{\textbf{Step~7: study the boundary of $\operatorname{RadialDomain}(\widehat{\textbf{s}},\sigma)$}} \label{subsec:boundary} In this step, we show that for small enough $\sigma$, the set $\operatorname{RadialDomain}(\widehat{\textbf{s}},\sigma)$ is a single closed interval in $\mathbb{R}_{\geq 0}$. Then, we prove its nonzero boundary point is a continuous function of $(\widehat{\textbf{s}}, \sigma)$ and we bound this to second-order in $\sigma$. Let $ G : \mathbb{S}^{d-1} \times \mathbb{R}_{\geq 0} \longrightarrow \mathbb{R}_{\geq 0}; \,\, (\widehat{\textbf{s}}, r) \longmapsto \left\| rL_{\textbf{p}}(\widehat{\textbf{s}}) + \tfrac{r^2}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}) \right\|_{\mathcal{B}}. $ \begin{Lemma} \label{lem:strict-increasing} There exists a constant $c_7 > 0$ such that for all $\widehat{\textup{\textbf{s}}}\in \mathbb{S}^{d-1}$, the function $r \mapsto G(\widehat{\textup{\textbf{s}}}, r)$ is strictly increasing in $r \in [0,c_7]$. \end{Lemma} \begin{proof} We will show that we can take \begin{align*} c_7 := \begin{cases} 1 \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \textup{if } Q_{\textup{\textbf{p}}} \equiv 0, \\ \min_{\widehat{\textup{\textbf{s}}} \in \mathbb{S}^{d-1}}\! \| L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}) \|_{\mathcal{B}} \big{/} \! \max_{\widehat{\textup{\textbf{s}}} \in \mathbb{S}^{d-1}} \! \|Q_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}) \|_{\mathcal{B}} \quad \textup{otherwise}. \end{cases} \end{align*} Obviously $c_7 >0$, because $\| L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}} \gtrsim \| L_{\textbf{p}}(\widehat{\textbf{s}}) \|_2 = \|\widehat{\textbf{s}} \|_2 = 1$ by equivalence of norms and \eqref{eq:isometric}, and $Q_{\textbf{p}}(\widehat{\textbf{s}}) = O(1)$ by continuity and compactness. Fix $\widehat{\textbf{s}} \in \mathbb{S}^{d-1}$\!. 
Set $g: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0} $ by $g(r) = G(\widehat{\textbf{s}},r)$, $\textbf{a} := L_{\textbf{p}}(\widehat{\textbf{s}})$ and $\textbf{b} := \frac{1}{2} Q_{\textbf{p}}(\widehat{\textbf{s}})$. Since $g$ is continuous, we may check $g$ is strictly increasing on the half-open interval $[0,c_7)$. Let $\lambda > 0$ satisfy $(1+\lambda)r < c_7$. By the reverse triangle inequality and the definition of $c_7$, we have \begin{align*} \label{eq:g-increasing-2} g((1+\lambda)r) & = \left\|(1+\lambda)(r\textbf{a} + r^2 \textbf{b}) + (\lambda + \lambda^2)r^2\textbf{b} \right\|_{\mathcal{B}} \\[-0.2pt] & \geq (1+\lambda) \| r \textbf{a} + r^2 \textbf{b} \|_{\mathcal{B}} - (\lambda + \lambda^2)r^2 \|\textbf{b}\|_{\mathcal{B}} \\[-0.2pt] & = \|r \textbf{a} + r^2 \textbf{b} \|_{\mathcal{B}} + \lambda r ( \| \textbf{a} + r \textbf{b} \|_{\mathcal{B}} - (1+\lambda)r \| \textbf{b} \|_{\mathcal{B}} ) \\[-0.2pt] & \geq \|r \textbf{a} + r^2 \textbf{b} \|_{\mathcal{B}} + \lambda r ( \| \textbf{a} \|_{\mathcal{B}} - r \|\textbf{b} \|_{\mathcal{B}} - (1+\lambda)r \|\textbf{b} \|_{\mathcal{B}}) \\[-0.2pt] & > \|r \textbf{a} + r^2 \textbf{b} \|_{\mathcal{B}} + \lambda r ( \| \textbf{a}\|_{\mathcal{B}} - \tfrac{1}{2} \| \textbf{a} \|_{\mathcal{B}} - \tfrac{1}{2} \| \textbf{a} \|_{\mathcal{B}}). \end{align*} The last quantity equals $g(r)$, and the lemma follows. \hfill \qed \end{proof} The following quantity is well-defined as a consequence of Lemma~\ref{lem:strict-increasing}. \begin{Cordef}\label{lem:defrstar} There exists a constant $\sigma_5 > 0$ such that for all $\sigma \leq \sigma_5$ and all $\widehat{\textup{\textbf{s}}} \in \mathbb{S}^{d-1}$, $\operatorname{RadialDomain}(\widehat{\textup{\textbf{s}}}, \sigma)$ is a closed interval. Thus, there exists a function \begin{equation} \label{eq:def-r*-rad} r^* : \mathbb{S}^{d-1} \times [0,\sigma_5] \rightarrow \mathbb{R}_{\geq 0} \quad \! 
\textup{such that} \,\, \operatorname{RadialDomain}(\widehat{\textup{\textbf{s}}},\sigma) = [0,r^*(\widehat{\textup{\textbf{s}}}, \sigma)]. \end{equation} \end{Cordef} \begin{Lemma} \label{lem:continuous} There exists a constant $\sigma_6 > 0$ such that the restriction of $r^*$ to $\mathbb{S}^{d-1} \times [0, \sigma_6]$ is a continuous function. \end{Lemma} \begin{proof} Take $\sigma_6 = \sigma_5 /2 $. We shall verify continuity of $r^*$ by hand. For notational convenience, within this proof, we denote the second argument of $r^*$ by $\tau$ (subscripted and/or primed) rather than by $\sigma$. Fix $(\widehat{\textbf{s}}_1, \tau_1) \in \mathbb{S}^{d-1} \times [0,\sigma_6]$ and let $\epsilon >0$. Lemma~\ref{lem:strict-increasing} says $r \mapsto G(\widehat{\textbf{s}}_1, r)$ is continuous and strictly increasing around $0$. By elementary facts, this has a well-defined continuous inverse function around $G(\widehat{\textbf{s}}_1, 0) = 0$. The inverse function is $\tau \mapsto r^*(\widehat{\textbf{s}}_1,\tau)$ defined for $\tau \in [0,\sigma_5]$ (Lemma~\ref{lem:defrstar}). So, we can take $\delta' \in (0, \sigma_6)$ such that for all $\tau_2' \in [0, \sigma_5]$, \begin{equation} \label{eq:delta-prime} | \tau_2' - \tau_1| < \delta' \implies | r^*(\widehat{\textbf{s}}_1,\tau_2') - r^*(\widehat{\textbf{s}}_1,\tau_1) | < \epsilon.
\end{equation} Since $L_{\textbf{p}}, Q_{\textbf{p}}$ are continuous, there exist $\delta'', \delta''' > 0$ such that for all $\widehat{\textbf{s}}_2 \in \mathbb{S}^{d-1}$, \begin{align} & \| \widehat{\textbf{s}}_2 - \widehat{\textbf{s}}_1 \|_2 < \delta'' \implies \left\| L_{\textbf{p}}(\widehat{\textbf{s}}_2) - L_{\textbf{p}}(\widehat{\textbf{s}}_1) \right\|_{\mathcal{B}} < \frac{1}{c_7} \frac{\delta'}{3}, \label{eq:def-delta''} \\[2pt] & \| \widehat{\textbf{s}}_2 - \widehat{\textbf{s}}_1 \|_2 < \delta''' \implies \left\| \frac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}_2) - \frac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}_1) \right\|_{\mathcal{B}} < \frac{1}{c_7^2} \frac{\delta'}{3}. \label{eq:def-delta'''} \end{align} Define $\delta := \min\left(\delta' / 3, \delta'', \delta'''\right) > 0$. Let $(\widehat{\textbf{s}}_2, \tau_2) \in \mathbb{S}^{d-1} \times [0, \sigma_6]$ satisfy $\left\|\left(\widehat{\textbf{s}}_2, \tau_2 \right) - \left(\widehat{\textbf{s}}_1, \tau_1\right)\right\|_2 < \delta$. We shall verify $|r^{*}(\widehat{\textbf{s}}_2, \tau_2) - r^*(\widehat{\textbf{s}}_1, \tau_1)| < \epsilon$. Put $r_1 := r^*(\widehat{\textbf{s}}_1, \tau_1), r_2 := r^*(\widehat{\textbf{s}}_2, \tau_2)$ and $\tau_2' := G(\widehat{\textbf{s}}_1,r_2)$. By \eqref{eq:delta-prime}, it suffices to check $|\tau_2' - \tau_1 | < \delta'$, as then $\tau_2' \in [0, \sigma_5]$ (because $\tau_2' \leq \tau_1 + \delta' \leq 2 \sigma_6 = \sigma_5$) and also $r^{*}( \widehat{\textbf{s}}_1,\tau_2') = r^{*}( \widehat{\textbf{s}}_1,G(\widehat{\textbf{s}}_1, r_2)) = r_2$. So, \eqref{eq:delta-prime} gives $|r_2 - r_1| < \epsilon$. 
To see that $|\tau_2' - \tau_1| < \delta'$ indeed holds, we write \begin{align*} & \hspace{4em} \tau_2' = \left\|r_2 L_{\textbf{p}}(\widehat{\textbf{s}}_1) + r_2^2 \frac{1}{2} Q_{\textbf{p}}(\widehat{\textbf{s}}_1) \right\|_{\mathcal{B}} =\\ & \left\|(r_2 L_{\textbf{p}}(\widehat{\textbf{s}}_2) + \frac{r_2^2}{2} Q_{\textbf{p}}(\widehat{\textbf{s}}_2) )+ r_2( L_{\textbf{p}}(\widehat{\textbf{s}}_1) - L_{\textbf{p}}(\widehat{\textbf{s}}_2)) + r_2^2 ( \tfrac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}_2) - \tfrac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}}_1) ) \right\|_{\mathcal{B}}\!. \end{align*} Using the triangle inequality, \eqref{eq:def-delta''}, \eqref{eq:def-delta'''} and $r_2 \leq c_7$ (from the proof of Lemma~\ref{lem:defrstar}), \begin{align*} |\tau_2' - \tau_1| \,\, \leq \,\, |\tau_2 -\tau_1| + |\tau_2' - \tau_2| \,\, < \,\, \frac{\delta'}{3} + c_7 \frac{1}{c_7} \frac{\delta'}{3} + c_7^2 \frac{1}{c_7^2}\frac{\delta'}{3} \,\, = \,\, \delta'. \end{align*} This proves $r^{*}$ is continuous on $\mathbb{S}^{d-1} \times [0, \sigma_6]$, when $\sigma_6 = \sigma_5 /2$. \hfill \qed \end{proof} \begin{Lemma} \label{lem:bound-r*} There exist constants $c_8 \geq 0$ and $\sigma_7 > 0$ such that for all $\sigma \leq \sigma_7$ and all $\widehat{\textup{\textbf{s}}} \in \mathbb{S}^{d-1}$, \begin{equation} \label{eq:bound-r*} \frac{1}{\|L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}) \|_{\mathcal{B}}} \sigma - c_8 \sigma^2 \, \leq \, r^*(\widehat{\textup{\textbf{s}}}, \sigma) \, \leq \, \frac{1}{\|L_{\textup{\textbf{p}}}(\widehat{\textup{\textbf{s}}}) \|_{\mathcal{B}}} \sigma + c_8 \sigma^2. \end{equation} \end{Lemma} \begin{proof} We shall prove that we may take \begin{equation} \label{eq:c8-def} c_8 = \frac{\max_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \|Q_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}}{\min_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^3}.
\end{equation} Fix $\widehat{\textbf{s}} \in \mathbb{S}^{d-1}$. Write $\textbf{a} = L_{\textbf{p}}(\widehat{\textbf{s}})$, $\textbf{b} = \frac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}})$, $a = \|\textbf{a}\|_{\mathcal{B}}$ and $b = \|\textbf{b}\|_{\mathcal{B}}$. If $\textbf{b} = 0$, then $r^*(\widehat{\textbf{s}}, \sigma) = (1/a) \sigma$ for all $\sigma \geq 0$, so \eqref{eq:bound-r*} is obviously satisfied. Assume $\textbf{b} \neq 0$. The triangle inequality gives \begin{equation} \label{eq:two-quads} g_{-}(r) \, \leq \, g(r) \, \leq \, g_{+}(r) \quad \textup{for all } r \in \mathbb{R}_{\geq 0}, \end{equation} where $g(r) := G(\widehat{\textbf{s}},r) = \| r \textbf{a} + r^2 \textbf{b} \|_{\mathcal{B}}$, $g_{-}(r) := ar - br^2$ and $g_{+}(r) := ar + br^2$. Note $g_{+}$ is strictly increasing over $r \in [0, \infty)$, while $g_{-}$ is strictly increasing over $r \in [0, \frac{a}{2b}]$ and $g_{-}(\frac{a}{2b}) = \frac{a^2}{4b}$. Let \begin{align*} \sigma_7' := \min\left(\sigma_5, \frac{\min_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^2}{4 \max_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \| Q_{\textbf{p}}(\widehat{\textbf{s}}) \|_{\mathcal{B}} } \right) \, > \, 0. \end{align*} It follows from \eqref{eq:two-quads} and the intermediate value theorem that for all $\sigma \in [0,\sigma_7']$, \begin{align*} r_{+,+}^*(\sigma) \, \leq \, r^*(\sigma) \, \leq \, r_{-,-}^*(\sigma), \end{align*} where $r^*_{+,+}(\sigma)$ denotes the greater of the two roots in $r$ to the quadratic equation $g_{+}(r) = \sigma$ and where $r^*_{-,-}(\sigma)$ denotes the lesser of the two roots in $r$ to $g_{-}(r) = \sigma$.
Explicitly, by the quadratic formula and the Taylor series for the square root function, we have \begin{align} \label{eqref:quadratic-bounds} & r^*_{+,+}(\sigma) \,\, := \,\, \frac{-a + \sqrt{a^2 + 4b\sigma}}{2b} \,\, = \,\, \frac{1}{a}\sigma - \frac{b}{a^3}\sigma^2 + O(\sigma^3), \nonumber \\[4pt] & r^*_{-,-}(\sigma) \,\, := \,\, \frac{a - \sqrt{a^2 - 4b\sigma}}{2b} \,\, =\,\, \frac{1}{a}\sigma + \frac{b}{a^3}\sigma^2 + O(\sigma^3). \end{align} On the other hand, from the compactness of $\mathbb{S}^{d-1}$, one can check the implicit constants suppressed by the big $O$ notation in \eqref{eqref:quadratic-bounds} may all be taken independently of $\widehat{\textbf{s}}$. At the same time, $\frac{2b}{a^3} \leq c_8$ for each $\widehat{\textbf{s}} \in \mathbb{S}^{d-1}$, by the definition \eqref{eq:c8-def}. Taking $\sigma_7 > 0$ to be sufficiently smaller than $\sigma_7'$ yields the lemma. \hfill \qed \end{proof} \begin{Definition} Set $\sigma_8 := \min(\sigma_6,\sigma_7) > 0$. Define $\eta^* : \mathbb{S}^{d-1} \times (0, \sigma_8] \rightarrow \mathbb{R}$ by \begin{equation} \label{eq:eta-def} r^*(\widehat{\textbf{s}}, \sigma) =: \frac{1}{\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}}\sigma + \tfrac{1}{2}\eta^*(\widehat{\textbf{s}},\sigma) \sigma^2. \end{equation} \end{Definition} By Lemma~\ref{lem:continuous}, $\tfrac{1}{2}\eta^*$ is continuous. By Lemma~\ref{lem:bound-r*}, it is bounded uniformly in absolute value by $c_8$. \subsection[Step 8: obtain tilt and apply dominated convergence]{\textbf{Step~8: obtain ${\operatorname{tilt}_{\M, \B, \p}}$ and apply dominated convergence}} \label{subsec:obtain-tilt} It remains to establish Eq.~\eqref{eq:I1-integral}: that is, to obtain the first-order term in the limiting differential operator. We do this using spherical coordinates (Section~\ref{subsec:spherical-coords}) and the results about the radial integration domain developed in Section~\ref{subsec:boundary}.
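Before proceeding, the expansions \eqref{eqref:quadratic-bounds} behind Lemma~\ref{lem:bound-r*} admit a quick numerical sanity check. In the sketch below, $a$ and $b$ are hypothetical stand-ins for $\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}$ and $\|\tfrac{1}{2}Q_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}$; this is an illustration only, not part of the argument.

```python
import numpy as np

# Hypothetical stand-ins for a = ||L_p(s^)||_B and b = ||(1/2) Q_p(s^)||_B.
a, b = 1.3, 0.7

def r_pp(sigma):
    # Greater root of g_+(r) = a r + b r^2 = sigma.
    return (-a + np.sqrt(a**2 + 4 * b * sigma)) / (2 * b)

def r_mm(sigma):
    # Lesser root of g_-(r) = a r - b r^2 = sigma.
    return (a - np.sqrt(a**2 - 4 * b * sigma)) / (2 * b)

for sigma in [1e-1, 1e-2, 1e-3]:
    # Residuals against the second-order expansions; they shrink like sigma^3.
    err_pp = abs(r_pp(sigma) - (sigma / a - (b / a**3) * sigma**2))
    err_mm = abs(r_mm(sigma) - (sigma / a + (b / a**3) * sigma**2))
    print(sigma, err_pp / sigma**3, err_mm / sigma**3)
```

The printed ratios stay bounded as $\sigma$ decreases, consistent with the $O(\sigma^3)$ error terms in \eqref{eqref:quadratic-bounds}.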
By swapping the order of the limit and the integration (justified by dominated convergence), the tilt function emerges at last. \begin{Proposition} \label{prop:obtain-tilt} For each $\widehat{\textup{\textbf{s}}} \in \mathbb{S}^{d-1}$, \begin{equation} \label{eq:tilt-lim-eq} \lim_{\sigma \rightarrow 0} \tfrac{1}{2} \eta^*(\widehat{\textup{\textbf{s}}}, \sigma) = {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textup{\textbf{s}}}). \end{equation} In particular, the limit on the LHS exists. \end{Proposition} \begin{proof} By the bound~\eqref{eq:bound-r*} and compactness of $[-c_8, c_8]$, it suffices to show that every accumulation point of $\frac{1}{2}\eta^*(\widehat{\textbf{s}}, \sigma)$ as $\sigma \rightarrow 0$ equals ${\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$. That is, assume $\left( \tau_k \right)_{k=1}^{\infty} \subseteq (0,\sigma_8]$ is such that $\tau_{k} \rightarrow 0$ and $\frac{1}{2}\eta^*(\widehat{\textbf{s}}, \tau_k) \rightarrow \eta \in [-c_8, c_8]$ as $k \rightarrow \infty$; we will show $\eta = {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$. Substituting \eqref{eq:eta-def} into \eqref{eq:def-r*-rad}, and putting $\textbf{a} = L_{\textbf{p}}(\widehat{\textbf{s}})$, $\textbf{b} = \frac{1}{2} Q_{\textbf{p}}(\widehat{\textbf{s}})$, $\eta_k = \eta^*(\widehat{\textbf{s}}, \tau_k)$, gives \begin{align*} \tau_k = \left\| \left(\frac{\tau_k}{\| \textbf{a}\|_{\mathcal{B}}} + \frac{1}{2} \eta_k \tau_k^2\right)\!\textbf{a} \, \, + \, \left(\frac{\tau_k}{\| \textbf{a}\|_{\mathcal{B}}} + \frac{1}{2} \eta_k \tau_k^2\right)^{\!\!2}\!\textbf{b} \right\|_{\mathcal{B}}.
\end{align*} Rearranging and dividing by $\tau_k$, this reads \begin{equation} \label{eq:sub-in-tan} 1 = \left\| \frac{\textbf{a}}{\|\textbf{a}\|_{\mathcal{B}}} \, + \, \left( \frac{1}{2}\eta_k \textbf{a} + \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2}\right)\!\tau_k + \frac{\eta_k \textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}} \tau_k^2 + \frac{\eta_k^2 \textbf{b}}{4} \tau_k^3 \right\|_{\mathcal{B}}. \end{equation} By the definition of tangent cones, \eqref{eq:sub-in-tan} witnesses that \begin{equation} \label{eq:in-tang} \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2} + \eta \textbf{a} \in TC_{\textbf{a} / \| \textbf{a} \|_{\mathcal{B}}}\!\!\left(\partial \mathcal{B}\right). \end{equation} Indeed, in the definition \eqref{eq:def-tan-cone}, take $\mathcal{Y} = \partial \mathcal{B} \subseteq \mathbb{R}^D$; $\textbf{y} = \textbf{a} / \| \textbf{a} \|_{\mathcal{B}} \in \partial \mathcal{B}$; $\textbf{y}_k = \frac{\textbf{a}}{\|\textbf{a}\|_{\mathcal{B}}} \, + \, \left( \frac{1}{2}\eta_k \textbf{a} + \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2}\right)\!\tau_k + \frac{\eta_k \textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}} \tau_k^2 + \frac{\eta_k^2 \textbf{b}}{4} \tau_k^3 \in \partial \mathcal{B}$; the same $\tau_k$; and $\textbf{d} = \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2} + \eta \textbf{a}$. Then, \eqref{eq:in-tang} follows because $(\textbf{y}_k - \textbf{y})/\tau_k = \left( \frac{1}{2} \eta_k \textbf{a} + \frac{\textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}^2} \right) + \frac{\eta_k \textbf{b}}{\|\textbf{a}\|_{\mathcal{B}}} \tau_k + \frac{\eta_k^2 \textbf{b}}{4} \tau_k^2 \rightarrow \textbf{d}$ as $k \rightarrow \infty$, using $\frac{1}{2} \eta_k \rightarrow \eta$, $\tau_k \rightarrow 0$ and $\eta_k = O(1)$ (Lemma \ref{lem:bound-r*}) as $k \rightarrow \infty$. From Proposition/Definition~\ref{def:tilt-const} and the membership~\eqref{eq:in-tang}, we obtain $\eta = {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})$. 
\hfill \qed \end{proof} \noindent \textit{Finishing the proof of Theorem~\textup{\ref{thm:limit}}}: From the end of Section~\ref{subsec:drop-stuff}, it remains to establish \eqref{eq:I1-integral}. Let $\sigma \leq \min\left(\sigma_5, \sigma_6, \sigma_7, \sigma_8 \right)$. Then using spherical coordinates: \begin{align*} & \frac{1}{\sigma^{d+2}} \int_{\textbf{s} \in \operatorname{Approx}^{(2)}(\sigma)} \textbf{s} d\textbf{s} \nonumber \\[8pt] & = \frac{1}{\sigma^{d+2}} \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \int_{r \in \operatorname{RadialDomain}(\widehat{\textbf{s}}, \sigma)} r^{d} dr d\widehat{\textbf{s}} \hspace{12.2em} \textup{[\eqref{eq:hard-int-sph}]} \nonumber \\[8pt] & = \frac{1}{\sigma^{d+2}} \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \int_{r=0}^{r^*(\widehat{\textbf{s}}, \sigma)} r^d dr d\widehat{\textbf{s}} \hspace{12.7em} \textup{[\eqref{eq:def-r*-rad}, $\sigma \leq \sigma_5$]}. \nonumber \end{align*} Substituting Eq.~\eqref{eq:eta-def} for $r^{*}(\widehat{\textbf{s}}, \sigma)$ and evaluating the inner integral, we obtain \begin{align*} & \frac{1}{\sigma^{d+2}} \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \frac{\widehat{\textbf{s}}}{d+1} \, \left( \frac{1}{\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}} \sigma + \frac{1}{2} \eta^*(\widehat{\textbf{s}},\sigma)\sigma^2\right)^{d+1} \!\!\! 
d\widehat{\textbf{s}} \hspace{3.1em} \textup{[\eqref{eq:eta-def}, $\sigma \leq \sigma_8$]} \nonumber \\[8pt] & = \frac{1}{(d+1)\sigma} \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \frac{\widehat{\textbf{s}} }{\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^{d+1}} d\widehat{\textbf{s}} \nonumber \\[2pt] & + \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \left( \frac{\tfrac{1}{2} \eta^*(\widehat{\textbf{s}},\sigma)}{\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^d} + O(\sigma) \right) d\widehat{\textbf{s}} \quad \quad \quad \,\, \textup{[\eqref{eq:bound-r*}, $\sigma \leq \sigma_7$, $\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^{-1} = O(1)$]}\nonumber \\[8pt] & = \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \left( \frac{\tfrac{1}{2} \eta^*(\widehat{\textbf{s}},\sigma)}{\|L_{\textbf{p}}(\widehat{\textbf{s}})\|_{\mathcal{B}}^d} + O(\sigma) \right) d\widehat{\textbf{s}}. \hspace{14em} \textup{[oddness]} \end{align*} By Eq.~\eqref{eq:tilt-lim-eq} and dominated convergence, as $\sigma \to 0$ this integral converges to \begin{align*} \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \,\| L_{\textbf{p}}(\widehat{\textbf{s}}) \|_{\mathcal{B}}^{-d} \, \lim_{\sigma \rightarrow 0} \frac{1}{2} \eta^*(\widehat{\textbf{s}},\sigma) \, d\widehat{\textbf{s}} = \int_{\widehat{\textbf{s}} \in \mathbb{S}^{d-1}} \widehat{\textbf{s}} \| L_{\textbf{p}}(\widehat{\textbf{s}}) \|_{\mathcal{B}}^{-d} {\operatorname{tilt}_{\M, \B, \p}}(\widehat{\textbf{s}})\, d\widehat{\textbf{s}}.
\nonumber \end{align*} The use of dominated convergence is justified because $\widehat{\textbf{s}} \mapsto \frac{1}{2} \eta^{*}(\widehat{\textbf{s}},\sigma)$ is continuous in $\widehat{\textbf{s}}$ for each $\sigma \leq \sigma_6$ (Lemma~\ref{lem:continuous}), it is uniformly bounded in absolute value by $c_8$ for each $\sigma \leq \sigma_7$ (Lemma~\ref{lem:bound-r*}), and $\lim_{\sigma \rightarrow 0} \tfrac{1}{2} \eta^*(\widehat{\textbf{s}}, \sigma)$ exists and is ${\operatorname{tilt}_{\M, \B, \p}}$ (Proposition~\ref{prop:obtain-tilt}). This completes the proof of Theorem~\ref{thm:limit}. \hfill \qed \section{Application: mapping volumetric shape spaces} \label{sec:experiments} In this section, we demonstrate the use of non-Euclidean norms for embedding a set of 3D densities with continuous variability. The specific motivation comes from the field of single-particle cryo-electron microscopy (cryo-EM), an imaging technique for reconstructing the 3D structure of proteins, ribosomes, and other large molecules, using a large set of electron-microscope images. We give a description of cryo-EM and the continuous heterogeneity problem, which naturally lends itself to manifold learning. We then describe our method for mapping general volumetric shape spaces using non-Euclidean diffusion maps, and apply it to a simulated data set that satisfies the assumptions of Theorem~\ref{thm:nonuniform}. For a broader introduction to cryo-EM, see Chapter 1 of \cite{Glaeser2021} or the more mathematically oriented reviews \cite{SingerSigworth2020,BendoryBartesaghiSinger2020}. Code for reproducing the numerical results is available at:\\ \indent \url{http://github.com/mosco/manifold-learning-arbitrary-norms} \subsection{\textbf{Single-particle cryo-EM}} The goal of single-particle cryo-EM is to obtain the 3D structure of a molecule of interest. This is done by obtaining a sample of the molecule and freezing it so that it forms a thin sheet of ice.
This sheet typically contains hundreds of thousands of copies of the molecule, each suspended at a different orientation. The frozen sample is then imaged using a transmission electron microscope. This results in images that contain many noisy tomographic projections of the same molecule, viewed from different (unknown) directions (see Figure~\ref{fig:PaaZ_from_micrograph_to_reconstruction}). The challenge is to compute a 3D reconstruction of the electrostatic density map. The field of cryo-EM has made such progress over the last decade that it is now common to see reconstructions of large rigid molecules, composed of tens of thousands of individual atoms, with resolutions finer than 3~\r{a}ngstr\"{o}ms, which allow for the accurate fitting of atomic models using specialized software. See the right panel of Figure~\ref{fig:PaaZ_from_micrograph_to_reconstruction} for an example experimental reconstruction. \begin{figure} \includegraphics[width=0.5\linewidth]{micrograph_PaaZ_300dpi} \includegraphics[width=0.55\linewidth]{EMD9873} \caption{(left) Cryo-EM image showing $\sim \!\!220$ noisy tomographic projections of the PaaZ molecule and some contaminants (from~\cite{SingerSigworth2020}); (right) Surface plot of the reconstructed electrostatic density of the PaaZ molecule, based on 118,203 tomographic projections (from~\cite{SathyanarayananEtal2019}). } \label{fig:PaaZ_from_micrograph_to_reconstruction} \end{figure} The basic assumption behind most single-particle cryo-EM methods is that the molecule of interest is \textit{rigid}. Hence, the different electron microscope images are tomographic projections of the same exact 3D volume from different angles (or, at the least, of only finitely many distinct 3D volumes). However, this assumption does not always hold: some molecules have flexible components that can move independently. This phenomenon, known as the \emph{continuous heterogeneity problem} in cryo-EM, poses a difficulty for existing reconstruction methods.
One of the key ongoing challenges in the field is the development of new methods that can map the entire space of molecular conformations \cite{Frank2018,JinEtal2014,TagareEtal2015,FrankOurmazd2016,NakaneEtal2018,DashtiEtal2020,LedermanAndenSinger2020,ZhongEtal2021,PunjaniFleet2021}. See \cite{SorzanoEtal2019} for a survey. Several works have applied diffusion maps to this problem domain \cite{DashtiEtal2014,SchwanderFungOurmazd2014,DashtiEtal2020,MoscovichHaleviAndenSinger2020}. In our conference paper \cite{ZeleskoMoscovichKileelSinger2020}, we applied diffusion maps with a particular non-Euclidean norm to a given set of 3D densities. Specifically, we used a fast wavelet-based approximation to the Earthmover's distance (WEMD). Those numerical results were the original motivation for the present paper, and the rest of Section~\ref{sec:experiments} extends them. \subsection{\textbf{WEMD-based diffusion maps}} Given a set of volumetric arrays ${\bf x}_1, \ldots, {\bf x}_n \in \mathcal M \subseteq \mathbb{R}^{N_x \times N_y \times N_z}$, we compute an approximate Earthmover's distance between all pairs of arrays \cite{ShirdhonkarJacobs2008}. This is done by first computing the discrete wavelet transform of each input and then using a weighted $\ell_1$-norm on the pairwise differences of wavelet coefficients, \begin{equation}\label{eq:wemd} \|{\bf x}_i - {\bf x}_j\|_{\textup{WEMD}} := \sum_{\lambda} 2^{-5s/2} \, \lvert \mathcal{W}{\bf x}_i(\lambda) - \mathcal{W}{\bf x}_j(\lambda) \rvert. \end{equation} Here, $\mathcal{W}{\bf x}$ denotes a 3D wavelet transform of ${\bf x}$ \cite{Mallat2009}. The index $\lambda$ contains the wavelet shifts $(m_1,m_2, m_3) \in \mathbb{Z}^3$ and scale parameter $s \in \mathbb{Z}_{\ge 0}$. 
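As an illustration, the weighted-$\ell_1$ computation of Eq.~\eqref{eq:wemd} can be sketched in plain NumPy. This is a hedged stand-in, not the released implementation: a separable 3D Haar transform replaces the \texttt{sym3} transform actually used, and the direction of the scale index $s$ (finest details receiving the largest $s$, hence the smallest weight $2^{-5s/2}$) is our assumption about the indexing convention.

```python
import numpy as np

def haar_split(a, axis):
    """One Haar analysis step along one axis: returns (lowpass, highpass)."""
    m = a.shape[axis] // 2 * 2
    ev = a.take(np.arange(0, m, 2), axis=axis)
    od = a.take(np.arange(1, m, 2), axis=axis)
    return (ev + od) / np.sqrt(2.0), (ev - od) / np.sqrt(2.0)

def wemd(x, y, levels=3):
    """Sketch of the WEMD of Eq. (wemd) for 3D arrays, using a Haar
    transform as a stand-in for sym3 (an assumption, not the paper's code)."""
    dist = 0.0
    ax, ay = x.astype(float), y.astype(float)
    for i in range(levels):
        s = levels - i  # assumed convention: finest details get the largest s
        subs_x, subs_y = [ax], [ay]
        for axis in range(3):  # separable transform over the three axes
            subs_x = [b for sub in subs_x for b in haar_split(sub, axis)]
            subs_y = [b for sub in subs_y for b in haar_split(sub, axis)]
        ax, ay = subs_x[0], subs_y[0]           # LLL band feeds the next level
        w = 2.0 ** (-5.0 * s / 2.0)             # d = 3 weighting of Eq. (wemd)
        for cx, cy in zip(subs_x[1:], subs_y[1:]):
            dist += w * np.abs(cx - cy).sum()
    # remaining approximation coefficients, taken here with weight 1
    return dist + np.abs(ax - ay).sum()
```

Since the distance is a weighted $\ell_1$-norm of a linear transform of the difference, it is a genuine metric (symmetric and satisfying the triangle inequality), which matters for the affinity construction that follows.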
We then compute pairwise Gaussian affinities, \begin{align} \label{eq:gaussian_kernel} W_{ij} = \exp\left(-\|{\bf x}_i - {\bf x}_j\|_{\textup{WEMD}}^2 \big{/} \sigma^2\right), \end{align} proceed to construct a graph Laplacian, and perform the eigenvector-based embedding as described in Section~\ref{subsec:graphLaplacian}. Since the construction uses a (fixed) norm, the theory described in Section~\ref{subsec:thm-statement} applies in this case. Hence, in the noiseless case, the graph Laplacian converges to an elliptic second-order differential operator on the relevant manifold ${\mathcal M}$ of ATP synthase conformations. Here, ${\mathcal M}$ is embedded in the Euclidean space of arrays of size $N_x \times N_y \times N_z$. \subsection{\textbf{Simulation results}} \begin{figure} \centering \begin{tikzpicture} \node (ATPpartial) at (0,0) {\includegraphics[height=2in]{colored_shaft_final_cropped}}; \node (ATPall) at (3.5,0.44) {\includegraphics[height=2.35in]{colored_atp_final_cropped}}; \node (NoisySlice) at (7.5,0.0) {\includegraphics[height=2in]{slice_noisy}}; \draw[x=.25cm,y=0.60cm,line width=.2ex,stealth-,rotate=+90] (11,0.1) arc (-150:150:1 and 1); \end{tikzpicture} \caption{\textit{ATP synthase.} (left) F$_0$ and axle subunits. These jointly rotate in the presence of hydrogen ions, together forming a molecular electric motor; (middle) the F$_1$ subunit (in cyan) envelops the axle. As the axle rotates, the F$_1$ subunit assembles ATP; (right) representative $2$D slice of the rotated F$_0$ and axle subunits with additive noise shown. } \label{fig:atp_synthase} \end{figure} \begin{figure} \center \includegraphics[width=0.95\linewidth]{L2vsWEMD_noiseless} \vspace{8pt} \includegraphics[width=0.95\linewidth]{L2vsWEMD_noisy} \caption{\textit{Euclidean distance vs. wavelet-based approximate Earthmover's distance} as functions of the angle between rotations of the ATP synthase rotor. 
(top) distances for rotated volumes without noise; (bottom) distances for the noisy data set. (Euclidean distances were scaled to be comparable to WEMDs.)} \label{fig:compare_angle} \end{figure} We tested our method on a synthetic volumetric data set that mimics the motion space of ATP synthase \cite{YoshidaMuneyukiHisabori2001}, see Figure~\ref{fig:atp_synthase}. This enzyme is a stepper motor with a central asymmetric axle that rotates in 120$\degree$ steps relative to the F$_1$ subunit, with short transient motions in-between the three dominant states. Our synthetic data was generated as follows: we produced 3D density maps of entry 1QO1 \cite{Stock1999} from the Protein Data Bank \cite{Roseetal.2017} using the \texttt{molmap} command in UCSF Chimera \cite{Chimera2004}. These density maps have array dimensions $47 \times 47 \times 107$ and a resolution of 6\AA \ per voxel. We then took random rotations of the F$_0$ and axle subunits, where the angles were drawn i.i.d. according to the following mixture distribution, \begin{align*} \tfrac{2}{5} U[0,360] + \tfrac{1}{5} \mathcal{N}(0,1) + \tfrac{1}{5} \mathcal{N}(120,1) + \tfrac{1}{5} \mathcal{N}(240,1). \end{align*} The resulting density maps formed the clean dataset. The noisy dataset was generated in the same manner but also included additive i.i.d. Gaussian noise with mean zero and a standard deviation of $1/10$ of the maximum voxel value. The discrete wavelet transform of all the volumes in the dataset was computed using PyWavelets \cite{LeeEtal2019} with the \texttt{sym3} wavelet (symmetric Daubechies wavelets of order 3), though other wavelet choices also worked well \cite[Sec 4.2]{ShirdhonkarJacobs2008}. The maximum scale level chosen was $s=6$ to minimize the truncation in Eq.~\eqref{eq:wemd}. The number of resulting wavelet coefficients was 40\% larger than the number of voxels. 
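For concreteness, the rotor-angle mixture above can be sampled as follows. This is a minimal NumPy sketch in degrees; the function name is ours, not taken from the released code.

```python
import numpy as np

def sample_rotor_angles(n, seed=None):
    """Sample n angles (degrees) from the mixture
    (2/5) U[0,360] + (1/5) N(0,1) + (1/5) N(120,1) + (1/5) N(240,1)."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(4, size=n, p=[0.4, 0.2, 0.2, 0.2])  # mixture component
    means = np.array([0.0, 0.0, 120.0, 240.0])            # slot 0 unused
    angles = rng.normal(means[comp], 1.0)                 # Gaussian components
    mask = comp == 0
    angles[mask] = rng.uniform(0.0, 360.0, np.count_nonzero(mask))
    return angles % 360.0                                 # wrap to [0, 360)
```

About 20\% of the sampled mass concentrates near each of the three dominant rotor states at $0\degree$, $120\degree$, and $240\degree$, with the uniform component modeling the transient motions in-between.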
Figure~\ref{fig:compare_angle} compares the Euclidean norm to the WEMD norm for a range of angular differences for the noiseless and noisy datasets. Note that for the clean dataset, WEMD is monotonic in the absolute value of the X axis (equal to the angular difference between the ATP synthase rotors). This behavior also holds for the Euclidean norm, but only for small angular differences up to $\approx \pm19\degree$. This suggests that an affinity graph built from this dataset using the Euclidean norm can capture the right geometry only when the dataset contains a dense sampling of the angles and when the kernel width is properly calibrated to nearly cut off connections at angles $>19\degree$. \begin{figure} \hspace{-12pt} \begin{tabular}{cccccc} n & $\ell_2$ (noiseless) & $\ell_2$ (noisy) & WEMD (noiseless) & WEMD (noisy) \\\\ \begin{tabular}{c}25\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=25_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=25_std=0_01644027_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{wemd_embedding_n=25_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=25_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \\ \begin{tabular}{c}50\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=50_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=50_std=0_01644027_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{wemd_embedding_n=50_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=50_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \\ \begin{tabular}{c}100\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=100_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=100_std=0_01644027_seed=2020_gaussian_kernel}& 
\includegraphics[height=56pt]{wemd_embedding_n=100_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=100_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \\ \begin{tabular}{c}200\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=200_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=200_std=0_01644027_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{wemd_embedding_n=200_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=200_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \\ \begin{tabular}{c}400\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=400_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=400_std=0_01644027_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{wemd_embedding_n=400_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=400_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \\ \begin{tabular}{c}800\\[44pt] \end{tabular}& \includegraphics[height=56pt]{euclidean_embedding_n=800_std=0_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{euclidean_embedding_n=800_std=0_01644027_seed=2020_gaussian_kernel}& \includegraphics[height=56pt]{wemd_embedding_n=800_std=0_seed=2020_gaussian_kernel_threshold=1_0}& \includegraphics[height=56pt]{wemd_embedding_n=800_std=0_01644027_seed=2020_gaussian_kernel_threshold=1_0} \end{tabular} \caption{\textit{Simulation results}. Euclidean vs. WEMD-based Laplacian eigenmaps into $\mathbb{R}^2$ using the clean and noisy ATP synthase data sets. Sample sizes of $n=25, 50, 100, 200, 400, 800$. 
The points are translucent to indicate density and the color is the ground-truth angle.} \label{fig:embeddings} \end{figure} Figure~\ref{fig:embeddings} shows the results of a two-dimensional Laplacian eigenmaps embedding, once with the Euclidean norm and once with the WEMD norm \eqref{eq:wemd}. These embeddings use the unweighted graph Laplacian with a Gaussian kernel, which corresponds to the setting of Theorem~\ref{thm:limit}. For similar results that use the density-normalized diffusion maps of \cite{CoifmanLafon2006}, see \cite[Fig.~5]{ZeleskoMoscovichKileelSinger2020}. We chose $\sigma=30$ as the Gaussian kernel width in Eq.~\eqref{eq:gaussian_kernel} for the WEMD embeddings; however, the WEMD results were not very sensitive to the particular choice of $\sigma$. In contrast, the Euclidean embeddings required fine-tuning of $\sigma$ to obtain the best results for each sample size. This makes sense given the results of Figure~\ref{fig:compare_angle}. The key takeaway from Figure~\ref{fig:embeddings} is that for the standard Laplacian eigenmaps embedding based on the Euclidean norm, one needs $>400$ samples to conclude that the intrinsic geometry is a circle. In contrast, for the embeddings based on WEMD, even small sample sizes give the right geometry. \subsection{\textbf{Runtime}} The running time of the WEMD-based diffusion maps is similar to that of the standard Euclidean diffusion maps. This follows from the fact that both algorithms need to compute $\binom{n}{2}$ pairwise $\ell_p$-distances ($p \in \{1,2\}$) for vectors of similar length. The cost of the discrete wavelet transform is negligible, since it is linear with respect to the input size. For our sample sizes, the time to form the Gaussian affinity matrix and compute its eigenvectors is also negligible. Table~\ref{fig:runtime} lists single-core running times on an Intel Core i7-8569U CPU.
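The embedding pipeline timed here (Gaussian affinities from Eq.~\eqref{eq:gaussian_kernel}, unweighted graph Laplacian, eigenvector embedding) can be sketched in a few lines of NumPy. This is a minimal illustrative version under our reading of the construction, not the released implementation.

```python
import numpy as np

def laplacian_eigenmaps(dist, sigma, dim=2):
    """Embed n points into R^dim from an (n, n) matrix of pairwise
    distances (WEMD or Euclidean), via the unweighted graph Laplacian
    with a Gaussian kernel as in Eq. (gaussian_kernel)."""
    W = np.exp(-dist ** 2 / sigma ** 2)   # pairwise Gaussian affinities
    np.fill_diagonal(W, 0.0)              # no self-loops
    L = np.diag(W.sum(axis=1)) - W        # unweighted Laplacian L = D - W
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]             # skip the constant eigenvector
```

As a sanity check, feeding in pairwise Euclidean distances of points sampled uniformly on a circle recovers a (rotated) circle, matching the intrinsic geometry discussed above.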
\begin{table} \caption{\textit{Running times [sec]} for computing the discrete wavelet transform (DWT), all pairwise wavelet-based Earthmover approximations (WEMD) not including the DWT, and all pairwise Euclidean ($\ell_2$) distances.} \label{fig:runtime} \centering \begin{tabular}{llll} \toprule $n$ & DWT& WEMD & $\ell_2$ \\ \midrule 25 & 0.3 & 0.13 & 0.09\\ 50 & 0.61 & 0.49 & 0.38\\ 100 & 1.2 & 1.93 & 1.5\\ 200 & 2.4 & 7.6 & 5.5 \\ 400 & 4.9 & 31 & 22\\ 800 & 11 & 126 & 86\\ \bottomrule \end{tabular} \end{table} \subsection[\textbf{Using wavelet sparsity}]{\textbf{Using wavelet sparsity}} \label{sec:sparsity} \begin{table} \caption{\textit{Running times [sec]} for computing all pairs of sparsified wavelet-based Earthmover's distances (sparse-WEMD), as compared to the dense computation.} \label{table:runtime_sparse} \centering \begin{tabular}{llll} \toprule $n$ & Sparse runtime & Sparse runtime & Dense runtime \\ & (noiseless data) & (noisy data) & \\ \midrule 25 & 0.01 & 0.037 & 0.13 \\ 50 & 0.013 & 0.1 & 0.49 \\ 100 & 0.026 & 0.39 & 1.93 \\ 200 & 0.046 & 1.5 & 7.6 \\ 400 & 0.16 & 6.2 & 31 \\ 800 & 0.6 & 25 & 126 \\ \bottomrule \end{tabular} \end{table} To compute the approximate Earthmover's distance between all pairs of volumes, we first compute a weighted discrete wavelet transform of each volume in the data set. For smooth signals, this results in sparse vectors of wavelet coefficients \cite{Mallat2009}. We can use this property by thresholding the vectors of weighted wavelet coefficients, and then storing them in a sparse matrix. This is beneficial because computing the $\ell_1$-distance between two sparse vectors has a runtime that is linear in the number of their non-zero elements. Since the computation of all pairwise $\ell_1$ differences is the slowest part of our procedure, this approach can reduce the running time significantly. To test this, we used the ATP synthase data described in the previous section. 
First, we subtracted the mean volume from all volumes in the data set. This mean-centering does not change the pairwise WEMD distances but makes the resulting vectors sparser. We used the hard-thresholding function $h_t$ defined as follows: \begin{align*} h_t(x) := \begin{cases} 0& \text{for } |x| \leq t,\\ x& \text{for } |x| > t. \end{cases} \end{align*} We found a threshold $t$ for the wavelet coefficients such that the $\ell_1$-norm of the post-thresholding weighted wavelet coefficients is $>90\%$ of the $\ell_1$-norm of the dataset prior to thresholding. This threshold was computed on the smallest simulation of size $n=25$ and then applied to the rest of the runs. Figure~\ref{fig:embeddings_sparsified} shows the results of the WEMD embedding following this sparsification step. Table \ref{table:runtime_sparse} shows the running times for the sparsified WEMD. Note that the running times are different for the noiseless and noisy data, since the noisy data is less sparse. However, in both cases, there are significant gains in running time, with few visually-noticeable changes to the embedding results.
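The sparsification step can be sketched as follows. The helper \texttt{l1\_keep\_threshold} is hypothetical (the paper only specifies the $>90\%$ retention criterion, not how $t$ was found), and its tie-breaking assumes distinct coefficient magnitudes.

```python
import numpy as np

def hard_threshold(v, t):
    """The function h_t: zero out entries with |x| <= t, keep the rest."""
    out = v.copy()
    out[np.abs(out) <= t] = 0.0
    return out

def l1_keep_threshold(v, frac=0.9):
    """Hypothetical helper: a threshold t such that hard_threshold(v, t)
    retains at least `frac` of the l1-norm of v (assuming distinct magnitudes)."""
    mags = np.sort(np.abs(v))[::-1]                   # magnitudes, largest first
    csum = np.cumsum(mags)
    k = np.searchsorted(csum, frac * csum[-1]) + 1    # keep the k largest entries
    return mags[k] if k < len(mags) else 0.0
```

After thresholding, the vectors can be stored in a sparse matrix, and each pairwise $\ell_1$-distance costs time linear in the number of non-zero entries of the two operands.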
\begin{figure} \centering \begin{tabular}{cccc} $n$ & WEMD (noiseless) & WEMD (noisy) \\\\ \begin{tabular}{c}25\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=25_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=25_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \\ \begin{tabular}{c}50\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=50_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=50_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \\ \begin{tabular}{c}100\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=100_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=100_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \\ \begin{tabular}{c}200\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=200_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=200_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \\ \begin{tabular}{c}400\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=400_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=400_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \\ \begin{tabular}{c}800\\[44pt] \end{tabular}& \includegraphics[height=56pt]{wemd_embedding_n=800_std=0_seed=2020_gaussian_kernel_threshold=0_9}& \includegraphics[height=56pt]{wemd_embedding_n=800_std=0_01644027_seed=2020_gaussian_kernel_threshold=0_9} \end{tabular} \caption{\textit{Sparsified results.} Wavelet-EMD-based Laplacian eigenmaps of the clean and noisy ATP synthase data sets, after applying hard-thresholding to obtain sparse coefficient vectors.} \label{fig:embeddings_sparsified} \end{figure} \newpage \section{Conclusion} In this paper, we placed Laplacian-based manifold learning methods that use non-Euclidean norms on a firmer 
theoretical footing. We proved the pointwise convergence of graph Laplacians computed using general norms to elliptic second-order differential operators. In particular, our proof involved a novel second-order interaction between the manifold $\mathcal{M}$ and the unit ball $\mathcal{B}$, encoded by the function ${\operatorname{tilt}_{\M, \B, \p}}$. We showed that some properties of the usual Laplace-Beltrami operator are lost in the general case. The limiting operator $\Delta_{\mathcal{M}, \mathcal{B}}$ changes with the embedding of $\mathcal{M}$. Further, the limit $\Delta_{\mathcal{M}, \mathcal{B}}$ carries a first-order term that can exhibit discontinuities at certain points of $\mathcal{M}$. In addition, this paper demonstrated practical advantages of using non-Euclidean norms in manifold learning. We considered the task of learning molecular shape spaces. Here data points are conformations represented by 3D arrays, and we want to capture the range of motion. A simulation found that using Laplacian eigenmaps with the wavelet Earthmover's distance (a weighted $\ell_1$-norm in wavelet coefficients) resulted in a qualitative improvement in sample complexity compared to the Euclidean norm. Thresholding the wavelet coefficients before computing norms reduced the computational \nolinebreak cost.\\\\ \noindent This work suggests several directions worthy of future study: \begin{itemize} \setlength\itemsep{0.7em} \item \textbf{Convergence rates.} With what rate does the convergence in Theorem~\ref{thm:limit} occur? How does this depend on the choice of norm $\| \cdot \|_{\mathcal{B}}$? \item \textbf{Eigenfunctions.} What can be said about the eigenfunctions of $\Delta_{\mathcal{M}, \mathcal{B}}$? How do the discontinuities of the first-order term in $\Delta_{\mathcal{M}, \mathcal{B}}$ impact them? Due to space limitations, we only gave numerical examples in Appendix~\ref{sec:numerical_eigenfunction_computation}.
\item \textbf{Spectral convergence.} For general norms, do the eigenvectors of the graph Laplacian converge to the eigenfunctions of the operator $\Delta_{\mathcal{M}, \mathcal{B}}$? \item \textbf{Concentration.} The operator $\Delta_{\mathcal{M}, {\mathcal{B}}}$ depends on $d$- and $2$-dimensional linear sections of the convex body $\mathcal{B} \subseteq \mathbb{R}^D$. When $D \gg d$, is there a sense in which these slices look ``increasingly Euclidean''? Does $\Delta_{\mathcal{M}, {\mathcal{B}}}$ concentrate? \item \textbf{Data-dependent norms.} If the norm chosen is some fixed function of the data set, does a well-defined limit of the graph Laplacians still exist? \item \textbf{Applications.} Are some applied domains better-suited for non-Euclidean norms than others? How should a practitioner decide which norm to use? \end{itemize} \vspace{0.5em} \begin{acknowledgements} We thank Charles Fefferman, William Leeb, Eitan Levin and John Walker for enlightening discussions. Most of this work was performed while AM was affiliated with PACM at Princeton \nolinebreak University. This research was supported by AFOSR FA9550-17-1-0291, ARO W911NF-17-1-0512, NSF BIGDATA IIS-1837992, the Simons Investigator Award, the Moore Foundation Data-Driven Discovery Investigator Award, the Simons Collaboration on Algorithms and Geometry, and start-up grants from the College of Natural Sciences and Oden Institute for Computational Engineering and Sciences at UT Austin. \end{acknowledgements}
\section{Introduction} This paper will address some of the challenges and possibilities of exoplanet detection and classification for future extrasolar missions. Future missions may allow for travel far outside of our solar system, as well as deep into our own solar system, where return bandwidth will be severely limited; thus, choices must be made about which data (images in particular) are important enough to ``send back'' (\cite{Lubin2016}, \cite{Lubin2020}, \cite{Sheerin2020}). The basis for exoplanetary detection via fast interstellar travel is a combination of The Starlight Program \citep{KulKarniRelativistic2017} and recent results that show how exoplanets can be detected, and distinguished from other objects, via AI-based modeling that utilizes simulated data \citep{Bird2020}. The groundwork has been laid for an AI-based small spacecraft that can travel long distances in a short amount of time, gather information on its surroundings with minimal energy requirements, and detect exoplanets and other targets of interest with excellent accuracy. The same core technology we discuss here can be applied to a wide range of astrophysics and cosmology, where subtle and often transient phenomena are critical to retrieve in low-SNR situations. In future papers we will discuss using our techniques in these other application spaces. The major points that we will discuss and examine here are related to the accuracy of exoplanetary detection. In our foundational paper \citep{Bird2020}, we used a robust model and detection score for proof of concept. Going forward, this paper will compare a wide array of models using accuracy as our main metric to determine model strength and reliability. \section{Previous Work} The basis for much of our work lies in deep learning via TensorFlow \citep{Abadi2016}, as well as the expected additions, such as cuDNN \citep{ChetlurcuDNN2014} and CUDA, which allow for faster deep neural network processing via a graphics processing unit (GPU).
Although the idea of direct exoplanetary detection and imaging via interstellar travel is new, astronomy has been attempting the general feat via light curves for years, and even more recently with deep learning (\cite{ShallueIdentifying2017}, \cite{ZuckerShallow2018}, \cite{CarrascoDeepLearning2018}). For direct imaging purposes, we test a variety of robust models, including variants of each model, and analyze factors such as accuracy and computational complexity. Since deep space is uncharted territory, an extremely large training data set is not possible. Therefore, we include some simpler models to offset the possibility of having models that are too advanced for the data. The overall goal of these models is to be able to identify when a planet is present in an image, while also being capable of not mistaking other astronomical objects for planets. For the simpler models, we will compare MobileNet \citep{MobileNet}, MobileNet V2 \citep{MobileNetv2}, DenseNet 121, 169, and 201 \citep{DenseNet}, and NASNet-Mobile \citep{NASNet}. These provide solid baseline accuracy and low computational complexity, which may prove to be beneficial for our specific needs. For the intricate models, we will compare NASNet-Large \citep{NASNet}, Xception \citep{Xception}, VGG16 and VGG19 \citep{VGG}, Inception V3 \citep{Inception}, Inception-ResNet V2 \citep{InceptionResNetv2}, and ResNet 50, 50 v2, 101, 101 v2, 152, and 152 v2 \citep{ResNet}. In contrast to the simpler models listed above, the training time and complexity will increase with these. However, that process is done beforehand while the wafer satellite (wafersat) is still on Earth, so these concerns are negligible when compared to the possible gains in accuracy from the more robust models. These models have been tested against each other in the past to some degree.
ResNet has been shown to out-perform VGG (\cite{ResNet},\cite{Canziani2016}) and even advanced Inception models \citep{ResNetBeatsVGGandInception}, while other results show all of these models being out-performed by the DenseNet and InceptionResNet architectures \citep{DenseIRNwins}. The structure of these models and their performance is dependent on the data that is being processed. In this case, we are training on simulated images of planets and testing on real images of planets. This concept was shown to be viable in \cite{Bird2020}; however, optimizing this process will require an in-depth look at advanced deep learning techniques and models. \section{The Process} \subsection{Deep Neural Network Architecture} Deep neural networks, including those used for object detection, begin by deconstructing images into pixel-based groupings that constitute an input layer. This layer, along with the hidden layer(s) and output layer, is comprised of smaller entities called neurons. Each layer of neurons is connected to the next via weights, which are learned through a training process. Gradient descent is a powerful and widely used method that allows us to minimize the cost function in order to get the most effective learning process. By repeatedly stepping in the direction of the negative gradient, we minimize the cost function. After training, we are left with a network that can take an input image and output something of interest based on the training and model parameters. A more illuminating analogy would be to treat the initial inputs (pixels, or in the case of a convolutional neural network, groupings of pixels) as an input tensor. This input tensor is then essentially acted upon by a function (the neural network), which outputs a tensor corresponding to the categorization of the input (in our case it is a binary categorization).
This function initializes with random values, and is then optimized via the methods described above such that the output tensor has the highest accuracy when identifying inputted data. \subsection{The Setup} As discussed in detail in \cite{Bird2020}, the simulator (SpaceEngine.org) provides us with easy access to 4K, 3-D rendered images of exoplanets. Although they are randomly generated, one could create a specific planet, or filter planets by a set of conditions in order to achieve a subset of planets that have certain traits. All models were pre-trained on ImageNet \citep{ImageNet} and fine-tuned on simulated images of exoplanets. This allowed for a robust learning experience for features, and a more specific learning experience for our data set. All models were evaluated using an AMD Ryzen Threadripper 3970X 32-Core Processor @ 3.70 GHz, 128 GB of RAM and an NVIDIA Titan RTX graphics card. In most deep learning applications for image analysis, both training sets and testing sets contain images of the same object. In our deep learning application, the training set is taken from a universe simulator (SpaceEngine.org), and the testing images are real images of planets. Without a simulator, we would not have enough images of planets, and those planets would not constitute a large enough sub-sample of possible exoplanets. By using a physics-based simulator, we can produce an abundance of realistic novel exoplanets to train on. Then, we use real planets to test the model's accuracy. This translates directly to the wafersat's process during an actual interstellar journey. Image counts for all three sets are shown below in Table 1. \begin{table}[ht!] \centering \begin{tabular}{||c c c||} \hline Training & Validation & Testing \\ [0.5ex] \hline\hline 915 & 200 & 284 \\ [1ex] \hline \end{tabular} \caption{Image count for the training, validation, and testing sets.} \end{table} The process being performed here is unique for two major reasons.
First, the entire training set is simulated images, while the entire testing set is real images. This presents a particular challenge for neural networks, as they learn in a template space and are then tested in a real space. Second, deep space provides an enormously large variety of objects. For example, gas giants vary wildly in many ways, such as size, feature differences, surface gas formations, colors, temperature, and more. In our solar system alone, we witness two quite unique gas giants: Saturn with its rings and famous hexagon, and Jupiter with its eye and dolphin formations. Training on extremely unique objects can cause neural networks to lose their generality and under-perform. The images below in Figure \ref{fig:Image_Example} compare real images taken by NASA against simulated images. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{Image_Example.png} \caption{Two examples of real testing images are Jupiter and Saturn, seen on the left, while the right side shows examples of simulated training images.} \label{fig:Image_Example} \end{figure} \section{The Results} For each model previously mentioned, we trained, validated, and tested the neural network in batches of five epochs each. An epoch is when the entire data set is run through the neural network once. \begin{table}[ht!]
\centering \begin{tabular}{||c c c||} \hline Model & Maximum Accuracy Achieved & Respective Epoch \\ [0.5ex] \hline\hline VGG19 & 0.9964788556 & 5 \\ \hline VGG16 & 0.9929577708 & 5 \& 25 \\ \hline ResNet50 & 0.9894366264 & 85 \& 120 \\ \hline ResNet101 & 0.9894366264 & 115 \\ \hline ResNet152 & 0.9894366264 & 10 ** \\ \hline MobileNet & 0.985915482 & 5 \\ \hline MobileNet v2 & 0.9788732529 & 5** \\ \hline DenseNet121 & 0.9788732529 & 5 \\ \hline DenseNet169 & 0.9788732529 & 5 - 15 \\ \hline ResNet152 v2 & 0.9753521085 & 25 * \\ \hline DenseNet201 & 0.9753521085 & 5-15 \\ \hline Inception v3 & 0.9683098793 & 5 \& 15 \\ \hline ResNet101 v2 & 0.9647887349 & 5 \\ \hline ResNet50 v2 & 0.9612675905 & 10 - 20 \\ \hline NasNet-Mobile & 0.9542253613 & 10 \\ \hline Inception-ResNet v2 & 0.950704217 & 5 \& 10 \\ \hline Xception & 0.950704217 & 10 \\ \hline NasNet-Large & 0.9154929519 & 5 \\ \hline \end{tabular} \caption{Maximum epoch-based accuracy achieved for each model, ordered from highest to lowest. \textit{* denotes continued accuracy for all remaining epoch counts. ** denotes that maximum accuracy was sporadically achieved again after first occurrence.}} \end{table} From Table 2 above, as well as Figures \ref{fig:Results_Chart_Ordered} and \ref{fig:Max_Accuracy_Chart} below, it can be seen that VGG19 reached the highest maximum accuracy at five epochs, while VGG16 reached the second-highest maximum accuracy at both five and 25 epochs. This result is extremely interesting, as VGG variants are typically under-performing models when compared to Inception or ResNet variants. MobileNet performed extremely well, not only in maximum accuracy achieved, but also in terms of consistency. The remaining models were simply out-performed and provided no concrete reason why they should be chosen as a viable model for this specific task. 
\begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Results_Chart_Ordered.png} \caption{Accuracy of all models based on epoch count.} \label{fig:Results_Chart_Ordered} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Max_Accuracy_Chart.png} \caption{Maximum accuracy achieved by each model.} \label{fig:Max_Accuracy_Chart} \end{figure} Based on Figure \ref{fig:Results_Chart_98_or_above_Ordered} below, ResNet50 and ResNet101 both dipped below 88\% accuracy, and often bounced between low and high accuracy, showing clear signs of inconsistency. Despite overall good performance, ResNet152 has large dips in the 5-20 epoch range, the main area where most models performed at their best. For these reasons, the ResNet variants were ultimately rejected as reasonable choices, as their epoch-based accuracy fluctuated too wildly. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Results_Chart_98_or_above_Ordered.png} \caption{Accuracy of models that reached at least 98\% maximum accuracy based on epoch count.} \label{fig:Results_Chart_98_or_above_Ordered} \end{figure} One very interesting result concerns the remaining models with maximum accuracy of 98\% or better, namely VGG19, VGG16, and MobileNet. Pertaining to Figure \ref{fig:Results_Chart_98_or_above_dependable} below, we can see that VGG19 and VGG16 fluctuate to some degree, while MobileNet holds steady. While the dependability of VGG19 and VGG16 in certain epoch ranges is vastly superior, MobileNet offers a consistently strong choice across all epoch ranges. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Results_Chart_98_or_above_dependable.png} \caption{Accuracy of dependable models that reached at least 98\% maximum accuracy based on epoch count.
These do not include the ResNet variants.} \label{fig:Results_Chart_98_or_above_dependable} \end{figure} We note that once the ResNet variants are removed for their instability, every model reaches its peak accuracy in the 5-25 epoch range. Within this range, the strongest models remain VGG19 and VGG16, but we can clearly see that their strength fluctuates with small changes in epoch count. MobileNet, MobileNet v2, DenseNet169, and DenseNet201 perform the most consistently, while preserving most of the accuracy that we see in VGG19 and VGG16. Some insight can be obtained by breaking down the accuracy into false negatives, which occur when a planet is present but the model does not identify it, and false positives, which occur when a planet is not present but the model identifies one. For our objective, false negatives are a much more severe error, as encountering a planet and missing it is the worst possible outcome. Alternatively, a false positive would simply send back a picture of empty space to Earth, which would be mildly interesting, but nothing would be lost. Linking this information to our previous findings, the ResNet variants continue to show instability, with some models having as many as 13 false negatives. Both VGG variants had no false negatives, meaning that they exhibit both extreme accuracy and reliability in detecting planets when they are actually present in the image. Lastly, we noted that the DenseNet variants were very reliable in the 5-25 epoch range. All DenseNet variants continue this stability, producing no false negatives. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Results_Chart_5_to_25_Epochs_No_ResNet.png} \caption{Accuracy of dependable models based on the 5-25 epoch range.} \label{fig:Results_Chart_5_to_25_Epochs_No_ResNet} \end{figure} These results connect well to the previous section, where we outlined the unique circumstances that surround this particular problem.
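The false-negative / false-positive breakdown used above can be computed from paired label sequences. The helper below is an illustrative sketch (not from the paper), where label 1 means a planet is present in the image:

```python
# Illustrative sketch: count false negatives (planet present, model says
# no) and false positives (no planet, model says yes). Toy labels only.

def confusion_counts(y_true, y_pred):
    """y_true, y_pred: equal-length sequences of 0/1; 1 = planet present."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {"false_neg": fn, "false_pos": fp,
            "accuracy": correct / len(y_true)}

# Toy labels for illustration: one missed planet, one spurious detection.
stats = confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(stats)
```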
We note that we are training models on extremely specific objects in space, with unique features, patterns, colors, etc. Advanced models such as ResNet and Inception learn too well and proceed to analyze the extremely minute details of the training set. Then, when asked about other images that are unique in different ways, they struggle to find the connection. Meanwhile, less advanced models, such as the VGG and MobileNet variants, do not lose that generality while learning. \subsection{Implications for future simulator training} Our results show quite a few viable models, depending on whether we want extreme accuracy at the cost of variance (VGG variants), or dependability at a slight cost in accuracy (MobileNet variants). Moreover, we note that this approach yields high accuracy with relatively few training images. With only 900 training images, we have achieved 98\% accuracy with multiple models, giving us a wide variety of options depending on the situation. Thus, for future work, even small samples of simulated images can successfully train a neural network to detect real objects in space with extremely high accuracy. \section{Foundations for Future Work} We have expanded upon \cite{Bird2020} and have shown that a small set of simulated images can produce extremely accurate predictions of real-world planets during an interstellar journey and beyond. This paper solidifies the overall outlook on optimization methods for exoplanet detection, while introducing many ideas that will open new and exciting problems in deep space exploration. The same methodology can be used in a wide variety of astrophysical (and other) applications, where subtle features in both the temporal and spatial domains are critical to assess, and act upon, in low-bandwidth data-return applications. An upcoming paper will address categorization, which will expand the ideas of simulator-based detection to objects beyond exoplanets.
In particular, we will further explore whether simulators can help train neural networks to distinguish between specific types of planets. What about specific types of stars, comets, asteroids, and, even more interestingly, signs of life? \section{Conclusion} In our previous work, we showed how simulator images could be used to successfully train a neural network to identify real images of planets. In this paper, we delve into specific model optimization and obtain some fascinating results. First, multiple models achieved above 99\% accuracy when trained only on simulator images and tested on real images of planets. This result strongly supports a simulator-based training model for deep space journeys, allowing us to train large neural networks pre-flight on Earth. Second, we have shown that extremely high accuracy does not depend on large data sets in this niche problem. With under 1,000 training images, we have achieved over 98\% maximum accuracy with six different models. Finally, we demonstrate that there exist both high-accuracy and high-stability models that can perform well with no false negatives. \section{Acknowledgements} \subsection{NASA} PML gratefully acknowledges funding from NASA NIAC NNX15AL91G and NASA NIAC NNX16AL32G for the NASA Starlight program and the NASA California Space Grant NASA NNX10AT93H, a generous gift from the Emmett and Gladys W. Technology Fund, as well as support from the Breakthrough Foundation for its Breakthrough StarShot program. More details on the NASA Starlight program can be found at \url{www.deepspace.ucsb.edu/Starlight}. \subsection{NSF} This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Models were run through initial testing phases using the Comet GPU cluster, allocation ID: TG-CCR180013.
\section{Introduction} Two class files, namely \file{cas-sc.cls} and \file{cas-dc.cls}, were written for typesetting articles submitted to journals in Elsevier's Complex Article Service (CAS) workflow. \subsection{Usage} \begin{enumerate} \item \file{cas-sc.cls} for single column journals. \begin{vquote} \documentclass[<options>]{cas-sc} \end{vquote} \item \file{cas-dc.cls} for double column journals. \begin{vquote} \documentclass[<options>]{cas-dc} \end{vquote} \end{enumerate} Both class files have a \verb+longmktitle+ option to handle long front matter. \section{Front matter} \begin{vquote} \title [mode = title]{This is a specimen $a_b$ title} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \author[1,3]{CV Radhakrishnan}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \cormark[1] \fnmark[1] \ead{cvr_1@tug.org.in} \ead[url]{www.cvr.cc, cvr@sayahna.org} \end{vquote} \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{CV Rajagopal}[% role=Co-ordinator, suffix=Jr, ] \fnmark[2] \ead{cvr3@sayahna.org} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} \address[2]{Sayahna Foundation, Jagathy, Trivandrum 695014, India} \author[1,3]{Rishi T.} \cormark[2] \fnmark[1,3] \ead{rishi@stmdocs.in} \ead[URL]{www.stmdocs.in} \address[3]{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the first author footnote.
but is common to third author as well.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long to make two lines of footnote text.} \end{vquote} \begin{vquote} \nonumnote{This note has no numbers. In this work we demonstrate $a_b$ the formation Y\_1 of a new type of polariton on the interface between a cuprous oxide slab and a polystyrene micro-sphere placed on the slab. } \begin{abstract}[S U M M A R Y] This template helps you to create a properly formatted \LaTeX\ manuscript. \noindent\texttt{\textbackslash begin{abstract}} \dots \texttt{\textbackslash end{abstract}} and \verb+\begin{keyword}+ \verb+...+ \verb+\end{keyword}+ which contain the abstract and keywords respectively. Each keyword shall be separated by a \verb+\sep+ command. \end{abstract} \begin{keywords} quadrupole exciton \sep polariton \sep \WGM \sep \BEC \end{keywords} \maketitle \end{vquote} \begin{figure} \includegraphics[width=\textwidth]{sc-sample.pdf} \caption{Single column output (classfile: cas-sc.cls).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{dc-sample.pdf} \caption{Double column output (classfile: cas-dc.cls).} \end{figure} \subsection{Title} The \verb+\title+ command has the following options: \begin{enumerate} \item \verb+title:+ Document title \item \verb+alt:+ Alternate title \item \verb+sub:+ Sub title \item \verb+trans:+ Translated title \item \verb+transsub:+ Translated sub title \end{enumerate} \begin{vquote} \title[mode=title]{This is a title} \title[mode=alt]{This is an alternate title} \title[mode=sub]{This is a sub title} \title[mode=trans]{This is a translated title} \title[mode=transsub]{This is a translated sub title} \end{vquote} \subsection{Author} The \verb+\author+ command has the following options: \begin{enumerate} \item \verb+auid:+ Author id \item \verb+bioid:+ Biography id \item \verb+alt:+ Alternate author \item \verb+style:+ Style of author name
chinese \item \verb+prefix:+ Prefix Sir \item \verb+suffix:+ Suffix \item \verb+degree:+ Degree \item \verb+role:+ Role \item \verb+orcid:+ ORCID \item \verb+collab:+ Collaboration \item \verb+anon:+ Anonymous author \item \verb+deceased:+ Deceased author \item \verb+twitter:+ Twitter account \item \verb+facebook:+ Facebook account \item \verb+linkedin:+ LinkedIn account \item \verb+plus:+ Google plus account \item \verb+gplus:+ Google plus account \end{enumerate} \begin{vquote} \author[1,3]{Author Name}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910, facebook=<facebook id>, twitter=<twitter id>, linkedin=<linkedin id>, gplus=<gplus id>] \end{vquote} \subsection{Various Marks in the Front Matter} The front matter becomes complicated due to various kinds of notes and marks to the title and author names. Marks in the title will be denoted by a star ($\star$) mark; footnotes are denoted by superscripted Arabic numerals, and the corresponding author by an asterisk (*) mark. \subsubsection{Title marks} A title mark can be entered by the command \verb+\tnotemark[<num>]+ and the corresponding text can be entered with the command \verb+\tnotetext[<num>]+ \verb+{<text>}+. An example will be: \begin{vquote} \title[mode=title]{Leveraging social media news to predict stock index movement using RNN-boost} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \verb+\tnotetext+ and \verb+\tnotemark+ can be anywhere in the front matter, but shall be before the \verb+\maketitle+ command.
\subsubsection{Author marks} Author names can have many kinds of marks and notes: \begin{vquote} footnote mark : \fnmark[<num>] footnote text : \fntext[<num>]{<text>} affiliation mark : \author[<num>] email : \ead{<emailid>} url : \ead[url]{<url>} corresponding author mark : \cormark[<num>] corresponding author text : \cortext[<num>]{<text>} \end{vquote} \subsubsection{Other marks} At times, authors want footnotes which leave no marks in the author names. The note text shall be listed as part of the front matter notes. The class files provide \verb+\nonumnote+ for this purpose. The usage is \begin{vquote} \nonumnote{<text>} \end{vquote} \noindent and it should be entered anywhere before the \verb+\maketitle+ command for it to take effect. \subsection{Abstract and Keywords} The abstract shall be entered in an environment that starts with \verb+\begin{abstract}+ and ends with \verb+\end{abstract}+. Longer abstracts spanning more than one page are also possible, even in double column mode; invoke the \verb+longmktitle+ option in the class loading line for this to work smoothly. The keywords are enclosed in a \verb+{keywords}+ environment. \begin{vquote} \begin{abstract} This is an abstract.
\lipsum[3] \end{abstract} \begin{keywords} First keyword \sep Second keyword \sep Third keyword \sep Fourth keyword \end{keywords} \end{vquote} \section{Main Matter} \subsection{Tables} \subsubsection{Normal tables} \begin{vquote} \begin{table} \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLL@{} } \toprule Col 1 & Col 2\\ \midrule 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ \bottomrule \end{tabular*} \end{table} \end{vquote} \subsubsection{Span tables} \begin{vquote} \begin{table*}[width=.9\textwidth,cols=7,pos=h] \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLLLLL@{} } \toprule Col 1 & Col 2 & Col 3 & Col4 & Col5 & Col6 & Col7\\ \midrule 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ \bottomrule \end{tabular*} \end{table*} \end{vquote} \subsection{Figures} \subsubsection{Normal figures} \begin{vquote} \begin{figure} \centering \includegraphics[scale=.75]{Fig1.pdf} \caption{The evanescent light - $1S$ quadrupole coupling ($g_{1,l}$) scaled to the bulk exciton-photon coupling ($g_{1,2}$). The size parameter $kr_{0}$ is denoted as $x$ and the \PMS is placed directly on the cuprous oxide sample ($\delta r=0$, See also Fig. \protect\ref{FIG:2}).} \label{FIG:1} \end{figure} \end{vquote} \subsubsection{Span figures} \begin{vquote} \begin{figure*} \centering \includegraphics[width=\textwidth,height=2in]{Fig2.pdf} \caption{Schematic of formation of the evanescent polariton on linear chain of \PMS. The actual dispersion is determined by the ratio of two coupling parameters such as exciton-\WGM coupling and \WGM-\WGM coupling between the microspheres.} \label{FIG:2} \end{figure*}\end{vquote} \subsection{Theorem and theorem-like environments} The CAS class file provides a few hooks to format theorems and theorem-like environments with ease.
All the options that are used with the standard \verb+\newtheorem+ command will work exactly in the same manner. The class file provides three commands to format theorem or theorem-like environments: \begin{enumerate} \item The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font for the theorem statement, bold weight for the theorem heading and the theorem number typeset at the right of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. Here is an example coding and output: \begin{vquote} \newtheorem{theorem}{Theorem} \begin{theorem}\label{thm} The \WGM evanescent field penetration depth into the cuprous oxide adjacent crystal is much larger than the \QE radius: \begin{equation*} \lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1} \right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6 \mbox{ \AA} \end{equation*} \end{theorem} \end{vquote} \item The \verb+\newdefinition+ command does exactly the same thing as \verb+\newtheorem+ except that the body font is up-shape instead of italic. See the example below: \begin{vquote} \newdefinition{definition}{Definition} \begin{definition} The bulk and evanescent polaritons in cuprous oxide are formed through the quadrupole part of the light-matter interaction: \begin{equation*} H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s} \cdot {\bf p} \end{equation*} \end{definition} \end{vquote} \item The \verb+\newproof+ command helps to define proof and custom proof environments without counters, as shown in the example code. Given below is an example of a proof-of-theorem kind. \begin{vquote} \newproof{pot}{Proof of Theorem \ref{thm}} \begin{pot} The photon part of the polariton trapped inside the \PMS moves as it would move in a micro-cavity of the effective modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it can escape through the evanescent field.
This evanescent field essentially has a quantum origin and is due to tunneling through the potential caused by dielectric mismatch on the \PMS surface. Therefore, we define the \emph{evanescent} polariton (\EP) as an evanescent light - \QE coherent superposition. \end{pot} \end{vquote} \end{enumerate} \subsection{Enumerated and Itemized Lists} The CAS class files provide extended list processing macros which make the usage more user friendly than the default LaTeX list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. The coding and typeset copy are shown below. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.' so that the item counter will be suffixed by a period as in the optional argument. \item If you provide a closing parenthesis to the number in the optional argument, the output will have closing parenthesis for all the item counters. \item You can use `(a)' for alphabetical counter and `(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \end{vquote} \begin{vquote} \item Another one before we close the third level. \end{enumerate} \item Third item in second level. \end{enumerate} \item All list items conclude with this step. \end{enumerate} \section{Biography} The \verb+\bio+ command has the following options: \begin{enumerate} \item \verb+width:+ Width of the author photo (default is 1in). \item \verb+pos:+ Position of author photo. \end{enumerate} \begin{vquote} \bio[width=10mm,pos=l]{tuglogo.jpg} \textbf{Another Biography:} Recent experimental \cite{HARA:2005} and theoretical \cite{DEYCH:2006} studies have shown that the \WGM can travel along the chain as "heavy photons".
Therefore the \WGM acquires the spatial dispersion, and the evanescent quadrupole polariton has the form (See Fig.\ref{FIG:3}): \endbio \end{vquote} \section[CRediT...]{CRediT authorship contribution statement} Give the authorship contribution after each author as \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \end{vquote} To print the details use \verb+\printcredits+ \begin{vquote} \author[1,3]{V. {{\=A}}nand Rawat}[auid=000, bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \end{vquote} \begin{vquote} \cormark[1] \fnmark[1] \ead{cvr_1@tug.org.in} \ead[url]{www.cvr.cc, www.tug.org.in} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Indian \TeX{} Users Group, Trivandrum 695014, India} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{T. Rishi Nair}[role=Co-ordinator, suffix=Jr] \fnmark[2] \ead{rishi@sayahna.org} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} . . . . . . . . . \printcredits \end{vquote} \section{Bibliography} For CAS categories, two reference models are recommended: \file{model1-num-names.bst} and \file{model2-names.bst}. The former formats the reference list and citations according to a numbered scheme, whereas the latter formats them according to the name-date (author-year) style. Authors are requested to choose one of these according to the journal style. These bst files are available for download at: \url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} \hfill $\Box$ \end{document} \section{Introduction} Due to their outstanding performance, Deep Neural Networks (DNN) are increasingly used and commercialised in virtually all application scenarios wherein complex data, for which precise statistical models do not exist, must be analysed and processed.
Training a DNN model is a difficult and computationally intensive task, requiring huge amounts of labelled data and extensive training procedures that may easily go on for weeks, even on powerful workstations equipped with several GPUs. For this reason, the demand for methods to protect the Intellectual Property Rights (IPR) associated with DNNs is rising. Following similar efforts made for media protection \cite{BBCP98, Bloom99, Pod01}, the use of watermarking has recently been proposed as a way to track the distribution of DNN models and protect the IPR of model vendors \cite{AdiUsenix18,Uchida17}. By indissolubly tying a watermark to a DNN model, in fact, it would be possible to prove the ownership of the model, and to track its unauthorized distribution, with the possibility of tracing back to the user who illegally distributed it. With respect to multimedia watermarking, for which a well-established theory has been developed in the last two decades \cite{Cox02, BB04}, embedding a watermark into a DNN model is quite a different problem, for which only a few newly proposed techniques exist \cite{Uchida17, darvish2019deepsigns, zhang2018protecting, le2019adversarial}. Generally speaking, DNN watermarking techniques can be split into two main categories: static and dynamic watermarking. Static DNN watermarking methods, like \cite{Uchida17, chen2019deepmarks}, embed the watermark into the weights of the DNN model. With dynamic watermarking, instead, the watermark is associated to the behaviour of the network in correspondence to specific inputs. For instance, the watermark may be associated to the activation map of the neurons in correspondence to certain inputs, as in \cite{darvish2019deepsigns}, or to the final output of the model, as in \cite{le2019adversarial, zhang2018protecting, chen2019blackmarks}.
With regard to static DNN watermarking, which is the kind of technique we focus on in this paper, a possible approach to embed the watermark consists in adding a specific term to the loss function used for training, requiring that the weights of the network satisfy certain properties. For instance, in \cite{Uchida17}, it is required that the weights {\em correlate well} with a pseudorandom watermark sequence. In this way, the watermark is not added to a pre-trained model; on the contrary, the weights are trained from scratch in such a way as to satisfy the desired properties. Directly generating the weights so that they have a high correlation with the watermark sequence somewhat resembles spread spectrum watermarking with informed embedding \cite{Mill04}, according to which embedding is achieved by applying a signal-dependent perturbation to the to-be-watermarked sequence. In this way, the embedding procedure is adapted to the signal at hand, resulting in a lower distortion and a lower (ideally zero) bit error rate. Informed coding is another concept that has been proven to greatly improve the performance of a watermarking system \cite{Mill04}. The informed coding paradigm stems from the interpretation of watermarking as a problem of channel coding with side information at the transmitter \cite{Cox99, Costa83}. In a nutshell, with informed coding each watermark message is associated to a pool of codewords (rather than to a single one), then informed watermark embedding is applied by choosing the codeword that results in the minimum distortion. Practical implementations of the informed coding paradigm include several popular watermarking schemes like Quantization Index Modulation (QIM) \cite{Chen01}, Dither Modulation (DM) \cite{Chen98} and Scalar Costa's Scheme (SCS) \cite{Egg03}.
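The QIM/DM idea mentioned above can be illustrated with a minimal scalar sketch (the step size and values below are ours, purely for illustration, not from any of the cited schemes): bits 0 and 1 each own one of two interleaved uniform quantizers; embedding snaps a value onto the chosen quantizer, and decoding returns the bit of the overall nearest codeword.

```python
# Minimal scalar sketch of Quantization Index Modulation (QIM) / dither
# modulation: bits 0 and 1 own two interleaved uniform quantizers of step
# 'delta'. Step size and values are illustrative only.

def qim_embed(x, bit, delta):
    """Quantize x onto the lattice {k * delta + bit * delta / 2}."""
    offset = bit * delta / 2.0
    return round((x - offset) / delta) * delta + offset

def qim_decode(y, delta):
    """Return the bit whose quantizer has the codeword nearest to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1

delta = 1.0
y = qim_embed(0.37, bit=1, delta=delta)   # embed bit 1 into the value 0.37
assert qim_decode(y, delta) == 1          # exact decoding without noise
assert qim_decode(y + 0.2, delta) == 1    # survives noise below delta / 4
```

Note how the decoder needs no side information beyond the step size: robustness comes from the quantizer spacing, which is the distortion/robustness trade-off the text refers to.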
The possibility of coupling the strengths of QIM and Spread Spectrum (SS) watermarking has also been investigated, leading to Spread-Transform Dither-Modulation (ST-DM) watermarking \cite{Chen01}, whereby DM is applied to the projection of the watermark sequence onto a predefined spreading direction. Within the wide class of informed watermarking methods, ST-DM has been widely applied due to its simplicity and good performance. In this work, we propose a new DNN watermarking algorithm based on ST-DM. To do so, we modify the algorithm proposed in \cite{Uchida17} by training the DNN with a new loss function. The new loss is explicitly designed in such a way that the correlation of the DNN weights with a given spreading sequence assumes a quantized value belonging to one of two interleaved and equally spaced quantizers, associated, respectively, with a watermark message bit equal to 0 or 1. Multi-bit watermarking is then achieved by using different spreading sequences to encode different bits. According to watermarking theory \cite{BB04}, the expected advantage of the informed coding paradigm is either a lower obtrusiveness of the watermark for a given payload, or the possibility of embedding a higher payload for a given level of obtrusiveness. On the other hand, the application of DM on top of the spread transformation, that is, after projection onto a pseudorandom spreading sequence, guarantees that a satisfactory level of robustness is achieved. In order to verify the effectiveness of the proposed approach, we applied the new algorithm to watermark different DNN architectures targeting different classification tasks. By comparing our results with those achieved by Uchida et al.'s algorithm \cite{Uchida17}, the advantages predicted by theory are confirmed in terms of a larger achievable payload and a smaller impact on the accuracy of the network.
We also assessed the robustness of the watermark, confirming that the robustness loss with respect to conventional SS watermarking is a minor one. The rest of this paper is organised as follows. In Section \ref{sec.prior}, we review prior art in DNN watermarking. In Section \ref{sec.prop}, we briefly review ST-DM watermarking and describe our algorithm for ST-DM watermarking of DNN models. Sections \ref{sec.exp} and \ref{sec.res} are devoted to the experimental validation of the new algorithm. The paper ends in Section \ref{sec.con} with our conclusions and the discussion of some directions for future research. \section{Prior art and Background} \label{sec.prior} Digital watermarking aims at embedding a message into a digital content (an image, an audio file, etc \dots), in such a way that the message can be reliably recovered and used to demonstrate the ownership of the content, or to trace back to the individual who redistributed the watermarked content illegally. In this section, we briefly review the existing literature on DNN watermarking, with particular attention to the method proposed by Uchida et al. \cite{Uchida17}, which is the basis of the new watermarking scheme proposed in this paper. \subsection{Existing DNN watermarking techniques} Several recent works have explored the possibility of injecting a watermark into a DNN model. Watermarking of DNN models leverages the capability of DNNs to fit data with arbitrary labels. Such a capability is achieved thanks to a huge number of parameters that can also be used to carry additional information beyond what is required for the primary classification task the network is dedicated to. As we said in the Introduction, DNN watermarking techniques can be split into static and dynamic methods. \paragraph{Static watermarking.} The first example of static DNN watermarking was proposed by Uchida et al.
in \cite{Uchida17, nagai2018digital}, according to which the watermark bits are embedded into the weights of the to-be-marked DNN. Another static watermarking method has been proposed in \cite{chen2019deepmarks}. Both in \cite{Uchida17} and \cite{chen2019deepmarks}, embedding is achieved by adding a proper regularization term to the loss function during training. More details on the system described in \cite{Uchida17} are provided in the next section. The maximum capacity that can be achieved with such watermarking schemes, without affecting the primary classification task, depends on the dimensionality and the number of parameters of the network. Extracting the watermark requires white-box access to the model in order to get the necessary information about the values of the model weights. An obvious drawback of static methods is that the watermark, in general, is not very robust against model re-training or model pruning. \paragraph{Dynamic watermarking.} With dynamic watermarking, the watermark bits are extracted by looking at the model outputs when the network is fed with a specific input, sometimes called the triggering input. With these approaches, the watermark can then be extracted in a black-box manner, since access to the internal status of the network is not required. The watermark may also be embedded in the activation map resulting from the application of the triggering input. In this way a higher payload can be embedded; however, watermark extraction requires that the internal status of the network is accessible. Dynamic watermarking methods have been proposed in \cite{le2019adversarial,zhang2018protecting,AdiUsenix18,darvish2019deepsigns}. They all focus on zero-bit watermarking, except for \cite{darvish2019deepsigns}, which proposes two dynamic watermarking methods, one for zero-bit and the other for multi-bit watermarking.
Specifically, the method proposed in \cite{le2019adversarial} consists in modifying the original model boundary by using adversarial retraining; the watermark is given by the adversarial examples close to the decision boundary considered for the retraining. To extract the watermark, the model is queried with the adversarial images (playing the role of a watermarking key). Zhang et al. \cite{zhang2018protecting} propose to train the to-be-watermarked model with a set of inputs crafted in order to trigger the assignment of a specific target label to those images. This approach is very similar to trojaning \cite{liutrojaning} and shares the same advantages and drawbacks. Similarly, Adi et al. \cite{AdiUsenix18} explore the possibility of using images that are misclassified by the model as the key images. Finally, in \cite{darvish2019deepsigns}, Rouhani et al. introduce a general watermarking methodology that can be used in both white-box and black-box settings, where watermark extraction may or may not require access to the model internal parameters. The algorithm is based on embedding the watermark into the probability density function (pdf) of the activations in various layers of the DNN. The approach is shown to withstand various removal and transformation attacks, including model compression, fine-tuning, and watermark overwriting. \subsection{Uchida et al's DNN watermarking algorithm} \label{Uchida_paper} Since it forms the basis of the watermarking method proposed in this paper, we now describe in more detail the watermarking algorithm introduced by Uchida et al. in \cite{Uchida17}. Let us indicate with ${\bf b} \in \{0,1\}^l$ the vector of watermark bits. For a selected convolutional layer, let $(s, s)$, $d$, and $n$ represent, respectively, the kernel size of the filters, the depth of the input and the number of filters. Ignoring the bias term, the weights of the selected layer can be denoted by a tensor $ \mathbf{W} \in \mathbb{R}^{s\times s\times d\times n}$.
In order to embed the watermark bits into the weights, it is convenient to flatten $\mathbf{W}$ according to the following steps: i) calculate the mean of $\mathbf{W}$ over the $n$ filters, getting $\overline{\mathbf{W}} \in \mathbb{R}^{s\times s\times d}$ with $\overline{W}_{ijk} = \frac{1}{n} \sum_{h=1}^{n}W_{ijkh} $, in order to eliminate the effect of the order of the filters; ii) flatten $\overline{\mathbf{W}}$, producing a vector $\mathbf{w} \in \mathbb{R}^{v}$ with $v=s\times s\times d$. Embedding the watermark bits into the weights then corresponds to embedding the vector $\mathbf{b}$ into the vector $\mathbf{w}$. Embedding is achieved by training the network with a loss function $E(\mathbf{w})$ defined as follows: \begin{equation}\label{loss_fuction1} E(\mathbf{w}) = E_{0}(\mathbf{w}) + \lambda E_{R}(\mathbf{w}), \end{equation} where $E_{0}(\mathbf{w})$ represents the original loss function of the target DNN model (ensuring good behavior with regard to the classification task), $E_{R}(\mathbf{w})$ is a regularization term added to ensure correct watermark decoding, and $\lambda$ is a parameter adjusting the tradeoff between the original loss term and the regularization term. Specifically, $E_{R}(\mathbf{w})$ is given by \begin{equation}\label{regulization} E_{R}(\mathbf{w}) = -\sum_{j=1}^{l}(b_{j}\log(y_{j})+(1-b_{j})\log(1-y_{j})), \end{equation} \noindent where $b_{j}$ is the $j$-th bit of $\mathbf{b}$ and $y_{j}=\sigma\big(\sum_{i}X_{ji}w_{i}\big)$, with $w_{i}$ denoting the $i$-th element of $\mathbf{w}$, $\mathbf{X} \in \mathbb{R}^{l\times v}$ playing the role of the watermarking key, and $\sigma(\cdot)$ being the sigmoid function: \begin{equation} \sigma(x)=\frac{1}{1+\exp(-x)}. \end{equation} Three possible ways to construct $\mathbf{X}$ were considered in \cite{Uchida17}. Eventually, based on experimental considerations, $\mathbf{X}$ is built by considering entries independently drawn from a standard normal distribution $N(0,1)$.
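The flattening, regularization and decoding steps described above can be sketched in a few lines of NumPy. This is our own illustrative code, not the authors' implementation; function names and shapes are ours:

```python
import numpy as np

def flatten_weights(W):
    """Average a (s, s, d, n) weight tensor over the n filters and
    flatten it into a vector w of length v = s*s*d."""
    return W.mean(axis=3).reshape(-1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def uchida_regularizer(w, X, b):
    """Binary cross-entropy term E_R(w): pushes each projection
    sum_i X[j, i] * w[i] towards the sign prescribed by bit b[j]."""
    y = sigmoid(X @ w)
    return -np.sum(b * np.log(y) + (1 - b) * np.log(1 - y))

def extract_bits(w, X):
    """Decoding rule: threshold each projection at 0."""
    return (X @ w >= 0).astype(int)
```

Training with $E(\mathbf{w}) = E_0(\mathbf{w}) + \lambda E_R(\mathbf{w})$ then drives each projection towards the half-line prescribed by the corresponding watermark bit.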
Watermark extraction is simple: it consists in computing the projection of $\mathbf{w}$ onto each row $X_j$ of $\mathbf{X}$ and thresholding the projection at 0, that is: \begin{equation} \hat{b}_j =\begin{cases} 1 & \sum_{i}X_{ji}w_{i} \geq 0,\\ 0 & \text{otherwise}. \end{cases} \end{equation} \section{The proposed DNN watermarking algorithm} \label{sec.prop} \subsection{Security model and requirements} \label{subsec.requirements} We start by describing the security model and the requirements that the watermark message must satisfy. \paragraph{Security model.} The watermarking method we aim at developing should be usable for both ownership verification and traitor tracing \cite{fiat2001dynamic}, that is, to verify the legitimate owner of the model or to trace back the individual who illegally redistributed it. These goals are more easily achieved by means of multi-bit watermarking, due to its superior flexibility with respect to zero-bit watermarking (see \cite{BB04} for a detailed discussion of the pros and cons of readable, or multi-bit, watermarking and detectable, or zero-bit, watermarking). Let $F$ be the to-be-protected model. During the training process, the model owner embeds a message $\mathbf{b}$ into the weights of $F$. To do so, the embedder relies on a secret key $K$. The watermark extraction process is depicted in Figure \ref{watermark_extraction}. As shown in the figure, watermark retrieval requires the knowledge of the key $K$ and is carried out by inspecting the weights of $F$, thus qualifying the proposed approach as a white-box watermarking algorithm. Watermark retrieval does not require that the message $\mathbf{b}$ is known in advance, as it would be in the case of zero-bit watermarking. As we said, the content of the watermark may be used to determine the owner of the model by means of an ownership verification protocol, or to identify the individual who redistributed the content by means of a traitor tracing protocol.
In the rest of the paper, we describe the insertion and extraction steps of the new ST-DM watermarking algorithm, and investigate the payload and robustness achieved by the new algorithm, regardless of the specific protocols wherein the watermark is used. \begin{figure}[] \centering \includegraphics[scale=.35]{watermark_extraction_process.pdf} \caption{The watermark extraction process.} \label{watermark_extraction} \end{figure} \paragraph{Watermark requirements.} Based on the security model described above, the watermarking algorithm must satisfy the following properties. \begin{itemize} \item{\textbf{\emph{unobtrusiveness}}: embedding the watermark message $\bf{b}$ should not have a significant impact on $F$. That is, the presence of the watermark should not degrade the accuracy of the network with respect to a non-watermarked one.} \item{\textbf{\emph{robustness}}: it should be possible to recover $\bf b$ even from a modified version of the network $F$. } \item{\textbf{\emph{integrity}}: in the absence of model modifications, the extracted watermark $\bf b^{\prime}$ should be equal to $\bf b$, that is, the bit error rate should be zero.} \item{\textbf{\emph{payload}}: the payload of the watermark, that is the length of the message $\bf b$, should be as large as possible. This is a particularly important requirement for traitor tracing applications, since a larger payload makes it possible to index a larger number of users and allows the use of more powerful anti-collusion codes \cite{trappe2003anti}. } \end{itemize} \paragraph{Robustness requirements.} With respect to the robustness requirement, we consider two kinds of model modifications: fine-tuning and parameter (network) pruning. These are common operations carried out routinely by network users, even in a non-adversarial setting wherein the users do not explicitly aim at removing the watermark from the model.
\begin{itemize} \item{\textbf{\emph{Fine-tuning}} is a common operation related to transfer learning. It consists in retraining a model, initially trained to solve a given task, so as to adapt it to a new task (possibly related to the original one). Computationally, fine-tuning is far less expensive than training a model from scratch, hence it is often applied by model users to adapt a pre-trained model to their needs. Since fine-tuning alters the weights of the watermarked model, it is necessary to make sure that the watermark is robust against a moderate amount of fine-tuning.} \item{\textbf{\emph{Network pruning}} is a common strategy to simplify a complicated DNN model so that it can be deployed on low-power or computationally weak devices like embedded systems or mobile devices. During pruning, the model weights whose absolute value is smaller than a threshold are artificially set to zero. We require that the embedded watermark is resistant to this operation.} \end{itemize} \subsection{ST-DM watermarking} \label{subsec.stdm} In this section, we briefly review the main ideas behind ST-DM watermarking, since they represent the basis of the new DNN watermarking method proposed in this paper. Spread Transform Dither Modulation (ST-DM) is a watermarking algorithm coupling QIM and spread spectrum in a very simple fashion. The starting point for understanding ST-DM is Dither Modulation watermarking (DM), the simplest form of QIM. Given a host sample\footnote{In contrast to the common terminology adopted in the watermarking literature, we use the symbol $w$ to indicate the samples hosting the watermark, since here we are interested in watermarking the weights of CNN models.} $w$ and a watermark bit $b$, the marked sample $w_m$ is obtained by quantizing $w$ with one of two scalar quantizers $\mathcal Q_0$ and $\mathcal Q_1$.
As shown in Figure \ref{fig.DM}, the codebooks associated with $\mathcal Q_0$ and $\mathcal Q_1$ form two uniform interleaved quantizers with quantization step $\Delta$: \begin{align} &\mathcal U_0 = \left\{ k\Delta, k \in \mathbb{Z} \right\}\\ &\mathcal U_1 = \left\{ k\Delta + \Delta/2, k \in \mathbb{Z} \right\}. \label{eq.codebook_DM} \end{align} \begin{figure}[t!] \centering \includegraphics[scale=.40]{scalar_DM_watermarking.pdf} \caption{Codebook entries for scalar DM watermarking.} \label{fig.DM} \end{figure} Watermark embedding is achieved by quantizing $w$ either with $\mathcal Q_0$ or $\mathcal Q_1$: \begin{equation} w_m = \left\{ \begin{split} {\mathcal Q}_0(w)~~~~~&b=0\\ {\mathcal Q}_1(w)~~~~~&b=1 \end{split} \right. \label{eq.SDM} \end{equation} Retrieving the watermark from $w_m$ is straightforward. Given a watermarked sample $w_m$, it is only necessary to identify the entry in $\mathcal U_0 \cup \mathcal U_1$ closest to $w_m$ and check whether such an entry belongs to $\mathcal U_0$ or $\mathcal U_1$; in formulas: \begin{equation} \hat{b} = \phi_{DM}(w) = \arg \min_{b=0,1} (\min_{u_{k}\in {\mathcal U}_b} | w_m - u_{k}|). \label{eq.DM_decod_rule} \end{equation} A graphical representation of the decoding function $\phi_{DM}(w)$ is given in Figure \ref{fig.DMdec}. The intuition behind DM watermarking is that for every host sample $w$ a nearby codeword exists, so that $w$ can be quantized with a small distortion. In this sense, $\Delta$ controls the distortion introduced by the watermarking process: since $\Delta$ is the quantization step, a smaller $\Delta$ results in a smaller distortion. A drawback of DM watermarking, especially when a small value of $\Delta$ is used, is its lack of robustness. In fact, adding a small perturbation to $w_m$ may easily move the sample close to a wrong codebook entry, thus producing a decoding error.
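As a concrete illustration of Eqs. \eqref{eq.SDM} and \eqref{eq.DM_decod_rule}, the two interleaved quantizers and the DM decoder can be sketched as follows (our own illustrative code, not the authors' implementation):

```python
import numpy as np

def dm_embed(w, b, delta):
    """Quantize the host sample w with Q_0 (codebook k*delta) or
    Q_1 (codebook k*delta + delta/2), depending on the bit b."""
    offset = 0.0 if b == 0 else delta / 2.0
    return np.round((w - offset) / delta) * delta + offset

def dm_decode(wm, delta):
    """Return the index of the codebook containing the entry of
    U_0 ∪ U_1 closest to wm (the decoding function phi_DM)."""
    d0 = abs(wm - dm_embed(wm, 0, delta))
    d1 = abs(wm - dm_embed(wm, 1, delta))
    return 0 if d0 <= d1 else 1
```

For any $w$ and $b$, the embedding distortion is at most $\Delta/2$, while a perturbation of $w_m$ larger than $\Delta/4$ may already flip the decoded bit, which is exactly the payload-versus-robustness trade-off discussed above.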
ST-DM provides a way to increase the robustness of DM watermarking (at the price of a reduced payload) by embedding the bit $b$ into a sequence of $n$ host samples ${\bf w} = (w_1, w_2 \dots w_n)$. More specifically, let $\rho_x$ be the projection of ${\bf w}$ onto a unitary-norm pseudo-random sequence ${\bf x} = (x_1, x_2 \dots x_n)$: \begin{equation} \rho_x = \left< {\bf w}, {\bf x} \right> = \sum_i w_i x_i. \label{eq.STDM1} \end{equation} ST-DM watermarking works by applying DM to $\rho_x$ rather than to the samples in ${\bf w}$. More specifically, first the projection of ${\bf w}$ onto ${\bf x}$ is removed from ${\bf w}$, then a new component yielding the desired projection is added back to ${\bf w}$: \begin{equation} {\bf w}_m = {\bf w} - \rho_x {\bf x} + \mathcal Q_b(\rho_x) {\bf x} \label{eq.STDM2} \end{equation} To embed more than one bit into ${\bf w}$, the above procedure is repeated for different pseudo-random directions: % \begin{equation} {\bf w}_m = {\bf w} + \sum_i \big( \mathcal Q_{b_i}(\rho_{x_i}) - \rho_{x_i} \big) {\bf x}_i, \label{eq.STDM3} \end{equation} where $b_i$ is the bit embedded along the direction ${\bf x}_i$. Watermark retrieval is obtained by computing the projections of ${\bf w}_m$ onto the pseudo-random directions ${\bf x}_i$ and applying DM decoding as in Eq. \eqref{eq.DM_decod_rule} to the projected values. If the pseudo-random directions are orthogonal to each other, quantizations over different directions do not interfere with each other, hence resulting in error-free watermark retrieval. In practice, if the sequences are generated randomly and $n$ is large enough, they can be assumed to be nearly orthogonal and error-free decoding is still possible, unless the payload is too large. \begin{figure} \centering \includegraphics[scale=.43]{scalar_DM_decoding.pdf} \caption{The decoding function $\phi _{DM}(w)$.} \label{fig.DMdec} \end{figure} \subsection{ST-DM-based watermarking of DNN models} \label{subsec.embedding} We are now ready to describe the new DNN watermarking algorithm.
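The spread-transform steps (project, quantize the projection, add the correction back along the same direction) can be sketched as follows. This is an illustrative sketch of ours, assuming unit-norm, mutually orthogonal rows of $\mathbf{X}$; it is not the authors' code:

```python
import numpy as np

def dm_quantize(rho, b, delta):
    """Scalar DM quantizer applied to the projection rho."""
    offset = 0.0 if b == 0 else delta / 2.0
    return np.round((rho - offset) / delta) * delta + offset

def stdm_embed(w, bits, X, delta):
    """Embed one bit per (unit-norm) row of X: remove the projection
    rho_i = <w, x_i> and add back its quantized value along x_i."""
    wm = np.asarray(w, dtype=float).copy()
    for b, x in zip(bits, X):
        rho = wm @ x
        wm += (dm_quantize(rho, b, delta) - rho) * x
    return wm

def stdm_decode(wm, X, delta):
    """Project onto each direction and apply scalar DM decoding."""
    bits = []
    for x in X:
        rho = wm @ x
        d0 = abs(rho - dm_quantize(rho, 0, delta))
        d1 = abs(rho - dm_quantize(rho, 1, delta))
        bits.append(0 if d0 <= d1 else 1)
    return bits
```

With exactly orthogonal directions the quantizations do not interfere, so decoding is error-free; with nearly orthogonal random directions small cross-talk appears, as noted above.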
Our goal is to leverage the superior performance of ST-DM watermarking to find a better trade-off between watermark unobtrusiveness and payload. To do so, we modify the scheme proposed by Uchida et al. to incorporate within it the ST-DM watermarking principles. Following the notation introduced in Section \ref{Uchida_paper}, we embed the vector ${\bf b}$ into $\mathbf {w}$ by training the DNN model so that applying the watermark decoding function to $\mathbf{w}$ results in the correct decoded bits. In particular, the loss function used for training has the following form: \begin{equation}\label{loss_function2} E_{\text{ST-DM}}(\mathbf {w})=E_{0}(\mathbf{w})+\lambda E_{R}'(\mathbf{w}) \end{equation} where $\lambda$ controls the tradeoff between the original loss term and the new regularization term $E_{R}'$. The term $E_{R}'$, enforcing correct decoding of the watermark bits, is given by \begin{equation}\label{ours_regularization} E_{R}'(\mathbf{w}) = -\sum_{j=1}^{l}(b_{j}\log(z_{j})+(1-b_{j})\log(1-z_{j})), \end{equation} where $\mathbf{z}$ corresponds to the application of the DM decoding function $\phi_{DM}$ to the projection of $\mathbf w$ onto the pseudorandom directions determined by the rows of a pseudorandom matrix $\mathbf{X}$ playing the role of the watermarking key, that is: \begin{equation} z_j = \phi_{DM}(\sum_i X_{ji} w_i). \end{equation} To generate $\mathbf{X}$, we followed the same approach used in \cite{Uchida17}, drawing the elements of $\mathbf{X}$ independently from a standard normal distribution $N(0,1)$. Since $\phi_{DM}$ is a highly non-linear function, which does not satisfy the regularity properties needed to apply back-propagation, we approximated $\phi_{DM}$ with a smoother function $\theta()$, defined as: \begin{equation}\label{ours_function} \theta(x) = \frac{e^{\alpha \sin \beta x}}{1 + e^{\alpha \sin \beta x}}. \end{equation} Then, $z_j$ in Eq.
\eqref{ours_regularization} is defined as $z_j = \theta\left(\sum_i X_{ji} w_i\right)$. The behavior of $\theta(x)$ is shown in Figure \ref{two_function} for the setting $\alpha = 10$ and $\beta = 10$. In particular, $\beta$ controls the period of $\theta(x)$ and hence plays a role similar to $\Delta$ in standard ST-DM, while $\alpha$ controls the smoothness of the loss function, with large $\alpha$'s approximating better the rectangular shape of $\phi_{DM}$ at the expense of a lower smoothness. \begin{figure} \centering \includegraphics[scale=.40]{two_functions-eps-converted-to.pdf} \caption{Behavior of the proposed function $\theta()$ and the sigmoid function used in \cite{Uchida17}.} \label{two_function} \end{figure} The benefit of using the new loss function based on $\theta$ instead of the usual sigmoid function as in \cite{Uchida17} can be easily understood by inspecting Figure \ref{two_function}. To binarize the outputs of $\theta()$ and $\sigma()$, we use the threshold value 0.5: the bit is decoded as 0 when the output is lower than 0.5, and as 1 otherwise. It is then immediate to see that with the new method the weights $w_i$ need, on average, a smaller modification to embed the same target bit than with \cite{Uchida17}. As an example, if the initial value of $\sum_{i}X_{ji}w_{i}$ is equal to $-8$, corresponding to $\sigma\left(\sum_{i}X_{ji}w_{i}\right) = \theta\left(\sum_i X_{ji} w_i\right) = 0$ (see Figure \ref{two_function}), and we want to embed a bit '1', with the function $\theta()$ it is sufficient to reach the condition $\sum_i X_{ji} w_i^\prime = -6$, while with the sigmoid function the condition $\sum_{i}X_{ji}w_{i}^\prime \geq 0$ has to be reached for a successful embedding.
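The surrogate in Eq. \eqref{ours_function} can be written equivalently as a sigmoid applied to $\alpha \sin \beta x$, which makes its periodicity and sharpness explicit; a minimal sketch (our own illustrative code):

```python
import numpy as np

def theta(x, alpha=10.0, beta=10.0):
    """theta(x) = sigmoid(alpha * sin(beta * x)): periodic with
    period 2*pi/beta; alpha controls how sharply the function
    switches between the levels 0 and 1."""
    return 1.0 / (1.0 + np.exp(-alpha * np.sin(beta * x)))
```

Because $\theta$ crosses 0.5 at every zero of $\sin \beta x$, a projection never has to move more than a quarter period to land on the desired side of the threshold.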
\subsection{Extraction} \label{subsec.extract} To extract the watermark, we simply project the watermarked weights $\mathbf{w}_m$ onto the directions established by the rows of the projection matrix $\mathbf{X}$, apply the function $\theta()$, and then binarize the output by thresholding at 0.5. Formally, the $j$-th bit is extracted as: \begin{equation} \hat{b}_j =\begin{cases} 1 & \theta\left(\sum_{i}X_{ji}w_{i}\right) \geq 0.5,\\ 0 & \text{otherwise}. \end{cases} \end{equation} \section{Experimental setting} \label{sec.exp} Before discussing the performance of ST-DM watermarking, we describe the setting used in our experiments. This setting has been defined so as to allow a fair comparison with SS watermarking, as implemented in \cite{Uchida17}. \subsection{Datasets and host networks} In our experiments, we evaluated the performance of the proposed watermarking method mainly by referring to the CIFAR-10~\cite{krizhevsky2009learning} classification task, solved by using the popular Wide Residual Networks (WRN) model \cite{zagoruyko2016wide}. To prove the generality of the proposed method, we also carried out some tests by considering another network architecture for the same task, namely ResNet~\cite{he2016deep}, and two other tasks, namely the German Traffic Sign Recognition Benchmark (GTSRB) \cite{stallkamp2012man} and ImageNet~\cite{deng2009imagenet}. For these two tasks, we considered the ResNet~\cite{he2016deep} and VGG~\cite{simonyan2014very} networks, respectively. \subsection{Settings and experiments} WRN is an efficient variant of the residual network (ResNet) with decreased depth and increased width. The structure of WRN is described in Table \ref{WRNs}. Groups of convolutions are shown in brackets, where $N$ is the number of blocks in each group. The network width is determined by a factor $k$, which establishes the growth rate of the number of filters $n$ from one layer to the other (and hence of the depth of the input).
Only the second layer of each convolutional block in \cite{zagoruyko2016wide} is actually considered for embedding (reported in bold in the table), as done in \cite{Uchida17}. Then, the number of filters is $n = 16\times k$ for each layer in the conv 2 block, $n = 32 \times k$ for each layer in the conv 3 block, and $n = 64 \times k$ for each layer in the conv 4 block. The input depth $d$ has the same value as $n$ when the second layer of each convolutional block is considered. The kernel size of the filters is fixed to $s \times s = 3\times 3$ for every layer. The number of embeddable weights $v = s\times s \times d$ in each layer of the convolutional blocks is also reported in Table \ref{WRNs} (last column) as a function of $k$. Following \cite{zagoruyko2016wide}, for all the experiments we set $N=1$ and $k=4$. For simplicity, in the following we will refer to conv 2, conv 3, or conv 4 to denote the embedding layer, keeping in mind that only the second layer of each convolutional block is actually considered. The maximum number of parameters that can be used for embedding in conv 2, conv 3 and conv 4 is then 576, 1152, and 2304, respectively. For training the non-watermarked and watermarked WRN models, we used SGD with Nesterov momentum equal to 0.9 and cross-entropy loss $E_0$, with minibatch size 64. A total number of 200 epochs was considered. The learning rate was initially set to 0.01, and then dropped by a factor of 0.2 at 60, 120 and 160 epochs. In addition to embedding the watermark within a single layer, as done in \citep{Uchida17}, we also considered the case of multi-layer embedding, so as to increase the number of weights hosting the watermark and allow a higher payload. Finally, the trade-off parameters $\lambda$ in Eq.~(\ref{loss_fuction1}) and Eq.~(\ref{loss_function2}) are both set to 0.01. We refer to Section \ref{sec.choice-parameters} for the choice of the parameters $\alpha$ and $\beta$ of the ST-DM watermarking method.
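The per-layer capacities quoted above follow directly from $v = s \times s \times d$ with $s = 3$, $d = n$ and $k = 4$; a quick arithmetic check:

```python
# k = widening factor, s = kernel size; for the embedded (second)
# layer of each block the input depth d equals the filter count n.
k, s = 4, 3
capacities = {}
for group, base in [("conv 2", 16), ("conv 3", 32), ("conv 4", 64)]:
    d = base * k                 # d = n = 16k, 32k, 64k
    capacities[group] = s * s * d
print(capacities)  # {'conv 2': 576, 'conv 3': 1152, 'conv 4': 2304}
```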
To demonstrate the effectiveness of the method on different architectures, we applied the proposed method (as well as the method in \cite{Uchida17}) to ResNet and VGG. Following the instructions in \cite{he2016deep}, we considered ResNet-34 and ResNet-50, with 33 and 49 convolutional layers, respectively. For training, the learning rate was initially set to $10^{-3}$ for these two architectures, and then decreased by a factor of 0.1 at 40, 80 and 100 epochs, for a total number of 120 epochs and a minibatch size of 32. Regarding the trainable weights to be used for watermark embedding, we chose the weights of the second-to-last convolutional layer of ResNet-34 and ResNet-50, for which $v = 576$. With regard to VGG, we adopted VGG-16 with 13 convolutional layers according to the architecture described in \cite{simonyan2014very}. The learning rate was initially set to $10^{-2}$, and then decreased by a factor of 0.1 whenever the validation accuracy stopped improving. The maximum number of epochs was set to 100. We used SGD with Nesterov momentum equal to 0.9 and cross-entropy loss $E_0$, with a batch size of 256. For watermarking, we chose the second convolutional layer in block3, for which the number of embeddable weights $v$ is 2304. \begin{table}[width=1\linewidth,cols=4,pos=h] \caption{Structure of WRN.
The layers considered for embedding are highlighted in bold.} \label{WRNs} \begin{tabular*}{\tblwidth}{@{} CCCC@{} } \toprule Group & Output size & Building block & $v$\\ \midrule conv 1 & $32 \times 32$ & [$3 \times 3, 16$] & N/A \\ conv 2 & $32 \times 32$ & $\begin{bmatrix}3 \times 3, 16 \times k\\ {\bf 3 \times 3, 16 \times k}\end{bmatrix} \times N$ & $\begin{matrix} 144 \times k \\ {\bf 144 \times k} \end{matrix} $ \\ conv 3 & $16 \times 16$ & $\begin{bmatrix}3 \times 3, 32 \times k\\ {\bf 3 \times 3, 32 \times k} \end{bmatrix} \times N$ & $\begin{matrix} 144 \times k\\ {\bf 288 \times k} \end{matrix} $ \\ conv 4 & $8 \times 8$ & $\begin{bmatrix}3 \times 3, 64 \times k\\ {\bf 3 \times 3, 64 \times k}\end{bmatrix} \times N$ & $\begin{matrix} 288 \times k \\ {\bf 576 \times k} \end{matrix} $ \\ & $1 \times 1$ & avg-pool, fc, soft-max & N/A \\ \bottomrule \end{tabular*} \end{table} \subsection {Choice of the parameters} \label{sec.choice-parameters} Before showing and discussing the results, we pause to discuss the choice of the parameters defining the loss term $E_R'$ of our method and how we generalized $E_R$ in \citep{Uchida17} for a fairer comparison. As discussed in Section \ref{subsec.embedding}, the choice of the parameters $\alpha$ and $\beta$ determining the shape of the function $\theta()$ in Eq. (\ref{ours_function}) is an important one. A general view of the impact of $\alpha$ on $\theta(x)$ is illustrated in Figure \ref{simulation of theta} for a fixed $\beta$ ($\beta = 1$). We see that $\alpha$ has a significant influence on the smoothness of the loss function. For larger $\alpha$, $\theta(x)$ better approximates the rectangular shape of $\phi_{DM}$. However, too large values of $\alpha$ may result in an excessive sensitivity of the loss to small variations of the weights, leading to training instability problems.
For our experiments, we tested several values of $\alpha$ and empirically set it to 10, which represents a reasonably large value (approximating well $\phi_{DM}$) that still guarantees the convergence of the models. With $\alpha$ fixed to $10$, we optimized the value of $\beta$ by training with watermark embedding and measuring the Test Error Rate (TER) of the watermarked model and the Bit Error Rate (BER) of the extracted watermark. Table \ref{activation function with beta} reports the results of the WRN model watermarked with the proposed ST-DM embedding scheme (W-DM-WRN) for the CIFAR-10 task. The watermark was embedded in the conv 2 layer with a payload of 1024 bits. We conclude that the best $\beta$, leading to a small test error and to a zero BER, is 10 (we found that this value is good also in other settings). Hence, in all our experiments we let $\alpha =\beta = 10$. \begin{figure} \centering \includegraphics[scale=.40]{alpha_changing-eps-converted-to.pdf} \caption{Activation function $\theta(x)$ with fixed $\beta$ and varying $\alpha$.}\label{simulation of theta} \end{figure} \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{TER of W-DM-WRN and BER of the extracted watermark for different $\beta$ values ($\alpha = 10$).}\label{activation function with beta} \begin{tabular*}{\tblwidth}{@{} CCC@{} } \toprule $\beta$ & TER(\%) & BER(\%)\\ \midrule 1 & 8.68 &4.69\\ 5 & 8.10 & 3\\ 10 & 7.63 & 0\\ 15 & 9.68 & 0\\ 20 & 9.58 & 0\\ 30 & 10 & 48.44 \\ 50 & 9.93 & 50.98\\ \bottomrule \end{tabular*} \end{table} Since we optimized the parameters of the decoding function of the proposed method, for a fairer comparison with the watermarking scheme in \citep{Uchida17} we considered a generalization of the sigmoid function used therein and optimized it in a way similar to what we did for $\alpha$ and $\beta$.
Specifically, the generalized sigmoid function is defined as: \begin{equation} \label{eq.sigm_gen} \sigma(x) = \frac{1}{1 + e^{- \gamma x}} \end{equation} where $\gamma$ is a tunable parameter determining the slope of the sigmoid function, in a way similar to $\alpha$ for $\theta$ (see Figure \ref{transformed_sigmoid}). The performance of the watermarked WRN (W-WRN) for several values of $\gamma$, for the same choice of the embedding layer and payload used for W-DM-WRN, is reported in Table \ref{gamma_experiments}. As can be seen, the case $\gamma = 1$ adopted in \citep{Uchida17} is not the best one, and a much lower test error rate and BER can be achieved by considering a larger $\gamma$. Since BER = 0 for all $\gamma \ge 10$, while the test error does not change much by increasing $\gamma$ above 10, in our experiments we let $\gamma = 10$. \begin{figure} \centering \includegraphics[scale=.50]{sigmoid-eps-converted-to.pdf} \caption{The sigmoid function in Eq. \eqref{eq.sigm_gen} for various $\gamma$.} \label{transformed_sigmoid} \end{figure} \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{TER of W-WRN and BER of the extracted watermark for different $\gamma$ values.} \label{gamma_experiments} \begin{tabular*}{\tblwidth}{@{} CCC@{} } \toprule $\gamma$ & TER (\%) & BER(\%)\\ \midrule 1 & 12.11 &7.32\\ 5 & 10.5 & 0.49\\ 10 & 8.90 & 0\\ 15 & 8.92 & 0\\ 20 & 8.95 & 0\\ 30 & 8.88 & 0 \\ 50 & 8.99 & 0\\ \bottomrule \end{tabular*} \end{table} \section{Analysis of the experimental results} \label{sec.res} According to watermarking theory, ST-DM watermarking is expected to provide superior performance with respect to SS watermarking in terms of payload and unobtrusiveness, at the price of a possible loss of robustness. The goal of this section is to report the results of the experiments we carried out to demonstrate the gain of our new algorithm with respect to the prior art, from the point of view of increased payload and improved unobtrusiveness.
We also report some experiments showing that, with a proper setting of the watermark parameters, such a gain can be achieved with no significant loss of robustness. All the networks were trained from scratch both with and without watermark embedding, so as to evaluate the drop in classification accuracy due to the presence of the watermark. Specifically, the performance of the watermarked models is evaluated by measuring the test error rate of the model on the envisaged task (thus assessing the unobtrusiveness of the watermark), where Test Error Rate (TER) = 1 - Accuracy, and the Bit Error Rate (BER) of watermark extraction (thus assessing the watermark accuracy). \subsection {Payload, bit error rate and test error rate} We carried out our experiments for the CIFAR-10 task by considering various payloads and embedding the watermark into several convolutional layers. The test error rate of the baseline non-watermarked WRN model is 8.73\%. The results are reported in Table \ref{payloads_experiments}(a)-(d) for different payloads and different host convolutional layers (conv 2 (a), conv 3 (b), conv 4 (c) and multi-layer embedding (d)). In the case of multi-layer embedding, the watermark is embedded into conv 2, conv 3 and conv 4 simultaneously. In this case, the payload of 7168 bits is achieved by embedding 1024 bits, 2048 bits and 4096 bits in conv 2, conv 3 and conv 4, respectively, while the 8400-bit payload is obtained by embedding 1200, 2400 and 4800 bits in conv 2, conv 3 and conv 4. For each case, the performance of the ST-DM watermarked model (W-DM-WRN) and of a model watermarked as in \cite{Uchida17} (W-WRN) is reported. By inspecting the tables, we see that neither method degrades the accuracy of the original non-watermarked model, the test error rate being even lower than the baseline. This is possibly because the regularization terms included for watermarking (see Eq. \eqref{loss_fuction1} and Eq.
\eqref{loss_function2}) reduce overfitting, thus improving the test accuracy. For large payloads, ST-DM watermarking has a significant advantage in terms of test error rate compared to Uchida et al.'s algorithm; see, for instance, the case of 1024 bits embedded in conv 2, 2048 bits embedded in conv 3 and 4096 bits embedded in conv 4, where the gain is larger than 1\%. More importantly, we observe that ST-DM watermarking works well, yielding BER = 0, also with larger payloads, i.e., when the payload is larger than the available number of trainable parameters for the selected layer (see the last row of Tables \ref{payloads_experiments}(a), (b) and (c)). For instance, with the new ST-DM algorithm we can embed 2400 bits in conv 3 with BER = 0 and a low test error, while with W-WRN the BER is above 12\%. Not surprisingly, the gain of W-DM-WRN over W-WRN is even more evident in the case of multi-layer embedding. \begin{table}[width=1.0\linewidth,cols=4,pos=h] \caption{TER and BER of W-DM-WRN (ours) and W-WRN (Uchida et al. \cite{Uchida17}) for various payloads, considering different embedding layers.} \label{payloads_experiments} (a) Watermark embedding in Conv 2 \\ (trainable parameters available for watermarking: 576) \begin{tabular}{c|c c | c c} \hline Payload & \multicolumn{2}{c|}{W-DM-WRN} & \multicolumn{2}{c}{W-WRN} \\ \cline{2-5} (bit)& TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline 256 & 8.20 & 0 & 8.15 & 0\\ 512 & 7.75 & 0 & 8.28 & 0\\ 1024& 7.63 & 0 & 8.90 & 0\\ 1200& 7.96 & 0 & 8.05 & 15.67 \\ \hline \end{tabular} \\[12pt] (b) Watermark embedding in Conv 3\\ (trainable parameters available for watermarking: 1152) \begin{tabular}{c|c c | c c} \hline Payload & \multicolumn{2}{c|}{W-DM-WRN} & \multicolumn{2}{c}{W-WRN} \\ \cline{2-5} (bit)& TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline 256 & 8.15 & 0 & 8.02 & 0 \\ 512 & 8.07 & 0 & 8.30 & 0 \\ 1024& 7.79 & 0 & 8.41 & 0 \\ 2048& 7.85 & 0 & 8.93 & 0 \\ 2400& 8.22 & 0 & 8.22 & 12.46\\ \hline \end{tabular} \\[12pt]
(c) Watermark embedding in Conv 4\\ (trainable parameters available for watermarking: 2304) \begin{tabular}{c|c c | c c} \hline Payload & \multicolumn{2}{c|}{W-DM-WRN} & \multicolumn{2}{c}{W-WRN} \\ \cline{2-5} (bit)& TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline 256 & 8.20 & 0 & 8.45 & 0 \\ 512 & 8.57 & 0 & 8.30 & 0 \\ 1024& 7.99 & 0 & 8.39 & 0 \\ 2048& 8.03 & 0 & 8.12 & 0 \\ 4096& 7.64 & 0 & 8.60 & 0\\ 4800& 8.65 & 0 & 8.25 & 11.88\\ \hline \end{tabular} \\[12pt] (d) Watermark embedding in multiple layers \begin{tabular}{c|c c | c c} \hline Payload & \multicolumn{2}{c|}{W-DM-WRN} & \multicolumn{2}{c}{W-WRN} \\ \cline{2-5} (bit)& TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline 7168 & 8.31 & 0 & 9.97 & 0 \\ 8400 & 8.19 & 0 & 9.75 & 13.10 \\ \hline \end{tabular} \end{table} We verified that the advantage of the proposed scheme is confirmed when other network architectures and different tasks are considered. Table \ref{payload_for_other_model} reports the test error rate and BER obtained when watermarking the ResNet50, ResNet34 and VGG networks, trained on CIFAR-10, GTSRB and ImageNet, respectively. We see that the proposed scheme maintains its gain with respect to \cite{Uchida17} in terms of test error rate, while allowing a larger payload to be embedded with zero BER. As expected, the advantage is larger in the multi-layer embedding case. \begin{table*}[width=2.08\linewidth,cols=4,pos=h] \caption{Performance achieved with different DNN models and tasks.
}\label{payload_for_other_model} \begin{tabular}{c|c|c|c|c|cc|cc} \hline Model & Dataset & Embedding layer (number of & Baseline & Payload & \multicolumn{2}{c|}{W-DM-WRN} & \multicolumn{2}{c}{W-WRN} \\ \cline{6-9} & & trainable parameters) & TER (\%) & (bits) &TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline \multirow {2}{*}{ResNet50}& \multirow{2}{*}{CIFAR-10} & \multirow {2}{*}{Penultimate conv layer (576)}& \multirow{2}{*}{7.51} & 512 & 7.08& 0 & 7.27 &0\\ & & & &1000 & 7.49 &0 & 7.63& 12.85\\ \hline \multirow {2}{*}{ResNet34} & \multirow {2}{*}{GTSRB} & \multirow {2}{*}{Penultimate conv layer (576)} & \multirow {2}{*}{1.33} & 512 &1.19& 0& 1.49 &0\\ & & & & 1024&0.96& 0 & 1.01 &13.09\\ \hline VGG16 & ImageNet & Block3 of conv2 (2304) & 8.75 &4096 & 7.83 &0 & 8.72 &0 \\ \bottomrule \end{tabular} \end{table*} \subsection{Robustness evaluation} In this section, we evaluate the robustness of the proposed method against two very common types of unintentional attack: fine-tuning and parameter pruning. Given that ST-DM watermarking is expected to be beneficial from a payload point of view at the risk of reducing the robustness of the watermark, the goal here is to show that the performance improvement described in the previous section is obtained with a negligible loss of robustness. As before, we use the scheme in \cite{Uchida17} as a baseline for our analysis. \subsubsection {Robustness against fine-tuning} Fine-tuning is the most common unintentional attack watermarked DNN models are subject to, given its frequent use to adapt a pre-trained network to a new task. Applying fine-tuning to a pre-trained network, in fact, requires much less effort than training a network from scratch, and also avoids over-fitting when sufficient training data is not available for the new training.
To measure the robustness of the watermarked models against fine-tuning, we considered the watermarked W-WRN and W-DM-WRN models and fine-tuned them by re-training them with the standard loss (i.e., the cross-entropy loss $E_0$) for some more epochs. The parameter setting for fine-tuning is left unchanged, except for the number of epochs, which is set to 20. Table \ref{fine-tuning} shows the results obtained by considering several groups of embedding layers and different payloads. In all cases we fine-tuned the models on the same CIFAR-10 dataset. Not surprisingly, the test error slightly decreases after fine-tuning with the standard loss, since the weights are modified in such a way as to increase the accuracy on the classification task; however, the watermark BER remains zero, so the fine-tuned models still allow correct watermark decoding. We also verified that even when fine-tuning goes on for 120 epochs the BER remains zero. For further validation, we also fine-tuned the watermarked W-DM-WRN and W-WRN models on a modified training set for the same CIFAR-10 task, that is, by using different portions of the dataset as training and test sets. For these experiments we considered the case of 256 bits embedded in conv 4. The loss $E_0$ is used for fine-tuning, which is carried out for 20 epochs. The results are shown in Table \ref{Finetuning with different datasets}. We see that both watermarked models can resist fine-tuning for 20 epochs, still achieving BER = 0. As an additional experiment, we also fine-tuned the models on a different dataset, namely the GTSRB dataset. The results are reported in the same table. Not surprisingly, in this case the BER increases significantly for both models. Although this is not a desired behaviour, W-DM-WRN shares this weakness with W-WRN, showing that this particular lack of robustness is not due to ST-DM.
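For concreteness, the BER values reported throughout can be computed as in the NumPy sketch below, which mimics a spread-spectrum-style extraction in the spirit of \cite{Uchida17}; the secret matrix $X$, the payload size and the toy least-squares ``embedding'' are illustrative assumptions, not the actual training-based embedding used in the paper:

```python
import numpy as np

def extract_bits(w, X):
    """Project the weights w through the secret matrix X and threshold
    at zero to recover the embedded bits (spread-spectrum style)."""
    return (X @ w >= 0).astype(int)

def ber(bits_true, bits_extracted):
    """Bit error rate: fraction of wrongly decoded watermark bits."""
    return float(np.mean(bits_true != bits_extracted))

rng = np.random.default_rng(1)
n_bits, n_params = 256, 2304
X = rng.standard_normal((n_bits, n_params))   # secret projection matrix
b = rng.integers(0, 2, n_bits)                # watermark payload
# Toy "embedding": minimum-norm weights driving X @ w exactly to +/-1
w, *_ = np.linalg.lstsq(X, 2.0 * b - 1.0, rcond=None)
print(ber(b, extract_bits(w, X)))             # 0.0
```

Here the minimum-norm least-squares solution drives every projection exactly to $\pm 1$, so decoding is error-free; in the actual scheme the same effect is pursued by the regularization term during training.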
\begin{table*}[width=2.08\linewidth,cols=4,pos=h] \caption{Robustness against fine-tuning.} \label{fine-tuning} \begin{tabular}{c|c|ccc|ccc} \hline Embedded & Payload & \multicolumn{3}{c|}{W-DM-WRN} & \multicolumn{3}{c}{W-WRN} \\ \cline{3-8} layer & (bit) & \multirow{2}{*}{TER (\%)}& TER & BER& \multirow{2}{*}{TER (\%)}& TER & BER\\ & & & after attack(\%)& after attack (\%) & & after attack (\%) & after attack (\%)\\ \cline{3-8} \hline Conv 2 & 256 & 8.20 & 8.02 & 0 & 8.15 & 8.11 & 0\\ \hline \multirow{2}{*}{Conv 3}& 256 & 8.15 & 8.05 & 0 & 8.02 & 7.58 & 0\\ & 1024 & 7.79 & 7.65 &0&8.41& 8.17&0 \\ \hline \multirow{2}{*}{Conv 4}& 256 & 8.20 & 7.93 & 0 & 8.45 & 7.90 & 0\\ & 4096 & 7.64 & 7.43 & 0 & 8.60 & 8.26 & 0\\ \hline multi-layer & 768 & 8.38 & 8.25 & 0 & 8.24 & 8.14 & 0\\ \hline \end{tabular} \end{table*} \begin{table*}[width=2\linewidth,cols=4,pos=h] \caption{Robustness against fine-tuning with a different dataset.}\label{Finetuning with different datasets} \begin{tabular}{c|c c c| c c c} \hline \multirow{3}{*}{Dataset} & \multicolumn{3}{c|}{W-DM-WRNs} & \multicolumn{3}{c}{W-WRNs} \\ \cline{2-7} & \multirow{2}{*}{TER (\%)} & TER & BER & \multirow{2}{*}{TER (\%)} & TER & BER(\%)\\ & &after attack(\%) & after attack(\%) & & after attack(\%) & after attack(\%)\\ \hline Modified CIFAR-10 & \multirow{2}{*}{8.20} & 6.83 & 0 & \multirow{2}{*}{8.45}& 6.27 & 0\\ GTSRB& & 6.45 & 50.78 & & 8.59 & 41.80\\ \hline \end{tabular} \end{table*} \subsubsection {Robustness against parameter pruning} Considering the deployment of DNNs on mobile devices and other platforms with limited storage capability, parameter pruning is another unintentional attack that occurs in practical applications. In order to assess the robustness against parameter pruning, we randomly pruned a percentage $p\%$ of the $s\times s\times d\times n$ trainable parameters of the embedding layer for both the W-WRN and W-DM-WRN models, by setting them to zero. Then, watermark extraction is carried out as usual.
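The pruning attack just described amounts to zeroing a random subset of the embedding-layer weights; a minimal NumPy sketch, where the Gaussian tensor and its $3\times 3\times 16\times 16$ shape are illustrative assumptions:

```python
import numpy as np

def prune_parameters(weights, p, rng):
    """Randomly set a fraction p of the entries of `weights` to zero,
    mimicking the parameter-pruning attack on the embedding layer."""
    flat = weights.flatten()          # flatten() returns a copy
    k = int(round(p * flat.size))     # number of entries to prune
    idx = rng.choice(flat.size, size=k, replace=False)
    flat[idx] = 0.0
    return flat.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 16, 16))    # toy s x s x d x n tensor
w_pruned = prune_parameters(w, p=0.30, rng=rng)
print(np.mean(w_pruned == 0.0))            # fraction zeroed, ~0.30
```

Watermark extraction is then run on `w_pruned` in place of the original weights, and the BER measures how many payload bits survive.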
The performance is assessed for several groups of embedding layers and pruning percentages $p$. To show the influence of pruning on the TER, in Table \ref{Different pruning percentage} we report the results for W-WRN and W-DM-WRN with conv 4 as embedding layer and a payload of 256 bits, for different values of the pruning percentage. As shown in the table, when the pruning percentage is equal to 30\%, the TER of the two models is already much higher than that of the baseline model (8.73\%); however, both watermarking algorithms can resist even larger pruning percentages with BER = 0. From the table, we also see that W-DM-WRN starts having a non-zero BER when $p = 70\%$, while the BER of the W-WRN model is still 0, confirming the intuition that ST-DM watermarking is less robust than SS watermarking. However, the difference between the two systems is appreciable only for very large pruning percentages, when the TER is unacceptably large. Hence, for any practical purpose, we can conclude that the robustness of W-WRN and W-DM-WRN against parameter pruning is the same. The results for different embedding layers and payloads are reported in Table \ref{pruning}. We see that W-WRN and W-DM-WRN have a similar level of robustness against parameter pruning, and also a similar test error rate. For each case, we considered the maximum pruning percentage $p$ before the test error increases too much (above the baseline), which is approximately the same for both the W-WRN and W-DM-WRN models. It turns out that such a pruning percentage limit is 10\% for both models in all cases, except for conv 4 with a 256-bit payload, where it is 20\%.
\begin{table*}[width=1.9\linewidth,cols=4,pos=h] \caption{Robustness against parameter pruning.}\label{pruning} \begin{tabular}{c|c|c|ccc|ccc} \hline Embedded & Payload & $p$& \multicolumn{3}{c|}{W-DM-WRN} & \multicolumn{3}{c}{W-WRN} \\ \cline{4-9} layer & (bit) & & \multirow{2}{*}{TER (\%)}& TER (\%)& BER (\%)& \multirow{2}{*}{TER (\%)}& TER (\%)& BER (\%)\\ & & & &after attack& after attack& & after attack& after attack\\ \cline{4-9} \hline Conv 2 & 256 &10\% & 8.20 & 10.20 & 0 & 8.15 & 10.23 & 0\\ \hline \multirow{2}{*}{Conv 3}& 256 &10\% & 8.15 & 8.83 & 0 & 8.02 & 8.49 & 0\\ & 512 &10\% & 8.07 & 9.04 &0&8.30&8.80 &0 \\ \hline \multirow{2}{*}{Conv 4}& 256 & 20\% & 8.20 & 8.65 & 0 & 8.45 & 8.73 & 0\\ & 4096 &10\%& 7.64 & 7.77 & 0 & 8.60 & 9.69 & 0\\ \hline multi-layer & 768 & 10\% &8.38 & 11.53 & 0 & 8.24 & 11.62 & 0\\ \hline \end{tabular} \end{table*} From the table, we also observe that, under parameter pruning, a lower payload has to be considered for both the proposed method and Uchida et al.'s algorithm, especially in conv 2 and conv 3. With a larger payload, in fact, even if the BER after pruning is still zero, the TER increases too much. \begin{table} \caption{TER and BER of W-DM-WRNs and W-WRNs with different pruning percentages $p$.} \label{Different pruning percentage} \begin{tabular}{c|c c | c c} \hline \multirow{2}{*}{$p$} & \multicolumn{2}{c|}{W-DM-WRNs} & \multicolumn{2}{c}{W-WRNs} \\ \cline{2-5} & TER(\%) & BER(\%) & TER(\%) & BER(\%)\\ \hline 10\% & 8.31 & 0 & 8.52 & 0\\ 20\% & 8.65 & 0 & 8.73 & 0\\ 30\%& 8.87 & 0 & 8.99 & 0\\ 40\%& 10.06 & 0 & 9.56 & 0 \\ 50\%& 12.37 & 0 & 10.86 &0\\ 60\%&19.23 & 0 & 15.35 &0\\ 70\%&25.95 &6.64&21.15&0\\ \hline \end{tabular} \end{table} \section{Conclusions and final remarks} \label{sec.con} In this paper, we proposed a new DNN watermarking algorithm that leverages the watermarking-with-side-information paradigm to decrease the obtrusiveness of the watermark and increase the payload.
Inspired by the watermarking technique in \cite{Uchida17}, and exploiting the ST-DM watermarking paradigm, we have proposed a new regularization term to be used in the loss function for watermark embedding. The experimental results show that the proposed method can reach a higher payload with a lower test error rate. We also verified that the improvement is achieved without impairing the robustness of the models against fine-tuning and parameter pruning. An interesting direction for future research concerns the security of the watermarking algorithm, that is, the capability of the watermark to resist deliberate attempts to remove it. For instance, assuming that the attacker is aware of the watermarking methodology, they may attempt to erase the original watermark by embedding a new one. Experiments could be carried out to assess the robustness against watermark overwriting, both in the more favorable scenario where the attacker knows the position of the watermark and in the more realistic one where the attacker has no knowledge about the watermarked layers.
\section{Introduction} A. J. Hoffman determined in \cite{hoffman} the limit points of spectral radii of non-negative symmetric integral matrices less than $\sqrt{2+\sqrt{5}}$. His strategy essentially consisted in reducing the problem to the adjacency matrices of graphs. Hoffman's result, which is Theorem~\ref{hoffman} in this paper, pioneered a fruitful investigation on limit points of graph eigenvalues. For instance, Shearer \cite{she} proved that all numbers not smaller than $\sqrt{2+\sqrt{5}}$ are limit points of spectral radii of adjacency matrices, and many results in the same vein were obtained later on concerning the eigenvalues of various (di)graph matrices, including the adjacency and the (signless) Laplacian matrix (see, for instance, \cite{doob-lim,hof-least,zhang-chen}). Meanwhile, the graphs with adjacency spectral radius at most $\sqrt{2+\sqrt{5}}$ have been gradually characterized. Smith \cite{smith} determined all graphs whose adjacency spectral radius does not exceed $2$. Brouwer and Neumaier \cite{BN} identified all graphs whose spectral radii of adjacency matrices are between $2$ and $\sqrt{2+\sqrt{5}}$, completing a research project started by Cvetkovi\'c, Doob and Gutman \cite{CDG}. The above investigation has become known as {\em the Hoffman program} of graphs with respect to the adjacency matrix. For more details on the Hoffman program of graphs, see \cite[Section~1.3.3]{S}, \cite[Section~3.3]{BCKW} and the new survey \cite{JFW-JW-MB}. Surprisingly, Hoffman's theorem and the results stated above turned out to be useful for the study of equiangular lines in the $n$-dimensional Euclidean space, i.e.\ families of lines through the origin such that the angle between any pair of them is the same \cite{jiang-poly}. In this paper we provide a new version of Hoffman's theorem and two generalizations of it. Hoffman's result concerns non-negative symmetric integral matrices.
Our generalizations, stated in Section~2, apply to certain non-negative symmetric matrices with fractional elements. Hoffman proved his theorem by showing that he could restrict the set of matrices involved to the $(0,1)$ symmetric matrices with null main diagonal, i.e.\ to the adjacency matrices of graphs. The strategy proposed here is to use the convex linear combination of the adjacency matrix and the degree matrix of graphs; the software {\em Mathematica\textsuperscript{\textregistered}} plays a pivotal role in some proofs. Let $G= (V(G),E(G))$ be a simple and undirected graph with vertex set $V(G) = \{v_1, \dots, v_n\}$ and edge set $E(G)$. The well-known {\it adjacency matrix}, denoted by $A(G) = (a_{ij})_{n \times n}$, is the $(0,1)$-symmetric matrix with $a_{ij}=1$ if $v_iv_j \in E(G)$ and $a_{ij}=0$ otherwise. For a vertex $v \in V(G)$, $d(v)$ denotes its degree, and $D(G)={\rm diag}(d(v_1),d(v_2),\cdots,d(v_n))$ is the degree matrix of $G$. The {\it Laplacian} and the {\it signless Laplacian} matrices are respectively defined as $L(G) = D(G) - A(G)$ and $Q(G) = D(G) + A(G)$. The Laplacian matrix stems from the celebrated Kirchhoff's matrix tree theorem, which was very recently generalized to directed and weighted graphs by Leenheer \cite{lee}. These three stand among the most widely studied graph matrices. \medskip \begin{figure}[h!]
\begin{center} \resizebox{1\textwidth}{!} {\begin{tikzpicture}[vertex1_style/.style={circle,draw,minimum size=0.17 cm,inner sep=0pt, fill=black},vertex2_style/.style={circle,draw,minimum size=0.2 cm,inner sep=0pt}, nonterminal/.style={ rectangle, minimum size=2mm, thin, draw=black, top color=white, bottom color=white!50!white!50, font=\itshape }] \node[vertex1_style, label=above right:\small$v_1$] (a0) at (1,1) {}; \node[vertex1_style, label=right:\small$v_2$] (a1) at (2,1) {}; \node[vertex1_style, label=above:\small$v_3$] (a2) at (1,2) {}; \node[vertex1_style, label=left:\small$v_4$] (a3) at (0,1) {}; \node[vertex1_style, label=below:\small$v_5$] (a4) at (1,0) {}; \draw (a0)--(a1); \draw (a0)--(a2); \draw (a0)--(a3); \draw (a0)--(a4); \draw (a1)--(a2); \draw (a2)--(a3); \draw (a3)--(a4); \draw (a4)--(a1); \node[nonterminal] (Z05) at (-.8,0) {$\; W_5 \;$}; \node (a) at (6,1) {$A(W_5)=\left( \begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ \end{array} \right)$;}; \node (b) at (13,1) {$L(W_5)=\left( \begin{array}{ccccc} \phantom{-}4 & -1 & -1 & -1 & -1 \\ -1 & \phantom{-}3 & -1 & \phantom{-}0 & -1 \\ -1 & -1 & \phantom{-}3 & -1 & \phantom{-}0 \\ -1 & \phantom{-}0 & -1 & \phantom{-}3 & -1 \\ -1 & -1 & \phantom{-}0 & -1 &\phantom{-}3 \\ \end{array} \right)$.}; \end{tikzpicture} } \end{center} \vspace{-6mm} \caption{ \label{figW5} \small Adjacency and Laplacian matrices of the wheel graph $W_5$.} \end{figure} Let $M = M(G)$ be any matrix associated with $G$. The {\it $M$-polynomial} of $G$ is defined as {\rm det}$(\lambda{I} - M)$, where $I$ is the identity matrix. The {\it $M$-spectrum} of $G$ is the multiset consisting of the eigenvalues of its graph matrix $M$, and the largest among the absolute values of these eigenvalues is called the {\it $M$-spectral radius of $G$}. We denote it by $\rho\!_{_M}\!(G)$.
In order to state the celebrated theorem by Hoffman, we recall that a real number $\gamma(M)$ is said to be a {\it limit point} of the $M$-spectral radius of graphs -- or simply an {\it $M$-limit point} -- if there exists a sequence of graphs $\{G_k\, |\, k\in \mathbb{N}\}$ such that $$\rho\!_{_M}\!(G_i) \neq \rho\!_{_M}\!(G_j) \quad \text{whenever $i \neq j$}, \quad \text{and} \quad \lim_{k \rightarrow \infty}\rho\!_{_M}\!(G_k)=\gamma(M).$$ \begin{thm}[Hoffman's theorem]\label{hoffman} Let $\tau$ denote the number $(\sqrt{5}+1)/2$. For $n \in \field{N}$, let $\eta_n=\beta_n^{\frac{1}{2}}+\beta_n^{-\frac{1}{2}}$, where $\beta_n$ is the positive root of \begin{equation}\label{puah0} \phi_n(x)=x^{n+1}-(1+x+x^2+\cdots+x^{n-1}). \end{equation} The numbers $2=\eta_1<\eta_2<\cdots$ are precisely the limit points of the $A$-spectral radius of graphs smaller than $$\lim_{n\rightarrow \infty}\eta_n = \tau^{\frac{1}{2}}+\tau^{-\frac{1}{2}}= \sqrt{2+\sqrt{5}}.$$ \end{thm} The new and generalized versions of Theorem~\ref{hoffman} involve the {\it $A_{\alpha}$-matrix} of a graph $G$ (see \cite{niki}), i.e.\ the convex linear combination $A_{\alpha}(G)=\alpha D(G)+(1-\alpha)A(G)$, where $\alpha \in [0,1]$. Clearly, $A(G) = A_0(G)$, $Q(G)=2A_{1/2}(G)$ and $L(G)=\frac{1}{\alpha-\beta}(A_\alpha(G)-A_\beta(G))$ for all $\alpha \not=\beta$. The $A_{\alpha}$-matrix is non-negative symmetric with fractional elements if $\alpha \in (0,1)$. For $\alpha \not=1$, the spectral theory of $A_{\alpha}$ is equivalent to the one arising from the {\it general matrix} $M_{\gamma}$, defined in \cite{LHGL} as $ M_{\gamma} (G) = \gamma D(G) + A(G)$ for every $\gamma \geq 0$, the matrices $A_{\alpha}(G)$ and $M_{\gamma} (G)$ being proportional. Note that the $A_{\alpha}$-matrix of a connected graph is non-negative irreducible. By the Frobenius-Perron Theorem \cite[Theorem 8.4.4, p.
508]{horn-john}, the $A_\alpha$-spectral radius is an algebraically simple eigenvalue of the $A_{\alpha}$-matrix associated with a positive eigenvector.\medskip \begin{exa} For the graph $W_5$ in Figure 1, set $\alpha=\frac{\;1\;}{3}$ and $\alpha=\frac{\;3\;}{4}$. The corresponding $A_\alpha$-matrices take the following form. $$ A_{\frac{1}{3}}(W_5)=\left( \begin{array}{ccccc} \frac{4}{3} & \frac{2}{3} & \frac{2}{3} & \frac{2}{3} & \frac{2}{3} \\[1.1mm] \frac{2}{3} & 1 & \frac{2}{3} & 0 & \frac{2}{3} \\[1.1mm] \frac{2}{3} & \frac{2}{3} & 1 & \frac{2}{3} & 0 \\[1.1mm] \frac{2}{3} & 0 & \frac{2}{3} & 1 & \frac{2}{3} \\[1.1mm] \frac{2}{3} & \frac{2}{3} & 0 & \frac{2}{3} & 1 \\[1.1mm] \end{array} \right) \quad \mbox{and} \quad A_{\frac{3}{4}}(W_5)=\left( \begin{array}{ccccc} 3 & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\[1.1mm] \frac{1}{4} & \frac{9}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\[1.1mm] \frac{1}{4} & \frac{1}{4} & \frac{9}{4} & \frac{1}{4} & 0 \\[1.1mm] \frac{1}{4} & 0 & \frac{1}{4} & \frac{9}{4} & \frac{1}{4} \\[1.1mm] \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & \frac{9}{4} \\[1.1mm] \end{array} \right). $$ It turns out that $\rho_{A_{\frac{1}{3}}}(W_5) = (11+\sqrt{73})/6$, and $\rho_{A_{\frac{3}{4}}}(W_5) =(23+\sqrt{17})/8$. \end{exa} A first attempt to study limit points of the $A_{\alpha}$-spectral radius of graphs has been performed in \cite{WWXB}, where it has been shown that the smallest limit point for the $A_{\alpha}$-spectral radius of graphs is $2$. The results in this paper are a natural second step. In fact, we identify the smallest possible $A_{\alpha}$-limit points which are bigger than $2$. The $A_{\alpha}$-matrix really merges the adjacency and the signless Laplacian spectral properties. In fact, we shall be able to deduce both Theorem~\ref{hoffman} and Theorem~\ref{ult} from a single statement. The remainder of the paper is structured as follows. Section~2 contains the statements of our main results.
Section~3 contains some technical results and preliminaries needed for the proofs of Theorems \ref{Aa-main2} and \ref{Aa-main1}, presented in Section~4. Section 5 contains a re-formulation of the known results on the limit points of the (signless) Laplacian spectral radius of graphs. In the concluding Section 6 we propose two problems to be addressed in future studies and discuss the potential applications to the problem of estimating the maximum number of equiangular lines in an $n$-dimensional Euclidean space. \section{New discoveries} Let $P_2(P_m,P_n)$ be the graph depicted in Fig.~\ref{fig0}, where $P_n$ denotes the {\it path} of order $n$. From \cite[Proposition~3.6]{hoffman} and Proposition~\ref{alpha-p2pnpn} in Section 3, it follows that \begin{equation}\label{psi} \Psi(\alpha) = \lim\limits_{n\rightarrow\infty} \rho_{_{A_{\alpha}}} (P_2(P_n,P_n)) \end{equation} is a well-defined strictly increasing continuous function $\Psi : [0,1] \longrightarrow \field{R}$, such that $$ \Psi(0) = \sqrt{2+\sqrt{5}} \qquad \text{and} \qquad \Psi(1) = 3.$$ A closed-form expression for $\Psi(\alpha)$ is determined in \eqref{PSI}.
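For $\alpha = 0$, the convergence in \eqref{psi} can be observed numerically. The Python sketch below (an illustration, not part of the proofs) assembles the adjacency matrix of $P_2(P_n,P_n)$ and checks that its spectral radius increases towards $\Psi(0)=\sqrt{2+\sqrt{5}} \approx 2.058171$:

```python
import numpy as np

def p2_pnpn_adj(n):
    """Adjacency matrix of P_2(P_n, P_n): an edge w--u plus two paths
    of n vertices attached to the vertex u."""
    N = 2 + 2 * n
    A = np.zeros((N, N))
    def edge(i, j):
        A[i, j] = A[j, i] = 1.0
    edge(0, 1)                    # the P_2 part
    for start in (2, 2 + n):      # the two attached paths
        edge(1, start)
        for k in range(start, start + n - 1):
            edge(k, k + 1)
    return A

target = np.sqrt(2 + np.sqrt(5))  # Psi(0), about 2.058171
for n in (2, 5, 20, 60):
    rho = np.linalg.eigvalsh(p2_pnpn_adj(n)).max()
    print(n, rho)                 # increases towards Psi(0)
```

Since each $P_2(P_n,P_n)$ is a proper subgraph of the next, the computed spectral radii are strictly increasing and stay below the limit, in accordance with Proposition~\ref{alpha-p2pnpn}.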
\medskip \begin{figure}[h] \begin{center} \resizebox{0.35\textwidth}{!} {\begin{tikzpicture}[vertex1_style/.style={circle,draw,minimum size=0.17 cm,inner sep=0pt, fill=black},vertex2_style/.style={circle,draw,minimum size=0.2 cm,inner sep=0pt}] \draw [decorate,decoration={brace,amplitude=5pt,mirror,raise=2ex}] (3.9,0) -- (7.6,0) node[midway,yshift=-2em]{\scriptsize$n$}; \draw [decorate,decoration={brace,amplitude=5pt,raise=-1ex}] (3.9,1.5) -- (7.6,1.5) node[midway,yshift=.8em]{\scriptsize$m$}; \node[vertex1_style, ] (a1) at (4,1) {}; \node[vertex1_style,] (a2) at (5,1) {}; \node[vertex1_style, ] (a3) at (6.5,1) {}; \node[vertex1_style, ] (a4) at (7.5,1) {}; \node[vertex1_style, ] (b1) at (4,0) {}; \node[vertex1_style, ] (b2) at (5,0) {}; \node[vertex1_style, ] (b3) at (6.5,0) {}; \node[vertex1_style] (b4) at (7.5,0) {}; \node[vertex1_style] (c0) at (3.5,.5) {}; \node[vertex1_style] (c1) at (2.5,.5) {}; \draw (a1)--(a2); \draw (a3)--(a4); \draw (b1)--(b2); \draw (b3)--(b4); \draw (c0)--(c1); \draw (c0)--(a1); \draw (c0)--(b1); \draw[thick, dotted] (a2)--(a3); \draw[thick, dotted] (b2)--(b3); \end{tikzpicture} } \end{center} \caption{ \label{fig0} \small The graph $P_2(P_m,P_n)$.} \end{figure} In Section 4, we provide a proof of Theorems \ref{Aa-main1} and \ref{Aa-main2} below. Such theorems are essentially equivalent, and generalize Theorem~\ref{hoffman}, which is deducible from them by setting $\alpha =0$ in their statements. From Theorems \ref{Aa-main1} and \ref{Aa-main2}, we also obtain Theorem~\ref{hoff-new}, which can be regarded as a new version of the original theorem by Hoffman.
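As a warm-up, Hoffman's sequence $\eta_n$ from Theorem~\ref{hoffman} is easy to approximate; the following Python sketch (illustrative only) computes $\beta_n$ by bisection and watches $\eta_n$ increase towards $\sqrt{2+\sqrt{5}}$:

```python
import math

def beta(n, tol=1e-13):
    """Positive root of phi_n(x) = x^(n+1) - (1 + x + ... + x^(n-1))."""
    phi = lambda x: x ** (n + 1) - sum(x ** i for i in range(n))
    lo, hi = 1.0, 2.0             # phi_n(1) <= 0 and phi_n(2) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) <= 0 else (lo, mid)
    return 0.5 * (lo + hi)

def eta(n):
    b = beta(n)
    return math.sqrt(b) + 1.0 / math.sqrt(b)

limit = math.sqrt(2 + math.sqrt(5))  # tau^(1/2) + tau^(-1/2), about 2.058171
print([eta(n) for n in (1, 2, 3, 10)], limit)
```

Here $\beta_1 = 1$ gives $\eta_1 = 2$, and the $\beta_n$ increase towards the golden ratio $\tau$, so the $\eta_n$ increase towards $\sqrt{2+\sqrt{5}}$.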
\begin{thm}[Generalized version-I of Hoffman's theorem]\label{Aa-main1} For every $\alpha \in [0,1)$ and any non-negative integer $n$, let \begin{equation}\label{eq1} \eta_n(\alpha)=2\alpha+(1-\alpha)\gamma_n(\alpha)^{\frac{1}{2}}+(1-\alpha)\gamma_n(\alpha)^{-\frac{1}{2}}, \end{equation} where $\gamma_0(\alpha)=1$, $\gamma_1(\alpha) \in (0,1)$ is the only positive root of \[ \Phi_{1,\alpha}(x) = (1-\alpha)^2 x^2 +2\alpha(1-\alpha) x^{\frac{3}{2}}+\alpha^2x-(1-\alpha)^2, \] and, for $n \geqslant 2$, $\gamma_n(\alpha) \in (0,1)$ is the only positive root of {\small \begin{equation}\label{PHI} \Phi_{n,\alpha}(x)=(1-\alpha)^2x^{n+1}+2\alpha(1-\alpha)\sum_{i=0}^{n-1}x^{n-i+\frac{1}{2}}+(1-2\alpha+2\alpha^2)\sum_{i=0}^{n-2}x^{i+2}+\alpha^2x-(1-\alpha)^2. \end{equation} } Then, $$2=\eta_0(0)=\eta_1(0)<\eta_2(0)< \cdots,$$ and $$ 2=\eta_0(\alpha)<\eta_1(\alpha)<\eta_2(\alpha)<\cdots \qquad \text{(for $\alpha \in (0,1)$)}$$ are all the possible limit points of the $A_\alpha$-spectral radius of graphs that are smaller than $\lim_{n\rightarrow\infty}\eta_n(\alpha)=\Psi(\alpha)$, where $\Psi(\alpha)$ is defined in \eqref{psi}. \end{thm} The above sequence $\{ \eta_n(\alpha) \}_{n \geqslant 0}$ of $A_{\alpha}$-limit points can be determined in an alternative way, as Theorem~\ref{Aa-main2} shows.
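The roots $\gamma_n(\alpha)$ can likewise be approximated numerically. The sketch below (illustrative only, for $n \geqslant 2$) finds $\gamma_n(\alpha) \in (0,1)$ by bisection and checks that at $\alpha = 0$ the resulting $\eta_n(0)$ coincide with Hoffman's limit points, while for $\alpha > 0$ the sequence is strictly increasing in $n$ as stated:

```python
import math

def bisect(f, lo, hi, tol=1e-13):
    """Root of f in [lo, hi], assuming f(lo) <= 0 <= f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def gamma(n, a):
    """Root in (0,1) of Phi_{n,alpha} from the theorem (n >= 2)."""
    Phi = lambda x: ((1 - a) ** 2 * x ** (n + 1)
                     + 2 * a * (1 - a) * sum(x ** (n - i + 0.5) for i in range(n))
                     + (1 - 2 * a + 2 * a * a) * sum(x ** (i + 2) for i in range(n - 1))
                     + a * a * x - (1 - a) ** 2)
    return bisect(Phi, 1e-12, 1.0)   # Phi(0+) < 0 < Phi(1)

def eta(n, a):
    g = gamma(n, a)
    return 2 * a + (1 - a) * (math.sqrt(g) + 1.0 / math.sqrt(g))

def eta_hoffman(n):
    """Hoffman's eta_n, from the positive root of phi_n."""
    phi = lambda x: x ** (n + 1) - sum(x ** i for i in range(n))
    b = bisect(phi, 1.0, 2.0)
    return math.sqrt(b) + 1.0 / math.sqrt(b)

print(eta(2, 0.0), eta_hoffman(2))   # both about 2.019801
print(eta(2, 0.3), eta(3, 0.3))      # strictly increasing in n
```

At $\alpha = 0$ the agreement reflects the relation $\gamma_n(0) = \beta_n^{-1}$ exploited later in the proof of Theorem~\ref{hoff-new}.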
\begin{thm}[Generalized version-II of Hoffman's theorem]\label{Aa-main2} For every $\alpha \in [0,1)$ and any non-negative integer $n$, let \begin{equation}\label{eq2} \eta_n(\alpha)=2\alpha+(1-\alpha)\widetilde{\gamma}_n(\alpha)^{\frac{1}{2}}+(1-\alpha)\widetilde{\gamma}_n(\alpha)^{-\frac{1}{2}}, \end{equation} where $\widetilde{\gamma}_0(\alpha)=1$, $\widetilde{\gamma}_1(\alpha) \in (1,\infty)$ is the only positive root of \[ \widetilde{\Phi}_{1,\alpha}(x) = (1-\alpha)^2x^2-\alpha^2x-2\alpha(1-\alpha)x^{\frac{1}{2}}-(1-\alpha)^2, \] and, for $n \geqslant 2$, $\widetilde{\gamma}_n(\alpha) \in (1,\infty)$ is the only positive root of {\small \begin{equation}\label{PHIt} \widetilde{\Phi}_{n,\alpha}(x)=(1-\alpha)^2x^{n+1}-\alpha^2x^n-2\alpha(1-\alpha)\sum_{i=1}^{n}x^{n-i+\frac{1}{2}}-(1-2\alpha+2\alpha^2)\sum_{i=1}^{n-1} x^{i}-(1-\alpha)^2. \end{equation} } Then, $$2=\eta_0(0)=\eta_1(0)<\eta_2(0)< \cdots,$$ and $$ 2=\eta_0(\alpha)<\eta_1(\alpha)<\eta_2(\alpha)<\cdots \qquad \text{(for $\alpha \in (0,1)$)}$$ are all the possible limit points of the $A_\alpha$-spectral radius of graphs that are smaller than $\lim_{n\rightarrow\infty}\eta_n(\alpha)=\Psi(\alpha)$, where $\Psi(\alpha)$ is defined in \eqref{psi}. \end{thm} \begin{thm}[New version of Hoffman's theorem]\label{hoff-new} For each $n \in \field{N}$, let $\delta_n$ be the only positive root of $$\Phi_{n,0}(x)=x^{n+1}+x^n+x^{n-1}+\cdots+x^2-1.$$ The numbers $\zeta_n=\delta_n^{\frac{1}{2}}+\delta_n^{-\frac{1}{2}}$ satisfy the sequence of inequalities $$2=\zeta_1<\zeta_2<\cdots,$$ and are precisely the limit points of the $A$-spectral radius of graphs smaller than $$\lim\limits_{n\rightarrow\infty}\zeta_n=\sqrt{2+\sqrt{5}}.$$ \end{thm} \begin{proof} Clearly, $\delta_n=\gamma_n(0)$ and $\zeta_n=\eta_n(0)$. From \eqref{puah0}, a direct calculation yields $$\Phi_{n,0}(x)=-x^{n+1}\phi_n \left(\frac{\;1\;}{x}\right).$$ Hence, $\delta_n = \beta_n^{-1}$, and so $\zeta_n = \eta_n$.
\end{proof} \section{Preliminaries and technical results} Apart from Lemma~\ref{alpha-delta}, which is due to Nikiforov, all the other results given in this section without a proof are taken from the reference \cite{JFW-JW-MB} written by the same authors of this paper. \begin{lem}{\rm \cite{niki}}\label{alpha-delta} Set $\alpha \in [0,1]$. Let $G$ be a connected graph with maximum degree $\Delta$. \begin{itemize} \item[$\mathrm{(i)}$] Then $\frac{1}{2}\left(\alpha(\Delta+1)+\sqrt{\alpha^2(\Delta+1)^2+4\Delta(1-2\alpha)}\right) \leqslant \rho\!_{_{A_{\alpha}}}\!(G) \leqslant \Delta$. \item[$\mathrm{(ii)}$] If $H$ is a proper subgraph of $G$, then $\rho\!_{_{A_{\alpha}}}\!(H)<\rho\!_{_{A_{\alpha}}}\!(G)$. \item[$\mathrm{(iii)}$] If $0 \leqslant \alpha < \beta \leqslant 1$, then $ \rho\!_{_{A_{\alpha}}}\! (G) < \rho\!_{_{A_{\beta}}}\!(G)$. \end{itemize} \end{lem} \begin{figure}[h] \begin{center} \resizebox{0.75\textwidth}{!} {\begin{tikzpicture}[vertex1_style/.style={circle,draw,minimum size=0.17 cm,inner sep=0pt, fill=black},vertex2_style/.style={circle,draw,minimum size=0.2 cm,inner sep=0pt}] \foreach \x [count=\p] in {0,...,5} { \node[shape=circle,fill=black, scale=0.5] (\p) at (\x*72:.8) {};}; \draw[thick] (5) arc (-72:72:.8); \draw[thick] (3) arc (144:216:.8); \draw[thick, dotted] (2) arc (72:144:.8); \draw[thick, dotted] (5) arc (-72:-144:.8); \node[left=3pt] at (.8,0) {\small$v_0$}; \node[above=2pt] at (72:.8) {\small$v_1$}; \node[below=2pt] at (-72:.8) {\small$v_{k-1}$}; \node[left=3pt] at (144:.8) {\small$v_t$}; \node[left=3pt] at (216:.8) {\small$v_{t+1}$}; \node[shape=circle,fill=black, scale=0.5] (6) at (1.6,.6) {}; \node[shape=circle,fill=black, scale=0.5] (7) at (1.6,-.6) {}; \node[shape=circle,fill=black, scale=0.18] (8) at (1.6,.25) {}; \node[shape=circle,fill=black, scale=0.18] (9) at (1.6,0) {}; \node[shape=circle,fill=black, scale=0.18] (10) at (1.6,-.25) {}; \draw[thick] (1)--(6); \draw[thick] (1)--(7); \node[vertex1_style] (b1) at (3.5,.6) {}; 
\node[vertex1_style] (c1) at (3.5,-.6) {}; \node[shape=circle,fill=black, scale=0.18] (x8) at (3.5,.25) {}; \node[shape=circle,fill=black, scale=0.18] (x9) at (3.5,0) {}; \node[shape=circle,fill=black, scale=0.18] (x10) at (3.5,-.25) {}; \node[shape=circle,fill=black, scale=0.18] (y8) at (10.5,.25) {}; \node[shape=circle,fill=black, scale=0.18] (y9) at (10.5,0) {}; \node[shape=circle,fill=black, scale=0.18] (y10) at (10.5,-.25) {}; \node[vertex1_style, label=above:\small$\,v_0$] (a1) at (4.1,0) {}; \node[vertex1_style, label=above:\small$\,v_1$] (a2) at (4.9,0) {}; \node[vertex1_style, label=above:\small$\,v_{t-1}$] (a3) at (6.2,0) {}; \node[vertex1_style, label=above:$v_{t}$] (a4) at (7,0) {}; \node[vertex1_style, label=above:\small$\,v_{t+1}$] (a5) at (7.8,0) {}; \node[vertex1_style, label=above:\small$\,v_{k-1}$] (a6) at (9.1,0) {}; \node[vertex1_style, label=above:\small$\,v_{k}$] (a7) at (9.9,0) {}; \node[vertex1_style] (b2) at (10.5,.6) {}; \node[vertex1_style] (c2) at (10.5,-.6) {}; \draw[thick] (a1)--(b1); \draw[thick] (a1)--(c1); \draw[thick] (a7)--(b2); \draw[thick] (a7)--(c2); \draw[thick] (a1)--(a2); \draw[thick] (a6)--(a7); \draw[thick] (a3)--(a5); \draw[thick, dotted] (a2)--(a3); \draw[thick, dotted] (a5)--(a6); \node at (0.2,-1.8) {\small Type I}; \node at (7,-1.5) {\small Type II}; \end{tikzpicture} } \end{center} \vspace{-8mm} \caption{ \label{fig1} \small The two types of internal path.} \end{figure} According to \cite{hof-smi}, an {\it internal path} of a graph $G$ is a walk $v_0 v_1 \dots v_k$ (here $k \geqslant 1$), where the vertices $v_1, \dots, v_k$ are pairwise distinct, $d(v_0) > 2$, $d(v_k) > 2$ and $d(v_i) = 2$ whenever $0 < i < k$. We say that an internal path is of {\em type I} (resp., {\em type II}) if $v_0=v_k$ (resp., $v_0\not=v_k$) (see Fig.~\ref{fig1}). 
In the following lemma and throughout the rest of the paper we denote by $C_n$ the cycle of order $n$ ($n \geqslant 3$), and by $DS_n$ the {\em double snake} of order $n \geqslant 6$, i.e.\ the graph containing an internal path of type II $v_0 \dots v_{n-5}$ such that $d(v_0)=d(v_{n-5})=3$. \begin{lem}{\rm \cite{JFW-JW-MB}}\label{alpha-internal} Let $uv$ be an edge of the connected graph $G$, and let $G_{uv}$ be the graph obtained from $G$ by subdividing the edge $uv$ of $G$. Set $\alpha \in [0,1)$. \begin{itemize} \item[$\mathrm{(i)}$] $\rho\!_{_{A_\alpha}}\!(C_n)=2$ and $\rho\!_{_{A_0}}\!(DS_n)=2$; \item[$\mathrm{(ii)}$] If $G \neq C_n$ and $uv$ is not in an internal path of $G$, then $\rho\!_{_{A_{\alpha}}}\!(G_{uv})>\rho\!_{_{A_{\alpha}}}\!(G)$; \item[$\mathrm{(iii)}$] If $(G,\alpha) \neq (DS_n,0)$ and $uv$ belongs to an internal path of $G$, then $\rho\!_{_{A_{\alpha}}}\!(G_{uv})<\rho\!_{_{A_{\alpha}}}\!(G)$. \end{itemize} \end{lem} For each positive integer $n$, we consider the matrix $B_{n}$ obtained from $A_{\alpha}(P_{n+1})$ by deleting the row and column corresponding to a vertex of degree one of the path $P_{n+1}$. We also make use of the following notations: \begin{equation}\label{kazz1} \Delta_{\lambda,\alpha} =\sqrt{(\lambda-4\alpha+2)(\lambda-2)} \qquad \text{and} \qquad h(\lambda)_{\alpha} = \frac{\lambda -\Delta_{\lambda,\alpha}}{2\alpha(\lambda-2)+2}. \end{equation} \begin{lem}{\rm \cite{JFW-JW-MB}}\label{gongshi} Let $n$ be any non-negative integer.
After setting \begin{equation*} s =\frac{\lambda-2\alpha+\Delta_{\lambda,\alpha}}{2} \;\; \text{and} \;\; t= \frac{\lambda-2\alpha-\Delta_{\lambda,\alpha}}{2}, \end{equation*} \begin{itemize} \item[$\mathrm{(i)}$] $\phi(P_{n+1})=\Delta_{\lambda,\alpha}^{-1}((s+\alpha)^2s^{n}-(t+\alpha)^2t^{n})$ for $\alpha \in [0,1)$; \item[$\mathrm{(ii)}$] $\phi(B_{n+1})=$\small$\displaystyle \frac{1}{\Delta_{\lambda,\alpha}} \cdot \frac{\alpha}{\left( \alpha(\lambda-2)+1\right)} \left((s+\alpha)^2 \left(s+\frac{(1-\alpha)^2}{\alpha}\right)s^{n}-(t+\alpha)^2\left(t+\frac{(1-\alpha)^2}{\alpha}\right)t^{n}\right).$ \end{itemize} where equality $\mathrm{(ii)}$ holds for $\alpha \in (0,1)$. \medskip \begin{figure}[h!] \begin{center} \resizebox{0.37\textwidth}{!} {\begin{tikzpicture}[vertex1_style/.style={circle,draw,minimum size=0.17 cm,inner sep=0pt, fill=black},vertex2_style/.style={circle,draw,minimum size=0.2 cm,inner sep=0pt}] \draw[dotted, ultra thick] (-.5,0) circle (1); \node at (-.5,0) {$X$}; \node[vertex1_style] (a0) at (.5,0) {}; \node[vertex1_style, label=above:\small$u_1$] (a1) at (1.5,0) {}; \node[vertex1_style, label=above:\small$u_2$] (a2) at (2.5,0) {}; \node[vertex1_style, label=above:\small$u_n$] (a3) at (4,0) {}; \node[vertex1_style] (a4) at (5,0) {}; \draw[dotted, ultra thick] (5,0) arc (-180:180:1); \draw[thick] (a0) -- (a2); \draw[thick, dotted] (a2) -- (a3); \draw[thick] (a3) -- (a4); \node at (6,0) {$Y$}; \node at (.7,.36) {\small$x$}; \node at (4.8,.33) {\small$y$}; \end{tikzpicture} } \end{center} \vspace{-6mm} \caption{ \label{fig3} \small The graph $XY(x,y;n)$} \end{figure} \begin{lem}{\rm \cite{JFW-JW-MB}}\label{alpha-gxy} Let $X$ and $Y$ be two vertex-disjoint connected graphs, and let $G_n=XY(x,y;n)$ be the graph obtained by joining $x \in V(X)$ and $y \in V(Y)$ by a path of length $n+1$ (see Fig. \ref{fig3}).
Then, \begin{equation}\label{mink} \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_n)=\max \left\{ \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(X_x(P_n)),\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(Y_y(P_n)) \right\}. \end{equation} \end{lem} Let $2P_n$ be the disjoint union of two copies of $P_n$ and let $u_1,u_2 \in V(2P_n)$ be end-vertices belonging to different components. For every non-trivial connected graph $G$ and every $u \in V(G)$, let the graph $G_u(P_n,P_n)$ be obtained by adding to $G \cup 2P_n$ the edges $uu_1$ and $uu_2$. \begin{lem}\label{alpha-gupnpn}{\rm \cite{JFW-JW-MB}} The $A_{\alpha}$-spectral radius of the graph sequence $\{G_u(P_n,P_n)\}_{n \in \field{N}}$ has a limit point $\chi'_u(G) \geqslant 2$. If $\chi'_u(G)>2$, then $\chi'_u(G)$ is the largest root of the equation $\Theta(\lambda)_{G,u,\alpha, \infty}=0$, where \begin{equation*} \Theta(\lambda)_{G,u,\alpha, \infty}= \left(1-\alpha h(\lambda)_{\alpha}\right)\left(\phi(G)(1-\alpha h(\lambda)_{\alpha})-2\alpha\phi(G)_u+2(2\alpha-1)\phi(G)_uh(\lambda)_{\alpha}\right),\end{equation*} and $h(\lambda)_{\alpha}$ is defined in \eqref{kazz1}. \end{lem} \begin{prop}\label{alpha-p2pnpn} Let $P_2(P_n,P_n)$ be the graph depicted in Fig.~\ref{fig0}. Then, $$ \rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n))< \Psi(\alpha) = \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n)).$$ Moreover, $\Psi(\alpha)$ is the largest root of \begin{multline}\label{kazz3} (1-\alpha h(\lambda)_{\alpha}) \left( (1-\alpha h(\lambda)_{\alpha})\lambda^2 \right.\\ \left. +2((\alpha^2+2\alpha-1)h(\lambda)_{\alpha} -2\alpha)\lambda -(6\alpha^2-3\alpha)h(\lambda)_{\alpha}+2\alpha^2+2\alpha-1 \right) . \end{multline} \end{prop} \begin{proof} The graph $P_2(P_n,P_n)$ is of type $G_u(P_n,P_n)$, where $G=P_2$ and $u$ is any of its vertices. Hence, $\Psi (\alpha)$ is well-defined by Lemma~\ref{alpha-gupnpn}.
Moreover, by Lemma \ref{alpha-internal}(ii) (or by Lemma~\ref{alpha-delta}(ii) as well), it turns out that $\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n))$ increases with $n$; hence, $\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n))<\Psi(\alpha)$. From Lemma \ref{alpha-delta} and a direct calculation we get $$\rho\!_{_{A_{\alpha}}}\! (P_2(P_n,P_n)) \geqslant \rho\!_{_{A_0}}\! (P_2(P_n,P_n)) \geqslant \rho\!_{_{A_0}}\! (P_2(P_4,P_4)) >2.$$ From Lemma~\ref{alpha-gupnpn} it follows that $\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n))$ is the largest root of the following equation: \begin{equation}\label{zac} \left(1-\alpha h(\lambda)_{\alpha}\right)\left(\phi(P_2) (1-\alpha h(\lambda)_{\alpha}) -2\alpha\phi(B_1)+2(2\alpha-1)\phi(B_1) h(\lambda)_{\alpha} \right)=0. \end{equation} We now get \eqref{kazz3} by plugging $\phi(P_2)=(\lambda-\alpha)^2-(1-\alpha)^2$ and $\phi(B_1)=\lambda-\alpha$ into \eqref{zac}. \end{proof} \noindent The software {\em Mathematica\textsuperscript{\textregistered}} provides the following closed-form expression for the function $\Psi(\alpha)$. \begin{equation}\label{PSI} \Psi(\alpha)=\frac{\;3\;}{2}\alpha+\frac{\;1\;}{\sqrt{6}}\left(g_0(\alpha) +\frac{g_1(\alpha)}{g_4(\alpha)}+\frac{g_2(\alpha)}{\sqrt{g_5(\alpha)}} -g_4(\alpha)\right)^{\frac{1}{2}}+\sqrt{\frac{g_5(\alpha)}{12}}, \end{equation} where $g_0(\alpha)=11\alpha^2-16\alpha+8$,\\[1.5mm] {\phantom{where }}$g_1(\alpha)=(\alpha-1)^2(2\alpha^2+2\alpha-1)$,\\[1.5mm] {\phantom{where }}$g_2(\alpha)=\sqrt{27}\alpha(7\alpha^2-12\alpha+6)$,\\[1.5mm] {\phantom{where }}$g_3(\alpha)=11\alpha^6-86\alpha^5+275\alpha^4-432\alpha^3+358\alpha^2-150\alpha+25$,\\[1.5mm] {\phantom{where }}$g_4(\alpha)=(1-\alpha)\left((\alpha-1)(17\alpha^2-52\alpha+26) -\sqrt{27g_3(\alpha)} \right)^{\frac{1}{3}}$,\\[1.5mm] {\phantom{where }}$\displaystyle g_5(\alpha)=g_0(\alpha)-2\left(\frac{g_1(\alpha)}{g_4(\alpha)}-g_4(\alpha)\right)$.\\ We remind the reader that the values of $g_4(\alpha)$ are not real for $\alpha \in [0,1)$.
In fact, like other software packages, {\em Mathematica\textsuperscript{\textregistered}} always chooses the principal branch of fractional powers. This means that, for every positive real number $a$, $(-a)^{\frac{1}{3}}$ has to be read as the complex number $\sqrt[3]{a}{\rm e}^{i \frac{\pi}{3}}$. This implies, in particular, that $$ g_4(0)= \frac{3}{2}+1 + i \left( \sqrt{3} + \frac{3}{2} \right), \qquad g_5(0)=12+6i,$$ and $$ \begin{array}{ll} \Psi (0) &= \displaystyle \frac{\;1\;}{\sqrt{6}}\left(8 -\frac{1}{g_4(0)}-g_4(0)\right)^{\frac{1}{2}}+\sqrt{\frac{g_5(0)}{12}} \\[2em] & \displaystyle =\left( \frac{\sqrt{2+\sqrt{5}}}{2} - i \frac{\sqrt{\sqrt{5}-2}}{2} \right) + \left( \frac{\sqrt{2+\sqrt{5}}}{2} + i \frac{\sqrt{\sqrt{5}-2}}{2} \right) = \sqrt{2+\sqrt{5}}, \end{array}$$ as expected. The values assumed by $h_4$ in Proposition~\ref{3.8} are computed by the same rule. Let $G_u(P_n)$ be the graph obtained from two vertex-disjoint graphs $G$ and $P_n$ by adding a new edge joining a vertex $u$ of $G$ with an end vertex of $P_n$. \begin{lem}{\rm \cite{JFW-JW-MB}} The $A_{\alpha}$-spectral radius of the graph sequence $\{G_u(P_n)\}_{n \in \field{N}}$ has a limit point $\chi_u(G) \geqslant 2$. Moreover, \begin{itemize} \item[$\mathrm{(i)}$]\label{exist} if $\chi_u(G)>2$, then $\chi_u(G)$ is the largest root of the equation $$\left(1-\alpha\cdot h(\lambda)_{\alpha} \right)\phi(G)- \left(\alpha-(2\alpha-1)\cdot h(\lambda)_{\alpha}\right)\phi(G)_u=0, $$ where $h(\lambda)_{\alpha}$ is defined in \eqref{kazz1}; \item[$\mathrm{(ii)}$]\label{K13P5Pn} if $G=K_{1,3}$ and $u$ is its vertex of degree $3$, then $$\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!((K_{1,3})_u(P_n))=\frac{1}{2}\left(5\alpha+3\sqrt{2-4\alpha+3\alpha^2}\right).$$ \end{itemize} \end{lem} \begin{prop}\label{3.8} Let $u$ be the middle vertex of degree 2 of $G=P_5$.
Then, $$\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!((P_5)_u(P_n))=2\alpha+\frac{1}{2}(h_1+h_5)^{\frac{1}{2}}+ \frac{1}{2}\left(h_7-h_5+\frac{h_6}{4(h_1+h_5)^{\frac{1}{2}}}\right)^{\frac{1}{2}},$$ where $h_1=4-8\alpha-3\alpha^2,$ $h_2=19\alpha^2+8\alpha-4,$ $h_3=13\alpha^4-32\alpha^3+32\alpha^2-16\alpha+4,$ \\ $h_4=(-416+2496\alpha-6300\alpha^2+8560\alpha^3-6624\alpha^4+2784\alpha^5-502\alpha^6-(172800-2073600\alpha+11453184\alpha^2- 38499840\alpha^3+87733584\alpha^4-142826112\alpha^5+170398080\alpha^6-150197760\alpha^7+97143840\alpha^8-44993664\alpha^9+ 14176512\alpha^{10}-2730240\alpha^{11}+243216\alpha^{12})^{\frac{1}{2}})^{\frac{1}{3}},$\\ $h_5=\frac{1}{3} \left( h_2+2^{\frac{1}{3}}\frac{h_3}{h_4}+2^{-\frac{1}{3}}h_4 \right),$ $h_6=512\alpha^3-32\alpha h_2+112(-\alpha+2\alpha^2+\alpha^3),$ $h_7=13\alpha^2-8\alpha+4$. In particular, $\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_0}}\!((P_5)_u(P_n))=\Psi(0)= \sqrt{2+\sqrt{5}}$. \end{prop} \begin{proof} The inequalities $$\rho\!_{_{A_{\alpha}}}\! ((P_5)_u(P_n)) \geqslant \rho\!_{_{A_0}}\! ((P_5)_u(P_n)) \geqslant \rho\!_{_{A_0}}\! ((P_5)_u(P_2)) >2$$ hold for every $n \geqslant 3$ by Lemma~\ref{alpha-delta} and a direct calculation. Hence, it follows from (i) that $\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!((P_5)_u(P_n))$ is the largest root of the equation \begin{equation}\label{xxx} (1-\alpha h(\lambda)_{\alpha})\phi(P_5) -(\alpha-(2\alpha-1)h(\lambda)_{\alpha})\phi(P_5)_u=0. \end{equation} The result comes with the help of {\em Mathematica\textsuperscript{\textregistered}}, once we substitute $$\phi(P_5)=(\lambda^2-3\alpha\lambda+\alpha^2+2\alpha-1)(\lambda^3-5\alpha \lambda^2+(5\alpha^2+6\alpha -3)\lambda-8\alpha^2+4\alpha),$$ and $\phi(P_5)_u=(\lambda^2-3\alpha\lambda+\alpha^2+2\alpha-1)^2$.
For $\alpha=0$, \eqref{xxx} becomes $\phi(P_5) -\left( \phi(P_2)\right)^2h(\lambda)_0=0$, whose largest root is $\sqrt{2+\sqrt{5}}$, as already stated in \cite[p. 171]{hoffman}. \end{proof} \section{Proofs of Theorems \ref{Aa-main1} and \ref{Aa-main2}} The next four lemmas relate the $A_{\alpha}$-spectral radius and the $A_{\alpha}$-limit points to some structural conditions on the graph $G$. \begin{lem}\label{no-TC} If $G$ is a connected graph that is neither a tree nor a cycle, then $\rho\!_{_{A_{\alpha}}}\!(G) \geqslant \Psi(\alpha)$. \end{lem} \begin{proof} For each $ n \geqslant 4$, let $L_n$ be the graph obtained from the cycle $C_{n-1}$ by adding a pendant edge. It is easily seen that $L_n$ contains $P_2(P_{\lfloor \frac{n-2}{2} \rfloor},P_{\lfloor\frac{n-2}{2}\rfloor})$ as a subgraph. Under our hypotheses, $G$ contains a subgraph isomorphic to $L_m$ for a suitable integer $m \geqslant 4$. Thus, $\rho\!_{_{A_{\alpha}}}\!(G) \geqslant \rho\!_{_{A_{\alpha}}}\!(L_m) > \rho\!_{_{A_{\alpha}}}\!(P_2(P_{\lfloor\frac{m-2}{2} \rfloor},P_{\lfloor\frac{m-2}{2}\rfloor}))$, by Lemma \ref{alpha-delta}(ii). Lemma \ref{alpha-internal} yields $\rho\!_{_{A_{\alpha}}}\!(L_n)>\rho\!_{_{A_{\alpha}}}\!(L_{n+1})$. Hence, $$\rho\!_{_{A_{\alpha}}}\!(G) \geqslant \rho\!_{_{A_{\alpha}}}\!(L_m)> \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(L_n)\geqslant\lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_{\lfloor\frac{n-2}{2}\rfloor},P_{\lfloor\frac{n-2}{2}\rfloor}))=\Psi(\alpha),$$ where the last equality comes from Proposition~\ref{alpha-p2pnpn}. \end{proof} Let $\mathcal S$ be any infinite set. By `almost all elements of $\mathcal S$' we mean `all elements of $\mathcal S$ apart from a finite number of them'. \begin{lem}\label{gnam} Let $\mathcal G= \{ G_a\}_{a\in \field{N}} $ be a sequence of graphs such that $\lim_{a\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_a) < \Psi(\alpha)$. Then $\Delta(G_a) \leqslant 4$ for almost all $a \in \field{N}$.
\end{lem} \begin{proof} The statement directly follows from Lemma~\ref{alpha-delta}(i). In fact, for each $G_a$ such that $\Delta(G_a) \geqslant 5$, we have \[\sqrt{2+\sqrt{5}} = \Psi(0) < \sqrt{5} \leqslant \sqrt{\Delta(G_a)} \leqslant \rho\!_{_{A_0}}\!(G_a). \qedhere \popQED \] \end{proof} As usual, we denote by $\mathrm{diam} (G)$ the diameter of $G$, by $N_G(u)$ the neighbourhood of $u$ in $G$, i.e.\ the set of vertices in $V(G)$ adjacent to $u$, and by $d(u,v)$ the number of edges in a shortest path connecting $u$ and $v$. \begin{lem}\label{gnam2} Let $\mathcal G= \{ G_a\}_{a\in \field{N}} $ be a sequence of graphs such that $\lim\limits_{a\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_a) < \Psi(\alpha)$. The set of diameters $\{ \mathrm{diam} (G_a) \, | \, a \in \field{N} \}$ is not bounded above. \end{lem} \begin{proof} It is well-known that $ \lvert V(G_a) \rvert \leqslant \Delta(G_a)^{\mathrm{diam}(G_a)}+1$ (see, for instance, \cite{hoffman}). By Lemma~\ref{gnam}, we have $$ \lvert V(G_a) \rvert \leqslant 4^{\mathrm{diam}(G_a)}+1$$ for almost all $a \in \field{N}$. Since the graphs in the sequence $\{ G_a\}_{a\in \field{N}} $ are pairwise distinct, the set $\{ \lvert V(G_a) \rvert \, | \, a \in \field{N} \}$ cannot be bounded above. \end{proof} \begin{lem}~\label{gnam3} Let $T$ be a tree with at least three vertices of degree $3$. Then, $\rho\!_{_{A_{\alpha}}}\!(T) \geqslant \Psi(\alpha)$. \end{lem} \begin{proof} Let $u$, $v$ and $w$ be three vertices of degree $3$ in $T$. Without loss of generality, we can assume that $v$ is the vertex which lies on the path between $u$ and $w$. Consider the minimal subtree $T'$ whose vertex-set contains $\{ u,v,w\} \cup N_T(u) \cup N_T(v) \cup N_T(w)$. Then, it is easy to see that $P_2(P_m,P_m)$ is a subgraph of $T'$, where $m = \min \{ d(v,u), d(v,w) \}$ and $v \in V(P_2(P_m,P_m))$.
Since the path connecting $u$ and $v$ and the one connecting $v$ and $w$ are internal for $T'$, from Lemmas \ref{alpha-delta}(ii), \ref{alpha-internal} and Proposition~\ref{alpha-p2pnpn}, it follows that \[ \rho\!_{_{A_{\alpha}}}\!(T) \geqslant \rho\!_{_{A_{\alpha}}}\!(T') \geqslant \lim\limits_{k\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_k,P_k)) = \Psi(\alpha), \qedhere \popQED \] \end{proof} We now have all the tools needed to prove the main theorems in this paper. Let $\mathcal G= \{ G_a\}_{a\in \field{N}} $ be a sequence of graphs such that $\lim_{a\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_a) < \Psi(\alpha)$. From Lemma \ref{no-TC} it follows that, from a suitable integer $\bar{a}$ on, the $G_a$'s are all trees or cycles. Let $\Delta = \max \{ \Delta(G_a) \, | \ a \geq \bar{a} \}$. If $\Delta=2$, then each $G_a$ for $a \geq \bar{a}$ is either a path or a cycle; this implies that $\lim_{a\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_a)=2$. The same is true if $\Delta(G_a)=2$ for almost all $a\geqslant \bar{a}$. Thus, to consider the remaining cases, it is not restrictive to assume that the number of cycles in $\mathcal G$ is finite, and there exists in $\mathcal G$ a sequence $ \mathcal T= \{T_i\}_{i \in \field{N}}$ of pairwise distinct trees such that $\Delta(T_i) \geqslant 3$ and $\rho\!_{_{A_{\alpha}}}\!(T_i)\rightarrow \sigma$, where $2<\sigma< \Psi(\alpha)$. By Lemma~\ref{gnam}, we can also assume that $\Delta(T) \leqslant 4$ for all $ T \in \mathcal T$. If almost all trees in $\mathcal T$ have a vertex of degree $4$, then, as a consequence of Lemma \ref{gnam2}, we find in $\mathcal T$ a subsequence $\mathcal T'=\{T_{i_n}\}_{n \in \field{N}}$ such that $T_{i_n}$ contains $K_{1,3}(P_n)$.
Setting $\omega_1(\alpha)= \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(K_{1,3}(P_n))$, from Lemma \ref{K13P5Pn}(ii) we immediately get $$ \Psi(0) < \frac{3}{\sqrt{2}} = \omega_1(0) \leqslant \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_0}}\!(T_{i_n}) = \lim\limits_{a\rightarrow\infty}\rho\!_{_{A_0}}\!(G_a) < \Psi(0),$$ which is clearly a contradiction. This implies that the maximum degree of almost all trees in $\mathcal T$ is necessarily $3$. \begin{re} Even if it is not really relevant for the proof of Theorem~\ref{Aa-main1}, it is worth noting that if $\{ G_a\}_{a \in \field{N}}$ has a limit point and, for almost all graphs, $\Delta(G_a) \geqslant 4$, then $$ \lim\limits_{a\rightarrow\infty} \rho\!_{_{A_{\alpha}}}\!(G_a) \geqslant \omega_1(\alpha)> \Psi(\alpha) \quad \text{for all $\alpha \in [0,1)$}. $$ In fact, employing {\em Mathematica\textsuperscript{\textregistered}}, after running the command `{\rm NMaxValue[$\Psi(\alpha)-\omega_1(\alpha)$, $\alpha$]}', we get $$\max\{\Psi(\alpha)-\omega_1(\alpha)\mid\alpha \in \mathbb{R}\} = -0.0716+.$$ \end{re} Since the $A_{\alpha}$-spectral radius of almost all graphs in $\mathcal G$ is less than $\Psi(\alpha)$, by Lemma~\ref{gnam3} it follows that only a finite number of them have more than two vertices of degree $3$. We now show that if almost all trees in $\mathcal T$ have two vertices of degree $3$, then the distance between them is unbounded. Assuming the contrary, there exists a suitable $m \in \field{N}$ such that $\mathcal T$ contains a subsequence $\mathcal T'' = \{ T_{j_n} \}_{n \geqslant m+3}$ with the following property: $T_{j_n}$ contains the tree $T_{m,n}$ obtained from a path $v_1v_2\dots v_{m+2}\dots v_{n}$ and two isolated vertices $v,w$ by joining $v_2$ to $v$ and $v_{m+2}$ to $w$.
By construction, $v_2\dots v_{m+2}$ is an internal path for $T_{m,n}$, therefore Lemma~\ref{alpha-delta}(ii) implies that $ \rho\!_{_{A_{\alpha}}}\!(T_{j_n}) \geqslant \rho\!_{_{A_{\alpha}}}\!(T_{m,n}) > \rho\!_{_{A_{\alpha}}}\!(P_2(P_{n-m},P_{n-m}))$. By taking the limits, and recalling Proposition~\ref{alpha-p2pnpn}, we get $\lim\limits_{a\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(G_a) \geqslant \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(T_{j_n}) \geqslant \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n)) = \Psi(\alpha),$ against our assumption.\smallskip So far, we have proved that only a finite number of trees in $\mathcal T$ have more than two vertices of degree $3$, and if almost all trees in $\mathcal T$ have two vertices of degree $3$, then the distance between them is unbounded. This means that, by Lemma \ref{alpha-gxy}, the sequence $\mathcal T$ can possibly be replaced with another sequence of trees $\mathcal T'''= \{ T'''_i \}_{i \in \field{N} }$ such that every $T'''_i$ has only one vertex with degree $\Delta(T'''_i) =3$, and $\lim_{i\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(T'''_i)= \lim_{i\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(T_i)<\Psi(\alpha)$. We now show that, for almost all trees in $\mathcal T'''$, the unique vertex of degree $3$ is adjacent to a pendant vertex. Otherwise, we would find a subsequence $\{T'''_{j_h}\}_{h \in \field{N}}$ of $\mathcal T'''$ such that each $T'''_{j_h}$ contains the graph $(P_5)_u(P_h)$ defined above; by Proposition~\ref{3.8}, $$ \Psi(0) > \lim\limits_{i\rightarrow\infty}\rho\!_{_{A_0}}\!(T_i) \geqslant \lim\limits_{h\rightarrow\infty}\rho\!_{_{A_0}}\!(T'''_{j_h}) \geqslant \lim\limits_{h\rightarrow\infty}\rho\!_{_{A_0}}\!((P_5)_u(P_h)) = \Psi(0),$$ a contradiction. \begin{re} Let $\omega_2(\alpha)=\lim_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!((P_5)_u(P_n))$ (recall that $u$ is the middle vertex of degree 2 in $P_5$).
The presence of an arbitrarily big $(P_5)_u(P_h)$ inside almost all $T_i$'s implies that $\lim_{i\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(T_i) \geqslant \omega_2(\alpha) > \Psi(\alpha)$ for each $\alpha \in [0,1)$. In fact, using the command `{\rm FindMinVal\-ue[$\omega_2(\alpha)-\Psi(\alpha),\{\alpha,0\}$]}' in {\rm Mathematica}\textsuperscript{\textregistered}, we get $$\min\{\, \omega_2(\alpha)-\Psi(\alpha)\mid\alpha \in [0,\infty) \, \} = 2.22045\times 10^{-16}>0.$$ \end{re} We have seen that almost every $T'''_{i}$ is a tree of type $P_2(P_{n_h},P_{m_h})$. In other words, if this is the case, the removal of the unique vertex of degree $3$ gives rise to the disjoint union of the three paths $P_1, P_{n_h}$ and $P_{m_h}$; yet, the two `rays' $P_{n_h}$ and $P_{m_h}$ cannot be both arbitrarily long, otherwise $\Psi(\alpha) > \lim\limits_{i\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(T'''_{i}) \geqslant \lim\limits_{n\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_n,P_n)) = \Psi(\alpha),$ which is clearly impossible. From the discussion above, it follows that the limit points we seek are the numbers $$\lim_{m\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_m,P_n))= \eta_n(\alpha) \qquad \text{(for $n \in \field{N}$),}$$ where $P_2(P_m,P_n)$ is the graph depicted in Fig.~\ref{fig0}. Note that $\eta_n(\alpha)>2$, unless $n=1$ and $\alpha=0$. In fact, for fixed $ n \geqslant 2$, $P_2(P_2,P_2)$ is a proper subgraph of $P_2(P_m,P_n)$ for all $m>2$; hence, \[ \eta_n(\alpha) = \lim_{m\rightarrow\infty}\rho\!_{_{A_{\alpha}}}\!(P_2(P_m,P_n)) > \rho\!_{_{A_{\alpha}}}\!\left(P_2(P_2,P_2)\right) \geqslant \rho\!_{_{A_0}}\! \left(P_2(P_2,P_2) \right) = \sqrt{\frac{5+\sqrt{13}}{2}} >2. \] When $n=1$, by \cite[Proposition~3.6]{WWXB} it follows that $\eta_1(0)=2$, and $\eta_1(\alpha) >2$ for $\alpha>0$ by Lemma~\ref{alpha-delta}(ii). For the rest of the proof, we assume that $(n,\alpha) \not= (1,0)$.
Since $\eta_n(\alpha)>2$, by Lemma \ref{exist}(i), it follows that $\eta_n(\alpha)$ is the largest root of the equation \begin{equation}\label{kkk} \left(1-\alpha h(\lambda)_{\alpha}\right)\phi(P_{n+2})- \left( \alpha- (2\alpha-1) h(\lambda)_{\alpha} \right)\phi(B_1)\phi(B_n)=0. \end{equation} We now set \begin{equation}\label{Deogratias} \lambda=(1-\alpha)\theta+\frac{1-\alpha}{\theta}+2\alpha. \end{equation} Note that $\lambda>2$. So, $\theta > 0$ and $\theta \neq 1$. An obvious substitution leads to the identity \begin{equation}\label{hhh1} h(\lambda)_{\alpha} = \frac{\theta}{2(\alpha+\theta(1-\alpha))(1-\alpha (1-\theta))} \cdot \left( (1-\alpha)\theta + \frac{1-\alpha}{\theta} +2\alpha - \left\lvert \frac{(1-\alpha)(1-\theta^2)}{\theta} \right\rvert \right). \end{equation} We next distinguish the following two cases. {\bf Case 1}. $0 <\theta<1$. Then \eqref{hhh1} is equivalent to \begin{equation}\label{hhh} h(\lambda)_{\alpha} = \frac{2\theta(\alpha+\theta(1-\alpha))}{2(\alpha+\theta(1-\alpha))(1-\alpha (1-\theta))} = \frac{\theta}{1-\alpha (1-\theta)}. \end{equation} Thus, Equation \eqref{kkk} becomes \begin{equation}\label{qqq} \frac{1-\alpha}{1-\alpha(1-\theta)} \left(\phi( P_{n+2})-(\alpha (1-\theta)+\theta)\phi(B_1)\phi(B_n) \right)=0. \end{equation} By Lemma \ref{gongshi}, \eqref{Deogratias} and \eqref{hhh}, we get \begin{equation}\label{Pn+2} \scalemath{.85}{ \phi(P_{n+2}) = \frac{\theta (1-\alpha)^{n}}{1-\theta^2} \left( (1-\alpha)^2 \left( \frac{1}{\theta^{n+3}}-\theta^{n+3}\right)+2\alpha(1-\alpha) \left(\frac{1}{\theta^{n+2}}-\theta^{n+2}\right)+\alpha^2 \left(\frac{1}{\theta^{n+1}}-\theta^{n+1}\right) \right)}, \end{equation} \begin{equation}\label{B1} \phi(B_1) = (1-\alpha)\theta+\frac{1-\alpha}{\theta}+\alpha,\end{equation} and \begin{equation}\label{Bn} \phi(B_n)= \frac{\theta (1-\alpha)^{n-1} }{1-\theta^2} \left( \left( \frac{1}{\theta^{n+1}}-\theta^{n+1}\right)(1-\alpha) + \left(\frac{1}{\theta^{n}}-\theta^{n}\right) \alpha \right).
\end{equation} When we plug the three expressions above into \eqref{qqq}, the equation becomes \begin{multline}\label{uhm} \displaystyle \frac{(1-\alpha)^n}{(1-\theta^2)\theta^{n+2}}\left( (1-\alpha)^2\theta^{2n+4}-(2\alpha^2-2\alpha)\theta^{2n+3}+\alpha^2\theta^{2n+2} \right.\\ \left.-(1-\alpha)^2\theta^4+(2\alpha^2-2\alpha)\theta^3-(2\alpha^2-2\alpha+1)\theta^2+(1-\alpha)^2 \right) =0. \end{multline} Once we set $\theta^2=x$, then $0<x <1$, and Equation \eqref{uhm} is equivalent to $\Omega_{n,\alpha}(x)=0$, where \begin{equation*} \begin{split} \Omega_{n,\alpha}(x)&= \frac{(1-\alpha)^n}{(1-x)x^{\frac{n}{2}+1}}\left( (1-\alpha)^2x^{n+2}+2\alpha (1-\alpha)x^{n+\frac{3}{2}}+\alpha^2x^{n+1}-(1-\alpha)^2x^2-2\alpha(1-\alpha)x^{\frac{3}{2}}\right.\\ &\phantom{==}-(2\alpha^2-2\alpha+1)x+(1-\alpha)^2\Big)\\ &=\frac{(1-\alpha)^n}{(1-x)x^{\frac{n}{2}+1}}(x-1)\Phi_{n,\alpha}(x)\\ &= -\frac{(1-\alpha)^n}{x^{\frac{n}{2}+1}}\Phi_{n,\alpha}(x), \end{split} \end{equation*} with $\Phi_{n,\alpha}(x)$ being as in the statement of Theorem~\ref{Aa-main1}. Therefore, we only consider the positive root of $\Phi_{n,\alpha}(x) = 0$. It is easily verified that, for $(n, \alpha) \not=(1,0)$, $$\Phi_{n,\alpha}(0)=-(1-\alpha)^2<0, \quad \Phi_{n,\alpha}(1)= 2\alpha(1-\alpha)n +(1-\alpha^2)(n-1)+\alpha^2>0,$$ and $$ \frac{\mathrm{d} \Phi_{n,\alpha}(x)}{\mathrm{d}x}>0 \qquad \text{for $x>0$.}$$ Hence, $\gamma_n(\alpha)$ is the {\em only} positive root of \eqref{PHI} and satisfies \eqref{eq1} as claimed. Theorem \ref{Aa-main1} also holds for $(n, \alpha) =(1,0)$. In fact, $\gamma_1(0)=1$ is the only positive root of both $\Omega_{1,0}(x)= (x-1)^2(x+1)$ and $\Phi_{1,0}(x)= x^2-1$. From Proposition~\ref{alpha-p2pnpn} it follows that $\lim_{n\rightarrow\infty}\eta_n(\alpha)=\Psi(\alpha)$.
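Since $\Phi_{n,\alpha}$ changes sign exactly once on $(0,1)$, the numbers $\eta_n(\alpha)$ are easy to approximate numerically. The following Python sketch (ours, purely illustrative and not part of the proof; function names are our own) finds the root of $\Phi_{n,\alpha}$ by bisection, recovers $\eta_n(\alpha)$ via the substitution $\lambda=(1-\alpha)\theta+(1-\alpha)/\theta+2\alpha$ with $\theta=\sqrt{x}$, and cross-checks the result against the $A_{\alpha}$-spectral radius of a large finite $P_2(P_m,P_n)$, encoded as a spider tree with legs of lengths $1$, $n$, $m$ attached to a common center, as described above.

```python
import numpy as np

def spider_a_alpha_radius(legs, alpha):
    """Largest eigenvalue of A_alpha = alpha*D + (1-alpha)*A for a spider tree:
    a center vertex with paths ("legs") of the given lengths attached to it."""
    size = 1 + sum(legs)
    A = np.zeros((size, size))
    idx = 1
    for leg in legs:
        prev = 0  # every leg starts at the center vertex 0
        for _ in range(leg):
            A[prev, idx] = A[idx, prev] = 1.0
            prev, idx = idx, idx + 1
    M = alpha * np.diag(A.sum(axis=1)) + (1 - alpha) * A
    return np.linalg.eigvalsh(M)[-1]

def eta(n, alpha, tol=1e-12):
    """eta_n(alpha) via the unique root of Phi_{n,alpha} in (0,1) (bisection)."""
    def phi(x):
        bracket = ((1 - alpha) ** 2 * x ** (n + 2)
                   + 2 * alpha * (1 - alpha) * x ** (n + 1.5)
                   + alpha ** 2 * x ** (n + 1)
                   - (1 - alpha) ** 2 * x ** 2
                   - 2 * alpha * (1 - alpha) * x ** 1.5
                   - (2 * alpha ** 2 - 2 * alpha + 1) * x
                   + (1 - alpha) ** 2)
        return bracket / (x - 1)   # Phi: negative near 0, positive near 1
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    theta = np.sqrt(0.5 * (lo + hi))   # x = theta^2 with 0 < theta < 1
    return 2 * alpha + (1 - alpha) * (theta + 1 / theta)
```

For instance, $\eta_1(1/2)$ agrees with $(2+\sqrt{5})/2$ and $\eta_2(0)\approx 2.01980$, in accordance with Hoffman's limit points.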
Since $P_2(P_m,P_n)$ is a subgraph of $P_2(P_m,P_{n+1})$, by Lemma~\ref{alpha-delta}(ii) we obtain \begin{equation}\label{basta} \eta_1(\alpha) \leqslant \eta_2(\alpha) \leqslant \cdots \leqslant \eta_n(\alpha) \leqslant \eta_{n+1}(\alpha) \leqslant \cdots \qquad \text{ for $\alpha \in [0,1)$.} \end{equation} The proof will be over once we prove that all inequalities in \eqref{basta} are strict. By contradiction, suppose that $ \eta_{\bar{n}}(\bar{\alpha})$ is equal to $ \eta_{\bar{n}+1}(\bar{\alpha})$ for a suitable pair $(\bar{n},\bar{\alpha})$. Since the function $\lambda=\lambda (\theta)$ in \eqref{Deogratias} is strictly decreasing in the interval $(0,1)$, we also have $ \gamma_{\bar{n}}(\bar{\alpha}) = \gamma_{\bar{n}+1}(\bar{\alpha})$. From \eqref{PHI} it turns out that the quantity \begin{equation*} f(x,\alpha)=(x^2-2x^{\frac{3}{2}}+2x-1)\alpha^2+2(1-x+x^{\frac{3}{2}}-x^2)\alpha+x^2+x-1 \end{equation*} equals the difference $\Phi_{n+1, \alpha}(x)-x\Phi_{n, \alpha}(x)$ {\em whatever $n$ we choose in $\field{N}$}. Under our assumptions, $$f(\gamma_{\bar{n}}(\bar{\alpha}) ,\bar{\alpha})=\Phi_{\bar{n}+1, \bar{\alpha}}(\gamma_{\bar{n}}(\bar{\alpha}) )-\gamma_{\bar{n}}(\bar{\alpha}) \Phi_{\bar{n}, \bar{\alpha}}(\gamma_{\bar{n}}(\bar{\alpha}) )=0 - \gamma_{\bar{n}}(\bar{\alpha}) \cdot 0 =0.$$ Through an obvious iterative argument, we see that the non-zero number $\gamma_{\bar{n}}(\bar{\alpha})$ should be a root of $\Phi_{n, \bar{\alpha}}(x)$ for all $n \in \field{N}$. But this is not possible, since the difference $$ \Phi_{2, \bar{\alpha}}(x) - \Phi_{1, \bar{\alpha}}(x) = - x^2 \left( (1-\bar{\alpha})^2 x +2\bar{\alpha}(1-\bar{\alpha})\sqrt{x} -\bar{\alpha}^2 \right)$$ is negative for all $x \in (0,1)$. This proves Theorem \ref{Aa-main1}.\medskip {\bf Case 2}. $\theta>1$.
In this case, \eqref{hhh} and \eqref{qqq} should be replaced by $$ h(\lambda)_{\alpha} = \frac{1}{\theta-\alpha(\theta-1)}, \quad \text{and} \quad \frac{1-\alpha}{\theta-\alpha(\theta-1)} \left(\theta\phi( P_{n+2})-(1+\alpha (\theta-1))\phi(B_1)\phi(B_n) \right)=0, $$ whereas the expressions for $\phi(P_{n+2})$, $\phi(B_1)$ and $\phi(B_n)$ remain formally identical to \eqref{Pn+2}, \eqref{B1} and \eqref{Bn}. After some calculations and the substitution $\theta^2=x$ (note that now $x>1$), the latter equation becomes \begin{multline*} \frac{(1-\alpha)^n}{(x-1)x^{\frac{n}{2}+1}}\left( (1-\alpha)^2x^{n+2}-(2\alpha^2-2\alpha+1)x^{n+1}+(2\alpha^2-2\alpha)x^{n+\frac{1}{2}} -(1-\alpha)^2x^{n} \right.\\ \left. +\alpha^2x-(2\alpha^2-2\alpha)x^{\frac{1}{2}} +(1-\alpha)^2 \right) =0. \end{multline*} The above equation is equivalent to $\widetilde\Omega_{n,\alpha}(x)=0$, where \begin{equation*} \begin{split} \widetilde{\Omega}_{n,\alpha}(x)&= \frac{(1-\alpha)^n}{(x-1)x^{\frac{n}{2}+1}}\left( (1-\alpha)^2x^{n+2}-(2\alpha^2-2\alpha+1)x^{n+1}+(2\alpha^2-2\alpha)x^{n+\frac{1}{2}} -(1-\alpha)^2x^{n} \right.\\ &\phantom{==}+\alpha^2x-(2\alpha^2-2\alpha)x^{\frac{1}{2}} +(1-\alpha)^2\Big)\\ &=\frac{(1-\alpha)^n}{x^{\frac{n}{2}+1}}\widetilde{\Phi}_{n,\alpha}(x), \end{split} \end{equation*} with $\widetilde{\Phi}_{n,\alpha}(x)$ being as in the statement of Theorem~\ref{Aa-main2}. A direct calculation leads to \begin{equation}\label{equi} \Omega_{n,\alpha}(x) = x^{n+2} \widetilde{\Omega}_{n,\alpha}\left(\frac{1}{x} \right) \qquad \mbox{and} \qquad \Phi_{n,\alpha}(x) = -x^{n+1} \widetilde{\Phi}_{n,\alpha} \left(\frac{1}{x} \right). \end{equation} From Case 1, it follows that $\widetilde{\gamma}_n(\alpha) = \frac{1}{\gamma_n(\alpha)}$ is the only root of $\widetilde{\Phi}_{n,\alpha}(x)$ greater than $1$.
Consequently, \begin{equation*} \begin{split} \eta_n(\alpha) &= 2\alpha+(1-\alpha)(\gamma_n(\alpha))^{\frac{1}{2}} + (1-\alpha)(\gamma_n(\alpha))^{-\frac{1}{2}}\\ &= 2\alpha+(1-\alpha)(\widetilde{\gamma}_n(\alpha))^{\frac{1}{2}} + (1-\alpha)(\widetilde{\gamma}_n(\alpha))^{-\frac{1}{2}}. \end{split} \end{equation*} What is left to prove for Theorem \ref{Aa-main2} also comes from Case 1. This completes the proofs of Theorem \ref{Aa-main1} and Theorem \ref{Aa-main2}. \section{(Signless) Laplacian matrix} Inspired by Hoffman's theorem, Guo \cite{guo-lim} and Wang et al. \cite{wang-lim} respectively determined the Laplacian and signless Laplacian limit points smaller than $2+\omega+\omega^{-1}$ and $2+\varepsilon$, where $\omega = \frac{{\;}1{\;}}{3}\left((19 + 3\sqrt{33})^{\frac{{\;}1{\;}}{3}} + (19 - 3\sqrt{33})^{\frac{{\;} 1{\;}}{3}}+1 \right)$ and $\varepsilon=\frac{{\;}1{\;}}{3}\left((54 - 6\sqrt{33})^{\frac{{\;}1{\;}}{3}} + (54 + 6\sqrt{33})^{\frac{{\;} 1{\;}}{3}} \right)$. Note that $\varepsilon=\omega + \omega^{-1} =2.38+$, and the related proofs of \cite[Theorem~3.5]{guo-lim} and \cite[Theorem~3.1]{wang-lim} only involve trees. It is well-known that the Laplacian and signless Laplacian spectra of bipartite graphs are equal \cite{gro-mer-sun}. Thus, we can state the cited results of \cite{guo-lim} and \cite{wang-lim} in a single proposition. \begin{prop}{\rm \cite{guo-lim,wang-lim}}\label{QL-limit} Let $\mu_0 = 1$ and, for $n \geqslant 1$, let $\mu_n$ be the largest positive root of $$f_n(x) = x^{n+1}-(1+x+\cdots+x^{n-1})(\sqrt{x}+1)^2.$$ Let $\kappa_n = 2+ \mu_n^{\frac{1}{2}} + \mu_n^{-\frac{1}{2}}$. Then, $$4 = \kappa_0 < \kappa_1 < \kappa_2 < \cdots$$ are all the limit points of the $L$-spectral radius (or $Q$-spectral radius) of graphs less than {\small $\lim\limits_{n\rightarrow\infty}\kappa_n=2+\varepsilon$}, where $\varepsilon = \frac{{\;}1{\;}}{3}\left((54 - 6\sqrt{33})^{\frac{{\;}1{\;}}{3}} + (54 + 6\sqrt{33})^{\frac{{\;} 1{\;}}{3}} \right) = 2.38+$.
\end{prop} The proofs of Theorems \ref{Aa-main1} and \ref{Aa-main2} are also essentially limited to trees. Therefore, on the one side, Proposition \ref{QL-limit} can be seen as a corollary of Theorem \ref{Aa-main2}, obtained by evaluating $\alpha$ at $\frac{1}{2}$; on the other side, we retrieve from Theorem \ref{Aa-main1} an alternative statement concerning the limit points of the (signless) Laplacian spectral radius of graphs. \begin{thm}\label{ult} Let $\vartheta_0=1$, $\vartheta_1$ be the only positive root of \[ \varphi(x) = x^2 +2 x^{\frac{3}{2}}+x-1, \] and, for $n \geqslant 2$, $\vartheta_n$ be the smallest positive root of $$\varphi_n(x)=x^{n+1}+2\sum_{i=0}^{n-1}x^{i+\frac{3}{2}}+2\sum_{i=0}^{n-2}x^{i+2}+x-1.$$ Let $\xi_n=2+\vartheta_n^{\frac{\;1}{2}}+\vartheta_n^{-\frac{\;1}{2}}$. Then, $$4=\xi_0<\xi_1<\xi_2<\cdots$$ are all the limit points of the (signless) Laplacian spectral radius of graphs smaller than $$\lim\limits_{n\rightarrow\infty}\xi_n=2+\varepsilon, \qquad \text{where} \quad \varepsilon=\frac{{\;}1{\;}}{3}\left((54 - 6\sqrt{33})^{\frac{{\;}1{\;}}{3}} + (54 + 6\sqrt{33})^{\frac{{\;} 1{\;}}{3}} \right).$$ \end{thm} \begin{proof} Recall that $Q(G)=2A_{1/2}(G)$ for every graph $G$, and that $L(G)$ and $Q(G)$ are cospectral whenever $G$ is bipartite. From Theorem \ref{Aa-main1} and a direct calculation we get $$\varphi_n(x)=4\Phi_n\left(x,\frac{\;1}{2}\right), \quad \vartheta_n = \gamma_n \left( \frac{1}{2} \right) \quad \mbox{and} \quad \xi_n=2\eta_n \left(\frac{\;1}{2} \right).$$ By evaluating \eqref{PSI} at $\alpha=1/2$, the software {\em Mathematica\textsuperscript{\textregistered}} gives $2\Psi \left( \frac{1}{2} \right) = 2+\varepsilon$. In order to verify that Theorem~\ref{ult} is consistent with Proposition \ref{QL-limit}, we just check that $$\varphi_n(x) = -x^{n+1}f_n\left(\frac{1}{x}\right).$$ Thus, $\vartheta_n=1/\mu_n$ and, consequently, $\xi_n=\kappa_n$.
\end{proof} \section{Concluding remarks} About fifty years after its publication, the paper \cite{hoffman} is still inspiring people working in Spectral Graph Theory. The work presented in this paper not only provides a new version of Hoffman's theorem about the limit points of the adjacency spectral radius of graphs, but also generalizes it in two directions. Unlike Hoffman's original arguments, ours take advantage of the software {\em Mathematica\textsuperscript{\textregistered}}, which plays an essential role in Section 3 and throughout the proofs of Theorems \ref{Aa-main1} and \ref{Aa-main2}. After \cite{WWXB}, the main results in the paper can be seen as a second step towards the more general problem of determining all the limit points of the $A_{\alpha}$-spectral radius of graphs. Very recently, to estimate the maximum cardinality of equiangular lines in the $n$-dimensional Euclidean space, Jiang and Polyanskii \cite{jiang-poly} applied Hoffman's theorem and the related results in \cite{BN,CDG,she} to give a forbidden-subgraph characterization of graphs with bounded adjacency spectral radius. This is a novel application. To get further results in the same vein, it could be important to solve Problem~1 and prove or disprove Conjecture~1 below.\\ \noindent {\bf Problem 1}. Characterize all the connected graphs with $A_\alpha$-spectral radius between $2$ and $\Psi(\alpha)$.\\ \noindent {\bf Conjecture 1}. Let $\alpha \in [0,1)$. For any $\Upsilon(\alpha) \geq \Psi(\alpha) $, there exists a sequence of graphs $\{G_i\}_{i\in \field{N}}$ such that $\lim\limits_{i\rightarrow\infty}\rho_{_{A_\alpha}}(G_i) = \Upsilon(\alpha)$. \medskip\medskip \noindent{\bf Acknowledgments.} The first author is supported for this research by the National Natural Science Foundation of China (No. 11971274).
\section{Introduction} \label{Introduction} Given an $m\times n$ random matrix $A$, the uniform deviation inequality plays an important role in the theory of random matrices and has many interesting and important consequences. We first quote a classical result due to G. Schechtman \cite{Schechtman} for i.i.d ensemble Gaussian random matrices with respect to general norms, which is very useful in asymptotic geometric analysis. \begin{theorem} Let $A \in \mathbb{R}^{m \times n}$ be a random matrix with i.i.d $\mathscr{N}(0,1)$ entries, $S \subseteq \mathbb{R}^{m}$, and $T \subseteq \mathbb{R}^{n}$. Then we have \begin{equation} \label{MDI} \mathbb{E}[\sup_{x \in T}\abs{\sup_{y\in S} \langle Ax,y \rangle - \mathbb{E}[\sup_{y\in S} \langle Ax,y\rangle]}] \leq C \text{rad}(S) \gamma(T), \end{equation} where $\gamma(T) \equiv \mathbb{E}[\sup_{x\in T} |\langle g,x \rangle|]$, $g \sim \mathscr{N}(0,I_{n})$, and $\text{rad}(S) \equiv \sup_{y \in S}||y||_{2}$. Here $C$ is an absolute universal constant and $\gamma(T)$ is called the Gaussian complexity of $T$. \end{theorem} Note that if $f(x) \equiv \sup_{y\in S}\langle x, y \rangle$, then $\text{Lip}(f) = \text{rad}(S)$ \footnote{In this paper, we always use $\text{rad}(S)$ to denote $\text{Lip}(f)$.}. In addition, (\ref{MDI}) is sharp (see \cite[Theorem 11.2.4]{V} and \cite[Exercise 8.7.2]{V}). See also \cite[Theorem 8.7.1]{V} and \cite{Tstable} for the Chevet inequality and \cite{Chen} for the $\ell_{p}$-Gaussian-Grothendieck problem. Still, it is a challenging open problem to study the universality of the matrix deviation inequality (see \cite[Remark 11.1.9]{V}); in other words, whether the general matrix deviation inequality holds for i.i.d ensemble sub-Gaussian random matrices. Moreover, we believe that this extension is quite useful for understanding the deviation of i.i.d ensemble sub-Gaussian random matrices and has value in practical applications.
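To build intuition for the right-hand side of (\ref{MDI}), the Gaussian complexity $\gamma(T)$ can be estimated by Monte Carlo. The sketch below is ours and purely illustrative (function names are our own); it treats $T = B_2^n$, for which the supremum is attained at $x = g/\lVert g\rVert_2$, so that $\gamma(B_2^n) = \mathbb{E}\lVert g\rVert_2 = \sqrt{2}\,\Gamma\!\left(\frac{n+1}{2}\right)/\Gamma\!\left(\frac{n}{2}\right) \approx \sqrt{n}$ in closed form.

```python
import numpy as np
from math import gamma, sqrt

def gaussian_complexity(sup_abs_inner, n, trials=20000, seed=0):
    """Monte Carlo estimate of gamma(T) = E sup_{x in T} |<g, x>|;
    sup_abs_inner(g) must return sup_{x in T} |<g, x>| for one Gaussian g."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, n))
    return float(np.mean([sup_abs_inner(row) for row in g]))

n = 50
# For T = B_2^n the supremum over x is ||g||_2 (take x = g / ||g||_2).
estimate = gaussian_complexity(np.linalg.norm, n)
exact = sqrt(2) * gamma((n + 1) / 2) / gamma(n / 2)  # closed form for E||g||_2
```

The same routine applies to any $T$ for which the inner supremum can be evaluated, e.g. $T = B_1^n$ gives $\gamma(T) = \mathbb{E}\max_i |g_i| \asymp \sqrt{\log n}$.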
Our object in this paper is to prove the matrix deviation inequality for the $\ell_{p}$-norm, $1\leq p < \infty$, and i.i.d ensemble sub-Gaussian random matrices. That is to say, if $A \in \mathbb{R}^{m\times n}$ is a random matrix with i.i.d, mean-zero, unit-variance, sub-Gaussian entries $\{A_{i,j}\}$ and $K = ||A_{i,j}||_{\psi_{2}}$, where $||X||_{\psi_{2}}$ is the sub-Gaussian norm of $X$ (see Definition \ref{sub-gaussian}), then, for any subset $T \subseteq \mathbb{R}^{n}$, we have \begin{equation} \label{goal1} \norm{\sup_{x \in T}\sabs{||Ax||_{p} - \mathbb{E}[||Ax||_{p}]}\;}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})\gamma(T) \end{equation} and \begin{equation} \label{goal2} \norm{\sup_{x \in T}\sabs{||Ax||_{p} - m^{ \frac{1}{p}}||A_{1}x||_{L^{p}}}\;}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})\gamma(T) \end{equation} for every $1\leq p < \infty$, where $A_{i}$ is the $i$-th row of $A$, $||A_{1}x||_{L^{p}} = \mathbb{E}[|A_{1}x|^{p}]^{\frac{1}{p}}$, $B^{m}_{q} \equiv \{z \in \mathbb{R}^{m} : ||z||_{q} \leq 1\}$ and $C_{p}$ is a positive absolute constant depending only on $p$. Note that the matrix deviation inequality for the $\ell_{2}$-norm and i.i.d ensemble sub-Gaussian random matrices was obtained earlier by Liaw et al. \cite{V2}. To prove (\ref{goal1}) and (\ref{goal2}), by Talagrand's comparison inequality, it suffices to prove that the random processes $R_{x} \equiv ||Ax||_{p} - m^{\frac{1}{p}}||A_{1}x||_{L^{p}}$ and $X_{x} \equiv ||Ax||_{p} - \mathbb{E}[||Ax||_{p}]$ possess the sub-Gaussian increment property (see Theorem \ref{v2case3}). To obtain this property, we follow the strategy of the proof of \cite[Theorem 1.3]{V2}. We first prove the single sub-Gaussian increment property (see Theorem \ref{v2case1}). This leads us to study the concentration inequality for the $\ell_{p}$-norm of a sub-Gaussian random vector.
In other words, if $\{X_{i}\}_{1\leq i\leq m}$ are i.i.d sub-Gaussian random variables such that $||X_{1}||_{L^{p}} = 1$ and $K \equiv ||X_{1}||_{\psi_{2}}$, then \[ \norm{||X^{(m)}||_{p} - m^{\frac{1}{p}}}_{\psi_{2}} \leq C_{p} K^{p} \times \text{rad}(B_{q}^{m}), \] where $X^{(m)} = (X_{1},...,X_{m})$ and $C_{p}$ is a positive constant depending only on $p$. To estimate $\mathbb{P}(\sabs{||X^{(m)}||_{p} - m^{\frac{1}{p}}} \geq \delta)$, we decompose it as \begin{equation} \label{intro_decomposition} \mathbb{P}(\sabs{||X^{(m)}||_{p} - m^{\frac{1}{p}}} \geq \delta) = \mathbb{P}(\sum_{i=1}^{m} (|X_{i}|^{p}-1) \geq \delta_{1}) + \mathbb{P}(\sum_{i=1}^{m} (1-|X_{i}|^{p}) \geq \delta_{2}), \end{equation} where $\delta_{1} = (\delta + m^{\frac{1}{p}})^{p} - m$ and $\delta_{2} = m - (m^{\frac{1}{p}} - \delta)^{p}$. Indeed, for $0 < \delta \leq m^{\frac{1}{p}}$, $||X^{(m)}||_{p} \geq m^{\frac{1}{p}} + \delta$ if and only if $\sum_{i=1}^{m} (|X_{i}|^{p}-1) \geq \delta_{1}$, $||X^{(m)}||_{p} \leq m^{\frac{1}{p}} - \delta$ if and only if $\sum_{i=1}^{m} (1-|X_{i}|^{p}) \geq \delta_{2}$, and these two events are disjoint. Since $\mathbb{P}(\sabs{|X_{i}|^{p} - 1} \geq t) \leq 2\exp(-\frac{t^{\frac{2}{p}}}{CK^{2}})$ for every $t > 0$, this leads us to consider the tail probability of the sum of independent mean-zero $\alpha$-Orlicz random variables \footnote{Note that, in this paper, we take $\alpha = \frac{2}{p}$.} (see Section \ref{Orlicz} for the definition of $\alpha$-Orlicz random variables and the discussion above Theorem \ref{v2main} for the estimation of the tail probability (\ref{intro_decomposition})). Note that this problem, or, more generally, the tail probability of the sum of independent mean-zero random variables, has been studied by many authors, e.g., \cite{Kavita} and \cite{a_sub_exponential} for $\alpha$-Orlicz random variables \footnote{They call them $\alpha$-sub-exponential random variables.}, \cite{log2} for logarithmically concave random variables, and \cite{log} for logarithmically convex random variables. Next, we prove the general sub-Gaussian increment property by means of the single sub-Gaussian increment property. 
To be more specific, applying two special cases of the general sub-Gaussian increment property (see Theorem \ref{v2case1} and Theorem \ref{v2case2}) and a reverse triangle inequality yields the general sub-Gaussian increment property (see the discussion above Theorem \ref{v2case3}). Finally, as a consequence of (\ref{goal1}) and (\ref{goal2}), we show that the Johnson–Lindenstrauss lemma from $\ell_{2}^{n}$-space to $\ell_{p}^{m}$-space holds for all i.i.d ensemble sub-Gaussian random matrices. In other words, if $\epsilon \in (0,1)$ and $T \subseteq \mathbb{R}^{n}$ with $N = |T|$, then, with high probability, $$ d_{p}(1-\epsilon)||x-y||_{2} \leq \norm{\frac{1}{m^{\frac{1}{p}}}A(x-y)}_{p} \leq D_{p}(1+\epsilon)||x-y||_{2} \quad \forall x,y \in T, $$ where $d_{p}$ and $D_{p}$ are positive constants that depend on $p$ and $K$. For more general results and similar problems, see \cite{sJ3}, \cite{sJ2}, and \cite{sJ1}. The rest of the paper is organized as follows. In Section \ref{Pre}, we introduce the major tools of our proofs, including some properties of $\alpha$-Orlicz random variables and some concentration inequalities. Section \ref{Main} is devoted to establishing the main results and their application. The proofs associated with the main theorem are presented in Section \ref{Main Proof}. \section{Preliminaries} \label{Pre} \subsection{$\alpha$-Orlicz Random Variables} \label{Orlicz} First, we recall the definition of $\alpha$-Orlicz random variables. For simplicity, we postpone the proofs of the results in this subsection to the appendix. \begin{definition} \label{sub-gaussian} Let $\alpha > 0$ and $X$ be a random variable. The $\alpha$-Orlicz norm of $X$ is defined by $$||X||_{\psi_{\alpha}} \equiv \inf\{t>0 : \mathbb{E}[\exp(|X|^{\alpha}/t^{\alpha})]\leq 2\}.$$ (For convenience, we set $\inf \emptyset = \infty$.) \end{definition} We say $X$ is an $\alpha$-Orlicz random variable if $||X||_{\psi_{\alpha}} < \infty$. 
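For example, if $g \sim \mathscr{N}(0,1)$, then $\mathbb{E}[\exp(g^{2}/t^{2})] = (1-\frac{2}{t^{2}})^{-\frac{1}{2}}$ for $t^{2} > 2$, so
\begin{equation*}
\mathbb{E}[\exp(g^{2}/t^{2})] \leq 2 \iff 1-\frac{2}{t^{2}} \geq \frac{1}{4} \iff t^{2} \geq \frac{8}{3},
\end{equation*}
and hence $||g||_{\psi_{2}} = \sqrt{8/3}$.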
In particular, we say $X$ is a sub-Gaussian random variable if $||X||_{\psi_{2}} < \infty$, and $X$ is a sub-exponential random variable if $||X||_{\psi_{1}} < \infty$. \begin{remark} In general, if $\psi: [0,\infty) \to [0,\infty)$ is a convex, increasing function such that $\psi(0) = 0$ and $\lim_{x\to\infty}\psi(x) = \infty$, then the $\psi$-Orlicz norm of $X$ is defined by $$||X||_{\psi} \equiv \inf\{t>0 : \mathbb{E}[\psi(\frac{|X|}{t})]\leq 1\}.$$ Here, we consider $\psi_{\alpha}(x) = e^{x^{\alpha}}-1$, where $\alpha > 0$. Note that $\psi_{\alpha}$ is convex if and only if $\alpha \geq 1$. \end{remark} \begin{theorem} ($\alpha$-Orlicz properties) \label{intro1} Let $\alpha > 0$ and $X$ be a random variable. Then the following properties are equivalent: \begin{enumerate} [label=(\alph*)] \item The MGF of $|X|^{\alpha}$ is bounded at some point, namely \begin{equation} \label{1p} \mathbb{E}[\exp(|X|^{\alpha}/K_{1}^{\alpha})] \leq 2. \end{equation} \item The tails of $X$ satisfy \begin{equation} \label{2p} \mathbb{P}(|X| \geq t) \leq 2\exp(-t^{\alpha}/K_{2}^{\alpha}) \quad \forall t\geq 0. \end{equation} \item The moments of $X$ satisfy \begin{equation} \label{3p} ||X||_{L^{p}} \leq K_{3} \times p^{\frac{1}{\alpha}} \quad \forall p\geq \alpha. \end{equation} \item The MGF of $|X|^{\alpha}$ satisfies \begin{equation} \label{4p} \mathbb{E}[\exp(\lambda^{\alpha}|X|^{\alpha})]\leq \exp(K_{4}^{\alpha}\lambda^{\alpha}) \quad \forall 0\leq \lambda < \frac{1}{K_{4}}. \end{equation} \end{enumerate} Here the parameters $K_{i} > 0$ appearing in these properties differ from each other by at most an absolute constant factor. 
\end{theorem} Note that, in the proof of Theorem \ref{intro1}, we observe $K_{2} = K_{1}$, $K_{3} = C_{2,\alpha}K_{2}$, $K_{4} = C_{3,\alpha} K_{3}$, where $C_{2,\alpha} \equiv (\frac{2}{c\alpha})^{\frac{1}{\alpha}}$, $C_{3,\alpha} \equiv (2e\alpha)^{\frac{1}{\alpha}}$, and $c$ is a positive constant so that $\Gamma (1+x) \leq (\frac{x}{c})^{x}$ for all $ x\geq 1 $. Also, if $K_{4}$ is known, then we can take $K_{1} = (4ce)^{\frac{-1}{\alpha}} K_{4}$ and so $||X||_{\psi_{\alpha}} \leq (4ce)^{\frac{-1}{\alpha}} K_{4}$. \begin{lemma} [\textbf{Centering property \& Scaling property}] \label{cs} Let $X$ be a random variable. If $||X||_{\psi_{\alpha}} < \infty$, then $\norm{X - \mathbb{E}[X]}_{\psi_{\alpha}} \leq C_{4,\alpha} ||X||_{\psi_{\alpha}}$, and, if $||X||_{\psi_{\alpha \beta}} \vee || \;|X|^{\beta}\;||_{\psi_{\alpha}} < \infty$, then $||\;|X|^{\beta}\;||_{\psi_{\alpha}} = ||X||_{\psi_{\alpha\beta}}^{\beta}$. Here $C_{4,\alpha}$ is a positive constant that depends on $\alpha$. \end{lemma} \begin{lemma} [\textbf{The MGF of $\alpha$-Orlicz random variables}]\label{MGF} Let $1<\alpha < \infty$ and $X$ be a random variable with $||X||_{\psi_{\alpha}} \leq K$. Then we have $$ \mathbb{E}[\exp(\lambda X)] \leq \exp(\frac{1}{\alpha'} (2\lambda K C_{7,\alpha} )^{\alpha '}) \quad \forall \lambda \geq \frac{2}{K C_{7,\alpha}}, $$ where $\alpha'$ is the Hölder conjugate of $\alpha$ and $C_{7,\alpha}$ is a positive constant depending only on $\alpha$. \end{lemma} \subsection{Concentration Inequalities} In this subsection, we present basic results which are used throughout this article. First, in order to obtain the tail probability of the supremum of the random processes $R_{x}$ and $X_{x}$, note that the proof of \cite[Theorem 8.5.3]{V} gives the following estimate. 
\begin{theorem} [\textbf{Generic chaining bound}, \protect{\cite[Theorem 8.5.3]{V}}] \label{Talagrand} Let $T \subseteq \mathbb{R}^{n}$, $x_{0} \in T$, and $\{\mathscr{R}_{x}\}_{x\in T}$ be a random process such that $\norm{\mathscr{R}_{x} - \mathscr{R}_{y}}_{\psi_{2}} \leq K||x-y||_{2}$ for every $x,y \in T$. Then \begin{equation*} \norm{\sup_{x\in T}|\mathscr{R}_{x} - \mathscr{R}_{x_{0}}|}_{\psi_{2}} \leq C K \gamma(T), \end{equation*} where $C$ is a positive absolute constant. In particular, if $\mathscr{R}_{x_{0}} = 0$, then $ \norm{\sup_{x \in T} |\mathscr{R}_{x}|}_{\psi_{2}} \leq CK\gamma(T). $ \end{theorem} Next, we quote two results that help us control the tail probability of the sum of independent mean-zero $\alpha$-Orlicz random variables. \begin{theorem} [\textbf{Sums of independent sub-Gaussian random variables}, \protect{\cite[Proposition 2.6.1]{V}}] \label{sums} Let $X_{1},...,X_{N}$ be independent mean-zero sub-Gaussian random variables. Then $ \norm{\sum_{i=1}^{N}X_{i}}_{\psi_{2}}^{2} \leq C\sum_{i=1}^{N} ||X_{i}||_{\psi_{2}}^{2}, $ where $C$ is a positive absolute constant. \end{theorem} \begin{theorem} [\textbf{Bernstein’s inequality for $\alpha$-Orlicz random variables}, \protect{\cite[Corollary 1.4]{a_sub_exponential}}] \label{bernstein} If $0 < \alpha \leq 1$ and $X_{1},...,X_{m}$ are independent mean-zero, $\alpha$-Orlicz random variables such that $\max_{1\leq i \leq m} ||X_{i}||_{\psi_{\alpha}} \leq K$, then $$ \mathbb{P}\bkt{\abs{\sum_{i=1}^{m} a_{i} X_{i}} \geq t}\leq 2\exp(-c_{\alpha}\min\{\frac{t^{2}}{K^{2} ||a||_{2}^{2}},\frac{t^{\alpha}}{K^{\alpha} \max_{1\leq i\leq m}|a_{i}|^{\alpha}}\}) \quad \forall t>0, $$ where $c_{\alpha}$ is a positive constant depending only on $\alpha$. \end{theorem} \section{Main Steps} \label{Main} The goal of this section is to prove (\ref{goal1}) and (\ref{goal2}) under the following assumption. 
We first show that $R_{x}$ and $X_{x}$ have sub-Gaussian increments, and then prove (\ref{goal1}) and (\ref{goal2}) by Theorem \ref{Talagrand}. As in the proof of \cite[Theorem 1.3]{V2}, we first prove it in some special cases, and then, by combining these results, we prove the general case. For simplicity, we postpone some of the proofs of the theorems in this section to Section \ref{Main Proof}. \begin{assumption} \label{Assumption1} Assume that $1 \leq p<\infty$ and $A \in \mathbb{R}^{m \times n}$ is a random matrix with i.i.d, mean-zero, unit variance, sub-Gaussian entries $\{A_{i,j}\}$ and $K \equiv ||A_{1,1}||_{\psi_{2}}$. \end{assumption} Recall that we consider the random processes: \begin{equation} \label{randomfield} R_{x} \equiv ||Ax||_{p} - m^{\frac{1}{p}}||A_{1}x||_{L^{p}}\footnote{We denote the $i$th row vector of $A$ by $A_{i}$.} \quad \text{and} \quad X_{x} \equiv ||Ax||_{p} - \mathbb{E}[||Ax||_{p}] \quad \forall x\in \mathbb{R}^{n}. \end{equation} Then we have the following properties. \begin{enumerate} [label=(\alph*)] \item Note that if $||z||_{p} \equiv (\sum_{i = 1}^{m}|z_{i}|^{p})^{\frac{1}{p}}$ and $S = B^{m}_{q} \equiv \{z \in \mathbb{R}^{m} : ||z||_{q} \leq 1\}$, then $\sup_{y \in S} \langle z, y \rangle = ||z||_{p}$ \footnote{In this paper, $q$ denotes the Hölder conjugate of $p$.}. Also, by the Hölder inequality, we get \begin{equation} \label{ball} \text{rad}(B^{m}_{q}) = \begin{cases} m^{\frac{1}{2} - \frac{1}{q}} = m^{\frac{1}{p} - \frac{1}{2}}, & \text{if} \quad 1\leq p\leq 2\\ 1, & \text{if} \quad 2<p<\infty. \end{cases} \end{equation} \item Define $||x|| \equiv ||A_{1}x||_{L^{p}}$ for $x \in \mathbb{R}^{n}$ and $B \equiv \{z \in \mathbb{R}^{n} : ||z|| \leq 1\}$. Since $\{A_{1,j}\}_{1\leq j\leq n}$ are independent, unit variance random variables, it follows that $||\cdot||$ is a norm on $\mathbb{R}^{n}$. \item $||\cdot||$ and $||\cdot||_{2}$ are equivalent. 
More precisely, by \cite[Exercise 2.6.5, Exercise 2.6.6, Exercise 2.6.7]{V}, we have \begin{equation} \label{norm} CK^{-3} ||x||_{2} \leq ||x|| \leq ||A_{1}x||_{L^{2}} = ||x||_{2} \quad \forall 1\leq p \leq 2 \end{equation} and \begin{equation} \label{norm2} ||x||_{2} = ||A_{1}x||_{L^{2}} \leq ||x|| \leq C'K ||x||_{2} \quad \forall 2\leq p <\infty, \end{equation} where $C$ and $C'$ are positive absolute constants. \item Applying Jensen's inequality shows that \begin{equation} \label{K} K = \inf\{ t > 0 : \mathbb{E}[\exp(\frac{|A_{1,1}|^{2}}{t^{2}})] \leq 2\} \geq \inf\{t> 0:\exp(\frac{\mathbb{E}[|A_{1,1}|^{2}]}{t^{2}}) \leq 2\} =\sqrt{\frac{1}{\ln2}} > 1. \end{equation} \end{enumerate} To begin with, we establish the concentration inequality for the $\ell_{p}$-norm of sub-Gaussian random vectors. See Section \ref{51} for the proof of Theorem \ref{v2main}. \begin{theorem} \label{v2main} Let $1\leq p< \infty$ and $\{X_{i}\}_{1\leq i < \infty}$ be i.i.d sub-Gaussian random variables such that $||X_{1}||_{L^{p}} = 1$ and $K \equiv ||X_{1}||_{\psi_{2}}$. Then, for each $m\geq 1$ and $X^{(m)} = (X_{1},...,X_{m})$, we have \begin{equation*} \norm{||X^{(m)}||_{p} - m^{\frac{1}{p}}}_{\psi_{2}} \leq C_{p} K^{p} \times \text{rad}(B_{q}^{m}) \quad \text{and} \quad \norm{||X^{(m)}||_{p} - \mathbb{E}[||X^{(m)}||_{p}]}_{\psi_{2}} \leq C_{p} K^{p} \times \text{rad}(B_{q}^{m}), \end{equation*} where $C_{p}$ is a positive constant depending only on $p$. \end{theorem} Now, we prove the sub-Gaussian increment property for a single point. Fix $x \neq 0$. Applying Theorem \ref{sums} and (\ref{norm}) shows that $ ||A_{1} \frac{x}{||x||}||_{\psi_{2}} \leq \frac{CK||x||_{2}}{||x||} \leq \frac{CK^{4}||x||}{||x||} = CK^{4} $ if $1\leq p < 2$. Similarly, by Theorem \ref{sums} and (\ref{norm2}), we have $ ||A_{1} \frac{x}{||x||}||_{\psi_{2}} \leq CK $ if $2\leq p < \infty$. 
Hence, for the case of $1\leq p < 2$, applying Theorem \ref{v2main} and (\ref{norm}) yields \begin{equation} \label{v2case1_case1} \norm{||Ax||_{p} - m^{\frac{1}{p}}||x||}_{\psi_{2}} \leq ||x|| C_{p} K^{4p} \text{rad}(B^{m}_{q}) \leq ||x||_{2} C_{p} K^{4p} \text{rad}(B^{m}_{q}). \end{equation} For the case of $2\leq p < \infty$, since $K\geq 1$, applying Theorem \ref{v2main} and (\ref{norm2}) gives \begin{equation} \label{v2case1_case2} \norm{||Ax||_{p} - m^{\frac{1}{p}}||x||}_{\psi_{2}} \leq ||x|| C_{p} K^{p} \text{rad}(B^{m}_{q}) \leq ||x||_{2} C_{p} K^{4p} \text{rad}(B^{m}_{q}). \end{equation} In addition, we can use $\norm{||Ax||_{p} - m^{\frac{1}{p}}||x||}_{\psi_{2}}$ to bound $\norm{||Ax||_{p} - \mathbb{E}[||Ax||_{p}]}_{\psi_{2}}$. Therefore, we obtain the single sub-Gaussian increment property as follows. \begin{theorem} [\textbf{Case 1: $x \in \mathbb{R}^{n}$ and $y = 0$}]\label{v2case1} Under Assumption \ref{Assumption1}, we have \begin{equation*} \norm{R_{x}}_{\psi_{2}} \leq C_{p} K^{4p} \times \text{rad}(B^{m}_{q})||x||_{2} \quad \text{and} \quad \norm{X_{x}}_{\psi_{2}} \leq C_{p} K^{4p} \times \text{rad}(B^{m}_{q})||x||_{2} \quad \forall x\in \mathbb{R}^{n}. \end{equation*} \end{theorem} Next, with the help of Theorem \ref{v2case1}, we can deduce the sub-Gaussian increment property for the case that $x$ and $y$ are unit vectors with respect to $||\cdot||$. See Section \ref{52} for the proof of Theorem \ref{v2case2}. \begin{theorem} [\textbf{Case 2: $x,y \in \partial B$}]\label{v2case2} Under Assumption \ref{Assumption1}, we have \[ \norm{R_{x} - R_{y}}_{\psi_{2}} \leq C_{p} K^{4p} \times \text{rad}(B^{m}_{q})||x - y||_{2} \quad \text{and} \quad \norm{X_{x} - X_{y}}_{\psi_{2}} \leq C_{p} K^{4p} \times \text{rad}(B^{m}_{q})||x - y||_{2}\quad \forall x,y \in \partial B. \] \end{theorem} Finally, we consider the general case. Without loss of generality, we may assume that $||x|| = 1$ and $||y|| > 1$. 
Set $\overline{y} = \frac{y}{||y||}$, and decompose $||R_{x} - R_{y}||_{\psi_{2}}$ according to \begin{equation*} \norm{R_{x} - R_{y}}_{\psi_{2}} \leq \norm{R_{x} - R_{\overline{y}}}_{\psi_{2}} + \norm{R_{\overline{y}} - R_{y}}_{\psi_{2}}. \end{equation*} Applying Theorem \ref{v2case2}, the first term can be bounded by $C_{p} K^{4p} \times \text{rad}(B^{m}_{q})||x - \overline{y}||_{2}$. Write the second term as $||y-\overline{y}|| \times \norm{R_{\overline{y}}}_{\psi_{2}}$. Applying the first inequality of (\ref{v2case1_case1}) and (\ref{v2case1_case2}), (\ref{norm}), and (\ref{norm2}), the second term can be bounded by \begin{equation*} ||y-\overline{y}|| C_{p} K^{4p} \times \text{rad}(B^{m}_{q}) \leq ||y-\overline{y}||_{2} C_{p} K^{4p+1} \times \text{rad}(B^{m}_{q}). \end{equation*} Thus, it remains to estimate $||x-\overline{y}||_{2} + ||\overline{y} - y||_{2}$. This leads us to a reverse triangle inequality, which shows that this term can be controlled by $||x-y||_{2}$. See Section \ref{53} for the details of the proof. Therefore, we establish the general sub-Gaussian increment property as follows. \begin{theorem} [\textbf{Case 3: General Vectors $x,y \in \mathbb{R}^{n}$}]\label{v2case3} Under Assumption \ref{Assumption1}, we have \begin{equation*} \norm{R_{x} - R_{y}}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})||x - y||_{2} \quad \text{and} \quad \norm{X_{x} - X_{y}}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})||x - y||_{2}\quad \forall x,y \in \mathbb{R}^{n}. \end{equation*} \end{theorem} Since $\gamma(T\cup \{0\}) = \gamma(T)$ and $||X||_{\psi_{2}} \leq ||Y||_{\psi_{2}}$ whenever $|X| \leq |Y|$, it follows that the matrix deviation inequality for the $\ell_{p}$-norm is an immediate consequence of Theorem \ref{Talagrand} and Theorem \ref{v2case3}. 
\begin{theorem} [\textbf{Matrix Deviation Inequality for the $\ell_{p}$-Norm}]\label{v2deviation} Under Assumption \ref{Assumption1}, we have \begin{equation*} \norm{\sup_{x \in T}\sabs{R_{x}}\;}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})\gamma(T) \quad \text{and} \quad \norm{\sup_{x \in T}\sabs{X_{x}}\;}_{\psi_{2}} \leq C_{p} K^{4p+4} \times \text{rad}(B^{m}_{q})\gamma(T). \end{equation*} \end{theorem} As a consequence of Theorem \ref{v2deviation}, we show that any i.i.d ensemble sub-Gaussian random matrix can be regarded as an embedding from $\ell_{2}^{n}$-space to $\ell^{m}_{p}$-space such that distances do not increase by more than a factor of $D_{p}(1+\epsilon)$ and do not decrease by more than a factor of $d_{p}(1-\epsilon)$. For the problem of dimension reduction, Brinkman and Charikar \cite{intro1} and Ping Li \cite{intro2} both give a thorough overview of the results in this area. See also \cite{diamond} for the problem of distortion and \cite[Section 11.3]{V} for random projections. Let $T$ be a finite subset of $\mathbb{R}^{n}$ with $N = |T|$, $S = \{\frac{x-y}{||x-y||} : x,y \in T \text{ and } x\neq y\}$, and $\widetilde{S} = \{\frac{x-y}{||x-y||_{2}} : x,y \in T \text{ and } x\neq y\}$. Applying (\ref{norm}), (\ref{norm2}), and the fact that $\mathbb{E}[A_{1,1}^{2}] = 1$, we obtain $d_{p} ||z||_{2} \leq ||z|| \leq D_{p}||z||_{2}$ for every $z \in \mathbb{R}^{n}$, where \begin{equation*} (d_{p},D_{p}) = \begin{cases} (CK^{-3},1), & \text{ if } 1\leq p < 2\\ (1,1), & \text{ if } p = 2\\ (1,C'K), &\text{ if } 2 < p < \infty. \end{cases} \end{equation*} Thus, it follows that $\gamma(S) \leq \frac{1}{d_{p}} \gamma(\widetilde{S})$, so, by using \cite[(9.13)]{V}, we see that $\gamma(S) \leq \frac{C}{d_{p}} \sqrt{\log N}$. 
Therefore, applying Theorem \ref{v2deviation} gives \begin{align*} &\norm{\sup_{x,y \in T, \; x\neq y}\abs{\frac{1}{m^{\frac{1}{p}}}\frac{||A(x-y)||_{p}}{||x-y||} - 1}\;}_{\psi_{2}} = \frac{1}{m^{\frac{1}{p}}} \norm{\sup_{z \in S}|R_{z}|}_{\psi_{2}} \leq C_{p} K^{4p+4} \frac{\text{rad}(B^{m}_{q})}{m^{\frac{1}{p}}}\gamma(S) \leq \frac{1}{d_{p}} C_{p} K^{4p+4} \frac{1}{m^{\beta}} \sqrt{\log N}, \end{align*} where $\beta = \frac{1}{2}$ if $1\leq p \leq 2$ and $\beta = \frac{1}{p}$ if $2\leq p < \infty$, so we extend the Johnson–Lindenstrauss lemma as follows. \begin{theorem}[\textbf{Johnson–Lindenstrauss Lemma from $\ell_{2}^{n}$-space to $\ell_{p}^{m}$-space}] For $\epsilon \in (0,1)$, we have $$ \mathbb{P}\bkt{ d_{p}(1-\epsilon)||x-y||_{2} \leq \norm{\frac{1}{m^{\frac{1}{p}}}A(x-y)}_{p} \leq D_{p}(1+\epsilon)||x-y||_{2} \quad \forall x,y \in T} \geq 1-2\exp(-\frac{\epsilon^{2} m^{2\beta}}{d_{p}^{-2}C_{p}K^{8p+8}\log N}). $$ \end{theorem} \section{Proofs} \label{Main Proof} \subsection{Proof of Theorem \ref{v2main}} \label{51} To prove Theorem \ref{v2main}, it suffices to show that \begin{equation} \label{v2_main_1} \mathbb{P}\bkt{\abs{||X^{(m)}||_{p} - m^{\frac{1}{p}}} \geq s} \leq \begin{cases} 2\exp(-C_{p} \frac{s^{2}}{K^{2p}m^{\frac{2}{p}-1}}), & \text{ if } 1\leq p<2\\ 2\exp(-C_{p} \frac{s^{2}}{K^{2p}}), & \text{ if } 2\leq p<\infty\\ \end{cases} \quad \forall s>0, \end{equation} where $C_{p}$ is a positive constant depending only on $p$. Indeed, since the $\psi_{2}$-norm of a constant $c$ is $|c|/\sqrt{\ln 2}$, the triangle inequality gives \[ \norm{||X^{(m)}||_{p} - \mathbb{E}[||X^{(m)}||_{p}]}_{\psi_{2}} \leq \norm{||X^{(m)}||_{p} - m^{\frac{1}{p}}}_{\psi_{2}} + \frac{1}{\sqrt{\ln 2}}\mathbb{E}[\abs{||X^{(m)}||_{p} - m^{\frac{1}{p}}}] \leq C \norm{||X^{(m)}||_{p} - m^{\frac{1}{p}}}_{\psi_{2}}. \] As we mentioned in Section \ref{Introduction}, we only need to control the tail probability $\mathbb{P}(\frac{1}{m}\sum_{i=1}^{m} Z_{i} \geq t)$, where $Z_{i} = |X_{i}|^{p}-1$. 
To estimate this tail probability, we divide this problem into the small deviation case (\textbf{Step 1.}) and the large deviation case (\textbf{Step 2.} and \textbf{Step 3.}). $\newline$ \textbf{Step 1.} In this step, we prove (\ref{v2_main_1}) when $1\leq p < \infty$ and $s < K^{p}m^{\frac{1}{p}}$. Note that if $|z - 1| \geq \delta$ and $z\geq 0$, then $|z^{p} - 1| \geq \delta$ (since $p \geq 1$). This yields $$ \mathbb{P}\bkt{\abs{\frac{1}{m^{\frac{1}{p}}}||X^{(m)}||_{p} - 1} \geq \delta} \leq \mathbb{P}\bkt{\abs{\frac{1}{m}\sum_{i=1}^{m}(|X_{i}|^{p}-1)} \geq \delta}. $$ Applying Lemma \ref{cs} shows that \begin{equation*} \norm{|X_{i}|^{p} -1}_{\psi_{1}} \leq C\norm{|X_{i}|^{p}}_{\psi_{1}} \leq C\norm{X_{i}}_{\psi_{p}}^{p} \quad \forall 1\leq p < 2 \text{ and } \norm{|X_{i}|^{p} -1}_{\psi_{\frac{2}{p}}} \leq C\norm{|X_{i}|^{p}}_{\psi_{\frac{2}{p}}} \leq C\norm{X_{i}}_{\psi_{2}}^{p} \quad \forall 2\leq p < \infty. \end{equation*} Since $||X_{i}||_{\psi_{p}} \leq c_{p}||X_{i}||_{\psi_{2}}$ for each $1\leq p<2$, it follows that \begin{equation*} \norm{|X_{i}|^{p} -1}_{\psi_{1}}\leq C_{p} K^{p} \quad \forall 1\leq p < 2 \quad \text{and} \quad \norm{|X_{i}|^{p} -1}_{\psi_{\frac{2}{p}}} \leq C_{p}K^{p} \quad \forall 2\leq p< \infty. \end{equation*} Let $a_{i} = \frac{1}{m}$. Then setting $\alpha = 1$ and applying Theorem \ref{bernstein} for the case of $1\leq p < 2$ give \begin{align*} &\mathbb{P}\bkt{\abs{\frac{1}{m}\sum_{i=1}^{m}(|X_{i}|^{p}-1)} \geq \delta} \leq 2\exp(-C_{p}\min\{\frac{\delta^{2}}{K^{2p}},\frac{\delta}{K^{p}}\}m) = 2\exp(-C_{p}\frac{\delta^{2}m}{K^{2p}}) \quad \forall \delta \leq K^{p}. 
\end{align*} Similarly, taking $\alpha = \frac{2}{p}$ and applying Theorem \ref{bernstein} for the case of $2\leq p < \infty$ give \begin{align*} &\mathbb{P}\bkt{\abs{\frac{1}{m}\sum_{i=1}^{m}(|X_{i}|^{p}-1)} \geq \delta} \leq 2\exp(-C_{p}\min\{\frac{\delta^{2}m}{K^{2p}},\frac{\delta^{\alpha} m^{\alpha}}{K^{\alpha p}}\}) \leq 2\exp(-C_{p}\min\{\frac{\delta^{2}}{K^{2p}},\frac{\delta^{\alpha} }{K^{\alpha p}}\}m^{\alpha})\\ &\leq 2\exp(-C_{p}\frac{\delta^{2}m^{\alpha}}{K^{2p}}) \quad \forall \delta \leq K^{p}. \end{align*} Therefore, setting $s = \delta m^{\frac{1}{p}}$, we obtain (\ref{v2_main_1}) when $1\leq p < \infty$ and $s < K^{p}m^{\frac{1}{p}}$. $\newline$ \textbf{Step 2.} In this step, we prove (\ref{v2_main_1}) when $1\leq p < 2$ and $s > K\xi_{p}m^{\frac{1}{p}}$, where $\xi_{p}$ is a positive constant that depends on $p$. By using Lemma \ref{r_trangle}, we have \begin{align} \label{decomposition} &\mathbb{P}\bkt{\abs{\frac{1}{m^{\frac{1}{p}}}||X^{(m)}||_{p} - 1} \geq \delta} \leq \mathbb{P}\bkt{\abs{\frac{1}{m} \sum_{i=1}^{m} (|X_{i}|^{p} - 1)} \geq \delta^{p}} \leq \mathbb{P}\bkt{\frac{1}{m} \sum_{i=1}^{m} (|X_{i}|^{p} - 1) \geq \delta^{p}}+\mathbb{P}\bkt{\frac{1}{m} \sum_{i=1}^{m} (1-|X_{i}|^{p}) \geq \delta^{p}}. \end{align} Write $Y_{i} = |X_{i}|^{p}-1$ and $\alpha = \frac{2}{p}$. We have $||Y_{i}||_{\psi_{\alpha}} \leq C ||X_{i}||_{\psi_{2}}^{p} \leq C K^{p}$ by using Lemma \ref{cs}. Write $t = m\delta^{p}$. Applying Lemma \ref{MGF} gives \begin{align*} &\mathbb{P}\bkt{\frac{1}{m} \sum_{i=1}^{m} (|X_{i}|^{p} - 1) \geq \delta^{p}} \leq \exp(-\lambda t) \prod_{i=1}^{m} \mathbb{E}[\exp(\lambda Y_{i})] \leq \exp(-\lambda t + m\frac{1}{\alpha'} 2^{\alpha'} K^{p\alpha'} \theta_{p}^{\alpha'} \lambda^{\alpha'}) \quad \forall \lambda \geq \frac{2}{K^{p} \theta_{p}}, \end{align*} where $\theta_{p} \equiv c_{p} C_{7,\alpha}$ depends only on $p$ (recall that here $1\leq p < 2$ and $\alpha = \frac{2}{p}$). 
Note that \begin{align*} &(\frac{t}{m K^{p\alpha'} \theta_{p}^{\alpha'} 2^{\alpha'}})^{\frac{1}{\alpha'-1}} \geq \frac{2}{K^{p} \theta_{p}} \iff \frac{\delta^{p}}{K^{\frac{2p}{2-p}} \theta_{p}^{\frac{2}{2-p}}2^{\frac{2}{2-p}}} \geq \frac{2^{\frac{p}{2-p}}}{K^{\frac{p^{2}}{2-p}}\theta_{p}^{\frac{p}{2-p}}} \iff \delta^{p} \geq K^{p} 2^{\frac{2+p}{2-p}} \theta_{p} \iff \delta \geq K \xi_{p}, \end{align*} where $\xi_{p} \equiv 2^{\frac{2+p}{p(2-p)}} \theta_{p}^{\frac{1}{p}}$. Setting $\delta \geq K \xi_{p}$ and $\lambda \equiv (\frac{t}{m K^{p\alpha'} \theta_{p}^{\alpha'} 2^{\alpha'}})^{\frac{1}{\alpha'-1}} \geq \frac{2}{K^{p} \theta_{p}}$ yields \begin{align*} &\mathbb{P}\bkt{\frac{1}{m} \sum_{i=1}^{m} (|X_{i}|^{p} - 1) \geq \delta^{p}} \leq \exp\bkt{-\bkt{\frac{t}{mK^{p\alpha'} \theta_{p}^{\alpha'} 2^{\alpha'}}}^{\frac{1}{\alpha'-1}}t+m\frac{1}{\alpha'}K^{p\alpha'} 2^{\alpha'} \theta_{p}^{\alpha'} \bkt{\frac{t}{mK^{p\alpha'} \theta_{p}^{\alpha'} 2^{\alpha'}}}^{\frac{\alpha'}{\alpha'-1}}}\\ &= \exp\bkt{t^{\alpha} m^{1-\alpha} \bkt{\frac{-1}{K^{p\alpha} 2^{\alpha} \theta_{p}^{\alpha}}+\frac{K^{p\alpha'}2^{\alpha'}\theta_{p}^{\alpha'}}{\alpha' K^{p\alpha \alpha'}2^{\alpha \alpha'}\theta_{p}^{\alpha \alpha'}}}} = \exp\bkt{t^{\alpha} m^{1-\alpha} \frac{1}{K^{2}2^{\alpha} \theta_{p}^{\alpha}} \bkt{-1+\frac{1}{\alpha'}}}\\ &= \exp\bkt{t^{\alpha} m^{1-\alpha} \frac{-1}{K^{2} 2^{\alpha} \theta_{p}^{\alpha}\alpha}} = \exp\bkt{-C_{p} \frac{\delta^{2} m}{K^{2}}}, \end{align*} where $C_{p} = \frac{1}{2^{\alpha} \theta_{p}^{\alpha}\alpha}$. Similarly, we have $$ \mathbb{P}\bkt{\frac{1}{m} \sum_{i=1}^{m} (1-|X_{i}|^{p}) \geq \delta^{p}} \leq \exp\bkt{-C_{p} \frac{\delta^{2} m}{K^{2}}}. 
$$ Thus, since $K\geq 1$, it follows that $$ \mathbb{P}\bkt{\abs{\frac{1}{m^{\frac{1}{p}}}||X^{(m)}||_{p} - 1} \geq \delta} \leq 2\exp(-C_{p} \frac{\delta^{2}m}{K^{2}}) \leq 2\exp(-C_{p} \frac{\delta^{2}m}{K^{2p}}) \quad \forall \delta \geq \xi_{p} K, $$ so we complete the proof of (\ref{v2_main_1}) when $1\leq p < 2$ and $s > K\xi_{p}m^{\frac{1}{p}}$. $\newline$ \textbf{Step 3.} In this step, we prove (\ref{v2_main_1}) when $2 \leq p < \infty$ and $s > m^{\frac{1}{p}}K^{p}$. Decompose $\mathbb{P}(\sabs{\frac{1}{m^{\frac{1}{p}}}||X^{(m)}||_{p} - 1} \geq \delta)$ as in (\ref{decomposition}), and define $Y_{i}$ and $\alpha$ as in \textbf{Step 2}. Note that $||Y_{i}||_{\psi_{\alpha}} \leq CK^{p}$ and $0 < \alpha \leq 1$. Setting $a_{i} = \frac{1}{m}$ and applying Theorem \ref{bernstein}, we get \begin{equation*} \mathbb{P}(\frac{1}{m} \sum_{i=1}^{m} Y_{i} \geq \delta^{p}) \vee \mathbb{P}(\frac{1}{m} \sum_{i=1}^{m} (-Y_{i}) \geq \delta^{p}) \leq 2\exp(-\frac{m^{\alpha} \delta^{2}}{C^{\alpha}K^{2}}) \leq2\exp(-\frac{m^{\alpha} \delta^{2}}{C^{\alpha}K^{2p}}) \quad\forall \delta\geq K^{p}, \end{equation*} so it follows that \begin{equation*} \mathbb{P}\bkt{\abs{||X^{(m)}||_{p} - m^{\frac{1}{p}}} \geq \delta m^{\frac{1}{p}}} \leq 4\exp(-\frac{m^{\alpha} \delta^{2}}{C^{\alpha}K^{2p}}) \quad\forall \delta\geq K^{p}. \end{equation*} Adjusting the coefficients above yields (\ref{v2_main_1}) when $2 \leq p < \infty$ and $s > m^{\frac{1}{p}}K^{p}$. 
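For completeness, we note that, for $1\leq p < 2$, the intermediate range $K^{p}m^{\frac{1}{p}} \leq s \leq K\xi_{p}m^{\frac{1}{p}}$ (when it is nonempty) follows from \textbf{Step 1} after enlarging the constant $C_{p}$. Indeed, since $K \geq 1$, we have $s \leq \xi_{p}K^{p}m^{\frac{1}{p}}$ and hence $\frac{s^{2}}{K^{2p}m^{\frac{2}{p}-1}} \leq \xi_{p}^{2}m$, so, by the monotonicity of the tail probability and \textbf{Step 1},
\begin{equation*}
\mathbb{P}\bkt{\abs{||X^{(m)}||_{p} - m^{\frac{1}{p}}} \geq s} \leq 2\exp(-C_{p}m) \leq 2\exp(-\frac{C_{p}}{\xi_{p}^{2}} \frac{s^{2}}{K^{2p}m^{\frac{2}{p}-1}}),
\end{equation*}
which is of the required form. The boundary case $s = K^{p}m^{\frac{1}{p}}$ for $2\leq p < \infty$ is handled in the same way.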
\subsection{Proof of Theorem \ref{v2case2}} \label{52} In this subsection, we show that \begin{align} \label{proof_case2} &\mathbb{P}\bkt{\abs{\frac{||Ax||_{p} - ||Ay||_{p}}{||x-y||}} \geq s} \leq \begin{cases} 2\exp(-\frac{s^{2} }{C_{7,p}^{2}K^{8p} \text{rad}(B_{q}^{m})^{2}}) + 2\exp(-\frac{s^{2}}{C_{8,p}^{2}K^{8p} \text{rad}(B_{q}^{m})^{2}}), & \mbox{if } s < 2m^{\frac{1}{p}} \\ 2\exp(-\frac{s^{2}}{C_{9,p}^{2}K^{8p} \text{rad}(B_{q}^{m})^{2}}), & \mbox{if } s \geq 2m^{\frac{1}{p}} \end{cases} \end{align} for every $1\leq p < \infty$, where the $C_{i,p}$ are positive constants depending only on $p$. Note that the large deviation case of (\ref{proof_case2}) is an immediate consequence of the first inequality of (\ref{v2case1_case1}) and (\ref{v2case1_case2}). Indeed, setting $u = \frac{x-y}{||x-y||}$ gives \begin{align*} &\mathbb{P}(\abs{\frac{||Ax||_{p} - ||Ay||_{p}}{||x-y||}}\geq s) \leq \mathbb{P}(||Au||_{p}\geq s) \nonumber = \mathbb{P}(||Au||_{p} - m^{\frac{1}{p}}\geq s-m^{\frac{1}{p}}) \nonumber \leq \mathbb{P}(\abs{||Au||_{p} - m^{\frac{1}{p}}} \geq \frac{s}{2})\\ &\leq 2\exp(-\frac{s^{2}}{C_{9,p}^{2}K^{8p} \text{rad}(B_{q}^{m})^{2}}). \end{align*} \newline Therefore, it remains to prove the small deviation case of (\ref{proof_case2}). Applying Lemma \ref{abp} gives \begin{align*} &\mathbb{P}\bkt{\abs{\frac{||Ax||_{p} - ||Ay||_{p}}{||x-y||}} \geq s} \leq \mathbb{P}\bkt{\abs{\frac{||Ax||_{p}^{p} - ||Ay||_{p}^{p}}{||x-y||}} \geq s||Ax||_{p}^{p-1}} \leq \mathbb{P}\bkt{\abs{\frac{||Ax||_{p}^{p} - ||Ay||_{p}^{p}}{||x-y||}} \geq s||Ax||_{p}^{p-1},||Ax||_{p} \geq \frac{m^{\frac{1}{p}}}{2}} \\ &+ \mathbb{P}\bkt{||Ax||_{p} < \frac{m^{\frac{1}{p}}}{2}} \leq \mathbb{P}\bkt{\abs{\frac{||Ax||_{p}^{p} - ||Ay||_{p}^{p}}{||x-y||}} \geq \frac{sm^{\frac{1}{q}}}{2^{p-1}}} + \mathbb{P}\bkt{||Ax||_{p} < \frac{m^{\frac{1}{p}}}{2}} \equiv \mathscr{A}_{1} + \mathscr{A}_{2}. 
\end{align*} Since $s < 2m^{\frac{1}{p}}$ and $||x|| = 1$, it follows that \begin{equation*} \mathscr{A}_{2} \leq \mathbb{P}\bkt{\abs{||Ax||_{p} -m^{\frac{1}{p}}||x||} \geq \frac{m^{\frac{1}{p}}}{2}} \leq \mathbb{P}\bkt{\abs{||Ax||_{p} -m^{\frac{1}{p}}||x||} \geq\frac{s}{4}}, \end{equation*} so, applying the first inequality of (\ref{v2case1_case1}) and (\ref{v2case1_case2}) yields \begin{equation*} \mathbb{P}\bkt{\abs{||Ax||_{p} -m^{\frac{1}{p}}||x||} \geq\frac{s}{4}} \leq 2\exp(-\frac{s^{2}}{C_{8,p}^{2} K^{8p} \text{rad}(B_{q}^{m})^{2}}). \end{equation*} To estimate $\mathscr{A}_{1}$, we write $\mathscr{A}_{1}$ as \begin{equation*} \mathscr{A}_{1} = \mathbb{P}(\sabs{\frac{1}{m}\sum_{i=1}^{m} \frac{|A_{i}x|^{p} - |A_{i}y|^{p}}{||x-y||}} \geq \delta), \quad \text{where} \quad \delta \equiv \frac{s}{2^{p-1}m^{\frac{1}{p}}}, \end{equation*} so it suffices to show that \begin{equation} \label{Final} \norm{\frac{|A_{i}x|^{p} - |A_{i}y|^{p}}{||x-y||}}_{\psi_{\frac{2}{p}}} \leq C_{p}K^{4p}. \end{equation} Indeed, since $K\geq 1$, it follows that $\delta = \frac{s}{2^{p-1}m^{\frac{1}{p}}} \leq 2^{2-p} \leq 2K^{4p}$, so, by the fact that $||x|| = ||y||$, applying Theorem \ref{bernstein} with $\alpha = 1$ if $1\leq p < 2$ and with $\alpha = \frac{2}{p}$ if $2\leq p < \infty$ yields \begin{align*} &\mathscr{A}_{1} \leq 2\exp(-C_{p} \min\{\frac{\delta^{2}}{K^{8p}},\frac{\delta}{K^{4p}}\}m ) \leq 2\exp(-C_{p}\min\{\frac{\delta^{2}}{4K^{8p}},\frac{\delta}{2K^{4p}}\}m ) = 2\exp(-C_{p}\frac{\delta^{2}m}{4K^{8p}} )\\ &= 2\exp(-\frac{s^{2} m}{C_{7,p}^{2}K^{8p} m^{\frac{2}{p}}}) = 2\exp(-\frac{s^{2} }{C_{7,p}^{2}K^{8p} \text{rad}(B_{q}^{m})^{2}}) \quad \forall 1\leq p < 2 \end{align*} and \begin{align*} &\mathscr{A}_{1} \leq 2\exp(-C_{p} \min\{\frac{\delta^{2}}{K^{4p \times 2}},\frac{\delta^{\alpha}}{K^{4p \times\alpha}}\}m^{\alpha} ) \leq 2\exp(-C_{p}\min\{\frac{\delta^{2}}{4^{2-\alpha}K^{4p \times 2}},\frac{\delta^{\alpha}}{2^{2-\alpha}K^{4p \times \alpha}}\}m^{\alpha} ) = 2\exp(-C_{p}\frac{\delta^{2}m^{\alpha}}{4^{2-\alpha}K^{8p}} )\\ &= 2\exp(-\frac{s^{2} }{C_{7,p}^{2}K^{8p} }) \quad \forall 2\leq p < \infty. \end{align*} To prove (\ref{Final}), applying Lemma \ref{abp} gives \begin{equation*} \norm{|A_{i}x|^{p} - |A_{i}y|^{p}}_{\psi_{\frac{2}{p}}} \leq p \norm{(|A_{i}x| - |A_{i}y|) \sqrt{|A_{i}x|^{2p-2}+|A_{i}y|^{2p-2}}}_{\psi_{\frac{2}{p}}}. \end{equation*} For $1 < p < \infty$, since $||XY||_{\psi_{\frac{2}{p}}} \leq ||X||_{\psi_{\frac{2r}{p}}} ||Y||_{\psi_{\frac{2s}{p}}}$ for every $r,s > 1$ such that $\frac{1}{r} + \frac{1}{s} = 1$, it follows that setting $r = p$ and $s = q = \frac{p}{p-1}$ gives \begin{align*} &\norm{(|A_{i}x| - |A_{i}y|) \sqrt{|A_{i}x|^{2p-2}+|A_{i}y|^{2p-2}}}_{\psi_{\frac{2}{p}}} \leq \norm{|A_{i}x| - |A_{i}y|}_{\psi_{2}} \norm{ \sqrt{|A_{i}x|^{2p-2}+|A_{i}y|^{2p-2}}}_{\psi_{\frac{2}{p-1}}}. \end{align*} If $1\leq p < \infty$, applying Theorem \ref{sums} gives \begin{equation*} \norm{|A_{i}x| - |A_{i}y|}_{\psi_{2}} \leq \norm{A_{i}(x-y)}_{\psi_{2}} \leq CK||x-y||_{2}. \end{equation*} If $1 < p < \infty$, by using Lemma \ref{cs} and Theorem \ref{sums}, we have \begin{equation} \label{tmp} \norm{ \sqrt{|A_{i}x|^{2p-2}+|A_{i}y|^{2p-2}}}_{\psi_{\frac{2}{p-1}}} \leq \bkt{C_{p}K^{2p-2}||x||_{2}^{2p-2}+C_{p}K^{2p-2}||y||_{2}^{2p-2}}^{\frac{1}{2}}. \end{equation} Thus, applying (\ref{norm}) and (\ref{norm2}), we see that \begin{equation*} (\ref{tmp}) \leq \bkt{C_{p}K^{8p-8}||x||^{2p-2}+C_{p}K^{8p-8}||y||^{2p-2}}^{\frac{1}{2}} = \bkt{C_{p}K^{8p-8}+C_{p}K^{8p-8}}^{\frac{1}{2}} = C_{p}K^{4p-4} \quad \forall 1\leq p < 2 \end{equation*} and \begin{equation*} (\ref{tmp}) \leq \bkt{C_{p}K^{2p-2}||x||^{2p-2}+C_{p}K^{2p-2}||y||^{2p-2}}^{\frac{1}{2}} = \bkt{C_{p}K^{2p-2}+C_{p}K^{2p-2}}^{\frac{1}{2}} \leq C_{p}K^{4p-4} \quad \forall 2\leq p < \infty, \end{equation*} which yields (\ref{Final}). Thus, we have proved the small deviation case of (\ref{proof_case2}). 
Therefore, combining the estimates from both cases, we complete the proof of (\ref{proof_case2}). \subsection{Proof of Theorem \ref{v2case3}} \label{53} In this subsection, we prove the general sub-Gaussian increment property. Since the arguments for $R_{x}$ and $X_{x}$ are the same, we only consider the case of $R_{x}$. Without loss of generality, we may suppose that $||x|| = 1$ and $||y|| > 1$. Indeed, if $||x|| \leq ||y||$, $\overline{x} \equiv \frac{x}{||x||}$, and $\overline{y} \equiv \frac{y}{||x||}$, by (\ref{norm}) and (\ref{norm2}), then \begin{align*} &\norm{R_{x} - R_{y}}_{\psi_{2}} = ||x|| \norm{R_{\overline{x}} - R_{\overline{y}}}_{\psi_{2}} \leq ||x|| C_{p} K^{4p+4} \text{rad}(B_{q}^{m})||\overline{x} - \overline{y}||_{2} \leq C_{p} K^{4p+4} \text{rad}(B_{q}^{m}) ||x-y||_{2}. \end{align*} Assume that $||x|| = 1$ and $||y|| > 1$, and write $\overline{y} \equiv \frac{y}{||y||}$. As in the estimates above Theorem \ref{v2case3}, we get $\norm{R_{x} - R_{y}}_{\psi_{2}}\leq C_{p} K^{4p+1} \text{rad}(B_{q}^{m}) (||x-\overline{y}||_{2} + ||\overline{y} - y||_{2})$. Hence, it remains to show the following reverse-triangle-type inequality: \begin{equation} \label{norm_triangle} ||x-\overline{y}||_{2} + ||y - \overline{y}||_{2} \leq C K^{3}||x-y||_{2} \quad \text{for all } ||x|| = 1 \text{ and }||y|| > 1. \end{equation} Let $\theta$ be the angle between $x-\overline{y}$ and $y - \overline{y}$ such that $0\leq \theta \leq \pi$, i.e., $\cos \theta = \frac{\langle x-\overline{y},y-\overline{y} \rangle}{||x-\overline{y}||_{2} ||y - \overline{y}||_{2}}$. Consider first $\frac{\pi}{2} \leq \theta \leq \pi$.
Then $\cos\theta \leq 0$, and, by the law of cosines, we have \begin{align*} &(||x-\overline{y}||_{2} + ||\overline{y}-y||_{2})^{2} =||x-\overline{y}||_{2}^{2} + ||\overline{y}-y||_{2}^{2} + 2||\overline{y}-y||_{2}||x-\overline{y}||_{2} \leq 2(||x-\overline{y}||_{2}^{2} + ||\overline{y}-y||_{2}^{2}) \\ &\leq 2(||x-\overline{y}||_{2}^{2} + ||\overline{y}-y||_{2}^{2} -2\cos(\theta) ||x-\overline{y}||_{2}||\overline{y}-y||_{2}) = 2||x-y||_{2}^{2}, \end{align*} so it follows that $||x-\overline{y}||_{2} + ||\overline{y} - y||_{2} \leq \sqrt{2} ||x-y||_{2}$. In addition, if $\theta = 0$, then $\overline{y} = x$ and so there is nothing to prove. Now, we consider the case of $0 < \theta \leq \frac{\pi}{2}$. Let $P$ be the orthogonal projection of $x$ onto the line through $\overline{y}$ and $y$. Then there are two possible positions for $y$ (as shown in Figure \ref{fig:M1}). \newline \textbf{Case 1:} Consider $y = y_{1}$ as in Figure \ref{fig:M1}. Then we have $||x-y||_{2}\sin \theta' = ||x-P||_{2} = ||P-\overline{y}||_{2}\tan \theta \geq ||y-\overline{y}||_{2} \tan \theta$ and $||x-\overline{y}||_{2} \sin \theta = ||x-y||_{2} \sin \theta'$. Therefore, we get $$||\overline{y} - y||_{2} + ||x-\overline{y}||_{2} \leq \frac{\sin \theta' \cos \theta}{\sin \theta} ||x-y||_{2} + \frac{\sin \theta'}{\sin \theta} ||x-y||_{2} \leq \frac{2}{\sin \theta} ||x-y||_{2}.$$ \newline \textbf{Case 2:} Consider $y = y_{2}$ as in Figure \ref{fig:M1}. Then we obtain $||x-y||_{2}\sin \theta' = ||x-P||_{2} = ||P-\overline{y}||_{2}\tan \theta$, $||P-y||_{2} = ||x-y||_{2} \cos \theta'$, and $||x-\overline{y}||_{2} \sin \theta = ||x-y||_{2} \sin \theta'$. Thus, we see that \begin{align*} &||\overline{y} - y||_{2} + ||x-\overline{y}||_{2} = ||\overline{y} - P||_{2} + ||P-y||_{2} + ||x-\overline{y}||_{2} = \frac{\sin \theta' \cos \theta}{\sin \theta} ||x-y||_{2} + \cos \theta' ||x-y||_{2} + \frac{\sin \theta'}{\sin \theta} ||x-y||_{2} \\ &\leq \frac{3}{\sin \theta} ||x-y||_{2}.
\end{align*} \begin{figure} \centering \begin{tikzpicture} \draw (6,0) coordinate (p) node[right] {P} -- (5,0) coordinate (y) node[below] {$y_{1}$} -- (2,0) coordinate (z) node[below] {$\overline{y}$} -- (0,0) coordinate (b) node[left] {O} -- (6,4) coordinate (x) node[above right] {x} pic["$\theta$", draw=orange, <->, angle eccentricity=1.2, angle radius=1cm] {angle=y--z--x} pic["$\theta'$", draw=orange, <->, angle eccentricity=1.2, angle radius=0.7cm] {angle=p--y--x}; \draw (2,0) coordinate -- (6,4) coordinate; \draw (5,0) coordinate -- (6,4) coordinate; \draw (6,0) coordinate -- (6,4) coordinate; \draw (15,0) coordinate (p') node[below] {P} -- (17,0) coordinate (y') node[below] {$y_{2}$} -- (11,0) coordinate (z') node[below] {$\overline{y}$} -- (9,0) coordinate (b') node[left] {O} -- (15,4) coordinate (x') node[above right] {x} pic["$\theta$", draw=orange, <->, angle eccentricity=1.2, angle radius=1cm] {angle=y'--z'--x'} pic["$\theta'$", draw=orange, <->, angle eccentricity=1.2, angle radius=0.7cm] {angle=x'--y'--p'}; \draw (11,0) coordinate -- (15,4) coordinate; \draw (17,0) coordinate -- (15,4) coordinate; \draw (15,0) coordinate -- (15,4) coordinate; \end{tikzpicture} \caption{Left: $y = y_{1}$. Right: $y = y_{2}$.} \label{fig:M1} \end{figure} Therefore we have \begin{equation} \label{sin0} ||x-\overline{y}||_{2} + ||\overline{y} - y||_{2} \leq \frac{3}{\sin \theta} ||x-y||_{2}. \end{equation} Recall that $B = \{z \in \mathbb{R}^{n} : ||z|| \leq 1\}$ and $\partial B = \{z\in \mathbb{R}^{n} : ||z|| = 1\}$. Thus, it suffices to show that \begin{equation} \label{sin} \sin \theta \geq \frac{1}{CK^{3}} \quad \forall 1\leq p < \infty, \quad ||x|| = 1, \quad ||y|| > 1 \text{ such that } 0< \theta \leq \frac{\pi}{2}. \end{equation} Define $B_{2}(a,r) = \{z \in \mathbb{R}^{n} : ||z-a||_{2} \leq r\}$. 
By (\ref{norm}) and (\ref{norm2}), we get $||z|| \leq \frac{1}{R_{p}} ||z||_{2}$ for every $z \in \mathbb{R}^{n}$, where $R_{p} = 1$ if $1\leq p < 2$ and $R_{p} = \frac{1}{C' K}$ if $2\leq p< \infty$. Thus, it follows that $B_{2}(0,R_{p}) \subseteq B$, $||x||_{2} \geq R_{p}$, and $||\overline{y}||_{2} > R_{p}$. Hence, there exists a unique $w \in \partial B_{2}(0,R_{p})$ such that $\overline{Ow} \; \bot \; \overline{w \overline{y}}$. Let $\theta'$ be the angle between $\overline{x' \overline{y}}$ and $\overline{\overline{y} y}$ as in Figure \ref{fig:M4}. Note that $0 < \theta' \leq \theta$. Indeed, if $\theta < \theta'$ as in Figure \ref{fig:M5}, then, by the fact that $B$ is convex and $w \in B$, there exists $z \in B$ such that $z = r \overline{y}$ for some $r > 1$, which contradicts $||z||\leq 1$. Thus, since $||\overline{y}|| = 1$, (\ref{norm}) and (\ref{norm2}) imply $$ \sin \theta \geq \sin \theta' = \frac{\overline{Ow}}{\overline{O\overline{y}}} = \frac{R_{p}}{||\overline{y}||_{2}} \geq \frac{1}{CK^{3}}. $$ Similarly, if $n\geq 3$, we consider the two-dimensional subspace spanned by $x$ and $\overline{y}$, so (\ref{sin}) still holds. Thus, using (\ref{sin0}) and (\ref{sin}), we complete the proof of (\ref{norm_triangle}), so we obtain the general sub-Gaussian increment property of $R_{x}$.
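The geometric bound \eqref{norm_triangle} can be sanity-checked numerically. The following script is our own illustration (not part of the proof): it tests the special case in which $||\cdot||$ is itself the Euclidean norm, where a direct estimate gives the constant $3$, since $||y-\overline{y}||_{2} = ||y||_{2} - 1 \leq ||x-y||_{2}$ and $||x-\overline{y}||_{2} \leq ||x-y||_{2} + ||y-\overline{y}||_{2} \leq 2||x-y||_{2}$.

```python
import math
import random

def dist(u, v):
    """Euclidean distance between two vectors given as lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def check_reverse_triangle(n=5, trials=2000, seed=0):
    """Randomized check of the reverse-triangle-type bound in the special
    case ||.|| = ||.||_2, so that ybar = y/||y||_2 and the constant can be
    taken to be 3.  Returns the worst observed ratio
    (||x - ybar|| + ||y - ybar||) / ||x - y||."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        # x uniform on the unit sphere, y outside the unit ball
        x = [rng.gauss(0, 1) for _ in range(n)]
        nx = dist(x, [0.0] * n)
        x = [t / nx for t in x]                   # ||x||_2 = 1
        y = [rng.gauss(0, 1) for _ in range(n)]
        ny = dist(y, [0.0] * n)
        scale = 1.0 + 3.0 * rng.random()          # ||y||_2 in [1, 4)
        y = [t * scale / ny for t in y]
        ybar = [t / scale for t in y]             # y / ||y||_2
        ratio = (dist(x, ybar) + dist(y, ybar)) / dist(x, y)
        worst = max(worst, ratio)
    return worst

print(check_reverse_triangle())  # empirically bounded by 3
```

The general case requires the extra factor $CK^{3}$ because the unit ball of $||\cdot||$ is only comparable to a Euclidean ball up to constants depending on $K$.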
\begin{figure} \centering \begin{tikzpicture} [scale = 0.7] \draw (1.8,-2.4) coordinate (w) node[below] {w} -- (1.8,0) coordinate; \draw (0,0) coordinate -- (1.8,-2.4) coordinate -- (9,3) coordinate (x') node[below] {x'}; \draw (5,0) coordinate -- (8,4) coordinate (x) node[below] {x}; \draw (0,0) coordinate (O) node[below] {O} -- (5,0) coordinate (z) node[below] {$\overline{y}$} -- (10,0) coordinate (y) node[below] {y} pic["$\theta$", draw=orange, <->, angle eccentricity=1.2, angle radius=1.3cm] {angle=y--z--x} pic["$\theta'$", draw=orange, <->, angle eccentricity=1.2, angle radius=1cm] {angle=y--z--x'}; \draw (0,0) circle [radius=3]; \draw [dashed] (5,0) -- (2,-4); \end{tikzpicture} \caption{$\theta \geq \theta'$} \label{fig:M4} \end{figure} \begin{figure} \centering \begin{tikzpicture} [scale = 0.7] \draw (1.8,-2.4) coordinate (w) node[below] {w} -- (1.8,0) coordinate; \draw (0,0) coordinate -- (1.8,-2.4) coordinate -- (9,3) coordinate (x') node[below] {x'}; \draw (5,0) coordinate -- (9,1) coordinate (x'') node[below] {x}; \draw (0,0) coordinate (O) node[below] {O} -- (5,0) coordinate (z) node[below] {$\overline{y}$} -- (10,0) coordinate (y) node[below] {y} pic["$\theta'$", draw=orange, <->, angle eccentricity=1.2, angle radius=0.7cm] {angle=y--z--x'} pic["$\theta$", draw=orange, <->, angle eccentricity=1.2, angle radius=1.2cm] {angle=y--z--x''}; \draw (0,0) circle [radius=3]; \draw [dashed] (5,0) -- (-3,-2); \draw [dashed] (9,1) -- (6.88,0) coordinate (z) node[below] {z} -- (1.8,-2.4); \end{tikzpicture} \caption{$\theta' \geq \theta$} \label{fig:M5} \end{figure} \clearpage \begin{appendix} \section{Elementary Inequalities} \begin{lemma} \label{r_trangle} Let $a,b \geq 0$ and $r > 0$. Then \begin{equation*} \begin{cases} (a+b)^{r} \leq a^{r} + b^{r} \quad\text{and}\quad |a^{r} - b^{r}| \leq |a-b|^{r}, & \text{if } 0 < r\leq 1\\ (a+b)^{r} \geq a^{r} + b^{r} \quad\text{and}\quad |a^{r} - b^{r}| \geq |a-b|^{r}, &\text{if } 1\leq r< \infty. 
\end{cases} \end{equation*} \end{lemma} \begin{proof} Fix $r>0$ and set $f(t) = 1+t^{r} - (1+t)^{r}$ for $t\geq 0$. Then $f(0) = 0$ and $f'(t) = \frac{r}{t^{1-r}} - \frac{r}{(1+t)^{1-r}}$. Hence, it follows that $f'(t) \geq 0$ if $0<r<1$ and $f'(t) \leq 0$ if $1\leq r < \infty$, so we get $(1+t)^{r} \leq 1+t^{r}$ if $0 < r \leq 1$, and $(1+t)^{r} \geq 1+t^{r}$ if $1\leq r<\infty$. Equivalently, we get $(a+b)^{r} \leq a^{r} + b^{r}$ if $0<r\leq 1$ and $(a+b)^{r} \geq a^{r} + b^{r}$ if $1\leq r<\infty$. The inequalities for $|a^{r} - b^{r}|$ then follow: assuming $a \geq b$ without loss of generality, if $0 < r \leq 1$, then $a^{r} = ((a-b)+b)^{r} \leq (a-b)^{r} + b^{r}$, so $a^{r} - b^{r} \leq (a-b)^{r}$; the case $1\leq r < \infty$ is analogous. \end{proof} \begin{lemma} \label{abp} Let $1\leq p < \infty$. Then \begin{equation*} a^{p-1} |a-b| \leq |a^{p} - b^{p}| \leq p|a-b| \sqrt{a^{2p-2}+b^{2p-2}} \quad \forall a,b\geq 0. \end{equation*} \begin{proof} For the first inequality, if $a \geq b$, then $a^{p} - b^{p} = a^{p-1} (a - (\frac{b}{a})^{p-1} b) \geq a^{p-1}(a-b)$; otherwise, $b^{p} - a^{p} = a^{p-1}(b(\frac{b}{a})^{p-1} - a) \geq a^{p-1}(b-a)$. For the second inequality, setting $f(z) \equiv pz^{p-1}(z-1) - (z^{p} - 1)$ for every $z\geq 1$, we obtain $f(1) = 0$ and $f'(z) = pz^{p-1} + p(p-1)z^{p-2}(z-1) - pz^{p-1} = p(p-1)z^{p-2}(z-1) \geq 0$, so it follows that $z^{p}-1 \leq pz^{p-1}(z-1) \leq p(z-1)\sqrt{z^{2p-2} +1}$. \end{proof} \end{lemma} \section{Proof of Theorem \ref{intro1}}\label{A1} \textbf{(1) $\Rightarrow$ (2):} Let $K_{2} \equiv K_{1}$. Then, applying Markov's inequality yields \begin{equation*} \mathbb{P}(|X| \geq t) = \mathbb{P}((\frac{|X|}{K_{2}})^{\alpha} \geq (\frac{t}{K_{2}})^{\alpha}) = \mathbb{P}(\exp((\frac{|X|}{K_{2}})^{\alpha}) \geq \exp((\frac{t}{K_{2}})^{\alpha})) \leq 2\exp(-\frac{t^{\alpha}}{K_{2}^{\alpha}}) \quad \forall t\geq 0. \end{equation*} $\newline$ \textbf{(2) $\Rightarrow$ (3):} Note that $\Gamma(1+x) \leq (\frac{x}{c})^{x}$ for each $ x\geq 1$, where $c$ is a positive constant.
Thus, if $\mathbb{P}(|X| \geq t) \leq 2\exp(-t^{\alpha})$ for every $t>0$, then \begin{align*} &||X||_{L^{p}} = \bkt{\int_{0}^{\infty} pt^{p-1} \mathbb{P}(|X| > t) dt }^{\frac{1}{p}} \leq \bkt{2\int_{0}^{\infty} pt^{p-1} \exp(-t^{\alpha}) dt}^{\frac{1}{p}} = \bkt{\frac{2p}{\alpha} \int_{0}^{\infty} s^{\frac{p}{\alpha} - 1} \exp(-s) ds}^{\frac{1}{p}} \\ &= (\frac{2p}{\alpha} \Gamma(\frac{p}{\alpha}))^{\frac{1}{p}} = (2\Gamma(1+\frac{p}{\alpha}))^{\frac{1}{p}} \leq (2 (\frac{p}{c\alpha})^{\frac{p}{\alpha}})^{\frac{1}{p}} \leq C_{2,\alpha} p^{\frac{1}{\alpha}}, \end{align*} where $C_{2,\alpha} \equiv (\frac{2}{c\alpha})^{\frac{1}{\alpha}}$. Therefore, if $\mathbb{P}(|X|\geq t) \leq 2\exp(-\frac{t^{\alpha}}{K_{2}^{\alpha}})$ for every $t>0$, then $||X||_{L^{p}} = K_{2}||\frac{X}{K_{2}}||_{L^{p}} \leq K_{2}C_{2,\alpha} p^{\frac{1}{\alpha}} = K_{3}p^{\frac{1}{\alpha}}$ for every $p\geq \alpha$, where $K_{3} \equiv K_{2}C_{2,\alpha}$. $\newline$ \textbf{(3) $\Rightarrow$ (4):} Assume that $||X||_{L^{p}} \leq p^{\frac{1}{\alpha}}$ for every $p\geq \alpha$. Setting $C_{3,\alpha} \equiv (2e\alpha)^{\frac{1}{\alpha}}$ and $0\leq \lambda < \frac{1}{C_{3,\alpha}} = \frac{1}{(2e\alpha)^{\frac{1}{\alpha}}}$, we get $0 \leq \lambda^{\alpha} e \alpha < \frac{1}{2}$, so, applying Stirling's approximation $(\frac{n}{e})^{n} \leq n!$ gives \begin{align*} &\mathbb{E}[\exp(\lambda^{\alpha} |X|^{\alpha})] = 1+\sum_{n=1}^{\infty} \frac{\lambda^{n\alpha} \mathbb{E}[|X|^{n\alpha}]}{n!}\leq 1+\sum_{n=1}^{\infty} \frac{\lambda^{\alpha n} (n\alpha)^{\frac{n\alpha}{\alpha}}}{n!} \leq 1+ \sum_{n=1}^{\infty} \frac{\lambda^{n\alpha} (n\alpha)^{n} e^{n}}{n^{n}} = \sum_{n=0}^{\infty} (\lambda^{\alpha} e\alpha)^{n} = \frac{1}{1-\lambda^{\alpha}e\alpha}. \end{align*} Since $\frac{1}{1-x} \leq \exp(2x)$ for every $x \in [0,\frac{1}{2}]$, it follows that $\mathbb{E}[\exp(\lambda^{\alpha} |X|^{\alpha})] \leq \exp(2\alpha e\lambda^{\alpha}) = \exp(C_{3,\alpha}^{\alpha}\lambda^{\alpha})$.
Therefore, if $||X||_{L^{p}} \leq K_{3}p^{\frac{1}{\alpha}}$ and $0 \leq \lambda < \frac{1}{C_{3,\alpha}K_{3}} \equiv \frac{1}{K_{4}}$, then $\lambda K_{3} < \frac{1}{C_{3,\alpha}}$, so we obtain $\mathbb{E}[\exp(\lambda^{\alpha}|X|^{\alpha})] = \mathbb{E}[\exp((\lambda K_{3})^{\alpha} |\frac{X}{K_{3}}|^{\alpha})] \leq \exp(C_{3,\alpha}^{\alpha}(K_{3}\lambda)^{\alpha}) = \exp(K_{4}^{\alpha} \lambda^{\alpha})$. $\newline$ \textbf{(4) $\Rightarrow$ (1):} Setting $K_{1}' \equiv \frac{K_{4}}{(\ln 2)^{\frac{1}{\alpha}}}$, we get $\lambda \equiv \frac{1}{K_{1}'} = \frac{(\ln 2)^{\frac{1}{\alpha}}}{K_{4}} < \frac{1}{K_{4}}$, so it follows that $\mathbb{E}[\exp( \frac{|X|^{\alpha}}{(K_{1}')^{\alpha}})]= \mathbb{E}[\exp(|X|^{\alpha} \lambda^{\alpha})] \leq \exp(\lambda^{\alpha} K_{4}^{\alpha}) = 2$. \section{Proof of Lemma \ref{cs}} \label{A2} Applying (\ref{3p}), we see that \begin{align*} &||X - \mathbb{E}[X]||_{L^{p}}\\ &\leq \begin{cases} ||X||_{L^{p}} + ||X||_{L^{1}} \leq 2||X||_{L^{p}} \leq 2 ||X||_{\psi_{\alpha}} p^{\frac{1}{\alpha}}, & \text{ if } \alpha \geq 1 \text{ and } p\geq \alpha, \text{ or if } 0 < \alpha < 1 \text{ and } p\geq 1,\\ \norm{X - \mathbb{E}[X]}_{L^{1}} \leq 2||X||_{L^{1}} \leq 2 ||X||_{\psi_{\alpha}} \leq (2\alpha^{-\frac{1}{\alpha}}) ( p^{\frac{1}{\alpha}} ||X||_{\psi_{\alpha}}), & \text{ if } 0 < \alpha < 1 \text{ and } \alpha \leq p \leq 1, \end{cases} \end{align*} so this yields the centering property.
In addition, since \begin{equation*} \mathbb{E}[\exp(\frac{(|X|^{\beta})^{\alpha}}{(||X||_{\psi_{\alpha \beta}}^{\beta})^{\alpha}})] =\mathbb{E}[\exp(\frac{|X|^{\alpha \beta}}{||X||_{\psi_{\alpha \beta}}^{\alpha \beta}})] \leq 2 \quad \text{and} \quad \mathbb{E}[\exp(\frac{|X|^{\alpha \beta}}{(||\;|X|^{\beta}||_{\psi_{\alpha}})^{\alpha}})] = \mathbb{E}[\exp(\frac{(|X|^{\beta})^{\alpha}}{(||\;|X|^{\beta}||_{\psi_{\alpha}})^{\alpha}})] \leq 2, \end{equation*} it follows that $||\;|X|^{\beta}\;||_{\psi_{\alpha}} \leq ||X||_{\psi_{\alpha\beta}}^{\beta}$ and $||X||_{\psi_{\alpha \beta}} \leq ||\; |X|^{\beta}||_{\psi_{\alpha}}^{\frac{1}{\beta}}$, so we obtain $||\;|X|^{\beta}\;||_{\psi_{\alpha}} = ||X||_{\psi_{\alpha\beta}}^{\beta}$, which yields the scaling property. \section{Proof of Lemma \ref{MGF}} \label{A3} Assume that $ \mathbb{E}[\exp(|\lambda Y|^{\alpha})] \leq \exp(|\lambda|^{\alpha})$ for every $0 \leq \lambda < 1$ and $\gamma \geq 2$. Since $|ab| \leq \frac{|a|^{p}}{p} + \frac{|b|^{q}}{q}$ by Young's inequality, $(\frac{1}{\beta})^{\frac{1}{\beta}} < 1$ for every $\beta > 1$, and $\frac{\alpha'}{\alpha} \leq \alpha' \leq 2^{\alpha'} \leq \gamma^{\alpha'}$, it follows that \begin{equation*} \mathbb{E}[\exp(\gamma |Y|)] \leq \mathbb{E}[\exp( \frac{\gamma^{\alpha'}}{\alpha'} + \frac{|Y|^{\alpha}}{\alpha})] \leq \exp(\frac{\gamma^{\alpha'}}{\alpha'}) \exp(\frac{1}{\alpha}) \leq \exp(2\frac{\gamma^{\alpha'}}{\alpha'}) \leq \exp(\frac{(2\gamma)^{\alpha'}}{\alpha'}). \end{equation*} Therefore, if $||X||_{\psi_{\alpha}} \leq K$, then $ \mathbb{E}[\exp(\lambda^{\alpha} |X|^{\alpha})] \leq \exp(K_{4}^{\alpha} \lambda^{\alpha}) $ for every $0\leq \lambda < \frac{1}{K_{4}}$ and $K_{4} \leq K\times C_{7,\alpha}$, where $C_{7,\alpha} \equiv \max\{1,C_{3,\alpha},C_{3,\alpha}C_{2,\alpha}\}$, so it follows that $ \mathbb{E}[\exp(|\lambda \frac{X}{K C_{7,\alpha}}|^{\alpha})] \leq \exp(\lambda^{\alpha}) $ for every $0\leq \lambda < 1$.
Hence, if $\lambda \geq \frac{2}{KC_{7,\alpha}}$, then $ \mathbb{E}[\exp(\lambda |X|)] = \mathbb{E}[\exp((\lambda K C_{7,\alpha}) |\frac{X}{KC_{7,\alpha}}|)] \leq \exp(\frac{(2\lambda K C_{7,\alpha})^{\alpha'}}{\alpha'}) $. \end{appendix} \bibliographystyle{plainurl}
\section{Introduction} \label{sec:introduction} In the era of the Large Electron Positron (LEP) collider~\cite{Heister:2003aj,Abdallah:2004xe,Achard:2004sv,Abbiendi:2004qz} at CERN and the Stanford Linear Collider (SLC)~\cite{Abe:1994mf} at SLAC, the energy-energy correlation function (EEC)~\cite{Basham:1978bw} never enjoyed the same amount of popularity as the six famous event shape variables, which are thrust~\cite{Brandt:1964sa,Farhi:1977sg}, heavy jet mass~\cite{Clavelli:1981yh}, wide and total jet broadening~\cite{Rakow:1981qn, Ellis:1986ig,Catani:1992jc}, $C$ parameter~\cite{Parisi:1978eg, Donoghue:1979vi} and the jet transition variable $Y_{23}$~\cite{Catani:1991hj}. Nonetheless, we are currently experiencing an unprecedented amount of theoretical work directed towards a better understanding of this observable in the context of perturbative QCD. One could even go as far as claiming that we are now living in the ``golden age of EEC''. Analytic results obtained for EEC in $\mathcal{N}=4$ Supersymmetric Yang-Mills (SYM) and QCD evolve hand in hand, with the former making maximal use of the exceptional amount of symmetry encoded in the $\mathcal{N}=4$ SYM Lagrangian and the latter relying on more conventional calculational techniques. The relevance of $\mathcal{N}=4$ SYM calculations for QCD and collider physics is thoroughly explained in \cite{Henn:2020omi}. A casual bystander might wonder what makes EEC and EEC-like observables so exceptionally well suited for higher-order analytic investigations, including, but not limited to, fixed-order calculations. After all, as of now none of the six famous event shape variables is known analytically at NLO, while in the case of EEC we already have two QCD NLO~\cite{Dixon:2018qgp,Luo:2019nig} and one $\mathcal{N}=4$ SYM NNLO~\cite{Belitsky:2013xxa,Belitsky:2013bja,Belitsky:2013ofa,Henn:2019gkr} fixed-order results.
Collinear and back-to-back regions of EEC in $\mathcal{N}=4$ SYM were investigated in \cite{Kologlu:2019mfz,Korchemsky:2019nzm}, while \cite{Moult:2019vou} introduced a formalism for the subleading power resummation of rapidity logarithms. Furthermore, it is worth noting that by making use of the AdS/CFT duality in $\mathcal{N}=4$ SYM one can also obtain a strong-coupling limit result for the EEC \cite{Maldacena:1997re,Hofman:2008ar}. In QCD, the collinear limit of EEC can be understood by using the recently available factorization theorem \cite{Dixon:2019uzg}, which also improves the resummation beyond the leading logarithmic (LL) accuracy \cite{Konishi:1978yx,Konishi:1978ax}. The back-to-back limit features an all-order factorization formula \cite{Moult:2018jzp} that makes use of the transverse-momentum dependent (TMD) factorization \cite{Collins:1981uk,Collins:1981va,Kodaira:1981nh}. In this limit, resummed predictions are currently known at the N$^3$LL$^\prime$ accuracy \cite{deFlorian:2004mp,Tulipant:2017ybb,Moult:2018jzp,Ebert:2020sfi}. The answer to the question raised in the above paragraph lies in the very definition of the energy-energy correlator. As we will see below, the Dirac delta that introduces correlations between energies of partons or final state hadrons can be straightforwardly converted to a loop-momentum dependent (albeit nonlinear) propagator and subjected to the standard methods of computing higher-order corrections, such as integration-by-parts (IBP) reduction~\cite{Chetyrkin:1981qh,Tkachov:1981wb} and differential equations~\cite{Kotikov:1991pm,Kotikov:1990kg,Kotikov:1991hm,Bern:1993kr,Remiddi:1997ny,Gehrmann:1999as}. 
Moreover, these steps can be carried out using off-the-shelf software packages for loop computations: the specifics of our observable (e.g.\ custom IBP equations for loop integrals with nonlinear propagators) can be encoded in the \textsc{Mathematica} scripts used to invoke the existing tools, so that the tools themselves do not require any modifications. The existence of numerical NNLO results~\cite{DelDuca:2016csb,Tulipant:2017ybb} (making use of the CoLoRFulNNLO method \cite{Somogyi:2006da,Somogyi:2006db,Aglietti:2008fe}) as well as the availability of public codes (e.g.\ \textsc{Event 2}~\cite{Catani:1996jh,Catani:1996vz}, \textsc{NLOJet++}~\cite{Nagy:2001fj,Nagy:2003tz}, \textsc{Eerad3}~\cite{Ridder:2014wza}) capable of evaluating the EEC numerically greatly facilitate the cross-checks of new analytic results. It is important to stress that when speaking of ``EEC'' we do not limit ourselves to the original definition of this event shape variable for electron-positron annihilation to partons via the reaction $e^+ e^- \to q \bar{q} + X$. For example, Transverse-Energy-Energy Correlations (TEEC) \cite{Ali:1984yp} have already been studied in the context of proton-proton~\cite{Ali:2012rn} and electron-proton~\cite{Ali:2020ksn} collisions. The back-to-back limit of TEEC can be investigated using recently obtained factorization theorems for hadron-hadron \cite{Gao:2019ojf} and electron-hadron \cite{Li:2020bub} colliders. Recent considerations of the three-point~\cite{Chen:2019bpb,Chen:2020adz}, four-point~\cite{Chicherin:2020azt} and multi-point energy correlators~\cite{Chang:2020qpj,Chen:2020vvp} as well as two-point gravitational energy correlators \cite{Gonzo:2020xza} represent further exciting extensions of the original EEC concept and signal an increased interest of the theory community in such novel event shape variables.
For phenomenological purposes, EEC can be employed as a tool to determine the value of the strong coupling constant (cf.\ e.g.\ \cite{Kardos:2020igb} for a recent study) by comparing the available theoretical predictions to the existing electron-positron collider measurements. In~\cite{Luo:2019nig} it was suggested that a new event shape variable, denoted as the Higgs EEC, could provide an intriguing connection between the strong and the Higgs sectors by defining an observable equally accessible to experimentalists analyzing the data from a future Higgs factory and to theorists calculating the corresponding predictions. Furthermore, this observable could be potentially used for the purpose of $\alpha_s$ determinations from hadronic Higgs decays. A high-energy lepton collider, be it CEPC~\cite{CEPCStudyGroup:2018rmc,CEPCStudyGroup:2018ghi}, ILC~\cite{Behnke:2013xla,Baer:2013cma}, FCC-ee~\cite{Gomez-Ceballos:2013zzn} or CLIC~\cite{Aicheler:2012bya,deBlas:2018mhx}, would be capable of copious production of Higgs bosons in the clean environment of $e^+e^-$-annihilations. It is, therefore, not unreasonable to expect that in the future we might witness a high precision measurement of the Higgs EEC using data collected at a leptonic Higgs factory. The analytic NLO results presented in~\cite{Luo:2019nig} concerned only the $H \to gg + X$ channel calculated in the Higgs Effective Theory (HEFT)~\cite{Wilczek:1977zn,Shifman:1978zn,Inami:1982xt,Kniehl:1995tn} with massless quarks. The goal of this work is to present analytic results also for the channel $H \to q \bar{q} + X$, thus completing the fixed-order investigation of the Higgs EEC at NLO. Since it has the largest branching ratio among Higgs decays, the decay of the Higgs into bottom quarks has received much attention from the theory community.
For example, the partial decay width of $H \to q \bar{q}$ has been calculated to N$^4$LO~\cite{Baikov:2005rw,Davies:2017xsp,Herzog:2017dtz}, and the fully differential decay width for the same process is known to N$^3$LO~\cite{Anastasiou:2011qx,DelDuca:2015zqa,Mondini:2019gid} for massless quarks and to NNLO~\cite{Bernreuther:2018ynm} for massive quarks. Some interesting results obtained very recently are the calculation of Higgs decaying into two bottom quarks and an additional jet at NNLO~\cite{Mondini:2019vub}, the study of the Higgs decay into four bottom quarks at NLO~\cite{Gao:2019ypl} and the investigation of the thrust distribution for Higgs going into a pair of bottom quarks or gluons plus an additional jet at NLO and approximate NNLO~\cite{Gao:2019mlt}. It is also worth mentioning that the NNLL$'$ resummed results are now available for the 2-jettiness distribution describing Higgs decays into $b \bar{b}$ and $gg$~\cite{Alioli:2020fzf}. For the sake of clarity, in the following we will denote the $H \to gg + X$ and $H \to q \bar{q} + X$ contributions as $Hgg$ EEC and $Hq\bar{q}$ EEC respectively. The Higgs EEC is then understood to contain both channels. The original observable from~\cite{Basham:1978bw} will be referred to as the standard EEC. Following~\cite{Luo:2019nig}, we define the Higgs EEC as \begin{equation} \frac{1}{\Gamma_{\textrm{tot}}} \frac{d \Sigma_H (\chi)}{d \cos\chi} = \sum_{a,b} \int \, \frac{2 E_a E_b}{m_H^2} \, \delta( \cos\theta_{ab} - \cos\chi) \, d \Gamma_{a+b+X}, \label{eq:eecdef} \end{equation} with $\Gamma_{\textrm{tot}}$ being the total decay width for $H \to \textrm{hadrons}$, whereas $d \Gamma_{a+b+X}$ describes the differential decay rate of a Higgs decaying into two hadrons plus anything else. Furthermore, we have $\cos\theta_{ab} = \hat{\bm{p}}_a \cdot \hat{\bm{p}}_b$, where $(E_a, \bm{p}_a)^T$ and $(E_b, \bm{p}_b)^T$ denote the 4-vectors of the hadrons $a$ and $b$ respectively.
Finally, $\chi$ is the angle between two calorimeters measuring the energies of $a$ and $b$, while $m_H$ stands for the Higgs boson mass. By summing over all available final state hadron pairs $(a,b)$ and weighting their contributions to the energy flow by the product of their energies divided by the square of the Higgs mass, we obtain a differential angular distribution normalized to unit area. To calculate the Higgs EEC in perturbation theory we replace the hadrons by partons and exclude self-correlations, so that the contributions with $a=b$ are removed from the summation in eq.\,\eqref{eq:eecdef}. The interacting part of the relevant Lagrangian reads \begin{equation} \mathcal{L}_{\textrm{int}} = - \frac{1}{4} \lambda H \mathrm{Tr} (G^{\mu \nu} G_{\mu \nu}) + \sum_q \frac{y_q}{\sqrt{2}} H \bar{\psi}_q \psi_q, \end{equation} where the first term stems from the HEFT with $\lambda$ being the corresponding Wilson coefficient (known up to $\textrm{N}^4\textrm{LO}$~\cite{Baikov:2016tgj}). The second term is the Standard Model Yukawa interaction for quarks, with $y_q$ being the Yukawa coupling for the quark flavor $q$. To facilitate the analytic calculation we choose to work in the massless quark limit, while keeping nonvanishing Yukawa couplings. The top quark contributions are thus omitted and we have only 5 active quark flavors. As has already been observed in~\cite{Gao:2019mlt}, the chiral symmetry of massless QCD ensures that in this approximation there is no interference between the $H \to gg + X$ and $H \to q \bar{q} + X$ channels. The respective operators also do not mix under the renormalization so that both pieces can be treated separately. Since the gluonic channel has already been computed in~\cite{Luo:2019nig}, our sole remaining task is to calculate the contribution from Higgs decaying to a quark-antiquark pair and one or two additional partons. The 3-parton final state corresponds to the LO result, while the 4-parton states are needed for the NLO. 
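To make the measurement in eq.\,\eqref{eq:eecdef} concrete, the following toy script (our own illustration, not code from this work; the event content, number of bins and $m_H$ value are arbitrary choices) accumulates the pairwise weights $2 E_a E_b / m_H^2$ in bins of $\cos\chi$ for a single partonic configuration, excluding self-correlations:

```python
import math

def eec_weights(partons, m_H, nbins=20):
    """Toy illustration of the EEC measurement: for each pair of distinct
    final-state momenta (E, px, py, pz), accumulate the weight
    2 E_a E_b / m_H^2 in bins of cos(chi); self-correlations a = b
    are excluded."""
    hist = [0.0] * nbins
    for a in range(len(partons)):
        for b in range(a + 1, len(partons)):
            Ea, pa = partons[a][0], partons[a][1:]
            Eb, pb = partons[b][0], partons[b][1:]
            dot = sum(x * y for x, y in zip(pa, pb))
            na = math.sqrt(sum(x * x for x in pa))
            nb = math.sqrt(sum(x * x for x in pb))
            cos_chi = dot / (na * nb)
            i = min(int((cos_chi + 1.0) / 2.0 * nbins), nbins - 1)
            hist[i] += 2.0 * Ea * Eb / m_H**2
    return hist

# A symmetric 3-parton configuration (Mercedes-star event), m_H = 125 GeV:
E = 125.0 / 3.0
partons = [(E, E, 0.0, 0.0),
           (E, -E / 2.0, E * math.sqrt(3.0) / 2.0, 0.0),
           (E, -E / 2.0, -E * math.sqrt(3.0) / 2.0, 0.0)]
hist = eec_weights(partons, 125.0)
print(sum(hist))  # ~ 2/3 for this configuration
```

By energy conservation, $\sum_{a<b} 2 E_a E_b / m_H^2 = (m_H^2 - \sum_a E_a^2)/m_H^2$, which equals $2/3$ for the symmetric three-parton configuration above.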
We normalize the $H q \bar{q}$ EEC contribution with respect to the total decay width for $H \to q \bar{q}$ given by \begin{equation} \Gamma_{\textrm{tot}} = \frac{y_q^2 (\mu) m_H C_A}{16 \pi} K(\mu), \label{eq:hdecaytot} \end{equation} where $C_A$ stands for the number of colors and $K(\mu)$ encodes higher order corrections in $\alpha_s$. The $K$-factor for $H \to b \bar{b}$ in the limit where the bottom mass is set to zero is currently known at $\mathcal{O}(\alpha_s^4)$~\cite{Baikov:2005rw,Davies:2017xsp,Herzog:2017dtz}, and the full scale dependence up to $\mathcal{O}(\alpha_s^3)$ can be found in~\cite{Chetyrkin:1996sr}. This normalization prescription ensures that $H q \bar{q}$ EEC does not depend on $y_q$, while the dependence on $m_H$ enters only through $\log(\mu/m_H)$ and vanishes for the renormalization scale choice $\mu = m_H$. Our paper is organized as follows. We describe the technical details of our Higgs EEC calculation for the $H \to q \bar{q} + X$ channel in section \ref{sec:calculation} and subsequently present the obtained analytic results (including the asymptotic behavior in the collinear and back-to-back limits) in section \ref{sec:fullres}. Section \ref{sec:numerics} explores the phenomenological implications of the $H q \bar{q}$ EEC observable. Finally, our conclusions and possible future extensions of this work are summarized in section \ref{sec:summary}. \section{Technical framework} \label{sec:calculation} Our calculation essentially follows the path that has already been outlined in~\cite{Dixon:2018qgp} and explained in detail in~\cite{Luo:2019nig}, so that we keep the following description short. First of all, we need to obtain matrix elements squared $|\mathcal{M}(H \to q \bar{q} + X)|^2$ for real, double-real and real-virtual corrections to the Higgs decaying into a quark-antiquark pair.
The real and double-real contributions follow directly from squaring the corresponding tree-level amplitudes with 3- or 4-parton final states respectively \begin{subequations} \begin{align} H (Q) &\to q (p_1) \bar{q} (p_2) g (p_3), \\ H (Q) &\to q (p_1) \bar{q} (p_2) q' (p_3) \bar{q}' (p_4), \\ H (Q) &\to q (p_1) \bar{q} (p_2) q (p_3) \bar{q} (p_4), \\ H (Q) &\to q (p_1) \bar{q} (p_2) g (p_3) g (p_4). \end{align} \end{subequations} A visualization of the double-real contributions using the cut diagram notation is shown in figure \ref{fig:realcorr}. Working in the rest frame of the decaying Higgs particle, we have $Q = (m_H,0,0,0)^T$. The real-virtual piece follows from the interference of the tree-level and 1-loop 3-parton final states. The Higgs EEC observable without the overall normalization factor is obtained by multiplying $|\mathcal{M}(H \to q \bar{q} + X)|^2$ with the measurement function \begin{equation} E_a E_b \, \delta (\cos \theta_{ab} - \cos \chi ) = (p_a \cdot Q)^2 (p_b \cdot Q)^2 \delta \left ( 2 z \, p_a \cdot Q \, p_b \cdot Q - p_a \cdot p_b \, Q^2 \right ), \end{equation} where we introduced \begin{equation} 2 z \equiv 1 - \cos \chi. \end{equation} Since the real-virtual piece involves only a massless 3-particle phase space, it is sufficiently simple to be integrated directly via \textsc{HyperInt}~\cite{Panzer:2014caa}. However, the NLO double-real contribution leaves us with a large number of complicated and badly divergent\footnote{The IR safety of the EEC observable guarantees the absence of $1/\varepsilon_{\textrm{IR}}$ poles in the final result but not in the intermediate results.} phase-space integrals. We choose to handle them by employing the method of reverse unitarity~\cite{Anastasiou:2002yz,Anastasiou:2003yy} which effectively trades the measurement function for the following nonlinear cut propagator \begin{equation} \frac{1}{2 z \, p_a \cdot Q \, p_b \cdot Q - p_a \cdot p_b \, Q^2} \bigl |_{\textrm{cut}}. 
\end{equation} The occurring loop integrals can then be reduced using IBP techniques. The resulting master integrals can be solved via differential equations by finding a canonical form~\cite{Henn:2013pwa} for each of the systems and then determining the integration constants using suitable boundary conditions. In practice, we generate the Higgs decay amplitudes using \textsc{QGRAF}~\cite{Nogueira:1991ex} and \textsc{FeynArts}~\cite{Hahn:2000kx}. \textsc{FeynCalc}~\cite{Mertig:1990an,Shtabovenko:2016sxi,Shtabovenko:2020gxv}, \textsc{FORM}~\cite{Vermaseren:2000nd} and \textsc{Color}~\cite{vanRitbergen:1998pn} are used to prepare the squared matrix elements, evaluate them in $d$-dimensions and carry out the color algebra. We also employ \textsc{FeynHelpers}~\cite{Shtabovenko:2016whf} and \textsc{Package-X}~\cite{Patel:2015tea,Patel:2016fam} for the calculation of the real-virtual matrix element. To avoid dealing with ghost contributions we make use of the axial gauge \begin{equation} \sum_{\lambda=1}^2 \varepsilon^\mu (\bm{p}_i, \lambda) \varepsilon^{\ast \nu} (\bm{p}_i, \lambda) = -g^{\mu \nu} + \frac{(p_i^\mu n^\nu + p_i^\nu n^\mu)}{p_i \cdot n} - \frac{n^2 p_i^\mu p_i^\nu}{(p_i \cdot n)^2}, \end{equation} when summing over the gluon polarizations. 
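As a quick numerical illustration (ours, not part of the calculation), one can verify that this polarization sum is transverse: contracting it with $p_{\nu}$ gives zero for a lightlike gluon momentum, which is why ghost contributions are absent in this gauge. A minimal sketch:

```python
import random

# Minkowski metric, signature (+,-,-,-), stored as its diagonal
g = [1.0, -1.0, -1.0, -1.0]

def mdot(a, b):
    """Minkowski inner product of two 4-vectors."""
    return sum(g[mu] * a[mu] * b[mu] for mu in range(4))

def pol_sum(p, n):
    """Axial-gauge gluon polarization sum
    -g^{mu nu} + (p^mu n^nu + p^nu n^mu)/(p.n) - n^2 p^mu p^nu/(p.n)^2."""
    pn = mdot(p, n)
    n2 = mdot(n, n)
    P = [[0.0] * 4 for _ in range(4)]
    for mu in range(4):
        for nu in range(4):
            gmn = g[mu] if mu == nu else 0.0
            P[mu][nu] = (-gmn + (p[mu] * n[nu] + p[nu] * n[mu]) / pn
                         - n2 * p[mu] * p[nu] / pn**2)
    return P

rng = random.Random(1)
px = [rng.uniform(-1, 1) for _ in range(3)]
E = sum(x * x for x in px) ** 0.5          # lightlike: p^2 = 0
p = [E] + px
n = [1.0, 0.3, -0.2, 0.5]                  # arbitrary reference vector
P = pol_sum(p, n)
# Transversality: contracting with p_nu (index lowered with g) vanishes
contr = [sum(g[nu] * P[mu][nu] * p[nu] for nu in range(4)) for mu in range(4)]
print(max(abs(c) for c in contr))  # ~ 0 up to floating-point error
```

The same contraction with $n_{\nu}$ also vanishes, so only the two physical polarizations propagate.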
\begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{EEC1.pdf} \caption{$qqgg$} \label{fig:gg} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{EEC2.pdf} \caption{$q\bar{q}q'\bar{q}'$} \label{fig:flavor} \end{subfigure} \begin{subfigure}[c]{0.45\textwidth} \includegraphics[width=\textwidth]{EEC3.pdf} \caption{$q\bar{q}q\bar{q}$} \label{fig:qq} \end{subfigure} \begin{subfigure}[c]{0.45\textwidth} \includegraphics[width=\textwidth]{EEC4.pdf} \caption{$q\bar{q}q\bar{q}$} \label{fig:qq0} \end{subfigure} \caption{Representative cut diagrams for real corrections to the $H q\bar{q}$ EEC at NLO.} \label{fig:realcorr} \end{figure} The obligatory topology identification step proceeds by considering all possible ways to exchange loop momenta $p_a \leftrightarrow p_b$ or to perform a shift $p_a \to Q - \sum_{b \neq a} p_b$. Notice that the invariance of the sum of the measurement functions for different partons under these manipulations leads to a significant simplification of this task. In the first step, instead of looking at the full integrand \begin{equation} \left (\prod_k \delta_+(p_k^2) \right) |\mathcal{M}(H \to q \bar{q} + X)|^2 \sum_{a<b} \, 2 E_a E_b \, \delta (\cos \theta_{ab} - \cos \chi) \end{equation} it is convenient to omit the Dirac delta from the measurement function and enumerate the occurring subtopologies. In the second step we augment each identified subtopology with the corresponding nonlinear cut propagator. In the case of a 4-parton final state, one subtopology gives rise to 6 integral families, stemming from the parton pairs $(1,2)$, $(1,3)$, $(1,4)$, $(2,3)$, $(2,4)$ and $(3,4)$.
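The combinatorics behind the 6 integral families per subtopology is simply the number of unordered parton pairs, $\binom{4}{2} = 6$:

```python
from itertools import combinations

# unordered parton pairs (a, b) entering the measurement function for a
# 4-parton final state; each pair turns a subtopology into one integral family
pairs = list(combinations(range(1, 5), 2))
print(pairs)        # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(len(pairs))   # 6
```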
The subprocesses with $q\bar{q} g$, $q \bar{q} q \bar{q}$ and $q \bar{q} q' \bar{q}'$ final states contain only one subtopology each, given by \begin{align} &\{p_1, p_2, Q -p_1 - p_2,Q -p_1, Q -p_2 \}, \\ & \{p_1, p_2, p_3, Q -p_1 - p_2 - p_3, Q -p_1 - p_2, Q -p_1, Q -p_2, p_1 + p_3, p_2 + p_3 \} \end{align} and \begin{equation} \{p_1, p_2, p_3, Q -p_1 - p_2 - p_3, Q -p_1, Q - p_3, p_1 + p_2 + p_3, p_1 + p_2, p_1 + p_3 \} \end{equation} respectively. The most complicated double-real piece stemming from the $q \bar{q} g g$ final state involves the following 3 subtopologies \begin{subequations} \begin{align} & \{p_1, p_2, p_3, Q -p_1 - p_2 - p_3, Q -p_1 - p_2, Q -p_2 , Q -p_1 , p_1 + p_3, p_2 + p_3\}, \\ &\{p_1, p_2, p_3, Q -p_1 - p_2 - p_3 , Q -p_1 - p_3 , Q -p_2 , Q -p_1, p_1 + p_3, p_1 + p_2 \}, \\ & \{p_1, p_2, p_3, Q -p_1 - p_2 - p_3, Q -p_1 - p_3, Q -p_2 - p_3, Q -p_2, p_1 + p_3, p_2 + p_3 \} \end{align} \end{subequations} that lead to 18 integral families. The search for a minimal set of subtopologies as well as the generation of the final integral families is done using in-house \textsc{Mathematica} scripts. Custom codes written on top of \textsc{FeynCalc} and \textsc{LiteRed}~\cite{Lee:2012cn} are used to handle linearly dependent propagators via partial fraction decomposition and to derive symbolic equations for the IBP reduction. Then, the IBP reduction is carried out with \textsc{FIRE}~\cite{Smirnov:2014hma,Smirnov:2019qkx}, where we submit our custom IBP equations to the program via the variable \texttt{startinglist} and mark all cut propagators through the \texttt{RESTRICTIONS} setting. Finally, we map the obtained master integrals to the set of integrals that was calculated in~\cite{Dixon:2018qgp}. Just as in the case of the $H gg$ EEC, we find no new master integrals that cannot be expressed as a linear combination of masters from the standard EEC integral basis at NLO.
Upon adding all contributions together and carrying out the UV-renormalization of the real-virtual contribution, we end up with a manifestly finite result, as expected from the IR-safe property of the EEC event shape variables. \section{Analytic results at NLO} \label{sec:fullres} The main result of this work is the analytic expression for the $H q \bar{q}$ EEC at $\mathcal{O} (\alpha_s^2)$ given by \begin{align} & \frac{1}{\Gamma_{\textrm{tot}}} \frac{d\Sigma_{H q \bar{q}}(\chi)}{d\cos\chi} \nonumber \\ & = \frac{1}{K(\mu)} \left [ \frac{\alpha_s(\mu)}{2\pi} A_{H q \bar{q}}(z) + \left(\frac{\alpha_s(\mu)}{2 \pi}\right)^2 \left( (\beta_0 + 6 \, C_F) \log \frac{\mu}{m_H} A_{H q \bar{q}}(z) + B_{H q \bar{q}}(z) \right) \right ], \label{eq:eecnlo} \end{align} where $\beta_0 = 11/3 C_A - 4/3 N_f T_f$ and $N_f$ stands for the number of quark flavors. The QCD color factors read $C_A = N_c = 3$, $C_F = (N_c^2-1)/(2 N_c) = 4/3$ and $T_f = 1/2$ with $N_c$ being the number of colors. The overall prefactor $1/K(\mu)$ stems from the normalization prescription given in eq.\,\eqref{eq:hdecaytot}, while $A_{H q \bar{q}}(z)$ and $B_{H q \bar{q}}(z)$ denote the LO and NLO coefficients respectively. One may wonder why the coefficient of $\log \frac{\mu}{m_H}$ in the numerator of eq.~\eqref{eq:eecnlo} is proportional to $\beta_0+6 C_F$. The origin of this term can be traced back to the usual strong coupling constant renormalization and the additional Yukawa renormalization~\cite{Gehrmann:2014vha,Gao:2019mlt}, \begin{align} y_q^b = y_q(\mu) \left( 1- \frac{3 C_F}{2 \epsilon} \frac{\alpha_s}{2\pi} + \mathcal{O}(\alpha_s^2) \right) \,. \end{align} Notice that $K(\mu)$ to order $\mathcal{O}(\alpha_s)$ is given by \begin{align} \label{eq:KmuDefinition} K(\mu) = 1+ \frac{\alpha_s}{2 \pi} C_F \left(\frac{17}{2}+ 6 \log \frac{\mu}{m_H} \right) + \mathcal{O}(\alpha_s^2)\,. 
\end{align} Using eq.~\eqref{eq:KmuDefinition}, one could expand eq.~\eqref{eq:eecnlo} to $\mathcal{O}(\alpha_s^2)$, obtaining a result with the coefficient of $\log \frac{\mu}{m_H}$ being exactly proportional to $\beta_0$. The LO piece is directly proportional to $C_F$ and can be written as \begin{align} A_{H q \bar{q}}(z) & = C_F \left(\frac{-18+15 z}{4 (1-z) z^4}+\frac{\left(-9+12 z-3 z^2-z^3\right) \log (1-z)}{2 (1-z) z^5}\right). \label{eq:ahzexpl} \end{align} The NLO coefficient $B_{H q \bar{q}}(z)$ can be decomposed into \begin{equation} B_{H q \bar{q}}(z) = C_F^2 B_{H q \bar{q},\text{lc}}(z) + C_F (C_A - 2 C_F) B_{H q \bar{q},\text{nlc}}(z) + C_F N_f T_f B_{H q \bar{q},N_f}(z), \label{eq:bhzdecomp} \end{equation} where $B_{H q \bar{q},\text{lc}}(z)$, $B_{H q \bar{q},\text{nlc}}(z)$ and $B_{H q \bar{q},N_f}(z)$ stand for the leading color, next-to-leading color and the $N_f$ pieces respectively. The color structure of the NLO coefficient is identical to the one observed in the standard EEC. This is not surprising, as both observables are quark-initiated quantities. The analytic structure of the color coefficients precisely follows the pattern known from the standard EEC and the $H gg$ EEC. 
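The statement about the $\log \frac{\mu}{m_H}$ coefficient can be verified symbolically: expanding the master formula together with the $1/K(\mu)$ prefactor to $\mathcal{O}(\alpha_s^2)$, the $6\, C_F$ piece cancels against the $\mathcal{O}(\alpha_s)$ term of $K(\mu)$, leaving a coefficient proportional to $\beta_0$ alone. A short check with \textsc{SymPy}, keeping $A$ and $B$ as abstract symbols:

```python
import sympy as sp

# a = alpha_s/(2 pi), L = log(mu/m_H); A, B are abstract LO/NLO coefficients
a, L, CF, b0, A, B = sp.symbols('a L C_F beta_0 A B')

K = 1 + a * CF * (sp.Rational(17, 2) + 6 * L)            # K(mu) to O(alpha_s)
Sigma = (a * A + a**2 * ((b0 + 6 * CF) * L * A + B)) / K

expanded = sp.series(Sigma, a, 0, 3).removeO().expand()
coeff_L = expanded.coeff(a, 2).coeff(L)
print(coeff_L)   # -> A*beta_0: the 6*C_F piece cancels against the K(mu) expansion
```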
We again find the same set of building block functions $g_i^{(j)}$, where $j$ denotes the pure transcendental weight \begin{align} g_1^{(1)} &= \log (1-z)\,, \nonumber \\ g_2^{(1)} &= \log (z)\,, \nonumber \\ g_1^{(2)} &= 2 (\text{Li}_2(z)+\zeta_2)+\log ^2(1-z)\,, \nonumber\\ g_2^{(2)} & = \text{Li}_2(1-z)-\text{Li}_2(z)\,, \nonumber \\ g_3^{(2)} &= - 2 \, \text{Li}_2\left(-\sqrt{z}\right) + 2 \, \text{Li}_2\left(\sqrt{z}\right) + \log\left(\frac{1-\sqrt{z}}{1+\sqrt{z}}\right) \log (z) \,, \nonumber\\ g_4^{(2)} &= \zeta_2 \,, \nonumber \\ g_1^{(3)} & = -6 \left[ \text{Li}_3\left(-\frac{z}{1-z}\right)-\zeta_3 \right] - \log \left(\frac{z}{1-z}\right) \left(2 (\text{Li}_2(z)+\zeta_2)+\log^2(1-z)\right) \,, \nonumber\\ g_2^{(3)} & = -12 \left[ \text{Li}_3(z)+\text{Li}_3\left(-\frac{z}{1-z}\right) \right] + 6 \, \text{Li}_2(z) \log(1-z) + \log^3(1-z) \,, \nonumber\\ g_3^{(3)} & = 6 \log(1-z) \, (\text{Li}_2(z)-\zeta_2) - 12 \, \text{Li}_3(z) + \log^3(1-z) \,, \nonumber\\ g_4^{(3)} &= \text{Li}_3\left(-\frac{z}{1-z}\right) - 3 \, \zeta_2 \log(z) + 8 \, \zeta_3 \,,\nonumber\\ g_5^{(3)} &= - 8 \left[ \text{Li}_3\left(-\frac{\sqrt{z}}{1-\sqrt{z}}\right) + \text{Li}_3\left(\frac{\sqrt{z}}{1+\sqrt{z}}\right) \right] + 2 \text{Li}_3\left(-\frac{z}{1-z}\right) + 4 \zeta_2 \log (1-z) \nonumber \\ & +\log \left(\frac{1-z}{z}\right) \log^2\left(\frac{1+\sqrt{z}}{1-\sqrt{z}}\right) \,. \label{eq:gdef} \end{align} The coefficients of these functions (except for some coefficients multiplying $g_3^{(2)}$) are rational polynomials of the form \begin{equation} \frac{\sum_{i=1}^7 c_i z^i}{(1-z)^m z^k}, \textrm{ with } c_i \in \mathbb{Z}, \quad 0 \leq m \leq 1, \quad 0 \leq k \leq 5 \ \textrm{ and } m,k \in \mathbb{N}_0.
\end{equation} Every color component also contains a term proportional to $1/z^{7/2} g_3^{(2)}$ and those pieces are symmetric under $\sqrt{z} \to - \sqrt{z}$, which appears to be a universal feature of EEC observables at NLO~\cite{Belitsky:2013ofa,Dixon:2018qgp,Luo:2019nig}. On the other hand, it is interesting to observe that the highest power of $z$ in the numerators of the rational polynomials is only 7, at variance with 8 in the case of the $Hgg$ EEC and 9 for the standard EEC at NLO. Furthermore, the maximal value of the power $k$, namely 5, also occurs in the standard EEC, while $k$ can go up to 6 for the $Hgg$ EEC. Presumably, these small differences between the $H q\bar{q}$ EEC and the standard EEC can be largely attributed to the different vertex structures of $\mathcal{L}_{\textrm{int}}$: the former is initiated through a scalar-fermion coupling, while the latter starts via a vector-fermion interaction. The analytic results for the separate color components at NLO read as follows \begin{subequations} \begin{align} & B_{H q \bar{q},\text{lc}}(z) = -\frac{17422-15003 z-369 z^2-304 z^3+576 z^4-576 z^5}{288 (1-z) z^4} \nonumber \\ &-\frac{\left(4775-9637 z+5189 z^2-387 z^3+436 z^4-1280 z^5+2016 z^6-1152 z^7\right)}{144 (1-z) z^5} g_1^{(1)} \nonumber \\ &+\frac{\left(195+321 z-472 z^2+44 z^3-352 z^4+720 z^5-576 z^6\right) }{72 (1-z) z^4} g_2^{(1)} \nonumber \\ &+\frac{\left(263-195 z-32 z^2+50 z^3+33 z^4-21 z^5\right)}{24 (1-z) z^5} g_1^{(2)} \nonumber\\ & -\frac{\left(65+138 z-94 z^2+32 z^3+64 z^4-96 z^5+192 z^6\right) }{24 z^5}g_2^{(2)}+\frac{(3+35 z) }{96 z^{7/2}} g_3^{(2)}\nonumber \\ & -2 \left(1-2 z+2 z^2\right) g_1^{(3)} -\frac{\left(19-27 z+10 z^2\right) }{6 (1-z) z^5} g_2^{(3)} +\frac{1}{6 (1-z)} g_3^{(3)}\nonumber \\ &-\frac{\left(461-463 z+168 z^2-26 z^3+48 z^4\right)}{24 (1-z) z^5} g_4^{(2)} , \\ \nonumber \\ & B_{H q \bar{q},\text{nlc}}(z) = -\frac{4082-4101 z+471 z^2-137 z^3+288 z^4-288 z^5}{144 (1-z) z^4} \nonumber \\ & -\frac{\left(4610-9529
z+5813 z^2-859 z^3+775 z^4-1604 z^5+2016 z^6-1152 z^7\right) }{144 (1-z) z^5} g_1^{(1)} \nonumber \\ &-\frac{\left(2496-4245 z+1207 z^2-338 z^3+2056 z^4-2880 z^5+2304 z^6\right) }{288 (1-z) z^4} g_2^{(1)} \nonumber \\ & +\frac{\left(328-435 z+53 z^2+117 z^3-9 z^4-10 z^5\right) }{48 (1-z) z^5} g_1^{(2)} \nonumber \\ &+\frac{\left(208-213 z+36 z^2-11 z^3-118 z^4+96 z^5-192 z^6\right) }{24 z^5} g_2^{(2)} +\frac{\left(291+175 z+384 z^2\right) }{192 z^{7/2}} g_3^{(2)} \nonumber \\ & -\frac{\left(268-428 z+169 z^2+26 z^3-24 z^4\right) }{12 (1-z) z^5} g_4^{(2)} \nonumber \\ & +\frac{\left(6-33 z+57 z^2-64 z^3+32 z^4\right) }{8 (1-z) z} g_1^{(3)} -\frac{\left(22-39 z+25 z^2-8 z^3+2 z^4-4 z^5\right)}{24 (1-z) z^5} g_2^{(3)} \nonumber \\ &-\frac{(1-2 z) g_4^{(3)}}{2 (1-z) z} -\frac{\left(3+2 z^2+4 z^3\right) }{8 z^4} g_5^{(3)}, \\ \nonumber \\ & B_{H q \bar{q},N_f}(z) = -\frac{10-277 z+215 z^2+16 z^3}{48 (1-z) z^4}+\frac{\left(381-621 z+321 z^2-53 z^3-24 z^4\right) }{72 (1-z) z^5} g_1^{(1)} \nonumber \\ &+\frac{\left(204-273 z+101 z^2\right) }{48 (1-z) z^4} g_2^{(1)} -\frac{\left(9-12 z+3 z^2+z^3+z^5\right) }{6 (1-z) z^5} g_1^{(2)} -\frac{\left(51-42 z+16 z^2\right) }{12 z^5} g_2^{(2)} \nonumber \\ &-\frac{(1+5 z) }{32 z^{7/2}}g_3^{(2)} +\frac{\left(87-141 z+70 z^2-12 z^3\right) }{12 (1-z) z^5} g_4^{(2)}. \end{align} \end{subequations} A plot of $B_{H q \bar{q}}(z)$ showing the size of contributions from the three different color components is shown in figure \ref{fig:bhplot}. \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth,clip]{color_coeff_Bh.pdf} \caption{NLO coefficient $B_{H q \bar{q}}$ and its color components $B_{H q \bar{q},\textrm{lc}}$, $B_{H q \bar{q},\textrm{nlc}}$ and $B_{H q \bar{q},N_f}$ for $N_f =5$ and $N_c = 3$. 
Only the $N_f$ piece yields a negative contribution, while both other components contribute positively.} \label{fig:bhplot} \end{figure} The collinear limit of the $H q\bar{q}$ EEC is easily obtained by expanding the fixed order result around $z=0$. Up to $\mathcal{O}(z)$ this yields \begin{subequations} \begin{align} & A_{H q \bar{q}}(z) = \frac{1}{z} \frac{3C_F }{8 } +\frac{21 C_F}{40} + \mathcal{O}(z), \\ & B_{H q \bar{q}}(z) = \frac{1}{z} \biggl[\log (z) \left(-\frac{107 C_A C_F}{120}+\frac{53}{240} C_F N_f T_f+\frac{25 C_F^2}{32}\right) \nonumber \\ & +\left(-\frac{25 \zeta _2}{12}+\frac{\zeta _3}{2}+\frac{71677}{10800}\right) C_A C_F-\frac{1217}{900} C_F N_f T_f+\left(\frac{43 \zeta _2}{12}-\zeta _3-\frac{4051}{1728}\right) C_F^2\biggr] \nonumber \\ & +\log (z) \biggl[\left(\frac{21 \zeta _2}{4}-\frac{32089}{3360}\right) C_A C_F+\frac{803 C_F N_f T_f}{2520}+\left(\frac{2029}{180}-\frac{13 \zeta _2}{2}\right) C_F^2\biggr] \nonumber \\ & +\left(\frac{151 \zeta _2}{24}-\frac{65 \zeta _3}{4}+\frac{20108803}{1411200}\right) C_A C_F+\left(-\frac{\zeta _2}{3}-\frac{90047}{66150}\right) C_F N_f T_f \nonumber \\ & +\left(-\frac{33 \zeta _2}{4}+\frac{41 \zeta _3}{2}-\frac{319489}{43200}\right) C_F^2 + \mathcal{O}(z). \end{align} \end{subequations} In the same manner we can also explore the back-to-back limit. Notice that the presence of large logarithms from soft and collinear emissions signals the necessity of a proper resummation using the existing techniques~\cite{Collins:1981uk,Dokshitzer:1999sh,Moult:2018jzp,Gao:2019ojf}. 
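The quoted collinear expansion of the LO coefficient can be reproduced directly from eq.~\eqref{eq:ahzexpl}: the spurious $1/z^5$ poles cancel between the rational and logarithmic terms, leaving the $3 C_F / (8 z)$ behavior. For example, with \textsc{SymPy}:

```python
import sympy as sp

z, CF = sp.symbols('z C_F', positive=True)

# LO coefficient A_{Hqq}(z): rational part plus log(1-z) part, as given in the text
A = CF * ((-18 + 15 * z) / (4 * (1 - z) * z**4)
          + (-9 + 12 * z - 3 * z**2 - z**3) * sp.log(1 - z) / (2 * (1 - z) * z**5))

expansion = sp.series(A, z, 0, 1)
print(expansion)   # -> 3*C_F/(8*z) + 21*C_F/40 + O(z)
```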
Expanding around $z = 1$ we find \begin{subequations} \begin{align} & A_{H q \bar{q}}(z) = \frac{1}{1-z} \left[-\frac{1}{2} C_F \log (1-z)-\frac{3 C_F}{4}\right]-4 C_F \log (1-z)-\frac{27 C_F}{4} + \mathcal{O}(1-z), \\ & B_{H q \bar{q}}(z) = \frac{1}{1-z} \biggl[\frac{1}{2} C_F^2 \log ^3(1-z) +\log ^2(1-z) \left(\frac{11 C_A C_F}{12}-\frac{1}{3} C_F N_f T_f+\frac{9 C_F^2}{4}\right) \nonumber \\ & + \log (1-z) \left(\left(\frac{\zeta _2}{2}-\frac{35}{72}\right) C_A C_F+\frac{1}{18} C_F N_f T_f+\left(\zeta _2+\frac{5}{4}\right) C_F^2\right) \nonumber \\ & +\left(\frac{11 \zeta _2}{4}+\frac{3 \zeta _3}{2}-\frac{35}{16}\right) C_A C_F +\left(\frac{3}{4}-\zeta _2\right) C_F N_f T_f+\left(3 \zeta _2-\zeta _3-\frac{27}{16}\right) C_F^2\biggr] \nonumber \\ & +\log ^3(1-z) \left[\frac{13 C_A C_F}{24}+\frac{7 C_F^2}{4}\right] +\log ^2(1-z) \left[\frac{37 C_A C_F}{6}-\frac{4}{3} C_F N_f T_f+\frac{25 C_F^2}{2}\right] \nonumber \\ & +\log (1-z) \left[\left(\frac{41 \zeta _2}{4}-\frac{727}{72}\right) C_A C_F+\frac{103}{36} C_F N_f T_f+\left(\frac{\zeta _2}{2}+\frac{47}{2}\right) C_F^2\right] \nonumber\\ & + \left(\frac{3259 \zeta _2}{96}-\frac{23 \zeta _3}{8}-\frac{27}{2} \zeta _2 \log (2)-\frac{871}{24}\right) C_A C_F +\left(\frac{15 \zeta _2}{16}+\frac{115}{16}\right) C_F N_f T_f \nonumber\\ & + \left(\frac{83 \zeta _2}{24}+\frac{111 \zeta _3}{4}+27 \zeta _2 \log (2)-\frac{2111}{96}\right) C_F^2 + \mathcal{O}(1-z), \end{align} \end{subequations} where the leading power terms can also be obtained using the formalism of~\cite{Moult:2018jzp}. \section{Phenomenological applications} \label{sec:numerics} In the following we present a brief discussion of phenomenological applications of the $H q\bar{q}$ EEC event shape variable in Higgs boson decays. We verify our analytic formulas by comparing them to a numerical result that was obtained using Monte Carlo (MC) integration.
In the numerical calculation we used independent matrix elements that were automatically generated with \textsc{GoSam} 2.0~\cite{Cullen:2014yla}, while the real corrections were treated using the dipole subtraction method~\cite{Catani:2002hc}. We set the strong coupling constant to $\alpha_s(M_Z)=0.1181$ in the calculations. Analytic and numerical results for the $H q\bar{q}$ EEC at LO and NLO are shown in figure \ref{fig:plotbench}, where the underlying process is the decay of the Higgs into massless quarks and all distributions are normalized to the total partial width at LO. To simplify the comparison and improve the visual quality of the plot, we choose slightly different $\cos \chi$ values for the curves describing the analytical and numerical distributions. As can be inferred from the plot, within the MC errors we find perfect agreement between our analytic and numerical predictions both at LO and NLO. \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth,clip]{plotbench.pdf} \caption{ Comparisons between full analytic LO and NLO results for the $H q\bar{q}$ EEC and the corresponding numerical calculations using MC integration. We consider only Higgs bosons decaying into massless quarks and normalize each distribution to the total partial width at LO. The MC errors are much smaller than the size of the markers. } \label{fig:plotbench} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth,clip]{plotmass.pdf} \caption{ Upper panel: different NLO results for the $H q\bar{q}$ EEC, where everything is normalized to the fixed-order NLO prediction for massless quarks (solid red curve). The dashed green curve shows the same calculation done with massive quarks, while the dot-dashed blue curve also includes matching to parton shower (but no hadronization). The effects of massive quarks, parton shower and hadronization are simultaneously incorporated into the cyan solid curve.
Lower panel: scale variations of the NLO prediction with massive bottom quarks matched to parton shower and hadronization and the projected experimental uncertainty. The latter includes only statistical errors assuming a total number of $4\times 10^5$ events. } \label{fig:plotmass} \end{figure} A direct comparison to the future experimental data requires additional corrections to the fixed-order theory prediction, which we discuss below. First of all, the effect of the finite bottom-quark mass $m_b$ can be non-negligible in a fixed-order calculation. Since bottom quarks are treated as massless in our analytic result, it is important to estimate the impact of this simplification. To this end, we performed another NLO numerical calculation of the $H q\bar{q}$ EEC, where the bottom quark was treated as massive with $m_b = 4.78\textrm{ GeV}$. The ratio of the $H q\bar{q}$ EEC results with massive and massless quarks is shown in the upper panel of figure \ref{fig:plotmass}. We observe that in the range of $|\cos\chi|<0.95$ the bottom-quark mass corrections reduce the distribution by about 4\% in the bulk region and can reach 10\% to 20\% in the back-to-back and collinear regions. The latter is not surprising, as it is well known that collinear radiation is suppressed due to the finite quark mass. Second, for a meaningful Higgs EEC prediction we must also include parton shower and hadronization corrections. To account for that, we match our NLO calculation with massive bottom quarks to parton shower using \textsc{POWHEG-BOX-V2}~\cite{Frixione:2007vw,Alioli:2010xd} and \textsc{PYTHIA} 8.2~\cite{Sjostrand:2014zea}. In the \textsc{PYTHIA} setup we use the Monash tune~\cite{Skands:2014pea} and, for the sake of simplicity, force all $B$ hadrons to be stable. Figure \ref{fig:plotmass} shows that the parton shower can substantially enhance the EEC distribution in the whole $\cos\chi$ range, with the corrections amounting to almost 40\% in the collinear region.
This observation hints that fixed-order NNLO QCD corrections to the $H q\bar{q}$ EEC could be potentially large. Furthermore, the hadronization corrections are equally significant and can reach more than 10\%. Finally, we estimate the perturbative uncertainty of the matched NLO predictions by varying the renormalization scale and the square of the parton shower scale independently by a factor of two around their nominal values, chosen as $m_H$ for the renormalization scale and $k^2_T$ for the square of the shower scale. We add the two scale variations in quadrature and plot the uncertainty band in the lower panel of figure \ref{fig:plotmass}. The total uncertainty from the scale variations lies between 5\% and 10\% in the plotted region. Given the existence of NNLO numerical calculations for massless bottom quarks~\cite{Mondini:2019vub}, we expect that this uncertainty can, in principle, be significantly reduced in the future. In addition, the lower panel of figure \ref{fig:plotmass} also contains the projected experimental uncertainties. In this case we incorporate only the statistical errors and assume a total of $4\times 10^5$ events, which corresponds to the number of $H \to b \bar{b}$ decays that CEPC~\cite{An:2018dwb} is expected to collect during its first 7-year data-taking period. We estimate the statistical errors by first generating 40 ensembles of events and then calculating the standard deviation of the EEC in each bin from the values predicted by all ensembles. This procedure is meant to account for the strong statistical correlations among different bins that typically arise when studying EEC-like observables: a single event generates multiple histogram entries, hence simultaneously contributing to many bins. In our case, the observed uncertainties amount to at most 0.5\% in every bin.
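The ensemble procedure described above can be sketched as follows. The event sample here is a toy stand-in (uniform angles, random weights) rather than the actual \textsc{POWHEG}+\textsc{PYTHIA} events; the point is only the bin-wise standard deviation over ensembles, which automatically propagates inter-bin correlations:

```python
import numpy as np

rng = np.random.default_rng(42)
n_ensembles, n_bins, n_events = 40, 50, 10_000
histograms = np.empty((n_ensembles, n_bins))

for i in range(n_ensembles):
    # toy stand-in for one ensemble: each "event" fills 3 correlated bins,
    # mimicking the multiple parton-pair entries per event in the EEC
    cos_chi = rng.uniform(-1.0, 1.0, size=(n_events, 3)).ravel()
    weights = rng.exponential(1.0, size=cos_chi.size)
    h, _ = np.histogram(cos_chi, bins=n_bins, range=(-1.0, 1.0), weights=weights)
    histograms[i] = h / h.sum()          # normalized EEC distribution

central = histograms.mean(axis=0)
stat_err = histograms.std(axis=0, ddof=1)   # bin-wise std over the 40 ensembles
print((stat_err / central).max())           # relative statistical error per bin
```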
Experimental systematic errors, which we choose to ignore here, can be attributed to the signal extraction from the SM background as well as event reconstruction and detector resolution. Although they are expected to be dominant over the statistical errors, a thorough estimation of these uncertainties is beyond the scope of the present paper. \section{Summary} \label{sec:summary} The Higgs EEC is a novel event shape variable that can be measured by reconstructing 4-vectors of the final state particles originating from hadronic Higgs decays. This observable opens interesting prospects for $\alpha_s$ determinations from Higgs precision measurements at future Higgs factories and is therefore of great relevance for experimentalists interested in exploring Higgs phenomenology at high-energy lepton colliders. In this work we employed methods pioneered in~\cite{Dixon:2018qgp} to calculate the Higgs EEC in the $H \to q \bar{q} + X$ channel at NLO in fixed-order perturbation theory. This result can be combined with the already available computation in the $H \to g g + X$ channel~\cite{Luo:2019nig} to obtain the full Higgs EEC in the limit of vanishing light quark masses. The analytical structure of the $H q \bar{q}$ EEC is very similar to that of the $H g g$ EEC and the standard EEC: all 3 results can be calculated using the same set of master integrals and written in terms of the same building block functions that involve classical polylogarithms up to weight 3. As far as the phenomenology of the $H q \bar{q}$ EEC is concerned, we employed numerical methods to study the importance of the effects missing in the analytic calculation: finite bottom-quark mass, parton shower and hadronization. On the one hand, the corrections due to the finite bottom-quark mass turn out to be numerically rather small, apart from the collinear and back-to-back regions.
On the other hand, parton shower and hadronization effects can lead to enhancements of tens of percent beyond the NLO fixed-order predictions. The remaining scale variations are at the level of 5\% to 10\%. At the same time, the projected statistical uncertainties on the measurements of the Higgs EEC at future Higgs factories are at the sub-percent level. Therefore, we conclude that improved perturbative calculations and a more accurate modeling of the hadronization are mandatory in order to match the future experimental precision. Theoretical investigations of EEC-like observables continue to expand our understanding of the mathematical underpinnings of perturbative QCD. The multitude of results made available in recent years corroborates that the study of the EEC has become a very active field of research within the phenomenology of the strong interactions at high energies. Even though every new calculation raises the bar a bit higher, there is obviously still a lot of work left to be done. At NLO one could consider other underlying processes that lead to hadronic decays or try to incorporate effects of massive quarks, while at NNLO we still lack the full fixed-order result even for the standard EEC. Taking a broader view, it would be very rewarding to search for techniques that could enable us to obtain NLO analytic results for event shape variables other than the EEC. Given the amount of progress in the field made in the last few years, we may very well expect to witness even more exciting findings in the years to come. \acknowledgments We are grateful to Ming-xing Luo and Hua Xing Zhu for collaboration at the early stage of this work and important comments on the manuscript. We thank Han-tian Zhang for useful discussions. The work of J.\,G. was sponsored by the National Natural Science Foundation of China under Grants No. 11875189 and No. 11835005. The work of V.\,S. and T.\,Z.\,Y.
was supported in part by the National Science Foundation of China (11135006, 11275168, 11422544, 11375151, 11535002) and the Zhejiang University Fundamental Research Funds for the Central Universities (2017QNA3007). V.\,S. also acknowledges the support from the DFG under grant 396021762 -- TRR 257 ``Particle Physics Phenomenology after the Higgs Discovery''. T.\,Z.\,Y. also acknowledges the support from the Swiss National Science Foundation (SNF) under contract 200020-175595.
\section{Introduction} \noindent Learning a dialogue policy is typically formulated as a reinforcement learning (RL) problem \cite{SuttonB98, YoungGTW13}. However, dialogue policy learning via RL from scratch in real-world dialogue scenarios is expensive and time-consuming, because it requires real users to interact with the agent while it adjusts its policy online \cite{MnihKSRVBGRFOPB15,SilverHMGSDSAPL16,DhingraLLGCAD17,SuGMRUVWY16,LiCLGC17}. A plausible strategy is to use user simulators as an inexpensive alternative to real users, which randomly sample a user goal from the user goal set for the dialogue agent training \cite{SchatzmannTWYY07,SuGMRUVWY16a,LiCLGC17,BudzianowskiUSM17,PengLLGCLW17,LiuL17,PengLGLCW18}. In task-oriented dialogue settings, the entire conversation revolves around the sampled user goal implicitly. Nevertheless, the dialogue agent's objective is to help the user accomplish this goal even though the agent knows nothing about the sampled user goal \cite{SchatzmannY09,LiLDLGC16}, as shown in Figure~\ref{fig:a}. \begin{figure*}[tbp] \centering \subcaptionbox{Policy learning with user simulators. \label{fig:a}}{\includegraphics[width=0.9\columnwidth]{7281.ZhaoY_figure1a}} \subcaptionbox{Policy learning with proposed ACL-DQN framework. \label{fig:b}}{\includegraphics[width=0.9\columnwidth]{7281.ZhaoY_figure1b}} \caption{Two strategies of user simulator sampling for learning task-oriented dialogue policies via RL.} \label{fig:1} \end{figure*} Such a random-sampling-based user simulator neglects the fact that the supervision of human learning is often accompanied by a curriculum \cite{RenDLC18}. For instance, when a human teacher teaches students, the order of the presented examples is not random but meaningful, and students can benefit from it \cite{BengioLCW09}.
Therefore, random-sampling-based user simulators bring two issues: \begin{itemize} \item \textit{efficiency} issue: since the ability of the dialogue agent does not match the difficulty of the sampled user goal, it takes a long time for the dialogue agent to learn the optimal strategy (or it fails to learn one). For example, in the early learning phase, it is possible that the random sampling method arranges for the dialogue agent to learn more complex user goals first, and only then simpler user goals. \item \textit{stability} issue: using random user goals to collect experience online is not stable enough, making the learned dialogue policy unstable and difficult to reproduce. Since RL is highly sensitive to the dynamics of the training process, dialogue agents trained with stable experience can guide themselves more effectively and stably than dialogue agents trained with unstable experience. \end{itemize} Most previous studies of dialogue policy have focused on the \textit{efficiency} issue, for example through reward shaping \cite{KulkarniNST16,LuZC19,ZhaoWYZHW20,CaoLCZ20}, companion learning \cite{ChenYCYZY17,ChenZCYY17} and incorporating planning \cite{GaoWPLL18,SuLGLC18,WuLLGY19,ZhaoWYZHW20,ZhangCSWD20}. However, \textit{stability} is a prerequisite for a method to work well in real-world scenarios: no matter how effective an algorithm is, an unstable policy learned online may be ineffective when applied in the real dialogue environment. This can lead to a bad user experience and thus fail to attract sufficient real users to continuously improve the policy. As far as we know, little work has been reported on the stability of dialogue policies. Therefore, it is essential to address the stability issue. In this paper, we propose a novel policy learning framework that combines curriculum learning and deep reinforcement learning, namely the Automatic Curriculum Learning-based Deep Q-Network (ACL-DQN).
As shown in Figure~\ref{fig:b}, this framework replaces the traditional random sampling method in the user simulator with a teacher policy model that arranges a meaningfully ordered curriculum and dynamically adjusts it to help the dialogue agent (also referred to as the student agent in this paper) perform automatic curriculum learning. As a scheduling controller for student agents, the teacher policy model arranges for students to learn different user goals in different learning stages without any requirement of prior knowledge. Sampling user goals that match the ability of the student agent, taking into account the different difficulty of each user goal, not only increases the feedback of the environment to the student agent but also makes the learning of the student agent more stable. There are two criteria for evaluating the sampling order of the user goals: the learning progress of the student agent and the over-repetition penalty. The learning progress of the student agent emphasizes the efficiency of each user goal, encouraging the teacher policy model to choose the user goals that match the ability of the student agent to maximize the learning efficiency of the student agent. The over-repetition penalty emphasizes sampling diversity, preventing the teacher policy model from \textit{cheating}\footnote[1]{The teacher policy model repeatedly selects user goals that the student agent has mastered to obtain positive rewards.}. The incorporation of the learning progress of the student agent and the over-repetition penalty reflects both sampling efficiency and sampling diversity, improving the efficiency as well as the stability of ACL-DQN. Additionally, the proposed ACL-DQN framework can be equipped with different curriculum schedules.
Hence, in order to verify the generalization ability of the proposed framework, we propose three curriculum schedule standards for the framework for experimentation: i) \textit{Curriculum schedule A}: there is no standard, only a single teacher model; ii) \textit{Curriculum schedule B}: user goals are sampled from easiness to hardness in proportion; iii) \textit{Curriculum schedule C}: ensure that the student agents have mastered simpler goals before learning more complex goals. Experiments have demonstrated that ACL-DQN significantly improves the dialogue policy through automatic curriculum learning and achieves better and more stable performance than DQN. Moreover, ACL-DQN equipped with the curriculum schedules can be further improved. Among the three curriculum schedules we provide, ACL-DQN under curriculum schedule C, with its strengths of supervision and controllability, can better follow the learning progress of students and performs best. In summary, our contributions are as follows: \begin{itemize} \item We propose the Automatic Curriculum Learning-based Deep Q-Network (ACL-DQN). As far as we know, this is the first work that applies curriculum learning ideas to help a dialogue policy perform automatic curriculum learning. \item We introduce a new user goal sampling method (i.e., the teacher policy model) that arranges a meaningfully ordered curriculum and automatically adjusts it by monitoring the learning progress of the student agent and the over-repetition penalty. \item We validate the superior performance of ACL-DQN by building dialogue agents for the movie-ticket booking task. The efficiency and stability of ACL-DQN are verified by simulation and human evaluations. Moreover, ACL-DQN can be further improved by equipping it with curriculum schedules, which demonstrates that the framework has strong generalizability. \end{itemize} \section{Proposed framework} \begin{figure} \centering \subcaptionbox{Curriculum schedule A.
\label{curriculum A}} {\includegraphics[width=0.9\columnwidth]{7281.ZhaoY_A}}\\[2ex] \subcaptionbox{Curriculum schedule B. \label{curriculum B}} {\includegraphics[width=0.9\columnwidth]{7281.ZhaoY_B}}\\[2ex] \subcaptionbox{Curriculum schedule C. \label{curriculum C}} {\includegraphics[width=0.9\columnwidth]{7281.ZhaoY_C}} \caption{Three curriculum schedules arranged and adjusted by the DQN-based teacher model according to the three standards, by monitoring the student's training progress and the over-repetition penalty (the feedback is shown in Figure~\ref{fig:a}).} \label{fig:curriculum} \end{figure} The proposed framework is illustrated in Figure~\ref{fig:a}; ACL-DQN agent training consists of four processes: (1) \textit{curriculum schedule}, which includes three strategies (Figure~\ref{fig:curriculum}) that are arranged by the teacher policy model based on the three standards we provide and automatically adjusted according to the learning progress of the student agent and the over-repetition penalty; (2) \textit{over-repetition penalty}, which punishes the \textit{cheating} behaviors of the teacher policy model to guarantee sampling diversity; (3) \textit{automatic curriculum learning}, where the student agent interacts with a user simulator revolving around the curriculum goal specified by the teacher policy model, collects experience, improves the student dialogue policy, and feeds its performance back to the teacher policy model for adjustment; and (4) \textit{teacher reinforcement learning}, where the teacher policy model is learned and refined through a separate teacher experience replay buffer. \subsection{Curriculum schedule} In this section, we introduce a DQN-based teacher model and three curriculum schedules, which are later used in processes (2), (3), and (4) mentioned above. \subsubsection{DQN-based teacher model} The goal of the teacher model is to help the student agent learn a series of user goals sequentially. 
We can formalize the teacher's goal as a Markov decision process (MDP), which is well suited to reinforcement learning: \begin{itemize} \item The state $s_t$ consists of five components: 1) the state provided by the environment; 2) the ID of the current user goal; 3) the ID of the last user goal; 4) a scalar representation of the student policy network's parameters under the current user goal; 5) a scalar representation of the student policy network's parameters under the last user goal. \item The action $a_t$ corresponds to the user goal $g_t$ chosen by the teacher policy model. \item The reward $r$ consists of two parts: one is the reward $r_t^{or}$ from the \textit{Over-repetition Discriminator}, and the other, $r_t^{c}$, is the change in the episode total reward acquired by the student for the user goal $g_t$, formulated as: \begin{eqnarray} r_t = r_t^{or} + r_t^{c} = r_t^{or} + x^{g_t}_t - x^{g_t}_{t'} \end{eqnarray} \end{itemize} \begin{algorithm} \caption{ ACL-DQN with Curriculum schedule A} \label{algA} \begin{algorithmic}[1] \State with probability $\epsilon$, the DQN-based teacher model selects a random action $g_i$ in the user goal set $G$; \State otherwise, the DQN-based teacher model selects $g_i = \mathop{\arg\max}_{g'}Q(s_t,g';\theta^T)$ in the user goal set $G$; \end{algorithmic} \end{algorithm} \noindent where $x^{g_t}_{t'}$ is the previous episode total reward obtained when the same user goal $g_t$ was trained on. In this article, we use the deep Q-network (DQN) \cite{MnihKSRVBGRFOPB15} to improve the teacher policy based on teacher experience. In each step, the teacher agent takes the state $s_t$ as input and chooses the action $g_t$ to execute. The sampled user goal $g_t$ is handed over to the \textit{Over-repetition Discriminator} to judge whether it is over-sampled. If not, it is passed to the user simulator as a goal for interacting with the student agent; otherwise, the discriminator gives the teacher agent a penalty. 
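As a minimal, illustrative sketch (not the authors' released code), the teacher reward above and the $\epsilon$-greedy goal selection can be written as follows; the function and variable names are hypothetical:

```python
import random

def teacher_reward(r_or, ep_return, prev_ep_return):
    # r_t = r_t^{or} + r_t^{c}, with r_t^{c} = x_t^{g_t} - x_{t'}^{g_t}:
    # the over-repetition penalty plus the change in the student's
    # episode total reward on the same user goal g_t.
    return r_or + (ep_return - prev_ep_return)

def select_goal(q_values, goal_set, epsilon):
    # epsilon-greedy over the current user-goal set G, as in Algorithm 1
    if random.random() < epsilon:
        return random.choice(goal_set)
    return max(goal_set, key=lambda g: q_values[g])
```

If the student improved on goal $g_t$ (positive $r_t^{c}$), the teacher is rewarded even under a mild over-repetition penalty, which is exactly the trade-off the two criteria encode.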
The more times a user goal has been selected, the greater the penalty given and the lower the probability of it being selected in the next step. During training, we use $\epsilon$-greedy exploration, which selects a random action with probability $\epsilon$ and otherwise follows the greedy policy $g_t = \mathop{\arg\max}_{g_t'}Q(s_t,g'_t;\theta^T)$. $Q(s_t,g_t; \theta^T)$ is the approximated value function, implemented as a Multi-Layer Perceptron (MLP) parameterized by $\theta^T$. When the dialogue terminates, the teacher agent receives the reward $r_t$ and updates the state to $s_{t+1}$. At each simulation epoch, we simulate $N$ ($N=1$)\footnote[1]{Considering the user cost in real dialogue scenarios, we set up only 1 simulation epoch for experience storage, $N=1$, to better reflect the performance of the proposed method on real dialogue tasks.} dialogues and store the experience $(s_t, g_t, r_t, s_{t+1})$ in the teacher experience replay buffer $D^T$ for teacher reinforcement learning. This cycle continues until num\_episodes is reached. \subsubsection{Curriculum schedule A} As shown in Figure~\ref{curriculum A}, in order to clearly evaluate the effect of a single DQN-based teacher model, we replace the traditional sampling method in user simulators with a single DQN-based teacher model that directly selects a user goal from the user goal set and dynamically adjusts its selection according to the learning progress of the student agent and the over-repetition penalty using $\epsilon$-greedy exploration (Algorithm~\ref{algA}). \subsubsection{Curriculum schedule B} In our curriculum schedule B, we make the learning process of the student agent similar to the education process of human students, in which students usually learn many easier curricula before they start to learn more complex ones \cite{RenDLC18}. 
Accordingly, we integrate user goal ranking into Curriculum schedule A, which allows student agents, under the guidance of Curriculum schedule B, to learn progressively from easiness to hardness in proportion (Figure~\ref{curriculum B}). We take the total number of inform\_slots and request\_slots $n$ in a user goal as a measure of its difficulty. According to this measure, user goals are divided into three groups from easiness to hardness: the simple user goal set $G_{simple}$, the medium user \begin{algorithm} \caption{ ACL-DQN with Curriculum schedule B} \label{algB} \begin{algorithmic}[1] \State Get the number of inform\_slots $n_i$ and the number of request\_slots $n_r$ of each user goal, $n = n_i + n_r$; \State Sort the user goal set $G$ based on $n$ and divide it into three groups: the simple user goal set $G_{simple}$ (30), the medium user goal set $G_{medium}$ (72), and the difficult user goal set $G_{difficult}$ (26); \State Initialize curriculum\_phase = 'simple' \If { $len(G_{curriculum\_phase})/ len(G) * epoch\_size$ episodes have been reached} \State curriculum\_phase = next\_difficult\_stage(); \Else \State curriculum\_phase = stay\_current\_stage(); \State with probability $\epsilon$, the DQN-based teacher model selects a random action $g_i$ in $G_{curriculum\_phase}$; \State otherwise, the DQN-based teacher model selects $g_i = \mathop{\arg\max}_{g'}Q(s_t,g';\theta^T)$ in $G_{curriculum\_phase}$; \EndIf \end{algorithmic} \end{algorithm} \noindent goal set $G_{medium}$, and the difficult user goal set $G_{difficult}$. In the learning process of the student agent, we sequentially set the three user goal sets (from easiness to hardness) as the action set of the teacher agent to guarantee that the student agent learns the user goals of each stage in an orderly manner (Algorithm~\ref{algB}). \subsubsection{Curriculum schedule C} Curriculum schedule B may, however, slow down student agent learning. 
The reason is that even if the student agent has quickly mastered the goals of the current difficulty, it still needs to continue learning the remainder of that difficulty. Accordingly, we design curriculum schedule C, which integrates ``mastery'' into curriculum schedule B, as shown in Figure~\ref{curriculum C}. Curriculum schedule C allows the student agent to directly enter the user goals of the next stage, without learning the remainder of the current difficulty, once it has mastered the goals of the current difficulty. The student agent is considered to have mastered the user goals of a difficulty if and only if the success rate of sampled user goals in the current difficulty exceeds the mastery threshold $\alpha$ ($\alpha=0.5$)\footnote[2]{As verified in the subsequent experiments, ACL-DQN performs best when the mastery threshold is 0.5.} within a continuous time $T$ ($T=5$). The success rate of the sampled user goals in the current difficulty is $p_{success} = n_{success} / N_{sampled}$, where $n_{success}$ is the number of user goals completed by the student agent in the current difficulty and $N_{sampled}$ is the number of user goals sampled at the current difficulty (Algorithm~\ref{algC}). \subsection{Over-repetition Penalty} Under the three curriculum schedules mentioned above, the teacher policy model may \textit{cheat} to obtain positive rewards, that is, repeatedly select user goals that the student agent has mastered. Moreover, the limited size of the replay memory makes such overtraining even worse \cite{de2015importance}. 
Therefore, if the student agent is only restricted to some user goals already mastered, it will cause student \begin{algorithm} \caption{ ACL-DQN with Curriculum schedule C} \label{algC} \begin{algorithmic}[1] \State Initialize curriculum\_phase = 'simple', a mastery threshold $\alpha$, and a list $L$ for storing the success rates of the sampled user goals in the current difficulty; \State $p_{success} = n_{success} / N_{sampled}$; \State $L.append(p_{success})$ \If {$episode \geq T$} \State $L.remove(0)$ \EndIf \For {i in len(L)} \If { $L[i]$ $\geq$ $\alpha$ } \State $n = n + 1$ \EndIf \EndFor \If {$n \geq T$} \State curriculum\_phase = next\_difficult\_stage(); \Else \State curriculum\_phase = stay\_current\_stage(); \EndIf \State with probability $\epsilon$, the DQN-based teacher model selects a random action $g_i$ in $G_{curriculum\_phase}$; \State otherwise, the DQN-based teacher model selects $g_i = \mathop{\arg\max}_{g'}Q(s_t,g';\theta^T)$ in $G_{curriculum\_phase}$; \end{algorithmic} \end{algorithm} \noindent agent learning to stagnate. For the sake of generalization of the proposed ACL-DQN method, we guarantee the diversity of sampled user goals by integrating an over-repetition penalty mechanism into the framework. Similar to the coverage mechanism in neural machine translation \cite{TuLLLL16}, we introduce an over-repetition vector $[og_1, og_2,...,og_n]$ in the teacher experience replay buffer $D^T$ to record the number of times each user goal has been sampled. In the beginning, we initialize it as a zero vector of dimension $1 \times n$, where $n$ is the number of user goals in the current user goal set. In each simulation training step, if a user goal $g_i$ is sampled, the corresponding over-repetition number $og_i$ is updated as $og_i = og_i + 1$. The more times a user goal has been selected, the greater the over-repetition penalty given by the over-repetition discriminator and the lower the probability of it being selected in the next step. 
Thus, an over-repetition penalty function $ORP(og)$ satisfies the following requirements: \begin{itemize} \item $ORP(og) \in [-L, 0]$. \item $ORP(og)$ is a monotonically decreasing function of $og$. \end{itemize} \noindent where $L$ ($L=40$) is the maximum length of a simulated dialogue. \subsection{Automatic Curriculum Learning} The goal of the student agent is to achieve a specific user goal through a sequence of actions with a user simulator, which can also be considered an MDP. In this stage, we use the DQN method to learn the student dialogue policy based on experiences stored in the student experience replay buffer $D^S$: \begin{itemize} \item The state $s_t$ consists of five components: 1) one-hot representations of the current user action and mentioned slots; 2) one-hot representations of the last system action and mentioned slots; 3) the belief distribution of possible values for each slot; 4) both a scalar and a one-hot representation of the current turn number; and 5) a scalar representation indicating the number of results that can be found in the database according to the current search constraints. \item The action $a_t$ corresponds to a pre-defined action set, such as request, inform, confirm\_question, confirm\_answer, etc. \item The reward $r$: once a dialogue ends successfully, the student agent receives a large bonus of $2L$; otherwise, it receives $-L$. In each turn, the student agent receives a fixed reward of $-1$ to encourage shorter dialogues. \end{itemize} At each step, the student agent observes the dialogue state $s$ and chooses an action $a$ using an $\epsilon$-greedy policy. The student agent then receives the reward $r$ and updates the state to $s'$. Finally, we store the experience tuple $(s, a, r, s')$ in the student experience replay buffer $D^S$. This cycle continues until the dialogue terminates. 
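The text specifies only the two requirements on $ORP(og)$, not a concrete formula. One simple choice satisfying both (an assumption of ours, using $L=40$ from the paper and an arbitrary decay rate $k$) is an exponential saturation toward $-L$:

```python
import math

L_MAX = 40  # maximum length of a simulated dialogue (L in the paper)

def orp(og, k=0.1):
    """Illustrative over-repetition penalty: maps og >= 0 into [-L, 0]
    and decreases monotonically in og, as the two requirements demand.
    The exponential form and the rate k are our own assumptions."""
    return -L_MAX * (1.0 - math.exp(-k * og))
```

A freshly introduced goal incurs no penalty (`orp(0)` is 0), and the penalty approaches $-L$ as the same goal keeps being re-sampled, discouraging the teacher from cheating.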
We improve the value function $Q(s, a;\theta^S)$ by adjusting $\theta^S$ to minimize the mean-squared loss function as follows: \begin{eqnarray} \begin{aligned} &\mathcal{L}(\theta^S) = \mathbb{E}_{(s,a,r,s')\sim D^S}[(y_i- Q(s, a;\theta^S))^2] \\ &y_i= r + \gamma \max _{a'}Q'(s',a';\theta^{S'}) \end{aligned} \end{eqnarray} \noindent where $\gamma \in [0,1]$ is a discount factor, and $Q'(\cdot)$ is the target value function that is only updated periodically. $Q(\cdot)$ can be optimized through $\nabla_{\theta^S} \mathcal{L}(\theta^S)$ by back-propagation and mini-batch gradient descent. \subsection{Teacher Reinforcement Learning} The teacher's value function $Q(\cdot)$ can be improved using experiences stored in the teacher experience replay buffer $D^T$. In the implementation, we optimize the parameters $\theta^T$ w.r.t. the mean-squared loss: \begin{eqnarray} \begin{aligned} &\mathcal{L}(\theta^T) = \mathbb{E}_{(s,g,r,s')\sim D^T}[(y_i- Q(s, g;\theta^T))^2] \\ &y_i= r_t^{or} + r_t^{c} + \gamma \max _{g'}Q'(s',g';\theta^{T'}) \end{aligned} \end{eqnarray} \noindent where $Q'(\cdot)$ is a copy of the previous version of $Q(\cdot)$ that is only updated periodically and $\gamma \in [0,1]$ is a discount factor. In each iteration, we improve $Q(\cdot)$ through $\nabla_{\theta^T} \mathcal{L}(\theta^T)$ by back-propagation and mini-batch gradient descent. \section{Experiments} Experiments have been conducted to evaluate the key hypothesis that ACL-DQN is able to improve the efficiency and stability of DQN-based dialogue policies, in two settings: simulation and human evaluation. \subsection{Dataset} Our ACL-DQN was evaluated on movie-ticket booking tasks in both simulation and human-in-the-loop settings. Raw conversational data for the movie-ticket booking task was collected via Amazon Mechanical Turk with annotations provided by domain experts. The annotated data consists of 11 dialogue acts and 29 slots. 
In total, the dataset contains 280 annotated dialogues, with an average length of approximately 11 turns. \subsection{Baselines} To verify the efficiency and stability of ACL-DQN, we developed different versions of task-oriented dialogue agents as baselines for comparison. \begin{itemize} \item The \textbf{DQN} agent takes the user goal randomly sampled by the user simulator for learning \cite{GaoWPLL18}. \item The proposed \textbf{ACL-DQN}($A$) agent takes the curriculum goal specified by the teacher model equipped with \textit{curriculum schedule A} for automatic curriculum learning (Algorithm~\ref{algA}). \item The proposed \textbf{ACL-DQN}($B$) agent takes the curriculum goal specified by the teacher model equipped with \textit{curriculum schedule B} for automatic curriculum learning (Algorithm~\ref{algB}). \item The proposed \textbf{ACL-DQN}($C$) agent takes the curriculum goal specified by the teacher model equipped with \textit{curriculum schedule C} for automatic curriculum learning (Algorithm~\ref{algC}). \end{itemize} \begin{table*}[thb] \centering \resizebox{1.88\columnwidth}{!}{ \centering \normalsize\begin{tabular}{lccccccccccccc} \toprule \multirow{2}*{\textbf{Agent}}& \multicolumn{3}{c}{Epoch = 100}& \multicolumn{3}{c}{Epoch = 200}& \multicolumn{3}{c}{Epoch = 300}& \multicolumn{3}{c}{Epoch = 400}\\ \cline{2-13} &Success&Reward&Turns&Success&Reward&Turns&Success&Reward&Turns&Success&Reward&Turns\\ \hline DQN&0.4012&-6.48&31.24&0.5242&10.36&27.08&0.6448&26.17&24.40&0.6598&28.73&22.88\\ ACL-DQN(A)&0.4309&-2.92&31.25&0.6159&22.99&23.84&0.7064&35.23&21.06&0.7419&40.19&19.66\\ ACL-DQN(B)&0.4202&-3.97&30.78&0.5678&16.29&25.69&0.6673&30.12&21.92&0.7073&35.81&20.11 \\ ACL-DQN(C)&\textbf{0.5717}&15.92&27.36&\textbf{0.7253}&37.39&21.30&\textbf{0.7573}&45.28&18.57 &\textbf{0.8055}&49.05&17.22 \\ \bottomrule \end{tabular} } \caption{Result of different agents at $epoch = \{ 100, 200, 300, 400 \}$. 
Each number is averaged over 5 runs, with each run tested on 50 dialogues. Success: evaluated at the same epoch (except one group: ACL-DQN(B) at epoch 100), ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C) all outperform DQN, where ACL-DQN(C) has the best performance and ACL-DQN(B) the worst among the three curriculum schedules. The best scores are labeled in bold.}\smallskip \label{tab:table1} \end{table*} \subsection{Implementation} For all the models, we use MLPs to parameterize the value networks $Q(\cdot)$ with one hidden layer of size 80 and $\tanh$ activation. $\epsilon$-greedy is always applied for exploration. We set the discount factor $\gamma = 0.9$. The buffer sizes of $D^T$ and $D^S$ are set to 2000 and 5000, respectively. The batch size is 16, and the learning rate is 0.001. We applied gradient clipping on all the model parameters with a maximum norm of 1 to prevent gradient explosion. The target network is updated at the beginning of each training episode. The maximum length of a simulated dialogue is 40 turns. A dialogue is counted as failed if it exceeds the maximum number of turns. To train the agents more efficiently, we utilized a variant of imitation learning called Replay Buffer Spiking (RBS) \cite{LiptonGLLAD16} at the beginning stage to build a naive but occasionally successful rule-based agent based on the human conversational dataset. We also pre-filled the real experience replay buffer $B^u$ with 100 dialogues before training for all the variants of agents. \subsection{Simulation Evaluation} \subsubsection{Main result} The main simulation results are depicted in Table~\ref{tab:table1}, Figure~\ref{fig:main_result}, and Figure~\ref{fig:boxplot}. The results show that all the ACL-DQN agents under the three curriculum schedules outperform the DQN baseline by a statistically significant margin. Among them, ACL-DQN(C) shows the best performance, and ACL-DQN(B) the worst. 
The main reason is that ignoring the mastering progress of the student agent and only letting it learn from easiness to hardness slows down its learning. As shown in Figure~\ref{fig:main_result}, ACL-DQN(B) does not show significant advantages until after epoch $320$, while ACL-DQN(C) consistently outperforms DQN by integrating the mastery module, which monitors the learning progress of the student agent and adjusts the curriculum in real time. Figure~\ref{fig:boxplot} is a boxplot of the success rates of DQN and ACL-DQN under the three curriculum schedules at epoch 500. It is clearly observed that ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C) are more stable than DQN: the average success rate of ACL-DQN(C) has stabilized above 0.8 while DQN still fluctuates substantially around 0.65. The results show that ACL-DQN under the guidance of the teacher policy model is more effective and stable, and the ACL-DQN(C) agent, with its stronger supervision and controllability, performs best and most stably. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{7281.ZhaoY_main} \caption{The learning curves of DQN, ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C).} \label{fig:main_result} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.9\columnwidth]{7281.ZhaoY_boxplot} \caption{The stability of DQN, ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C) in terms of average success rate at epoch 500.} \label{fig:boxplot} \end{figure} \begin{figure*} \centering \subcaptionbox{DQN. \label{heat_mapa}}{\includegraphics[width=0.65\columnwidth]{7281.ZhaoY_DQN}} \subcaptionbox{ACL-DQN(A) w/o -ORP. \label{heat_mapb}}{\includegraphics[width=0.65\columnwidth]{7281.ZhaoY_-ORP}} \subcaptionbox{ACL-DQN(A). \label{heat_mapc}}{\includegraphics[width=0.65\columnwidth]{7281.ZhaoY_ACL}} \caption{Heat maps of the number of times each user goal is selected under three different methods: (a) DQN, (b) ACL-DQN(A)/-ORP, (c) ACL-DQN(A). 
The depth of color in each image represents the number of times the goals have been selected.} \label{fig:heat_map} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{7281.ZhaoY_mastery} \caption{The \textit{mastery} in ACL-DQN(C): a mastery threshold $\alpha$ in $[0.5,0.6]$ performs best.} \label{fig:mastery} \end{figure} \subsubsection{Mastery threshold of ACL-DQN(C)} Choosing a new-difficulty user goal set is allowed in ACL-DQN(C) if and only if the success rate of sampled user goals at the same difficulty has exceeded the ``mastery'' threshold within a continuous time $T$ (details in Algorithm~\ref{algC}). Intuitively, if the threshold is too small, the student agent will enter the learning of harder goals before it has mastered the simpler goals. The student agent then easily collapses because it can rarely obtain positive training dialogues in time. If the threshold is too big, the student agent will continue to learn the remaining simple goals even after it has mastered them, slowing down its learning. Figure~\ref{fig:mastery} depicts the influence of different thresholds. As expected, when the threshold is too high or too small, it is difficult for the student agent to learn a good strategy, and its learning speed is not as good as with a threshold in the range $[0.5,0.6]$. This result can serve as a reference for ACL-DQN(C) practitioners. \subsubsection{Ablation Test} To further examine the efficiency of the over-repetition penalty module, we conduct an ablation test by removing this module, referred to as ACL-DQN/-ORP. In order to observe the influence of the over-repetition penalty module more clearly, we take ACL-DQN(A) as an example to compare with ACL-DQN(A)/-ORP and DQN with traditional random sampling. We choose five user goals from each of $G_{simple}$, $G_{medium}$, and $G_{difficult}$ and divide them into three groups according to their difficulty. 
The heat maps of the three different methods (DQN, ACL-DQN(A)/-ORP, and ACL-DQN(A)) are displayed in Figure~\ref{fig:heat_map}, where the color of each grid cell reflects the number of times the corresponding user goal was selected. The darker the color, the more times the user goal has been selected. It is clear that the sampling counts in Figure~\ref{heat_mapa} are almost the same, but a serious imbalance appears in Figure~\ref{heat_mapb}, which does harm to the diversity of the sampled user goals. \subsection{Human Evaluation} We recruited real users to evaluate the different systems by interacting with them, with the identity of each agent hidden from the users. At the beginning of each dialogue session, the user randomly picked one of the agents to converse with, using a random user goal. The user could terminate the dialogue at any time upon deeming that the dialogue was too protracted and that it was almost impossible to achieve the goal. Such dialogue sessions are considered failed. To assess the stability of the different systems, each system was given a score (1-10) per session, and the process was repeated 20 times; the greater the variance, the more unstable the system. Four agents (DQN, ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C)) trained as previously described (Figure~\ref{fig:main_result}) at epoch 200 \textsuperscript{\rm 3}\footnotetext[3]{ Epoch 200 is picked since we are testing the efficiency of methods using a small number of real experiences.} were selected for human evaluation. As illustrated in Figure~\ref{fig:human}, the results of the human evaluation confirm what we observed in the simulation evaluations. We find that DQN is abandoned more often due to its unstable performance and the many turns it takes to reach a promising result on more complex tasks (Figure~\ref{fig:main_result}); ACL-DQN(B) remains not good enough since it cannot adapt to harder goals quickly; and ACL-DQN(C) outperforms all the other agents. 
Regarding the stability of the different systems, the experimental results show that the variances of the three ACL-DQN methods are all smaller than that of the baseline, which means our methods are more stable, and ACL-DQN combined with curriculum schedule C is the most stable one. \begin{figure}[tbp] \centering \includegraphics[width=0.9\columnwidth]{7281.ZhaoY_human} \caption{Human evaluation results of DQN, ACL-DQN(A), ACL-DQN(B), and ACL-DQN(C), with the number of test dialogues indicated on each bar.} \label{fig:human} \end{figure} \section{Conclusion} In this paper, we propose a novel framework, the Automatic Curriculum Learning-based Deep Q-Network (ACL-DQN), which innovatively integrates curriculum learning and deep reinforcement learning in dialogue policy learning. We design a teacher model that automatically arranges and adjusts the sampling order of user goals, without any requirement of prior knowledge, to replace the traditional random sampling method in user simulators. Sampling user goals that match the ability of the student agent, given the difficulty of each user goal, maximizes and stabilizes the student agent's learning progress. Using the learning progress of the student agent and the over-repetition penalty as the criteria for the sampling order of user goals guarantees both sampling efficiency and diversity. The experimental results demonstrate the efficiency and stability of the proposed ACL-DQN. Besides, the proposed method has strong generalizability, since it can be further improved by being equipped with curriculum schedules. In the future, we plan to explore the factors in the curriculum schedules that have a pivotal impact on dialogue policy learning, and to evaluate the efficiency and stability of our approach by adopting different types of curriculum schedules. 
\section{Acknowledgments} We thank the anonymous reviewers for their insightful feedback on this work, and we thank the volunteers from South China University of Technology for helping us with the human experiments. This work was supported by the Key-Area Research and Development Program of Guangdong Province, China (Grant No. 2019B0101540042) and the Natural Science Foundation of Guangdong Province, China (Grant No. 2019A1515011792).
\section{Introduction} One of the most important logical properties of discrete systems is \emph{liveness}. A system is \emph{live} for a given initial state (marking) if it will never reach a (partially) blocking state. In the Petri net (PN) literature, liveness analysis has been extensively studied and many results exist. Furthermore, \emph{structural liveness} (a PN is structurally live if there exists an initial marking that makes the net system live; see for example \cite{ARMura89,ArSiTeCo98,ezpeleta,BOIoAn06}) can be studied by the so-called rank theorem \cite{SilvaDSSP,ArSiTeCo98}. However, if the system is not live or not structurally live, constructing a controller in order to force these properties is a very challenging problem. For some particular classes of Petri nets, many results exist, for example for Resource Allocation Systems (RAS) \cite{ezpeleta,park2001deadlock,Colom:2003,li2004elementary,cano2012,7870672}. RAS are modular PNs composed of different processes \emph{competing} for shared resources. In this paper, we consider a different class of systems than the one considered in RAS. In particular, this paper considers modular systems also composed of different processes, called \emph{agents}, with the difference that here they \emph{cooperate} in a distributed way. This cooperation is realized through a set of \emph{buffers} to/from which the agents consume/produce (partial) products. This new class is defined in Def. \ref{def:ssp} and is called \emph{Synchronized Sequential Processes} (SSP). In order to allow distribution, there exists one important constraint on the assignment of buffers, namely that \emph{tokens from a given buffer can only be consumed by a particular agent}, i.e., buffers are destination private but not output private. Since distributed systems are considered in this paper, agents are in general geographically distributed. 
Therefore, it is quite natural to assume that any input buffer is in the same location as the corresponding agent and that only that agent can consume intermediate products from the buffer. If the system models a healthcare system, buffers could model information channels containing information to be used only by one clinical pathway \cite{ARBeMaAl19}, for example because of privacy. One advantage of keeping the system distributed is that it allows more computationally attractive analysis approaches, since some properties of the system can be studied from the local perspectives due to its modular structure. SSP are derived from the well-known class of Deterministically Synchronized Sequential Processes (DSSP) \cite{SilvaDSSP}. However, there are some differences, which are explained after Def. \ref{def:ssp}. One of the two main differences is that in SSP, buffers can constrain the internal choices, a feature that is not allowed in DSSP. However, if a DSSP is not live, forcing liveness requires controlling the conflicts, and after adding the corresponding controllers the net system will no longer be a DSSP. As is well known, structurally live and structurally bounded PNs must be consistent and conservative, two structural properties that are assumed throughout this work (they can be checked in polynomial time using linear programming \cite{ArSiTeCo98}). The most important aspect of the liveness enforcement controller proposed in this paper is that it keeps the distributed property of the system, hence keeping the buffers (including the new \emph{control} buffers) destination private. For this reason, it is not possible to use the results from RAS, a topic discussed in Sec. \ref{sec:dssp}. 
\begin{figure}[h] \begin{center} \centering \psfrag{b5}{$b_{5}$}\psfrag{b6}{$b_{6}$} \psfrag{b7}{$b_{7}$}\psfrag{b8}{$b_{8}$} \psfrag{b1}{$b_{1}$}\psfrag{b2}{$b_{2}$} \psfrag{b3}{$b_{3}$}\psfrag{b4}{$b_{4}$} \psfrag{Car A}{$Car A$}\psfrag{Car B}{$Car B$} \includegraphics[width=.8\columnwidth]{PracExam.eps} \caption{\small Motivation example: a distributed production system composed of three work stations and eight buffers} \label{fig:PracExam} \end{center} \end{figure} As a motivating example, let us consider the production system for manufacturing cars represented in Fig. \ref{fig:PracExam}. Two different types of cars can be manufactured (model A or model B). The system is composed of three work stations, denoted WS1, WS2, and WS3. WS1 consumes raw material from input buffers $b_1$ (model A) or $b_2$ (model B) and produces intermediate products (engines) to buffers $b_5$ (model A) or $b_6$ (model B). Similarly, WS2 consumes raw material from buffers $b_3$ (model A) or $b_4$ (model B) and produces intermediate products (windshields) to buffers $b_7$ (model A) or $b_8$ (model B). Finally, WS3 manufactures cars of type A or type B by creating the corresponding bodywork and assembling the intermediate products. An engine from $b_5$ and a windshield from $b_7$ are used to obtain cars of type A, while an engine from $b_6$ and a windshield from $b_8$ are used to produce cars of type B. The system can be distributed into three different modules (shown in Fig. \ref{fig:PracExam}), one for each agent. If these three agents are geographically distributed, then the input buffers of each agent should be private and located at the same place as the agent. The intermediate products in the input buffers can be assigned to the required activity exactly at the moment when they are needed, but they can also be \emph{pre-assigned}. 
In the case of car production, the bodywork is first created by agent 3, then an engine from $b_5$ or $b_6$ is assembled in a second step and, finally, a windshield from $b_7$ or $b_8$ is added in a third step. Notice that the engine is assigned to the process in the second step and the windshield in the third step. However, one can \emph{reserve} (or pre-assign) them when the process starts (when the production of the bodywork begins); in this way, once the production of a car has started, it is guaranteed to finish. Using this pre-assignment approach, a method ensuring that blocking situations (including partial blocking) cannot appear has been proposed in \cite{clavelCDC}. Section \ref{sec:dssp} discusses this approach and also the one based on controlling bad siphons. The control policy proposed in this paper is based on two main ideas: (i) a local T-semiflow can start firing only if all its input buffers have enough tokens to fire all its transitions, and (ii) when the firing of a local T-semiflow starts, all its transitions should be fired before any other local T-semiflow starts firing, i.e., a production task is not locally interrupted. For the implementation of this control policy, a control PN acting as a scheduler of the original SSP is defined. Both systems, the original non-live SSP and the control PN, evolve synchronously, and the control PN prohibits the firing of transitions that may lead the SSP to a livelock. In order to enforce liveness of the SSP, it is necessary to ensure that the control PN is live. An algorithm that enforces liveness of the control PN by adding new constraint places is proposed. These new places can be seen as new \emph{virtual} buffers with only one output transition. Consequently, the distributed nature of the SSP system is preserved. The paper is organized as follows. Sec.~\ref{sec:preliminaries} presents basic concepts and notations. 
Sec.~\ref{sec:dssp} points out the problems of the techniques based on controlling bad siphons and of the one used in~\cite{clavelCDC}, and provides some intuition behind the method proposed in this paper. In Sec.~\ref{sec:alg} an algorithm to build the control PN from an SSP structure is given. In Sec.~\ref{sec:live} a methodology to ensure liveness of the control PN is described, while in Sec.~\ref{sec:contr} rules to \emph{guide} the SSP evolution through the control net system are given. Finally, in Sec.~\ref{sec:con}, some conclusions and future work are considered. \section{Preliminaries}\label{sec:preliminaries} The reader is assumed to be familiar with Petri nets (see~\cite{ARMura89,ICSilv93b} for a gentle introduction). The aim of this section is to fix the notation and to recall the required material. \emph{Nets and Net Systems.} We denote a Petri Net (PN) as $\N=\langle P, T, \b{Pre}, \b{Post} \rangle$, where $P$ and $T$ are two non-empty and disjoint sets of \emph{places} and \emph{transitions}, and $\b{Pre}, \b{Post} \in \mathbb{N}^{|P| \times|T|}$ are the pre and post \emph{incidence matrices}. For instance, $\b{Post}[p,t]=w$ means that there is an \emph{arc} from $t$ to $p$ with \emph{weight} (or multiplicity) $w$. When all arc weights are one, the net is \emph{ordinary}. For pre- and postsets we use the conventional dot notation, e.g., $\preset t = \{ p \in P \mid \b{Pre}[p,t] \neq 0\}$. If $\N'$ is the subnet of $\N$ defined by $P' \subset P$ and $T' \subset T$, then $\b{Pre'} =\b{Pre}[P',T']$ and $\b{Post'} =\b{Post} [P', T']$. A \emph{marking} is a $|P|$-sized, natural-valued vector. A Petri Net system is a pair $\Sigma = \langle \N, \b{m}_0 \rangle$, where $ \b{m}_0$ is the \emph{initial marking}. A transition $t$ is \emph{enabled} at a given marking $\b{m}$ if $\b{m} \geq \b{Pre}[P, t]$; its firing yields a new marking $\b{m}' = \b{m} + \b{C}[P, t]$, where $\b{C}= \b{Post}-\b{Pre}$ is the token-flow matrix of the net. 
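As a quick illustration of the enabling and firing rules just stated, the following sketch encodes them on a hypothetical two-place cycle net (not one of the paper's examples; all names are ours):

```python
# Enabling and firing on a hypothetical two-place cycle net:
# t1 moves the token p1 -> p2, t2 moves it back.
# Matrices are lists of rows (places x transitions).
Pre  = [[1, 0],
        [0, 1]]
Post = [[0, 1],
        [1, 0]]
C = [[Post[p][t] - Pre[p][t] for t in range(2)] for p in range(2)]

def enabled(m, t):
    """t is enabled at m iff m >= Pre[:, t] componentwise."""
    return all(m[p] >= Pre[p][t] for p in range(len(m)))

def fire(m, t):
    """Firing an enabled t yields m' = m + C[:, t]."""
    assert enabled(m, t)
    return [m[p] + C[p][t] for p in range(len(m))]

m0 = [1, 0]          # one token in p1
m1 = fire(m0, 0)     # fire t1: the token moves to p2
```

After firing $t_1$ the token sits in $p_2$, so only $t_2$ is enabled, exactly as the rule $\b{m} \geq \b{Pre}[P,t]$ prescribes.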
This fact is denoted by $\b{m} \xrightarrow{t} \b{m}'$. An \emph{occurrence sequence} from $\b{m}$ is a sequence of transitions $\sigma= t_1, \ldots, t_k, \ldots$ such that $\b{m} \xrightarrow{t_1} \b{m}_1 \ldots \b{m}_{k-1} \xrightarrow{t_k}\ldots$. The set of all reachable markings, or \emph{reachability set}, from $\b{m}$, is denoted as $RS(\N, \b{m})$. \ignore{The reachability relation is conventionally represented by a \emph{reachability graph} $RG(\N, \b{m})$ where the nodes are the reachable markings and there is an arc labeled $t$ from node $\b{m}'$ to $\b{m}''$ if $\b{m}' \xrightarrow{t} \b{m}''$.} \emph{State (transition) equation.} Denote by $\b \sigma \in \mathbb{N}^{|T|}$ the \emph{firing count vector} of $\sigma$, where $\b \sigma[t_i]$ is the number of times that $t_i$ appears in $\sigma$. Given $\sigma$ such that $\b{m} \xrightarrow{\sigma} \b{m}'$, then $$\b{m}' = \b{m} + \b{C} \cdot \b \sigma.$$ This is known as the \emph{state (transition) equation} of $\Sigma$. \ignore{Nevertheless, not necessarily a vector that satisfies the state equation is an actually reachable marking, because the state equation does not check fireability of a sequence with firing count vector $\b{\sigma}$. Such markings vectors are called \emph{spurious markings} ~\cite{ArSiTeCo98}.} \emph{Place Marking Bounds and Structural Boundedness.} The marking bound of a place $p$ in a net system $\Sigma$ is defined as $\b b[p]=\max\{\b m[p]\ |\ \b m \in RS(\Sigma)\}$. When this bound is finite, the place is said to be bounded. A net is structurally bounded if every place is bounded for every initial marking. \emph{Liveness, Structural Liveness and Deadlocks.} Liveness is a property related to the potential fireability of transitions in all reachable markings. A transition is live if it is potentially fireable in all reachable markings, i.e., if it never loses the possibility of firing (of performing an activity). 
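The state equation is mechanical to evaluate; a minimal sketch on a hypothetical two-place cycle net (bearing in mind that, conversely, not every solution of the state equation is a reachable marking):

```python
# State equation m' = m0 + C·sigma on a hypothetical two-place cycle net
# (t1: p1 -> p2, t2: p2 -> p1), cross-checked against explicit firing.
C = [[-1,  1],
     [ 1, -1]]            # token-flow matrix, places x transitions

def state_equation(m0, sigma):
    """sigma is the firing count vector (one entry per transition)."""
    return [m0[p] + sum(C[p][t] * sigma[t] for t in range(len(sigma)))
            for p in range(len(m0))]

m0 = [1, 0]
# sigma = t1 t2 t1  ->  firing count vector (2, 1)
m = state_equation(m0, [2, 1])
```

Firing $t_1 t_2 t_1$ explicitly from $[1,0]$ gives $[0,1]$, $[1,0]$, $[0,1]$, matching the algebraic result.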
A transition $t$ is potentially fireable at $\b{m}$ if there exists a firing sequence $\sigma$ leading to a marking $\b{m}'$ in which $t$ is enabled, i.e., $\b{m}[\sigma \rangle \b{m}'\geq \b{Pre}[P,t]$. A net system is live if all its transitions are live. A net is structurally live if there exists at least one live initial marking. Non-liveness for arbitrary initial markings reflects a pathology of the net structure: structural non-liveness. In a deadlock marking all transitions are dead, so none of them can be fired. A net system is said to be deadlock-free if at least one transition can be fired from any reachable marking. Liveness is a stronger condition than deadlock-freeness. \emph{Siphons.} In ordinary PNs, a siphon is a subset of places such that the set of its input transitions is contained in the set of its output transitions: $S \subseteq P$ is a siphon if $\preset {S} \subseteq \postset {S}$. A siphon is minimal if no proper subset of it is a siphon. A \emph{bad siphon} is a siphon not containing any trap (a set of places that, in an ordinary PN, remains marked under every possible evolution if initially marked). \ignore{ The following two properties are satisfied in \emph{ordinary} PNs: \begin {enumerate} \item If $\b{m}$ is a behavioral deadlock (i.e., dead-marking), in an ordinary net then $S = \{p\ |\ \b{m}[p]=0\}$ is an unmarked (empty) siphon. \item If a siphon is (or becomes) unmarked, it will remain unmarked for any possible evolution. Therefore all its input and output transitions are dead. So the system is not live (but can be deadlock-free). \end {enumerate} } \emph{T-semiflows, P-semiflows.} T-semiflows are nonnegative right annullers of $\b C$: a vector $\b x \gneq 0$ is a T-semiflow if $\b C \cdot \b x =0$. We denote by $||\b x||$ the support of the vector $\b x$, i.e., the set of its non-null elements: $||\b x|| = \{i \mid \b x[i] \neq 0\}$. 
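Both the siphon condition and the semiflow equations (the P-semiflow equation is defined next) are purely structural and easy to test; a sketch on a hypothetical two-place cycle net, with helper names of our own:

```python
# Structural checks on a hypothetical two-place cycle net
# (t1: p1 -> p2, t2: p2 -> p1).
Pre  = [[1, 0],
        [0, 1]]
Post = [[0, 1],
        [1, 0]]
C = [[Post[p][t] - Pre[p][t] for t in range(2)] for p in range(2)]

def is_siphon(S):
    """S is a siphon iff preset(S) is contained in postset(S)."""
    preset_S  = {t for t in range(2) if any(Post[p][t] > 0 for p in S)}
    postset_S = {t for t in range(2) if any(Pre[p][t] > 0 for p in S)}
    return preset_S <= postset_S

def is_T_semiflow(x):
    """x >= 0, x != 0 and C·x = 0."""
    return (all(v >= 0 for v in x) and any(v > 0 for v in x) and
            all(sum(C[p][t] * x[t] for t in range(2)) == 0
                for p in range(2)))

def is_P_semiflow(y):
    """y >= 0, y != 0 and y·C = 0."""
    return (all(v >= 0 for v in y) and any(v > 0 for v in y) and
            all(sum(y[p] * C[p][t] for p in range(2)) == 0
                for t in range(2)))

# {p1, p2} is a siphon; y = (1,1) witnesses conservativeness and
# x = (1,1) witnesses consistency of this toy net.
```

Note that $\{p_1\}$ alone is not a siphon here: its only input transition $t_2$ does not consume from it.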
A T-semiflow $\b x$ is said to be \emph{minimal} when no T-semiflow $\b{x}'$ exists such that $||\b {x}'|| \subset ||\b x||$. P-semiflows are the nonnegative left annullers of $\b C$: a vector $\b y \gneq 0$ is a P-semiflow if $\b y \cdot \b C =0$. The existence of P- or T-semiflows yields interesting information about the possible behaviors. If a P-semiflow $\b y > \b 0$ exists, $\N$ is \emph{conservative}. Conservativeness ensures structural boundedness. Moreover, $\N$ is \emph{consistent} if a T-semiflow $\b x > \b 0$ exists. A system that is live and bounded must be consistent, because a marking-repetitive sequence containing all the transitions corresponds to a positive T-semiflow. \ignore{ \emph{Conflicts and Structural Conflicts.} A \emph{conflict} is the situation when not all enabled transitions can occur at once. Formally, $t, t^\prime \in T$ are in conflict relation at marking $\b{m}$ if there exist $k, k^\prime \in \mathbb{N}$ such that $\b{m} \geq k \cdot \b{Pre}[P,t]$ and $\b{m} \geq k^\prime \cdot \b{Pre}[P,t^\prime]$, but $\b{m} \ngeq k \cdot \b{Pre}[P,t] + k^\prime \cdot \b{Pre}[P,t^\prime]$. To fulfill the above condition it is necessary that $\preset t \cap \preset t^\prime \neq \emptyset$. When $\b{Pre}[P,t]= \b{Pre}[P,t^\prime] \neq 0$, t and $t^\prime$ are in \emph{equal conflict (EQ) relation}. This means that they are both enabled whenever one is. By defining that a transition is always in EQ with itself, this is an \emph{equivalence} relation on the set of transitions and each equivalence class is an \emph{equal conflict set} denoted, for a given $t$, $EQS(t)$. SEQS is the set of all the equal conflict sets of a given net.} \emph{Implicit Places.} In general, places impose constraints on the firing of their output transitions. When a place never does so in isolation, it can be removed without affecting the behaviour of the rest of the system. Such places are called implicit. 
Formally, let $\Sigma = \langle P \cup \{p\}, T, \b{Pre}, \b{Post}, \b{m}_0 \rangle$ be a PN system; the place $p$ is implicit if, for every reachable marking $\b m$, $\b m \geq \b {Pre}[P,t] \Rightarrow \b m[p] \geq \b {Pre}[p,t]$ for all $t \in \postset p$. \begin{figure*}[h] \begin{center} \centering \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$} \psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$} \psfrag{t9}{$t_9$}\psfrag{t10}{$t_{10}$}\psfrag{t11}{$t_{11}$} \psfrag{t12}{$t_{12}$}\psfrag{t13}{$t_{13}$}\psfrag{t14}{$t_{14}$} \psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$}\psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$} \psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$}\psfrag{p7}{$p_7$} \psfrag{p8}{$p_8$}\psfrag{p9}{$p_9$} \psfrag{p10}{$p_{10}$}\psfrag{p11}{$p_{11}$} \psfrag{b1}{$b_{1}$}\psfrag{b2}{$b_{2}$} \psfrag{b3}{$b_{3}$}\psfrag{b4}{$b_{4}$} \psfrag{b5}{$b_{5}$}\psfrag{b6}{$b_{6}$} \psfrag{b7}{$b_{7}$}\psfrag{b8}{$b_{8}$} \psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$}\psfrag{N3}{$\N_3$}\psfrag{pm}{$p_m$} \centering \subfigure[]{\includegraphics[width=0.85\columnwidth]{DSSPmot1new.eps}\label{fig:DSSPmot}}\hspace{0.01\textwidth} \centering \subfigure[]{\includegraphics[width=0.89\columnwidth]{DSSPmotMon1new.eps}\label{fig:DSSPmotMon}} \caption{\small Motivation example: (a) The non-live SSP net modeling the system in Fig. \ref{fig:PracExam} with the monitor place $p_m$ that prevents the emptiness of the bad siphon $\{p_1,p_2,p_7,p_9,p_{10},p_{11},b_2,b_5\}$; (b) Live net obtained by applying the pre-assignment method in \cite{clavelCDC} to the SSP in Fig.~\ref{fig:first}.a.} \label{fig:first} \end{center} \end{figure*} \emph{PN classes}. If each transition in an ordinary PN has exactly one input and one output place, the net is called a \emph{State Machine}. Any 1-marked strongly connected State Machine is live, and each of its places holds at most one token at every reachable marking. A PN is a \emph{Choice-Free} (CF) net if every place has at most one output transition. 
Finally, a PN is a \emph{Join-Free} (JF) net if every transition has at most one input place. \begin{definition}\label{def:ssp} Let $\mathcal{N}^s = \langle P,T,\b{Pre},\b{Post} \rangle$ be a PN. The system $\langle \mathcal{N}^s,\b{m}_0\rangle$ is a \emph{Synchronized Sequential Processes} (SSP) system if it can be decomposed into $n$ modules (also called agents) $\N_i^s = \langle P_i, T_i, \b{Pre}[P_i,T_i], \b{Post}[P_i,T_i] \rangle$ and a set of \emph{buffers} $B$ such that: \begin{enumerate} \item $P = P_1 \cup P_2 \cup \ldots \cup P_n \cup B$ and $P_i \cap P_j = \emptyset$ for all $i \neq j$ and $P_k \cap B = \emptyset$ for all $k=1,2, \ldots, n$. \item $T = T_1 \cup T_2 \cup \ldots \cup T_n$ with $T_i \cap T_j = \emptyset$ for all $i \neq j$. \item All $\N_i^s$ are strongly connected state machines. \item All buffers in $B$ are destination private, i.e., for all buffers $b \in B$, if $\postset{b} \cap T_i \neq \emptyset$ then $\postset{b} \cap T_x = \emptyset$ for all $x \in \{1,2, \ldots, n\} \setminus\{i\}$. \item Each agent $i$ has only one marked place $p_i^e$, called \emph{waiting place}, such that all its cycles that have input buffers contain this place. \item $\langle \N^s, \b{m}_0 \rangle$ is conservative and consistent. \end{enumerate} \end{definition} Conditions 1) and 2) in Def. \ref{def:ssp} ensure that the sets of places and transitions are partitioned into disjoint sets (agents with their input buffers); condition 3) states that each agent is a strongly connected state machine (thus locally consistent); condition 4) imposes that a buffer can only have output transitions in one agent (the destination agent). Condition 5) considers only local cycles that have input buffers; remark that if a cycle without input buffers had only output buffers, the net could not be conservative. Therefore, it is not possible to have cycles with only output buffers. 
Furthermore, local cycles without input and output buffers can be reduced to a single transition modeling local behaviour. Checking the conditions in Def. \ref{def:ssp} can be done in polynomial time. Notice that the class of systems of Def. \ref{def:ssp} is inspired by DSSP \cite{SilvaDSSP}. On one side, Def. \ref{def:ssp} relaxes the DSSP assumption that buffers do not condition the choices in an agent. Another difference between Def. \ref{def:ssp} and DSSP is the constraint representing the existence of a waiting place (condition 5)). Finally, condition 6) considers conservative and consistent SSP, since these are necessary conditions for structural liveness in structurally bounded PNs. \begin{example}\label{example1} The PN in Fig.~\ref{fig:DSSPmot} (without place $p_m$ and its input and output arcs) is an SSP (also a DSSP) modeling the production system represented schematically in Fig. \ref{fig:PracExam}. Agent 1, modeled by $\N_1$, creates the engines ($t_1 \rightarrow p_2 \rightarrow t_2$ for car model A and $t_3 \rightarrow p_3 \rightarrow t_4$ for car model B). Agent 2, modeled by $\N_2$, produces the windshields ($t_5 \rightarrow p_5 \rightarrow t_6$ for car model A and $t_7 \rightarrow p_6 \rightarrow t_8$ for car model B). For simplicity, we assume that there exists raw material to produce only one intermediate product of each type, and a new one can be produced once it is consumed by agent 3. Agent 3 produces the cars of type A (firing $t_9$) or type B (firing $t_{12}$). Notice that the type of the car is not chosen by resource availability but by an outside decision (for example, by a client). For this reason, in DSSP the buffers never constrain the internal choices. However, this restriction is removed in SSP, since violating it is necessary for structural liveness enforcement. 
Agent 3 first produces the bodywork ($t_9$ for type A and $t_{12}$ for type B), then assembles the engine ($t_{10}$ or $t_{13}$ depending on the model) and finally the windshield ($t_{11}$ or $t_{14}$). Each agent has two local T-semiflows. In particular, the T-semiflows of $\N_1$ are $\b{x}_{1}=t_1+t_2$ and $\b{x}_{2}=t_3+t_4$; the ones of $\N_2$ are $\b{x}_{3}=t_5+t_6$ and $\b{x}_{4}=t_7+t_8$; while the T-semiflows of $\N_3$ are $\b{x}_{5}=t_9+t_{10}+t_{11}$ and $\b{x}_{6}=t_{12}+t_{13}+t_{14}$. Notice that, for the sake of brevity, the multi-set notation for vectors is used. There exist four pairs of buffers, $(b_{1}, b_ {5})$, $(b_{2}, b_{6})$, $(b_{3}, b_{7})$ and $(b_{4}, b_{8})$, in consumption--production relation. Finally, the waiting places are $p_1$, $p_4$ and $p_7$. \ignore{The set of all equal conflict sets is: $SEQS = \{ \{t_1, t_3\},$ $\{t_2\},$ $\{t_4\},$ $\{t_5, t_7\},$ $\{ t_6\},$ $\{t_8\},$ $\{t_9, t_{12}\},$ $\{t_{10}\},$ $\{t_{11}\},$ $\{t_{13}\},$ $\{t_{14}\} \}$. $\N^s$ does not fulfill the rank theorem: $ \mid SEQS\mid - 1 = 10 \neq rank (\b{C}) = 11$.} This net is structurally non-live: for the marking in Fig.~\ref{fig:DSSPmot}, by firing the sequence $t_3t_4t_5t_6t_3t_5t_9$, the SSP reaches the livelock (in this case deadlock) marking $\b{m}'=p_3+p_5+p_8+b_{1}+b_{4}+b_{6}+b_{7}$. 
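A dead marking of this kind can be detected by explicit firing. The sketch below replays the idea on a much smaller hypothetical net (not the net of Fig.~\ref{fig:DSSPmot}, whose full incidence matrices are given only graphically):

```python
# Dead-marking detection by explicit firing on a hypothetical toy net:
# t1 consumes p1 and p2, t2 consumes p2 only; neither produces tokens,
# so firing t2 leaves a marking where no transition is enabled.
Pre  = [[1, 0],
        [1, 1]]
Post = [[0, 0],
        [0, 0]]
C = [[Post[p][t] - Pre[p][t] for t in range(2)] for p in range(2)]

def enabled_set(m):
    return [t for t in range(2)
            if all(m[p] >= Pre[p][t] for p in range(2))]

def fire_sequence(m, seq):
    for t in seq:
        assert t in enabled_set(m)
        m = [m[p] + C[p][t] for p in range(2)]
    return m

m0 = [1, 1]
m  = fire_sequence(m0, [1])   # fire t2: p2 is emptied ...
# ... and now every transition is dead: m is a deadlock marking
```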
\end{example} \section{On liveness of SSP nets}\label{sec:dssp} \begin{figure*}[ht] \begin{center} \centering \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$} \psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$} \psfrag{t9}{$t_9$}\psfrag{t10}{$t_{10}$}\psfrag{t11}{$t_{11}$}\psfrag{t12}{$t_{12}$} \psfrag{t13}{$t_{13}$}\psfrag{t14}{$t_{14}$} \psfrag{t15}{$t_{15}$} \psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$} \psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$} \psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$} \psfrag{p7}{$p_7$}\psfrag{p8}{$p_8$} \psfrag{p9}{$p_9$}\psfrag{p10}{$p_{10}$} \psfrag{p11}{$p_{11}$} \psfrag{b1}{$b_1$}\psfrag{b2}{$b_2$} \psfrag{b3}{$b_3$}\psfrag{b4}{$b_4$}\psfrag{b5}{$b_5$} \psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$} \centering \subfigure[]{\includegraphics[width=0.6\columnwidth]{DSSPpb1f.eps}\label{fig:DSSPpb1}}\hspace{0.15\textwidth} \centering \subfigure[]{\includegraphics[width=0.6\columnwidth]{SSPPb1f.eps}\label{fig:SSPPb1}} \caption{Motivation example of problem 1: (a) A non-live SSP net ($\N^s$); (b) The resulting non-live net after applying the pre-assignment method in \cite{clavelCDC}. Observe that the set of buffers does not change ($\{b_1, b_2, b_3, b_4, b_5\}$). Moreover, all the post-conditions remain unchanged, while the transformations concern the pre-incidence arcs flowing from the buffers.} \label{fig:Pb1} \end{center} \end{figure*} In this section, the main limitations of previous works (controlling bad siphons~\cite{ezpeleta,park2001deadlock,Colom:2003,li2004elementary,cano2012,7870672} and the method in \cite{clavelCDC}) for the liveness enforcement of SSP net systems are considered. Moreover, some intuitions behind the approach proposed in this paper are provided. \subsection{Controlling bad siphons in SSP systems} \label{siphonbased} A well-known method for liveness enforcement of \emph{ordinary nets} consists in controlling the \emph{bad siphons}. 
First, the set of \emph{bad siphons} is computed, and then they are prevented from becoming empty by using, for example, \emph{monitor} places (a kind of generalized mutual exclusion constraint~\cite{BOIoAn06,GMEC,ARLuWuZhSh20}). However, this strategy applied to SSP results in new buffers (the monitor places can be seen as new buffers) that may have output transitions in more than one agent. Therefore, the system loses the distributed property, since the buffers are no longer destination private (condition 4 of Def. \ref{def:ssp}). The place $p_m$ in Fig.~\ref{fig:DSSPmot} prevents the emptiness of the bad siphon $S_1=\{p_1,p_2,p_7,p_9,p_{10},p_{11},b_2,b_5\}$. The new place $p_m$ can be seen as a new buffer, and it provides tokens to both agents $\N_1$ and $\N_3$. Moreover, the method of controlling bad siphons is well understood for ordinary Petri nets, but SSP are not necessarily ordinary. Additionally, deadlocks and circular waits of resources are well understood for the special class of RAS called S4PR \cite{IPTrGaCoEz05}. Nevertheless, the class of SSP is not comparable with the class of S4PR from RAS. The S4PR class is also composed of a set of state machine PNs connected through shared resources, hence something similar to the buffers. However, (i) in S4PR, resource places (corresponding to buffers in SSP) are not destination private; (ii) in S4PR there exists, for each resource place, one P-semiflow containing only that resource place, while in SSP the P-semiflows may in general contain more than one buffer; and (iii) in SSP each state machine contains only one token, while in S4PR more than one token may exist in the idle place. Mainly because of (ii), the well-known relation between circular waits of resources and deadlocks cannot be used in SSP. 
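For completeness, the monitor-place construction mentioned above can be sketched as follows (a standard GMEC-style construction applied to a toy single-transition net; the helper name and the net are ours, not the paper's):

```python
# Sketch of siphon control via a monitor place. To keep
# sum_{p in S} m[p] >= 1, add a place p_m whose marking equals that sum
# minus 1; markings are nonnegative, so the sum can never drop below 1.
def add_monitor(Pre, Post, m0, S):
    nT = len(Pre[0])
    # token flow of the monitor: sum of the C rows over the siphon S
    row = [sum(Post[p][t] - Pre[p][t] for p in S) for t in range(nT)]
    Pre.append([max(0, -row[t]) for t in range(nT)])   # arcs consuming p_m
    Post.append([max(0, row[t]) for t in range(nT)])   # arcs refilling p_m
    m0.append(sum(m0[p] for p in S) - 1)               # initial slack
    return Pre, Post, m0

# Toy net: one transition draining place p0; control S = {p0}, m0[p0] = 2.
Pre2, Post2, m02 = add_monitor([[1]], [[0]], [2], {0})
# The monitor starts with one token and is consumed together with p0,
# so p0 can never drop below one token.
```

Notice that nothing in this construction keeps the arcs of the monitor within a single agent, which is precisely why it breaks the destination-private requirement of SSP.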
On the other hand, mainly because of (iii), the method presented here is difficult to apply to S4PR or S3PR: in SSP each agent contains only one token, so at the initial marking, even when several local T-semiflows could start firing, only one starts, in a purely non-deterministic way. For these reasons, new approaches for liveness enforcement in SSP should be considered. One such approach has been previously developed and is briefly presented in subsection \ref{preinCDC}; however, it can be applied only to a reduced class of SSP. In subsection \ref{ss:controlpn} a new approach for more general SSP is introduced. \subsection{Buffer pre-assignment in SSP}\label{preinCDC} In~\cite{clavelCDC} an approach based on the pre-assignment of the buffers to the transitions in EQ relation\footnote{Two transitions $t$ and $t^\prime$ that are in conflict, i.e., $\preset t \cap \preset t^\prime \neq \emptyset$, are in EQ relation if $\b{Pre}[\cdot,t]= \b{Pre}[\cdot,t^\prime]$.} has been presented. The main idea of this method is to ensure that, when a conflict transition is fired, at least one local T-semiflow containing that transition can be fired completely. \begin{example} Fig.~\ref{fig:DSSPmotMon} shows the live net system obtained by applying the pre-assignment approach proposed in~\cite{clavelCDC} to the non-live SSP net $\N^s$ in Fig.~\ref{fig:DSSPmot} without place $p_m$. In $\N^s$, if transition $t_3$ is fired and $b_2$ is empty, agent $\N_1$ is blocked until $b_2$ receives a token. In Fig.~\ref{fig:DSSPmotMon}, however, $t_3$ can be fired only if $b_2$ has a token, because buffer $b_2$ has been pre-assigned from $t_4$ to $t_3$. Notice that in Fig. \ref{fig:DSSPmotMon} all the other buffers have been pre-assigned as well, i.e., $b_1$ is pre-assigned to $t_1$, $b_3$ to $t_5$, $b_4$ to $t_7$, \{$b_5$,$b_7$\} to $t_9$ and \{$b_6$,$b_8$\} to $t_{12}$. 
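The EQ relation used by the pre-assignment (see the footnote) is a purely structural test; a sketch on a hypothetical net where $t_1$ and $t_2$ share their only input place (the toy matrix is ours):

```python
# Equal-conflict (EQ) test: two transitions in conflict are in EQ
# relation iff their Pre columns coincide. Hypothetical 2x3 toy net:
# t1 and t2 share input place p1, t3 reads p2.
Pre = [[1, 1, 0],
       [0, 0, 1]]

def column(t):
    return [Pre[p][t] for p in range(len(Pre))]

def in_conflict(t, tp):          # shared input place
    return any(Pre[p][t] > 0 and Pre[p][tp] > 0 for p in range(len(Pre)))

def in_eq_relation(t, tp):
    return in_conflict(t, tp) and column(t) == column(tp)
```

Here $t_1$ and $t_2$ are in EQ relation (both enabled whenever one is), while $t_1$ and $t_3$ are not even in conflict.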
\end{example} Nevertheless, this method does not work for more general SSP structures, due to the complex relations that may appear between local and global T-semiflows. In the following, the two main problems (\textbf{Pb. 1} and \textbf{Pb. 2}) of the \emph{pre-assignment} method in \cite{clavelCDC} are stated and illustrated. Moreover, intuitions are given on how these problems are approached by the new method presented in this paper. \textbf{Pb. 1.} If the SSP net $\N^s$ has non-disjoint global T-semiflows, it may happen that, after applying the pre-assignment of buffers, the firing of a local T-semiflow is conditioned by the marking of a buffer that was not its input buffer in $\N^s$. This situation changes the given plan and frequently makes the system too restrictive and possibly non-live. \begin{example}\label{ex:Pb1} Fig.~\ref{fig:DSSPpb1} shows a net $\N^s$, while Fig.~\ref{fig:SSPPb1} shows the resulting net after applying the pre-assignment in~\cite{clavelCDC}. The global and local T-semiflows of $\N^s$ are given in Tab. \ref{table:TSFpb1}, from which it is possible to check that $\b x_1$ and $\b x_2$ are non-disjoint global T-semiflows ($||\b x_1|| \cap ||\b x_2||=\{t_1,t_5\}$). \begin{table} [htbp] \caption{Local and Global T-semiflows of $\N^s$ in Fig.~\ref{fig:DSSPpb1}} \label{table:TSFpb1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Id. 
& Net & Type & Transitions & Global \\ \hline \hline $\b x_1$ & $\N^s$ & Global & $t_1$-$t_5$ & - \\ \hline $\b x_2$ & $\N^s$ & Global & $t_1$,$t_5$-$t_{8}$ & - \\ \hline $\b x_3$ & $\N^s$ & Global & $t_9$-$t_{12}$ & - \\ \hline $\b x_4$ & $\N_1^s$ & Local & $t_1$,$t_{2}$ & $\b x_1$ \\ \hline $\b x_5$ & $\N_1^s$ & Local & $t_{1}$,$t_{8}$ & $\b x_2$ \\ \hline $\b x_6$ & $\N_1^s$ & Local & $t_9$,$t_{10}$ & $\b x_3$ \\ \hline $\b x_7$ & $\N_2^s$ & Local & $t_3$-$t_5$ & $\b x_1$ \\ \hline $\b x_8$ & $\N_2^s$ & Local & $t_{5}$-$t_{7}$ & $\b x_2$ \\ \hline $\b x_9$ & $\N_2^s$ & Local & $t_{11}$,$t_{12}$ & $\b x_3$ \\ \hline \end{tabular} \end{center} \end{table} In the resulting net in Fig. \ref{fig:SSPPb1}, $b_2$ has been pre-assigned to $t_5$, so the local T-semiflow $\b x_7=t_5+t_4+t_3$ can only start firing when $b_2$ has a token. However, this pre-assignment also conditions the firing of the local T-semiflow $\b x_8=t_5+t_6+t_7$, since $t_5$ belongs to both $\b x_7$ and $\b x_8$. So, in the net in Fig. \ref{fig:SSPPb1}, the firing of the local T-semiflow $\b x_8$ is conditioned by $b_2$, which in $\N^s$ (Fig. \ref{fig:DSSPpb1}) was not its input buffer. This blocks the execution of the global T-semiflows $\b x_1$ and $\b x_2$. In particular, from the initial marking $\b m_0=p_1+p_4+b_1+b_3+b_4$, by firing the sequence $t_1t_8$, the marking $\b m_1=p_1+p_4+2\cdot b_3+b_4$ is reached. From this marking, $\b x_1$ and $\b x_2$ cannot fire anymore. \end{example} In the approach proposed in this paper, in order to avoid \textbf{Pb. 1}, a control PN in which the local T-semiflows are made disjoint is obtained. In particular, for each local T-semiflow in $\N^s$, a sequence $t_a \rightarrow p \rightarrow t_b$ is added in the control PN. In this way, the global T-semiflows are made disjoint and, after the pre-assignment of the buffers in the control PN, the firing of a local T-semiflow is conditioned only by the buffers that were its input buffers in $\N^s$. \textbf{Pb. 
2.} In the SSP net, a buffer may have to choose between the firing of different local T-semiflows that subsequently require a synchronization. Considering only the pre-assignment method, the net could remain non-live. \begin{example}\label{ex:Pb2} Fig.~\ref{fig:DSSPpb2} shows an $\N^s$ where the pre-assignment method in~\cite{clavelCDC} does not work due to \textbf{Pb. 2}. This $\N^s$ has a global T-semiflow $\b x_1$ composed of three local T-semiflows: $\b x_2=t_1+t_2$, $\b x_3=t_3+t_4$ and $\b x_4=t_5+t_6$. The firing of $\b x_2$ ($\b x_3$) consumes a resource from $b_1$ and produces a resource in $b_2$ ($b_3$). The firing of $\b x_4$ consumes one resource from $b_2$ and one from $b_3$ and produces two resources in $b_1$. So, in the long term (depending on the marking of the buffers), $\b x_2$, $\b x_3$ and $\b x_4$ should be fired proportionally. \begin{figure}[ht] \begin{center} \centering \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$} \psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$} \psfrag{t9}{$t_9$}\psfrag{t10}{$t_{10}$}\psfrag{t11}{$t_{11}$}\psfrag{t12}{$t_{12}$} \psfrag{t13}{$t_{13}$}\psfrag{t14}{$t_{14}$} \psfrag{t15}{$t_{15}$} \psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$} \psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$} \psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$} \psfrag{p7}{$p_7$}\psfrag{p8}{$p_8$} \psfrag{p9}{$p_9$}\psfrag{p10}{$p_{10}$} \psfrag{p11}{$p_{11}$} \psfrag{b1}{$b_1$}\psfrag{b2}{$b_2$} \psfrag{b3}{$b_3$}\psfrag{b4}{$b_4$} \psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$} \centering \subfigure[]{\includegraphics[width=0.45\columnwidth]{DSSPpb2.eps}\label{fig:DSSPpb2}} \centering \subfigure[]{\includegraphics[width=0.45\columnwidth]{SSPpb2.eps}\label{fig:SSPpb2}} \caption{Motivation example of problem 2: (a) A non-live $\N^s$. 
(b) The resulting non-live net after applying the pre-assignment method in \cite{clavelCDC}.} \label{fig:Pb2} \end{center} \end{figure} After applying the pre-assignment method to $\N^s$ in Fig.~\ref{fig:DSSPpb2}, the non-live net in Fig.~\ref{fig:SSPpb2} is obtained. Here it can be seen that $\b{x}_2$ or $\b{x}_3$ can be fired twice without any firing of $\b{x}_4$. Consequently, the resulting net is not live. \end{example} In order to overcome \textbf{Pb. 2}, in the new liveness enforcement approach new buffers, seen as \emph{information buffers}, are included in the control PN. The main objective of these new buffers is to force some global T-semiflows to fire their local ones in the correct proportion. These new buffers only have output transitions in one local T-semiflow, keeping the distributed property of the system. \subsection{Structural liveness enforcement through a control PN}\label{ss:controlpn} Let us approach a new method with the idea of \emph{buffer pre-assignment} as the starting point. However, unlike in~\cite{clavelCDC}, the pre-assignment here is not performed in $\N^s$, but in a \emph{control PN} denoted $\N^c$. This $\N^c$ is obtained from $\N^s$ and has a predefined type of structure in which the local T-semiflows of $\N^c$ are disjoint. The main advantage of performing the pre-assignment in $\N^c$ with disjoint local T-semiflows is that their firing will be conditioned only by buffers that were their input buffers in $\N^s$. In this way, \textbf{Pb. 1} is prevented. Moreover, new buffers (seen as information buffers) are included, forcing some global T-semiflows to fire their local ones in the correct proportion. These new buffers ensure the liveness of $\N^c$, preventing \textbf{Pb. 2}. 
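The proportionality failure of Example \ref{ex:Pb2} can be replayed at the aggregate buffer level; the sketch below encodes the consumption/production rules stated in the example, with an assumed initial marking of two tokens in $b_1$ and empty $b_2$, $b_3$ (our assumption for illustration):

```python
# Aggregate buffer-level sketch of Pb. 2. Local T-semiflows act as
# buffer transformations:
#   x2: b1 -> b2,   x3: b1 -> b3,   x4: b2 + b3 -> 2*b1
cons = {'x2': {'b1': 1}, 'x3': {'b1': 1}, 'x4': {'b2': 1, 'b3': 1}}
prod = {'x2': {'b2': 1}, 'x3': {'b3': 1}, 'x4': {'b1': 2}}

def can_fire(m, x):
    return all(m.get(b, 0) >= k for b, k in cons[x].items())

def fire(m, x):
    m = dict(m)
    for b, k in cons[x].items():
        m[b] -= k
    for b, k in prod[x].items():
        m[b] = m.get(b, 0) + k
    return m

m = {'b1': 2, 'b2': 0, 'b3': 0}
m = fire(m, 'x2')
m = fire(m, 'x2')     # x2 fired twice, ignoring the needed proportions
# now b1 is empty and x4 still lacks a token in b3: no semiflow can fire
```

This is precisely the blocking the information buffers are meant to exclude: they would forbid the second firing of $\b x_2$ until $\b x_3$ and $\b x_4$ have caught up.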
\begin{figure}[ht] \psfrag{Nc}{$\N^c$} \psfrag{Nd}{$\N^s$} \begin{center} \includegraphics[width=1\columnwidth]{StepsNdNc.eps} \caption{\small Overview of the liveness enforcement methodology} \label{fig:StepsNdNc} \end{center} \end{figure} From a high-level perspective (Fig. \ref{fig:StepsNdNc}), the proposed methodology consists of: \begin{itemize} \item \textbf{Step 1:} \emph{Compute the control PN $\N^c$}, with a predefined structure, modeling the consumption/production relation between the buffers and the local T-semiflows. Sec.~\ref{sec:alg} explains how $\N^c$ is obtained by Alg.~\ref{alg:2}. \item \textbf{Step 2:} \emph{Ensure the liveness of $\N^c$}. The control net $\N^c$ must be live. In Sec.~\ref{sec:live} the possible structures of $\N^c$ obtained after applying Alg.~\ref{alg:2} are characterized. In some of them liveness holds, while in others it is enforced by adding some control places (Alg.~\ref{conpla}). \item \textbf{Step 3:} \emph{Control policy and system evolution}. $\N^c$ will evolve synchronously with $\N^s$, disabling transitions that may lead the system to a livelock. The methodology and behavior are described in Sec.~\ref{sec:contr}. \ignore{, basically consisting in, \begin{itemize} \item \textbf{3.1} \emph{Label the transitions of the $\N^c$} with names of transitions of the $\N^s$. These common labels allow the firing of transitions in the $\N^c$ synchronously with some transitions of the $\N^s$. \item \textbf{3.2} \emph{Assign guard expressions} to transitions of $\N^s$. These guard expressions are logical conditions based on the marking of $\N^c$ and will disable the firing of the transitions that may lead to a livelock. \item \textbf{3.3} Events from $\N^s$ are input signals to $\N^c$. 
\end{itemize}} \end{itemize} \section{Construction of the control PN}\label{sec:alg} \ignore{In Section \ref{preinCDC} we showed that buffer pre-assignment in a consistent and conservative SSP net $\N^s$ with non-disjoint local T-semiflows (\textbf{Pb. 1}), could result in non live systems.} This section presents an algorithm to compute the control PN denoted $\N^c$ for a given structurally non live SSP $\mathcal{N}^s$. Subsequently and by means of guard expressions, the firing of local T-semiflows in $\N^s$ will be conditioned by the state (marking) in $\N^c$. Both net systems will evolve synchronously. \ignore{ $\N^c$ is obtained from $\N^s$ and models the consumption/production relation that exists between the buffers and local T-semifows of $\N^s$. Moreover, $\N^c$ has the same number of agents and the same buffers of $\N^s$. However, each local T-semiflow in $\N^s$ is modeled by an ordinary sequence $t_a\rightarrow p\rightarrow t_b$ in $\N^c$. Where $t_a$ ($t_b$) models the first (last) transition of the local T-semiflow (defined in the following). Before giving the methodology to obtain the control PN ($\N^c$) from a SSP structure ($\N^s$), let us state some concepts. For computing $\N^c$ it is necessary that all agents composing $\N^s$ have a waiting place $p_e$ (Def.~\ref{def:wait}) from which the first and last transition of each local T-semiflow can be defined. 
The existence of a waiting place for each agent ensures that every cycle (local T-semiflow) contains a common input/output place ($p_e$) such that $p_e \in \postset{||\b x^i_n||} \cap \preset{||\b x^i_n||}$ for all T-semiflows $\b x^i_n$ of $\N_i$.} \ignore{ \begin{figure} \psfrag{t1}{$t_1$}\psfrag{t3}{$t_3$}\psfrag{t2}{$t_2$}\psfrag{t4}{$t_4$} \psfrag{t6}{$t_6$}\psfrag{t5}{$t_5$}\psfrag{t7}{$t_7$} \psfrag{p5}{$p_5$}\psfrag{p4}{$p_4$}\psfrag{p3}{$p_3$}\psfrag{p2}{$p_2$} \psfrag{p1}{$p_1$} \begin{center} \includegraphics[width=0.8\columnwidth]{nowait.eps} \caption{\small Agent without waiting place} \label{fig:nowait} \end{center} \end{figure} \begin {definition} \label{def:wait} Let $\N_i$ be an agent of a SSP $\N^s$, $\b x^i_1, \b x^i_2, \ldots, \b x^i_k$ be the minimal (local) T-semiflows of $\N_i$ and let $\bar{P_i}=\left(\displaystyle\bigcap_{n=1}^{k}\preset||\b x^i_n||\right) \cap P_i$ be the set of common places of the local T-semiflows, where $P_i$ is the set of places of $\N_i$ (i.e., without the buffers places) \begin{itemize} \item If $k=1$ (i.e., only one T-semiflow) the \emph{waiting place} is the marked place at the initial marking (which is unique according to Def.~\ref{def:dssp}). \item If $k \geq 2$ and $\bar{P_i} \neq \b \emptyset$. The place $p_e \in \bar{P_i}$ is called \emph{waiting place} if there exists no direct path in $\N_i$ from $p_e$ to other place in $\bar{P_i} \setminus \{p_e\} $. \item If $k \geq 2$ and $\bar{P_i} = \emptyset$ implies that does not exists a \emph{waiting place} in $\N_i$. \end{itemize} \end{definition} For example, in the subnet $\N_1$ of the SSP net in Fig.~\ref{fig:DSSPpb1} there are three T-semiflows: $\b x_4$, $\b x_5$ and $\b x_6$ given in Tab.~\ref{table:TSFpb1}. $\preset||\b x_4|| \cap P_1=\{p_1,p_2\}$, $\preset||\b x_5|| \cap P_1=\{p_1,p_2\}$ and $\preset||\b x_8|| \cap P_1=\{p_1,p_3\}$. So $\bar{P_1}=p_1$ and consequently the waiting place is $p_1$. 
However, the net in Fig.~\ref{fig:nowait} represents an agent of a SSP structure without a waiting place. This net has 3 minimal T-semiflows: $\b x_1=t_2+t_3$, $\b x_2=t_1+t_4+t_5$ and $\b x_3=t_6+t_7$. The input places of these minimal T-semiflows are the following: $\preset||\b x_1||=\{p_2,p_3\}$, $\preset||\b x_2||=\{p_1,p_2,p_4\}$ and $\preset||\b x_3||=\{p_1,p_5\}$. In this case, $\bar{P_i}=\emptyset$ and there exists no waiting place. } Let $p_i^e$ be the \emph{waiting place} of agent $\N_i$, which exists according to condition 5) of Def.~\ref{def:ssp}. The first and last transitions of a local T-semiflow $\b x^i_l$ of agent $i$ are formally defined as: \begin{itemize} \item $t^l_{first} = \postset {p_i^e} \cap ||\b x^i_l||$; \item $t^l_{last}= \preset {p_i^e} \cap ||\b x^i_l||$. \end{itemize} \begin{algorithm}[h]\label{alg:2} \begin{algorithmic}[1] \REQUIRE SSP structure $\N^s=\langle P^s, T^s, \b{Pre}^s, \b{Post}^s \rangle$ \ENSURE Control PN $\N^c=\langle P^c, T^c, \b{Pre}^c, \b{Post}^c \rangle$ \STATE Initialize the state of $\N^c$: $P^c := \emptyset$, $T^c := \emptyset$, $\b{Pre}^c := \b 0$, $\b{Post}^c := \b 0$ \FORALL {$b_i \in \N^s$} \STATE Add a place $p_{b_i}$ to $P^c$ \ENDFOR \FORALL {agents $\N_i^s$ of $\N^s$} \STATE Add a place $p_{\N_i}$ to $P^c$ \STATE Compute all minimal T-semiflows of $\N_i^s$ in $\Gamma_i$ \FORALL {$\b x^i_l \in \Gamma_i$} \STATE Add a transition $t^l_j$ to $T^c$; \COMMENT{representing $t^l_{first}$} \STATE Add a transition $t^l_k$ to $T^c$; \COMMENT{representing $t^l_{last}$} \STATE Add a place $p_{x_l}$ to $P^c$; \COMMENT{representing $\b x^i_l$} \STATE $\b {Post}^c[p_{x_l},t^l_j]=\b {Pre}^c[p_{x_l},t^l_k]=1$; \STATE $\b {Pre}^c[p_{\N_i},t^l_j]= \b {Post}^c[p_{\N_i},t^l_k]=1$; \STATE Let $T_l = ||\b x^i_l||$; \FORALL {$b_i$ s.t.
${\b {Pre}^s}[b_i,T_l] \neq \b 0$} \STATE ${\b {Pre}^c}[p_{b_i},t^l_j]$= $\displaystyle\sum_{t \in T_l}{\b {Pre}^s}[b_i,t]$; \ENDFOR \FORALL {$b_i$ s.t. ${\b {Post}^s}[b_i,T_l] \neq \b 0$} \STATE ${\b {Post}^c}[p_{b_i},t^l_k]$= $\displaystyle\sum_{t \in T_l}{\b {Post}^s}[b_i,t].$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \caption{Computation of the control PN} \end{algorithm} Using Alg.~\ref{alg:2}, a control PN $\N^c$ is obtained from a structurally non live SSP structure $\N^s$ as follows: \begin{itemize} \item First loop (steps 2-4): for each buffer $b_i$ in $\N^s$, a place $p_{b_i}$ is introduced in $\N^c$; \item Second loop (steps 5-22): adds the other places and transitions to $\N^c$. This loop is iterated for all agents $\N_i^s$ of $\N^s$. Each iteration, for agent $\N_i^s$, consists of: \begin{itemize} \item A new place $p_{\N_i}$ is added in $\N^c$ (step 6); \item All minimal T-semiflows of $\N_i^s$ are computed and saved in $\Gamma_i$ (step 7); \item Third loop (steps 8-21): adds an ordinary sequence in $\N^c$ corresponding to each local T-semiflow of $\N_i^s$ and connects it to the input and output buffers. This loop consists of: \begin{itemize} \item For each local T-semiflow $\b x_l$, an ordinary subnet \{$t^l_j\rightarrow p_{x_l}\rightarrow t^l_k$\} is added in $\N^c$ (steps 9-12); \item Connect the place $p_{\N_i}$ with the first transition of the ordinary sequence (introduced in step 9) and connect the last transition of the ordinary sequence (introduced in step 10) with the place $p_{\N_i}$.
\item Step 14: All transitions belonging to the support of the T-semiflow $\b x_l$ are saved in $T_l$; \item Fourth loop (steps 15-17): for all input buffers of $\b x_l$ (input buffers of transitions in $T_l$), connect in $\N^c$ the corresponding input buffers (added in step 3) with the first transition of the ordinary sequence (added in step 9); \item Fifth loop (steps 18-20): for all output buffers of $\b x_l$ (output buffers of transitions in $T_l$), connect in $\N^c$ the last transition of the ordinary sequence (added in step 10) with the corresponding output buffers (added in step 3); \end{itemize} \end{itemize} \end{itemize} In order to reduce the number of local T-semiflows in $\N^s$, and consequently the computational complexity of Alg.~\ref{alg:2}, some basic and classical reduction rules~\cite{ICSilv93b,ARMura89} can be applied to $\N^s$ before computing $\N^c$. First, fusion of places and fusion of transitions must be considered; then, identical transitions (with the same input and output places) can be reduced to a single transition. \ignore{\begin{definition} Two transitions $t$ and $t'$ are identical if $\b {Pre}[P,t]=\b {Pre}[P,t']$ and $\b {Post}[P,t]=\b {Post}[P,t']$.
\end{definition}} \begin{figure} \psfrag{t1}{$\textcolor[rgb]{1,0,0}{t_1}$}\psfrag{t11}{$\textcolor[rgb]{1,0,0}{t_{11}}$} \psfrag{t2}{$\textcolor[rgb]{1,0,0}{t_2}$}\psfrag{t10}{$\textcolor[rgb]{1,0,0}{t_{10}}$} \psfrag{t5}{$\textcolor[rgb]{1,0,0}{t_5}$}\psfrag{t7}{$\textcolor[rgb]{1,0,0}{t_{7}}$}\psfrag{t12}{$\textcolor[rgb]{1,0,0}{t_{12}}$} \psfrag{t6}{$\textcolor[rgb]{1,0,0}{t_6}$}\psfrag{t16}{$\textcolor[rgb]{1,0,0}{t_{16}}$} \psfrag{t3}{$\textcolor[rgb]{1,0,0}{t_3}$}\psfrag{t13}{$\textcolor[rgb]{1,0,0}{t_{13}}$} \psfrag{t9}{$\textcolor[rgb]{1,0,0}{t_9}$}\psfrag{t19}{$\textcolor[rgb]{1,0,0}{t_{19}}$} \psfrag{t4}{$\textcolor[rgb]{1,0,0}{t_4}$}\psfrag{t14}{$\textcolor[rgb]{1,0,0}{t_{14}}$} \psfrag{t8}{$\textcolor[rgb]{1,0,0}{t_8}$}\psfrag{t18}{$\textcolor[rgb]{1,0,0}{t_{18}}$} \psfrag{x5}{$\textcolor[rgb]{1,0,0}{x_5}$}\psfrag{x6}{$\textcolor[rgb]{1,0,0}{x_6}$} \psfrag{x7}{$\textcolor[rgb]{1,0,0}{x_7}$}\psfrag{x8}{$\textcolor[rgb]{1,0,0}{x_8}$} \psfrag{x9}{$\textcolor[rgb]{1,0,0}{x_9}$}\psfrag{x10}{$\textcolor[rgb]{1,0,0}{x_{10}}$}\psfrag{x4}{$\textcolor[rgb]{1,0,0}{x_{4}}$} \psfrag{x11}{$\textcolor[rgb]{1,0,0}{x_{11}}$}\psfrag{x12}{$\textcolor[rgb]{1,0,0}{x_{12}}$} \psfrag{t41}{$t^4_1$}\psfrag{t42}{$t^4_2$}\psfrag{t51}{$t^5_1$}\psfrag{t56}{$t^5_8$} \psfrag{t69}{$t^6_9$}\psfrag{t610}{$t^{6}_{10}$}\psfrag{t75}{$t^{7}_5$} \psfrag{t73}{$t^7_{3}$}\psfrag{t85}{$t^8_{5}$}\psfrag{t87}{$t^{8}_{7}$}\psfrag{t912}{$t^{9}_{12}$} \psfrag{t911}{$t^9_{11}$} \psfrag{px4}{$p_{x_4}$}\psfrag{px5}{$p_{x_5}$}\psfrag{px6}{$p_{x_6}$}\psfrag{px7}{$p_{x_7}$}\psfrag{px8}{$p_{x_8}$} \psfrag{px9}{$p_{x_9}$} \psfrag{b1}{$p_{b_1}$}\psfrag{b2}{$p_{b_2}$} \psfrag{b3}{$p_{b_3}$}\psfrag{b4}{$p_{b_4}$} \psfrag{b5}{$p_{b_5}$}\psfrag{b6}{$p_{b_6}$} \psfrag{b7}{$p_{b_7}$}\psfrag{b8}{$p_{b_8}$}\psfrag{N1}{$\N_1$} \psfrag{N2}{$\N_2$} \psfrag{bN1}{$p_{\N_1}$}\psfrag{bN2}{$p_{\N_2}$} \begin{center} \includegraphics[width=1\columnwidth]{ControPb1f.eps} \caption{\small Control PN obtained from SSP structure 
in Fig.~\ref{fig:DSSPpb1}} \label{fig:control1} \end{center} \end{figure} \begin{example} Let us consider the SSP net $\N^s$ in Fig.~\ref{fig:DSSPpb1}. Applying Alg.~\ref{alg:2} to $\N^s$, the control PN $\N^c$ in Fig.~\ref{fig:control1} is obtained. First, since $\N^s$ has 5 buffers, in $\N^c$ the places $p_{b_i}$ ($i=1,2,\dots,5$) are added. Furthermore, since $\N^s$ has two agents ($\N_1$ and $\N_2$), the places $p_{\N_1}$ and $p_{\N_2}$ are added in $\N^c$. After this, steps 8-20 are applied to all minimal T-semiflows of each agent. Let us consider for example the local T-semiflow $\b x_7= t_5+t_4+t_3$. Corresponding to this T-semiflow, the ordinary sequence \{$t^7_5\rightarrow p_{x_7}\rightarrow t^7_3$\} is added in $\N^c$. Since in $\N^s$, $b_2$ is an input buffer of $\b x_7$, in $\N^c$ there is an arc from the place $p_{b_2}$ to the input transition $t^7_5$. In addition, $\b x_7$ has an output buffer $b_1$, so in $\N^c$ there exists an arc from the output transition $t^7_3$ to the place $p_{b_1}$. Moreover, since $||\b x_7||$ belongs to $\N_2$, we add an arc from $p_{\N_2}$ to $t^7_5$ and from $t^7_3$ to $p_{\N_2}$. Finally, by applying steps 8-20 to all local T-semiflows, we obtain the control PN in Fig.~\ref{fig:control1}. Notice that in Fig.~\ref{fig:control1}, all six T-semiflows of $\N_1$ and $\N_2$ (given in Tab.~\ref{table:TSFpb1}) are disjointly represented.
\end{example} \ignore{\begin{figure} \psfrag{t75}{$t^7_5$}\psfrag{t73}{$t^7_3$} \psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$}\psfrag{t5}{$t_5$} \psfrag{te5}{$\textcolor[rgb]{1,0,0}{t_5}$} \psfrag{te3}{$\textcolor[rgb]{1,0,0}{t_3}$} \psfrag{px7}{$p_{x_7}$}\psfrag{p4}{$p_4$}\psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$} \psfrag{pb1}{$p_{b_1}$}\psfrag{pb2}{$p_{b_2}$} \psfrag{b1}{$b_1$}\psfrag{b2}{$b_2$} \psfrag{xe7}{$\textcolor[rgb]{1,0,0}{x_7}$} \psfrag{pN2}{$p_{\N_2}$}\psfrag{N2}{$\N_2$} \psfrag{Algorithm 2}{Algorithm 1}\psfrag{x5=t1,t2,t3}{$\b x_5=t_1,t_2,t_3$} \begin{center} \includegraphics[width=0.8\columnwidth]{Transform2.eps} \caption{\small Transformation obtained by applying algorithm \ref{alg:2} to a simple agent composed by 1 local T-semiflow $\b x_7=t_5+t_4+t_3$} \label{fig:transform} \end{center} \end{figure} } \ignore{ \begin{figure*}[htb] \psfrag{x}{$\b x$}\psfrag{tch}{$t_{ch}$} \begin{center} \includegraphics[width=1.7\columnwidth]{diagramliveCPN2.eps} \caption{\small Diagram for ensuring liveness in the Control PN} \label{liveCPN} \end{center} \end{figure*}} \section{Liveness of the Control PN}\label{sec:live} Since the control PN system $\langle \N^c, \b{m}_0 \rangle$ should \emph{guide} the evolution of the SSP avoiding any blocking situation, it should be live as well. Otherwise, some local T-semiflows of the SSP will never be fired after the control PN reaches a livelock state. \subsection{Structural liveness analysis of the Control PN}\label{sec:liveen} To check structural liveness, the following two reduction rules are first applied to the control PN $\N^c$ to obtain the \emph{simplified control PN}, denoted $\N^{cs}$. \begin{rul}\label{reduc1} The sequences \{$t^l_j\rightarrow p_{x_l}\rightarrow t^l_k$\} are reduced to a single transition $t_{x_l}$.
\end{rul} \begin{rul}\label{reduc2} Once rule \ref{reduc1} is applied to all sequences, the places $p_{\N_i}$ become implicit, because $\b {Pre}^c[p_{\N_i},T]=\b {Post}^c[p_{\N_i},T]=\b m[p_{\N_i}]$, and are removed. \end{rul} Because in $\N^c$ a sequence \{$t^l_j\rightarrow p_{x_l}\rightarrow t^l_k$\} represents a local T-semiflow of $\N^s$, in $\N^{cs}$ the transition $t_{x_l}$ (obtained by rule \ref{reduc1}) also represents a local T-semiflow of $\N^s$. Moreover, by applying rule \ref{reduc2}, the places $p_{\N_i}$ are removed, so all places in $\N^{cs}$ represent buffers. In this way, $\N^{cs}$ may be composed of isolated subnets where the places represent buffers and the transitions represent local T-semiflows of $\N^s$. Moreover, each isolated subnet groups the global T-semiflows of $\N^s$ that have common input and/or output buffers. \begin{figure}[h] \begin{center} \psfrag{tx4}{$t_{x_4}$}\psfrag{tx5}{$t_{x_5}$}\psfrag{tx6}{$t_{x_6}$}\psfrag{tx7}{$t_{x_7}$}\psfrag{tx8}{$t_{x_8}$}\psfrag{tx9}{$t_{x_9}$}\psfrag{b1}{$p_{b_{1}}$}\psfrag{b2}{$p_{b_{2}}$}\psfrag{b3}{$p_{b_{3}}$}\psfrag{b4}{$p_{b_{4}}$}\psfrag{b5}{$p_{b_{5}}$} \includegraphics[width=0.5\columnwidth]{ConSimPb1f.eps} \caption{\small Simplified control PN $\N^{cs}$ after applying the reduction rules \ref{reduc1} and \ref{reduc2} to the control PN in Fig.~\ref{fig:control1}.} \label{fig:ConSimPb1} \end{center} \end{figure} Fig.~\ref{fig:ConSimPb1} shows the simplified control PN obtained after applying rules \ref{reduc1} and \ref{reduc2} to the control PN in Fig.~\ref{fig:control1}. It can be seen that this net is composed of two isolated subnets. The one on the left corresponds to the global T-semiflows $\b x_1$ and $\b x_2$ (given in Tab.~\ref{table:TSFpb1}), which have the common input/output buffer $b_1$ (modeled by place $p_{b_1}$). The subnet on the right corresponds to the global T-semiflow $\b x_3$.
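For concreteness, the two reduction rules can be sketched in a few lines of Python over a toy encoding in which \texttt{pre}/\texttt{post} map (place, transition) pairs to arc weights. The encoding and all identifiers below are illustrative assumptions of ours, not part of the formal development:

```python
def simplify_control_pn(pre, post, sequences, agent_places):
    """Rule 1: collapse each ordinary sequence t_first -> p_seq -> t_last
    of the control PN into a single transition named after p_seq.
    Rule 2: drop the (implicit) agent places p_Ni.
    pre/post map (place, transition) -> arc weight."""
    s_pre, s_post = {}, {}
    for t_first, p_seq, t_last in sequences:
        t_new = "t_" + p_seq              # fused transition t_{x_l}
        # inputs of t_first become inputs of the fused transition
        for (p, t), w in pre.items():
            if t == t_first and p != p_seq and p not in agent_places:
                s_pre[(p, t_new)] = w
        # outputs of t_last become outputs of the fused transition
        for (p, t), w in post.items():
            if t == t_last and p != p_seq and p not in agent_places:
                s_post[(p, t_new)] = w
    return s_pre, s_post
```

Applied to a single sequence such as \{$t^7_5\rightarrow p_{x_7}\rightarrow t^7_3$\}, only the buffer arcs survive: the sequence place and the agent place disappear, exactly as in the simplified net of the running example.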
\begin{proposition} \label{def} Let us assume a SSP structure $\N^s$, its control PN $\N^c$ obtained by applying Alg.~\ref{alg:2}, and the simplified control PN $\N^{cs}$ obtained by applying Reduction Rules \ref{reduc1} and \ref{reduc2} to $\N^c$. If all isolated subnets $\N^{cs}_i$ of $\N^{cs}$ are CF or JF, then the control PN $\N^c$ is structurally live. \end{proposition} \begin{proof} Alg.~\ref{alg:2} preserves in $\N^c$ the number of agents, T-semiflows and buffers of $\N^s$. Moreover, Alg.~\ref{alg:2} also preserves in $\N^c$ the consumption/production relation between buffers and local T-semiflows that exists in $\N^s$. Since $\N^s$ is consistent and conservative, $\N^c$ is consistent and conservative. On the other hand, the reduction rules \ref{reduc1} and \ref{reduc2} do not change the number of tokens consumed from and produced to the buffers, so if $\N^c$ is consistent and conservative, then each subnet $\N^{cs}_i$ of $\N^{cs}$ is consistent and conservative. Furthermore, if each subnet $\N^{cs}_i$ is CF or JF, then $\N^{cs}$ is structurally live according to \cite{ARTeCoSi97}. On the other hand, the reduction rules \ref{reduc1} and \ref{reduc2}, applied to $\N^{c}$ to obtain $\N^{cs}$, preserve the liveness property \cite{ICSilv93b}. Therefore, if $\N^{cs}$ is structurally live, then $\N^c$ is structurally live. \end{proof} For example, let us consider the simplified control PN of Fig.~\ref{fig:ConSimPb1}, obtained from the control PN in Fig.~\ref{fig:control1}. Notice that the subnet on the left is JF, while the one on the right is both CF and JF. According to Prop.~\ref{def}, the corresponding control PN in Fig.~\ref{fig:control1} is structurally live. Therefore, there exists an initial marking for the buffers that makes this net live, for example one allowing the firing of each global T-semiflow once.
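The structural conditions used in the proposition are easy to test mechanically. The following Python sketch (our illustration, under a toy encoding where \texttt{pre} maps (place, transition) pairs to arc weights) checks the CF and JF properties of a subnet; the example nets in the test are simplified stand-ins, not the exact nets of the figure:

```python
def is_choice_free(pre):
    """CF: every place has at most one output transition."""
    outputs = {}
    for (p, t), w in pre.items():
        if w > 0:
            outputs.setdefault(p, set()).add(t)
    return all(len(ts) <= 1 for ts in outputs.values())


def is_join_free(pre):
    """JF: every transition has at most one input place."""
    inputs = {}
    for (p, t), w in pre.items():
        if w > 0:
            inputs.setdefault(t, set()).add(p)
    return all(len(ps) <= 1 for ps in inputs.values())
```

A subnet with a shared buffer feeding two transitions fails CF but may still be JF, which is exactly the situation of the left subnet discussed in the example.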
\subsection{Structural liveness enforcement of the Control PN}\label{sec:livefor} If a subnet $\N^{cs}_i$ is neither JF nor CF, then $\N^{cs}_i$ may not be structurally live. This subsection proposes a methodology to force the structural liveness of a structurally non live $\N^{cs}$. Recall that each $\N^{cs}_i$ of $\N^{cs}$ is consistent and conservative (see the first part of the proof of Prop.~\ref{def}). So, each $\N^{cs}_i$ is composed of one or more T-semiflows covering all transitions. In order to force structural liveness, the basic idea is to ensure that the transitions are fired proportionally according to the global T-semiflows. In this way, we prevent the free resolution of a conflict that subsequently requires a synchronization. The proposed methodology adds an input place, connected by ordinary arcs, to each transition in conflict. The number of tokens in each new place is equal to the number of times its output transition can be fired, i.e., the number of times that transition appears in each T-semiflow. In order to identify when a global T-semiflow has been fired completely, a check transition is introduced. Its firing means that the T-semiflow has finished and all its transitions can be fired again. So, the firing of the check transition should update the marking of the newly added places. Before formally presenting the methodology to add these new control places, let us consider a simple intuitive example. \begin{example} The left part of Fig.~\ref{fig:ConSimPb2} shows the simplified control PN ($\N^{cs}$) of the SSP net $\N^s$ in Fig.~\ref{fig:DSSPpb2}. This net is not live because there exists a conflict ($t_{x2}$, $t_{x3}$) followed by a synchronization in transition $t_{x4}$ (\textbf{Pb 2}).
\begin{figure}[h] \begin{center} \psfrag{tx4}{$t_{x_4}$}\psfrag{tx2}{$t_{x_2}$}\psfrag{tx3}{$t_{x_3}$}\psfrag{tx7}{$t_{x_7}$}\psfrag{tx8}{$t_{x_8}$}\psfrag{tx9}{$t_{x_9}$}\psfrag{b1}{$p_{b_{1}}$}\psfrag{b2}{$p_{b_{2}}$}\psfrag{b3}{$p_{b_{3}}$}\psfrag{b4}{$p_{b_{4}}$}\psfrag{b5}{$p_{b_{5}}$}\psfrag{Algorithm}{Alg. 2}\psfrag{ptx2}{$p_{t_{x2}}$}\psfrag{ptx3}{$p_{t_{x3}}$} \includegraphics[width=1\columnwidth]{ConSimPb2f.eps} \caption{\small Left: simplified control PN of the SSP net in Fig.~\ref{fig:DSSPpb2}; right: simplified control PN with the newly added buffers} \label{fig:ConSimPb2} \end{center} \end{figure} In the right part of Fig.~\ref{fig:ConSimPb2}, the new control buffers $p_{t_{x2}}$ and $p_{t_{x3}}$ are added to control the firing of $t_{x2}$ and $t_{x3}$. These new buffers constrain $t_{x2}$ and $t_{x3}$ to fire at most once before each firing of $t_{x4}$. \end{example} \ignore{To know when a global T-semiflow of a subnet composing the simplified Control PN has been fired completely, we define its check transition} The firing of the check transition of a given global T-semiflow must produce enough tokens to completely fire the T-semiflow again. For this net, the check transition is $t_{x4}$. The formal definition of check transitions is given in Def.~\ref{cecktr}. \begin{definition}\label{cecktr} Let $\N^{cs}_i$ be a subnet of a simplified control PN $\N^{cs}$ and let $\b x_j$ be a minimal T-semiflow of $\N^{cs}_i$. A transition $t^j_{ch} \in ||\b{x}_j||$ is called the \emph{check transition} of $\b x_j$ if \begin{itemize} \item it does not belong to the support of any other T-semiflow of $\N^{cs}$, and \item its firing creates enough tokens to completely fire $\b x_j$ again. \end{itemize} If more than one transition in $||\b{x}_j||$ fulfills the previous two constraints, any non-conflict transition can be chosen.
\end{definition} \ignore{being $\b m= \b {Post}^{cs}[P,t^j_{ch}]$ the marking produced by firing of $t^j_{ch}$, the following conditions are satisfied: \begin{enumerate} \item $\begin{array}{l} \b x_k[t^j_{ch}]= \left\{ \begin{array}{lll} 1, &\text{if} &k=j \\ 0, &\text{if} &k \ne j \end{array} \right., \end{array}$ \item There exists a firing sequence $\sigma$ such that its firing vector $\b{\sigma}$ satisfies $\b m \xrightarrow{\sigma} \b m$ and $\b{\sigma}=\b x_j$. \end{enumerate} } \ignore {\color{red} Condition 1) in the previous definition ensures that the check transition $t^j_{ch}$ belongs only to $||\b{x}_j||$ while condition 2) ensures the existence of a firing sequence with the firing vector equal to $\b{x}_j$ that can be fired after the firing of $t^j_{ch}$. If there are more than one transition that fulfill the constraints in Def. \ref{cecktr}, any non conflict transition can be chosen. } \begin{algorithm}[h]\label{conpla} \begin{algorithmic}[1] \REQUIRE A structurally non live net $\N^{cs}_i$. \ENSURE A structurally live subnet $\N^+$ and a live initial marking $\b m_0$. \STATE Let $\N^+ = \N_i^{cs}$. \STATE Compute the set $X$ of minimal T-semiflows of $\N^{cs}_i$. \FORALL {T-semiflow $\b x_i$ of $X$} \STATE Obtain the check transition $t^i_{ch}$. \ENDFOR \STATE Let $T_{ck}$ be the set of check transitions. \STATE Let $T_{cn}$ be the set of conflict transitions. \FORALL {$t_i$ $\in$ ($T_{cn}$ $\backslash$ $T_{ck}$)} \STATE Add a place $p_{t_i}$ s.t. $\b{Pre}^+ [p_{t_i},t_i]=1$. \ENDFOR \FORALL {T-semiflow $\b x_i$ of $X$} \FORALL {$t_j$ s.t. $\b x_i[t_j]>0$ and $t_j \in (T_{cn}$ $\backslash$ $T_{ck}$)} \STATE $\b {Post}^+[p_{t_j},t^i_{ch}]=\b x_i[t_j]$ \ENDFOR \ENDFOR \STATE $\b m_0=\displaystyle\sum_{\b x_i \in X}{\b {Post}^+[P^+,t^i_{ch}]}$ \end{algorithmic} \caption{Enforcing liveness of a structurally non live $\N^{cs}_i$.} \end{algorithm} Alg.
\ref{conpla} adds the control places and computes an initial marking that ensures the liveness of a subnet $\N^{cs}_i$ by enforcing the proportional firing of the transitions belonging to the T-semiflows. Each newly added place has only one output transition, so the resulting net preserves the distributiveness property. It is necessary that all T-semiflows of the subnets to which Alg.~\ref{conpla} is applied have a check transition according to Def.~\ref{cecktr}. Alg.~\ref{conpla} adds a control place $p_{t_j}$ for each conflict transition $t_j$ that is not a check transition $t^i_{ch}$. The firing of $t_j$ is constrained by the marking of the newly added place $p_{t_j}$: $\b{Pre}^+ [p_{t_j},t_j]=1$ (steps 8-10). When the check transition $t^i_{ch}$ of T-semiflow $\b x_i$ is fired, the marking of each place $p_{t_j}$ controlling a transition $t_j$ that belongs to $\b x_i$ is updated: $\b {Post}^+[p_{t_j},t^i_{ch}]=\b x_i[t_j]$ (steps 11-15). The initial marking $\b m_0$ is the marking that would be produced by firing each check transition $t^i_{ch}$ once (step 16).
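The token bookkeeping of this enforcement step can be sketched in Python over a toy encoding (T-semiflows as transition-count dictionaries; all identifiers are ours). The sketch only computes the new control places, their refill arcs and $\b m_0$; it does not compute minimal T-semiflows or conflict sets, which are assumed given:

```python
def enforce_liveness(semiflows, check, conflict):
    """For each conflict transition t that is not a check transition,
    add a control place p_t with Pre[p_t, t] = 1; the check transition
    of each minimal T-semiflow x refills p_t with x[t] tokens.  m0 is
    the marking produced by one firing of every check transition.
    semiflows: name -> {transition: firing count}; check: name -> t_ch;
    conflict: set of conflict transitions."""
    checks = set(check.values())
    controlled = conflict - checks
    pre_new = {("p_" + t, t): 1 for t in controlled}
    post_new, m0 = {}, {}
    for name, x in semiflows.items():
        t_ch = check[name]
        for t, k in x.items():
            if t in controlled:
                post_new[("p_" + t, t_ch)] = k
                m0["p_" + t] = m0.get("p_" + t, 0) + k
    return pre_new, post_new, m0
```

On the two-semiflow example of Fig.~\ref{Simcpn} (with check transitions $t_8$ and $t_9$), this reproduces $\b m_0[p_{t_1}]=2$ and $\b m_0[p_{t_2}]=4$.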
\begin{figure} \psfrag{pt1}{$\color{red}{p_{t_1}}$}\psfrag{pt2}{$\color{red}{p_{t_2}}$}\psfrag{pt3}{$\color{red}{p_{t_3}}$}\psfrag{pt4}{$\color{red}{p_{t_4}}$} \psfrag{pt5}{$\color{red}{p_{t_5}}$}\psfrag{pt6}{$\color{red}{p_{t_6}}$}\psfrag{pt7}{$\color{red}{p_{t_7}}$} \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$}\psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$}\psfrag{t9}{$t_9$}\psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$}\psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$}\psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$}\psfrag{p7}{$p_7$}\psfrag{p8}{$p_8$}\psfrag{b1}{$b_{1}$}\psfrag{b2}{$b_{2}$}\psfrag{b3}{$b_{3}$}\psfrag{b4}{$b_{4}$}\psfrag{x1}{$\b{x}_{1}$}\psfrag{x2}{$\b{x}_{2}$}\psfrag{x3}{$\b{x}_{3}$}\psfrag{x4}{$\b{x}_{4}$}\psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$} \begin{center} \includegraphics[width=1\columnwidth]{SimCPN2.eps} \caption{\small A non live $\N^{cs}_i$ and the control places (red).} \label{Simcpn} \end{center} \end{figure} In Fig.~\ref{Simcpn}, a possible subnet $\N^{cs}_i$ (containing both choices and synchronizations) of a simplified control PN $\N^{cs}$ is shown in black. This net is composed of two global T-semiflows $\b x_1=2t_1+2t_2+t_3+t_4+t_5+t_6+t_8$ and $\b x_2=2t_2+t_6+t_7+t_9$. Moreover, the control places $p_{t_j}$ added by applying Alg.~\ref{conpla} are represented in red. $\N^{cs}_i$ without the control places is not live because there are choices between \{$t_1, t_2$\}, \{$t_3, t_4$\} and \{$t_5,t_6,t_7$\}, and subsequently synchronizations in $t_8$ and $t_9$ are required (\textbf{Pb.} 2). However, after adding the control places, some places become implicit. In particular, adding $p_{t_1}$ and $p_{t_2}$ with $\b m_0[p_{t_1}]=2$ and $\b m_0[p_{t_2}]=4$, transition $t_1$ can initially be fired twice and transition $t_2$ four times, making place $p_1$ implicit. After removing the implicit place $p_1$, it is possible to combine $p_{t_1}$ with $p_2$ and $p_{t_2}$ with $p_3$.
Now, the control places $p_{t_3}$--$p_{t_7}$ make the previously combined places implicit. Repeating this process iteratively, it is possible to check that the simplified control PN with the added places is live. \begin{lemma} The net system resulting from applying Alg.~\ref{conpla} to a structurally non live subnet $\N^{cs}_i$ is live. \end{lemma} \begin{proof} $\N^{cs}_i$ is a consistent and conservative PN (see the proof of Prop.~\ref{def}) where each transition represents a local T-semiflow of $\N^s$ and each place represents a buffer. The initial marking $\b{m}_0$ of the net is equal to the marking produced by the firing of the check transitions. By applying Alg.~\ref{conpla}, a control place $p_{t_j}$ is added for each conflict transition $t_j$ that is not a check transition. These control places limit the firing of the conflict transitions, making the original input places of these conflicts implicit. This happens because the firing of the check transitions produces, \begin{itemize} \item in their original output places (buffers), enough tokens to completely fire the global T-semiflows; \item in the newly added places $p_{t_j}$, the exact number of tokens for firing the conflict transitions $t_j$ as many times as they appear in the global T-semiflows. \end{itemize} If a place that receives tokens by the firing of check transitions is not a decision (i.e., has only one output transition), it can be fused with the output places of its unique output transition. This procedure can be iterated until the obtained place is a decision. Moreover, the marking of this place is greater than or equal to the marking of the added places $p_{t_j}$ (i.e., the places constraining the firing of the transitions in the conflict). In this way, this place is implicit and can be removed without compromising the liveness of the net. After removing an implicit place, its output transitions $t_j$ are no longer in conflict and each one has only one input place $p_{t_j}$.
Starting now from an input place $p_{t_j}$, the full procedure can be iterated; at the end, $p_{t_j}$ will be removed, being implicit for the next conflict. Finally, the net structure obtained is composed of the check transitions, each connected with one place by a self-loop. So, the resulting net system is live and, consequently, the original net system with the added places is live. \end{proof} Alg.~\ref{conpla} adds some control places and computes an initial marking that forces the liveness of the simplified control PN ($\N^{cs}$). However, the control PN ($\N^c$) is the net used at the control level. So, the newly added control places in $\N^{cs}$ should be translated to $\N^c$. \begin{remark}\label{refine} Considering that the sequence \{$t^l_j\rightarrow p_{x_l}\rightarrow t^l_k$\} in $\N^c$ has been reduced to a single transition $t_{x_l}$ in $\N^{cs}$, for each new control buffer ($b_s$) added in $\N^{cs}$ a homologous buffer ($b_h$) is added in $\N^c$ as follows: \begin{itemize} \item $\b {Pre}^c[b_h,t^l_j]=\b {Pre}^{cs}[b_s,t_{x_l}]$ \item $\b {Post}^{c}[b_h,t^l_k]=\b {Post}^{cs}[b_s,t_{x_l}]$ \item $\b m_0[b_h]=\b m_0[b_s]$ \end{itemize} \end{remark} \textbf{Absence of check transition.} Alg.~\ref{conpla} assumes that for each T-semiflow of $\N^{cs}_i$ it is possible to compute a check transition according to Def.~\ref{cecktr}. Unfortunately, this is not always the case: for some T-semiflows in $\N^{cs}_i$ no check transition may exist. In these situations, it is necessary to fix a \emph{set of check transitions} that together fulfill the conditions in Def.~\ref{cecktr}. One possible brute-force approach for finding a set of check transitions is to analyze all possible subsets of transitions belonging to the support of the analyzed T-semiflow. Once a set of check transitions is fixed, a \emph{virtual check transition} can be added. This new transition must synchronize the set of check transitions. Fig.
\ref{checktran} shows a possible set of check transitions (\{$t_1, t_2, t_3$\}) and the new virtual check transition $t_{ch}$. \begin{figure}[htb] \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_{ch}$}\psfrag{set}{Set of check transitions}\psfrag{add}{Virtual check transition} \begin{center} \includegraphics[width=0.3\columnwidth]{checktran.eps} \caption{\small Adding a virtual check transition $t_{ch}$.} \label{checktran} \end{center} \end{figure} \section{Control policy and system evolution}\label{sec:contr} This section discusses the control policy based on a live control PN system. The idea is based on the following two conditions: \begin{condi} \label{condi1} The firing of a local T-semiflow can only start if all its input buffers have enough tokens to complete the firing of all its transitions. \end{condi} \begin{condi} \label{condi2} When the firing of a local T-semiflow $\b x^i_l$ starts in the agent $\N_i$, it is not possible to fire transitions that are not in the support of $\b x^i_l$ until all transitions of $||\b x^i_l||$ in $\N_i$ are fired. \end{condi} To let $\N^c$ constrain the fireability of the transitions in $\N^s$, guard expressions are used. Each transition in $\N^s$ has an associated guard expression. These guard expressions are logical conditions related to the marking of $\N^c$. A transition $t \in T^s$ can be fired only if its associated guard expression is true. Of course, to fire $t$, it is also necessary that $t$ be enabled in $\N^s$. \emph{Labelling in the Control PN}. Each transition in $\N^c$ is labelled with the name of a transition of $\N^s$. In the process of obtaining $\N^c$, for each local T-semiflow $\b x_l$ in $\N^s$ an ordinary subnet ($t^l_j \rightarrow p_{x_l}\rightarrow t^l_k$) has been added in $\N^c$. The transition $t^l_j$/$t^l_k$ is labelled with the name of the first/last transition of $\b x_l$. In addition, the place $p_{x_l}$ is labelled with the name of the local T-semiflow $\b x_l$.
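The two-condition guard test described next can be prototyped directly from the labelling above. The following Python sketch evaluates a guard against the control-PN marking; the dictionary encoding and the identifiers are illustrative assumptions of ours:

```python
def guard_true(t, mc, pre_c, label_of, semiflow_of_place, semiflows_of):
    """Evaluate the guard of SSP transition t on the control-PN marking mc.
    pre_c: (control place, control transition) -> arc weight;
    label_of: control transition -> SSP transition label;
    semiflow_of_place: control place p_{x_l} -> T-semiflow name;
    semiflows_of: SSP transition -> set of T-semiflow names it belongs to."""
    # Condition 1: some control transition labelled t is enabled in N^c
    for tc, lbl in label_of.items():
        if lbl == t and all(mc.get(p, 0) >= w
                            for (p, tt), w in pre_c.items() if tt == tc):
            return True
    # Condition 2: a place labelled with a T-semiflow containing t is marked
    return any(x in semiflows_of.get(t, set()) and mc.get(p, 0) >= 1
               for p, x in semiflow_of_place.items())
```

Condition 1 lets a T-semiflow start only when its input buffers suffice; condition 2 keeps the guards of an already started T-semiflow true until it finishes.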
\emph{Control policy}. A transition $t$ belonging to $\N^s$ can be fired if it is enabled ($\b m^s \geq \b {Pre}^s[P,t]$) and its guard expression is true. The guard expression associated with $t$ becomes true if at least one of the following conditions is fulfilled in $\N^c$: \begin{itemize} \item there is a transition labelled as $t$ and it is enabled; \item the marking of a place labelled with the name of a local T-semiflow to which $t$ belongs is equal to one. \end{itemize} The first condition prevents starting the firing of a local T-semiflow whose input buffers do not have enough tokens, while the second condition ensures that, once a local T-semiflow has started, all the guard expressions of its transitions become true. \emph{Evolution of the control PN}. The evolution of $\N^c$ is synchronized with the evolution of $\N^s$ (see Fig.~\ref{fig:ControlDiagram} for a schematic view). The events in $\N^s$ (firings of transitions) are considered as input signals in $\N^c$. When a transition $t_k$ is fired in $\N^s$, then in $\N^c$: \begin{itemize} \item if no transition labelled $t_k$ exists, no transition is fired; \item if there exist enabled transitions labelled $t_k$, the controller (scheduler) chooses one and fires it in $\N^c$. \end{itemize} \begin{algorithm}[h]\label{policy} \begin{algorithmic}[1] \REQUIRE A SSP $\langle \N^{s}= \langle P^{s}, T^{s}, \b{Pre}^{s}, \b{Post}^{s} \rangle, \b m^s_0 \rangle$ and its live control PN system $\langle \N^{c}= \langle P^{c}, T^{c}, \b{Pre}^{c},$ $\b{Post}^{c} \rangle, \b m^c_0 \rangle$.
\ENSURE A live evolution of $\langle \N^{s}, \b{m}_0^s \rangle$ through $\langle \N^{c}, \b{m}_0^c \rangle$ \STATE $\b m^s:= \b m^s_0$ \COMMENT{initializing the current state in $\N^{s}$} \STATE $\b m^c:= \b m^c_0$ \COMMENT{initializing the current state in $\N^{c}$} \STATE $T^e:= \emptyset$ \COMMENT{initializing the set of enabled transitions in $\N^s$ at marking $\b m^s$} \STATE $T^f:= \emptyset$ \COMMENT{initializing the set of transitions that can be fired in $\N^s$} \FORALL {$t_i \in T^s$} \IF{$\b m^s \geq \b {Pre}^s[P^s,t_i]$} \STATE $T^e:=T^e \cup \{t_i\}$; \ENDIF \ENDFOR \FORALL {$t_j \in T^e$} \IF{the guard of $t_j$ is true at marking $\b m^c$} \STATE $T^f:=T^f \cup \{t_j\}$ \ENDIF \ENDFOR \STATE A transition $t_k \in T^f$ is fired in $\N^s$ \STATE {If there exists $T' \subseteq T^f$ such that all $t_o \in T'$ have the same label in $\N^c$ as $t_k$, select one as $t_k$.} \STATE $\b m^s:= \b m^s+\b C^s[P^s,t_k]$ \COMMENT{$\b m^s$ is updated} \IF{there exists a transition labelled as $t_k$ in $\N^c$} \STATE a transition labelled as $t_k$ is fired in $\N^c$ \STATE $\b m^c:= \b m^c+\b C^c[P^c,t_k]$ \COMMENT{$\b m^c$ is updated} \ENDIF \STATE go to Step 3 \end{algorithmic} \caption{Control policy and systems evolution on a SSP net $\N^s$ and its control PN $\N^c$.} \end{algorithm} Alg.~\ref{policy} implements the control policy and the system evolution on a SSP net and its control PN.
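A single iteration of this synchronous loop can be sketched in Python over the same kind of toy encoding used above (our illustration; \texttt{c\_s}/\texttt{c\_c} play the role of the token-flow matrices $\b C^s$/$\b C^c$, and transitions without input places are ignored for simplicity):

```python
import random

def control_step(ms, mc, pre_s, c_s, pre_c, c_c, label_of, guard):
    """One iteration of the control policy: fire a guard-approved
    enabled transition in N^s and mirror it in N^c when an enabled
    control transition carries the same label.  pre_* map
    (place, transition) -> arc weight; c_* map the same keys to
    Post - Pre; guard(t, mc) is the guard test."""
    def enabled(pre, m, t):
        return all(m.get(p, 0) >= w for (p, tt), w in pre.items() if tt == t)

    transitions = {t for (_, t) in pre_s}
    fireable = [t for t in sorted(transitions)
                if enabled(pre_s, ms, t) and guard(t, mc)]
    if not fireable:
        return None                       # nothing can be fired
    tk = random.choice(fireable)
    for (p, t), w in c_s.items():         # update the marking of N^s
        if t == tk:
            ms[p] = ms.get(p, 0) + w
    for tc, lbl in label_of.items():      # mirror the event in N^c
        if lbl == tk and enabled(pre_c, mc, tc):
            for (p, t), w in c_c.items():
                if t == tc:
                    mc[p] = mc.get(p, 0) + w
            break
    return tk
```

Iterating \texttt{control\_step} corresponds to the infinite loop of the algorithm (steps 3-21 plus the jump back to step 3).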
\begin{figure}[ht] \psfrag{Input signal (t)}{Input signal ($t_k$)}\psfrag{Event (t)}{Event ($t_k$)} \psfrag{enabeled}{enabled} \psfrag{Control}{Control} \psfrag{m}{$\b m^s$}\psfrag{mc}{$\b {m^c}$} \psfrag{Te}{$T^e$}\psfrag{Tf}{$T^f$} \psfrag{Nc}{$\N^c$}\psfrag{Nd}{$\N^s$} \psfrag{DSSP}{SSP} \psfrag{Enabeled}{Enabled}\psfrag{transitions}{transitions} \begin{center} \includegraphics[width=1\columnwidth]{Control_diagram.eps} \caption{\small Control diagram of a SSP structure using a control PN} \label{fig:ControlDiagram} \end{center} \end{figure} \ignore{Fig.~\ref{fig:ControlDiagram} shows the control flow diagram of the proposed liveness enforcement approach for a SSP structure ($\N^s$) using a control PN ($\N^c$). The control policy depends on the markings of $\N^s$ and $\N^c$. The marking of $\N^c$ inhibits some enabled transitions to be fired in $\N^s$ if a deadlock could appear. A transition $t$ is chosen from the set of control-enabled transitions.} \ignore{ \begin{figure*} \begin{center} \centering \psfrag{pt1}{$\color{red}{p_{t_1}}$}\psfrag{pt2}{$\color{red}{p_{t_2}}$}\psfrag{pt3}{$\color{red}{p_{t_3}}$}\psfrag{pt4}{$\color{red}{p_{t_4}}$} \psfrag{pt5}{$\color{red}{p_{t_5}}$}\psfrag{pt6}{$\color{red}{p_{t_6}}$}\psfrag{pt7}{$\color{red}{p_{t_7}}$} \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$}\psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$}\psfrag{t9}{$t_9$}\psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$}\psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$}\psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$}\psfrag{p7}{$p_7$}\psfrag{p8}{$p_8$}\psfrag{b1}{$b_{1}$}\psfrag{b2}{$b_{2}$}\psfrag{b3}{$b_{3}$}\psfrag{b4}{$b_{4}$}\psfrag{x1}{$p_{x_{1}}$}\psfrag{x2}{$p_{x_2}$}\psfrag{x3}{$p_{x_3}$}\psfrag{x4}{$p_{x_4}$}\psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$}\psfrag{bN1}{$p_{\N_1}$}\psfrag{bN2}{$p_{\N_2}$}\psfrag{bN3}{$p_{\N_3}$}\psfrag{N3}{$\N_3$} \centering 
\subfigure[]{\includegraphics[width=0.75\columnwidth]{AdvanceOut.eps}\label{fig:AdvanceOut}}\hspace{0.1\textwidth} \centering \subfigure[]{\includegraphics[width=0.75\columnwidth]{AdvanceOut2.eps}\label{fig:AdvanceOut2}} \caption{\small A Control PN where: a) the production of tokens is generated from the last transition of the sequences; b) the production of tokens is generated from the first transition of the sequences} \label{fig:Advance} \end{center} \end{figure*}} \begin{theorem} Let $\N^s$ be a non-live SSP structure and $\N^c$ the structurally live control PN obtained by Alg. \ref{alg:2}, possibly after applying Alg. \ref{conpla} to force its structural liveness. If the initial marking of the buffers in $\N^s$ is greater than or equal to the initial marking of the buffers that makes $\N^c$ live, then, by following the control policy described in Alg.~\ref{policy}, the controlled SSP is live. \end{theorem} \begin{proof} If the control PN $\N^c$ is structurally live, then by definition there exists an initial marking $\b{m}_0$ that makes the net system $\langle \N^c, \b{m}_0 \rangle$ live. Since $\N^c$ represents the relations between the T-semiflows of $\N^s$ and the buffers, setting the marking of the buffers in $\N^s$ equal to the marking of the buffers that makes $\N^c$ live allows the firing of all global T-semiflows in isolation (which does not by itself imply that $\N^s$ is live). The control policy in Alg. \ref{policy} first computes the set $T^e$ of enabled transitions of the SSP $\N^s$ (steps 5-9). Then the set $T^f$ is obtained from $T^e$ by removing those transitions whose guard expressions are false (steps 10-14). All transitions in $T^f$ are enabled in $\N^s$ and their guard expressions are true. A guard expression is true if Condition \ref{condi1} or Condition \ref{condi2} holds.
Condition \ref{condi1} ensures that if the first transition of a local T-semiflow is in $T^f$, then its firing means that a local T-semiflow starts to fire and there exist enough tokens in all its input buffers to fire it completely. If the first transition of a local T-semiflow is fired in $\N^s$ (i.e., it is chosen from the set $T^f$ in steps 15-17), then the transition labelled with the same name is also fired in $\N^c$ (steps 18-21), and consequently a token is generated in the place labelled with the name of the local T-semiflow in $\N^c$. Notice that, if several transitions with the same label exist in $\N^c$, the scheduler selects one of them to fire (step 16). If a transition of $T^f$ is not the first transition of a local T-semiflow, Condition \ref{condi2} ensures that it belongs to the support of a local T-semiflow that has already started to fire. This is done by simply checking the marking of the corresponding place of $\N^c$. When the last transition of a local T-semiflow is fired, the waiting place is marked again and a new local T-semiflow can start firing. As $\N^s$ is consistent and conservative, there exist firing sequences of the local T-semiflows that fire the global T-semiflows. The relations between the T-semiflows and the buffers are modeled in the control PN $\N^c$. Since $\langle \N^c, \b{m}_0 \rangle$ is live, it allows the firing of only those local T-semiflows that will not produce livelocks, enforcing the liveness of the controlled SSP. \end{proof} The computational complexity of the proposed liveness enforcement approach is exponential because it is necessary to compute the set of global minimal T-semiflows. However, this computation is done only once, at the beginning of the approach, and does not need to be iterated, as is the case, for example, when controlling bad siphons.
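Although enumerating the minimal T-semiflows is exponential, verifying the T-semiflow property for a given candidate vector is cheap: a nonnegative, nonzero vector $\b x$ is a T-semiflow iff $\b C \cdot \b x = \b 0$, i.e., firing each transition $x_i$ times reproduces the marking. A minimal illustrative sketch (the two-transition cyclic net is a toy example, not one of the nets in the paper):

```python
import numpy as np

def is_t_semiflow(C, x):
    """x is a T-semiflow of the net with incidence matrix C iff
    x is nonnegative and nonzero and C @ x = 0 (marking-reproducing)."""
    x = np.asarray(x)
    return bool(np.any(x) and np.all(x >= 0) and np.all(C @ x == 0))

# Toy cycle p1 --t1--> p2 --t2--> p1: incidence matrix (places x transitions)
C = np.array([[-1,  1],
              [ 1, -1]])
```

Here firing $t_1$ and $t_2$ once each ($\b x = (1,1)^\top$) returns every marking to itself, so $\b x$ is a T-semiflow, while $(1,0)^\top$ is not.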
\begin{example} In $\N^s$ of Fig.~\ref{fig:DSSPpb1}, transitions $t_1$, $t_{9}$, $t_5$ and $t_{12}$ are enabled, but only $t_1$, $t_{9}$ and $t_5$ can be fired because in its control PN $\N^c$, given in Fig.~\ref{fig:control1}, the transition labelled as $t_{12}$ ($t^{9}_{12}$) is not enabled. In this way, only the local T-semiflows whose input buffers have enough tokens for completing their firing are allowed to start. Let us assume that $t_5$ (Fig.~\ref{fig:DSSPpb1}) is fired in $\N^s$: \begin{itemize} \item The enabled transition $t^8_5$ (labelled as $t_5$) is automatically fired in $\N^c$. \item Now in $\N^s$, the transitions $t_1$, $t_{9}$, $t_4$ and $t_{6}$ are enabled. \item In $\N^c$, the transitions labelled as $t_1$ ($t^4_1$ and $t^5_1$) and $t_9$ ($t^6_9$) are enabled, so $t_1$ and $t_9$ can be fired in $\N^s$. Moreover, in $\N^c$ the marking of the place $p_{x8}$ labelled as $x_8$ is equal to 1, so transition $t_{6} \in ||x_8||$ can be fired in $\N^s$. However, in $\N^c$ the marking of the place $p_{x7}$ labelled as $x_7$ is equal to 0, so transition $t_4 \in ||x_7||$ cannot be fired. Let us assume that $t_6$ is fired in $\N^s$. \item The firing of $t_6$ in $\N^s$ does not imply any change in $\N^c$ because there exists no transition labelled $t_6$ in this net: $t_6$ belongs to the support of a local T-semiflow but is neither its first nor its last transition. \end{itemize} \end{example} \ignore{ In order to reduce the restrictiveness of the approach, a relaxation is achieved in the control PN: the production of tokens from the sequences $t_a\rightarrow p \rightarrow t_b$ to the output buffers is advanced from transition $t_b$ to transition $t_a$. In this way, from the moment when the sequence $t_a\rightarrow p \rightarrow t_b$ start firing, the tokens are produced in the output buffers. These tokens allow that other T-semiflows can start its firing despite the fact that in the DSSP net the input buffers do not have enough tokens yet.
Condition \ref{condi2} of the proposed control policy imposes that when a local T-semiflow start its firing in $\N^s$, all its transition must be fired until the end. In this way, the advance of the token production in the buffers of $\N^c$ do not compromise the liveness of the controlled system due to it is guaranteed (by condition 2) that the same token production will be made in the corresponding buffers of $\N^s$. } \ignore{ \section{Composition of SSP system and control PN system} Our approach is based on obtaining a control PN $\N^c$ that through guard expressions limit the firing of transitions in the SSP net $\N^s$. In this way, $\N^c$ can be seen as a high level scheduler that guides the firing of transitions in $\N^s$. Having a control PN is a great advantage in the case of complex systems, since $\N^c$ gives an overview of what is happening in the system. However, working with smaller systems could be interesting to have a unique net including $\N^s$ and $\N^c$. The idea is to impose the same initial conditions (1 and 2) that have been imposed through the control policy. To achieve this, some modifications are made in $\N^s$. First, in order that each local T-semiflow has its own ``first'' transition, some transitions are duplicated. Then the input buffers of each local T-semiflow are pre-assinged to its first transitions. At this point, condition 1 is fulfilled. Finally, the second condition is imposed by new places that are included in order to ``guide'' the complete firing of a local T-semiflow when its first transition has been fired. The steps to obtain this unique net system are as follows. \begin{enumerate} \item \emph{Duplicate first transitions}. Each first transition belonging to more than one local T-semiflow is duplicated as many times as the number of local T-semiflows to which belongs. \item \emph{Pre-assignment of the buffers}. The input buffers of each local T-semiflow are pre-assinged to its first transition. 
\item \emph{Include new places}. For each conflict transition $t_i$ (which is not a first transition), a place $p_{t_i}$ is added such that $p_{t_i}$ limits the firing of $t_i$ ($\b {Pre}[p_{t_i},t_i]=1$). Moreover, from each first transition ($t^k_j$) of the local T-semiflows to which $t_i$ belongs, there is an arc to $p_{t_i}$ ($\b {Post}[p_{t_i},t^k_j]=1$). \item \emph{Obtain the live control PN and ensure or force its liveness}. It is necessary to know if control buffers are needed to force the liveness in the control PN. \item \emph{Include the control buffers}. If control buffers have been introduced to force the liveness of the control PN, these buffers must be added. \end{enumerate} } \ignore{ Fig. \ref{DSSPunique} shows the equivalent controlled system $\N^e$ obtained from applying Step 1-5 to the non live SSP system $\N^s$ in Fig. \ref{fig:DSSPpb1}. The modifications performed are showed in different colors. In step 1 (blue) $t_1$ and $t_5$ have been transformed in two pairs of transition: \{$t^4_1$,$t^5_1$\} and \{$t^7_5$,$t^8_5$\}. In step 2 (green), the input buffers of the local T-semiflows has been preassigned to its first transitions. For example, $b_2$ is an input buffer of $\b x_7$ pre-assinged to its first transition $t^7_5$. Finally, in step 3 (red) for each transition in conflict relation that is not a first transition (\{$t_2$,$t_8$\} and \{$t_4$,$t_6$\}), a new place ($p_{t_2}$, $p_{t_8}$, $p_{t_4}$ and $p_{t_6}$) has been added. Step 5 is not applied because in Step 4 we obtain a live control PN (Fig. \ref{fig:control1}) without adding control buffers. In the equivalent controlled net $\N^e$ of Fig. \ref{DSSPunique}, a local T-semiflow only can start if its input buffers have enough tokens to fire all its transitions. Moreover, once a local T-semiflow has started, all its transitions are fired. For example, $\b x_8$ can only start when its input buffer $b_3$ has a token because $b_3$ has been pre-assigned to transition $t^8_5$ . 
Once $\b x_8$ has started its firing by transition $t^8_5$, a token is produced in $p_{t_6}$. This token will enable the conflict transition $t_6$ and will guide the agent to fire the local T-semiflow $\b x_8$ completely.} \begin{figure}[ht] \begin{center} \psfrag{t1}{$t_1$}\psfrag{t2}{$t_2$}\psfrag{t3}{$t_3$}\psfrag{t4}{$t_4$} \psfrag{t5}{$t_5$}\psfrag{t6}{$t_6$}\psfrag{t7}{$t_7$}\psfrag{t8}{$t_8$} \psfrag{t9}{$t_9$}\psfrag{t10}{$t_{10}$}\psfrag{t11}{$t_{11}$}\psfrag{t12}{$t_{12}$} \psfrag{t13}{$t_{13}$}\psfrag{t14}{\color{blue}{$t^4_1$}}\psfrag{t15}{\color{blue}{$t^5_1$}} \psfrag{p1}{$p_1$}\psfrag{p2}{$p_2$}\psfrag{pt2}{\color{red}$p_{t_2}$}\psfrag{pt8}{\color{red}$p_{t_8}$}\psfrag{pt4}{\color{red}$p_{t_4}$}\psfrag{pt6}{\color{red}$p_{t_6}$} \psfrag{t54}{\color{blue}{$t^7_5$}}\psfrag{t55}{\color{blue}{$t^8_5$}} \psfrag{p3}{$p_3$}\psfrag{p4}{$p_4$} \psfrag{p5}{$p_5$}\psfrag{p6}{$p_6$} \psfrag{p7}{$p_7$}\psfrag{p8}{$p_8$} \psfrag{p9}{$p_9$}\psfrag{p10}{$p_{10}$} \psfrag{p11}{$p_{11}$} \psfrag{b1}{$b_1$}\psfrag{b2}{$b_2$} \psfrag{b3}{$b_3$}\psfrag{b4}{$b_4$}\psfrag{b5}{$b_5$} \psfrag{N1}{$\N_1$}\psfrag{N2}{$\N_2$} \includegraphics[width=.8\columnwidth]{DSSPpb1uniqueef.eps} \caption{PN system obtained by doing the synchronous composition of the SSP in Fig. \ref{fig:DSSPpb1} with its control PN in Fig. \ref{fig:control1}.}\label{DSSPunique} \end{center} \end{figure} \ignore{ \begin{remark} This paper presents the first method of enforcing liveness of SSP systems that keeps the distributed property of the net. As we mentioned before, the siphon-based method cannot be applyed in general, not only because the SSP will not be distributed but also because the SSP arte not ordinary. Moreover, the approach from RAS are also not aplicable in general since the class of systems are not comparable, as discussed in Section \ref{siphonbased}. However, the main drawback of the approach is the permissimibility of the approach as it can be seen in Tab. 
\ref{tab:simul} where are shown the number of reachable markings of the original not live SPP in Fig. \ref{fig:DSSPpb1}, the number of reachable markings after applying the siphon based method and the reachable marking of the net obtained after applying the approach in this paper (Fig. \ref{DSSPunique}). \end{remark}} \section{Conclusions}\label{sec:con} Synchronized sequential processes (SSP) are modular PN systems used for the modeling and analysis of systems composed of distributed cooperating sequential processes. This paper presents a liveness enforcement strategy for SSP systems formalized on two levels: \emph{execution} and \emph{control}. At the execution level the original SSP system evolves conditioned by the control level, a kind of scheduler: each transition in the SSP net has an associated guard expression which depends on the state of the control level. A \emph{control PN} is obtained from the SSP structure by using Alg. \ref{alg:2}. The control PN models the consumption/production relation between the buffers and the local T-semiflows of the SSP net. If the control PN is not live, Alg. \ref{conpla} enforces its liveness by adding some control places. Both the SSP and the control PN evolve synchronously following the control policy synthesized by Alg. \ref{policy}. To control is always to constrain the possible behaviors of the plant. Since the method presented in this paper constrains the firing of the local T-semiflows of the global T-semiflows (a global T-semiflow is not allowed to start firing again until all its local T-semiflows have been fired in the corresponding proportions), the permissiveness of this approach may be lower than that of other approaches in the literature. For example, by the synchronous composition of the non-live SSP system in Fig. \ref{fig:DSSPpb1} with its supervisory control PN system in Fig. \ref{fig:control1}, the net system in Fig. \ref{DSSPunique} is obtained. \begin{table}[htb] \centering \caption{Simulation results for the SSP net system in Fig.
\ref{fig:DSSPpb1} } \label{tab:simul} \begin{tabular}{|c|c|c|} \hline \textbf{Net system} & \textbf{\# Reachable markings} & \textbf{\# Livelock markings} \\ \hline SSP (Fig. \ref{fig:DSSPpb1}) & 180 & 13 \\ \hline SSP + monitors & 139 & 0 \\ \hline SSP + control (Fig. \ref{DSSPunique}) & 94 & 0 \\ \hline \end{tabular} \end{table} Tab. \ref{tab:simul} shows the number of reachable markings of the non-live SSP in Fig. \ref{fig:DSSPpb1}, the number of reachable markings of the live net system obtained by controlling the bad siphons (a method well understood in RAS) and the number of reachable markings of the composition of the SSP with the control net proposed in this paper. The number of reachable markings is in general greater if the bad siphons are controlled. Nevertheless, as discussed in Section \ref{sec:dssp}, the distributedness of the system is unfortunately lost. To the best of our knowledge, the approach in this paper is one of the first dealing with liveness enforcement in this class that keeps the controlled system an SSP (hence preserving the distributedness property). As future work, the permissiveness of the approach with respect to the number of reachable markings will be improved. By using the Synchrony Theory \cite{TRPetri76,ICSi87}, the liveness of the control PN could be enforced and, in some cases, a more permissive approach than the one presented in Sec. \ref{sec:livefor} could be obtained. Another line of future work is to study the permissiveness with respect to the throughput. Notice that timed SSP can be non-monotone in throughput (contrary to DSSP systems), so a reduction in the number of reachable markings does not necessarily degrade the throughput, or at least not in the same proportion.
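The reachable-marking counts compared in Tab.~\ref{tab:simul} come from exhaustive state-space exploration; for a bounded net this is a plain breadth-first search over markings. A minimal illustrative sketch (the two-place cyclic net used here is a toy example, not one of the nets in the paper):

```python
import numpy as np
from collections import deque

def reachable_markings(Pre, Post, m0):
    """Enumerate the reachability set of a bounded Petri net by BFS:
    from each marking m, fire every enabled transition t (m >= Pre[:, t])
    and collect the successor m + C[:, t]."""
    C = Post - Pre
    seen = {tuple(m0)}
    queue = deque([np.array(m0)])
    while queue:
        m = queue.popleft()
        for t in range(Pre.shape[1]):
            if np.all(m >= Pre[:, t]):
                m2 = m + C[:, t]
                if tuple(m2) not in seen:
                    seen.add(tuple(m2))
                    queue.append(m2)
    return seen
```

On the toy cycle $p_1 \rightarrow t_1 \rightarrow p_2 \rightarrow t_2 \rightarrow p_1$ with one token, the reachability set has exactly two markings, $(1,0)$ and $(0,1)$.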
\section{Introduction} Autonomous vehicles, especially Unmanned Aerial Vehicles (UAVs), have attracted tremendous research interest in recent years due to their potential in improving efficiency and safety in many military and civilian applications \cite{cai2014survey}. {\color{blue} One fundamental prerequisite for an autonomous vehicle to successfully execute missions is to accurately and reliably estimate its 6-DOF pose \cite{Zhang2015Visual}, which requires delicate algorithm development and systematic integration. A detailed review of various localization systems is provided in \cite{yuan2021survey}.} Due to physical and electromagnetic interferences, GPS systems may not provide persistent and reliable localization information in many complex environments, known as GPS-denied environments, making it a challenging task for a vehicle to carry out missions autonomously. There are mainly two approaches to handle localization in a GPS-denied environment. The first approach is the simultaneous localization and mapping (SLAM) framework \cite{durrant2006simultaneous}, which incrementally tracks the local pose by estimating the relative transformation between two observation frames. The main drawback of a SLAM-based method is that the localization result drifts over time due to the accumulated relative pose estimation errors \cite{strasdat2010scale}. {\color{blue} One cause of the relative pose estimation error is that the salient point features, on which many localization methods depend, are insufficient in many challenging civilian environments.} {\color{blue}Line and plane features are considered as promising supplements to point features for robot localization, especially in many civilian environments.} First, lines and planes are more structurally salient and therefore can be endowed with more prior information.
Secondly, they are more stable than points with respect to lighting/texture changes. Finally, lines and planes are more ubiquitous in a structure-rich environment, such as most infrastructures in urban cities. Thus, it is a good practice to additionally integrate line and plane features in the localization framework for many civil applications. Another alternative is to localize the vehicle according to a prior map by matching the local observations to the map directly \cite{sattler2011fast}. Although this method has no drift issue, it is not easy to apply it on its own to achieve reliable localization in many challenging environments. {\color{blue}First, prior-map-based localization requires a high-fidelity map that may not be available in many scenarios. Second, even if there is a prior map, the association between local observations and map information is usually sophisticated and time-consuming due to large differences in observation view angle, data resolution, and even data format \cite{sattler2018benchmarking,muhlfellner2016summary}.} Reflecting on the pros and cons of the two distinct approaches, it is a good practice to combine them, namely, to borrow some prior information from the map to aid the SLAM-based navigation. A loosely coupled framework is proposed in \cite{platinsky2020collaborative}, which incorporates global pose factors in the local SLAM optimization. The global pose is obtained by matching images with a global map. Article \cite{middelberg2014scalable} uses a locally pre-stored 3D point cloud map to fix the drift of a local SLAM system in a tightly coupled framework. Similar works are also presented in \cite{Zuo2020Multi,mur2017visual} based on different sources of maps. {\color{blue}Although the combination methods above demonstrate improved localization performance, their frequent and indiscriminate integration of map prior information may severely slow down the localization process.
Therefore, it is important to determine what information to integrate into the SLAM process to strike a balance between localization performance and efficiency.} {\color{blue}Inspired by the discussions above, this paper proposes a Structure Priors aided Inertial Navigation System (SPINS) that combines SLAM-based and prior-map-based localization methods by lending feature-level structure prior information to restrain the drift of the SLAM-based methods. To deal with challenges such as feature deficiency and prior information association, and to achieve an efficient combination of the two methods in civilian environments with rich structure information, we make the following contributions:} \begin{enumerate} \item Firstly, we extend our previous work \cite{lyu2022structure} and further relieve the feature deficiency problem in civil environments by integrating 3D point, line, and plane features from various sensor modalities. The heterogeneous-feature-based localization is modeled in a sliding-window fashion and solved with a graph optimization method. \item Secondly, with the heterogeneous features in use, we integrate a more generic and broader range of prior information, which we name \textit{structure priors} and parameterize as the relative distances/angles between different geometric primitives. The association between observations and the map is therefore simplified to the scalar level. To ease the burden of integrating too many factors, we develop a structure prior information selection strategy based on the \textit{information gain} to incorporate only the most effective structure priors for localization. \item Finally, we test our proposed framework extensively based on synthetic data, public datasets, and real UAV flight data obtained from both indoor and outdoor inspection tasks.
The results indicate that the proposed framework can improve the localization robustness even in challenging environments. \end{enumerate} The remainder of the paper is organized as follows. In Section \ref{related_works}, the related works are reviewed. The proposed SPINS framework is formulated in Section \ref{system_description}. The geometric features and structure priors are modeled in Section \ref{features} and Section \ref{priors}, respectively. Experimental validations are provided in Section \ref{experiment}. Section \ref{conclusion} concludes the paper. \section{Related works} \label{related_works} SLAM-based localization is considered one of the most promising approaches for robot localization. By measuring salient features from the environment, the robot can accumulatively estimate its pose in local coordinates, which is preferred in many challenging environments, such as complex indoor scenes or urban cities \cite{weiss2011monocular}. In this part, we review the recent SLAM results based on different features and structure priors. Among existing methods reported in the literature, the point feature is the most commonly used in the environment perception front-end of SLAM frameworks, especially in visually aided navigation frameworks. Some successful demonstrations, such as ORB-SLAM \cite{Mur2015ORB} and VINS-mono \cite{Qin2018VINSMono}, utilize 2D point features from vision sensors to estimate the local motion and sparse 3D point clouds of the environment. With the development of more advanced 3D sensing technologies, navigation methods based on 3D sensors, such as stereo vision \cite{lemaire2007vision}, LiDAR \cite{zhang2014loam}, and RGB-D cameras \cite{sturm2012benchmark}, can directly utilize 3D points with metric information in the estimation process and therefore provide improved localization and reconstruction results.
In addition to the most commonly used point features, line and plane features are also adopted as additional features in some mission scenarios, such as indoor servicing \cite{Padhy2019Monocular,Lu_2015_ICCV}, structure inspection \cite{hasan2016construction,nguyen2020liro}, and autonomous landing \cite{Andres2017Homography}, in case point features are not sufficient. In the past few years, SLAM methods using heterogeneous features have begun to draw researchers' attention. The improvement of localization performance has been verified by works with different feature combinations. In vision-based SLAM, line features are considered more effective than point features in a low-textural but highly structural environment with a proper definition of the state and the re-projection error. Extended from ORB-SLAM, the PL-SLAM \cite{pumarola2017pl} can simultaneously handle both point and line correspondences. The line state is parameterized with its endpoints, and the re-projection error is defined as the point-to-line distances between the projected endpoints of the 3D line and the observed line on the image plane. A tightly coupled Visual Inertial Odometry (VIO) exploiting both point and line features is proposed in \cite{he2018pl}. The line is parameterized as a six-parameter Pl{\"u}cker coordinate \cite{hodge1994methods} for transformation and projection simplicity, and as a four-parameter orthonormal representation for optimization compactness. A re-projection error similar to \cite{pumarola2017pl} is utilized in \cite{he2018pl,lyu2022structure}. Comparisons of different line feature parameterizations are provided in \cite{Yang2019Visual} based on the MSCKF SLAM framework, which show that the closest point (CP) based \cite{yang2019aided} and quaternion based \cite{kottas2013efficient} representations outperform the Pl{\"u}cker representation under noisy measurement conditions.
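As an aside, the six-parameter Pl{\"u}cker coordinate mentioned above can be constructed from any two points on the line. The following sketch is an illustration under standard conventions, not the parameterization code of \cite{he2018pl}: the coordinate is the pair of the moment vector $\mathbf{n} = \mathbf{p} \times \mathbf{q}$ and the direction vector $\mathbf{v} = \mathbf{q} - \mathbf{p}$, and $|\mathbf{n}|/|\mathbf{v}|$ recovers the origin-to-line distance.

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (n, v) of the 3D line through points p and q:
    v = q - p is the direction, n = p x q is the moment vector."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    v = q - p
    n = np.cross(p, q)
    return n, v

def origin_distance(n, v):
    """Distance from the coordinate origin to the line: d = |n| / |v|."""
    return np.linalg.norm(n) / np.linalg.norm(v)
```

For the vertical-free line through $(1,0,0)$ and $(1,1,0)$, this gives $\mathbf{v}=(0,1,0)$, $\mathbf{n}=(0,0,1)$, and an origin distance of $1$, as expected for the line $x=1, z=0$.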
Besides the monocular-based methods above, a stereovision-based VIO using point and line features together is proposed in \cite{zheng2018trifo}, where the measurement model is directly extended from a monocular camera model similar to \cite{pumarola2017pl}. In the stereo-vision-based PL-SLAM framework \cite{gomez2019pl}, a visual odometry is formulated similarly to \cite{zheng2018trifo}. In addition, key-frame selection and loop closure detection under the point-and-line feature setup are also provided. As 3D sensors such as LiDAR or RGB-D cameras become more popular, plane features can now be effectively extracted in a man-made environment. A real-time LiDAR Odometry and Mapping (LOAM) method is proposed in \cite{zhang2014loam}, which utilizes plane features to improve the registration accuracy of a point cloud. A LiDAR-inertial SLAM framework based on 3D planes is proposed in \cite{geneva2018lips}, where the closest point representation is utilized for parameterizing a plane. A tightly coupled vision-aided inertial navigation framework combining point and plane features is proposed in \cite{yang2019tightly}, where a plane is parameterized similarly to \cite{geneva2018lips}. In addition, the point-on-plane constraint is incorporated to improve the VINS performance. Recently, point, line, and plane features have been jointly applied in SLAM frameworks \cite{zhang2019structure,yang2019observability,li2020leveraging,Aloise2019Systematic} to realize stable localization and mapping in low-texture environments. In \cite{zhang2019structure}, the line and plane features are tracked simultaneously along a long distance to provide persistent measurements. Also, the relationships between features, such as co-planar points, are exploited to enforce structure awareness. A similar work \cite{li2020leveraging} utilizes lines and planes to improve the feature richness and incorporates more spatial constraints to realize a more robust visual inertial odometry.
A pose-landmark graph optimization back-end is proposed in \cite{Aloise2019Systematic} based on the three types of features, which are handled in a unified manner in \cite{Nardi2019Unified}. A thorough theoretical analysis of implementing point, line, and plane features in VINS is provided in \cite{Yang2019Visual}, where the three kinds of features are parameterized as measurements to estimate the local state based on a recursive MSCKF framework. More importantly, the observability analysis of different combinations of features is provided, and the effect of degenerate motion is studied. Although researchers have begun to introduce heterogeneous geometric features into their SLAM works, integrating prior map information into the localization still draws limited attention. There are mainly two types of prior information that can be obtained from a prior map to aid the SLAM, namely the global information and the local structure information. By incorporating global pose constraints obtained by matching local observations with a consistent global map, the SLAM drift can be reduced. Inspired by this, the global information is incorporated in the local SLAM in both a loosely coupled manner \cite{platinsky2020collaborative} and a tightly coupled manner \cite{middelberg2014scalable}. On the other hand, only limited structure prior information, such as point-on-line, line-on-plane, and point-on-plane constraints, is considered in the SLAM. A visual inertial navigation system that utilizes both point features and actively selected line features on the image plane is proposed in \cite{lyu2022structure}, where structure prior information based on special point and line relationships is utilized to improve the localization performance. Note that structure information is ubiquitous in civil environments that are rich in man-made objects.
It is practical wisdom to exploit more general structure prior information of the environment to improve the localization and mapping quality rather than to consider the environment as entirely unknown. For instance, in a building inspection environment, the structure information, as elaborate as CAD models, or as coarse as common knowledge such as flat planes and parallel lines, can be exploited, with proper parameterization, to aid the localization process \cite{hasan2016construction,Jovan2016Matching}. \section{System description} \label{system_description} In this section, the SPINS is described from a systematic point of view. The functional blocks of the system are illustrated in Fig. \ref{fig:system-flowchart}. The SPINS depends on three types of information to fulfill the task of accurately and reliably localizing an autonomous vehicle in a challenging civilian environment, namely 1) ego-motion measurements from interoceptive sensors, such as the Inertial Measurement Unit (IMU), streaming at high frequency, 2) detected/tracked point, line, and plane features from exteroceptive sensors, and 3) structure priors, which are parameterized as pairwise high-fidelity measurements between features.
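For concreteness, two representative pairwise quantities of the kind SPINS uses as structure priors — a point-to-plane distance and a line-to-plane angle — can be sketched as scalar measurements between geometric primitives. This is an illustrative sketch with hypothetical helper names; the exact parameterization of the priors is the subject of Section \ref{priors}.

```python
import numpy as np

def point_plane_distance(p, n, d):
    """Signed distance from point p to the plane n·x + d = 0, |n| = 1."""
    return float(np.dot(n, p) + d)

def line_plane_angle(v, n):
    """Angle (radians) between a line with direction v and a plane
    with unit normal n: arcsin(|v·n| / |v|)."""
    s = abs(np.dot(v, n)) / np.linalg.norm(v)
    return float(np.arcsin(np.clip(s, 0.0, 1.0)))
```

Because each prior reduces to a single scalar, associating an observed primitive with a prior amounts to a one-dimensional threshold check rather than a high-dimensional feature-matching step.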
\begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/flowchart.jpg} \caption{The systematic functional blocks of the proposed SPINS framework.} \label{fig:system-flowchart} \end{figure} \begin{figure} \centering \subfigure[A geometric representation of point, line and plane features.]{\label{fig:sub-first}\includegraphics[width=0.5\linewidth]{fig/3d_structure-crop}} \subfigure[The corresponding factor graph of subfigure (a).]{\label{fig:sub-second}\includegraphics[width=0.5\linewidth]{fig/factor_graph-crop}} \caption{The factor graph including the structure priors.} \end{figure} \subsection{Optimization formulation} The aforementioned three types of information within a sliding window are incorporated into a factor graph, which is a bipartite graph containing variables and factors. Specifically, the variables represent the states of the local vehicle and the heterogeneous features, and the factors encode different types of observations and structure priors. The states included in the sliding window at time $t$ are defined as \begin{equation} \label{state_def} \mathcal{X}_t \triangleq \begin{bmatrix} \{\xbf^\top_{I,m}\}_{m\in T_t} & \{\xbf^\top_{i}\}_{i\in P_t} &\{\xbf^\top_{j}\}_{j\in L_t} &\{\xbf^\top_{k}\}_{k\in \Pi_t} \end{bmatrix}^\top, \end{equation} where $\{\xbf_{I,m}\}_{m\in T_t}$ contains the active IMU states within the sliding window at time instant $t$, and $T_t$ denotes the set of IMU measurements at $t$. $P_t, L_t, \Pi_t$ denote the sets of point, line and plane features that are observed within the sliding window at time $t$, respectively. The IMU state is \begin{equation} \xbf_I\triangleq \begin{bmatrix} {^I_G}\bar{q}^\top & {^G}\pbf^\top_I & {^G}\vbf^\top_I & \bbf_g^\top & \bbf_a^\top \end{bmatrix}^\top, \end{equation} where ${^I_G}\bar{q}$ is a unit quaternion denoting the rotation from the global frame $\{G\}$ to the IMU frame $\{I\}$. ${^G}\pbf_I$ and ${^G}\vbf_I$ are the IMU position and velocity, respectively.
$\bbf_g$, $\bbf_a$ are the random-walk biases of the gyroscope and accelerometer, respectively. With the state definition (\ref{state_def}), the objective is to minimize the cost function of the different measurement residuals in (\ref{ful_cost}). \begin{equation} \begin{aligned} \label{ful_cost} \min\limits_{\mathcal{X}_t} &\left\{\|\rbf_{p}\|^2_{\Pbf_{p_t}} + \sum_{m\in T_t}\|\rbf_{I,m}\|^2_{\Sigma_m}\right.\\ &+ \sum_{i\in P_t}\rho(\|\rbf_{i}\|^2_{\Sigma_i}) + \sum_{j\in L_t}\rho(\|\rbf_{j}\|^2_{\Sigma_j}) + \sum_{k\in \Pi_t}\rho(\|\rbf_{k}\|_{\Sigma_k}^2) \\ &\left.+ \sum_{s\in \mathcal{S}_t}\rho(\|\rbf_s\|_{\Sigma_{s}}^2)\right\}. \end{aligned} \end{equation} The first term of (\ref{ful_cost}) is the cost on the prior estimation residuals, and $\Pbf_{p_t}$ is the corresponding covariance prior to the optimization at $t$ \cite{kaess2012isam2}. The second term is the cost of the IMU-based residuals, where $\rbf_{I,m}$ is the measurement residual between active frames $m$ and $m+1$. The IMU measurement between time steps $m$ and $m+1$ is obtained by continuously integrating high-frequency raw IMU measurements with the IMU preintegration technique \cite{Forster2017Manifold}, and $\Sigma_m$ is the corresponding measurement covariance. The second line of (\ref{ful_cost}) contains the costs of the measurement residuals of point, line, and plane features, weighted by their corresponding covariances. The third line is the cost of the structure prior measurement residuals. For a measurement in Euclidean space, the residual term is defined as the difference between the predicted measurement based on the estimated state $\hat{\mathcal{X}}$ and a real measurement $\zbf$, as \begin{equation} \rbf = h(\hat{\mathcal{X}}) - \zbf, \end{equation} where $h(\cdot)$ is the measurement prediction function for the estimated state between any two variables in the factor graph.
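As a minimal numerical sketch, one robustified cost term $\rho(\|\rbf\|^2_{\Sigma})$ of (\ref{ful_cost}) can be computed as follows (the residual, covariance, and Huber threshold $\delta=1$ are hypothetical illustration values, not quantities from our system):

```python
import numpy as np

def mahalanobis_sq(r, Sigma):
    """Squared Mahalanobis norm ||r||^2_Sigma = r^T Sigma^{-1} r."""
    return float(r @ np.linalg.solve(Sigma, r))

def huber(s, delta=1.0):
    """Huber kernel rho(.) applied to a squared residual s.

    Quadratic for small residuals, linear in sqrt(s) for large ones,
    which down-weights outlier measurements."""
    if s <= delta ** 2:
        return s
    return 2.0 * delta * np.sqrt(s) - delta ** 2

# One robustified cost term rho(||r||^2_Sigma) with hypothetical values.
r = np.array([0.3, -0.1, 0.2])
Sigma = np.diag([0.01, 0.01, 0.04])
term = huber(mahalanobis_sq(r, Sigma), delta=1.0)
```

The piecewise form makes the influence of a large residual grow only linearly with its Mahalanobis norm, which is what limits the impact of mismatched measurements in the total cost.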
The term $||\rbf||^2_{\Sigma} = \rbf^\top\Sigma^{-1}\rbf$ is the squared Mahalanobis distance with covariance matrix $\Sigma$. A Huber loss $\rho(\cdot)$ \cite{huber1992robust} is applied to each squared term to reduce the effect of potential mismatches between states and measurements. The optimization of (\ref{ful_cost}) is usually solved with an iterative least-squares solver through a linear approximation, as detailed in Appendix \ref{solver}. The formulations of the measurement functions $h(\cdot)$ for the geometric features and the structure priors are provided in Sections \ref{features} and \ref{priors}, respectively. \subsection{Structure prior information} Structure information is ubiquitous, ranging from fine-grained blueprints to common knowledge such as parallel lines or planes. The greatest challenge in integrating the prior information into the optimization is to correctly associate the structure information with the observed features. Benefiting from using point, line, and plane features simultaneously, we can parameterize the spatial relationships between different geometric features as pairwise relative distances and angles, as described in Section \ref{priors}. The advantages of such a parameterization are mainly threefold. \begin{enumerate} \item First, the prior knowledge can be integrated in a simple fashion. With the distance and angle based formulation, the rigorous association process between the prior knowledge and the current observations, which is normally based on high-dimensional and computationally demanding feature matching, can be simplified to a scalar-level association based on thresholds. \item Second, more extensive prior knowledge can be utilized to aid the navigation. The distances and angles can be extracted not only from prior maps in specific formats, but also from common structural knowledge, hand-measured quantities, and so on.
\item Third, as the angles and distances are stored as scalars, the storage can be reduced dramatically compared to storing a map. \end{enumerate} The structure priors can be extracted offline based on the following three steps: \begin{enumerate} \item extract structural primitives from various formats, \textit{e.g.}, high fidelity maps, local measurements, semantic information, \textit{etc.}, \item measure and calculate the relative quantities between every two primitives (as formulated in Section \ref{priors}), and \item store salient structure quantities (distances/angles) as structure priors in a database. \end{enumerate} {\begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/association-crop} \caption{The online association between structure priors and local estimates.} \label{fig:association} \end{figure}} { The association process between the offline structure prior information and the online local observations is illustrated in Fig. \ref{fig:association}. Initially, finely estimated features within the current optimization window are selected based on their estimation confidences. Then the structural quantities (angles/distances) are calculated from these estimates. Finally, a structure prior is integrated into the optimization when it is close enough to a structural quantity from the estimates, based on a scalar threshold. { Given two feature estimates, $\hat{\xbf}_i,\hat{\xbf}_j$, a distance/angle prior $z_s$ is associated to $i,j$ when \begin{equation*} |h_s(\hat{\xbf}_i,\hat{\xbf}_j) - z_s|\le \tau_s, \end{equation*} where $h_s(\cdot)$ is the function extracting the distance/angle (see Sec. V), and $\tau_s$ is a scalar threshold. To prevent possible false associations of structure prior information, we set a very small threshold $\tau_s$ and allow structure priors to be assigned only to finely estimated features.} The above structure prior integration process should be carried out under the following prerequisites.
First, the structure priors should be quantities that are identifiable among all relative distances and angles. Second, the structure prior quantities should be representative of the most salient structural patterns of an environment. In many robotic operation environments, such as indoor navigation and building inspection, geometric features are ubiquitous, and the structure patterns are regular and repetitive. In such environments, the extracted structure priors (angles and distances) are sparsely distributed, see, e.g., Fig. \ref{fig:room_planes-crop} and Fig. \ref{fig:distributions}, and the association can be achieved based on a simple threshold as described above. In this paper, we only consider structure-rich environments where the sparseness of the structure priors holds. } \section{Heterogeneous geometric features} \label{features} In this section, we discuss how to model the point, line, and plane features based on on-board perception and incorporate them into the graph optimization. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/geometry_v2} \caption{The geometry representation of point, line and plane in 3D space.} \label{fig:geometry_point} \end{figure} \subsection{Point feature} As one of the most frequently implemented features in perception tasks, a point feature can be uniquely parameterized by its 3D coordinates, as shown in Fig. \ref{fig:geometry_point}. A point feature $i\in P$ can be extracted from different sensors by measuring it in the local frame, \begin{equation} \zbf_{i} = h_i(\xbf_I, \xbf_i)+\pmb\nu_i = {^{I}_{G}}\Rbf ({^G}\pbf_i - {^G}\pbf_I)+ \pmb\nu_i, \end{equation} where $^G\pbf_i\in \mathbb{R}^3$ and $^G\pbf_I\in \mathbb{R}^3$ are the 3D positions of the point feature and the local vehicle in the global frame, and $\pmb\nu_i\in \mathbb{R}^3$ is the measurement noise. ${^{I}_{G}}\Rbf\in \mathbb{SO}(3)$ represents the rotation from the global frame to the local frame.
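As a toy numerical sketch of the point measurement prediction above (hypothetical positions; identity attitude for readability):

```python
import numpy as np

def predict_point_measurement(R_IG, p_I_G, p_i_G):
    """Predict the local-frame observation of a point feature:
    z_i = {}^I_G R ({}^G p_i - {}^G p_I), as in the measurement model above."""
    return R_IG @ (p_i_G - p_I_G)

# Hypothetical values for illustration only.
R_IG = np.eye(3)                    # {}^I_G R: global-to-local rotation
p_I = np.array([1.0, 0.0, 0.0])     # vehicle position in {G}
p_i = np.array([2.0, 1.0, 0.5])     # point feature position in {G}
z = predict_point_measurement(R_IG, p_I, p_i)
# With identity attitude, z is simply the relative position [1.0, 1.0, 0.5]
```

Subtracting an actual noisy observation from this prediction yields the residual $\rbf_i$ used in the cost function.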
By defining the state of the point feature as $\xbf_i = {^G}\pbf_i$, we have the Jacobians calculated in Appendix \ref{point_feature}. \subsection{Line feature} One of the most commonly used representations of a line feature in 3D space is its Pl\"ucker coordinates. For an infinite line $j\in L$, the Pl\"ucker coordinates are defined as $\lbf_j \triangleq \begin{bmatrix} \nbf_j^\top & \vbf_j^\top \end{bmatrix}^\top$, where $\nbf_j$ and $\vbf_j$ are the normal and directional vectors calculated from any two distinct points on the line, as shown in Fig. \ref{fig:geometry_point}. A measurement of an infinite line can be modeled as its Pl\"ucker coordinates in the local frame as \begin{equation} \begin{aligned} \zbf_j &= h_j( {^G}\xbf_I, {^G}\lbf_j ) + \pmb\nu_j \\& = \left[\begin{array}{cc} _{G}^{I}\Rbf & -_{G}^{I} \mathbf{R}\left[^{G} \pbf_{I} \right]_{\times} \\ \mathbf{0}_{3} & {_{G}^{I}}\mathbf{R} \end{array}\right] {^G}{\mathbf{l}}_j + \pmb\nu_j, \end{aligned} \end{equation} where $\pmb\nu_j\in \mathbb{R}^6$ is the measurement noise. Obviously, $\lbf_j$ is not a minimal parameterization of the line state. To calculate the Jacobian, we implement the closest-point approach described in \cite{yang2019observability}, which is formulated as $\pbf_j = d_j\bar q_j\in \mathbb{R}^4$, where the unit quaternion $\bar{q}_j$ and the closest distance $d_j$ of the line to the origin are calculated from the Pl\"ucker coordinates respectively as \begin{align} \Rbf_j(\bar{q}_j) &= \begin{bmatrix} \frac{\nbf_j}{\|\nbf_j\|} & \frac{\vbf_j}{\|\vbf_j\|} & \frac{\nbf_j}{\|\nbf_j\|}\times \frac{\vbf_j}{\|\vbf_j\|} \end{bmatrix},\\ d_j &= \frac{\|\nbf_j\|}{\|\vbf_j\|}, \end{align} where $\Rbf_j(\bar{q}_j)$ is the rotation matrix corresponding to $\bar{q}_j$. By defining the line state as $\xbf_j = {^G}\pbf_j$, we have a minimal parameterization of the line in Euclidean space. The corresponding measurement Jacobians are provided in Appendix \ref{line_feature}.
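The Pl\"ucker-to-closest-point conversion above can be sketched numerically as follows (the example line is hypothetical):

```python
import numpy as np

def plucker_to_closest_point(n, v):
    """Convert Plucker coordinates (n, v) of a 3D line into the
    closest-point parameterization: a rotation R_j whose columns are
    [n/|n|, v/|v|, n/|n| x v/|v|] and the distance d_j = |n|/|v|."""
    n_bar = n / np.linalg.norm(n)
    v_bar = v / np.linalg.norm(v)
    R = np.column_stack([n_bar, v_bar, np.cross(n_bar, v_bar)])
    d = np.linalg.norm(n) / np.linalg.norm(v)
    return R, d

# Hypothetical line through (2, 0, 0) with direction e_y; its Plucker
# normal vector is n = p x v for any point p on the line.
p = np.array([2.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
n = np.cross(p, v)
R, d = plucker_to_closest_point(n, v)
# d -> 2.0, the distance from the origin to the line; R is orthonormal
```

Note that the scale ambiguity of the Pl\"ucker coordinates cancels in both $R_j$ and $d_j$, which is why this yields a minimal four-parameter representation.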
\subsection{Plane feature} An infinite plane $k\in \Pi$ can be minimally parameterized by the closest point $\pbf_k = d_k\nbf_k \in \mathbb{R}^3$, as shown in Fig. \ref{fig:geometry_point}, where $\nbf_{k} $ is the plane's unit normal vector, and $d_{k} $ is the distance from the origin to the plane. The plane measurement here is modeled as the closest point in the local frame as \begin{equation} \label{plane_meas} \begin{aligned} \zbf_{k} &= {^I}\pbf_k + \pmb\nu_k = {^I}\nbf_{k}{^I}d_{k} + \pmb\nu_k, \end{aligned} \end{equation} where $\pmb\nu_k\in \mathbb{R}^3$ represents the plane measurement noise. The transformation of the unit normal vector and the distance of a plane from the global frame to the local frame is \begin{equation} \label{plane_trans} \begin{bmatrix} ^I\nbf_{k}\\^I d_{k} \end{bmatrix} = \begin{bmatrix} {^I_G}\Rbf & \mathbf{0}_{3\times1}\\ -^G\pbf_I^\top &1 \end{bmatrix}\begin{bmatrix} ^G\nbf_{k}\\^G d_{k} \end{bmatrix}. \end{equation} Incorporating (\ref{plane_trans}) into (\ref{plane_meas}), the plane measurement can be expressed with the normal vector $^G\nbf_{k}$ and distance $^G d_{k}$ in the global frame as \begin{equation} \label{plane_GtoI} \begin{aligned} \zbf_k = {^I} d_{k} {^I}\nbf_{k}+\pmb\nu_{k} = (-^G\pbf_I^\top {^G}\nbf_{k} + {^G} d_{k}){^I_G}\Rbf^G\nbf_{k}+\pmb\nu_{k}\\ =-^G\nbf_{k}^\top {^G}\pbf_I{^I_G}\Rbf^G\nbf_{k} + {^G} d_{k}{^I_G}\Rbf{^G}\nbf_{k}+\pmb\nu_{k}. \end{aligned} \end{equation} Defining the state of a plane $k$ as $\xbf_k={^G}\pbf_k={^G}d_k{^G}\nbf_k$, i.e., the closest point of the plane to the origin, we can calculate the measurement Jacobian in Appendix \ref{plane_feature}. \begin{remark} In this paper, the raw sensor measurement models of the different geometric features are not explicitly provided, since they depend strongly on the sensing mechanisms of different sensors. Our purpose here is to provide a common framework implementing the heterogeneous features rather than to consider a specific sensor.
\end{remark} \section{Structure priors formulation} \label{priors} In this part, the structure priors are formulated as relative relationships between features. Specifically, angles and distances are defined between different geometric primitives, including points, lines, and planes. \subsection{Feature-to-feature prior modeling} The structure prior factors are plotted as blue edges in the factor graph, as shown in Fig. \ref{fig:sub-second}. Denote the topology set containing all the pairwise structure priors as $\mathcal{S}$; then an edge $(a,b)\in \mathcal{S}$ indicates that some quantitative measurement between two features $a,b\in \{P,L,\Pi\}$, denoted as $\zbf_{ab}$, is known a priori. Letting $h_{ab}$ denote the measurement function between $a$ and $b$, the structure prior residual is obtained as \begin{equation} \rbf_{ab} = h_{ab}(\hat\xbf_a, \hat\xbf_b) - \zbf_{ab}. \end{equation} The residual cost is \begin{equation} \sum\limits_{(a,b)\in \mathcal{S}}\|\rbf_{ab}\|^2_{\Sigma_{ab}}, \end{equation} where $\Sigma_{ab}$ is the covariance of the measurement noise, which represents the fidelity of the specific structure prior constraint. The measurement noise is assumed to be Gaussian, and its covariance is calculated statistically during the structure prior extraction process. In the following, we model the pairwise measurements between the point, line, and plane features described above. \subsection{Feature-to-feature factors} \subsubsection{Point-to-point factor} When two salient points are detected, the structure prior information that characterizes their spatial relationship can be modeled as a one- to three-dimensional measurement.
Denote two point features $i,i'\in P$ and their relative translation $\xbf_{ii'} = \xbf_{i'} - \xbf_i$; the point-to-point structure measurement \begin{equation} \zbf_{ii'} = h_{ii'}(\xbf_{ii'}), \end{equation} projects the 3D displacement between the two points onto a specific one- to three-dimensional metric in the global frame. Specifically, the distance measurement can be modeled as $z_{ii'}^d = \|\xbf_{ii'}\|$. The measurement residual Jacobians with respect to the point states are provided in Appendix \ref{ptpj}. \begin{remark} In the point-to-point structure, the points should be salient in both the texture and the structure sense. In practice, most points are distributed according to the texture, and it may not be easy to endow them with structure information. Some examples of structurally salient points are intersection points, endpoints, and corner points. The integration of point-to-point structure prior information depends on the extraction and recognition of structural points, which may be challenging in practice. \end{remark} \subsubsection{Point-to-line factor} The spatial relationship between a point and an infinite line can be described with a 2D vector. With a point $i\in P$ and a line $j\in L$, we define a 2D displacement between them as \begin{equation} \xbf_{ij} =\begin{bmatrix} \bar\nbf_j^\top \\ (\bar\nbf_j\times\bar\vbf_j)^\top \end{bmatrix} \xbf_i + \begin{bmatrix} 0 \\d_j \end{bmatrix}\in \mathbb{R}^2, \end{equation} where $\bar\nbf_j = \frac{\nbf_j}{\|\nbf_j\|}$, $\bar\vbf_j = \frac{\vbf_j}{\|\vbf_j\|}$, and $d_j$ are line $j$'s unit normal vector, unit directional vector, and distance to the origin, respectively. Denote the point-to-line measurement of $\xbf_{ij}$ as \begin{equation} \zbf_{ij} = h_{ij} ( \xbf_{ij}). \end{equation} The point-to-line distance can be calculated by letting $h_{ij}(\cdot)$ be the norm operator, \begin{equation} z_{ij}^d=d_{ij} = \|\xbf_{ij}\|.
\end{equation} Then, the point-on-line constraint can be enforced as $ d_{ij}= 0$. The measurement residual Jacobian is calculated in Appendix \ref{ptlj}. \subsubsection{Point-to-plane factor} The relationship between a point and an infinite plane can be described with one scalar, \textit{i.e.}, the distance from the point to the plane. With a point feature $i\in P$ and an infinite plane feature $k\in \Pi$, the distance between the point and the plane is defined as \begin{equation} d_{ik} = \nbf_k^\top\pbf_i - d_k. \end{equation} Defining the measurement function as $z^d_{ik} = d_{ik}$, the point-on-plane constraint can be enforced by letting $ d_{ik}= 0$. The measurement residual Jacobian is provided in Appendix \ref{ptplj}. \subsubsection{Line-to-line factor} The relationship between two lines can be uniquely parameterized by a 3D translation vector and a relative direction. Given two lines, denoted respectively as $j, j'\in L$, the relative direction $\alpha_{jj'}$ (the cosine of the angle between the lines) and the translation $\dbf_{jj'}\in \mathbb{R}^3$ can be calculated as follows: \begin{equation} \alpha_{jj'} = \bar\vbf_j^\top\bar\vbf_{j'}, \end{equation} and \begin{small} \begin{equation} \dbf_{jj'} = \left\{\begin{array}{ll} \mathbf{0}, & j\text{ and } j' \text{ intersect},\\ \bar\dbf_{jj'}, & j\text{ and }j' \text{ are parallel},\\ (\bar\vbf_j\times\bar\vbf_j')^\top\bar\dbf_{jj'}(\bar\vbf_j\times\bar\vbf_j'), & \text{otherwise}, \end{array}\right. \end{equation} \end{small} where $\bar \dbf_{jj'} = d_j\bar\nbf_j\times\bar\vbf_j - d_{j'}\bar\nbf_{j'}\times\bar\vbf_{j'}$, and $d_j\bar\nbf_j\times\bar\vbf_j$ and $d_{j'}\bar\nbf_{j'}\times\bar\vbf_{j'}$ are the closest points of lines $j$ and $j'$ to the origin. It is straightforward to show that as $\alpha_{jj'}\to \pm 1$, the distance $d_{jj'} = \|\dbf_{jj'} \|$ tends to the relative distance between two parallel lines. We first consider $\alpha_{jj'}$ as a measurement between two lines.
Further, when two lines are parallel, namely $\alpha_{jj'} = \pm 1$, the distance $d_{jj'} = \|\dbf_{jj'}\|$ is considered as another measurement, namely \begin{equation} \zbf_{jj'} = \left\{ \begin{aligned} &\begin{bmatrix} \alpha_{jj'} & d_{jj'} \end{bmatrix}^\top, & \text{in parallel}, \\ &\alpha_{jj'}, & \text{otherwise}.\end{aligned}\right. \end{equation} The Jacobian of the line-to-line measurement residual is provided in Appendix \ref{ltlj}. \subsubsection{Line-to-plane factor} The spatial relationship between a line and a plane can be characterized by the dot product of the directional vector of a line $j\in L$ and the normal vector of a plane $k\in \Pi$, denoted as $\alpha_{jk}$: \begin{equation} \alpha_{jk} = \bar\vbf_{j}^\top \nbf_k. \end{equation} In particular, when $\alpha_{jk} = 0$, namely when the line is parallel to the plane, a distance can further be calculated as \begin{equation} d_{jk} = \nbf_k^\top(\bar\nbf_j\times \bar\vbf_j)d_j - d_{k}. \end{equation} The measurement therefore is \begin{equation} \zbf_{jk} = \left\{ \begin{aligned} &\begin{bmatrix} \alpha_{jk} & d_{jk} \end{bmatrix}^\top, & \text{in parallel},\\ &\alpha_{jk}, &\text{otherwise}. \end{aligned}\right. \end{equation} Specifically, the line-on-plane constraint is enforced as $\alpha_{jk} =0$ and $d_{jk} = 0$. The associated Jacobian is provided in Appendix \ref{ltpj}. \subsubsection{Plane-to-plane factor} Similarly, the dot product between the unit normal vectors of two planes can be calculated as \begin{equation} \alpha_{kk'} = \nbf_k^\top\nbf_{k'}. \end{equation} When two planes are parallel, the displacement can be calculated based on the closest points of the two planes as \begin{equation} \dbf_{kk'} = \xbf_{k'} -\xbf_k = \nbf_k (d_{k'} - d_{k})= \nbf_{k'} (d_{k'} - d_{k}).
\end{equation} The measurement of the relationship is defined as \begin{equation} \zbf_{kk'} = \left\{ \begin{aligned} &\begin{bmatrix} \alpha_{kk'} & d_{kk'} \end{bmatrix}^\top, & k \text{ parallel to } k',\\ &\alpha_{kk'}, &\text{otherwise}. \end{aligned}\right. \end{equation} The measurement Jacobians are provided in Appendix \ref{pltplj}. \begin{table}[] \caption{The structure priors formulated as distances/angles} \label{sp_table} \centering \begin{tabular}{|l|l|l|l|} \hline & Point $i$ & Line $j$ & Plane $k$ \\ \hline\hline $i$ & $d_{ii'}$ & \begin{tabular}[c]{@{}l@{}}$d_{ij}$\\ e.g.\\ point-on-line: $d_{ij}=0$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$d_{ik}$\\ e.g.\\ point-on-plane: $d_{ik}=0$\end{tabular} \\ \hline $j$ & ------- & \begin{tabular}[c]{@{}l@{}}$\alpha_{jj'}$\\ $d_{jj'}$ if parallel ($\alpha_{jj'} = \pm 1$)\\ e.g.\\ orthogonality: $\alpha_{jj'} = 0$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$\alpha_{jk}$\\ $d_{jk}$ if parallel ($\alpha_{jk} = 0$)\\ e.g.\\ line-on-plane:\\$d_{jk}=0, \alpha_{jk}=0$\\ orthogonality: $\alpha_{jk}=\pm 1$\end{tabular} \\ \hline $k$ & ------- & ------- & \begin{tabular}[c]{@{}l@{}}$\alpha_{kk'}$\\ $d_{kk'}$ if parallel ($\alpha_{kk'}= \pm 1$)\\ e.g.\\ orthogonality: $\alpha_{kk'} = 0$\end{tabular} \\ \hline \end{tabular} \end{table} With the above formulation of the spatial relationships between features, the structure priors can be encoded into low-dimensional angles and/or distances, as summarized in Table \ref{sp_table}. The low-dimensional encoding makes their association with the structure prior database easy. Both the heterogeneous geometric feature factors and the structure prior factors are integrated with the graph optimization toolbox GTSAM \cite{dellaert2012factor}.
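A few of the pairwise quantities in Table \ref{sp_table}, together with the scalar association test of Section \ref{system_description}, can be sketched as follows (all numbers are hypothetical; the point-to-plane sign assumes the convention that points on the plane satisfy $\nbf_k^\top\xbf = d_k$):

```python
import numpy as np

def plane_plane_angle(n_k, n_k2):
    """alpha_{kk'}: dot product of the two unit plane normals."""
    return float(n_k @ n_k2)

def plane_plane_distance(d_k, d_k2):
    """Distance between two parallel planes, |d_{k'} - d_k|."""
    return abs(d_k2 - d_k)

def point_plane_distance(p_i, n_k, d_k):
    """Signed point-to-plane distance, assuming n_k^T x = d_k on the plane."""
    return float(n_k @ p_i - d_k)

def associate(estimate, prior, tau):
    """Scalar association: accept prior z_s when |h_s - z_s| <= tau_s."""
    return abs(estimate - prior) <= tau

# Two parallel walls 3 m apart (hypothetical numbers).
n = np.array([1.0, 0.0, 0.0])
alpha = plane_plane_angle(n, n)          # 1.0 -> the planes are parallel
dist = plane_plane_distance(1.0, 4.0)    # 3.0 m between the walls
ok = associate(dist, 3.02, tau=0.05)     # matches a 3.02 m distance prior
```

The point is that every association check reduces to a scalar comparison, regardless of the dimensionality of the underlying features.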
\subsection{Structure prior selection} \label{structure_selection} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/timevserror-crop} \caption{The translational estimation error and calculation time as the number of structure priors grows.} \label{fig:trajs-crop} \end{figure} Based on the above formulations of the structure information between features, there are at most $n(n-1)/2$ possible feature priors in a scenario with $n$ geometric features. In the graph optimization process, incorporating too many structure priors severely damages the sparsity of the graph matrix and therefore slows down the optimization. As an illustrative example, a synthetic environment with 100 points, 40 lines, and 40 planes is shown in Fig. \ref{fig:trajs-crop}. Although the localization error decreases as more structure priors are incorporated into the optimization, the optimization efficiency deteriorates simultaneously. Among all the potential prior information, some priors are not as helpful as others, and there may also be redundancies in the structure prior set. Based on the above observations, it is practical to select a limited number of structure prior measurements that benefit the localization the most. Specifically, we consider minimizing the localization uncertainty represented by an estimation covariance. In this paper, we implement the Fisher Information Matrix (FIM) to measure the contribution of a structure prior to the localization performance.
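A minimal sketch of such FIM-based greedy selection, where each candidate prior contributes an information matrix and priors are picked to maximize the log-det of the accumulated information (toy $2\times 2$ matrices with hypothetical values, not the actual system Jacobians):

```python
import numpy as np

def greedy_select_priors(info_prior, fims, k):
    """Greedily pick k structure priors maximizing the log-det of the
    accumulated information matrix (a submodular objective).

    info_prior : information (inverse covariance) of the pose estimate
    fims       : list of candidate FIMs, one per structure prior
    """
    selected, info = [], info_prior.copy()
    remaining = list(range(len(fims)))
    for _ in range(min(k, len(fims))):
        gains = [np.linalg.slogdet(info + fims[s])[1] for s in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        info = info + fims[best]
        remaining.remove(best)
    return selected, info

# Toy 2D example: one candidate is far more informative than the other.
info_prior = np.eye(2)
fims = [np.diag([9.0, 0.0]), np.diag([1.0, 1.0])]
picked, _ = greedy_select_priors(info_prior, fims, k=1)
# picked -> [0]: log det(I + diag(9, 0)) = log 10 > log det(2 I) = log 4
```

Submodularity of the log-det gain is what gives this greedy scheme its constant-factor optimality guarantee.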
Denote the belief of the state within the sliding window at time $t$ as $\mathcal{X}_t\sim \mathcal{N}(\bar{\mathcal{X}}_t, \Pbf_t)$, and one structure prior $s\in\mathcal{S}$ of the current local map as a measurement, $ h_s(\mathcal{X}_t)\sim \mathcal{N}(\zbf_s, \Sigma_s)$; then, according to Bayes' rule, \begin{equation} \label{bayes} \Pbf_{t^+}^{-1} = \Pbf_{t}^{-1} + \sum\limits_{s\in \mathcal{S}_t}\mathbf{I}_s, \end{equation} where $ \mathbf{I}_s = \Jbf_s^\top\Sigma_s^{-1}\Jbf_s$ is the FIM of a specific structure prior $s$, and $\Jbf_s$ is the Jacobian of the structure prior $s$ with regard to the local pose. $\Pbf_{t}$ and $\Pbf_{t^+}$ denote the covariance before and after integrating the structure priors, respectively. Finally, we can use the marginalization technique \cite{Carlone2019Attention} to obtain the localization uncertainty $\Pbf_t'$ by marginalizing out the other states. The structure priors can then be selected by minimizing a metric of the covariance $\Pbf_t'$. The selection problem is NP-hard and cannot be solved efficiently for a large number of structure priors. As indicated in \cite{shamaiah2010greedy}, the $\log\det(\cdot)$ metric of the covariance is submodular w.r.t. the information gain of the structure priors. With greedy algorithms, a sub-optimal solution with guaranteed performance can be obtained efficiently. We use a selection algorithm similar to \cite{lyu2022structure} to obtain the structure prior set. \section{Experiment evaluation} \label{experiment} In this part, the proposed SPINS is tested on synthetic data, public datasets, and, most importantly, real flight datasets collected with a UAV during indoor and outdoor inspection tasks. \subsection{Synthetic data} To evaluate the localization performance of the proposed framework, we create a customized 2.5D indoor simulation scenario with point, line, and plane features, as presented in Fig. \ref{fig:trajectory-crop}.
A 3D robot trajectory is generated within the simulation space based on spline functions, shown as the red curve. An IMU is simulated according to the ADIS16448 IMU sensor specifications listed in \cite{geneva2018lips}. We assume that the 3D geometry information of the features is obtained according to the measurement functions described in Section \ref{features}, with additional FOV limitations. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/trajectory-crop} \caption{The 2.5D simulation scenario with point, line and plane features.} \label{fig:trajectory-crop} \end{figure} The optimization-based localization is solved with the iSAM2 solver from the GTSAM package, into which we additionally integrate line factors, plane factors, and structure prior factors. Comparisons based on different features and structure prior factors are carried out. Specifically, we consider 1) the point feature (P-INS), 2) point and line features (PL-INS), and 3) point, line, and plane features (PLP-INS) based methods, as well as 4) our structure prior aided method (SPINS). We select 1) 20 structure priors in each frame randomly (SPINS-Rand. 20 SP), 2) 20 structure priors according to Section \ref{structure_selection} (SPINS-App. Info. Opt. 20 SP), 3) the most informative 20 structure priors (SPINS-Info. Opt. 20 SP), and 4) all structure priors (SPINS-All SP). The structure priors listed in Table \ref{sp} are adopted. { In addition, a prior map based localization method is used as a localization benchmark, where perfect feature matching between the local observations and the map is assumed.} The root-mean-square errors (RMSEs) are plotted in Fig. \ref{fig:errors_sim-crop}. The quantitative comparison between different strategies is also provided in Tab. \ref{sp}. The localization results of using heterogeneous features are plotted as solid lines in different colors. It is apparent that, as more types of features are used, more accurate estimation can be achieved.
More importantly, the integration of structure prior information further improves the localization performance, as plotted in dashed lines. Specifically, integrating all structure prior information unsurprisingly achieves the localization performance closest to the prior map based method. Nevertheless, the computation time for solving each round of the local optimization also increases significantly because more structure factors are integrated, as provided in Tab. \ref{mc_table}. Among the 20-structure-prior methods, the optimal selection achieves the best performance, at the expense of the greedy search computation overhead. Our proposed method achieves comparable results with much less computational burden. The random selection strategy is the least accurate. \begin{table}[] \centering \caption{Structure Priors between Features.} \label{sp} \begin{tabular}{ |c|l|l| } \hline & Line & Plane \\ \hline\hline Point & Points on the Line & Points on the Plane \\ \hline Line & \begin{tabular}[c]{@{}l@{}}-Parallelism with distance\\ -Orthogonality\end{tabular} & \begin{tabular}[c]{@{}l@{}}-Orthogonality\\ -Parallelism with distance\\ -Line on the Plane\end{tabular} \\ \hline Plane & - & \begin{tabular}[c]{@{}l@{}}-Parallelism with distance\\ -Orthogonality\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/errors_sims_new-crop} \caption{The translational and rotational RMSE under different feature and prior information configurations.} \label{fig:errors_sim-crop} \end{figure} \begin{table}[] \caption{Average RMSE over 100 Monte-Carlo simulations with different strategies.} \label{mc_table} \centering \begin{tabular}{|c|c|c|c|} \hline Strategies & \begin{tabular}[c]{@{}l@{}}Trans.\\ Errors {[}m{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rot.
\\ Errors {[}deg.{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}Time per\\ iteration [s]\end{tabular} \\ \hline\hline P-INS & 0.8186 & { 6.9399 } & { 0.0180 } \\ \hline PL-INS & 0.6103 & 5.0499 & 0.0193 \\ \hline PLP-INS & 0.4863 & 3.9595 & 0.0274 \\ \hline SPINS-Rand. 20 SPs & 0.3242 & 3.1684 & 0.0259 \\ \hline SPINS-App. Opt. 20 SPs & 0.2082 & 3.1652 & 0.0301 \\ \hline SPINS-Opt. 20 SPs & 0.1757 & 2.0399 & 0.3175 \\ \hline SPINS-All SPs & 0.1277 & 1.6984 & 0.2463 \\ \hline Prior Map & 0.0547 & 0.9414 &--- \\ \hline \end{tabular} \end{table} \subsection{Euroc dataset} In this part, the proposed SPINS framework is tested on the public Euroc MAV Dataset \cite{burri2016euroc}. The front end detects and tracks points, lines, and planes based on stereo vision measurements. { Specifically, the point features are detected and tracked with the KLT-based optical flow method similar to \cite{Qin2018VINSMono}. The line features are detected and tracked based on a modified line segment detector (LSD) as described in \cite{fu2020pl}. Moreover, the planes are extracted and tracked from triangulated point features using the method described in \cite{Nardi2019Unified}.} For the prior information, we use the point cloud of the Vicon room to obtain the plane related structure priors. The plane extraction from the point cloud is shown in Fig. \ref{fig:room_planes-crop}. The corresponding distributions of the angles between planes and the distances between parallel planes are plotted in Fig. \ref{fig:distributions}, which show the repetitive and sparse angle and distance patterns in a man-made environment.
{ Based on the prior information above, we extract distance priors, such as point-on-plane, line-on-plane, and plane-to-plane distances, and angle priors, such as plane parallelism and orthogonality, as structure priors to aid the localization.} The localization results obtained with VINS-FUSION (\url{https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git}), ORB-SLAM3 (\url{https://github.com/UZ-SLAMLab/ORB_SLAM3.git}), the heterogeneous features based method \cite{yang2019observability}, and the proposed framework are listed in Table \ref{rmse}, based on the evaluation method described in \cite{Zhang18iros}. {\color{blue} We use the same experiment setup for all tests according to the Euroc dataset parameters.} As indicated, our proposed method achieves the best performance in both translational and rotational RMSE on $\mathtt{V1\_01}$, $\mathtt{V2\_01}$, $\mathtt{V2\_02}$ and $\mathtt{V2\_03}$. Specifically, our proposed SPINS outperforms the PLP based method \cite{yang2019observability} on all datasets, which shows the effectiveness of incorporating structure prior information. As one example, the estimated trajectories on $\mathtt{V2\_02}$ are plotted in Fig. \ref{fig:V103_trajectory_top-crop}. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/room_planes-crop} \caption{The structure priors extracted from the ground-truth scan of the Vicon room.} \label{fig:room_planes-crop} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/distribution-crop.pdf} \caption{The angle and distance distributions of the segmented planes in Fig. \ref{fig:room_planes-crop}.} \label{fig:distributions} \end{figure} \begin{table}[tbh] \caption{The RMSE of the estimation results based on different methods} \label{rmse} \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Data} & \multicolumn{4}{c|}{Trans. RMSE [m]/Rot.
RMSE [$^\circ$]} \\ \cline{2-5} & VINS & ORB3 & PLP & SPINS \\ \hline\hline V1\_01 & 0.129/1.748 & 0.085/1.484 & 0.098/1.674 & {\bf 0.079}/{\bf 1.131} \\ \hline V1\_02 & 0.145/1.504 & {\bf0.089}/1.336 & 0.321/1.455 & { 0.094}/{\bf 0.905} \\ \hline V1\_03 & 0.144/1.967 & {\bf 0.093}/1.952 & 0.193/2.389 & 0.095/{\bf 1.762} \\ \hline V2\_01 & 0.150/3.121 & 0.085/1.852 & 0.116/2.352 & {\bf 0.077/1.731} \\ \hline V2\_02 & 0.197/4.413 & { 0.167/3.141} & 0.188/4.581 & {\bf 0.160/3.030} \\ \hline V2\_03 & 0.219/2.924 & 0.160/3.007 & 0.213/3.458 & {\bf 0.151/2.988} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/V202_trajectory_top1-crop} \caption{The estimated trajectories on $\mathtt{V2\_02}$ based on VINS-FUSION, ORB3 and the proposed SPINS with the PLP and PLP-SP setups.} \label{fig:V103_trajectory_top-crop} \end{figure} \begin{figure*} \centering \includegraphics[width=1\linewidth]{fig/Exp_setup-crop} \caption{The experiment setups. We test the SPINS in both indoor (b) and outdoor (c) inspection scenarios.} \label{fig:expsetup-crop} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=0.4\linewidth]{fig/before-crop} \includegraphics[width=0.4\linewidth]{fig/after-crop} \caption{The point feature estimates before/after integrating the point-on-plane constraint. When the constraint is imposed, the points on the wall stick to the wall more accurately, leading to better point feature estimation.} \label{fig:after-crop} \end{figure} \subsection{Field collected data} In this part, the proposed SPINS is further tested in large-scale inspection environments. A DJI M600 Pro hexacopter carrying various sensors is utilized to detect features of the environment, as illustrated in Fig. \ref{fig:expsetup-crop}(a).
We consider two scenarios rich in geometric features and structural patterns: indoor navigation, shown in Fig.\ref{fig:expsetup-crop}(b), and building fa\c{c}ade inspection, shown in Fig.\ref{fig:expsetup-crop}(c), which is available as part of the VIRAL dataset\cite{nguyen2021ntu}. We collect sensing data from visual cameras, LiDARs, and IMU sensors to detect the geometric features, using front-end processing similar to Sec. VII.B. {\color{blue} The ground truth is provided by a Leica Geosystems device that measures the onboard optical prism. All tests are carried out with the same parameter setup provided in \url{https://ntu-aris.github.io/ntu_viral_dataset/}.} { \subsubsection{Indoor navigation} In this part, the proposed structure prior information is further tested in an indoor auditorium. As before, we measure some potentially repetitive and salient features as structure prior information. Moreover, we use the Leica system to obtain a point cloud map, from which we extract plane-based structure priors as in Sec. VII.B. Three different trajectories are generated to test the proposed methods. We compare our results to methods based on a monocular camera (VINS-Mono \cite{Qin2018VINSMono}), a stereo camera (VINS-Stereo \cite{qin2019general}), LiDAR (LOAM \cite{zhang2014loam}), LiDAR and camera (LVI-SAM \cite{shan2021lvi}), a map-based localization method (DLL \cite{caballero2021dll}), and our method (SPINS). An ICP method that registers LiDAR scans to the point cloud map is also provided as a localization benchmark. More specifically, we use the LOAM odometry to initialize both ICP and DLL. The ICP parameters are set to 50 iterations and a $0.05$\,m maximum correspondence distance. The estimated trajectories of the 3 trials based on the above methods are plotted in Fig.\ref{fig:trial123-crop}. The estimation errors on NYA03 are given in Fig. \ref{fig:errors-crop}. The position RMSEs of these methods are provided in Table \ref{rmse_2}.
It is apparent that our proposed method, aided by the angle/distance priors (plotted in Fig. \ref{fig:numsp-crop}), achieves the best performance among the methods mentioned above. Moreover, the distance structure priors are favored over the angle priors in most cases. One example of the effectiveness of imposing the point-on-plane constraint, i.e., a zero point-to-plane distance constraint, is shown in Fig. \ref{fig:after-crop}. {Additionally, we compare our method to the prior-map-based method DLL. Although DLL achieves performance very close to SPINS in both accuracy and time efficiency, it requires an odometry to initialize the registration between the map and the local scan, which is an extra computational burden. In addition, DLL requires an initial position of the robot in the map, which may not be available in practical scenarios. The ICP-based method is also provided to indicate the best localization performance that a map-based method can achieve. However, ICP takes more than 2 seconds for each registration and is hard to use in real-time applications.
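The ICP benchmark above (50 iterations, $0.05$\,m correspondence gating) can be sketched as follows. This is a minimal point-to-point variant in Python, written only to illustrate the procedure; a real implementation would use a k-d tree and a library registration routine, and all names here are ours:

```python
import numpy as np

def icp_point2point(src, dst, max_iters=50, max_corr_dist=0.05):
    """Minimal point-to-point ICP aligning `src` onto the map `dst`.

    Each iteration: (1) match every source point to its nearest map point,
    rejecting matches farther than `max_corr_dist`; (2) solve the best
    rigid transform for the matched pairs in closed form (Kabsch);
    (3) apply it and accumulate the total transform."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(max_iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        mask = d2[np.arange(len(cur)), nn] < max_corr_dist ** 2
        if mask.sum() < 3:
            break  # not enough correspondences to fit a transform
        p, q = cur[mask], dst[nn[mask]]
        mu_p, mu_q = p.mean(0), q.mean(0)
        H = (q - mu_q).T @ (p - mu_p)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ S @ Vt
        t = mu_q - R @ mu_p
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The $O(N^2)$ correspondence search is the dominant cost, which is consistent with the multi-second per-registration timings reported above for dense scans.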
} \begin{figure*} \centering \includegraphics[width=1\linewidth]{fig/trial_123_new-crop} \caption{The estimated trajectories based on different methods.} \label{fig:trial123-crop} \end{figure*} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/errors_new-crop} \caption{The estimation errors of different methods.} \label{fig:errors-crop} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/numsp-crop1} \caption{The number of angle and distance structure priors.} \label{fig:numsp-crop} \end{figure} \begin{table}[tbh] \caption{The RMSE of the estimation results based on different methods} \label{rmse_2} \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Translational RMSE {[}m{]}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Time per \\ iteration {[}s{]}\end{tabular}} \\ \cline{2-4} & NYA01 & NYA02 & NYA03 & \\ \hline\hline Vins-Mono & ------ & 0.2576 & 0.6118 & ------ \\ \hline Vins-Stereo & 0.2427 & 0.2424 & 0.3808 & ------ \\ \hline ALOAM & 0.0768 & 0.0902 & 0.0797 & ------ \\ \hline LVI-SAM & 0.0761 & 0.0885 & 0.0827 & ------ \\ \hline DLL & 0.0734 & 0.0663 & 0.0607 & 0.0911 \\ \hline SPINS & 0.0551 & 0.0672 & 0.0592 & 0.0798 \\ \hline ICP & 0.0262 & 0.0253 & 0.0214 & 2.003 \\ \hline \end{tabular} \end{table}} \subsubsection{Building inspection} In the larger-scale building inspection task, the UAV is driven to follow a trajectory covering the fa\c{c}ade of the building. The geometric feature extraction is shown in Fig. \ref{fig:sensing}. To utilize our proposed SPINS method, we manually measure some distances and angles, treated as the main patterns of the building; they are summarized in Table \ref{metrics}.
\begin{table}[] \caption{Hand-measured distances for the building} \label{metrics} \centering \begin{tabular}{|l|l|} \hline Distance Type& Typical Value {[}m{]} \\ \hline\hline Parallel Lines & {[}0.3, 0.5, 1.2, 1.5, 2.5, 3.3, 4{]} \\ \hline Parallel Line to Plane & {[}0.5, 1.2, 2.7, 3, 4{]} \\ \hline Parallel Planes & {[}1.5, 3, 4.5, 6{]} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/sensing_final} \caption{The extraction of point, line and plane features from both vision and LiDAR.} \label{fig:sensing} \end{figure} The localization results of the inspection using different methods are presented from Fig. \ref{fig:traj3d-crop} to Fig. \ref{fig:errors5eps-crop}. {A demonstrative video} is provided at \url{https://youtu.be/p-wca_WekvQ}. The trajectories of the different localization methods, along with the ground truth (GT), are plotted in Fig. \ref{fig:traj3d-crop}. Although all trajectories are initialized at the same $[0,0,0]^\top$, the VINS trajectory drifts as time goes on. Specifically, Fig. \ref{fig:traj-subfigs-crop} shows that the VINS trajectory drifts in the $x$ and $y$ directions due to low feature density and variation during horizontal movement, while the A-LOAM trajectory drifts mainly in the $z$ direction due to low depth variation during vertical movement along a 2.5D building. As indicated in Fig. \ref{fig:errors5eps-crop}, the results obtained using point, line and plane features clearly perform better in all three directions. Moreover, SPINS, which integrates the structure priors, outperforms the PLP method given the provided structure prior information. The position estimation RMSEs of PLP and SPINS are $1.018$\,m and $0.7452$\,m, respectively, a significant accuracy improvement from incorporating the structure priors.
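The point-on-plane structure prior discussed above penalizes the point-to-plane distance of an estimated landmark. A minimal sketch of such a residual and its Jacobian with respect to the point (names are ours, not the paper's implementation) looks as follows:

```python
import numpy as np

def point_to_plane_residual(p, n, d):
    """Signed distance of point p to the plane {x : n . x = d}.

    A point-on-plane structure prior adds this scalar as a factor-graph
    residual and drives it to zero; its Jacobian w.r.t. p is simply the
    unit normal n.  The normal is re-normalized here as a safeguard, so
    d must be given for the unit-normal parameterization.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    r = float(n @ np.asarray(p, dtype=float) - d)
    return r, n  # residual and Jacobian dr/dp
```

In a Gauss-Newton update this residual pulls the point onto the plane along the normal direction, which matches the qualitative effect shown in Fig. \ref{fig:after-crop}: wall points snap back onto the wall.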
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig/traj3d-crop} \caption{The estimated trajectories from different methods.} \label{fig:traj3d-crop} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig/traj-subfigs_nolegend-crop} \caption{The estimated positions in the X, Y and Z directions.} \label{fig:traj-subfigs-crop} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig/errors_withscopenolegend_small-crop} \caption{The localization errors in the X, Y and Z directions.} \label{fig:errors5eps-crop} \end{figure} \section{Conclusions} \label{conclusion} We propose a sliding-window optimization-based localization framework utilizing point, line, and plane features. Building on these heterogeneous features, we further develop a structure-prior integration method that improves localization robustness and accuracy. To alleviate the computational burden brought by the extra structure factor edges in the factor graph, we adopt a screening mechanism that selects the most informative structure priors. {\color{blue}Although the proposed SPINS shows advantages in the civilian environments of our experiments, its effectiveness in more generic environments remains to be verified. In the future, we will further investigate using general semantic prior information to aid localization and achieve adaptation to a wider range of environments. }
\section{Introduction} Microarchitectural attacks, such as Spectre~\cite{Kocher2018spectre}, Meltdown~\cite{meltdown2018}, Foreshadow~\cite{vanbulck2018foreshadow}, RIDL~\cite{ridl2019}, and ZombieLoad~\cite{zombieload2019}, exploit the abstraction gap between the Instruction Set Architecture (ISA) and how instructions are actually executed by processors to compromise the confidentiality and integrity of a system. That is, these attacks exploit microarchitectural side-effects resulting from processors' optimizations, such as speculative and out-of-order execution, and from processors' internal buffers and caches that are invisible at the ISA level. To secure systems against microarchitectural attacks, programmers need to reason about and \textit{program against} these microarchitectural side-effects. There is, however, no ``unique'' reference for microarchitectural side-effects. Even for a single manufacturer, processors subtly differ in security-relevant microarchitectural side-effects across generations. For example, the \texttt{clflush} instruction for flushing caches behaves differently across generations of Intel processors~\cite{flushgeist}. As a result, a program might be secure when run on one processor and insecure when run on another that provides slightly different guarantees. However, we cannot---and should not---expect programmers to manually tailor programs for specific processors and their security guarantees. Instead, we could rely on compilers (and the secure compilation community), as they can play a prominent role in bridging this gap: compilers should target specific processors' microarchitectural security guarantees and leverage these guarantees to produce secure code.
This will enable decoupling program-level security (say, ensuring that secrets are not leaked under the ISA semantics), which programmers should enforce, and microarchitectural security (say, preventing leaks of secrets due to microarchitectural side-effects), which is the job of the compiler. To achieve this, we outline the idea of \textit{Contract-Aware Secure COmpilation} (CASCO) where compilers are parametric with respect to a hardware/software security contract, an abstraction capturing a processor's security guarantees. That is, compilers will automatically leverage the guarantees formalized in the contract to ensure that program-level security properties are preserved at the microarchitectural level. For concreteness, our overview of CASCO builds on a recent formulation of hardware/software contracts~\cite{guarnieri2021contracts} that focuses on data confidentiality (and therefore hypersafety properties). We believe that CASCO is more general and can also be applied to other classes of security properties. \section{Contract-aware secure compilation} The CASCO framework relies on the following elements: ISA, Hardware and Contract languages, the adversary we consider and the notion of contract-aware compilers. \smallskip \paragraph{ISA language} We consider an ISA language \src{L} with a notion of programs \src{p} (comprising both code and data segments) and of architectural program states $\src{\sigma} \in \src{AS}$. \src{L} is equipped with an \textit{architectural semantics} $\archStyle{\rightarrow}:\src{AS}\times\src{AS}$ that models the execution of programs at the architectural level, mapping an architectural state{} $\src{\sigma}$ to its successor $\src{\sigma'}$. Assume given $\archSemP{\src{p}}$, a function that denotes the \emph{Architectural TRaces} of \src{p}, derived from the sequence of architectural state{}s $\src{\sigma} \cdot \ldots \cdot \src{\sigma_n}$ that the execution of \src{p} goes through according to $\archStyle{\rightarrow}$.
\smallskip \paragraph{Hardware} The execution of $\src{L}$-programs at the microarchitectural level is formalised with a \textit{hardware semantics} that relies on {\em hardware states} $\trgb{\hs}=\trg{\tup{\src{\sigma},\trgb{\mu}}} \in \trg{HS}$. Hardware states consist of an architectural state{} $\src{\sigma}$ (as before) and a {\em microarchitectural state} $\trgb{\mu}$, which models the state of components like predictors, caches, and reorder buffers. A hardware semantics $\muarchStyle{\Rightarrow}{} : \trg{HS} \times \trg{HS}$ maps hardware states $\trgb{\hs}$ to their successor $\trgb{\hs'}$.\looseness=-1 \smallskip \paragraph{Adversary} We consider a hardware-level adversary that can observe parts of the microarchitectural state{} during execution. Given a program $\src{p}$, $\muarchSemP{\src{p}}$ denotes the \emph{Hardware TRaces} of \src{p}, that is, the sequence of hardware observations $\mathcal{A}(\trg{\trgb{\mu}_0}) \cdot \ldots \cdot \mathcal{A}(\trg{\trgb{\mu}_n})$ that the hardware state \trgb{\hs} of \src{p} goes through according to \muarchStyle{\Rightarrow}{}. Here, $\mathcal{A}(\trgb{\mu})$ maps $\trgb{\mu}$ to its attacker-visible components (say, the cache metadata). \smallskip \paragraph{Contracts} A contract splits the responsibilities for preventing side-channels between software and hardware, and it provides a concise representation of a processor's microarchitectural security guarantees. Following~\cite{guarnieri2021contracts}, a {\em contract} \con{c} defines: \begin{inparaenum}[(1)] \item a notion of contract states $\con{\cs}\in\con{CS}$ that extend \src{\sigma} with contract-related components, \item labels $\con{l}\in\con{LC}$ representing contract-observations, and \item a labeled semantics $\interfStyle{\rightharpoonup} : \con{CS}\times\con{LC}\times\con{CS}$.
\end{inparaenum} Given a program $\src{p}$, \interfSemP{\src{p}} denotes the \emph{Contract TRace} of \src{p}, that is, the sequence \con{l_1,\cdots,l_n} of labels that the contract state \con{\cs} of \src{p} goes through according to \interfStyle{\rightharpoonup}{}. The contract traces of a program $\interfSemP{\cdot}$ capture which architectural state{}s are guaranteed to be indistinguishable by a hardware attacker on any hardware platform satisfying the contract: \begin{definition}[Hardware satisfies contract~\cite{guarnieri2021contracts}]\label{def:hni} A hardware semantics $\muarchSemP{\cdot}$ {\em satisfies a contract $\con{c}$} (denoted $\hsni{\con{c}}{\muarchSemP{\cdot}}$) if, for all programs \src{p} and \src{p'} that only vary in the data segment, if $\interfSemP{\src{p}} = \interfSemP{\src{p'}}$, then $\muarchSemP{\src{p}}= \muarchSemP{\src{p'}}$. \end{definition} \paragraph{Contract-aware compilers} Contract-aware compilers~($\comp{\cdot}$) are parametric with respect to a contract $\con{c}\in\con{\mk{C}}$, which formalizes a processor's security guarantees. The target program \comp{\con{c},\src{p}} depends on the source $\src{p}$ and on the contract $\con{c}$.\looseness=-1 Contract-aware compilers can be constructed to preserve many security properties (e.g., cryptographic constant-time and absence of speculative leaks), so long as these properties are expressible in the contract semantics (fortunately, this is often the case~\cite{guarnieri2021contracts}). Depending on the property of interest, we then choose different secure compilation criteria and instantiate them with the ISA and contract semantics. Proving that a contract-aware compiler upholds such a criterion demonstrates that the criterion is preserved for all contracts $\con{c} \in \con{\mk{C}}$, which determine the target language's semantics.
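\Cref{def:hni} can be exercised on toy finite models. Below is a minimal Python sketch (entirely ours, not part of the paper's formal development): programs that "differ only in the data segment" are modeled as integers, and the contract and hardware trace functions are plain callables; the check enumerates program pairs and flags any pair that is contract-equivalent but hardware-distinguishable.

```python
def satisfies_contract(programs, contract_trace, hardware_trace):
    """Toy check of Definition 1 on a finite set of programs: whenever two
    programs have equal contract traces, their attacker-visible hardware
    traces must be equal too; otherwise the hardware leaks more than the
    contract promises."""
    for p in programs:
        for q in programs:
            if contract_trace(p) == contract_trace(q) and \
               hardware_trace(p) != hardware_trace(q):
                return False  # hardware distinguishes contract-equal programs
    return True
```

A contract that exposes more (here, the data modulo 4) is satisfied by hardware that exposes less (modulo 2), but not the other way around, mirroring the ordering of contract strength discussed in~\cite{guarnieri2021contracts}.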
As an example, consider the security property of interest being the prevention of all microarchitectural leaks of information not exposed by ISA observations (captured by the architectural traces $\archSemP{\cdot}$); this can ensure, for instance, the absence of leaks of transiently accessed data~\cite{patrignani2019exorcising}. We therefore choose the secure compilation criterion preserving 2-hypersafety properties~\cite{rhc,rhc-rel}. An instantiation of that criterion is found in \Cref{def:sec-comp} below. Informally, it states that the compiler translates ISA-equivalent programs into contract-equivalent ones, so there is no more leakage at the contract level than what is expressible at the ISA level.\looseness=-1 \begin{definition}[Compiler satisfies contract]\label{def:sec-comp} We say that a compiler $\comp{\cdot}$ is secure for all contracts of $\con{\mk{C}}$ (denoted as $\comp{\cdot} \vdash {\con{\mk{C}}}$) if for all contracts $\con{c} \in \con{\mk{C}}$ and programs $\src{p}$, \src{p'} that only differ in the data segment, if $\archSemP{\src{p}} = \archSemP{\src{p'}}$, then $\interfSemP{\comp{\con{c},\src{p}}} = \interfSemP{\comp{\con{c},\src{p'}}}$. \end{definition} \Cref{theorem:main} illustrates the overarching benefits of using CASCO. It is sufficient to show that both the hardware and the compiler satisfy a contract (\Cref{def:hni} and \Cref{def:sec-comp}) to derive that any ISA program \src{p} will produce hardware executions that will not be vulnerable to attacks when run on hardware satisfying the contract. Notably, proofs of \Cref{def:hni} and \Cref{def:sec-comp} can be done separately and by different parties: hardware developers can provide contracts and prove \Cref{def:hni} independently of specific compiler criteria, while developers of secure compilers can focus on proving \Cref{def:sec-comp} while ignoring most of the hardware details (except those captured by contracts).
\begin{theorem}\label{theorem:main} If $\comp{\cdot} \vdash {\con{\mk{C}}}$, $\con{c}\in\con{\mk{C}}$, and $\hsni{\con{c}}{\muarchSemP{\cdot}}$, then for all programs $\src{p}$ and \src{p'} that only differ in the data segment, if $\archSemP{\src{p}} = \archSemP{\src{p'}}$, then $\muarchSemP{\comp{\con{c},\src{p}}} = \muarchSemP{\comp{\con{c}, \src{p'}}}$. \end{theorem} \section{CASCO for secure speculation} To illustrate the benefits of CASCO, we focus on speculative execution attacks (Spectre) as an example due to the availability of compiler-level countermeasures~\cite{patrignani2019exorcising} and security contracts~\cite{guarnieri2021contracts}. CASCO, however, is more general and it can be applied to all settings where microarchitectural attacks are prevented by compiler-inserted countermeasures. Consider the classical Spectre v1 attack~\cite{Kocher2018spectre}. There, an \trg{attacker} poisons the \trg{branch} \trg{predictor} (which exists at the \trg{hardware} level and not at the \src{ISA} level) to trigger speculative execution and encode speculatively accessed data (otherwise inaccessible) into the \trg{cache}, so the attacker can later retrieve them by probing the \trg{cache}. There exist four different \con{contracts} that serve as specifications of processors' microarchitectural security guarantees~\cite{guarnieri2021contracts} and that compilers can use as security specifications.\looseness=-1 \begin{description}[leftmargin=0pt, parsep=0pt, listparindent=1em, font =\sffamily\bfseries] \item[Contract $\CtSeqInterfP{\cdot}$:] This contract exposes the program counter and the locations of memory accesses on sequential, non-speculative paths. $\CtSeqInterfP{\cdot}$ is often used to formalize constant-time programming~\cite{BartheBCL19,AlmeidaBBDE16}, and it is satisfied (in the sense of \Cref{def:hni}) by in-order, non-speculative processors~\cite{guarnieri2021contracts}.
\item[Contract $\CtSpecInterfP{\cdot}$:] This contract additionally exposes the program counter and the locations of all memory accesses on speculatively executed paths~\cite{guarnieri2020spectector}. Simple speculative out-of-order processors satisfy $\CtSpecInterfP{\cdot}$~\cite{guarnieri2021contracts}. \item[Contract $\ArchSeqInterfP{\cdot}$:] This contract, which guarantees the confidentiality of data that is {\em only transiently} loaded, exposes the program counter, the location of all loads and stores, and the values of all data loaded from memory on standard, i.e., non-speculative, program paths. Processors implementing speculative taint tracking~\cite{STT2019,nda2019weisse} satisfy $\ArchSeqInterfP{\cdot}$~\cite{guarnieri2021contracts}. \item[Contract $\CtPcSpecInterfP{\cdot}$:] This contract exposes program counter and addresses of loads during sequential execution, and only the program counter during speculative execution. Processors with load-delay countermeasures~\cite{specshadow2019} satisfy $\CtPcSpecInterfP{\cdot}$~\cite{guarnieri2021contracts}. \end{description} A possible countermeasure against Spectre v1 attacks, implemented in the Microsoft Visual C++ and Intel ICC compilers~\cite{Intel-compiler,microsoft}, is the insertion of \texttt{lfence} instructions (which stop speculation). The countermeasure has been developed to work against speculative, out-of-order processors (contract $\CtSpecInterfP{\cdot}$), and it injects an \texttt{lfence} instruction after all branch instructions, preventing the attack described before. However, a contract-aware compiler can rely on the contract information to know the underlying processor's security guarantees and optimise its code, avoiding the injection of unnecessary \texttt{lfence}s. For example, consider processors that implement load-delay (contract $\CtPcSpecInterfP{\cdot}$) or speculative taint-tracking countermeasures (contract $\ArchSeqInterfP{\cdot}$). 
A contract-aware compiler targeting those processors can avoid inserting \texttt{lfence}s after branches since there, the speculative memory leaks are prevented by the hardware. \section{Future directions} We believe CASCO provides foundations for designing and proving the correctness of compilers that automatically leverage hardware-level security guarantees, formalized using security contracts, to prevent microarchitectural leaks. % For this, we will need \begin{inparaenum}[(1)] \item formal languages for modeling interesting classes of contracts; \item ways of formalizing compilers that use contract information to optimise code; and % \item new proof techniques that account for contract parametricity and composability (to simplify proofs across similar contracts). \end{inparaenum} \smallskip {\small \textbf{Acknowledgements:} This work was partially supported by the German Federal Ministry of Education and Research (BMBF) through funding for the CISPA-Stanford Center for Cybersecurity (FKZ: 13N1S0762), by a grant from Intel Corporation, Juan de la Cierva-Formaci\'on grant FJC2018-036513-I, Spanish project RTI2018-102043-B-I00 SCUM, and Madrid regional project S2018/TCS-4339 BLOQUES. } \balance
\section{Introduction} It is known from studies of the AdS/CFT correspondence that charged black holes in a gravitational theory correspond to states at finite temperature with non-zero charge density, or non-zero chemical potential, in the dual field theory \cite{Maldacena:1997re}. The black hole is made charged via the Maxwell field, which is dual to the current of a global U(1) symmetry in the dual field theory. The AdS/CFT correspondence has been used to understand the transport coefficients of holographic matter at finite temperature with a finite density and magnetic field in \cite{Hartnoll:2007ih, Hartnoll:2007ip}. Studies of holographic matter are reviewed in \cite{Hartnoll:2016apf}. It is suggested in \cite{Iqbal:2008by} that, in the low frequency limit, the transport coefficients can be calculated by evaluating some geometric quantities at the horizon. The longitudinal electrical conductivity of the Einstein-Maxwell-dilaton-axion system with explicit breaking of spatial translational symmetry is computed for AdS spacetime in \cite{Andrade:2013gsa}. Based on the result of that computation, it is suggested in \cite{Liu:2020rrn} to write it as \begin{equation} \sigma_L=(\mu L)^{d-3}w_0 ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-3}\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]. \end{equation} A prescription to calculate the transport coefficients with momentum dissipation for holographic matter at the horizon is given in \cite{Donos:2014cya}, and it matches the computation made in \cite{Andrade:2013gsa}. Further studies are made in \cite{Blake:2017qgd, Blake:2014yla}.
We have calculated the longitudinal thermo-electric and thermal conductivities, and the results read \begin{eqnarray} \alpha_L&=&(\mu L)^{d-1} ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-2} \left[\frac{4\pi(d-2)w_0}{\psi_0k^2 L^2}\right],\nonumber \\ \kappa_L&=&(\mu L)^{d-1} ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-1}\left(\frac{16\pi^2 T_H }{\psi_0k^2}\right)\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]^{-1}, \end{eqnarray} where the form of the function $g(x,~y)$ is determined by solving \begin{equation} \mu L~~ g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)=\frac{r_h}{L} \end{equation} in terms of the size of the horizon, $r_h$; the horizon size itself is determined by solving eq(\ref{temp_b_charge_density_dissipation}) with the magnetic field set to zero. It follows that, generically, it is not possible to separate the incoherent production of particle-hole pairs from the momentum dissipation due to lattice effects. \paragraph{AdS spacetime:} In this paper, we revisit the charged AdS black hole solutions with planar horizon in arbitrary spacetime dimensions in the Einstein-Maxwell-dilaton-axion system and find that the celebrated Wiedemann-Franz relation, unfortunately, does not hold. However, there exists a relation among the transport coefficients, involving the electrical conductivity, the thermo-electric conductivity, the thermal conductivity, the chemical potential and the temperature, which reads \begin{equation} \frac{\mu^2}{k^2}\frac{\kappa_L\sigma_L}{T_H \alpha^2_L}=\frac{\psi_0 L^2}{(d-2)^2 w_0}={\rm constant}. \end{equation} This relation involves quantities defined at the horizon, namely the transport coefficients, as well as the chemical potential defined at the boundary (through the behavior of the gauge potential), which means the above product is not necessarily universal \cite{Davison:2016ngz}. Indeed, it holds only for the AdS spacetime.
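As a consistency check, the constant value of the product follows directly from the expressions quoted above (a short derivation; we abbreviate $G \equiv g(T_H/\mu, k/\mu)$ and $X \equiv (d-2)^2 w_0 \mu^2/(\psi_0 k^2 L^2)$, so the bracketed factors $[1+X]$ in $\sigma_L$ and $[1+X]^{-1}$ in $\kappa_L$ cancel):

```latex
\kappa_L\,\sigma_L
  = (\mu L)^{2d-4}\, G^{2d-4}\, \frac{16\pi^{2} w_{0} T_{H}}{\psi_{0} k^{2}}\,,
\qquad
T_{H}\,\alpha_L^{2}
  = (\mu L)^{2d-2}\, G^{2d-4}\,
    \frac{16\pi^{2}(d-2)^{2} w_{0}^{2} T_{H}}{\psi_{0}^{2} k^{4} L^{4}}\,,
```

so that

```latex
\frac{\mu^{2}}{k^{2}}\,\frac{\kappa_L\sigma_L}{T_{H}\alpha_L^{2}}
  = \frac{\mu^{2}}{k^{2}}\cdot
    \frac{\psi_{0} k^{2} L^{4}}{(d-2)^{2} w_{0}\,(\mu L)^{2}}
  = \frac{\psi_{0} L^{2}}{(d-2)^{2} w_{0}}\,,
```

independent of $T_H$ and of the undetermined function $G$, as claimed.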
\paragraph{Universal relations:} As soon as we turn on the magnetic field, the transport coefficients take the form written in eq(\ref{transport_b}) and eq(\ref{transport_kappa}). It is easy to notice that the non-vanishing parts of the longitudinal electrical, thermo-electric and thermal conductivities are governed directly by the chemical potential, the momentum dissipation and the temperature, respectively, whereas the non-vanishing parts of the transverse conductivities are governed by the temperature, the chemical potential and the magnetic field. In fact, closer inspection shows that the transport coefficients evaluated at the horizon are not all independent. There exists an interesting relation among them, \begin{equation}\label{holo_transport_relation} \fcolorbox{lightgray}{white}{ $\displaystyle T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=1$ }, \end{equation} which in turn gives a relation involving all the transport coefficients except the longitudinal thermal conductivity at the horizon: \begin{equation} \fcolorbox{teal}{white}{ $\displaystyle T_H\left[ \sigma_{11}\sigma_{12}(\alpha^2_{11}-\alpha^2_{12})-\alpha_{11}\alpha_{12}(\sigma^2_{11}-\sigma^2_{12})\right]=\sigma_{11}\kappa_{12}(\sigma^2_{11}+\sigma^2_{12})$}. \end{equation} It says that the off-diagonal component of the thermal conductivity matrix, i.e., the transverse thermal conductivity, can be determined completely in terms of the electrical and thermo-electric conductivities. This holds irrespective of the precise details of the black hole spacetime. We have checked that the relation eq(\ref{holo_transport_relation}) also holds for the Einstein-DBI-dilaton-axion system studied in \cite{Pal:2019bfw, Pal:2020gsq}.
We also show the relations \begin{eqnarray}\label{holo_transport_relation_II} &&\fcolorbox{teal}{white}{ $\displaystyle \frac{\sigma_{11}(r_h)}{\alpha_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{\rho}{16\pi G B}=T_H\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}=-\left(\frac{Q}{V_{d-1}B}\right)$},\nonumber \\ &&\fcolorbox{teal}{white}{ $\displaystyle T_H\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=\left(\frac{16\pi G B}{\rho}\right)=\frac{\overline\kappa_{12}(r_h)}{\alpha_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=\left(\frac{V_{d-1} B}{Q}\right)$}, \end{eqnarray} where $\vartheta_{11}, \vartheta_{12}, \rho_{ij}$ and $Q$ are the Seebeck coefficient, Nernst response, resistivity coefficients and the electric charge, respectively. It is interesting to note that the relation eq(\ref{holo_transport_relation_II}) follows from eq(\ref{holo_transport_relation}) via the duality relation. \paragraph{New black hole solution:} In order to include the effect of the charge density at finite temperature in the IR, it is suggested in \cite{Tarrio:2011de, Alishahiha:2012qu} to include more than one U(1) gauge field. In this paper, we present a new class of solutions at non-zero temperature and charge density in the presence of a magnetic field by considering only one gauge field. Moreover, we do not need a non-trivial profile of the dilaton in the IR. The solution is characterized by two exponents $z$ and $\gamma$. A non-zero value of $\gamma$ essentially describes the scale violation. The paper is organized as follows. In {\bf section 2}, we shall present the Einstein-Maxwell-dilaton-axion system and calculate the currents after including the geometric and gauge field fluctuations. In {\bf section 3}, we shall calculate the transport coefficients.
In {\bf section 4}, we shall introduce a transformation that interchanges the charge density with the magnetic field and study its impact on the transport coefficients. We prove the relations eq(\ref{holo_transport_relation}) and eq(\ref{holo_transport_relation_II}). In {\bf section 5}, we study the behavior of the transport coefficients versus temperature and the magnetic field for AdS spacetime as well as for the scale-violating spacetime. The thermodynamics of the AdS spacetime is studied in Appendix A. The relations among the transport coefficients, i.e., eq(\ref{holo_transport_relation}) and eq(\ref{holo_transport_relation_II}), are proved for the planar black holes in the Einstein-DBI-dilaton-axion system in Appendix B. \section{The system} We study the transport coefficients at finite chemical potential and magnetic field in arbitrary spacetime dimensions. The action involves the metric, a dilaton, $(d-1)$ axions, and a gauge field, and takes the following form in $d+1$ spacetime dimensions \begin{eqnarray}\label{action} S_{bulk}&=&\frac{1}{2\kappa^2}\int d^{d+1} x\sqrt{-g}\bigg[ {\cal R}-2\Lambda-\frac{1}{2}(\partial \phi)^2-V(\phi)-\frac{W(\phi)}{4}F_{MN} F^{MN}\nonumber \\&-&\frac{\Psi(\phi)}{2}\Bigg((\partial \chi_1)^2+(\partial \chi_2)^2+\cdots+(\partial \chi_{d-1})^2 \Bigg)\Bigg]. \end{eqnarray} We shall take $d$ to be odd. The equations of motion that follow for the metric tensor are \begin{eqnarray} &&{\cal R}_{MN}-\frac{\Psi(\phi)}{2}\Bigg(\partial_M \chi_1\partial_N\chi_1+\partial_M \chi_2\partial_N\chi_2+\cdots+\partial_M \chi_{d-1}\partial_N\chi_{d-1}\Bigg) -\frac{W(\phi)}{2} {F_M}^L F_{NL}+\nonumber \\&&\frac{W(\phi)}{4(d-1)}g_{MN}F^{KL}F_{KL}-\frac{[V(\phi)+2\Lambda]}{(d-1)}g_{MN}- \frac{1}{2}\partial_M\phi\partial_N\phi=0.
\end{eqnarray} For the gauge field, \begin{equation} \partial_{M}\left[\sqrt{-g} W(\phi) F^{MN} \right]=0,\quad {\rm where}\quad F_{MN}=\partial_M a_N-\partial_N a_M. \end{equation} The equation of motion for the scalar field is \begin{eqnarray}\label{scalar_eom} &&\partial_{M}\bigg(\sqrt{-g}\partial^M\phi \bigg)-\sqrt{-g}\frac{dV(\phi)}{d\phi}-\frac{\sqrt{-g}}{2}\frac{d\Psi(\phi)}{d\phi}\bigg[(\partial \chi_1)^2+(\partial \chi_2)^2+\cdots+(\partial \chi_{d-1})^2\bigg]\nonumber \\&&-\frac{\sqrt{-g}}{4}\frac{dW(\phi)}{d\phi}F_{MN}F^{MN}=0. \end{eqnarray} The equations of motion for the axions $\chi_i$ are \begin{equation} \partial_{M}\bigg(\sqrt{-g}\Psi(\phi)\partial^M\chi_i \bigg)=0,\quad {\rm for}~~ i= 1,2, \cdots, (d-1). \end{equation} \paragraph{Solution:} To obtain the solution of the above equations, let us take the following ansatz for the metric and matter fields \begin{eqnarray}\label{solution_gen} ds^2_{d+1}&=&-g_{tt}(r)dt^2+g_{xx}(r)\left( dx^2_1+\cdots+dx^2_{d-1}\right)+g_{rr}(r)dr^2,\quad \phi=\phi(r),\quad\chi_i=kx_i,\nonumber \\ \quad F&=&a'_t(r)dr\wedge dt+B\left( dx_1\wedge dx_2+dx_3\wedge dx_4+\cdots+dx_{d-2}\wedge dx_{d-1}\right). \end{eqnarray} We shall consider the case in which $d-1$ is even, i.e., $d$ is odd. In what follows we write $g_{tt}(r)=U_1(r)$, $g_{xx}(r)=h(r)$ and $g_{rr}(r)=\frac{1}{U_2(r)}$. With this choice of geometry, a short calculation gives the Ricci tensor components \begin{eqnarray} {\cal R}_{tt}&=&\frac{(d-1)U_2h'U'_1}{4h}+\frac{U_2U''_1}{2}-\frac{U_2U'^2_1}{4U_1}+\frac{U'_1U'_2}{4},\nonumber \\ {\cal R}_{rr}&=&\frac{(d-1)h'^2}{4h^2}+\frac{U'^2_1}{4U^2_1}- \frac{(d-1)h'U'_2}{4hU_2}-\frac{U'_1U'_2}{4U_1U_2}-\frac{(d-1)h''}{2h}-\frac{U''_1}{2U_1} ,\nonumber \\ {\cal R}_{ij}&=&-\delta_{ij}\left[\frac{(d-3)U_2h'^2}{4h}+\frac{U_2h'U'_1}{4U_1}+\frac{U_2h''}{2}+\frac{h' U'_2}{4} \right].
\end{eqnarray} \paragraph{Equations of motion:} The equations of motion associated with the geometry are \begin{eqnarray} &&{\cal R}_{tt}-\frac{W(d-2)}{2(d-1)}U_2a_t^{'2}+\frac{(V+2\Lambda)}{(d-1)}U_1-\frac{WU_1B^2}{4h^2}=0,\nonumber \\ &&{\cal R}_{ij}-\frac{\Psi}{2}k^2\delta_{ij}-\frac{WU_2}{2(d-1)U_1}ha_t^{'2}\delta_{ij}-\frac{(V+2\Lambda)}{(d-1)}h\delta_{ij}-\frac{WB^2}{4h}\delta_{ij}=0,\nonumber \\ &&{\cal R}_{rr}+\frac{(d-2)}{2(d-1)}\frac{W}{U_1}a_t^{'2}- \frac{(V+2\Lambda)}{(d-1)U_2}-\frac{1}{2}\phi'^2+\frac{WB^2}{4h^2U_2}=0, \end{eqnarray} which, written out explicitly, gives \begin{eqnarray}\label{eom} &&\frac{(d-1)U_2h'U'_1}{4h}+\frac{U_2U''_1}{2}-\frac{U_2U'^2_1}{4U_1}+\frac{U'_1U'_2}{4}-\frac{W(d-2)}{2(d-1)}U_2a_t^{'2}+\frac{(V+2\Lambda)}{(d-1)}U_1-\frac{WU_1B^2}{4h^2}=0,\nonumber \\ &&\frac{(d-3)U_2h'^2}{4h}+\frac{U_2h'U'_1}{4U_1}+\frac{U_2h''}{2}+\frac{h' U'_2}{4} +\frac{\Psi}{2}k^2+\frac{WU_2}{2(d-1)U_1}ha_t^{'2}+\frac{(V+2\Lambda)}{(d-1)}h+\frac{WB^2}{4h}=0,\nonumber \\ &&\frac{(d-1)h'^2}{4h^2}+\frac{U'^2_1}{4U^2_1}- \frac{(d-1)h'U'_2}{4hU_2}-\frac{U'_1U'_2}{4U_1U_2}-\frac{(d-1)h''}{2h}-\frac{U''_1}{2U_1}+\nonumber \\&&\frac{(d-2)}{2(d-1)}\frac{W}{U_1}a_t^{'2}- \frac{(V+2\Lambda)}{(d-1)U_2}-\frac{1}{2}\phi'^2+\frac{WB^2}{4h^2U_2}=0. \end{eqnarray} These equations yield a constraint equation \begin{eqnarray}\label{constraint} &&2h^2\Bigg(2U_1(V+2\Lambda)+U_2(Wa'^2_t-U_1\phi'^2)\Bigg)\nonumber \\&=&-(d-1)\left[ 2h(k^2U_1\psi+U_2h'U'_1)+U_1(B^2W+(d-2)U_2h'^2)\right]. \end{eqnarray} The equation of motion for the gauge potential can be integrated once, giving \begin{equation} a'_t(r)=\rho\frac{\sqrt{U_1(r)}}{\sqrt{U_2(r)}W(\phi(r))h^{\frac{d-1}{2}}(r)}, \end{equation} where $\rho$ is an integration constant (the charge density); a further integration gives the chemical potential \begin{equation} \mu=A_t(r=\infty)=\rho\int^{\infty}_{r_h}dr \frac{\sqrt{U_1(r)}}{\sqrt{U_2(r)}W(\phi(r))h^{\frac{d-1}{2}}(r)}.
\end{equation} This gives the charge susceptibility \cite{Iqbal:2008by} \begin{equation} \chi_e\equiv\frac{\rho}{\mu}=\left( \int^{\infty}_{r_h} dr \frac{\sqrt{U_1(r)}}{\sqrt{U_2(r)}W(\phi(r))h^{\frac{d-1}{2}}(r)}\right)^{-1}. \end{equation} The equation of motion of the dilaton field is \begin{eqnarray}\label{scalar_eom} &&\partial_{r}\bigg(\sqrt{U_1U_2}h^{\frac{d-1}{2}}\phi' \bigg)-\frac{\sqrt{U_1}}{\sqrt{U_2}}h^{\frac{d-1}{2}}\bigg[\frac{dV(\phi)}{d\phi}+\frac{d-1}{2}k^2\frac{d\psi(\phi)}{d\phi}-\frac{dW(\phi)}{d\phi}\frac{U_2}{2U_1}a'^2_t\nonumber \\&+&\frac{(d-1)B^2}{4h^2}\frac{dW(\phi)}{d\phi}\bigg]=0. \end{eqnarray} \paragraph{Hawking temperature:} For a black hole solution, the associated Hawking temperature takes the form \begin{equation} T_H=\frac{1}{4\pi}\left(\frac{U'_1\sqrt{U_2}}{\sqrt{U_1}}\right)_{r_h}, \end{equation} where $r_h$ is the size of the horizon. The Bekenstein-Hawking entropy density is \begin{equation} s=\frac{h^{\frac{d-1}{2}}(r_h)}{4G}. \end{equation} \subsection{Fluctuation and equations} Let us fluctuate the background geometry and the matter fields as \begin{equation} g_{MN}\longrightarrow g^{(0)}_{MN}+H_{MN},\quad F_{MN}\longrightarrow F^{(0)}_{MN}+f_{MN},\quad \chi_i\longrightarrow \chi^{(0)}_i+\delta\chi_i, \end{equation} where the superscript $(0)$ on each field denotes the background solution and the other part the fluctuation. Under such a fluctuation, the Ricci tensor changes to leading order in the fluctuation as \begin{equation} {\cal R}_{MN}\longrightarrow {\cal R}^{(0)}_{MN}+ R^{(1)}_{MN}, \end{equation} where \begin{equation} R^{(1)}_{MN}=\frac{1}{2}\left[ \nabla^{(0)}_K\nabla^{(0)}_N {H^K}_M+\nabla^{(0)}_K\nabla^{(0)}_M {H^K}_N-\nabla^{(0)}_K\nabla^{(0)K} H_{MN}-g^{(0){KL}}\nabla^{(0)}_M\nabla^{(0)}_N H_{KL}\right]. \end{equation} The covariant derivatives, $\nabla^{(0)}_K$, are defined with respect to the unperturbed metric $g^{(0)}_{KL}$.
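The first integral quoted above for the gauge potential can be verified symbolically. The following sympy sketch (an illustrative check of our own, not part of the derivation; all variable names are ours) confirms that, for the diagonal ansatz $g=\mathrm{diag}(-U_1,1/U_2,h,\dots,h)$, the quoted $a'_t(r)$ renders $\sqrt{-g}\,W(\phi)F^{rt}$ constant, so the Maxwell equation is satisfied identically:

```python
import sympy as sp

r, rho, d = sp.symbols('r rho d', positive=True)
U1, U2, h, W = (sp.Function(n)(r) for n in ('U1', 'U2', 'h', 'W'))

# quoted first integral for the gauge potential
at_prime = rho*sp.sqrt(U1)/(sp.sqrt(U2)*W*h**((d - 1)/2))

# sqrt(-g) and F^{rt} = g^{rr} g^{tt} F_{rt} for the diagonal ansatz
sqrt_mg = h**((d - 1)/2)*sp.sqrt(U1)/sp.sqrt(U2)
F_rt_up = U2*(-1/U1)*at_prime

maxwell = sp.diff(sqrt_mg*W*F_rt_up, r)   # d/dr (sqrt(-g) W F^{rt})
print(sp.simplify(maxwell))               # → 0, i.e. sqrt(-g) W F^{rt} = -rho
```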
The resulting fluctuation equations of motion associated with the geometry and the gauge field are \begin{eqnarray} R^{(1)}_{MN}&-&\frac{W}{2}\left[ F^{(0)}_{MK} F^{(0)}_{NL} g^{(0)KR}H_{RS}g^{(0)SL}+F^{(0)}_{MK} f_{NL} g^{(0)KL}+F^{(0)}_{NL} f_{MK} g^{(0)KL}\right]-H_{MN}\frac{(V+2\Lambda)}{(d-1)}\nonumber \\&+& \frac{W}{4(d-1)}H_{MN} F^{(0)KL} F^{(0)}_{KL}+\frac{W}{2(d-1)}g^{(0)}_{MN}\left[F^{(0)KL} f_{KL}-{F^{(0)R}}_L F^{(0)SL}H_{RS} \right]\nonumber \\&-& \frac{\psi}{2}\partial_M \chi^{(0)}_i\partial_N\delta\chi_i-\frac{\psi}{2}\partial_M\delta\chi_i\partial_N \chi^{(0)}_i=0,\nonumber \\ &&\partial_K\Bigg[\sqrt{-g^{(0)}}W\bigg(\left[H^{MK}g^{(0)NL}-H^{ML}g^{(0)NK}\right]F^{(0)}_{MN}+f_{MN}g^{(0)MK}g^{(0)NL}+\nonumber \\&& \frac{1}{2}g^{(0)RS}H_{RS}F^{(0)}_{MN}g^{(0)MK}g^{(0)NL}\bigg)\Bigg]=0. \end{eqnarray} The superscript indices on the metric fluctuations are defined as $H^{MN}\equiv-g^{(0)MK}H_{KL}g^{(0)LN}$. \paragraph{Traceless fluctuation:} For our purpose, we shall consider only traceless fluctuations of the geometry ($g^{(0)RS}H_{RS}=0$), which can be written in the following way \begin{eqnarray}\label{fluctuation_geometry_without_b} ds^2&=&-U_1(r)dt^2+\frac{1}{U_2(r)}dr^2+h(r)\left( dx^2_1+\cdots+dx^2_{d-1}\right)+2G_{t {i}}(t, r)dt dx^i \nonumber \\&+& 2G_{ri}(r)dx^i dr,\quad \chi_i=k~~x^i+\delta\chi_i(r), \nonumber \\ F&=&a'_t(r)dr\wedge dt+B\left( dx_1\wedge dx_2+dx_3\wedge dx_4+\cdots+dx_{d-2}\wedge dx_{d-1}\right)\nonumber \\&+&\partial_ta_{{i}}(t,r)dt\wedge dx^i+\partial_ra_{{i}}(t,r)dr\wedge dx^i, \end{eqnarray} where the fluctuating parts of the metric and the other fields are treated as infinitesimal.
We shall consider the fluctuation to have the following structure \cite{Banks:2015wha} \begin{eqnarray}\label{flu_metric_gauge_without_b} a_{{i}}(t,r)&=&-E_{{i}} t+a_{{i}}(r)+\xi_{{i}} ~t~ a_t(r),\nonumber \\ G_{t{{i}}}(t,r)&=&h(r) h_{{{ti}}}(r)-t~ \xi_{{i}}~ U_1(r),\nonumber \\ G_{r{{i}}}&=&h(r) h_{{ri}}(r), \end{eqnarray} where the $\xi_i$ are related to thermal gradients along the spatial directions. The advantage of using such a traceless perturbation is that it is easier to decouple the fluctuating metric components from the rest. With this form of the fluctuations of the geometry and matter fields, the non-zero Ricci tensor components $R^{(1)}_{t{i}}$ and $R^{(1)}_{r{i}}$ read as \begin{eqnarray} R^{(1)}_{t{i}}&=&-\frac{1}{2}h U_2 h''_{t{i}}+h'_{t{i}}\left[\frac{hU_2 U'_1}{4U_1}-\frac{(d+1)}{4}U_2h'-\frac{1}{4}hU'_2\right]\nonumber \\&-&h_{t{i}}\left(\frac{h'U_2 U'_1}{4U_1}+\frac{(d-3)}{4h}U_2h'^2+\frac{1}{4}h'U'_2+\frac{1}{2}U_2 h''\right)\nonumber \\&+&t \xi_{{i}}\left[\frac{(d-1)}{4h}U_2h'U'_1-\frac{U_2U'^2_1}{4U_1}+\frac{1}{4}U'_1U'_2+\frac{1}{2}U_2U''_1\right],\nonumber \\ R^{(1)}_{r{i}}&=&-h_{r{i}}\left[\frac{(d-3)U_2 h'^2}{4h}+\frac{U_2h'U'_1}{4U_1}+\frac{h'U'_2}{4}+\frac{U_2h''}{2}\right]+\xi_{{i}}\left[\frac{U'_1}{2U_1}-\frac{h'}{2h}\right].
\end{eqnarray} This results in the equations of motion for $h_{t{1}}$ and $h_{t{2}}$ as \begin{eqnarray}\label{eom_flu_I} &&-\frac{1}{2}h U_2 h''_{t{1}}-h'_{t{1}}\Bigg[\frac{B^2 W+2k^2 h \psi+3d U_2 h'^2+2h h'U'_2}{8h'}+\nonumber \\&&\frac{h^2}{4(d-1) U_1 h'}\Bigg(2U_1(V+2\Lambda)+U_2(Wa'^2_t-U_1\phi'^2)\Bigg)\Bigg]+h_{t{1}}\Bigg[ \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\Bigg]\nonumber \\&+&\frac{BW}{2h}(E_{{2}}-\xi_{{2}} a_t)-\frac{1}{2}BU_2 W h_{r{{2}}}a'_t-\frac{1}{2}U_2 Wa'_t a'_{{1}}=0,\nonumber \\ &&-\frac{1}{2}h U_2 h''_{t{2}}-h'_{t{2}}\Bigg[\frac{B^2 W+2k^2 h \psi+3d U_2 h'^2+2h h'U'_2}{8h'}+\nonumber \\&&\frac{h^2}{4(d-1) U_1 h'}\Bigg(2U_1(V+2\Lambda)+U_2(Wa'^2_t-U_1\phi'^2)\Bigg)\Bigg]+h_{t{2}}\Bigg[ \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\Bigg]\nonumber \\&-&\frac{BW}{2h}(E_{{1}}-\xi_{{1}} a_t)+\frac{1}{2}BU_2 W h_{r{{1}}}a'_t-\frac{1}{2}U_2 Wa'_t a'_{{2}}=0, \end{eqnarray} where we have used eq(\ref{eom}). The equations of motion associated with the other components of the metric fluctuation follow directly from eq(\ref{eom_flu_I}). As an example, the equation of motion for $h_{t{3}}$ follows from the first equation of eq(\ref{eom_flu_I}) with the following substitution: \begin{equation}\label{pres_1} h_{t{1}}\rightarrow h_{t{3}},\quad (E_{{2}},\xi_{{2}})\rightarrow (E_{{4}},\xi_{{4}}), \quad h_{r{{2}}}\rightarrow h_{r{{4}}}. \end{equation} Similarly, the equation of motion for $h_{t{4}}$ follows from the second equation of eq(\ref{eom_flu_I}) with the following substitution: \begin{equation}\label{pres_2} h_{t{2}}\rightarrow h_{t{4}},\quad (E_{{1}},\xi_{{1}})\rightarrow (E_{{3}},\xi_{{3}}), \quad h_{r{{1}}}\rightarrow h_{r{{3}}}. \end{equation} Most importantly, $h_{t{1}},~h_{t{2}},~h_{r{1}}$ and $h_{r{2}}~$ decouple from the rest of the metric fluctuations. The equations of motion of the other components of the fluctuating metric $h_{t{i}}$ follow similarly.
The equation of motion of the fluctuating gauge field reads as \begin{eqnarray} \partial_r\left[ \sqrt{U_1U_2} W h^{\frac{d-3}{2}}\left( a'_i\delta^{in}-Bh_{ri}\delta^{ij}\delta^{mn}\epsilon_{jm}+\frac{U_2}{U_1}a'_t\delta^{in}h_{ti}\right)\right]+\sqrt{\frac{U_1}{U_2}} W h^{\frac{d-5}{2}}B\delta^{ij}\xi_i\delta^{mn}\epsilon_{jm}=0. \end{eqnarray} \subsection{Currents} \paragraph{Electric currents:} The radially conserved electric currents for the Einstein-Maxwell-dilaton-axion system are \begin{eqnarray}\label{conserved_electric_current} J^{1}(r)&=&-\frac{\sqrt{U_1U_2}}{(16\pi G) } W h^{\frac{d-3}{2}}\left[ a'_{{1}}+B h_{r{{2}}}+\frac{h}{U_1}a'_t h_{t{1}}\right]-\frac{\xi_{{2}}}{(16\pi G) } M_J(r),\quad \nonumber \\ J^{2}(r)&=&-\frac{\sqrt{U_1U_2}}{(16\pi G) } W h^{\frac{d-3}{2}}\left[ a'_{{2}}-B h_{r{{1}}}+\frac{h}{U_1}a'_t h_{t{2}}\right]+\frac{\xi_{1}}{(16\pi G)} M_J(r),\nonumber \\ M_J(r)&=&-B \int_{r_h}^{r} dr'\sqrt{\frac{U_1}{U_2}}Wh^{\frac{d-5}{2}}. \end{eqnarray} The other components of the radially conserved currents follow from the prescriptions written down in eq(\ref{pres_1}) and eq(\ref{pres_2}).
\paragraph{Heat currents:} Let us consider the quantities ${\cal Q}^1(r)$ and ${\cal Q}^2(r)$, which have the following structure \begin{eqnarray} {\cal Q}^1(r)&=&\frac{U^{\frac{3}{2}}_1\sqrt{U_2}}{(16\pi G) } h^{\frac{d-3}{2}}\partial_r\left( \frac{h h_{t1}}{U_1} \right)-a_t(r) J^1(r),\nonumber \\ {\cal Q}^2(r)&=&\frac{U^{\frac{3}{2}}_1\sqrt{U_2}}{(16\pi G) } h^{\frac{d-3}{2}}\partial_r\left( \frac{h h_{t2}}{U_1} \right)-a_t(r) J^2(r). \end{eqnarray} It follows that the radial gradients of ${\cal Q}^1(r)$ and ${\cal Q}^2(r)$ can be calculated using the fluctuation equations of motion for $h_{t1}(r)$ and $h_{t2}(r)$ as written down in eq(\ref{eom_flu_I}), as well as the equations of motion of $U_1(r),~U_2(r)$ and $h(r)$ as written in eq(\ref{eom}), which results in \begin{eqnarray} \partial_r {\cal Q}^1&=&\frac{BW}{16\pi G}\sqrt{\frac{U_1}{U_2}}h^{\frac{d-5}{2}}(E_{{2}}-\xi_{{2}} A_t)+\frac{\xi_2}{16\pi G}M_J(r)a'_t(r)\equiv {\cal M}^1(r),\nonumber \\ \partial_r {\cal Q}^2&=&-\frac{BW}{16\pi G}\sqrt{\frac{U_1}{U_2}}h^{\frac{d-5}{2}}(E_{{1}}-\xi_{{1}} A_t)-\frac{\xi_1}{16\pi G}M_J(r)a'_t(r)\equiv {\cal M}^2(r). \end{eqnarray} The radially conserved heat currents can then be constructed as \begin{equation} Q^1(r)={\cal Q}^1(r)-\int^r_{r_h} dx {\cal M}^1(x),\quad Q^2(r)={\cal Q}^2(r)-\int^r_{r_h} dx {\cal M}^2(x). \end{equation} One can similarly construct the radially conserved heat currents along the other spatial directions. \subsubsection{Currents at the horizon} In order to calculate the currents at the horizon, we first need the behavior of the fields there.
Essentially, we impose in-falling boundary conditions at the horizon, which read as follows: \begin{eqnarray} a_1(r)&=&-\frac{E_1}{U_0}~\log(r-r_h)+{\cal O}(r-r_h),\quad a_2(r)=-\frac{E_2}{U_0}~\log(r-r_h)+{\cal O}(r-r_h),\nonumber \\ h_{t1}(r)&=& Uh_{r1}(r_h)-\xi_1 \left(\frac{U_1(r)}{h(r)~ U_0}\right)~\log(r-r_h)+{\cal O}(r-r_h), \nonumber \\ h_{t2}(r)&=&Uh_{r2}(r_h)-\xi_2\left(\frac{U_1(r)}{h(r)~ U_0}\right)~\log(r-r_h)+{\cal O}(r-r_h), \end{eqnarray} where $U(r)\equiv \sqrt{U_1(r)U_2(r)}=U_0 (r-r_h)+\cdots$. Note that $U_0$ is independent of $r$ and satisfies $U_0=\sqrt{U^{(0)}_1U^{(0)}_2}$, as we demand that the functions $U_1(r)$ and $U_2(r)$ near the horizon have the form \begin{equation}\label{u1_u2_horizon} U_1(r)=U^{(0)}_1(r-r_h)+{\cal O}(r-r_h)^2,\quad U_2(r)=U^{(0)}_2(r-r_h)+{\cal O}(r-r_h)^2. \end{equation} This allows us to write the temperature as \begin{equation} T_H=\frac{\sqrt{U^{(0)}_1U^{(0)}_2}}{4\pi}\equiv \frac{U_0}{4\pi}. \end{equation} \paragraph{Electric currents at the horizon:} With the help of the in-falling boundary conditions, the currents at the horizon read as \begin{eqnarray} (16\pi G) J^{1}(r_h)&=&\left[W h^{\frac{d-3}{2}} \bigg( E_1-B h_{t{{2}}}-\frac{\rho }{Wh^{\frac{d-3}{2}}} h_{t{1}}\bigg)\right]_{r_h},\nonumber \\ (16\pi G) J^{2}(r_h)&=&\left[W h^{\frac{d-3}{2}} \bigg( E_{{2}}+B h_{t{{1}}}-\frac{\rho }{Wh^{\frac{d-3}{2}}} h_{t{2}}\bigg)\right]_{r_h},\\ (16\pi G)~ Q^1(r_h)&=&-U_0h^{\frac{d-1}{2}}(r_h)h_{t1}(r_h), \quad (16\pi G)~ Q^2(r_h)=-U_0h^{\frac{d-1}{2}}(r_h)h_{t2}(r_h). \end{eqnarray} In order to express the currents at the horizon in terms of the electric fields and the thermal gradients, we need to know the behavior of $h_{t1}$ and $h_{t2}$ at the horizon. It can be calculated with the help of eq(\ref{constraint}) and the first equation of eq(\ref{eom_flu_I}).
This results in the following fluctuation equation \begin{eqnarray}\label{eom_flu} &&-\frac{1}{2}h U_2 h''_{t{1}}-h'_{t{1}}\Bigg[\frac{(d+1) U_1U_2h'-hU_2U'_1+hU_1U'_2}{4U_1}\Bigg]+h_{t{1}}\Bigg[ \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\Bigg]\nonumber \\&+ &\frac{BW}{2h}(E_{{2}}-\xi_{{2}} a_t)-\frac{1}{2}BU_2 W h_{r{{2}}}a'_t-\frac{1}{2}U_2 Wa'_t a'_{{1}}=0. \end{eqnarray} Evaluating this differential equation at the horizon with the help of the in-falling boundary conditions gives \begin{equation}\label{coupled_ht1_ht2_I} \Bigg[\frac{\xi_1}{2} U_0+h_{t{1}}\left( \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\right)+\frac{BW}{2h}E_{{2}}-\frac{B \rho}{2h^{\frac{d-1}{2}}} h_{t2} +\frac{E_1 \rho}{2h^{\frac{d-1}{2}}}\Bigg]_{r_h} =0. \end{equation} Similarly, the differential equation obeyed by $h_{t2}(r)$ is \begin{eqnarray} &&-\frac{1}{2}h U_2 h''_{t{2}}-h'_{t{2}}\Bigg[\frac{(d+1)U_1 U_2 h'+h(U_1 U'_2-U_2U'_1)}{4U_1}\Bigg] +h_{t{2}}\Bigg[ \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\Bigg]\nonumber \\&-&\frac{BW}{2h}(E_{{1}}-\xi_{{1}} a_t)+\frac{1}{2}BU_2 W h_{r{{1}}}a'_t-\frac{1}{2}U_2 Wa'_t a'_{{2}}=0. \end{eqnarray} Evaluating this equation at the horizon with the in-falling boundary conditions gives \begin{equation}\label{coupled_ht1_ht2_II} \Bigg[\frac{\xi_2}{2} U_0+h_{t{2}}\left( \frac{B^2 W}{2h}+\frac{k^2\psi}{2}\right)-\frac{BW}{2h}E_{{1}}+\frac{B \rho}{2h^{\frac{d-1}{2}}} h_{t1} +\frac{E_2 \rho}{2h^{\frac{d-1}{2}}}\Bigg]_{r_h} =0. \end{equation} Solving eq(\ref{coupled_ht1_ht2_I}) and eq(\ref{coupled_ht1_ht2_II}), the behavior of $h_{t1}(r_h)$ and $h_{t2}(r_h)$ at the horizon reads as \begin{eqnarray} h_{t1}(r_h)&=&-\Bigg[\frac{ 1}{ B^4h^d W^2+h^{ d+2} k^4 \psi^2 + B^2 h^3 ( \rho^2+ 2 h^{d-2} k^2 W \psi)}\times\nonumber \\&&[E_1 h^{( d+5)/2} k^2 \rho \psi+ B E_2h^3 ( \rho^2 +B^2 h^{d-3} W^2+ h^{d-2} k^2 W \psi)+\nonumber \\&&\xi_1U_0h^{d+1}(B^2W+hk^2\psi)+\xi_2BU_0\rho h^{( d+5)/2}]\Bigg]_{r_h},\nonumber \\
h_{t2}(r_h)&=&\Bigg[\frac{ 1}{ B^4h^d W^2+h^{ d+2} k^4 \psi^2 + B^2 h^3 ( \rho^2+ 2 h^{d-2} k^2 W \psi)}\times\nonumber \\&&[B E_1h^3 ( \rho^2 +B^2 h^{d-3} W^2+ h^{d-2} k^2 W \psi)-E_2 h^{( d+5)/2} k^2 \rho \psi -\nonumber \\&&\xi_2U_0h^{d+1}(B^2W+hk^2\psi)+\xi_1BU_0\rho h^{( d+5)/2}]\Bigg]_{r_h}. \end{eqnarray} \section{Transport quantities} The electric currents and the heat currents at the horizon can be written in terms of the electrical, thermo-electric and thermal conductivities as follows \begin{eqnarray}\label{electrical_heat_current} J^1(r_h)&=&\sigma_{11}(r_h)E_1+\sigma_{12}(r_h)E_2+T_H\alpha_{11}(r_h)\xi_1+T_H\alpha_{12}(r_h)\xi_2,\nonumber \\ J^2(r_h)&=&\sigma_{21}(r_h)E_1+\sigma_{22}(r_h)E_2+T_H\alpha_{21}(r_h)\xi_1+T_H\alpha_{22}(r_h)\xi_2,\nonumber \\ Q^1(r_h)&=&T_H{\overline\alpha}_{11}(r_h)E_1+T_H{\overline\alpha}_{12}(r_h)E_2+T_H{\overline\kappa}_{11}(r_h)\xi_1+T_H{\overline\kappa}_{12}(r_h)\xi_2,\nonumber \\ Q^2(r_h)&=&T_H{\overline\alpha}_{21}(r_h)E_1+T_H{\overline\alpha}_{22}(r_h)E_2+T_H{\overline\kappa}_{21}(r_h)\xi_1+T_H{\overline\kappa}_{22}(r_h)\xi_2. \end{eqnarray} Similar expressions exist for the other currents.
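The coupled horizon system and the symmetry pattern of the resulting conductivity matrix can be cross-checked symbolically. The sympy sketch below (ours, illustrative only; it spot-checks at $d=5$ with $16\pi G=1$, the paper taking $d$ odd) re-solves eq(\ref{coupled_ht1_ht2_I}) and eq(\ref{coupled_ht1_ht2_II}) for $h_{t1}(r_h)$ and $h_{t2}(r_h)$, builds the horizon currents, and verifies $\sigma_{11}=\sigma_{22}$, $\sigma_{12}=-\sigma_{21}$ and $\alpha_{11}={\overline\alpha}_{11}$:

```python
import sympy as sp

# Horizon data; spot-check at d = 5, with 16*pi*G = 1.
B, W, psi, k, rho, h, U0 = sp.symbols('B W psi k rho h U_0', positive=True)
E1, E2, xi1, xi2, ht1, ht2 = sp.symbols('E_1 E_2 xi_1 xi_2 h_t1 h_t2')
d = 5
hm = h**sp.Rational(d - 1, 2)          # h^{(d-1)/2} at the horizon

A = B**2*W/(2*h) + k**2*psi/2          # common h_t coefficient
C = B*rho/(2*hm)
eqI  = xi1*U0/2 + A*ht1 + B*W/(2*h)*E2 - C*ht2 + E1*rho/(2*hm)
eqII = xi2*U0/2 + A*ht2 - B*W/(2*h)*E1 + C*ht1 + E2*rho/(2*hm)
sol = sp.solve([eqI, eqII], [ht1, ht2])

# horizon currents built from the solution
J1 = W*h**sp.Rational(d - 3, 2)*(E1 - B*sol[ht2]) - rho*sol[ht1]
J2 = W*h**sp.Rational(d - 3, 2)*(E2 + B*sol[ht1]) - rho*sol[ht2]
Q1 = -U0*hm*sol[ht1]

chk_sigma_diag = sp.simplify(sp.diff(J1, E1) - sp.diff(J2, E2))   # sigma_11 - sigma_22
chk_sigma_hall = sp.simplify(sp.diff(J1, E2) + sp.diff(J2, E1))   # sigma_12 + sigma_21
chk_onsager = sp.simplify(sp.diff(J1, xi1) - sp.diff(Q1, E1))     # alpha_11 - bar-alpha_11
print(chk_sigma_diag, chk_sigma_hall, chk_onsager)                # → 0 0 0
```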
The transport coefficients take the following form \begin{eqnarray}\label{transport_b} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)=\frac{1}{16\pi G}\left[\psi k^2h^{\frac{d-3}{2}}\frac{(\rho^2 h^{2-d}+W\psi k^2+W^2B^2h^{-1})}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h}\nonumber \\ \sigma_{12}(r_h)&=&-\sigma_{21}(r_h)=\frac{1}{16\pi G}\left[\rho Bh^{-1}\frac{(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h}\nonumber \\ \alpha_{11}(r_h)&=&\alpha_{22}(r_h)={\overline\alpha}_{11}(r_h)={\overline\alpha}_{22}(r_h)\nonumber \\&=& \frac{1}{16\pi G}\left[\frac{4\pi \rho\psi k^2}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h}\nonumber \\ \alpha_{12}(r_h)&=&-\alpha_{21}(r_h)={\overline\alpha}_{12}(r_h)=-{\overline\alpha}_{21}(r_h)\nonumber \\&=& \frac{1}{16\pi G}\left[4\pi B h^{\frac{d-3}{2}}\frac{(\rho^2 h^{2-d}+W\psi k^2+W^2B^2h^{-1})}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h}\nonumber \\ {\overline\kappa}_{11}(r_h)&=&{\overline\kappa}_{22}(r_h)=\frac{1}{16\pi G}\left[16\pi^2 T_H h^{\frac{d-1}{2}}\frac{(\psi k^2+WB^2h^{-1})}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h}\nonumber \\ {\overline\kappa}_{12}(r_h)&=&-{\overline\kappa}_{21}(r_h)=\frac{1}{16\pi G} \left[\frac{16\pi^2 T_H\rho B}{\psi^2 k^4+B^2h^{-1}(\rho^2 h^{2-d}+2W\psi k^2+W^2B^2h^{-1})}\right]_{r_h} \end{eqnarray} The thermal conductivity is defined as the response of the heat current to the thermal gradient at zero electric current, $\kappa_{ij}={\overline\kappa}_{ij}-T_H(\alpha\sigma^{-1}\alpha)_{ij}$: \begin{eqnarray}\label{transport_kappa} \kappa_{11}(r_h)&=&\kappa_{22}(r_h)=\Bigg[{\overline\kappa}_{11}-T_H\frac{(\alpha^2_{11}-\alpha^2_{12})\sigma_{11}+2\alpha_{11}\alpha_{12}\sigma_{12}}{\sigma^2_{11}+\sigma^2_{12}}\Biggr]_{r_h}\nonumber \\&=&\frac{1}{16\pi G}\left[16\pi^2 T_H W h^{\frac{d-1}{2}}\frac{(\rho^2
h^{2-d}+W\psi k^2)}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h}\nonumber \\ \kappa_{12}(r_h)&=&-\kappa_{21}(r_h)=\Biggl[{\overline\kappa}_{12}+T_H\frac{(\alpha^2_{11}-\alpha^2_{12})\sigma_{12}-2\alpha_{11}\alpha_{12}\sigma_{11}}{\sigma^2_{11}+\sigma^2_{12}}\Biggr]_{r_h}\nonumber \\&=&-\frac{1}{16\pi G}\left[ \frac{16\pi^2 B T_HW^2\rho}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h}. \end{eqnarray} Interestingly, the Nernst coefficient reads as \begin{equation} \nu\equiv \frac{\alpha_{12}(r_h)}{B\sigma_{11}(r_h)}=\left(\frac{4\pi}{\psi k^2}\right)_{r_h}. \end{equation} The Seebeck coefficient and the Nernst response are defined through the ratio of the electric field to the thermal gradient in the absence of electric current, $\vartheta=-\sigma^{-1}\cdot \alpha$. The Seebeck coefficient $\vartheta_{11}$ and the Nernst response $\vartheta_{12}$ read as \begin{eqnarray}\label{nernst_coefficient} \vartheta_{11}(r_h)&=&-\frac{\sigma_{11}(r_h)\alpha_{11}(r_h)+\sigma_{12}(r_h)\alpha_{12}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}=-4\pi\rho\left[\frac{h^{\frac{3-d}{2}}(\rho^2 h^{2-d}+W\psi k^2+W^2B^2h^{-1})}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h},\nonumber \\ \vartheta_{12}(r_h)&=&-\frac{\sigma_{11}(r_h)\alpha_{12}(r_h)-\sigma_{12}(r_h)\alpha_{11}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}=-4\pi B\left[\frac{W^2\psi k^2}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h}. \end{eqnarray} Let us also mention the resistivity matrix \begin{eqnarray}\label{resistivity} \rho_{11}(r_h)&\equiv&\frac{\sigma_{11}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}=(16\pi G)\left[\frac{h^{\frac{3-d}{2}}\psi k^2(\rho^2 h^{2-d}+W\psi k^2+W^2B^2h^{-1})}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h}\nonumber \\&=&-(16\pi G)\left[\frac{\psi k^2}{4\pi\rho}\vartheta_{11}(r)\right]_{r_h},\nonumber \\ \rho_{12}(r_h)&\equiv&-\frac{\sigma_{12}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}=-(16\pi G)\rho
B\left[\frac{h^{2-d}(\rho^2 h^{2-d}+W\psi k^2+W^2B^2h^{-1})}{B^2 W^2\rho^2 h^{1-d} +(\rho^2 h^{2-d}+W\psi k^2)^2}\right]_{r_h}\nonumber \\&=&(16\pi G)\left[\frac{Bh^{\frac{1-d}{2}}}{4\pi}\vartheta_{11}(r)\right]_{r_h}. \end{eqnarray} It simply follows that \begin{equation}\label{thetaoverrestivity} \frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{1}{(16\pi G)}\left(\frac{4\pi\rho}{\psi(\phi(r_h)) k^2}\right),\quad \frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=\frac{16\pi G}{T_H}\left(\frac{\psi(\phi(r_h)) k^2}{4\pi\rho}\right). \end{equation} \section{A transformation} Let us consider a transformation, inspired by \cite{Hartnoll:2007ih} for $d=3$, under which the charge density and the magnetic field get interchanged. For generic $d$, the transformation reads as \begin{eqnarray}\label{duality} \rho\rightarrow B~~ W(\phi(r_h)) ~~ h^{\frac{d-3}{2}}(r_h),\quad B \rightarrow \frac{\rho}{W(\phi(r_h))~~ h^{\frac{d-3}{2}}(r_h)}. \end{eqnarray} It is easy to see that under such a transformation the quantities $\rho B$ and $B^2 W^2 h^{-1} +\rho^2 h^{2-d}$, evaluated at the horizon, remain invariant, whereas $\rho/B\rightarrow (B/\rho) W^2 h^{d-3}$. This transformation is interpreted in \cite{Hartnoll:2007ih} as a particle-vortex duality.
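The claimed invariances can be verified in a few lines of sympy (an illustrative sketch of ours, with $d$ kept symbolic; all quantities are understood at the horizon):

```python
import sympy as sp

B, W, h, rho, d = sp.symbols('B W h rho d', positive=True)
# the map eq.(duality): rho and B get interchanged (dressed by W h^{(d-3)/2})
dual = {rho: B*W*h**((d - 3)/2), B: rho/(W*h**((d - 3)/2))}

inv1 = rho*B
inv2 = B**2*W**2/h + rho**2*h**(2 - d)
ratio = rho/B

chk1 = sp.simplify(inv1.subs(dual, simultaneous=True) - inv1)
chk2 = sp.simplify(inv2.subs(dual, simultaneous=True) - inv2)
chk3 = sp.simplify(ratio.subs(dual, simultaneous=True) - (B/rho)*W**2*h**(d - 3))
print(chk1, chk2, chk3)   # → 0 0 0
```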
Under such a transformation, eq(\ref{duality}), the transport coefficients transform as \begin{eqnarray} &&\sigma_{11}(r_h)\leftrightarrow \frac{\rho_{11}(r_h)}{(16 \pi G)^2} ~~(W(\phi(r_h)) )^2 ~~ h^{d-3}(r_h), \quad \sigma_{12}(r_h)\leftrightarrow -\frac{\rho_{12}(r_h)}{(16 \pi G)^2} ~~(W(\phi(r_h)) )^2 ~~ h^{d-3}(r_h),\nonumber \\ &&\alpha_{11}(r_h)\leftrightarrow -\frac{\vartheta_{12}(r_h)}{(16 \pi G)} ~~W(\phi(r_h)) ~~ h^{\frac{d-3}{2}}(r_h), \quad \alpha_{12}(r_h)\leftrightarrow -\frac{\vartheta_{11}(r_h)}{(16 \pi G)} ~~W(\phi(r_h)) ~~ h^{\frac{d-3}{2}}(r_h),\nonumber \\ &&{\overline\kappa}_{11}(r_h)\leftrightarrow \kappa_{11}(r_h), \quad {\overline\kappa}_{12}(r_h)\leftrightarrow -\kappa_{12}(r_h). \end{eqnarray} If we impose either the condition $\sigma_{11}(r_h)=\frac{\rho_{11}(r_h)}{(16 \pi G)^2}(W(\phi(r_h)) )^2 ~ h^{d-3}(r_h)$ or $\sigma_{12}(r_h)= -\frac{\rho_{12}(r_h)}{(16 \pi G)^2} ~(W(\phi(r_h)) )^2 ~ h^{d-3}(r_h)$, then we obtain an interesting relation among the electrical conductivities \begin{equation}\label{circle} \sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)=(W(\phi(r_h)) )^2 ~ h^{d-3}(r_h). \end{equation} This is the equation of a circle, and the relation holds for non-zero values of the electrical conductivities. More importantly, it relates the sum of squares of the electrical conductivities to purely geometrical quantities. If we demand that the longitudinal as well as the transverse electrical conductivity remain positive, which happens for positive values of $\psi$, then eq(\ref{circle}) describes a semi-circle. The condition $\sigma_{11}(r_h)=\frac{\rho_{11}(r_h)}{(16 \pi G)^2}(W(\phi(r_h)) )^2 ~ h^{d-3}(r_h)$ or \\ $\sigma_{12}(r_h) = -\frac{\rho_{12}(r_h)}{(16 \pi G)^2} ~(W(\phi(r_h)) )^2 ~ h^{d-3}(r_h)$ essentially relates the charge density, the magnetic field and the geometric quantities as $\rho=B~~ W(\phi(r_h)) ~~ h^{\frac{d-3}{2}}(r_h) $.
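As a consistency check of eq(\ref{circle}), one may substitute the self-dual value $\rho=B\,W\,h^{(d-3)/2}$ directly into the expressions of eq(\ref{transport_b}); the following sympy sketch (ours; spot-check at $d=5$ with $16\pi G=1$) confirms that the sum of squares collapses to $W^2h^{d-3}$:

```python
import sympy as sp

B, W, psi, k, h = sp.symbols('B W psi k h', positive=True)
d = 5
rho = B*W*h**sp.Rational(d - 3, 2)     # the self-dual point

# building blocks of eq.(transport_b), all at the horizon
Y = rho**2*h**(2 - d) + 2*W*psi*k**2 + W**2*B**2/h
X = rho**2*h**(2 - d) + W*psi*k**2 + W**2*B**2/h
Den = psi**2*k**4 + B**2/h*Y

sigma11 = psi*k**2*h**sp.Rational(d - 3, 2)*X/Den
sigma12 = rho*B/h*Y/Den
circle = sp.simplify(sigma11**2 + sigma12**2 - W**2*h**(d - 3))
print(circle)   # → 0
```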
\paragraph{The special point:} At $\rho=B~~ W(\phi(r_h)) ~~ h^{\frac{d-3}{2}}(r_h) $, we can show that, as long as $W(\phi(r_h))$ and $\psi(\phi(r_h))$ are positive, the following relations hold \begin{eqnarray} &&\sigma^2_{11}(r_h)< \sigma^2_{12}(r_h)\quad {\rm for}\quad h(r_h)\psi(\phi(r_h)) k^2 < \sqrt{2} W(\phi(r_h)) B^2,\nonumber \\ &&\alpha^2_{11}(r_h) < \alpha^2_{12}(r_h)\quad {\rm generically},\nonumber \\ &&\kappa^2_{11}(r_h) > \kappa^2_{12}(r_h)\quad {\rm generically}. \end{eqnarray} We can also show \begin{equation} \frac{\alpha^2_{11}(r_h)-\alpha^2_{12}(r_h)}{\alpha^2_{11}(r_h)\alpha_{12}(r_h)}=-(16\pi G)\left(2\frac{\sigma_{12}(r_h)}{\sigma_{11}(r_h)}\right),\quad \kappa_{12}(r_h)=-T_H\frac{\alpha_{11}(r_h)\alpha_{12}(r_h)}{\sigma_{11}(r_h)}. \end{equation} \subsection{Universal relations} A closer inspection of the transport coefficients written in eq(\ref{transport_b}) reveals an interesting universal relation among the different components of the transport quantities. It follows that the ratios of specific transport coefficients are \begin{equation}\label{universal_ratio} \frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=\left(\frac{4\pi B}{\psi k^2}\right)_{r_h},\quad T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}=\left(\frac{\psi k^2}{4\pi B}\right)_{r_h}. \end{equation} Upon combining these relations, we get \begin{equation}\label{universal_relation} T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=1. \end{equation} It is universal in the sense that it does not depend on the number of spacetime dimensions ($d\geq 3$). However, for these relations to hold we need a non-zero magnetic field. The transport coefficients for the Einstein-DBI-dilaton-axion system for $d=3$ have been calculated in \cite{Pal:2019bfw} and \cite{Pal:2020gsq}. The relation among the transport coefficients, i.e., eq(\ref{universal_relation}), is respected for the DBI case too.
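The ratios in eq(\ref{universal_ratio}), and hence eq(\ref{universal_relation}), follow from eq(\ref{transport_b}) because the common denominator and the factor $h^{\frac{d-3}{2}}(\rho^2h^{2-d}+W\psi k^2+W^2B^2h^{-1})$ cancel between numerator and denominator. A short sympy sketch (ours, with $d$ symbolic) makes this cancellation explicit:

```python
import sympy as sp

B, W, psi, k, rho, h, d, TH, G = sp.symbols('B W psi k rho h d T_H G', positive=True)

# common pieces of eq.(transport_b)
X = rho**2*h**(2 - d) + W*psi*k**2 + W**2*B**2/h
Den = psi**2*k**4 + B**2/h*(rho**2*h**(2 - d) + 2*W*psi*k**2 + W**2*B**2/h)
pre = 1/(16*sp.pi*G)

sigma11 = pre*psi*k**2*h**((d - 3)/2)*X/Den
alpha11 = pre*4*sp.pi*rho*psi*k**2/Den
alpha12 = pre*4*sp.pi*B*h**((d - 3)/2)*X/Den
kbar12 = pre*16*sp.pi**2*TH*rho*B/Den

r1 = sp.simplify(alpha12/sigma11)     # → 4*pi*B/(k**2*psi)
r2 = sp.simplify(TH*alpha11/kbar12)   # → k**2*psi/(4*pi*B)
product = sp.simplify(r1*r2)
print(r1, r2, product)                # product → 1
```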
This essentially suggests the conjecture that the relation eq(\ref{universal_relation}) does not depend on the nature of the matter fields. However, this claim needs to be checked by looking at other systems. The universal relation as written down in eq(\ref{universal_relation}) can be re-written as \begin{eqnarray}\label{int_relation} &&T_H\left[ \sigma_{11}(r_h)\sigma_{12}(r_h)(\alpha^2_{11}(r_h)-\alpha^2_{12}(r_h))-\alpha_{11}(r_h)\alpha_{12}(r_h)(\sigma^2_{11}(r_h)-\sigma^2_{12}(r_h))\right]\nonumber \\&=&\sigma_{11}(r_h)\kappa_{12}(r_h)(\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)). \end{eqnarray} It essentially says that the transport coefficient $\kappa_{12}(r_h)$ is not an independent quantity; rather, it is completely determined by the electrical and thermo-electric coefficients. Moreover, the relation eq(\ref{universal_relation}) or eq(\ref{int_relation}) does not depend explicitly on either the nature of the geometry or the form of the couplings, even though each transport coefficient does. The quantity $\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}$ transforms under the transformation eq(\ref{duality}) as \begin{equation} \frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}\rightarrow -\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=\frac{1}{T_H}, \end{equation} where in the last equality we have used the values of the transport coefficients evaluated in eq(\ref{transport_kappa}), eq(\ref{nernst_coefficient}) and eq(\ref{resistivity}). Hence, it follows that the relation eq(\ref{universal_relation}) predicts another relation under eq(\ref{duality}) \begin{equation} -T_H\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=1. \end{equation} This is in agreement with the second equation of eq(\ref{thetaoverrestivity}).
There also exist interesting relations that follow from eq(\ref{thetaoverrestivity}) and eq(\ref{universal_ratio}), which read as \begin{eqnarray} &&\frac{\sigma_{11}(r_h)}{\alpha_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{1}{16\pi G}\left(\frac{\rho}{B}\right)=T_H\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)},\nonumber \\ &&T_H\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=16\pi G\left(\frac{B}{\rho}\right)=\frac{\overline\kappa_{12}(r_h)}{\alpha_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}. \end{eqnarray} The electric charge is defined as \begin{equation} Q=\int d^{d-1}x J^0, \quad{\rm where}\quad J^0=-\frac{W(\phi)}{16\pi G}\sqrt{-g}F^{r0}=\frac{\rho}{16\pi G}, \end{equation} where we have used the solution for the gauge potential. Denoting $\int d^{d-1}x\equiv V_{d-1} $, we get $Q=\frac{V_{d-1}\rho}{16\pi G}$. This results in \begin{eqnarray} &&\frac{\sigma_{11}(r_h)}{\alpha_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{1}{V_{d-1}}\left(\frac{Q}{B}\right)=T_H\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)},\nonumber \\ &&T_H\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=V_{d-1}\left(\frac{B}{Q}\right)=\frac{\overline\kappa_{12}(r_h)}{\alpha_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}.
\end{eqnarray} \section{Special case: At UV} \paragraph{An exact solution: } The differential equation obeyed by the metric components eq(\ref{eom}) and the dilaton equation eq(\ref{scalar_eom}) can be solved and the solution reads as \begin{eqnarray}\label{ads_spacetime} ds^2_{d+1}&=&\frac{r^2}{L^2}\left[-f(r)dt^2+ dx^2_1+\cdots+dx^2_{d-1}\right]+\frac{L^2}{r^2f(r)}dr^2,\quad \phi(r)={\rm constant},\nonumber \\ W&=&w_0,\quad \psi=\psi_0,\quad V=0,\quad A'_t(r)=\frac{L^{d-1}\rho}{w_0 r^{d-1}},\nonumber \\ f(r)&=&-\frac{2\Lambda L^2}{d(d-1)}-\frac{B^2L^6w_0}{4(d-4)r^4}-\frac{k^2L^4\psi_0}{2(d-2)r^2}+\frac{c_1}{r^d}+\frac{\rho^2 L^{2d}}{2w_0(d-1)(d-2)r^{2(d-1)}}, \end{eqnarray} where the constant, $c_1$, is related to the mass of the black hole. Upon setting the cosmological constant as $2\Lambda=-\frac{d(d-1)}{L^2}$, the temperature of the black hole is related to the size of the horizon as \begin{equation}\label{temp_b_charge_density_dissipation} T_H=\frac{dr_h}{4\pi L^2}\left[1-\frac{B^2 L^6w_0}{4dr^4_h}-\frac{k^2L^4\psi_0}{2dr^2_h}-\frac{ \rho^2L^{2d}}{2w_0d(d-1)r^{2(d-1)}_h} \right]. 
\end{equation} \paragraph{Thermodynamic quantities:} Various thermodynamic quantities like the entropy ($S$), charge ($Q$), energy ($E$), free energy ($F$), magnetization ($m$) and magnetic susceptibility ($\chi_B$) are as follows: \begin{eqnarray}\label{td_uv} S&=&\frac{V_{d-1}}{4G}\frac{r^{d-1}_h}{L^{d-1}},\quad Q=\frac{V_{d-1}}{16\pi G}\rho,\nonumber \\ E&=&\frac{V_{d-1}}{16\pi G}\left[\frac{(d-1)r^d_h}{L^{d+1}} -\frac{(d-1)B^2w_0}{4(d-4)}\frac{r^{d-4}_h}{L^{d-5}}-\frac{(d-1)k^2\psi_0}{2(d-2)}\frac{r^{d-2}_h}{L^{d-3}}+\frac{\rho^2}{2(d-2)w_0}\frac{L^{d-1}}{r^{d-2}_h}\right],\nonumber \\ F&=&-\frac{V_{d-1}}{16\pi G}\left[\frac{r^d_h}{L^{d+1}} +\frac{3B^2w_0}{4(d-4)}\frac{r^{d-4}_h}{L^{d-5}}+\frac{k^2\psi_0}{2(d-2)}\frac{r^{d-2}_h}{L^{d-3}}+\frac{\rho^2}{2(d-1)(d-2)w_0}\frac{L^{d-1}}{r^{d-2}_h}\right],\nonumber \\ m&=&-\left(\frac{\partial F}{\partial B}\right)_{\beta} =\frac{(d-1)V_{d-1}}{32\pi G(d-4)} B w_0\frac{r^{d-4}_h}{L^{d-5}},\quad \chi_B(B=0)= \frac{(d-1)V_{d-1}}{32\pi G(d-4)} w_0 \left(\frac{r^{d-4}_h}{L^{d-5}}\right)_{B=0}. \end{eqnarray} The energy is calculated by assuming that it obeys the first law of thermodynamics. The thermodynamic quantities are calculated in Appendix A by using the counter-term method for $d=3,~5$ and $7$. The results match those given in eq(\ref{td_uv}).
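The quoted temperature can be re-derived from the horizon condition $f(r_h)=0$ together with $T_H=r_h^2f'(r_h)/(4\pi L^2)$ for the metric of eq(\ref{ads_spacetime}). The sympy sketch below (ours; spot-check at $d=5$, with $2\Lambda=-d(d-1)/L^2$) confirms eq(\ref{temp_b_charge_density_dissipation}):

```python
import sympy as sp

r, rh, L, B, k, rho, w0, psi0 = sp.symbols('r r_h L B k rho w_0 psi_0', positive=True)
c1 = sp.Symbol('c_1')
d = 5

# blackening factor of eq.(ads_spacetime) with 2*Lambda = -d(d-1)/L**2
f = (1 - B**2*L**6*w0/(4*(d - 4)*r**4) - k**2*L**4*psi0/(2*(d - 2)*r**2)
     + c1/r**d + rho**2*L**(2*d)/(2*w0*(d - 1)*(d - 2)*r**(2*(d - 1))))
f = f.subs(c1, sp.solve(f.subs(r, rh), c1)[0])        # impose f(r_h) = 0

TH = rh**2/L**2*sp.diff(f, r).subs(r, rh)/(4*sp.pi)   # T_H = r_h^2 f'(r_h)/(4 pi L^2)
TH_quoted = d*rh/(4*sp.pi*L**2)*(1 - B**2*L**6*w0/(4*d*rh**4)
            - k**2*L**4*psi0/(2*d*rh**2) - rho**2*L**(2*d)/(2*w0*d*(d - 1)*rh**(2*(d - 1))))
chk_temp = sp.simplify(TH - TH_quoted)
print(chk_temp)   # → 0
```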
\paragraph{Transport coefficients with vanishing magnetic field:} Without the magnetic field, the longitudinal transport coefficients take the following form with the choice $16\pi G=1$: \begin{eqnarray}\label{transport_zero_B} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)= h^{\frac{d-3}{2}}(r_h)\left[W+\frac{\rho^2 h^{2-d}}{\psi k^2}\right]_{r_h},\nonumber \\ \alpha_{11}(r_h)&=&\alpha_{22}(r_h)=\left[\frac{4\pi \rho}{\psi k^2}\right]_{r_h},\nonumber \\ \kappa_{11}(r_h)&=&\kappa_{22}(r_h)=16\pi^2 T_H\left[\frac{ h^{\frac{d-1}{2}}}{\psi k^2(1+\frac{\rho^2 h^{2-d}}{W\psi k^2})}\right]_{r_h},\nonumber \\ \vartheta_{11}(r_h)&=&-\left[\frac{4\pi\rho}{W\psi k^2}\left(\frac{h^{\frac{3-d}{2}}}{1+\frac{\rho^2 h^{2-d}}{W\psi k^2}}\right)\right]_{r_h}. \end{eqnarray} It follows that in the absence of the magnetic field, the thermo-electric conductivity is a constant at the UV. Moreover, the electrical conductivity for an asymptotically AdS spacetime was computed in \cite{Andrade:2013gsa} without the scalar field $\phi$. Let us compare our results with theirs by setting $h(r)=r^2/L^2$. Without the scalar field $\phi$, the gauge field can be integrated to give $A'_t=\frac{\rho}{W h^{\frac{d-1}{2}}}$. In the present case, the chemical potential, $\mu$, is related to the charge density as \begin{equation}\label{charge_density_chemical_potential} \rho L^{d-1}=w_0\mu(d-2)r^{d-2}_h. \end{equation} This means that, for a fixed charge density, the chemical potential changes with the horizon size as a power law. It essentially gives $\rho^2 (h(r_h))^{2-d}L^2=(d-2)^2\mu^2w^2_0$. Hence, the longitudinal electrical conductivity becomes \begin{equation}\label{andrade_cond} \sigma_L\equiv \sigma_{11}(r_h)=\sigma_{22}(r_h)=w_0\left(\frac{r_h}{L}\right)^{d-3}\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]. \end{equation} This precisely matches the result reported in \cite{Andrade:2013gsa} for $w_0=1,~\psi_0=1$.
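The reduction leading to eq(\ref{andrade_cond}) can be reproduced symbolically. The sketch below (ours; spot-check at $d=5$ with $16\pi G=1$) substitutes $h(r_h)=r_h^2/L^2$ and eq(\ref{charge_density_chemical_potential}) into $\sigma_{11}$ of eq(\ref{transport_zero_B}):

```python
import sympy as sp

rh, L, mu, k, w0, psi0 = sp.symbols('r_h L mu k w_0 psi_0', positive=True)
d = 5

h = rh**2/L**2                                  # h(r_h) = r_h^2/L^2
rho = w0*mu*(d - 2)*rh**(d - 2)/L**(d - 1)      # eq.(charge_density_chemical_potential)

sigma11 = h**sp.Rational(d - 3, 2)*(w0 + rho**2*h**(2 - d)/(psi0*k**2))
quoted = w0*(rh/L)**(d - 3)*(1 + (d - 2)**2*w0*mu**2/(psi0*k**2*L**2))
chk_andrade = sp.simplify(sigma11 - quoted)
print(chk_andrade)   # → 0
```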
It also follows that for $d=3$ the longitudinal electrical conductivity is completely determined by the chemical potential and the momentum dissipation. Recall that, from the solution of the axion, $\chi$, the quantity $k$ can be interpreted as the source term. Hence, the longitudinal electrical conductivity for $d=3$ is fully determined by the source term of the gauge potential and the axion. The thermo-electric and the thermal conductivities are \begin{eqnarray} \alpha_L&\equiv&\alpha_{11}(r_h)=\alpha_{22}(r_h)=\frac{4\pi \mu(d-2)w_0}{\psi_0k^2L}\left(\frac{r_h}{L}\right)^{d-2},\nonumber \\ \kappa_L&\equiv& \kappa_{11}(r_h)=\kappa_{22}(r_h)=\left[\frac{16\pi^2 T_H w_0r^{d-1}_h}{L^{d-3}((d-2)^2\mu^2w^2_0+ w_0\psi_0k^2L^2)}\right]\nonumber \\ &=&\frac{16\pi^2 T_H }{\psi_0k^2}\left(\frac{r_h}{L}\right)^{d-1}\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]^{-1},\nonumber \\ \vartheta_L&=&\vartheta_{11}(r_h)=-\frac{4\pi\rho}{w_0\psi k^2}\left(\frac{r_h}{L}\right)^{3-d}\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]^{-1}. \end{eqnarray} The temperature has the following dependence on the chemical potential and the size of the horizon \begin{equation} T_H=\frac{dr_h}{4\pi L^2}\left[1-\frac{\psi_0k^2L^4}{2dr^2_h}-\frac{\mu^2(d-2)^2L^{2}w_0}{2d(d-1)r^{2}_h} \right]. \end{equation} The size of the horizon, $r_h$, can be solved for in terms of the temperature, $T_H$, the chemical potential, $\mu$, and the dissipation parameter, $k$, which then allows us to express the electrical conductivity of eq(\ref{andrade_cond}) completely in terms of these quantities. On solving, \begin{equation} \frac{r_h}{L}=\mu L\left[\frac{2\pi}{d} \left(\frac{T_H}{\mu}\right)+\sqrt{\frac{(d-2)^2}{2d(d-1)}\frac{w_0}{ L^2}+\frac{4\pi^2}{d^2}\left (\frac{T_H}{\mu}\right)^2+\frac{\psi_0}{2d}\left(\frac{k}{\mu}\right)^2}\right], \end{equation} where we have taken the larger root for the size of the horizon.
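The quoted root can be checked numerically (a sketch with arbitrary parameter values for $d=3$): computing $T_H$ at some horizon size from the temperature formula above and feeding it back into the larger root recovers $r_h/L$:

```python
import math

# Arbitrary parameter values for the check.
d, L = 3.0, 1.0
w0, psi0, k, mu, rh = 1.1, 0.6, 0.8, 0.9, 1.5

# Hawking temperature at this horizon size (B = 0)
TH = d * rh / (4 * math.pi * L**2) * (
    1 - psi0 * k**2 * L**4 / (2 * d * rh**2)
    - mu**2 * (d - 2)**2 * L**2 * w0 / (2 * d * (d - 1) * rh**2))

# larger root of the quadratic relation between r_h and T_H
x = mu * L * (2 * math.pi / d * (TH / mu)
              + math.sqrt((d - 2)**2 * w0 / (2 * d * (d - 1) * L**2)
                          + 4 * math.pi**2 / d**2 * (TH / mu)**2
                          + psi0 / (2 * d) * (k / mu)**2))
assert abs(x - rh / L) < 1e-9
```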
This gives the transport coefficients as \begin{eqnarray} \sigma_L&=&(\mu L)^{d-3}w_0 ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-3 }\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right],\nonumber \\ \alpha_L&=&(\mu L)^{d-1} ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-2} \left[\frac{4\pi(d-2)w_0}{\psi_0k^2 L^2}\right],\nonumber \\ \kappa_L&=&(\mu L)^{d-1} ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{d-1}\left(\frac{16\pi^2 T_H }{\psi_0k^2}\right)\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]^{-1},\nonumber \\ \vartheta_L&=&-\frac{4\pi\rho}{w_0\psi k^2}(\mu L)^{3-d} ~\left(g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)\right)^{3-d}\left[1+\frac{(d-2)^2w_0\mu^2}{\psi_0k^2L^2}\right]^{-1} \end{eqnarray} where the function \begin{equation}\label{function_g} g\bigg( \frac{T_H}{\mu}, \frac{k}{\mu} \bigg)=\left[\frac{2\pi}{d} \left(\frac{T_H}{\mu}\right)+\sqrt{\frac{(d-2)^2}{2d(d-1)} \frac{w_0}{L^2}+\frac{4\pi^2}{d^2}\left (\frac{T_H}{\mu}\right)^2+\frac{\psi_0}{2d}\left(\frac{k}{\mu}\right)^2}\right]. \end{equation} For $d=3$, something special happens: it is possible to separate the incoherent particle-hole pairs from the momentum dissipation due to the lattice, but only for the longitudinal electrical conductivity. \paragraph{A holographic relation:} There exists a relation among the transport coefficients, which follows from eq(\ref{transport_zero_B}): \begin{equation} \frac{\mu^2}{k^2}\frac{\kappa_L\sigma_L}{T_H \alpha^2_L}=\frac{W(\phi(r_h))\psi(\phi(r_h)) h^{d-2}(r_h)}{\left(\int^{\infty}_{r_h}\frac{\sqrt{U_1(r)}\,dr}{\sqrt{U_2(r)}W(\phi(r))h^{\frac{d-1}{2}}(r)}\right)^2} \end{equation} For the AdS spacetime written in eq(\ref{ads_spacetime}), this gives \begin{equation} \frac{\mu^2}{k^2}\frac{\kappa_L\sigma_L}{T_H \alpha^2_L}=\frac{\psi_0 L^2}{(d-2)^2 w_0}={\rm constant}.
\end{equation} \paragraph{With Magnetic field: } Let us calculate the transport coefficients in the presence of a magnetic field for an asymptotically AdS spacetime with $h(r)=r^2/L^2$. The charge density and the chemical potential are related as given in eq(\ref{charge_density_chemical_potential}). The transport coefficients read, with the choice $16\pi G=1$, as \begin{eqnarray}\label{ads_transport} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)= \left[\frac{k^2w_0\psi_0L^2\left((d-2)^2w_0\mu^2+ \psi_0k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right)}{ \psi^2_0k^4L^4+\frac{B^2L^4w_0}{r^{2}_h}\left((d-2)^2w_0\mu^2+2\psi_0 k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right)}\right]\left(\frac{r_h}{L}\right)^{d-3}\nonumber \\ \sigma_{12}(r_h)&=&-\sigma_{21}(r_h)=\left[\frac{(d-2)\mu BLw^2_0\left((d-2)^2w_0\mu^2+ 2\psi_0k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right)}{ \psi^2_0k^4L^4+\frac{B^2L^4w_0}{r^{2}_h}\left((d-2)^2w_0\mu^2+2\psi_0 k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right)}\right]\left(\frac{r_h}{L}\right)^{d-4}\nonumber \\ \alpha_{11}(r_h)&=&\alpha_{22}(r_h)={\overline\alpha}_{11}(r_h)={\overline\alpha}_{22}(r_h)\nonumber \\&=& \left[\frac{4\pi \mu (d-2) w_0\psi_0k^2L^3}{\psi^2_0k^4L^4+\frac{B^2L^4w_0}{r^{2}_h}\left((d-2)^2w_0\mu^2+2\psi_0 k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right) }\right]\left(\frac{r_h}{L}\right)^{d-2}\nonumber \\ \alpha_{12}(r_h)&=&-\alpha_{21}(r_h)={\overline\alpha}_{12}(r_h)=-{\overline\alpha}_{21}(r_h)\nonumber \\&=& 4\pi B w_0L^2\left[\frac{\frac{B^2L^4w_0}{r^{2}_h}+(d-2)^2w_0\mu^2+\psi_0 k^2L^2}{\psi^2_0k^4L^4+\frac{B^2L^4w_0}{r^{2}_h}\left((d-2)^2w_0\mu^2+2\psi_0 k^2L^2+\frac{B^2L^4w_0}{r^{2}_h}\right) }\right]\left(\frac{r_h}{L}\right)^{d-3},\nonumber \\ \kappa_{11}(r_h)&=&\kappa_{22}(r_h)\nonumber \\&=&16\pi^2 T_H L^2\left[\frac{((d-2)^2w_0\mu^2+\psi_0 k^2L^2)}{\frac{B^2}{r^{2}_h} (d-2)^2w^2_0L^4\mu^2 +((d-2)^2w_0\mu^2+ \psi_0k^2L^2)^2}\right]\left(\frac{r_h}{L}\right)^{d-1}\nonumber \\ \kappa_{12}(r_h)&=&-\kappa_{21}(r_h)\nonumber \\&=&-\left[ \frac{16\pi^2 B T_H\mu(d-2)w_0
L^3}{\frac{B^2}{r^{2}_h} (d-2)^2w^2_0L^4\mu^2 +((d-2)^2w_0\mu^2+ \psi_0k^2L^2)^2}\right]\left(\frac{r_h}{L}\right)^{d-2}. \end{eqnarray} The temperature in the presence of the magnetic field reads as \begin{equation} T_H=\frac{dr_h}{4\pi L^2}\left[1-\frac{B^2w_0L^6 }{4dr^4_h}-\frac{k^2\psi_0 L^4}{2dr^2_h}-\frac{\mu^2(d-2)^2L^2w_0}{2d(d-1)r^{2}_h} \right]. \end{equation} In principle, we can solve for the size of the horizon as a function of the temperature, chemical potential, magnetic field and the dissipation parameter. However, even for the case $d=3$, the expression for the size of the horizon becomes too messy. So, instead of finding the precise dependence of the transport coefficients on these parameters, we shall plot these coefficients. Introducing dimensionless variables $(b,~{\tilde\rho},~x_h,~{\tilde k},~t_H,~{\tilde \mu})$ as \begin{equation} b\equiv BL\sqrt{w_0},\quad {\tilde\rho}\equiv\frac{\rho L}{\sqrt{w_0}},\quad x_h\equiv\frac{r_h}{L},\quad {\tilde k}\equiv k L,\quad {\tilde \mu}=\mu \sqrt{w_0} \end{equation} leads to \begin{eqnarray} t_H&\equiv& T_H L=\frac{dx_h}{4\pi }\left[1-\frac{b^2 }{4dx^4_h}-\frac{{\tilde k}^2\psi_0 }{2dx^2_h}-\frac{{\tilde\rho}^2}{2d(d-1)x^{2(d-1)}_h} \right]\nonumber \\ {\widetilde\sigma}_{11}&\equiv&\frac{\sigma_{11}(x_h)}{w_0}=\frac{\sigma_{22}(x_h)}{w_0}= \left[\frac{{\tilde k}^2\psi_0\left((d-2)^2{\tilde \mu}^2+ \psi_0{\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}{ \psi^2_0{\tilde k}^4+\frac{b^2}{x^2_h}\left((d-2)^2{\tilde \mu}^2+2\psi_0 {\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}\right]x^{d-3}_h\nonumber \\ {\widetilde\sigma}_{12}&\equiv&\frac{\sigma_{12}(x_h)}{w_0}=-\frac{\sigma_{21}(x_h)}{w_0}=\left[\frac{(d-2){\tilde \mu} b\left((d-2)^2{\tilde \mu}^2+2\psi_0 {\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}{ \psi^2_0{\tilde k}^4+\frac{b^2}{x^2_h}\left((d-2)^2{\tilde \mu}^2+2\psi_0 {\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}\right]x^{d-4}_h\nonumber \\
{\widetilde\alpha_{11}}&\equiv&\frac{\alpha_{11}(x_h)}{\sqrt{w_0}L}=\frac{\alpha_{22}(x_h)}{\sqrt{w_0}L}=\frac{{\overline\alpha}_{11}(x_h)}{\sqrt{w_0}L}=\frac{{\overline\alpha}_{22}(x_h)}{\sqrt{w_0}L}\nonumber \\&=& \left[\frac{4\pi {\tilde \mu} (d-2) \psi_0{\tilde k}^2}{ \psi^2_0{\tilde k}^4+\frac{b^2}{x^2_h}\left((d-2)^2{\tilde \mu}^2+2\psi_0 {\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}\right]x^{d-2}_h\nonumber \\ {\widetilde\alpha_{12}}&\equiv&\frac{\alpha_{12}(r_h)}{\sqrt{w_0}L}=-\frac{\alpha_{21}(r_h)}{\sqrt{w_0}L}=\frac{{\overline\alpha}_{12}(r_h)}{\sqrt{w_0}L}=-\frac{{\overline\alpha}_{21}(r_h)}{\sqrt{w_0}L}\nonumber \\&=& 4\pi b \left[\frac{\frac{b^2}{x^{2}_h}+(d-2)^2{\tilde \mu}^2+\psi_0 {\tilde k}^2}{ \psi^2_0{\tilde k}^4+\frac{b^2}{x^2_h}\left((d-2)^2{\tilde \mu}^2+2\psi_0 {\tilde k}^2+\frac{b^2}{x^{2}_h}\right)}\right]x^{d-3}_h,\nonumber \\ {\widetilde\kappa}_{11}&\equiv&\frac{\kappa_{11}(r_h)}{L}=\frac{\kappa_{22}(r_h)}{L}=16\pi^2 t_H \left[\frac{((d-2)^2{\tilde \mu}^2+\psi_0 {\tilde k}^2)}{\frac{b^2}{x^{2}_h} (d-2)^2{\tilde \mu}^2 +((d-2)^2{\tilde \mu}^2+ \psi_0{\tilde k}^2)^2}\right]x^{d-1}_h\nonumber \\ {\widetilde\kappa}_{12}&\equiv&\frac{\kappa_{12}(r_h)}{L}=-\frac{\kappa_{21}(r_h)}{L}=-16\pi^2t_H\left[ \frac{ b {\tilde \mu}(d-2)}{\frac{b^2}{x^{2}_h} (d-2)^2{\tilde \mu}^2 +((d-2)^2{\tilde \mu}^2+ \psi_0{\tilde k}^2)^2}\right]x^{d-2}_h. \end{eqnarray} It follows from the expression for the electrical transport coefficients that, for positive values of $\psi_0$, the insulating behavior of AdS spacetime is ruled out. We have plotted the transport coefficients versus the temperature, $t_H$, by setting $\psi_0$ to be negative in fig(\ref{fig1}) and fig(\ref{fig2}). Once $\psi_0$ is set to be negative, it follows that the transport coefficients can become negative as well; the interpretation of this is not clear to the author.
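As a limiting-case check (a sketch with arbitrary parameter values for $d=3$), the dimensionless longitudinal conductivity above reduces smoothly to its $B=0$ form, ${\widetilde\sigma}_{11}\to\big(1+(d-2)^2{\tilde\mu}^2/(\psi_0{\tilde k}^2)\big)x^{d-3}_h$, as $b\to 0$:

```python
# Arbitrary parameter values; b is taken small to probe the B -> 0 limit.
d, x = 3.0, 1.2
psi0, kt, mut, b = 0.8, 0.9, 0.7, 1e-4

num = kt**2 * psi0 * ((d - 2)**2 * mut**2 + psi0 * kt**2 + b**2 / x**2)
den = psi0**2 * kt**4 + b**2 / x**2 * (
    (d - 2)**2 * mut**2 + 2 * psi0 * kt**2 + b**2 / x**2)
sigma11 = num / den * x**(d - 3)

# B = 0 expression from the zero-field result
sigma_b0 = (1 + (d - 2)**2 * mut**2 / (psi0 * kt**2)) * x**(d - 3)
assert abs(sigma11 - sigma_b0) < 1e-6
```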
For completeness, we have also plotted the transport coefficients versus the temperature, $t_H$, by setting $\psi_0$ to be positive in fig(\ref{fig3}) and fig(\ref{fig4}). \begin{figure} \centering \subfigure[]{\includegraphics[width=0.4\textwidth]{long_electrical_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{trans_electrical_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{long_thermo_electrical_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{trans_thermo_electrical_cond_vs_temp.png}} \caption{(a) Longitudinal electrical conductivity (b) transverse electrical conductivity (c) longitudinal thermo-electrical conductivity (d) transverse thermo-electrical conductivity, plotted versus the temperature, $t_H$, by fixing ${\tilde k}=1,~{\tilde\rho}=2,~\psi_0=-8,~d=3$ for different values of the magnetic field, $b$.} \label{fig1} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.4\textwidth]{long_thermal_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{trans_thermal_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{hall_angle_cond_vs_temp.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{hall_lorentz_ratio_cond_vs_temp.png}} \caption{(a) Longitudinal thermal conductivity (b) transverse thermal conductivity (c) Hall angle (d) Hall Lorentz ratio, plotted versus the temperature, $t_H$, by fixing ${\tilde k}=1,~{\tilde\rho}=2,~\psi_0=-8,~d=3$ for different values of the magnetic field, $b$.} \label{fig2} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.4\textwidth]{long_electric_cond_vs_temp_positive_psi0.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{trans_electric_cond_vs_temp_positive_psi0.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{long_thermo_electric_cond_vs_temp_positive_psi0.png}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{trans_thermoelectric_cond_vs_temp_positive_psi0.png}} \caption{(a) Longitudinal electrical conductivity (b) transverse electrical conductivity (c) longitudinal thermo-electrical conductivity (d) transverse thermo-electrical conductivity are plotted versus temperature, $t_H$, by fixing ${\tilde k}=1,~{\tilde\rho}=2,~\psi_0=1,~d=3$ for different values of magnetic field, $b$.} \label{fig3} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.4\textwidth]{long_thermal_cond_vs_temp_positive_psi0.png}} \subfigure[]{\includegraphics[width=0.4\textwidth]{trans_thermal_cond_vs_temp_positive_psi0.png}} \caption{(a) Longitudinal thermal conductivity (b) transverse thermal conductivity are plotted versus temperature, $t_H$, by fixing ${\tilde k}=1,~{\tilde\rho}=2,~\psi_0=1,~d=3$ for different values of magnetic field, $b$.} \label{fig4} \end{figure} \subsection{At IR} \paragraph{An exact solution without magnetic field:} An exact real valued solution at IR follows in the absence of magnetic field upon choosing the potential $V$, the couplings, geometry, gauge potential and the dilaton as \begin{eqnarray}\label{scale_violating} V(\phi)&=&v_0,\quad \psi(\phi)=\frac{8r^2_h}{(d-2)k^2}\equiv \frac{\psi_0r^2_h}{k^2},\quad W(\phi)=\frac{w_0}{r^2_h}r^{\frac{2z(d-2)}{d-1}},\nonumber \\\phi(r)&=&\frac{\sqrt{2z}\sqrt{2(d-1)-z(d-2)}}{\sqrt{d-1}}~Log~r,\quad A'_t(r)=-\frac{4r^2_h(2(d-1)-z(d-2))}{(d-2)\rho~~r^3},\nonumber \\ U_1&=&r^{2(\frac{d(z-1)-2z+1}{d-1})}\left(1-\left(\frac{r_h}{r}\right)^2\right),\quad U_2(r)=r^{2(\frac{2(d-1)+z}{d-1})}\left(1-\left(\frac{r_h}{r}\right)^2\right),\nonumber \\ h(r)&=&r^{\frac{-2z}{d-1}},\quad w_0=-\frac{(d-2)\rho^2}{4(2(d-1)-z(d-2))} \end{eqnarray} where $v_0$ is constant and arbitrary. 
The geometry is now described by the Lifshitz dynamical exponent $z$ \cite{Kachru:2008yh} and the hyperscaling violating parameter $\gamma=\frac{2(d+z-1)}{d-1}\equiv \frac{\theta}{d-1}$ \cite{Huijse:2011ef}, and reads as \begin{equation} ds^2 =r^{\frac{-2(d+z-1)}{d-1}}\left[-r^{2z}f(r)dt^2+r^2dx^i dx_i+\frac{dr^2}{r^2f(r)} \right],\quad f(r)=1-\left(\frac{r_h}{r}\right)^2 \end{equation} The scale violating parameter, $\gamma$, for the Einstein-Maxwell-dilaton-axion system is not an independent parameter; rather, it depends on the number of spatial dimensions as well as on the Lifshitz dynamical exponent, $z$, whereas for the Einstein-DBI-dilaton-axion system \cite{Pal:2020gsq} it is a free parameter. In order to have a real valued solution, there is a constraint on the dynamical exponent: $0\leq z \leq \frac{2(d-1)}{d-2}$ for $d\geq 2.$ The Hawking temperature for such a black hole is given by \begin{equation} T_H=\frac{r^z_h}{2\pi}. \end{equation} The chemical potential for the scale-violating solution eq(\ref{scale_violating}) reads as \begin{equation} \mu=- \left[\frac{4r^2_h(2(d-1)-z(d-2))}{(d-2)\rho}\right]\int^{\infty}_{r_h}\frac{dr}{r^3}=-\frac{2[2(d-1)-z(d-2)]}{(d-2)\rho} \end{equation} Hence, the conductivities are \begin{eqnarray} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)=h^{\frac{d-3}{2}}(r_h)\frac{W}{16\pi G}\left[1+\frac{\rho^2 h^{2-d}}{W\psi k^2}\right]_{r_h}=\frac{w_0}{16\pi G}r^{z-2}_h\left[1+\frac{\rho^2}{w_0\psi_0}\right],\nonumber \\&=& (2\pi T_H)^{\frac{z-2}{z}}\frac{w_0}{16\pi G}\left[1+\frac{\rho^2}{w_0\psi_0}\right],\nonumber \\ \alpha_{11}(r_h)&=&\alpha_{22}(r_h)=\frac{1}{16\pi G}\left[\frac{4\pi \rho}{\psi_0 r^2_h }\right]=\frac{ \rho}{4G\psi_0} (2\pi T_H)^{-\frac{2}{z}},\nonumber \\ \kappa_{11}(r_h)&=&\kappa_{22}(r_h)=\frac{1}{16\pi G}\left[\frac{16\pi^2 T_H h^{\frac{d-1}{2}}}{\frac{\rho^2 h^{2-d}}{W}+\psi k^2}\right]_{r_h}=\frac{1}{2Gr^2_h(\frac{\rho^2}{w_0}+\psi_0)}\nonumber \\&=&\frac{1}{2G\psi_0(1+\frac{\rho^2}{w_0\psi_0})} (2\pi
T_H)^{-\frac{2}{z}}. \end{eqnarray} \paragraph{Interesting relations:} Given the exact results for the longitudinal electrical conductivity, the thermo-electric conductivity and the thermal conductivity, we can calculate the ratio of the temperature times the thermal conductivity to the electrical conductivity, which reads as \begin{equation} \frac{T_H \kappa_{11}(r_h)}{ \sigma_{11}(r_h)}=\frac{4}{w_0\psi_0}={\rm constant}. \end{equation} There exists another interesting relation, the ratio of the thermal conductivity to the thermo-electric conductivity, which reads as \begin{equation} \frac{ \kappa_{11}(r_h)}{ \alpha_{11}(r_h)}=\frac{2}{\rho(1+\frac{\rho^2}{w_0\psi_0})}={\rm constant}. \end{equation} The entropy density and the specific heat at constant charge density have the following dependence on the dynamical exponent \begin{equation} s=\frac{r^{-z}_h}{4G}=\frac{1}{8\pi G} T^{-1}_H,\quad C_{\rho}\sim -T^{-1}_H. \end{equation} The negative specific heat suggests that this phase at IR is unstable. The energy of the system obtained via the first law of thermodynamics has a logarithmic dependence on temperature \begin{equation} E=-\frac{V_{d-1}}{8\pi G} \left[Log(2\pi T_H)+ \left(\frac{2(d-1)-z(d-2)}{(d-2)}\right)Log\rho\right]. \end{equation} \subsection{An exact solution with magnetic field} A gravitational solution at finite temperature that breaks the scaling symmetry is obtained in \cite{Huijse:2011ef}. The scaling symmetry is broken with the help of a dilaton field that runs logarithmically. Essentially, it is the backreaction of the dilaton field that breaks the scaling symmetry of the geometry. The gravitational solution at finite temperature and finite charge density is obtained with the help of more than one U(1) gauge field in \cite{Tarrio:2011de, Alishahiha:2012qu}. The magnetic field is included in this model in \cite{Ge:2016sel}.
In this subsection, we shall obtain a gravitational solution at finite temperature and finite chemical potential in the presence of a constant magnetic field. The interesting point about the solution is that it is generated by solving the necessary equations for a constant dilaton field and with a single U(1) gauge potential. More importantly, the geometry obtained breaks the scaling symmetry. We can solve eq(\ref{eom}) to find an exact solution with a magnetic field and it takes the following form \begin{eqnarray} ds^2&=&r^{-2\gamma}\left[-r^{2 z}f(r)dt^2+r^{2(z-\gamma)} dx^2_i+\frac{dr^2}{r^2f(r)}\right],\nonumber \\ \phi(r)&=&\phi_0,\quad V(\phi)=-v_0,\quad \psi(\phi)=\psi_0,\quad W(\phi)=w_0,\nonumber \\ A'_t(r)&=&\frac{\rho}{w_0}r^{-1-(d-2)(z-2\gamma)}, \end{eqnarray} where $\phi_0,~v_0,~\psi_0$ and $w_0$ are constants. The function \begin{eqnarray} f(r)&=&\frac{v_0}{d(d-1)(z-2\gamma)^2}r^{-2\gamma}\left[1-\left(\frac{r_h}{r}\right)^{d(z-2\gamma)}\right]-\frac{\psi_0k^2}{2(d-2)(z-2\gamma)^2}r^{-2(z-\gamma)}\left[1-\left(\frac{r_h}{r}\right)^{(d-2)(z-2\gamma)}\right]\nonumber \\&+&\frac{\rho^2}{2w_0(d-1)(d-2)(z-2\gamma)^2}\frac{r^{-zd+2\gamma(d-1)}}{r^{(d-2)(z-2\gamma)}_h}\left[1-\left(\frac{r_h}{r}\right)^{(d-2)(z-2\gamma)}\right]\nonumber \\&-&\frac{B^2w_0}{4(d-4)(z-2\gamma)^2} r^{-2(2z-3\gamma)}\left[1-\left(\frac{r_h}{r}\right)^{(d-4)(z-2\gamma)}\right]. \end{eqnarray} Note that, in order to have a non-singular solution, we restrict to $z\neq 2\gamma$. The Hawking temperature for such a case is \begin{equation} T_H=\frac{1}{4\pi(z-2\gamma)}\left[\frac{v_0r^{z-2\gamma}_h}{(d-1)}-\frac{B^2w_0}{4}r^{-3(z-2\gamma)}_h-\frac{\rho^2}{2w_0(d-1)}r^{-(2d-3)(z-2\gamma)}_h-\frac{\psi_0k^2}{2r^{z-2\gamma}_h}\right].
\end{equation} The chemical potential is \begin{equation} \mu=\frac{\rho}{w_0(d-2)(z-2\gamma)}r^{-(d-2)(z-2\gamma)}_h \end{equation} The entropy, charge and energy of the system are \begin{eqnarray} S&=&\frac{V_{d-1}}{4G}r^{(d-1)(z-2\gamma)}_h,\quad Q= \frac{V_{d-1}}{16\pi G}\rho\nonumber \\ E&=&\frac{V_{d-1}}{16\pi G(z-2\gamma)}\bigg[\frac{v_0r^{d(z-2\gamma)}_h}{d}-\frac{B^2w_0(d-1)}{4(d-4)}r^{(d-4)(z-2\gamma)}_h\nonumber \\&+&\frac{\rho^2}{2w_0(d-2)}r^{-(d-2)(z-2\gamma)}_h-\frac{\psi_0k^2(d-1)}{2(d-2)}r^{(d-2)(z-2\gamma)}_h\bigg]. \end{eqnarray} It follows that the free energy, $F=E-T_H S -\mu Q$, reads as \begin{eqnarray} F&=&-\frac{V_{d-1}}{16\pi G(z-2\gamma)}\bigg[\frac{v_0r^{d(z-2\gamma)}_h}{d(d-1)}+\frac{3B^2w_0}{4(d-4)}r^{(d-4)(z-2\gamma)}_h\nonumber \\&+&\frac{\rho^2}{2w_0(d-1)(d-2)}r^{-(d-2)(z-2\gamma)}_h+\frac{\psi_0k^2}{2(d-2)}r^{(d-2)(z-2\gamma)}_h\bigg] \end{eqnarray} From this follows the zero-field magnetic susceptibility \begin{eqnarray} \chi_B&=&-\partial^2_B F=\frac{w_0V_{d-1}r^{(d-4)(z-2\gamma)}_h}{32\pi G(d-4)(z-2\gamma)}\times\nonumber \\&&\left(\frac{2(d-1)v_0r^{2z}_h+r^{4\gamma}_h((d-1)^2\psi_0k^2+(d-7)(d-2)^2w_0 \mu^2(z-2\gamma)^2)}{2v_0r^{2z}_h+r^{4\gamma}_h((d-1)\psi_0k^2-(d-2)^2w_0 \mu^2(z-2\gamma)^2)}\right)_{B=0}. \end{eqnarray} \paragraph{The null energy condition:} The null energy condition states that for any null vector $u^M$, the energy momentum tensor obeys $T_{MN}u^M u^N\geq 0$. Using Einstein's equations of motion, it is easy to show that $R_{MN}u^M u^N\geq 0$, where $R_{MN}$ is the Ricci tensor. For the present case, we get $w_0\geq 0$ and $\psi_0\geq 0$.
\paragraph{Transport coefficients:} The transport coefficients with the choice $16\pi G=1$ are \begin{eqnarray}\label{transport_b_IR} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)=\left[\frac{\psi_0 k^2r^{(d-3)(z-2\gamma)}_h(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+w_0\psi_0 k^2+w^2_0B^2r^{-2(z-2\gamma)}_h)}{\psi^2_0 k^4+B^2(\rho^2r^{-2(d-1)(z-2\gamma)}_h +2w_0\psi_0 k^2r^{-2(z-2\gamma)}_h+w^2_0B^2r^{-2(z-2\gamma)}_h)}\right]_{r_h}\nonumber \\ \sigma_{12}(r_h)&=&-\sigma_{21}(r_h)=\left[\frac{\rho Br^{-2(z-2\gamma)}_h(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+2w_0\psi_0 k^2+w^2_0B^2r^{-2(z-2\gamma)}_h)}{\psi^2_0 k^4+B^2(\rho^2r^{-2(d-1)(z-2\gamma)}_h +2w_0\psi_0 k^2r^{-2(z-2\gamma)}_h+w^2_0B^2r^{-2(z-2\gamma)}_h)}\right]_{r_h}\nonumber \\ \alpha_{11}(r_h)&=&\alpha_{22}(r_h)={\overline\alpha}_{11}(r_h)={\overline\alpha}_{22}(r_h)\nonumber \\&=& \left[\frac{4\pi \rho\psi_0 k^2}{\psi^2_0 k^4+B^2(\rho^2r^{-2(d-1)(z-2\gamma)}_h +2w_0\psi_0 k^2r^{-2(z-2\gamma)}_h+w^2_0B^2r^{-2(z-2\gamma)}_h)}\right]_{r_h}\nonumber \\ \alpha_{12}(r_h)&=&-\alpha_{21}(r_h)={\overline\alpha}_{12}(r_h)=-{\overline\alpha}_{21}(r_h)\nonumber \\&=& \left[ \frac{4\pi Br^{(d-3)(z-2\gamma)}_h(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+w_0\psi_0 k^2+w^2_0B^2r^{-2(z-2\gamma)}_h)}{\psi^2_0 k^4+B^2(\rho^2r^{-2(d-1)(z-2\gamma)}_h +2w_0\psi_0 k^2r^{-2(z-2\gamma)}_h+w^2_0B^2r^{-2(z-2\gamma)}_h)}\right]_{r_h}\nonumber \\ \kappa_{11}(r_h)&=&\kappa_{22}(r_h)\nonumber \\&=&\left[16\pi^2 T_H w_0 r^{(d-1)(z-2\gamma)}_h\frac{(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+w_0\psi_0 k^2)}{B^2 w^2_0\rho^2 r^{-2(d-1)(z-2\gamma)}_h +(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+w_0\psi_0 k^2)^2}\right]_{r_h}\nonumber \\ \kappa_{12}(r_h)&=&-\kappa_{21}(r_h)=-\left[ \frac{16\pi^2 B T_Hw^2_0\rho}{B^2 w^2_0\rho^2 r^{-2(d-1)(z-2\gamma)}_h +(\rho^2 r^{-2(d-2)(z-2\gamma)}_h+w_0\psi_0 k^2)^2}\right]_{r_h}. \end{eqnarray} The precise temperature dependence of the transport coefficients is very difficult to find.
However, for small magnetic field and charge density, the size of the horizon is related to the temperature as \begin{equation} r^{(z-2\gamma)}_h=\frac{1}{2a}\left[T_H\pm\sqrt{T^2_H+4ab} \right]+{\cal O}(\rho^2,~~B^2), \end{equation} where $a=\frac{v_0}{4\pi(d-1)(z-2\gamma)}$ and $b=\frac{\psi_0 k^2}{8\pi(z-2\gamma)}$. In that case, the temperature dependence of the transport coefficients for $d=3$, to leading order in the charge density and magnetic field, is as follows: \begin{eqnarray} \sigma_{11}(r_h)&=&\sigma_{22}(r_h)=w_0+\frac{(\rho^2-B^2)}{\psi_0 k^2}\left(\frac{2a}{T_H\pm\sqrt{T^2_H+4ab} }\right)^2+{\cal O}(\rho^4,~~B^4),\nonumber \\ \sigma_{12}(r_h)&=&\frac{2w_0\rho B}{\psi_0 k^2}\left(\frac{2a}{T_H\pm\sqrt{T^2_H+4ab} }\right)^2+{\cal O}(\rho^4,~~B^4),\nonumber \\ \alpha_{11}(r_h)&=&\frac{4\pi\rho }{\psi_0 k^2}-\frac{8\pi\rho B^2}{\psi^2_0 k^4}\left(\frac{2a}{T_H\pm\sqrt{T^2_H+4ab} }\right)^2+{\cal O}(\rho^4,~~B^4),\nonumber \\ \alpha_{12}(r_h)&=&\frac{4\pi w_0 B }{\psi_0 k^2}+\frac{4\pi\rho^2 B}{\psi^2_0 k^4}\left(\frac{2a}{T_H\pm\sqrt{T^2_H+4ab} }\right)^2+{\cal O}(\rho^4,~~B^4),\nonumber \\ \kappa_{11}(r_h)&=&\frac{16\pi^2 T_H}{\psi_0 k^2}\left(\frac{T_H\pm\sqrt{T^2_H+4ab} }{2a}\right)^2+{\cal O}(\rho^2,~~B^2),\nonumber \\ \kappa_{12}(r_h)&=&\frac{16\pi^2 T_H\rho B}{\psi^2_0 k^4}+{\cal O}(\rho^2,~~B^2). \end{eqnarray} \section{Conclusion} In this paper, we have revisited the Einstein-Maxwell-dilaton-axion system and studied the thermodynamics as well as the transport coefficients associated with an electrically and magnetically charged black hole at finite temperature with planar horizon in an arbitrary, but even, dimensional bulk spacetime. A new black hole solution is obtained at IR which shares properties similar to those of the scale violating solutions; for example, the entropy depends on the scale violating parameter as well as on the Lifshitz dynamical exponent through the horizon.
One of the distinguishing features of the new solution is that there is no need for a logarithmic profile of the dilaton; instead, a constant dilaton suffices to generate the solution. We have shown that there exist dimensionless ratios involving the transport coefficients, which read as \begin{eqnarray} &&T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=1,\nonumber \\&& \frac{\sigma_{11}(r_h)}{\alpha_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{\rho}{16\pi G B}=T_H\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)},\nonumber \\ &&T_H\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)}=\frac{16\pi G B}{\rho}=\frac{\overline\kappa_{12}(r_h)}{\alpha_{11}(r_h)}\frac{\vartheta_{12}(r_h)}{\kappa_{12}(r_h)} \end{eqnarray} which hold irrespective of the detailed structure of the solution. We have checked that such relations also hold for an electrically and magnetically charged planar black hole at finite temperature in $d=3$ for the Einstein-DBI-dilaton-axion system. This is discussed in Appendix B. \section{Appendix: A} In this section, we present the on-shell value of the action for the Einstein-Maxwell-dilaton-axion system. This is essential for finding the thermodynamic potential in the grand canonical ensemble. Once we have the thermodynamic potential, we can calculate all the thermodynamic quantities. In particular, we are interested in finding the magnetization of the system. It is computed by taking the derivative of the grand potential with respect to the magnetic field at constant temperature, \begin{equation} M=-\frac{1}{V_{d-1}\beta}\left(\frac{\partial I}{\partial B} \right)_{\beta}, \end{equation} where $I,~\beta,~B$ are the grand potential, inverse temperature and magnetic field, respectively. $V_{d-1}=\int d^{d-1}x$ is the volume of the spatial directions, $x_i$\rq{}s.
The Gibbons-Hawking term is \begin{equation} S_{GH}=-\frac{1}{\kappa^2}\int d^d x\sqrt{-\gamma} K \end{equation} where $K$ is the trace of the extrinsic curvature and $\gamma_{ab}$ is the induced metric. The extrinsic curvature is defined as $K_{ab}=-\frac{1}{2}(\nabla_{a}n_{b}+\nabla_{b}n_{a})$, where $n_{a}$ is the unit vector normal to the boundary. The sum of the on-shell value of the bulk action and the Gibbons-Hawking term gives \begin{eqnarray} &&S_{bulk}+S_{GH}=\frac{(d-2)V_{d-1}}{2\kappa^2}\left(\beta\frac{\sqrt{g_{tt}}}{\sqrt{g_{rr}}}g^{\prime}_{xx}g^{\frac{d-3}{2}}_{xx}\right)^{UV}+\frac{V_{d-1}}{2\kappa^2}\left(\beta\frac{\sqrt{g_{tt}}}{\sqrt{g_{rr}}}g^{\prime}_{xx}g^{\frac{d-3}{2}}_{xx}\right)^{IR}\nonumber \\ &&+\frac{V_{d-1}}{2\kappa^2}\left(\beta\frac{g^{\prime}_{tt}}{\sqrt{g_{tt}g_{rr}}}g^{\prime}_{xx}g^{\frac{d-1}{2}}_{xx}\right)^{UV}-\frac{V_{d-1}}{\kappa^2}\beta\int dr \sqrt{g_{tt}g_{rr}}g^{\frac{d-3}{2}}_{xx}\left[ \frac{\psi}{2}k^2+\frac{W}{2g_{xx}}B^2\right]. \end{eqnarray} The counterterms required for a well-defined on-shell bulk action on the boundary are \begin{eqnarray}\label{ct_action} S_{ct1}&=&\frac{\alpha_1}{2\kappa^2}\int d^d x \sqrt{-\gamma}=\frac{\alpha_1V_{d-1}}{2\kappa^2} \beta\left(\sqrt{g_{tt}}g^{\frac{d-1}{2}}_{xx}\right)^{UV},\nonumber \\ S_{ct2}&=&\frac{\alpha_2}{\kappa^2}\int d^d x \sqrt{-\gamma} \partial_{\mu}\chi_i\partial_{\nu}\chi_ig^{\mu\nu}=\frac{\alpha_2V_{d-1}}{\kappa^2}(d-1)k^2 \beta\left(\sqrt{g_{tt}}g^{\frac{d-3}{2}}_{xx}\right)^{UV}. \end{eqnarray} where the $\alpha_i$\rq{}s are constants and the spacetime index is denoted by the upper case Latin index $M$, which can take $d+1$ values. Here, we shall take $M=(\mu,~r)$, where the Greek index, $\mu$, takes only $d$ values, i.e., it is defined for the field theory directions. For the present case, we shall set the metric components as $g_{tt}=U_1(r)=\frac{r^2}{L^2}f(r),~~g_{xx}=h(r)=\frac{r^2}{L^2}$ and $g_{rr}(r)=\frac{1}{U_2(r)}=\frac{L^2}{r^2f(r)}$.
The couplings are set as $\psi(\phi(r))=\psi_0$ and $W(\phi(r))=w_0$, where $\psi_0$ and $w_0$ are constants. The finite term in the sum of the bulk term, the Gibbons-Hawking term and the counterterms gives \begin{eqnarray} S&\equiv& S_{bulk}+S_{GH}+S_{ct1}+S_{ct2},\nonumber \\ &=&-\frac{V_{2}}{2\kappa^2}\beta\frac{(c_1 r_h +B^2 w_0L^6-k^2r^2_h\psi_0L^4)}{ L^4r_h},\quad {\rm for}\quad d=3, ~\alpha_1=-\frac{4}{L},~\alpha_2=\frac{\psi_0 L}{4}.\nonumber \\ \end{eqnarray} However, for $d=5$, we need to add additional counterterms \begin{eqnarray} S_{ct3}&=&\frac{\alpha_3}{\kappa^2}\int d^d x \sqrt{-\gamma}F_{\mu\nu}F^{\mu\nu}=\frac{\alpha_3V_{d-1}}{\kappa^2}(d-1)B^2 \beta\left(\sqrt{g_{tt}}g^{\frac{d-5}{2}}_{xx}\right)^{UV},\nonumber \\ S_{ct4}&=&\frac{\alpha_4}{\kappa^2}\int d^d x \sqrt{-\gamma}(\partial_{\mu}\chi_i\partial_{\nu}\chi_ig^{\mu\nu})^2=\frac{\alpha_4V_{d-1}}{\kappa^2}(d-1)^2k^4 \beta\left(\sqrt{g_{tt}}g^{\frac{d-5}{2}}_{xx}\right)^{UV}, \end{eqnarray} and for $d=7$, we need some more counterterms \begin{eqnarray} S_{ct5}&=&\frac{\alpha_5}{\kappa^2}\int d^d x \sqrt{-\gamma}(\partial_{\mu}\chi_i\partial_{\nu}\chi_ig^{\mu\nu})(F_{\rho\sigma}F^{\rho\sigma})=\frac{\alpha_5V_{d-1}}{\kappa^2}(d-1)^2B^2k^2 \beta\left(\sqrt{g_{tt}}g^{\frac{d-7}{2}}_{xx}\right)^{UV},\nonumber \\ S_{ct6}&=&\frac{\alpha_6}{\kappa^2}\int d^d x \sqrt{-\gamma}(\partial_{\mu}\chi_i\partial_{\nu}\chi_ig^{\mu\nu})^3=\frac{\alpha_6V_{d-1}}{\kappa^2}(d-1)^3k^6 \beta\left(\sqrt{g_{tt}}g^{\frac{d-7}{2}}_{xx}\right)^{UV}.
\end{eqnarray} The on-shell value of the action for $d=5$ and $d=7$ gives \begin{eqnarray} S&\equiv& S_{bulk}+S_{GH}+S_{ct1}+S_{ct2}+S_{ct3}+S_{ct4}+S_{ct5}+S_{ct6},\nonumber \\ &=&-\frac{V_{4}}{2\kappa^2}\beta\frac{(3c_1-3B^2 r_h w_0L^6-k^2 r^3_h \psi_0L^4)}{3L^6}\quad {\rm for} ~d=5,~\alpha_1=-\frac{8}{L},~\alpha_2=\frac{\psi_0L}{12},~\alpha_3=\frac{w_0L}{8},\nonumber \\&&\alpha_4=\frac{\psi^2_0L^3}{1152}\nonumber \\ &=&-\frac{V_{6}}{2\kappa^2}\beta\frac{(15c_1-5B^2 r^3_h w_0L^6-3 k^2 r^5_h\psi_0L^4)}{15}\quad {\rm for} ~d=7,~\alpha_1=-\frac{12}{L},~\alpha_2=\frac{\psi_0L}{20},~\alpha_3=\frac{w_0L}{24},\nonumber \\ &&\quad\quad\quad\quad ~\alpha_4=\frac{\psi^2_0L^3}{4800},~\alpha_5=\frac{w_0\psi_0L^3}{2880},\quad \alpha_6=\frac{\psi^3_0L^5}{576000}. \end{eqnarray} The quantity $c_1$ can easily be calculated from the condition $f(r_h)=0$. The grand potential, $I$, when expressed in terms of the chemical potential, reads as \begin{eqnarray} I&=&-\left(\frac{V_2}{4G}\right)\left(\frac{r^2_h}{L^2}\right)\left(\frac{4r^4_h -3B^2 w_0L^6+r^2_hL^2(\mu^2w_0+2k^2 L^2 \psi_0)}{12r^4_h -B^2 w_0L^6-r^2_hL^2(\mu^2w_0+2k^2 L^2 \psi_0)}\right)\quad {\rm for}\quad d=3,\nonumber \\ &=&-\left(\frac{V_4}{12G}\right) \left(\frac{r^4_h}{L^4}\right)\left(\frac{24r^4_h +18B^2 w_0L^6+r^2_hL^2(9\mu^2w_0 +4k^2 \psi_0L^2)}{40r^4_h -2B^2 w_0L^6-r^2_hL^2(9\mu^2w_0 +4k^2 L^2 \psi_0)}\right)\quad {\rm for}\quad d=5,\nonumber \\ &=&-\left(\frac{V_6}{20G}\right)\left(\frac{r^6_h}{L^6}\right) \left(\frac{60r^{4}_h +15B^2 w_0L^6+r^2_hL^2(25\mu^2 w_0 +6k^2 \psi_0L^2)}{84r^{4}_h -3B^2 w_0L^6-r^2_hL^2(25\mu^2 w_0 +6k^2 L^2 \psi_0)}\right)\quad {\rm for}\quad d=7,\nonumber \\ \end{eqnarray} where we have used the relation $2\kappa^2=16\pi G.$ Various thermodynamic quantities can be calculated from the free energy.
The entropy is defined as \begin{eqnarray} S&=&\beta\left( \frac{\partial I}{\partial \beta}\right)_{\mu}-I=\frac{V_2 r^2_h}{4G L^2}\quad {\rm for}\quad d=3,\nonumber \\ &=&\frac{V_4 r^4_h}{4GL^4}\quad {\rm for}\quad d=5,\nonumber \\ &=&\frac{V_6 r^6_h}{4GL^6}\quad {\rm for}\quad d=7. \end{eqnarray} The charge is defined as \begin{eqnarray} Q&=&-\frac{1}{\beta}\left( \frac{\partial I}{\partial \mu}\right)_{\beta}=\frac{V_2 w_0\mu r_h}{16\pi G L^2},\quad {\rm for}\quad d=3,\nonumber \\ &=&\frac{3V_4 w_0\mu r^3_h}{16\pi GL^4},\quad {\rm for}\quad d=5,\nonumber \\ &=&\frac{5V_6 w_0\mu r^5_h}{16\pi GL^6},\quad {\rm for}\quad d=7. \end{eqnarray} The energy is defined as \begin{eqnarray} E&=&\left( \frac{\partial I}{\partial \beta}\right)_{\mu}-\frac{\mu}{\beta}\left( \frac{\partial I}{\partial \mu}\right)_{\beta}=\frac{V_2}{32\pi G r_hL^4}\left(4r^4_h +B^2 w_0L^6+r^2_hL^2(w_0\mu^2-2k^2\psi_0L^2)\right), \quad {\rm for}\quad d=3,\nonumber \\ &=&\frac{V_4}{96\pi GL^6}\left(24r^5_h-6 B^2 r_h w_0L^6+r^3_hL^2(9w_0\mu^2-4k^2\psi_0L^2)\right)\quad {\rm for}\quad d=5,\nonumber \\ &=&\frac{V_6}{160\pi GL^8}\left(60r^7_h-5 B^2 r^3_h w_0L^6+r^5_hL^2(25w_0\mu^2-6k^2\psi_0L^2)\right)\quad {\rm for}\quad d=7. \end{eqnarray} The magnetic moment is defined as \begin{eqnarray} m&=&-\frac{1}{\beta}\left( \frac{\partial I}{\partial B}\right)_{\beta}=-\frac{B w_0 V_2 L^2}{16\pi G r_h},\quad {\rm for} \quad d=3,\nonumber \\ &=&\frac{V_4 B w_0 r_h}{8\pi G},\quad {\rm for} \quad d=5,\nonumber \\ &=&\frac{V_6 B w_0 r^3_h}{16\pi GL^2},\quad {\rm for} \quad d=7. \end{eqnarray} It is easy to notice that the dissipation parameter, $k$, enters directly into the expression for the energy. It also enters all the thermodynamic quantities via the size of the horizon.
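As a sketch of a numerical consistency check for $d=3$ (with arbitrary parameter values), the grand potential quoted above and the thermodynamic quantities derived from it satisfy the expected relation $I/\beta=E-T_H S-\mu Q$:

```python
import math

# Arbitrary parameter values for the d = 3 check.
d, L, G, V2 = 3.0, 1.0, 1.0, 1.0
w0, psi0, k, mu, B, rh = 1.2, 0.8, 0.7, 0.5, 0.3, 1.4

# charge density consistent with the chemical potential
rho = w0 * mu * (d - 2) * rh**(d - 2) / L**(d - 1)
# Hawking temperature with magnetic field
TH = d * rh / (4 * math.pi * L**2) * (
    1 - B**2 * L**6 * w0 / (4 * d * rh**4)
    - k**2 * L**4 * psi0 / (2 * d * rh**2)
    - rho**2 * L**(2 * d) / (2 * w0 * d * (d - 1) * rh**(2 * (d - 1))))

S = V2 * rh**2 / (4 * G * L**2)
Q = V2 * w0 * mu * rh / (16 * math.pi * G * L**2)
E = V2 / (32 * math.pi * G * rh * L**4) * (
    4 * rh**4 + B**2 * w0 * L**6
    + rh**2 * L**2 * (w0 * mu**2 - 2 * k**2 * psi0 * L**2))

# grand potential for d = 3, divided by beta (i.e. multiplied by T_H)
I_over_beta = TH * (-(V2 / (4 * G)) * (rh**2 / L**2)
    * (4 * rh**4 - 3 * B**2 * w0 * L**6
       + rh**2 * L**2 * (mu**2 * w0 + 2 * k**2 * L**2 * psi0))
    / (12 * rh**4 - B**2 * w0 * L**6
       - rh**2 * L**2 * (mu**2 * w0 + 2 * k**2 * L**2 * psi0)))

assert abs(I_over_beta - (E - TH * S - mu * Q)) < 1e-9
```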
The magnetic susceptibility is defined as the derivative of the magnetic moment per unit volume with respect to the applied magnetic field \begin{eqnarray} \chi=\frac{1}{V}\left(\frac{\partial m}{ \partial B}\right)_{\beta}&=&-\frac{ w_0L^2 }{16\pi G r_h},\quad {\rm for} \quad d=3,\nonumber \\ &=&\frac{ w_0 r_h}{8\pi G},\quad {\rm for} \quad d=5,\nonumber \\ &=&\frac{ w_0 r^3_h}{16\pi GL^2},\quad {\rm for} \quad d=7, \end{eqnarray} where $V$ is the appropriate volume in the corresponding spacetime. It simply follows that the magnetic susceptibility is negative for $d=3$, whereas it is positive for $d=5$ and $d=7$. Note that the magnetic susceptibility depends on the temperature, magnetic field, chemical potential and the dissipation parameter, $k$, through the size of the horizon, $r_h$, in a very non-trivial way. It is easy to see that the thermodynamic quantities obey the following relation \begin{equation} \frac{I}{\beta}=E-TS-\mu Q. \end{equation} The temperature of the system is \begin{equation} T_H=\frac{dr_h}{4\pi L^2}\left[1-\frac{B^2 L^6w_0}{4dr^4_h}-\frac{k^2L^4\psi_0}{2dr^2_h}-\frac{ \rho^2L^{2d}}{2w_0d(d-1)r^{2(d-1)}_h} \right]. \end{equation} The condition for the inflection point with respect to the size of the horizon is \begin{equation} \frac{\partial T_H}{\partial r_h}=0=\frac{\partial^2 T_H}{\partial r^2_h}. \end{equation} Restricting to $d=3$, and introducing the dimensionless variables $(b,~{\tilde\rho},~x_h,~{\tilde k},~t_H,~t_{crit})$ as \begin{equation} b=BL\sqrt{w_0},\quad {\tilde\rho}=\frac{\rho L}{\sqrt{w_0}},\quad x_h=\frac{r_h}{L},\quad {\tilde k}=k L,\quad t_H= T_H L,\quad t_{crit}=T_{crit} L \end{equation} gives the condition on the size of the horizon and $\psi_0$ as \begin{equation} x_h=x_{crit}=\frac{(b^2+{\tilde\rho}^2)^{\frac{1}{4}}}{\sqrt{2}},\quad \psi_0=\psi_{crit}=-\frac{6\sqrt{b^2+{\tilde\rho}^2}}{{\tilde k}^2}. \end{equation} At this value the critical temperature is \begin{equation} t_{crit}=\frac{\sqrt{2}}{\pi}~(b^2+{\tilde\rho}^2)^{\frac{1}{4}}.
\end{equation} The temperature, $t_{crit}$, is interpreted as the temperature at which the turning points appear or disappear in the graph of the temperature versus the size of the horizon, and $x_{crit}$ represents the size of that black hole. \begin{figure}[h!] \centering {\includegraphics[ width=8cm,height=6cm]{temp_vs_horizon1.png} } \caption{ The figure is plotted for the dimensionless Hawking temperature, $t_H$, vs the dimensionless size of the horizon, $x_h$, for the $AdS_4$ black hole. The parameters are set as: $b=2,~d=3,~{\tilde \rho}=1,~{\tilde k}=2$. The $t_{crit}$ is plotted as a horizontal green line. } \label{fig_1} \end{figure} The Hawking-Page phase transition occurs at the temperature \begin{equation} t_{HP}=\frac{4b^2-2{\tilde \rho}^2-\psi_0{\tilde k}^2\left(\sqrt{12b^2+\psi^2_0{\tilde k}^4-4{\tilde \rho}^2}-\psi_0{\tilde k}^2 \right)}{\pi\left(\sqrt{12b^2+\psi^2_0{\tilde k}^4-4{\tilde \rho}^2}-\psi_0{\tilde k}^2 \right)^{\frac{3}{2}}}. \end{equation} In order to have a real-valued Hawking-Page phase transition temperature, the following condition needs to be imposed on the dimensionless magnetic field, charge density and dissipation parameter: \begin{equation} 12b^2+\psi^2_0{\tilde k}^4\geq 4{\tilde \rho}^2.
\end{equation} \section{Appendix B: Transport for DBI system} The transport coefficients for Einstein-DBI-dilaton-axion system for $d=3$ is calculated in \cite{Pal:2019bfw, Pal:2020gsq} and restoring the factor of $16\pi G$ reads as \begin{eqnarray}\label{transport_dbi} &&(16\pi G)\sigma_{11}(r_h)=(16\pi G)\sigma_{22}(r_h)\nonumber \\ &=&\left(\frac{k^2T_bh Z_2\psi[T_bZ_2(\rho^2+B^2Z^2_1)+k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h} \nonumber \\ &&(16\pi G)\sigma_{12}(r_h)=-(16\pi G)\sigma_{21}(r_h)\nonumber \\ &=&\left(\frac{B\rho T_b[T^2_bZ^2_2(\rho^2+B^2Z^2_1)+2T_bk^2Z_2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}+k^4\psi^2]}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h}\nonumber \\ &&(16\pi G)T_H \alpha_{11}(r_h)=(16\pi G)T_H \alpha_{22}(r_h)\nonumber \\ &=&\left(\frac{k^2T_bU_0\rho h^2Z^2_2\psi}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h}\nonumber \\ &&(16\pi G)T_H \alpha_{12}(r_h)=-(16\pi G)T_H \alpha_{21}(r_h)\nonumber \\ &=&\left(\frac{B T_bh U_0Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h}\nonumber \\ &&(16\pi G)T_H{\overline\kappa}_{11}(r_h)=(16\pi G)T_H{\overline\kappa}_{22}(r_h)\nonumber \\ &=&\left(\frac{U^2_0h[k^2\psi(B^2+h^2Z^2_2)+B^2T_bZ_2\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h} \nonumber \\ &&(16\pi G)T_H{\overline\kappa}_{12}(r_h)=-(16\pi G)T_H{\overline\kappa}_{21}(r_h)\nonumber \\ &=&\left(\frac{\rho BT_bU^2_0h^2Z^2_2}{k^4\psi^2(B^2+h^2Z^2_2)+T_bB^2Z_2[T_bZ_2(\rho^2+B^2Z^2_1)+2k^2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2Z^2_2)}]}\right)_{r_h} \nonumber \\ \end{eqnarray} 
This gives the longitudinal thermal conductivity as \begin{eqnarray}\label{kappa_11} &&(16\pi G)\kappa_{11}(r_h)=(16\pi G)\kappa_{22}(r_h)=(16\pi G)\Biggl[{\overline\kappa}_{11}-T_H\frac{\left((\alpha^2_{11}-\alpha^2_{12})\sigma_{11}+2\alpha_{11}\alpha_{12}\sigma_{12}\right)}{\sigma^2_{11}+\sigma^2_{12}}\Biggr]_{r_h}\nonumber \\ &=&\left(\frac{16\pi^2 T_H h\left[T_b \rho^2 Z_2\sqrt{\rho^2+Z^2_1(B^2+h^2 Z^2_2)}+k^2\psi(\rho^2+h^2Z^2_1Z^2_2)\right]}{ [2k^2 T_b\rho^2 Z_2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2 Z^2_2)}+T^2_b\rho^2Z^2_2 (\rho^2+B^2 Z^2_1)+k^4\psi^2(\rho^2+h^2Z^2_1Z^2_2)]}\right)_{r_h}\nonumber \\ \end{eqnarray} The right hand side need to be evaluated at the horizon. The transverse thermal conductivity takes the following form \begin{eqnarray}\label{kappa_xy} &&(16\pi G)\kappa_{12}(r_h)=-(16\pi G)\kappa_{21}(r_h)=(16\pi G)\Biggl[{\overline\kappa}_{12}+T_H\frac{\left((\alpha^2_{11}-\alpha^2_{12})\sigma_{12}-2\alpha_{11}\alpha_{12}\sigma_{11}\right)}{\sigma^2_{11}+\sigma^2_{12}}\Biggr]_{r_h}\nonumber \\&=&-\left(\frac{16 \pi^2 T_H B T_b \rho h^2Z^2_1Z^2_2}{ 2k^2 T_b\rho^2 Z_2\psi\sqrt{\rho^2+Z^2_1(B^2+h^2 Z^2_2)}+T^2_b\rho^2Z^2_2 (\rho^2+B^2 Z^2_1)+k^4\psi^2(\rho^2+h^2Z^2_1Z^2_2)}\right)_{r_h}.\nonumber \\ \end{eqnarray} It is easy to show using $U_0=4\pi T_H$ \begin{equation}\label{universal_ratio_dbi} \frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=\left(\frac{4\pi B}{\psi k^2}\right)_{r_h},\quad T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}=\left(\frac{\psi k^2}{4\pi B}\right)_{r_h} \end{equation} Upon combining these relations, we get \begin{equation}\label{universal_relation_dbi} T_H\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}\frac{\alpha_{12}(r_h)}{\sigma_{11}(r_h)}=1. \end{equation} The electric charge is defined as $Q=\int d^{d-1}x J^0$, where \begin{equation} J^0=T_b\frac{Z_1(\phi) \sqrt{-det\bigg( Z_2(\phi) g_{MN}+\lambda F_{MN}\bigg)}}{32\pi G}\left[\bigg( Z_2(\phi) g+\lambda F\bigg)^{-1~ r0}-\bigg( Z_2(\phi) g+\lambda F\bigg)^{-1~0 r}\right]. 
\end{equation} Using the solution for the gauge potential given in \cite{Pal:2020gsq} gives the electric charge $Q=\frac{V_2}{16\pi G}\rho$. It is easy to show that the longitudinal component of the resistivity matrix is related to the longitudinal component of the Seebeck coefficient as \begin{eqnarray} \rho_{11}(r_h)&\equiv&\frac{\sigma_{11}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}=-(16\pi G)\left[\frac{\psi k^2}{4\pi\rho}\vartheta_{11}(r)\right]_{r_h},\quad {\rm where}\nonumber \\ \vartheta_{11}(r_h)&=&-\frac{\sigma_{11}(r_h)\alpha_{11}(r_h)+\sigma_{12}(r_h)\alpha_{12}(r_h)}{\sigma^2_{11}(r_h)+\sigma^2_{12}(r_h)}. \end{eqnarray} This gives the relation \begin{equation} \frac{\sigma_{11}(r_h)}{\alpha_{12}(r_h)}\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}=-\frac{1}{V_2}\left(\frac{Q}{B}\right)=T_H\frac{\vartheta_{11}(r_h)}{\rho_{11}(r_h)}\frac{\alpha_{11}(r_h)}{\overline\kappa_{12}(r_h)}. \end{equation}
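As a cross-check, the ratios (\ref{universal_ratio_dbi}) and the product (\ref{universal_relation_dbi}) follow directly from the horizon expressions (\ref{transport_dbi}), since the common denominator and the factor $16\pi G$ cancel. The following sympy sketch (symbol names are ours, introduced only for this check) verifies this under the stated substitution $U_0 = 4\pi T_H$:

```python
import sympy as sp

# Horizon data entering the d=3 Einstein-DBI-dilaton-axion transport coefficients.
k, Tb, h, Z1, Z2, psi, rho, B, TH = sp.symbols(
    'k T_b h Z_1 Z_2 psi rho B T_H', positive=True)
U0 = 4 * sp.pi * TH                       # U_0 = 4*pi*T_H at the horizon

S = sp.sqrt(rho**2 + Z1**2 * (B**2 + h**2 * Z2**2))
N = Tb * Z2 * (rho**2 + B**2 * Z1**2) + k**2 * psi * S   # common bracket
D = k**4 * psi**2 * (B**2 + h**2 * Z2**2) + Tb * B**2 * Z2 * (
    Tb * Z2 * (rho**2 + B**2 * Z1**2) + 2 * k**2 * psi * S)

# (16 pi G) times the coefficients of (transport_dbi); the prefactor cancels in ratios.
sigma11 = k**2 * Tb * h * Z2 * psi * N / D
alpha12 = B * Tb * h * U0 * Z2 * N / (TH * D)              # from (16 pi G) T_H alpha_12
alpha11 = k**2 * Tb * U0 * rho * h**2 * Z2**2 * psi / (TH * D)
kbar12 = rho * B * Tb * U0**2 * h**2 * Z2**2 / (TH * D)

r1 = sp.simplify(alpha12 / sigma11)       # expect 4*pi*B/(psi*k^2)
r2 = sp.simplify(TH * alpha11 / kbar12)   # expect psi*k^2/(4*pi*B)
assert sp.simplify(r1 - 4 * sp.pi * B / (psi * k**2)) == 0
assert sp.simplify(r2 - psi * k**2 / (4 * sp.pi * B)) == 0
assert sp.simplify(r1 * r2 - 1) == 0
print("universal relation verified")
```

The check makes transparent why the relation is "universal": both ratios depend only on $B$, $\psi$ and $k$, all the DBI structure dropping out.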
\section{\normalsize Introduction} \label{secIn} In classical mechanics inertial reference frames play the role of preferred ones. The description of motion in such frames is usually simpler than in noninertial frames. In the latter case one must introduce the so-called inertial forces, for example the centrifugal force and the Coriolis force in the case of a rotating reference frame. However, sometimes it is not reasonable to use an inertial frame. For example, the noninertial reference frame formed by the rotating Earth is natural if one wants to describe all observations on the immovable Earth. In General Relativity there are no preferred coordinate systems. If two different coordinate systems describe the same space-time region, they can be used equally well. However, as in classical mechanics, it is necessary to understand the physical sense of the different terms in the equations of motion of bodies in different coordinate systems. In cosmology the well-known example of the dipole and quadrupole terms in the background radiation can be mentioned~\cite{Kogut,NaselskiiNovikovN}. Our telescopes on the Earth or in the space of the Solar system do not correspond to the synchronous reference frame usually used in cosmology. So the problem arises of interpreting the terms that appear due to the reference frame used in physical cosmology. The difference between reference frames is important at large distances, where the nondiagonal term in the metric tensor is not small. It is well known for the case of a rotating reference frame that the Doppler effect and the redshift of light are different from the inertial case~\cite{Kundig63}. Measuring the redshift is the main method to obtain information on far galaxies and their motion. Information on dark energy and the cosmological constant is obtained mainly from these measurements.
And it is well known from the textbook~\cite{LL_II}, Sec.~88, Problem~1, that the forces acting on a particle in a gravitational field are obtained from the analysis of the Christoffel terms in the geodesic equations. That is why we must write these equations and the expressions for such terms in order to understand the origin of the noninertial forces appearing in our case. In this paper we write the geodesic equations in the homogeneous expanding Universe in two different reference frames often used in cosmology and investigate the physical meaning of the different terms arising in them. The expressions for the energy and momentum are also obtained. In our previous papers~\cite{GribPavlov2016d,GribPavlov2019h,GribPavlov2020} we showed that besides the difference in inertial forces there is a difference in the possible energies of particles, i.e., the existence of particles with negative energies in the nonsynchronous system. Our analysis of the geodesic equations shows that, besides the usual viscosity-like force present in the synchronous frame and leading to deceleration, there appears a term leading in some cases to acceleration. This acceleration occurs, however, only for a nonzero cosmological constant or an exotic matter equation of state, which seems to be the expected result. \section{\normalsize Isotropic homogeneous Universe and Einstein's equation} \label{secNEEU} The square of the space-time interval of the isotropic cosmological Friedmann model can be written in the synchronous frame~\cite{LL_II} as \begin{equation} d s^2 = c^2 d t^2 - a^2(t) \left( \frac{d r^2}{1 - K r^2} + r^2 d \Omega^2 \right), \label{f1} \end{equation} where $c$ is the speed of light, the parameter $r$ varies from 0 to $\infty $ in the open ($ K= -1 $) and quasi-Euclidean flat ($ K=0 $) models, and from 0 to 1 in the closed cosmological model ($ K= 1 $), and $d \Omega^2 = d \theta^2 + \sin^2 \theta \, d \varphi^2 $.
Changing the variables \begin{equation} r = f(\chi) = \left\{ \begin{array}{ll} \sin{\chi} , & \ \ \ K=1, \\ \chi , & \ \ \ K=0, \\ \sinh{\chi} , &\ \ \ K=-1, \end{array} \right. \label{zamp} \end{equation} the metric~(\ref{f1}) can also be written in the form \begin{equation} d s^2 = c^2 d t^2 - a^2(t) \left( d \chi^2 + f^2(\chi) d \Omega^2 \right). \label{f1n} \end{equation} In the closed model $\chi$ varies from 0 to $\pi$; in the cases $K=0,-1$ one has $\chi \in [0, + \infty)$. Using the conformal time~$\eta$: \begin{equation} c d t = a(\eta) \, d \eta, \label{eta} \end{equation} the metric~(\ref{f1n}) takes the form \begin{equation} d s^2 = a^2(\eta) \left( d \eta^2 - d \chi^2 - f^2(\chi) d \Omega^2 \right). \label{etaf1} \end{equation} The Christoffel symbols \begin{equation} \Gamma^{\, i}_{\, kl} = \frac{1}{2} g^{im} \left( \frac{\partial g_{mk}}{\partial x^l} + \frac{\partial g_{ml}}{\partial x^k} - \frac{\partial g_{kl}}{\partial x^m} \right) \label{Gijk} \end{equation} in the homogeneous isotropic space-time with the metric~(\ref{etaf1}) are \begin{equation} \Gamma^{\,0}_{\, 00}= \frac{a'}{a} = \frac{\dot{a}}{c} \ , \ \ \ \ \Gamma^{\,i}_{\, 0 j}=\frac{\dot{a}}{c} \, \delta^i_j \ , \ \ \ \ \Gamma^{\,0}_{\, \alpha \beta}= \frac{\dot{a}}{c} \, \gamma_{\alpha \beta} \ , \ \ \ \ \Gamma^{\,\alpha}_{\, \beta \delta}(g_{ik}) = \Gamma^{\,\alpha}_{\, \beta \delta}(\gamma_{\nu \mu}), \label{GGG} \end{equation} where the prime denotes the derivative with respect to the conformal time~$\eta$, the dot above a symbol is the derivative with respect to the time~$t$, and $ \gamma_{\alpha \beta} $ is the metric of the 3-dimensional space of constant curvature~$K$.
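As a sanity check of (\ref{GGG}), the short sympy sketch below (the helper function is ours, written only for this check) computes the Christoffel symbols from (\ref{Gijk}) for the conformal metric (\ref{etaf1}) in the flat case $K=0$, $f(\chi)=\chi$, and confirms $\Gamma^{\,0}_{\,00}=\Gamma^{\,1}_{\,01}=\Gamma^{\,0}_{\,11}=a'/a$ together with two representative spatial components:

```python
import sympy as sp

eta, chi, th, ph = sp.symbols('eta chi theta phi')
a = sp.Function('a')(eta)
x = [eta, chi, th, ph]

# Conformal FRW metric (etaf1) with K = 0, f(chi) = chi:
# ds^2 = a(eta)^2 (d eta^2 - d chi^2 - chi^2 dOmega^2)
g = sp.diag(a**2, -a**2, -a**2 * chi**2, -a**2 * chi**2 * sp.sin(th)**2)
ginv = g.inv()

def Gamma(i, k, l):
    """Christoffel symbol of the second kind, formula (Gijk)."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[i, m] * (g[m, k].diff(x[l])
                                          + g[m, l].diff(x[k])
                                          - g[k, l].diff(x[m]))
        for m in range(4)))

ap_over_a = a.diff(eta) / a      # a'/a, equal to adot/c in the notation of (GGG)
assert sp.simplify(Gamma(0, 0, 0) - ap_over_a) == 0
assert sp.simplify(Gamma(1, 0, 1) - ap_over_a) == 0
assert sp.simplify(Gamma(0, 1, 1) - ap_over_a) == 0            # (a'/a) gamma_11
assert sp.simplify(Gamma(0, 2, 2) - ap_over_a * chi**2) == 0   # (a'/a) gamma_22
assert sp.simplify(Gamma(2, 1, 2) - 1 / chi) == 0              # purely spatial part
print("Christoffel symbols of (etaf1) confirmed for K = 0")
```

The last assertion illustrates the statement $\Gamma^{\,\alpha}_{\,\beta\delta}(g_{ik}) = \Gamma^{\,\alpha}_{\,\beta\delta}(\gamma_{\nu\mu})$: the purely spatial symbols are those of the flat 3-metric in spherical coordinates.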
The Ricci tensor components and the scalar curvature are \begin{equation} R_{00} = 3 \frac{a \ddot{a}}{c^2}, \ \ \ R_{\alpha \beta } = - \gamma_{\alpha \beta } \left[ \frac{a \ddot{a}}{c^2} + 2 \left( \frac{\dot{a}^2}{c^2} + K \right) \right], \label{Rik} \end{equation} \begin{equation} R = \frac{6}{a^2} \left[ \frac{a \ddot{a}}{c^2} + \left( \frac{\dot{a}^2}{c^2} + K \right) \right]. \label{Rska} \end{equation} Einstein's equations are \begin{equation} R_{ik} -\frac{1}{2} R g_{ik} + \Lambda g_{ik} = - 8 \pi \frac{G}{c^4} T_{ik}, \label{Eur} \end{equation} where $\Lambda$ is the cosmological constant, $G$ is the gravitational constant, and $T_{ik}$ is the energy-momentum tensor of the background matter. Equations~(\ref{Eur}) for the metric~(\ref{etaf1}) are \begin{equation} \frac{\dot{a}^2 + K c^2}{a^2} = \frac{ c^2}{ 3 } \left( \frac{ 8 \pi G }{ c^4 } T_0^0 + \Lambda \right), \label{Eur0} \end{equation} \begin{equation} \frac{ \ddot{a} }{a} + \frac{\dot{a}^2 + K c^2}{2 a^2} = \frac{ c^2}{ 2 } \left( \frac{ 8 \pi G }{ 3 c^4 } \sum_{\alpha = 1}^3 T_\alpha^\alpha + \Lambda \right). \label{Eur1} \end{equation} From~(\ref{Eur0}), (\ref{Eur1}) one obtains \begin{equation} \frac{ \ddot{a} }{a} = \frac{ c^2}{ 3 } \left( \Lambda + \frac{4 \pi G }{ c^4} \left( \sum_{\alpha = 1}^3 T_\alpha^\alpha - T_0^0 \right) \right). \label{Ehh4} \end{equation} In comoving coordinates the energy-momentum tensor of the background matter in the isotropic homogeneous Universe is diagonal, \begin{equation} T_i^k = {\rm diag}\, (\varepsilon, -p, -p, -p), \label{Tikd} \end{equation} where $\varepsilon $ and $p$ are the energy density and pressure of the background matter. So \begin{equation} \frac{ \ddot{a} }{a} = \dot{H} + H^2 = \frac{\Lambda c^2}{3} - \frac{ 4 \pi G } { 3 c^2} \left( \varepsilon + 3 p \right), \label{ELhh4} \end{equation} where $H= \dot{a}/a$ is the Hubble ``constant''. We call it the ``constant'' following~\cite{MTW}, in spite of the fact that it is a variable depending on time.
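Equation (\ref{Ehh4}) follows from (\ref{Eur0}) and (\ref{Eur1}) by elementary algebra. A short symbolic check, with $(\dot{a}^2+Kc^2)/a^2$ and $\ddot{a}/a$ kept as abstract symbols $X$ and $Y$ (our notation, introduced only for this check), confirms this:

```python
import sympy as sp

# Abstract symbols: X = (adot^2 + K c^2)/a^2, Y = addot/a,
# T00 = T_0^0, Ts = sum over alpha of T_alpha^alpha.
c, G, Lam, T00, Ts, X, Y = sp.symbols('c G Lambda T00 Ts X Y')

eq0 = sp.Eq(X, c**2 / 3 * (8 * sp.pi * G / c**4 * T00 + Lam))               # (Eur0)
eq1 = sp.Eq(Y + X / 2, c**2 / 2 * (8 * sp.pi * G / (3 * c**4) * Ts + Lam))  # (Eur1)

Y_sol = sp.solve([eq0, eq1], [X, Y])[Y]
target = c**2 / 3 * (Lam + 4 * sp.pi * G / c**4 * (Ts - T00))               # (Ehh4)
assert sp.simplify(Y_sol - target) == 0
print("(Ehh4) recovered from (Eur0) and (Eur1)")
```

Substituting the diagonal tensor (\ref{Tikd}), i.e.\ $T_0^0=\varepsilon$ and $\sum_\alpha T_\alpha^\alpha=-3p$, into the same expression reproduces (\ref{ELhh4}).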
The radial distance between the points $\chi=0$ and $\chi$ in the metrics~(\ref{f1n}), (\ref{etaf1}) is $D= a(t) \chi$, and it is the same in the metric~(\ref{f1}). If $t$ is fixed, then the maximal value of $D$ in the closed model is $D_{\rm max} =\pi a(t)$. In the open and flat models $D$ is unbounded. A nonzero $D$ can be understood as the distance to a far galaxy from the place $D=0$ where the observer is located. The distance~$D$ corresponds to the ``proper distance'' of the book~\cite{Weinberg} and is equal to the distance that would be measured by observers located rather close to one another in the expanding Universe between the origin of the coordinate system and the point with comoving coordinate $\chi$ at the same moment of cosmic time $t$. Take the new coordinates $t, D, \theta, \varphi$ (see also~\cite{Ellis93,Grib95}). Then one obtains \begin{equation} d D = \frac{\dot{a}}{a} D\, d t + a \, d \chi , \ \ \ \ d \chi = \frac{1}{a } \left( d D - \frac{\dot{a}}{a}\, D d t \right) \label{f3} \end{equation} and the interval~(\ref{f1n}) becomes \begin{equation} d s^2 = \left( 1 - \left( \frac{ H D }{c } \right)^2 \right) c^2 d t^2 + 2 H D\, d D d t - d D^2 - a^2 f^2(D/a)\, d \Omega^2 . \label{f4} \end{equation} Note that the metric~(\ref{f4}) is not singular on the surface $D=c/H$, in spite of $g_{00} =0$ there. The ${\rm det} \left( g_{ik} \right) = - a^4 f^4(D/a)\sin^2 \theta $ is zero only for $D=0$ or $\theta =0,\pi$, where the coordinate singularities are caused by the use of spherical coordinates in 3-space. The surface $D=c/H$ is analogous to the static limit for a rotating black hole (see~\cite{GribPavlov2019h}). It is also called the apparent horizon in cosmology~\cite{Faraoni11}.
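The substitution (\ref{f3}) leading to the radial part of (\ref{f4}) can be verified by direct expansion. In the sketch below $dt$ and $dD$ are treated as formal symbols and $H$ stands for $\dot{a}/a$:

```python
import sympy as sp

c, a, H, D, dt, dD = sp.symbols('c a H D dt dD', positive=True)

# Radial part of (f1n): ds^2 = c^2 dt^2 - a^2 dchi^2, with (f3):
# dchi = (dD - H D dt)/a
dchi = (dD - H * D * dt) / a
ds2 = c**2 * dt**2 - a**2 * dchi**2

# Radial part of (f4):
ds2_f4 = (1 - (H * D / c)**2) * c**2 * dt**2 + 2 * H * D * dD * dt - dD**2

assert sp.expand(ds2 - ds2_f4) == 0
print("radial part of (f4) confirmed")
```

The cross term $2HD\, dD\, dt$ is exactly the nondiagonal metric component whose physical consequences are studied below.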
The energy-momentum tensor for the background matter in coordinates $t, D$ can be found from the formula \begin{equation} T_{ik} =(p+\varepsilon )u_i u_k - p g_{ik} \label{TikdG} \end{equation} (see~\cite{LL_II}) and the expressions for the four-velocities $u^i$ of the background matter (with $\chi={\rm const}$) \begin{equation} u^i = \left( 1, \frac{HD}{c}, 0, 0 \right), \ \ \ \ u_i = \left( 1, 0, 0, 0 \right). \label{uik} \end{equation} That is why one has \begin{equation} T_{0}^0 =\varepsilon , \ \ \ \ T_\alpha^\beta = -p \delta_\alpha^\beta \label{TikdD} \end{equation} and therefore $\varepsilon $ and $p$ are the energy density and pressure of the background matter in coordinates $t,D$ as well. The same result can be obtained by a coordinate transformation of the tensor~(\ref{Tikd}) from one coordinate system to the other. The difference in the form of the energy-momentum tensor in coordinates $t$, $D$ is manifested in the appearance of non-diagonal terms in its covariant components $T_{ik}$. Note that for small distances ($ D H /c \ll 1 $, $D/a \ll 1$) the metric~(\ref{f4}) becomes the metric of the comoving spherical coordinate system \begin{equation} d s^2 = c^2 d t^2 - d D^2 - D^2 d \Omega^2 . \label{f4D0} \end{equation} If the object is located far from the observer (so that $ D H /c \ll 1 $ no longer holds), one cannot go to the diagonal form~(\ref{f4D0}). \section{\normalsize Free movement in different coordinate systems} \label{secKerrVr} Let us write the equation for the geodesic lines \begin{equation} \frac{d^2 x^i}{d \lambda^2} = - \Gamma^{\, i}_{kl} \frac{d x^k}{d \lambda} \frac{d x^l}{d \lambda}, \label{GenGeod} \end{equation} where $\lambda$ is the affine parameter on the geodesic. The terms on the right-hand side are similar to the inertial forces of nonrelativistic classical mechanics. We first study radial motion in the coordinates $\eta, \chi$.
The equations of the radial geodesics in these coordinates, taking into account~(\ref{GGG}), are \begin{equation} \frac{d^2 \eta}{d \lambda^2} + \frac{\dot{a}}{c} \left[ \left( \frac{d \eta}{d \lambda} \right)^2 + \left( \frac{d \chi}{d \lambda} \right)^2 \right] = 0, \label{Geodeta0} \end{equation} \begin{equation} \frac{d^2 \chi}{d \lambda^2} + 2 \frac{\dot{a}}{c} \frac{d \eta}{d \lambda} \frac{d \chi}{d \lambda} = 0 . \label{Geodeta1} \end{equation} In order to give an interpretation of the dissipative terms in~(\ref{Geodeta1}), let us find the energy and momentum of the particle in the corresponding coordinate system. Note that the geodesic equations can be obtained from the Lagrangian~\cite{Chandrasekhar} \begin{equation} L = \frac{ g_{ik} }{2} \, \frac{ d x^i}{d \lambda} \frac{ d x^k}{d \lambda}. \label{Lgik} \end{equation} The generalized momenta are by definition \begin{equation} p_i \stackrel{\rm def}{=} \frac{\partial L}{\partial \dot{x}^i} = g_{ik} \frac{d x^k}{d \lambda } , \label{Lpdef} \end{equation} where now $ \dot{x}^i = d x^i/d \lambda $. The value of $p_i p^i $ is conserved due to the Euler-Lagrange equations \begin{equation} \frac{d }{d \lambda} \frac{\partial L}{\partial \dot{x}^n} - \frac{\partial L}{\partial x^n} = 0. \label{LEL} \end{equation} For time-like geodesics the affine parameter can be taken as $ \lambda = \tau / m$, so that $\tau$ is the proper time of the particle, and then \begin{equation} p_i p_k g^{ik} = m^2 c^2. \label{pikmc} \end{equation} If the metric components $g_{ik}$ do not depend on some coordinate $x^n$, then the corresponding canonical momentum (the corresponding covariant component) is conserved in motion along the geodesic due to the Euler-Lagrange equations: \begin{equation} \frac{\partial g_{ik}}{\partial x^n} = 0 \ \ \Rightarrow \ \ p_n = \frac{\partial L}{\partial \dot{x}^n} = g_{nl} \frac{d x^l}{d \lambda} = {\rm const}.
\label{okuel} \end{equation} Note that the contravariant components of the 4-momentum correspond to the 4-velocities multiplied by the particle mass, $p^n = m \, d x^n / d \tau $, and are generally not conserved even when the metric does not depend on the corresponding coordinates. The covariant radial component of the momentum of the particle in the homogeneous isotropic expanding space with the metrics~(\ref{f1n}) and~(\ref{etaf1}) is \begin{equation} p_\chi = - a^2(t) \frac{d \chi}{d \lambda} = - m a^2(t) \frac{d \chi}{d \tau} . \label{imp} \end{equation} The covariant radial component of the momentum for the metric~(\ref{f4}) is \begin{equation} p_D = - \frac{d D}{d \lambda} + H D \frac{d t}{d \lambda} = \frac{p_\chi}{a}. \label{impD} \end{equation} One can see from~(\ref{imp}), (\ref{impD}) that $ p_D $ depends on the velocity $d \chi / d \tau$ but not on the value of~$\chi$. The contravariant radial component for the metric~(\ref{f4}) is \begin{equation} p^D = m \frac{d D}{d \tau} = ma \frac{d \chi}{d \tau} + m D H\frac{d t}{d \tau}. \label{impDK} \end{equation} For $D \to 0$ the metric~(\ref{f4}) becomes the metric of the comoving coordinate system~(\ref{f4D0}), so let us call $ \tilde{p}^D = ma\, d \chi / d \tau = - p_\chi/ a$ the ``physical'' radial component of the momentum of the particle. For radial motion $p_\chi = {\rm const}$, because the metric components $g_{00}$, $g_{01}$, $g_{11}$ do not depend on $\chi$ (see~(\ref{etaf1}) and~(\ref{okuel})). So after the space expands by a factor of $k$, the ``physical'' momentum $ - p_\chi/ a$ becomes smaller by a factor of $k$. The energy defined by translations in the time~$\eta$ is equal to \begin{equation} E_\eta = p_\eta c = m c a^2 \frac{d \eta}{d \tau}. \label{EEt} \end{equation} For the metric~(\ref{f4}), due to~(\ref{Lpdef}), one obtains \begin{equation} E_{D} = p_t c = mc^2 \left( 1 - \left( \frac{ H D }{c } \right)^2 \right) \frac{d t}{d \tau} + m H D \frac{d D}{d \tau}.
\label{EEtf4} \end{equation} Let us call the ``physical'' energy $E$ the energy measured by the observer in the reference frame of the background matter in which at the given moment the particle is at the origin. Going to the limit $D \to 0$ in~(\ref{EEtf4}), we obtain \begin{equation} E = mc^2 \frac{d t}{d \tau} = \frac{E_\eta}{a}. \label{Ef} \end{equation} Note that $E$ is equal to $E_D$ only for $D \to 0$. Due to~(\ref{EEt}) and (\ref{Ef}), equation~(\ref{Geodeta1}) can be written as \begin{equation} \frac{d^2 \chi}{d \tau^2} + 2 H \frac{E}{mc^2} \frac{d \chi}{d \tau} = 0 . \label{Geod1et} \end{equation} So radial motion in the expanding Universe for the observer with coordinate~$ \chi $ is similar to motion in a viscous medium with viscosity proportional to the Hubble constant. Writing~(\ref{Geod1et}) in the form \begin{equation} m a \frac{d^2 \chi}{ d \tau^2} = - \frac{2 E}{mc^2} H \tilde{p}^D , \label{impd} \end{equation} one obtains that the ``inertial'' force for the coordinate $ \chi$ is equal, with the minus sign, to the doubled specific energy $E/mc^2$ of the body multiplied by the ratio of the ``physical'' momentum of the particle to the Hubble time $t_H= 1/H$. From equations~(\ref{Geodeta0}), (\ref{Geodeta1}) we find that \begin{equation} \frac{d^2 \chi}{ d t^2} + H \frac{d \chi}{ d t} \left( 2 - \left( \frac{a \, d \chi}{c \, d t} \right)^2 \right)= 0 . \label{GeoDt} \end{equation} For the case of non-relativistic motion relative to the background matter one has $|a \, d \chi / c \, d t | \ll 1$ and therefore \begin{equation} m a \frac{d^2 \chi}{ d t^2} \approx - 2 m a H \frac{d \chi}{ d t} = - 2 H \tilde{p}^D . \label{GeoDtn} \end{equation} So for the observer using not the proper time of the moving particle but the coordinate time $t$, the ``inertial'' force is, to leading order, again equal to the ratio of the ``physical'' momentum of the particle to the Hubble time $t_H$ multiplied by minus two.
One can find the time dependence of the radial coordinate in the case of radial motion of a point mass~$m$ with the fixed conserved radial momentum component~$-p_\chi$ at the point~$\chi_0$ at time~$t_0$ from equations~(\ref{pikmc}), (\ref{imp}): \begin{equation} \chi (t) =\chi_0 + \frac{p_\chi}{m} \int \limits_{t_0}^t \frac{d t}{\displaystyle a^2 \sqrt{ 1 + \left( \frac{p_\chi}{m c a} \right)^2 } }. \label{urd} \end{equation} Now let us consider radial motion in coordinates $(t, D)$. Changing the variables in~(\ref{Geodeta0}), (\ref{Geodeta1}), one obtains the radial geodesic equations in coordinates $(t, D)$: \begin{equation} \frac{d^2 t}{d \lambda^2} + \frac{H}{c^2} \left( \frac{d D}{d \lambda} - D H \frac{d t}{ d \lambda} \right)^2 = 0, \label{Geod0} \end{equation} \begin{equation} \frac{d^2 D}{d \lambda^2} + \frac{D H^2}{c^2} \left( \frac{d D}{d \lambda} - D H \frac{d t}{ d \lambda} \right)^2 -(\dot{H} + H^2) D \left(\frac{d t}{ d \lambda} \right)^2 = 0. \label{Geod1} \end{equation} These equations can be obtained directly from~(\ref{GenGeod}) using the Christoffel symbols of the metric~(\ref{f4}) \begin{eqnarray} &&\Gamma^{\,0}_{\, 00}= \frac{D^2 H^3}{c^3} \ , \ \ \ \ \Gamma^{\,0}_{\, 0 1}= - \frac{ D H^2 }{c^2} \ , \ \ \ \ \Gamma^{\,0}_{\, 1 1}= \frac{H}{c} , \nonumber \\ && \Gamma^{\,1}_{\, 0 0}= \frac{ D}{c^2} \left( \frac{D^2 H^4}{c^2} - \dot{H} - H^2 \right), \ \ \ \ \Gamma^{\,1}_{\, 01}= - \frac{D^2 H^3}{c^3} \ , \ \ \ \ \Gamma^{\,1}_{\, 1 1}= \frac{ D H^2 }{c^2} . \label{GGGnk} \end{eqnarray} Note that, due to~\cite{LL_II}, Sec.~87, the terms $m \Gamma^{\,i}_{\, kl} \frac{d x^k}{d \lambda} \frac{d x^l}{d \lambda}$ play the role of forces acting on a particle with mass~$m$ in a gravitational field. It is evident that such forces depend on the choice of the reference frame. In the absence of gravitation, due to the possibility of choosing an inertial reference frame, one can discriminate inertial forces from other forces. In the general case such a differentiation is impossible.
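The conservation of $p_\chi$ used in (\ref{urd}) can also be checked numerically. The sketch below integrates the radial geodesic equations in coordinates $(t,\chi)$, with the radial Christoffel symbols $\Gamma^{t}_{\,\chi\chi}=a\dot{a}/c^2$ and $\Gamma^{\chi}_{\,t\chi}=\dot{a}/a$ computed by us for this check, for an assumed toy background $a(t)=t^{2/3}$ (flat matter-dominated model, units $c=m=1$), and monitors $p_\chi$, the mass-shell condition (\ref{pikmc}), and the integrand of (\ref{urd}):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy background (assumption for this check only): flat matter domination.
a = lambda t: t**(2.0 / 3.0)
adot = lambda t: (2.0 / 3.0) * t**(-1.0 / 3.0)

def geodesic(tau, y):
    """Radial geodesic in (t, chi); y = (t, chi, w, u), w = dt/dtau, u = dchi/dtau."""
    t, chi, w, u = y
    return [w, u,
            -a(t) * adot(t) * u**2,          # dw/dtau = -Gamma^t_{chi chi} u^2
            -2.0 * adot(t) / a(t) * w * u]   # du/dtau = -2 Gamma^chi_{t chi} w u

t0, u0 = 1.0, 0.5
w0 = np.sqrt(1.0 + a(t0)**2 * u0**2)         # mass shell (pikmc) with c = m = 1
sol = solve_ivp(geodesic, (0.0, 5.0), [t0, 0.0, w0, u0], rtol=1e-10, atol=1e-12)
t, chi, w, u = sol.y[:, -1]

p_chi = a(t)**2 * u                          # (imp): conserved (up to sign) momentum
shell = w**2 - a(t)**2 * u**2                # (pikmc): should stay equal to 1
dchi_dt_urd = (0.5 / a(t)**2) / np.sqrt(1.0 + (0.5 / a(t))**2)  # integrand of (urd)
print(p_chi, shell, u / w - dchi_dt_urd)
```

Along the integrated trajectory $a^2\, d\chi/d\tau$ and the mass shell stay constant to integrator accuracy, and $d\chi/dt = u/w$ agrees with the integrand of (\ref{urd}).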
One of the solutions of the system of equations~(\ref{Geod0}), (\ref{Geod1}) is \begin{equation} D = \frac{a}{a_0} D_0, \ \ \ \frac{d t}{d \lambda } = {\rm const}. \label{Resh1} \end{equation} This solution describes the world line of a particle at rest in the reference frame~(\ref{f1n}) synchronous with the background matter: $\chi= \chi_0$, and the proper time is equal to the coordinate time~$t$ (see also~\cite{Faraoni21}). The equation of motion~(\ref{Geod1}), by using~(\ref{impD}), (\ref{Ef}), can be written as \begin{equation} \frac{d^2 D}{d \tau^2} = - D H^2 \left( \frac{p_D}{mc} \right)^2 + (\dot{H} + H^2) D \left(\frac{E}{ mc^2} \right)^2. \label{Geod1D} \end{equation} In the nonrelativistic case the first term, proportional to the square of the velocity, corresponds to the force of resistance for motion in a medium. The force of this resistance is proportional to the square of the Hubble constant and to the distance from the point of observation. In the nonrelativistic case $p_D \approx m v$, where $v$ is the velocity of the moving body relative to the background matter. So the first term is smaller than the second by a factor of $(v/c)^2$. The second term in the nonrelativistic case ($E \approx mc^2$) does not depend on the velocity and is proportional to the distance~$D$ from the coordinate origin, i.e., it corresponds to some cosmological constant. Let us consider this issue more closely. Let us write the interval for the case of a ``point'' mass~$M$ in the presence of the cosmological constant~\cite{Tolman} (the Kottler metric~\cite{Kottler18}) \begin{equation} ds^2 = \left( 1 - \frac{2 G M}{c^2 r} - \frac{\Lambda r^2}{3} \right) c^2 dt^2 - \frac{dr^2}{\displaystyle 1 - \frac{2 G M}{c^2 r} - \frac{\Lambda r^2}{3} } - r^2 ( d \theta^2 + \sin^2 \theta d \phi^2 ).
\label{Kottler} \end{equation} In the nonrelativistic case (see Sec.~99 in~\cite{LL_II}) this metric leads to the gravitational potential \begin{equation} \varphi = - \frac{GM}{r} - \frac{ \Lambda c^2 r^2}{6} \label{Kotphi} \end{equation} and to the acceleration of a test body \begin{equation} \frac{d^2 r}{d t^2} = - \frac{GM}{r^2} + \frac{ \Lambda c^2 r}{3} . \label{Kota} \end{equation} So in the nonrelativistic case the cosmological constant describes a force proportional to the distance to the observed body~\cite{McVittie}. From~(\ref{Geod0}), (\ref{Geod1}) one can obtain the equation of radial motion of a test body in homogeneous isotropic cosmology in coordinates $t, D$, written as \begin{equation} \frac{d^2 D}{d t^2} = \frac{H}{c^2} \left( \frac{d D}{d t} - D H \right)^3 + (\dot{H} + H^2) D . \label{Geod1Dt} \end{equation} Due to $dD / dt - D H = a \, d \chi / d t$, the first term describes, in analogy with the mechanics of continuous media, the dragging of the body by the moving viscous medium if the body is moving in the same direction as the medium, and deceleration if the body moves in the opposite direction. The arising inertial force is proportional to the cube of the relative velocity. The second term, as in the case of~(\ref{Geod1D}), is similar to the action of a cosmological constant. Comparing this term with the~$\Lambda$ term in equation~(\ref{Kota}), one obtains the ``effective'' cosmological constant $\Lambda_{\rm eff}$ \begin{equation} \Lambda_{\rm eff} = \frac{3}{c^2} (\dot{H} + H^2) . \label{LamDt} \end{equation} Due to~(\ref{ELhh4}) one obtains the relation of this effective cosmological constant to the constant present in Einstein's equations and to the energy density and pressure in the homogeneous isotropic cosmological model \begin{equation} \Lambda_{\rm eff} = \Lambda - \frac{ 4 \pi G } { c^4} \left( \varepsilon + 3 p \right).
\label{Lameff} \end{equation} One can use~(\ref{ELhh4}) to evaluate~(\ref{Lameff}) in the comoving system, because~$H$ is the same in our different frames. So inaccurate use of the coordinates $t,D$ in cosmology can lead to the measurement of $ \Lambda_{\rm eff} $ instead of the true cosmological constant~$ \Lambda $. Let us evaluate $ \Lambda - \Lambda_{\rm eff}$ for the real Universe, taking $p=0$ and the energy density of visible and dark matter $\approx 31$\% of the critical density $\varepsilon_c = 3 H^2 c^2/(8 \pi G)$~\cite{Planck}, so that the $\Lambda$ term (or dark energy) is approximately 69\%: \begin{equation} \Lambda - \Lambda_{\rm eff} = \frac{ 4 \pi G } { c^4} \left( \varepsilon + 3 p \right) \approx 0.31 \frac{3 H^2}{2 c^2} \approx 0.22 \Lambda. \label{DLamef} \end{equation} So the use of the $t,D$ coordinates together with the usual nonrelativistic expression~(\ref{Kota}) for the cosmological constant can lead to an error of the order of 20\%. If $ \Lambda =0 $, one can see from~(\ref{Lameff}) that in order to have the true sign ``+'' for $\Lambda_{\rm eff}$, as in the case of the observable cosmological constant~\cite{Schmidt98}--\cite{Perlmutter99}, it is necessary to suppose the dominance of exotic background matter (quintessence etc.) with $ \varepsilon + 3 p<0 $. \section{\normalsize Conclusion} \label{secConcl} Our analysis shows that for any measurement of cosmological distances it is necessary to use the dynamical equations in the corresponding coordinates in order to exclude effects similar to inertial forces. In this paper we show that these forces can lead not only to deceleration of galaxies but also to acceleration. However, this acceleration cannot mean that the inertial force plays the role of ``dark energy'', because it occurs only in the case of a positive cosmological constant, which is usually considered as ``dark energy''.
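For $p=0$ the estimate (\ref{DLamef}) reduces to the ratio $\Omega_m/(2\Omega_\Lambda)$ of the quoted density fractions ($\Omega$ is our notation; the assumed values 0.31 and 0.69 are those used in the text):

```python
# (Lambda - Lambda_eff)/Lambda for p = 0:
# Lambda - Lambda_eff = Omega_m * 3H^2/(2c^2), while Lambda = Omega_L * 3H^2/c^2,
# so the ratio Omega_m/(2*Omega_L) is independent of H and c.
Omega_m, Omega_L = 0.31, 0.69
ratio = Omega_m / (2.0 * Omega_L)
print(round(ratio, 2))   # the ~20% coordinate effect quoted in the text
```

Note that the ratio depends only on the density fractions, so the size of the effect tracks their evolution.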
Nevertheless, our analysis shows that, due to the effect of the coordinate system used, the measured value of the cosmological constant $\Lambda_{\rm eff}$ can differ from $ \Lambda $ --- the cosmological constant in Einstein's equations. Note that our above estimate of 20\% varies during the evolution of the Universe. Indeed, if the Universe were described by the de Sitter metric, then, taking $\varepsilon = p =0$, one easily obtains $ \Lambda_{\rm eff} = \Lambda $. The same is valid for the Kottler metric. In these cases there is no need for ``dark energy'' with $ \varepsilon + 3 p<0 $. For a Universe described by the metric of the Friedmann stage going over to the de Sitter stage, the effective slowly varying cosmological constant converges to the value of the invariant cosmological constant. The effective cosmological constant is not invariant, being calculated through the Christoffel symbols, which do not form a tensor. It is proportional to the square of the time-dependent Hubble constant and becomes truly constant at the de Sitter stage. But then one sees that the invariant constant, which has the same value, also depends on the square of the Hubble constant at some fixed moment of time characterizing the transition to the de Sitter stage. It cannot have an arbitrary value. One of the much disputed matters concerning the observed value of the cosmological constant is its proportionality to $H_0^2$, where $H_0$ is the modern value of the Hubble constant (the anthropic principle etc.). Strange as it may seem, a term of the same order appears, as we have shown in this paper, due to the use of the $t, D$ coordinate system. The message of our paper is not to cast doubt on the existence of a nonzero cosmological constant, but to call for a careful analysis of what we really observe with our instrumentation and of which coordinate system corresponds to it. \vspace{9pt} \noindent {\bf Acknowledgements} \ The work of Yu.V.P.
was supported by the Russian Government Program of Competitive Growth of Kazan Federal University. \small
\section{Introduction} Let $E$ be a topological space and $\left\{ X_{t} \right\}_{0 \leqslant t \leqslant 1}$ be an $E$-valued stochastic process with continuous paths such that $X_{0} = x_{0} \in E$ a.s. Denote by $W_{x_0}\left( E \right)$ the space of $E$-valued continuous functions on $[0, 1]$ starting at $x_0$; then we can view $X$ as a $W_{x_0}\left( E \right)$-valued random variable. Given a norm $\Vert \cdot \Vert$ on $W_{x_0}\left( E \right)$, the small ball problem for $X_{t}$ consists in finding the rate of explosion of \[ -\log \mathbb{P} \left( \Vert X \Vert < \varepsilon \right) \] as $\varepsilon \rightarrow 0$. More precisely, a process $X_{t}$ is said to satisfy a \emph{small deviation principle} with rates $\alpha$ and $\beta$ if there exists a constant $c>0$ such that \begin{equation}\label{e.general.s.d} \lim_{\varepsilon \rightarrow 0} -\varepsilon^\alpha \vert \log \varepsilon \vert^\beta \log \mathbb{P} \left( \Vert X \Vert < \varepsilon \right) =c. \end{equation} The values of $\alpha$, $\beta$ and $c$ depend on the process $X_{t}$ and on the chosen norm on $W_{x_0} \left( E \right)$. Small deviation principles have many applications, including metric entropy estimates and Chung's law of the iterated logarithm. We refer to the survey paper \cite{LiShao2001} for more details. In this paper we are mostly interested in the connection between small deviation principles and Chung's law of the iterated logarithm. We say that a process $X_{t}$ satisfies \emph{Chung's law of the iterated logarithm} with rate $a \in \R_+$ if there exists a constant $C$ such that \begin{equation}\label{e.general.chung.} \liminf_{t\rightarrow \infty} \left(\frac{\log \log t}{t} \right)^a \max_{0\leqslant s \leqslant t} \vert X_{s} \vert = C. \end{equation} When $X_{t}$ is a Brownian motion, it was proven in a famous 1948 paper by K.-L.~Chung that \eqref{e.general.chung.} holds with $a = \frac{1}{2}$ and $C= \frac{\pi}{\sqrt{8}}$.
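In the one-dimensional case both constants are explicit and easy to check numerically: the small-ball probability of a standard Brownian motion has the classical eigenfunction expansion $\mathbb{P}\left( \max_{0\leqslant s\leqslant 1}\vert b_{s}\vert <\varepsilon\right)=\frac{4}{\pi}\sum_{k\geqslant 0}\frac{(-1)^{k}}{2k+1}e^{-(2k+1)^{2}\pi^{2}/(8\varepsilon^{2})}$, so \eqref{e.general.s.d} holds with $\alpha=2$, $\beta=0$ and $c=\pi^{2}/8$, whose square root $\pi/\sqrt{8}$ is exactly Chung's constant above. A short numerical sketch (ours, not part of the paper; the series formula is classical):

```python
import math

def small_ball_prob(eps, terms=50):
    """P(max_{0<=s<=1} |b_s| < eps) for a standard 1D Brownian motion,
    via the classical eigenfunction series."""
    return (4.0 / math.pi) * sum(
        (-1) ** k / (2 * k + 1)
        * math.exp(-((2 * k + 1) ** 2) * math.pi ** 2 / (8.0 * eps ** 2))
        for k in range(terms)
    )

for eps in [0.5, 0.2, 0.1, 0.05]:
    print(eps, -eps ** 2 * math.log(small_ball_prob(eps)))
# the rescaled log-probability approaches pi^2 / 8 ~ 1.2337 as eps -> 0
```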
If $W_{x_0} \left( E \right)$ is a Banach space, and the law $\mu$ of $X_{t}$ is a Gaussian measure on $W_{x_0} \left( E \right)$, then one can use a scaling property of the process $X_{t}$ to prove Chung's law of the iterated logarithm from a small deviation principle. Small deviation principles for Brownian motion and related processes have been studied extensively; we mention only the results most relevant to ours. In \cite{BaldiRoynette1992b} the authors considered the case of a one-dimensional Brownian motion and H\"{o}lder norms, in \cite{KuelbsLi1993a} a Brownian sheet in H\"{o}lder norms was considered, \cite{KhoshnevisanShi1998} studied the integrated Brownian motion in the uniform norm, and \cite{ChenLi2003} the $m$-fold integrated Brownian motion in both the uniform and $L^2$-norms. In \cite{Remillard1994} a small deviation principle and Chung's law of the iterated logarithm are proven for some stochastic integrals, in particular for L\'{e}vy's stochastic area. In the current paper we consider a hypoelliptic Brownian motion $g_{t}$ on the Heisenberg group $\mathbb{H}$ starting at the identity $e$ in $ \mathbb{H}$. The group $\mathbb{H}$ is the simplest example of a sub-Riemannian manifold, and it comes with a natural left-invariant distance, the Carnot-Carath\'{e}odory distance $d_{cc}$. We then consider the uniform norm \[ \Vert g \Vert_{W_0 \left( \mathbb{H} \right) }:= \max_{0 \leqslant t \leqslant 1} \vert g_{t} \vert \] on the path space $W_0 \left( \mathbb{H} \right) $ of $\mathbb{H}$-valued continuous curves starting at the identity, where $ \vert \cdot \vert$ is a norm on $\mathbb{H}$ equivalent to the Carnot-Carath\'{e}odory distance $d_{cc}$. We refer for details to Section \ref{sec.2}. Our main results include Theorem \ref{t.chunglil}, where we prove Chung's law of the iterated logarithm with $a= \frac{1}{2}$ for a hypoelliptic Brownian motion $g_{t}$.
As a consequence of Theorem \ref{t.chunglil}, we prove Theorem \ref{t.smalldeviations}, which represents a small deviation principle for the hypoelliptic diffusion $g_{t}$ with respect to the norm $\Vert \cdot \Vert_{W_0 \left( \mathbb{H} \right)}$. More precisely, we prove that there exists a finite positive constant $c$ such that \eqref{e.general.s.d} holds with $\alpha = 2$ and $\beta=0$, and we provide lower and upper bounds on $c$. Note that finding the constant $c$ explicitly is difficult even in more studied cases, see for example \cite[Remark 2.2]{KhoshnevisanShi1998}. Let us explain now how our setting differs from known results. First observe that the hypoelliptic Brownian motion $g_{t}$ is an $\mathbb{R}^{3}$-valued stochastic process, but it is not a Gaussian process. Therefore we cannot rely on the properties of Gaussian measures on Banach spaces, such as log-concavity and Anderson's inequality, which are common tools in the subject. We refer to \cite{AndersonTW1955, Borell1976a} for more details about Gaussian measures on Banach spaces. These properties have been used to show the existence of a small deviation principle for some processes such as an integrated Brownian motion in \cite{KhoshnevisanShi1998}, and a Brownian motion with values in a finite-dimensional Banach space in \cite{DeAcosta1983a}. Generally, if a small deviation principle is known, then it can be used together with scaling properties of the process to show Chung's law of the iterated logarithm. For example, in \cite{Remillard1994} a small deviation principle for L\'{e}vy's stochastic area $A_{t}$ is first proven and then, using that $A_{\varepsilon t} \stackrel{(d)}{=} \varepsilon A_{t}$ for any $t$ and $\varepsilon >0$, Chung's law of the iterated logarithm for the process $A_{t}$ follows. For related work we also refer to \cite{DobbsMelcher2014}. It is also possible to prove the converse.
In \cite{KhoshnevisanShi1998} the authors first prove Chung's law of the iterated logarithm for the integrated one-dimensional Brownian motion $\int_0^t b_{s}ds$. Then, using the scaling property $\int_0^{\varepsilon t} b_{s} ds \stackrel{(d)}{=} \varepsilon^{\frac{3}{2}} \int_0^t b_{s}ds$, a small deviation principle for $\int_0^t b_{s}ds$ is shown. Most relevant to our work is \cite{KhoshnevisanShi1998}, where the existence of the limit \eqref{e.general.s.d} for $X_{t} := \int_0^t b_{s}ds$ follows from Anderson's inequality for Gaussian measures. Chung's law of the iterated logarithm is then used to prove that the limit is finite. This method cannot be used directly in our setting since the hypoelliptic Brownian motion $g_{t}$ is not a Gaussian process, and therefore we cannot rely on Anderson's inequality. In our case we first prove in Proposition \ref{p.smalldeviations.estimates} that if the limit \eqref{e.general.s.d} exists then it is strictly positive and finite. We then prove Chung's law of the iterated logarithm for $g_{t}$ and use it in place of Anderson's inequality to show the existence of the limit \eqref{e.general.s.d}. As a by-product we obtain bounds on this limit in terms of the lowest Dirichlet eigenvalues, as given in Theorem \ref{t.ChungBounds}. The mathematical literature on the subject is vast, and we mention only the most relevant in terms of the techniques and results. In particular, a similar state space is considered in \cite{Neuenschwander2014a, LiuJ2013}, though the results are different. The paper is organized as follows. In Section \ref{sec.2} we describe the Heisenberg group $\mathbb{H}$ and the corresponding sub-Laplacian and hypoelliptic Brownian motion. In Section \ref{sec.3} we state the main results of this paper, namely Chung's law of the iterated logarithm in Theorem \ref{t.chunglil} and a small deviation principle in Theorem \ref{t.smalldeviations}.
Section \ref{sec.4} contains estimates that are needed to prove Theorem \ref{t.chunglil} and Theorem \ref{t.smalldeviations}. We conclude in Section \ref{sec.5} with the proofs of the main results. \section{Hypoelliptic Brownian motion on the Heisenberg group}\label{sec.2} \subsection{Heisenberg group as Lie group} The Heisenberg group $\mathbb{H}$ as a set is $\R^3\cong \mathbb{R}^{2} \times \mathbb{R}$ with the group multiplication given by \begin{align*} & \left( \mathbf{v}_{1}, z_{1} \right) \cdot \left( \mathbf{v}_{2}, z_{2} \right) := \left( x_{1}+x_{2}, y_{1}+y_{2}, z_{1}+z_{2} + \frac{1}{2}\omega\left( \mathbf{v}_{1}, \mathbf{v}_{2} \right)\right), \\ & \text{ where } \mathbf{v}_{1}=\left( x_{1}, y_{1} \right), \mathbf{v}_{2}=\left( x_{2}, y_{2} \right) \in \mathbb{R}^{2}, \\ & \omega: \mathbb{R}^{2} \times \mathbb{R}^{2} \longrightarrow \mathbb{R}, \\ & \omega\left( \mathbf{v}_{1}, \mathbf{v}_{2} \right):= x_{1}y_{2}-x_{2} y_{1} \end{align*} is the standard symplectic form on $\mathbb{R}^{2}$. The identity in $\mathbb{H}$ is $e=(0, 0, 0)$ and the inverse is given by $\left( \mathbf{v}, z \right)^{-1}= (-\mathbf{v},-z)$. The Lie algebra of $\mathbb{H}$ can be identified with the space $\R^3\cong \mathbb{R}^{2} \times \mathbb{R}$ with the Lie bracket defined by \[ \left[ \left( \mathbf{a}_{1}, c_{1} \right), \left( \mathbf{a}_{2}, c_{2} \right) \right] = \left(0,\omega\left( \mathbf{a}_{1}, \mathbf{a}_{2} \right) \right). \] The set $\R^3\cong \mathbb{R}^{2} \times \mathbb{R}$ with this Lie algebra structure will be denoted by $\mathfrak{h} $. Let us now recall some basic notation for Lie groups. Suppose $G$ is a Lie group, then the left and right multiplication by an element $k\in G$ are denoted by \begin{align*} L_{k}: G \longrightarrow G, & & g \longmapsto k^{-1}g, \\ R_{k}: G \longrightarrow G, & & g \longmapsto gk.
\end{align*} Recall that the tangent space $T_{e}G$ can be identified with the Lie algebra $\mathfrak{g}$ of left-invariant vector fields on $G$, that is, vector fields $X$ on $G$ such that $dL_{k} \circ X=X \circ L_{k}$, where $dL_{k}$ is the differential of $L_k$. More precisely, if $A$ is a vector in $T_{e}G$, then we denote by $\tilde{A}\in \mathfrak{g}$ the (unique) left-invariant vector field such that $\tilde{A} (e) = A$. A left-invariant vector field is determined by its value at the identity, namely, $\tilde{A}\left( k \right)=dL_{k} \circ\tilde{A}\left( e \right)$. For the Heisenberg group the differential of left and right multiplication can be described explicitly as follows. \begin{proposition}\label{p.Differentials} Let $k= (k_1, k_2, k_3) = (\mathbf{k}, k_3 )$ and $g= (g_1, g_2, g_3) = (\mathbf{g}, g_3 )$ be two elements in $\mathbb{H}$. Then, for every $v= \left( v_1, v_2, v_3 \right) = (\mathbf{v}, v_3 )$ in $T_g\mathbb{H}$, the differentials (pushforward) of the left and right multiplication are given by \begin{align}\label{LeftRightMultDiff} & dL_{k}=L_{k \ast}: T_g\mathbb{H} \longrightarrow T_{k^{-1}g}\mathbb{H}, \notag \\ & dR_{k}=R_{k \ast}: T_g\mathbb{H} \longrightarrow T_{gk}\mathbb{H}, \notag \\ & dL_{k} (v) = \left( v_1, v_2, v_3 + \frac{1}{2} \omega( \mathbf{v}, \mathbf{k}) \right), \notag \\ & dR_{k} (v) = \left( v_1, v_2, v_3 + \frac{1}{2} \omega( \mathbf{v}, \mathbf{k}) \right). \end{align} \end{proposition} \subsection{Heisenberg group as a sub-Riemannian manifold} The Heisenberg group $\mathbb{H}$ is the simplest non-trivial example of a sub-Riemannian manifold. We define $X$, $Y$ and $Z$ as the unique left-invariant vector fields satisfying $X_e = \partial_x$, $Y_e = \partial_y$ and $Z_e = \partial_z$ which are given by \begin{align*} & X = \partial_x - \frac{1}{2}y\partial_z, \\ & Y = \partial_y + \frac{1}{2}x\partial_z, \\ & Z = \partial_z.
\end{align*} Note that the only non-zero Lie bracket for these left-invariant vector fields is $[X, Y]=Z$, so the vector fields $\left\{ X, Y \right\}$ satisfy H\"{o}rmander's condition. We define the \emph{horizontal distribution} as $\mathcal{H}:= \Span \left\{ X, Y \right\}$ fiberwise, thus making $\mathcal{H}$ a sub-bundle in the tangent bundle $T\mathbb{H}$. To finish the description of the Heisenberg group as a sub-Riemannian manifold we need to equip the horizontal distribution $\mathcal{H}$ with an inner product. For any $p \in \mathbb{H}$ we define the inner product $\langle \cdot , \cdot \rangle_{\mathcal{H}_{p}}$ on $\mathcal{H}_{p}$ so that $\left\{ X \left( p \right), Y \left( p \right) \right\}$ is an orthonormal (horizontal) frame at any $p \in \mathbb{H}$. Vectors in $\mathcal{H}_{p}$ will be called \emph{horizontal}, and the corresponding norm is denoted by $\Vert \cdot \Vert_{\mathcal{H}_{p}}$. In addition, H\"{o}rmander's condition ensures that a natural sub-Laplacian on the Heisenberg group \begin{equation}\label{e.2.1} \Delta_{\mathcal{H}} = X ^2 + Y ^2 \end{equation} is a hypoelliptic operator by \cite{Hormander1967a}. We recall now another notion in sub-Riemannian geometry, namely, that of \emph{horizontal curves}. Suppose $\gamma(t) = \left( x\left( t \right), y\left( t \right), z\left( t \right) \right)=\left( \mathbf{x}\left( t \right), z\left( t \right) \right)$ is an absolutely continuous curve with values in $\mathbb{H}$, and the corresponding tangent vector $\gamma^\prime(t)$ in $T_{\gamma(t)}\mathbb{H}$ is \[ \gamma^\prime (t) = \left( x^\prime\left( t \right), y^\prime\left( t \right), z^\prime\left( t \right) \right)=\left( \mathbf{x}^\prime\left( t \right), z^\prime\left( t \right) \right). \] We denote by $c_{g}$ the \emph{Maurer-Cartan form} on $\mathbb{H}$, i.e. the $\mathfrak{h}$-valued $1$-form on $\mathbb{H}$ defined by $c_{g}\left( v \right)=dL_{g} v$, $v \in T_{g}\mathbb{H}$.
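The group operations above and the closed form of Proposition \ref{p.Differentials} are simple enough to check mechanically. The sketch below (ours, purely illustrative) implements the group law with the convention $L_{k}(g)=k^{-1}g$ used here, and compares a finite-difference derivative of $L_{k}$ with the stated formula $dL_{k}(v)=\left( v_1, v_2, v_3+\frac{1}{2}\omega(\mathbf{v}, \mathbf{k})\right)$.

```python
# Heisenberg group law on R^2 x R and a finite-difference check of dL_k
# (Proposition p.Differentials, with the convention L_k(g) = k^{-1} g).

def omega(v, w):
    """Standard symplectic form on R^2."""
    return v[0] * w[1] - v[1] * w[0]

def mul(g, h):
    """Group multiplication (v1, z1) . (v2, z2)."""
    return (g[0] + h[0], g[1] + h[1],
            g[2] + h[2] + 0.5 * omega(g[:2], h[:2]))

def inv(g):
    return (-g[0], -g[1], -g[2])

def L(k, g):
    """Left translation L_k(g) = k^{-1} g."""
    return mul(inv(k), g)

k, g, v = (0.3, -1.2, 0.7), (1.0, 2.0, -0.5), (0.4, 0.1, 0.9)
eps = 1e-6

# numerical directional derivative of L_k at g in the direction v
g_shift = tuple(gi + eps * vi for gi, vi in zip(g, v))
num = tuple((a - b) / eps for a, b in zip(L(k, g_shift), L(k, g)))

# closed form dL_k(v) = (v1, v2, v3 + (1/2) omega(v, k))
closed = (v[0], v[1], v[2] + 0.5 * omega(v[:2], k[:2]))

print(num)
print(closed)  # the two agree up to floating-point error
```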
Note that the pushforward of a vector in $T_{g}\mathbb{H}$ along the left translation can be found explicitly. Namely for $\gamma(t) = \left( \mathbf{x}\left( t \right), z\left( t \right) \right)$ the Maurer-Cartan form is \begin{align}\label{e.MaurerCartan} & c_{\gamma}\left( t \right)=c\left( t \right)=dL_{\gamma\left( t \right)}\left( \gamma^{\prime}(t)\right) \\ & =\left( \mathbf{x}^\prime\left( t\right), z^{\prime}\left( t \right) -\frac{1}{2}\omega( \mathbf{x}\left( t \right), \mathbf{x}^{\prime}\left( t \right) )\right), \notag \end{align} where we used Proposition \ref{p.Differentials}. \begin{definition}\label{Dfn.2.1} An absolutely continuous curve $t \longmapsto \gamma (t) \in \mathbb{H}, t \in [0,1]$ is said to be \emph{horizontal} if $\gamma^{\prime}(t)\in\mathcal{H}_{\gamma(t)}$ for a.e. $t$, that is, the tangent vector to $\gamma\left(t\right)$ is horizontal for a.e. $t$. Equivalently we can say that $\gamma$ is horizontal if $c_{\gamma}\left( t \right) \in \mathcal{H}_{e}$ for a.e. $t$. \end{definition} Equation \eqref{e.MaurerCartan} can be used to characterize horizontal curves in terms of the components as follows. An absolutely continuous curve $\gamma$ is horizontal if and only if \begin{equation}\label{e.horizontal} z^{\prime}(t) -\frac{1}{2}\omega( \mathbf{x}\left( t \right), \mathbf{x}^{\prime}\left( t \right) )=0 \text{ for a.e. } t. \end{equation} The \emph{horizontal length} is defined as \[ L_{\mathcal{H}}\left( \gamma \right):=\int_{0}^{1} \vert c_\gamma \left( s \right) \vert_{\mathcal{H}_e} ds, \] where we set $L_{\mathcal{H}}\left( \gamma \right)=\infty$ if $\gamma$ is not horizontal. The Heisenberg group as a sub-Riemannian manifold comes with a natural left-invariant distance.
\begin{definition}\label{Dfn.2.2} For any $g_{1}, g_{2} \in \mathbb{H}$ the \emph{Carnot-Carath\'{e}odory distance} is defined as \begin{align*} & d_{cc} (g_{1}, g_{2}):= \inf \left\{ L_{\mathcal{H}}\left( \gamma \right), \gamma : [0,1] \longrightarrow \mathbb{H}, \gamma(0)=g_{1}, \gamma(1)=g_{2} \right\}. \end{align*} \end{definition} Another consequence of H\"{o}rmander's condition for the left-invariant vector fields $X$, $Y$ and $Z$ is that by the Chow–Rashevskii theorem there exists a horizontal curve connecting any two points in $\mathbb{H}$, and therefore the Carnot-Carath\'{e}odory distance is finite on $\mathbb{H}$. In addition to the Carnot-Carath\'{e}odory distance on the Heisenberg group, we will use the following homogeneous distance \begin{equation}\label{hom.norm} \rho( g_{1}, g_{2}) := \left( \Vert \mathbf{x}_{1}- \mathbf{x}_{2}\Vert^{4}_{\mathbb{R}^{2}}+ \vert z_{1}-z_{2} + \omega(\mathbf{x}_{1}, \mathbf{x}_{2}) \vert^2 \right)^{\frac{1}{4}}, \end{equation} which is equivalent to the Carnot-Carath\'{e}odory distance, that is, there exist two positive constants $c$ and $C$ such that \begin{equation}\label{e.DistEquivalence} c \rho( g_{1}, g_{2}) \leqslant d_{cc}( g_{1}, g_{2} ) \leqslant C \rho( g_{1}, g_{2}) \end{equation} for all $g_{1}, g_{2} \in \mathbb{H}$. We denote by $\vert \cdot \vert$ the norm on $\mathbb{H}$ induced by $\rho$, that is, $\vert g \vert = \rho ( g, e )$ for all $g\in \mathbb{H}$. In particular, by the left-invariance of $\rho$ we have that for any $g_{1}, g_{2} \in \mathbb{H}$ \begin{equation}\label{e.triangular.ineq} \vert g_{2}^{-1} g_{1} \vert = \rho\left( g_{2}^{-1} g_{1}, e \right) = \rho \left( g_{1}, g_{2} \right) \leqslant \rho \left( g_{1},e \right) + \rho \left( g_{2},e \right) = \vert g_{1} \vert +\vert g_{2} \vert. \end{equation} This is discussed in a more general setting in \cite[Proposition 5.1.4]{BonfiglioliLanconelliUguzzoniBook}. Finally, we need to describe a hypoelliptic Brownian motion with values in $\mathbb{H}$.
This is a stochastic process whose generator is the sub-Laplacian $\frac{1}{2}\Delta_{\mathcal{H}}$ defined by Equation \eqref{e.2.1}. \begin{notation}\rm\label{ProbabilisticSetting} Throughout the paper we use the following notation. Let $\left( \Omega, \mathcal{F}, \mathcal{F}_{t}, \mathbb{P}\right)$ be a filtered probability space. We denote the expectation under $\mathbb{P}$ by $\E$. By a standard real-valued Brownian motion $\left\{ B_{t} \right\}_{t \geqslant 0}$ we mean a continuous adapted $\mathbb{R}$-valued stochastic process on $\left( \Omega, \mathcal{F}, \mathcal{F}_{t}, \mathbb{P}\right)$ such that for all $0 \leqslant s \leqslant t$ the increment $B_{t}-B_{s}$ is independent of $\mathcal{F}_{s}$ and has a normal distribution with mean $0$ and variance $t-s$. \end{notation} \begin{definition}\label{d.HeisenbergBM} Let $W_{t}= \left( W_1(t), W_2(t), 0 \right)$ be an $\mathfrak{h}$-valued stochastic process, where $\bm{W}_{t}:=\left( W_1(t), W_2(t)\right)$ is a standard two-dimensional Brownian motion. A hypoelliptic Brownian motion $g_{t} = \left( g_1(t), g_2(t), g_3(t)\right)$ on $\mathbb{H}$ is the continuous $\mathbb{H}$-valued process defined by \begin{equation}\label{e.HypoBM} g_{t}:=\left( \bm{W}_{t}, A_{t}\right), \end{equation} where $A_{t} := \frac{1}{2} \int_0^t \omega\left( \bm{W}_{s}, d\bm{W}_{s}\right)$ is L\'{e}vy's stochastic area. \end{definition} Note that we used the It\^{o} integral in this definition rather than the Stratonovich integral. However, these two integrals are equal in our setting since the symplectic form $\omega$ is skew-symmetric, and therefore L\'{e}vy's stochastic area functional is the same for both integrals, as was observed in \cite[Remark 4.3]{DriverGordina2008}.
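Definition \ref{d.HeisenbergBM} is straightforward to simulate, since the stochastic area is an It\^{o} integral against the driving two-dimensional Brownian motion. The sketch below (ours, not part of the paper) discretizes $g_{t}$ on $[0,1]$ and checks two classical facts: $\E A_{1}^{2} = \frac{1}{4}$, which follows from L\'{e}vy's formula $\E e^{i\lambda A_{1}} = 1/\cosh(\lambda/2)$, and the distributional scaling $\vert g_{t/4}\vert \stackrel{(d)}{=} \frac{1}{2}\vert g_{t}\vert$ of the homogeneous norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 400
dt = 1.0 / n_steps

# Increments of the driving 2D Brownian motion W = (W1, W2) on [0, 1].
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
W = np.cumsum(dW, axis=1)
W_prev = np.concatenate([np.zeros((n_paths, 1, 2)), W[:, :-1, :]], axis=1)

# Ito sums for Levy's area A_t = (1/2) int_0^t (W1 dW2 - W2 dW1).
A = 0.5 * np.cumsum(
    W_prev[:, :, 0] * dW[:, :, 1] - W_prev[:, :, 1] * dW[:, :, 0], axis=1
)

def hom_norm(i):
    """Homogeneous norm |g_t| = (|W_t|^4 + A_t^2)^(1/4) at time step i."""
    return ((W[:, i, 0] ** 2 + W[:, i, 1] ** 2) ** 2 + A[:, i] ** 2) ** 0.25

print(A[:, -1].var())                     # ~ 0.25 = Var(A_1)
print(hom_norm(n_steps // 4 - 1).mean())  # ~ (1/2) E|g_1| by scaling
print(0.5 * hom_norm(n_steps - 1).mean())
```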
The process $g_{t}=\left( x_{t}, y_{t}, z_{t} \right)$, $g_{0}=\left(0, 0, 0\right)=e \in \mathbb{H}$, can also be written as the solution of a stochastic differential equation for a Lie group-valued Brownian motion \begin{align}\label{e.SDE} & L_{g_{t} \ast}\left( dg_{t} \right)=g_{t}^{-1}dg_{t} =dW_{t}, \\ & g_{0}=e. \notag \end{align} Equation \eqref{e.HypoBM} gives an explicit solution to this stochastic differential equation for the Heisenberg group. \section{Main results}\label{sec.3} \begin{notation} Let $X_{t}$ be a stochastic process with values in a metric space $\left( \mathfrak{X}, d \right)$ with $X_{0}=x \in \mathfrak{X}$, then $X^{\ast}_{t}$ denotes the process defined by \begin{align*} X^{\ast}_{t}:= \max_{0\leqslant s \leqslant t} d\left( X_{s}, X_{0} \right). \end{align*} \end{notation} For $\mathfrak{X}=\mathbb{H}$ we use the homogeneous distance $\rho$ with $X_{0}=e$, and on $\mathfrak{X}=\R^n$ we consider the standard Euclidean norm. Before formulating Chung's law of the iterated logarithm for the hypoelliptic Brownian motion $g_{t}$ we introduce the notation \begin{align*} \phi\left( t \right):= \sqrt{ \frac{\log \log t}{t}}. \end{align*} \begin{theorem}[Chung's law of the iterated logarithm]\label{t.chunglil} Let $g_{t}$ be the hypoelliptic Brownian motion on the Heisenberg group $\mathbb{H}$ defined by \eqref{e.HypoBM}. Then there exists a constant $c \in ( 0, \infty )$ such that \begin{equation}\label{e.chunglil} \liminf_{t \rightarrow \infty} \phi\left( t \right) g_{t}^{\ast}=c \hskip0.1in \text{ a.s. } \end{equation} \end{theorem} \begin{remark}\label{r.scaling} Note that the hypoelliptic Brownian motion $g_{t}$ has the same scaling property with respect to the norm induced by the homogeneous distance $\rho$ as a standard Brownian motion in a Euclidean space.
Indeed, \begin{align*} & \vert g_{\varepsilon t} \vert := \rho \left( g_{t \varepsilon}, e \right)= \sqrt[4]{ \vert B_{t \varepsilon } \vert^4 + A_{t \varepsilon}^2 } \\ & \stackrel{(d)}{=} \sqrt[4]{ \vert \sqrt{\varepsilon} B_{t } \vert^4 + \left( \varepsilon A_{t } \right) ^2 } = \sqrt{\varepsilon} \rho \left( g_{t}, e \right) = \sqrt{\varepsilon} \vert g_{t} \vert. \end{align*} Therefore it is not surprising that the process $g_{t}$ and the standard Brownian motion have the same rate $\phi (t)$ in Chung's law of the iterated logarithm. \end{remark} As a consequence of Theorem \ref{t.chunglil} we can prove a small deviation principle for $g_{t}$. \begin{theorem}[Small deviation principle]\label{t.smalldeviations} The limit \begin{equation}\label{e.smalldeviations} \lim_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right)=c^{2} \end{equation} exists, with the constant $c$ being defined by \eqref{e.chunglil}. \end{theorem} \begin{remark} Using the scaling properties of $g_{t}$ we can formulate a small deviation principle over an interval $[0,T]$. Let $T>0$ be fixed, then by Theorem \ref{t.smalldeviations} and Remark \ref{r.scaling} we have that \begin{align*} & \lim_{\varepsilon \rightarrow 0} -\varepsilon^2 \log \mathbb{P} \left( g^{\ast}_T < \varepsilon \right) = c^2 T. \end{align*} Indeed, \begin{align*} & \lim_{\varepsilon \rightarrow 0} -\frac{\varepsilon^2}{T} \log \mathbb{P} \left( g^{\ast}_T < \varepsilon \right) = \lim_{\varepsilon \rightarrow 0} -\frac{\varepsilon^2}{T} \log \mathbb{P} \left( g^{\ast}_1 < \frac{\varepsilon}{\sqrt{T}}\right)=c^2. \end{align*} \end{remark} \begin{remark} One might expect that the value of the limit \eqref{e.smalldeviations} is given by the first Dirichlet eigenvalue for the hypoelliptic generator of the Brownian motion $g_{t}$ in the unit ball with respect to the homogeneous norm. Equivalently, this constant can be expected to be described by the first exit time from this ball.
This is a delicate issue since the infinitesimal generator is a hypoelliptic operator and the ball has a non-smooth boundary. We will address this problem in a forthcoming paper. Note that if the constant $c$ is indeed the first Dirichlet eigenvalue for the hypoelliptic operator in this set, then Theorem~\ref{t.ChungBounds} gives bounds for its value. \end{remark} \section{Preliminary estimates}\label{sec.4} We collect here several preliminary estimates that will be used throughout the paper. \begin{proposition}\label{p.general} Let $Y_{t}$ be a positive real-valued process and assume there exist two finite positive constants $0<a \leqslant b < \infty$ such that \begin{align} & \liminf_{t \rightarrow \infty } - \frac{1}{t} \log \mathbb{P} \left( Y_{t} < 1 \right) \geqslant a, \label{e.4.1} \\ & \limsup_{t \rightarrow \infty } - \frac{1}{t} \log \mathbb{P} \left( Y_{t} < 1 \right) \leqslant b. \label{e.4.2} \end{align} Let $c$, $x$ and $y$ be real numbers such that $c>1$ and $0< x < a \leqslant b <y$. Then there exists an $n_0\in \mathbb{N}$ such that \begin{equation}\label{e.generalestimate.below} \sum_{n=n_0}^\infty \mathbb{P} \left( Y_{s_n} <1 \right) < \infty, \end{equation} where $ s_n := \frac{1}{x} \log \log c^n$, and \begin{equation}\label{e.generalestimate.above} \sum_{n=n_0}^\infty \mathbb{P} \left( Y_{v_n} <1 \right) = \infty, \end{equation} where $v_n$ is any positive sequence such that $v_n \rightarrow \infty $ as $n \rightarrow \infty$, and $v_n \leqslant \frac{1}{y} \log \log n^n $ for all $n \geqslant n_0$. \end{proposition} \begin{proof} Let us first show \eqref{e.generalestimate.below}. By \eqref{e.4.1} we have \begin{align*} & \mathbb{P} \left( Y_{t} <1 \right) \leqslant e^{-at} \end{align*} for all large enough $t$.
Therefore there exists an $n_1\in \mathbb{N}$ such that \begin{align*} & \sum_{n=n_1}^\infty \mathbb{P} \left( Y_{s_n} <1 \right) \leqslant \sum_{n=n_1}^\infty e^{-a s_n} \\ & = \sum_{n=n_1}^\infty e^{-\frac{a}{x} \log \log c^n} = \sum_{n=n_1}^\infty \left( \frac{1}{n \log c} \right)^{\frac{a}{x}} \end{align*} which is a convergent series since $x<a$. Let us now show \eqref{e.generalestimate.above}. By \eqref{e.4.2} we have that \begin{align*} & \mathbb{P} \left( Y_{t} <1 \right) \geqslant e^{-bt} \end{align*} for all large enough $t$, and hence there exists an $n_2 \in \mathbb{N}$ such that \begin{align*} &\sum_{n=n_2}^\infty \mathbb{P} \left( Y_{v_n} <1 \right) \geqslant \sum_{n=n_2}^\infty e^{-b v_n} \\ & \geqslant \sum_{n=n_2}^\infty e^{-\frac{b}{y} \log \log n^n} =\sum_{n=n_2}^\infty \left( \frac{1}{n \log n} \right)^{\frac{b}{y}} \end{align*} which is divergent since $b<y$. The proof is then completed by taking $n_0:= \max \left( n_1, n_2 \right)$. \end{proof} We first prove a weaker version of Theorem \ref{t.smalldeviations}, namely that if the limit in \eqref{e.smalldeviations} exists, then it is finite and strictly positive. The estimates in Proposition \ref{p.smalldeviations.estimates} will be used in the proof of Chung's law of the iterated logarithm. First we introduce the following notation. \begin{notation}[Dirichlet eigenvalues in $\mathbb{R}^{n}$]\label{n.eigenvalues} We denote by $\lambda^{(n)}_{1}$ the lowest Dirichlet eigenvalue of $-\frac{1}{2} \Delta_{\R^n}$ on the unit ball in $\mathbb{R}^{n}$, where $0< \lambda_{1}^{\left( n\right)} \leqslant \lambda_{2}^{\left( n\right)} \leqslant ...$ are Dirichlet eigenvalues for the Laplacian $-\frac{1}{2} \Delta_{\R^n}$ in the unit ball $D:= \left\{ x\in \R^n, \vert x \vert <1 \right\}$. \end{notation} Recall that the lowest Dirichlet eigenvalues appear in a small deviation principle for a Brownian motion in $\mathbb{R}^{n}$, see e.g. \cite[Lemma 8.1]{IkedaWatanabe1989}.
Namely, suppose $b_{t}$ is a standard Brownian motion in $\mathbb{R}^{n}$, then \begin{equation}\label{e.s.d.brownian.mot} \lim_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( b^{\ast}_{1} < \varepsilon \right) = \lambda_1^{(n)}, \end{equation} where $\lambda_1^{(n)}$ is as in Notation \ref{n.eigenvalues}, and \[ b^{\ast}_1 := \max_{0\leqslant t \leqslant 1} \vert b_{t}\vert_{\R^n}. \] \begin{proposition}\label{p.smalldeviations.estimates} Let $g_{t}$ be the hypoelliptic Brownian motion on the Heisenberg group $\mathbb{H}$. Set \begin{align*} & c_{-}:= \liminf_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right), \\ & c_{+}:= \limsup_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right). \end{align*} Then \begin{equation}\label{e.smalldeviations.estimates} \lambda_{1}^{(2)} \leqslant c_{-}\leqslant c_{+} \leqslant c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right), \end{equation} where \begin{align*} & c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right):= f(x^{\ast})=\inf_{x \in \left( 0, 1 \right)} f\left( x \right), \\ & f(x) = \frac{\lambda^{(2)}_1}{\sqrt{1-x}} + \frac{\lambda^{(1)}_1\sqrt{1-x} }{4x}, \\ & x^{\ast}= \frac{\sqrt{(\lambda_1^{(1)})^2 + 32\lambda^{(1)}_1\lambda^{(2)}_1 } -3\lambda^{(1)}_1} {2 \left( 4\lambda^{(2)}_1 - \lambda^{(1)}_1 \right)}, \end{align*} and $\lambda^{(n)}_1$ are the lowest Dirichlet eigenvalues on the unit ball as defined in Notation \ref{n.eigenvalues}. \end{proposition} \begin{proof} The lower bound in \eqref{e.smalldeviations.estimates} follows from the small deviation principle \eqref{e.s.d.brownian.mot} for an $\mathbb{R}^{n}$-valued Brownian motion and the fact that $\mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) \leqslant \mathbb{P} \left( B^{\ast}_{1} < \varepsilon \right)$. Let us prove now the upper bound.
For any $x\in (0,1)$ we have \begin{align*} & \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) = \mathbb{P} \left( \max_{ 0 \leqslant s \leqslant 1} \left( \vert B_{s}\vert_{\R^2}^4 + \vert A_{s} \vert_{\R}^2 \right) < \varepsilon^4 \right) \\ &\geqslant \mathbb{P} \left( B^{\ast}_{1} < \left(1-x\right)^{\frac{1}{4}}\varepsilon , \; A^{\ast}_{1} < \sqrt{x} \varepsilon^2 \right). \end{align*} It is well-known that $A_{t}=b_{\tau (t)}$ where $b_{t}$ is a one-dimensional Brownian motion independent of $B_{t}$, and $\tau(t) =\frac{1}{4} \int_0^t \vert B_{s} \vert^2_{\R^2}ds$, see for example \cite[Chapter 7, Section 6, Example 6.1]{IkedaWatanabe1989}. Therefore we have \begin{align*} &\mathbb{P} \left( B^{\ast}_1 < \varepsilon \left(1-x\right)^{\frac{1}{4}}, \; \sup_{ 0 \leqslant t \leqslant 1} \vert b_{\tau(t)} \vert_{\R} < \varepsilon^2\sqrt{x} \right) \\ & = \mathbb{P} \left( B^{\ast}_1 < \varepsilon\left(1-x\right)^{\frac{1}{4}}, \; \sup_{ 0\leqslant t \leqslant \tau(1) } \vert b_{t}\vert_{\R} < \varepsilon^2\sqrt{x} \right) \\ &\geqslant \mathbb{P} \left( B^{\ast}_1< \varepsilon\left(1-x\right)^{\frac{1}{4}}, \; \sup_{ 0\leqslant t \leqslant \frac{\varepsilon^2}{4} \left(1-x\right)^{\frac{1}{2}} } \vert b_{t}\vert_{\R} < \varepsilon^2 \sqrt{x} \right) \\ & = \mathbb{P} \left( B^{\ast}_1< \varepsilon \left(1-x\right)^{\frac{1}{4}}, \; b^{\ast}_1< \frac{2\varepsilon\sqrt{x}}{\left(1-x\right)^{\frac{1}{4}}} \right) \\ &= \mathbb{P} \left( B^{\ast}_1 < \varepsilon\left(1-x\right)^{\frac{1}{4}} \right) \mathbb{P} \left( b^{\ast}_1 < \frac{2\varepsilon\sqrt{x}}{\left(1-x\right)^{\frac{1}{4}}} \right). 
\end{align*} Thus \begin{align*} &\log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) \geqslant \log \mathbb{P} \left( B^{\ast}_1 < \varepsilon \left(1-x\right)^{\frac{1}{4}} \right) + \log \mathbb{P} \left( b^{\ast}_1 < \frac{2\varepsilon \sqrt{x}}{\left(1-x\right)^{\frac{1}{4}}} \right), \end{align*} and hence \begin{align*} & -\varepsilon^2 \log \mathbb{P} \left( g^{\ast}_1 < \varepsilon \right) \leqslant -\varepsilon^2\left(1-x\right)^{\frac{1}{2}} \log \mathbb{P} \left( B^{\ast}_1< \varepsilon \left(1-x\right)^{\frac{1}{4}} \right) \frac{1}{\left(1-x\right)^{\frac{1}{2}} } \\ & - \varepsilon^ 2 \frac{4x}{\left(1-x\right)^{\frac{1}{2}}} \log \mathbb{P} \left( b^{\ast}_1 < \frac{2\sqrt{x}}{\left(1-x\right)^{\frac{1}{4}}} \varepsilon \right) \frac{ \left(1-x\right)^{\frac{1}{2}}}{ 4x}. \end{align*} From the small deviation principle \eqref{e.s.d.brownian.mot} for an $\mathbb{R}^{n}$-valued Brownian motion applied to $B_{t}$ and $b_{t}$ it follows that \begin{align*} & \limsup_{\varepsilon\rightarrow 0} - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_1< \varepsilon \right) \leqslant \frac{\lambda^{(2)}_1}{\sqrt{1-x}} + \frac{\lambda^{(1)}_1\sqrt{1-x} }{4x} \end{align*} for all $x \in \left( 0, 1 \right)$. Note that the function \[ f\left( x \right):=\frac{\lambda^{(2)}_{1}}{\sqrt{1-x}} + \frac{\lambda^{(1)}_{1}\sqrt{1-x} }{4x} \] is positive on $\left( 0, 1 \right)$ and tends to $+\infty$ as $x \rightarrow 0^{+}$ and as $x \rightarrow 1^{-}$, so it attains its minimum over $\left( 0, 1 \right)$ even if we do not rely on the known values of the eigenvalues $\lambda^{(2)}_1$ and $\lambda^{(1)}_{1}$. Solving $f^{\prime}(x)=0$, which reduces to the quadratic equation $\left( 4\lambda^{(2)}_1 - \lambda^{(1)}_1 \right) x^2 + 3\lambda^{(1)}_1 x - 2\lambda^{(1)}_1 = 0$, shows that this minimum is achieved at \[ x^{\ast}= \frac{\sqrt{(\lambda_1^{(1)})^2 + 32\lambda^{(1)}_1\lambda^{(2)}_1 } -3\lambda^{(1)}_1} {2 \left( 4\lambda^{(2)}_1 - \lambda^{(1)}_1 \right)} \in \left( 0, 1 \right), \] which gives Equation \eqref{e.smalldeviations.estimates}.
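As a quick numerical sanity check (not part of the proof), the closed form for $x^{\ast}$ can be compared against a brute-force grid minimization of $f$; the eigenvalue inputs below are arbitrary positive placeholders, since the location of the minimizer does not depend on their specific values.

```python
import math

# Arbitrary positive placeholder values for lambda_1^(1) and lambda_1^(2);
# the closed-form minimizer is valid for any such choice.
lam1, lam2 = 1.3, 2.9

def f(x):
    # f(x) = lambda_1^(2)/sqrt(1-x) + lambda_1^(1)*sqrt(1-x)/(4x) on (0, 1)
    return lam2 / math.sqrt(1 - x) + lam1 * math.sqrt(1 - x) / (4 * x)

# Closed-form critical point: the positive root of the stationarity equation
# (4*lam2 - lam1)*x^2 + 3*lam1*x - 2*lam1 = 0 obtained from f'(x) = 0.
x_star = (math.sqrt(lam1**2 + 32 * lam1 * lam2) - 3 * lam1) / (2 * (4 * lam2 - lam1))

# Brute-force comparison on a fine grid over (0, 1)
x_num = min((i / 10_000 for i in range(1, 10_000)), key=f)
```

For any positive inputs one can check that $x^{\ast} \in (0,1)$ and that $f(x^{\ast})$ agrees with the grid minimum.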
\end{proof} \section{Proof of the main results}\label{sec.5} \subsection{Chung's law of iterated logarithm for $g_{t}$} The goal of this section is to prove Theorem \ref{t.chunglil}. Later in Proposition \ref{p.zero-one.law} we prove that $c:= \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t}$ is constant a.s., where $\phi\left( t \right):= \sqrt{ \frac{\log \log t}{t}}$. For now $c$ is a random variable for which we first show lower and upper bounds in Proposition \ref{p.chunglil.lowerbound} and Proposition \ref{p.chunglil.upperbound}. \begin{proposition}[Lower bound] \label{p.chunglil.lowerbound} For the lowest eigenvalue $\lambda_1^{(2)}$ as introduced in Notation \ref{n.eigenvalues} we have \[ c=\liminf_{t \rightarrow \infty} \phi\left( t \right) g_{t}^{\ast} \geqslant \sqrt{\lambda^{(2)}_1} \hskip0.1in \text{a.s.} \] \end{proposition} \begin{proof} While this proof is motivated by \cite{Remillard1994}, we provide a detailed argument for completeness. Let $r>0$ be such that $0<r<\sqrt{\lambda_1^{(2)}}$. Then we can find a constant $M>1$ such that $rM<\sqrt{\lambda_1^{(2)}}$. We will show that \[ \mathbb{P} \left( c<r \right) = 0 \text{ for all } 0<r<\sqrt{\lambda_1^{(2)}}. \] We have \begin{align*} &\mathbb{P} \left( c< r \right)= \mathbb{P} \left( \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t} < r \right) \leqslant \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \inf_{M^n \leqslant t \leqslant M^{n+2} } \phi (t) g^{\ast}_{t} < r \right\} \right) \\ &\leqslant \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \frac{1}{M} \phi \left( M^n \right) g^{\ast}_{ M^n} < r \right\} \right) = \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ g^{\ast} \left( \frac{M^{n-2}}{r^2} \phi \left( M^n \right)^2\right) < 1 \right\} \right), \end{align*} where $ g^{\ast} \left( \frac{M^{n-2}}{r^2} \phi \left( M^n \right)^2\right) := g^{\ast}_{ \frac{M^{n-2}}{r^2} \phi \left( M^n \right)^2}$. 
Here we used that \begin{align*} \inf_{a\leqslant t \leqslant b} \phi (t) g^{\ast}_{t} \geqslant \frac{\sqrt{a}}{ \sqrt{b}} \phi (a) g^{\ast}_a \end{align*} for any $0< a<b< \infty $. It is enough to show that \begin{align*} & \sum_{n=1} ^{\infty}\mathbb{P} \left( g^{\ast} \left( \frac{M^{n-2}}{r^2} \phi \left( M^n \right)^2\right) < 1 \right) <\infty, \end{align*} and then the result follows from the Borel-Cantelli Lemma. By \eqref{e.smalldeviations.estimates} and the scaling property of $g_{t}$, it follows that \begin{align*} & \lambda_1^{(2)} \leqslant \liminf_{\varepsilon \rightarrow 0} -\varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) = \liminf_{\varepsilon \rightarrow 0}- \varepsilon^2 \log \mathbb{P} \left( \max_{ 0\leqslant s \leqslant 1} \vert g_{\frac{1}{\varepsilon^2} s} \vert <1 \right) \\ & = \liminf_{t \rightarrow \infty} -\frac{1}{t} \log \mathbb{P} \left( \max_{0\leqslant s \leqslant t} \vert g_{s} \vert <1 \right) = \liminf_{t \rightarrow \infty} -\frac{1}{t} \log \mathbb{P} \left( g^{\ast}_{t} <1 \right). \end{align*} Moreover, \begin{align*} & \frac{M^{n-2}}{r^2} \phi \left( M^n \right)^2= \frac{1}{M^2 r^2} \log \log M^n, \end{align*} and hence we can apply Proposition \ref{p.general} with $Y_{t}= g^{\ast}_{t}$, $a= \lambda_1^{(2)}$, $s_n= \frac{1}{x} \log \log M^n$, and $x=M^2 r^2$, since $M^2r^2< \lambda_1^{(2)} $. \end{proof} Our next step is to show that $c$ is finite almost surely. To do so, we need the following lemma. \begin{lemma}\label{l.techincal} Set $t_n=n^n$.
Then for every $\varepsilon>0$ \begin{equation}\label{e.lemma} \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) g^{\ast}_{t_{n-1} } > \varepsilon \right\} \right)=0 \end{equation} \end{lemma} \begin{proof} It is enough to show that for any $\varepsilon >0$ \begin{align} & \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) B^{\ast}_{ t_{n-1} } > \varepsilon \right\} \right)=0 \label{e1} \text{ and } \\ &\mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right)^2 A^{\ast}_{ t_{n-1} } > \varepsilon \right\} \right)=0. \label{e2} \end{align} Indeed, \begin{align*} & \left\{ \phi\left( t_n \right)g^{\ast}_{ t_{n-1} }> \varepsilon \right\} = \left\{ \phi\left( t_n \right)^4 \max_{0\leqslant s \leqslant t_{n-1}} \left( \vert B_{s} \vert_{\R^2}^4 + \vert A_{s} \vert^2 \right) > \varepsilon^4 \right\} \\ & \subset \left\{ B^{\ast}_{ t_{n-1} } >\frac{ \varepsilon } {\sqrt[4]{2}\phi\left( t_n \right)} \right\} \cup \left\{ A^{\ast}_{t_{n-1} } >\frac{ \varepsilon^2 } {\sqrt{2}\phi\left( t_n \right)^2} \right\}, \end{align*} and hence \begin{align*} & \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) g^{\ast}_{ t_{n-1} } > \varepsilon \right\} \right) \\ & \leqslant \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) B^{\ast}_{t_{n-1} } >\frac{ \varepsilon} {\sqrt[4]{2}} \right\} \right) \\ & + \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right)^2 A^{\ast}_{ t_{n-1} } >\frac{ \varepsilon^2} {\sqrt{2}} \right\} \right). \end{align*} Let us first prove \eqref{e1}. 
For every fixed $\varepsilon$ we have that \begin{align*} & \left\{ \omega \; : \; \limsup_{n\rightarrow \infty } \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s}(\omega) \vert_{\R^2} = 0 \right\} \\ & \subset \bigcup_{k \geqslant 1 } \bigcap_{n \geqslant k} \left\{ \omega \; : \; \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s}(\omega) \vert_{\R^2} < \varepsilon \right\} . \end{align*} Moreover, by \cite[Lemma 1]{JainPruitt1975} we have that for $t_n=n^n$ \begin{align*} & \limsup_{n\rightarrow \infty } \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s} \vert_{\R^2} = 0 \quad \text{a.s.} \end{align*} Combining everything together we have that for $t_n=n^n$ \begin{align*} &1= \mathbb{P} \left( \limsup_{n\rightarrow \infty } \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s} \vert_{\R^2} = 0 \right) \\ & \leqslant \mathbb{P} \left( \bigcup_{k \geqslant 1 } \bigcap_{n \geqslant k} \left\{ \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s}(\omega) \vert_{\R^2} < \varepsilon \right\} \right), \end{align*} and \eqref{e1} is proven since \begin{align*} & \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) B^{\ast}_{ t_{n-1} } > \varepsilon \right\} \right) \\ & \leqslant 1- \mathbb{P} \left( \bigcup_{k \geqslant 1 } \bigcap_{n \geqslant k} \left\{ \phi \left(t_n \right) \max_{0\leqslant s \leqslant t_{n-1}} \vert B_{s}(\omega) \vert_{\R^2} < \varepsilon \right\} \right). \end{align*} Let us now prove \eqref{e2}. It follows from \cite[Example on pp. 449-451]{Baldi1986a} that there exists a finite constant $d>0$ such that with probability one we have $ A^{\ast}_{ t_{n-1} } \leqslant d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2$ eventually, that is, \[ \mathbb{P} \left( \bigcup_{k \geqslant 1} \bigcap_{n\geqslant k } \left\{ A^{\ast}_{ t_{n-1} } \leqslant d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2 \right\} \right)=1.
\] For any $\varepsilon>0$ there exists an $N_\varepsilon$ such that for any $n \geqslant N_\varepsilon$ we have that $ d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2 \phi \left( t_{n} \right)^2 \leqslant \varepsilon $. Set \[ E_k :=\bigcap_{n\geqslant k } \left\{ \phi \left( t_{n} \right)^2 A^{\ast}_{ t_{n-1} } \leqslant d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2 \phi \left( t_{n} \right)^2 \right\}. \] Then the family $E_k$ is an increasing sequence of sets, and it then follows that \begin{align*} & \bigcup_{k \geqslant 1} \bigcap_{n\geqslant k } \left\{ A^{\ast}_{ t_{n-1} } \leqslant d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2 \right\} = \bigcup_{k \geqslant 1} E_k \subset \bigcup_{k \geqslant N_\varepsilon} E_k \\ & = \bigcup_{k \geqslant N_\varepsilon} \bigcap_{n\geqslant k } \left\{ \phi \left( t_{n} \right)^2 A^{\ast}_{ t_{n-1} } \leqslant d\, t_{n-1}^2 \phi \left( t_{n-1} \right)^2 \phi \left( t_{n} \right)^2 \right\} \\ & \subset \bigcup_{k \geqslant N_\varepsilon} \bigcap_{n\geqslant k } \left\{ \phi \left( t_{n} \right)^2 A^{\ast}_{ t_{n-1} } \leqslant \varepsilon \right\} \subset \bigcup_{k \geqslant 1} \bigcap_{n\geqslant k } \left\{ \phi \left( t_{n} \right)^2 A^{\ast}_{ t_{n-1} } \leqslant \varepsilon \right\} . \end{align*} Therefore for any $ \varepsilon >0$ we have that \[ \mathbb{P} \left( \bigcup_{k \geqslant 1} \bigcap_{n\geqslant k } \left\{ \phi \left( t_{n} \right)^2 A^{\ast}_{ t_{n-1} } \leqslant \varepsilon \right\} \right) =1, \] and so \eqref{e2} is proven. \end{proof} \begin{proposition}[Upper bound]\label{p.chunglil.upperbound} Let $c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)$ be as in Proposition \ref{p.smalldeviations.estimates}, then \[ c=\liminf_{t \rightarrow \infty} \phi\left( t \right) g_{t}^{\ast}\leqslant \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)} \hskip0.1in \text{ a.s. } \] \end{proposition} \begin{proof} Set $t_n=n^n$.
We will show that $\mathbb{P} \left( c >r \right)=0$ for any $r> \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)}$. Since \begin{align*} & \mathbb{P} \left( \liminf_{t} \phi (t) g^{\ast}_{t} >r \right) \leqslant \mathbb{P} \left( \bigcup_{k \geqslant 1 } \bigcap_{n \geqslant k} \left\{ \phi (t_n) g^{\ast}_{t_n} >r \right\} \right) \\ & = 1- \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi (t_n) g^{\ast}_{t_n} \leqslant r \right\} \right), \end{align*} it is sufficient to show that for any $r> \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)}$ \begin{equation}\label{e.idk.p} \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) g^{\ast}_{ t_n } \leqslant r \right\} \right)=1. \end{equation} Fix $r > \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)}$ and choose $r_1$ such that $ \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)} <r_1<r$. Let us define the events \begin{align*} & A_n := \left\{ \phi \left( t_n \right)\max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert < r_1 \right\}, \\ & B_n := \left\{ \phi \left( t_n \right) g^{\ast}_{ t_{n-1} }< \frac{r-r_1}{2} \right\}. \end{align*} Then by \eqref{e.triangular.ineq} on the event $A_n \cap B_n$ we have \begin{align*} & \phi \left( t_n \right) g^{\ast}_{ t_{n} } \leqslant \phi \left( t_n \right) g^{\ast}_{ t_{n-1} }+ \phi \left( t_n \right) \max_{ t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}g_{t_{n-1}}^{-1}g_{s} \vert \\ & \leqslant \phi \left( t_n \right) g^{\ast}_{ t_{n-1} } + \phi \left( t_n \right) \vert g_{t_{n-1}} \vert + \phi \left( t_n \right) \max_{ t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert \\ & \leqslant \frac{r-r_1}{2} + \frac{r-r_1}{2} + r_1 = r, \end{align*} and hence \begin{align*} &\mathbb{P} \left( A_n \cap B_n \right) \leqslant \mathbb{P} \left( \phi \left( t_n \right) g^{\ast}_{ t_{n} } \leqslant r \right). 
\end{align*} By Lemma \ref{l.techincal} we have $\mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} B_n^c \right) =0$, and therefore \begin{align*} & \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi \left( t_n \right)\max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+ t_{n-1}} \vert < r_1 \right\} \right) \\ & = \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} A_n \right) = \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left( A_n \cap B_n \right) \right) \\ & \leqslant \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi\left( t_n \right) g^{\ast} \left( t_n \right) \leqslant r \right\} \right). \end{align*} Equation \eqref{e.idk.p} holds if we can show that \begin{align*} \mathbb{P} \left( \bigcap_{k \geqslant 1 } \bigcup_{n \geqslant k} \left\{ \phi \left( t_n \right)\max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+ t_{n-1}} \vert < r_1 \right\} \right) =1. \end{align*} Note that $ g_{t_{n-1}}^{-1} g_{s+t_{n-1}} \stackrel{(d)}{=} g_{s}$ and it is independent of $\mathcal{F}_{t_{n-1}} $. Indeed, \begin{align} & g_{t_{n-1}}^{-1} g_{s+t_{n-1}}\label{e.LeftIncrement} \\ & = \left( B_{s+t_{n-1}} - B_{t_{n-1}}, \frac{1}{2} \int_{t_{n-1}} ^{s+ t_{n-1}} \omega \left( B_u, dB_u \right) + \frac{1}{2} \omega \left( B_{s+t_{n-1}} , B_{t_{n-1}} \right) \right) \notag \\ & \stackrel{(d)}{=} \left( B_{s}, \frac{1}{2} \int_0^s \omega\left( B_u, dB_u \right) \right) = g_{s}.\notag \end{align} Hence the events $A_n$ are independent, and by the second Borel-Cantelli Lemma it suffices to show that the series with the general term \begin{align*} &\mathbb{P} \left( \phi \left( t_n \right)\max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+ t_{n-1}} \vert < r_1 \right) \end{align*} diverges.
We have \begin{align*} &\mathbb{P} \left( \phi \left( t_n \right)\max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+ t_{n-1}} \vert < r_1 \right) \\ & = \mathbb{P} \left( \max_{0 \leqslant s \leqslant t_n - t_{n-1}} \frac{\phi \left( t_n \right) }{r_1} \vert g_{s}\vert < 1 \right) \\ & = \mathbb{P} \left( g^{\ast} \left( \frac{t_n - t_{n-1}}{r_1^2} \phi\left( t_n \right)^2 \right) <1 \right), \end{align*} where $g^{\ast} \left( \frac{t_n - t_{n-1}}{r_1^2} \phi\left( t_n \right)^2 \right): = g^{\ast}_{ \frac{t_n - t_{n-1}}{r_1^2} \phi\left( t_n \right)^2 } $. By \eqref{e.smalldeviations.estimates} and the scaling property of $g_{t}$ it follows that \begin{align*} & c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right) \geqslant \limsup_{\varepsilon \rightarrow 0} -\varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) = \limsup_{\varepsilon \rightarrow 0}- \varepsilon^2 \log \mathbb{P} \left( \max_{ 0\leqslant s \leqslant 1} \vert g_{\frac{1}{\varepsilon^2} s} \vert <1 \right) \\ & = \limsup_{t \rightarrow \infty} -\frac{1}{t} \log \mathbb{P} \left( \max_{0\leqslant s \leqslant t} \vert g_{s} \vert <1 \right) = \limsup_{t \rightarrow \infty} -\frac{1}{t} \log \mathbb{P} \left( g^{\ast}_{t} <1 \right). \end{align*} Moreover, \begin{align*} & \frac{t_n - t_{n-1}}{r_1^2} \phi\left( t_n \right)^2 = \frac{1}{r_1^2} \frac{t_n-t_{n-1}}{t_n} \log \log t_n \\ & \leqslant \frac{1}{r_1^2} \log \log t_n = \frac{1}{r_1^2} \log \log n^n \end{align*} and hence we can apply Proposition \ref{p.general} with $Y_{t}= g^{\ast}_{t}$, $b= c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)$, $v_n= \frac{1}{y} \log \log n^n$, and $y=r_1^2$, since $r_1^2> c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)$. \end{proof} \begin{remark}\label{r.not.right.multiplication} In the proof of Proposition \ref{p.chunglil.upperbound} we used left increments $g_{t_{n-1}}^{-1} g_{s+t_{n-1}}$.
It is easy to check that the argument does not work if one considers the right increments $g_{s+t_{n-1}} g_{t_{n-1}}^{-1}$ instead. Indeed, \begin{align*} & g_{s+t_{n-1}} g_{t_{n-1}}^{-1} \\ & = \left( B_{s+t_{n-1}} - B_{t_{n-1}}, \frac{1}{2} \int_{t_{n-1}} ^{s+ t_{n-1}} \omega \left( B_u, dB_u \right) - \frac{1}{2} \omega \left( B_{s+t_{n-1}} , B_{t_{n-1}} \right) \right) \\ & = \left( B_{s+t_{n-1}} - B_{t_{n-1}}, \frac{1}{2} \int_{0 } ^{s} \omega \left( B_{u+ t_{n-1}} - B_{t_{n-1}}, dB_{u + t_{n-1}} \right) \right. \\ & \left. +\frac{1}{2} \int_{0 } ^{s} \omega \left( B_{t_{n-1}} , dB_{u + t_{n-1}} \right) - \frac{1}{2} \omega \left( B_{s+t_{n-1}} , B_{t_{n-1}} \right) \right) \\ &= \left( B_{s+t_{n-1}} - B_{t_{n-1}}, \frac{1}{2} \int_{0 } ^{s} \omega \left( B_{u+ t_{n-1}} - B_{t_{n-1}}, dB_{u + t_{n-1}} \right) \right. \\ &\left. +\frac{1}{2} \omega \left( B_{t_{n-1}} , B_{s+ t_{n-1}} - B_{t_{n-1}} \right) - \frac{1}{2} \omega \left( B_{s+t_{n-1}} , B_{t_{n-1}} \right) \right) \\ &= \left( B_{s+t_{n-1}} - B_{t_{n-1}}, \frac{1}{2} \int_{0 } ^{s} \omega \left( B_{u+ t_{n-1}} - B_{t_{n-1}}, dB_{u + t_{n-1}} \right) \right. \\ & \left.+ \omega \left( B_{t_{n-1}} , B_{s+ t_{n-1}} - B_{t_{n-1}} \right) \right) \\ & \stackrel{(d)}{=} \left( B_{s}, \frac{1}{2} \int_0^s \omega\left( B_u, dB_u \right) + \omega \left( B_{t_{n-1}} , B_{s} \right) \right) \\ & \not= \left( B_{s}, \frac{1}{2} \int_0^s \omega\left( B_u, dB_u \right) \right) = g_{s}. \end{align*} This is a consequence of our choice of the (left) Brownian motion $g_{t}$ defined by the left translation in \eqref{e.SDE}. \end{remark} The next statement completes the proof of Theorem \ref{t.chunglil}. \begin{proposition}\label{p.zero-one.law} Let $c$ be the random variable defined by \eqref{e.chunglil}, then $c$ is constant a.s. 
\end{proposition} \begin{proof} Let $\mathcal{T}_u:= \sigma \left\{ B_r, \, r\geqslant u \right\}$, and $\mathcal{T}:= \cap_{u>0} \mathcal{T}_u$ be the tail $\sigma$-algebra generated by the Brownian motion, which is trivial by Kolmogorov's 0-1 law. We will show that \begin{equation}\label{e.01law} c^4= \liminf_{t\rightarrow \infty} \phi (t)^4 \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right]. \end{equation} This means that the random variable $c$ is $\mathcal{T}_u$-measurable for every $u$ and hence $\mathcal{T}$-measurable. Since $\mathcal{T}$ is trivial, $c$ is constant a.s. Let us now prove \eqref{e.01law}. Suppose $u$ is fixed, and note that \begin{equation}\label{e.idk} c= \liminf_{t\rightarrow \infty} \phi (t) \max_{u\leqslant s \leqslant t} \vert g_{s} \vert. \end{equation} Indeed, \begin{align*} & \max_{u\leqslant s \leqslant t} \vert g_{s} \vert \leqslant \max_{0\leqslant s \leqslant t} \vert g_{s} \vert \leqslant \max_{u\leqslant s \leqslant t} \vert g_{s} \vert + \max_{0\leqslant s \leqslant u} \vert g_{s} \vert, \end{align*} and \eqref{e.idk} follows from the fact that $\lim_{t\rightarrow \infty }\phi(t)=0$. Using the triangle inequality one can show that \begin{align}\label{e.idk2} &\max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] - A_u^2 -2\vert A_u \vert \max_{0\leqslant s \leqslant t} \vert A_{s} \vert \notag \\ & \leqslant \max_{u\leqslant s \leqslant t} \vert g_{s} \vert^4 \\ & \leqslant \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] +3A_u^2 + 2\vert A_u \vert \max_{0 \leqslant s \leqslant t } \vert A_{s} \vert \notag. 
\end{align} Indeed, \begin{align*} & \max_{u\leqslant s \leqslant t} \vert g_{s} \vert^4 = \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 +A_{s}^2 \right] \\ & = \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right)+A_u \right)^2\right] \\ & = \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 + A_u^2 + A_u \int_u^s \omega \left(B_v, dB_v \right) \right] \\ & \leqslant \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] + A_u^2 +\vert A_u \vert \max_{u \leqslant s \leqslant t } \vert \int_u^s \omega \left(B_v, dB_v \right) \vert \\ &= \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] + A_u^2 + 2\vert A_u \vert \max_{u \leqslant s \leqslant t } \vert A_{s} - A_u \vert \\ & \leqslant \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] + 3A_u^2 + 2\vert A_u \vert \max_{0 \leqslant s \leqslant t } \vert A_{s} \vert, \end{align*} and the upper bound in \eqref{e.idk2} is proven. Let us now show the lower bound.
We have that \begin{align*} & \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( \frac{1}{2} \int_u^s \omega \left(B_v, dB_v \right) \right)^2 \right] = \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + \left( A_{s}-A_u \right)^2 \right] \\ & = \max_{u \leqslant s \leqslant t } \left[ \vert B_{s} \vert^4 + A_{s}^2 +A_u^2 -2A_uA_{s} \right] \leqslant \max_{u \leqslant s \leqslant t } \vert g_{s} \vert^4 + A_u^2 + 2 \max_{u \leqslant s \leqslant t } \left( -A_u A_{s} \right) \\ & \leqslant \max_{u \leqslant s \leqslant t } \vert g_{s} \vert^4 + A_u^2 + 2 \vert A_u \vert \max_{0\leqslant s \leqslant t } \vert A_{s} \vert, \end{align*} and the lower bound is also proven. Before finishing the proof we recall that by \cite[Theorem 1]{Remillard1994} \[ \liminf_{t\rightarrow \infty } \phi(t)^2 \max_{0\leqslant s \leqslant t} \vert A_{s} \vert = \frac{\pi}{4} \text{ a.s. } \] Then \eqref{e.01law} follows by \eqref{e.idk2} and the fact that $\lim_{t\rightarrow \infty }\phi(t)=0$. \end{proof} We have actually proven a more quantitative version of Theorem \ref{t.chunglil} as follows. \begin{theorem}[Chung's law of iterated logarithm with bounds]\label{t.ChungBounds} The constant $c= \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t}$ in Theorem \ref{t.chunglil} satisfies \[ \sqrt{\lambda_1^{(2)}} \leqslant c \leqslant \sqrt{c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)}, \] where $\lambda_1^{(2)}$ is defined in Notation \ref{n.eigenvalues}, and $c\left( \lambda_{1}^{(1)}, \lambda_{1}^{(2)} \right)$ is defined in Proposition \ref{p.smalldeviations.estimates}. \end{theorem} \subsection{Small deviations for $g_{t}$ } We are now ready to prove Theorem \ref{t.smalldeviations}, that is, the small deviation principle for the hypoelliptic Brownian motion $g_{t}$. 
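For a sense of scale, the bounds in Theorem \ref{t.ChungBounds} can be evaluated numerically. The sketch below assumes the normalization in which $\lambda^{(n)}_1$ is the lowest Dirichlet eigenvalue of $-\frac{1}{2}\Delta$ on the unit ball, so that $\lambda^{(1)}_1 = \pi^2/8$ and $\lambda^{(2)}_1 = j_{0,1}^2/2$ with $j_{0,1}$ the first positive zero of the Bessel function $J_0$; since Notation \ref{n.eigenvalues} is not reproduced here, these inputs are assumptions and should be rescaled if a different normalization is used.

```python
import math

# Assumed eigenvalues (lowest Dirichlet eigenvalues of -(1/2)*Laplacian on the
# unit ball); rescale if Notation n.eigenvalues normalizes differently.
lam1 = math.pi ** 2 / 8          # n = 1: the interval (-1, 1)
j01 = 2.404825557695773          # first positive zero of the Bessel function J_0
lam2 = j01 ** 2 / 2              # n = 2: the unit disk

def f(x):
    """The function minimized in the small deviations proposition."""
    return lam2 / math.sqrt(1 - x) + lam1 * math.sqrt(1 - x) / (4 * x)

# Closed-form minimizer x* and the resulting upper constant c(lam1, lam2)
x_star = (math.sqrt(lam1**2 + 32 * lam1 * lam2) - 3 * lam1) / (2 * (4 * lam2 - lam1))
c_upper = f(x_star)

# Numerical interval for the Chung constant c
lower, upper = math.sqrt(lam2), math.sqrt(c_upper)
print(f"{lower:.4f} <= c <= {upper:.4f}")
```

With these inputs the interval is roughly $1.70 \leqslant c \leqslant 2.07$.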
\begin{proof}[Proof of Theorem \ref{t.smalldeviations}] We recall the notation \begin{align*} & c_{-}:= \liminf_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right), \\ & c_{+}:= \limsup_{\varepsilon \rightarrow 0 } - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right), \end{align*} and \begin{align*} &c:= \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t}. \end{align*} We first show that \begin{equation}\label{e.firststep} c_{+} \leqslant c^2 . \end{equation} Let $k\in \left( 0, c_{+} \right)$ be a fixed number, that is, $ k \leqslant \limsup_{\varepsilon \rightarrow 0} - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right)$. This means that there exists an $\varepsilon (k)$ such that \begin{equation}\label{e.upperbound} \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) \leqslant \exp \left( -\frac{k}{\varepsilon^2} \right), \end{equation} for any $\varepsilon \leqslant \varepsilon (k)$. Now fix $R>1$ and a number $\gamma $ such that $ 0 < R \gamma < k$. Define $t_n=R^n>1$, and $\varepsilon_n = \varepsilon_n(k, \gamma, R) := \frac{\sqrt{\gamma}}{\phi \left( t_{n+1} \right) \sqrt{t_n}}$. Note that $\varepsilon_n$ goes to zero as $n$ goes to infinity, and hence there exists an $N= N(k, \gamma, R )$ such that $\varepsilon_n \leqslant \varepsilon(k) $ for any $n \geqslant N(k, \gamma , R)$. 
Then we have that \begin{align*} &\mathbb{P} \left( g^{\ast}_{t_n} < \frac{\sqrt{\gamma}}{ \phi \left( t_{n+1} \right) } \right) = \mathbb{P}\left( g^{\ast}_{1} < \frac{\sqrt{\gamma} }{\phi \left( t_{n+1} \right) \sqrt{t_n}} \right) = \mathbb{P}\left( g^{\ast}_{1} < \varepsilon_n \right) \\ & \leqslant \exp \left( -\frac{k}{\varepsilon_n^2} \right) = \exp\left( -\frac{k}{\gamma} \phi \left( t_{n+1} \right) ^2 t_n \right) = \exp \left( -\frac{k}{\gamma} \frac{t_n}{t_{n+1}} \log \log t_{n+1} \right) \\ & = \exp\left(-\frac{k}{R \gamma} \log \log R^{n+1} \right) = \left( \frac{1}{ ( n+1) \log R} \right)^{\frac{k }{R\gamma}}, \end{align*} for all $n \geqslant N(k, \gamma , R)$, which is a term of a convergent series since $R \gamma < k$. Therefore for any $ 0 < k < c_{+}$, $R>1$, $0< R \gamma < k$, and $t_n=R^n$ we have that \begin{align*} & \sum_{n=1}^\infty \mathbb{P} \left( g^\ast_{t_n} < \frac{\sqrt{\gamma}}{ \phi \left( t_{n+1} \right) } \right) < \infty. \end{align*} Hence by the Borel-Cantelli Lemma we have \begin{align*} & \mathbb{P} \left( \bigcap_{k \geqslant 1} \bigcup_{n \geqslant k} \left\{ g^\ast_{t_n} < \frac{\sqrt{\gamma}}{ \phi \left( t_{n+1} \right) } \right\} \right) = 0, \; \; \text{that is,} \\ &\mathbb{P} \left( \bigcup_{k \geqslant 1} \bigcap_{n \geqslant k} \left\{ g^\ast_{t_n} \geqslant \frac{\sqrt{\gamma}}{ \phi \left( t_{n+1} \right) } \right\} \right) = 1. \end{align*} Therefore almost surely for all large $n$, $g^\ast_{ t_n } \geqslant \frac{ \sqrt{\gamma} } {\phi \left( t_{n+1} \right)}$. 
The function $\phi (t) $ is decreasing for large $t$, therefore for $t\in \left[ t_n , t_{n+1} \right]$ with $n$ large enough we have \[ g^\ast_{t} \geqslant g^\ast_{ t_n } \geqslant \frac{ \sqrt{ \gamma } } {\phi \left( t_{n+1} \right)} \geqslant \frac{\sqrt{ \gamma }} {\phi \left( t \right)} \] which yields \[ c:= \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t} \geqslant \sqrt{ \gamma } \quad \text{a.s.} \] for any $ \gamma < \frac{k}{R} <\frac{c_{+}}{R} $, and hence \eqref{e.firststep} is proven by letting first $R$ go to $1$, and then $k$ to $c_{+}$. Let us now show that \begin{equation}\label{e.secondstep} c^2 \leqslant c_{-}. \end{equation} Suppose $ k> c_{-}$; then $k > \liminf_{\varepsilon \rightarrow0} - \varepsilon^2 \log \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right)$. Therefore there exists an $\varepsilon^\prime (k)$ such that \begin{equation}\label{e.lowerbound} \mathbb{P} \left( g^{\ast}_{1} < \varepsilon \right) \geqslant \exp \left( -\frac{k}{\varepsilon^2} \right), \end{equation} for any $\varepsilon \leqslant \varepsilon^\prime (k)$. Now set $t_n =n^n$ and define \begin{align*} &\varepsilon_n= \varepsilon_n(k) := \frac{k}{\sqrt{c_{-}}} \frac{1}{ \sqrt{t_n - t_{n-1}} \phi \left( t_n \right)}, \\ &E_n^k := \left\{ \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert< \frac{k}{\sqrt{c_{-}}} \frac{1}{ \phi \left( t_n \right)} \right\} . \end{align*} Note that $\varepsilon_n$ goes to zero as $n$ goes to infinity, and hence there exists an $N(k)$ such that $\varepsilon_n \leqslant \varepsilon^\prime (k)$ for any $ n\geqslant N(k)$. We claim that, for any $k>c_{-}$, $\sum_{n=1}^\infty \mathbb{P} \left( E^k_n \right) = \infty$.
Indeed, since by \eqref{e.LeftIncrement} the left increment $g_{t_{n-1}}^{-1} g_{s+t_{n-1}} \stackrel{(d)}{=} g_{s}$, we can use \eqref{e.lowerbound} to see that for $n \geqslant N(k)$ \begin{align*} & \mathbb{P}\left( E_n^k \right) = \mathbb{P} \left( \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert < \frac{k}{\sqrt{c_{-}}} \frac{1}{ \phi \left( t_n \right)} \right) \\ & =\mathbb{P} \left( \max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+t_{n-1}} \vert < \frac{k}{\sqrt{c_{-}}} \frac{1}{ \phi \left( t_n \right)} \right) = \mathbb{P} \left( \max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{s} \vert < \frac{k}{\sqrt{c_{-}}} \frac{1}{ \phi \left( t_n \right)} \right) \\ & =\mathbb{P} \left( g^{\ast}_{1} < \varepsilon_n \right) \geqslant \exp \left( -\frac{k}{\varepsilon_n^2}\right) = \exp \left( - \frac{c_{-}}{k} \frac{t_n - t_{n-1}}{t_n} \log \log t_n \right) \\ & \geqslant \exp \left( - \frac{c_{-}}{k} \log \log t_n \right) = \left( \frac{1}{n \log n} \right)^{ \frac{c_{-}}{k}}. \end{align*} This yields $\sum_{n=1}^\infty \mathbb{P} \left( E^k_n \right) = \infty$ since $ k>c_{-}$. Note that the events $E_n^k$ are independent because the increments are independent and \[ \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert = \max_{0 \leqslant s \leqslant t_n - t_{n-1}} \vert g_{t_{n-1}}^{-1} g_{s+t_{n-1}} \vert \] and $g_{t_{n-1}}^{-1} g_{s+t_{n-1}}$ is independent of $\mathcal{F}_{t_{n-1}}$ as shown in the proof of Proposition \ref{p.chunglil.upperbound}.
Hence by the Borel-Cantelli Lemma we have that \[ \mathbb{P} \left( \limsup_{n\rightarrow \infty} E_n^k \right)= \mathbb{P} \left( \bigcap_{j \geqslant 1 } \bigcup_{n \geqslant j} \left\{ \phi\left(t_n \right) \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert< \frac{k}{\sqrt{c_{-}}} \right\} \right)=1, \] which yields \[ \liminf_{n\rightarrow \infty} \phi\left(t_n \right) \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert< \frac{k}{\sqrt{c_{-}}} \quad \text{ a.s. for all} \;\; k>c_{-}, \] and hence \begin{equation}\label{e.almost.there} \liminf_{n\rightarrow \infty} \phi\left(t_n \right) \max_{t_{n-1} \leqslant s \leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert \leqslant \sqrt{c_{-}} \quad \text{ a.s.} \end{equation} We will show later in the proof that \begin{equation}\label{e.assumption} \lim_{n\rightarrow \infty} \phi \left( t_n\right) g^\ast_{ t_{n-1} }= 0 \quad \text{a.s.} \end{equation} Assume \eqref{e.assumption} for now, we can use \eqref{e.triangular.ineq} to see that \begin{align*} & \phi \left(t_n \right) g^\ast_{t_n } \leqslant \phi \left(t_n \right) g^\ast_{ t_{n-1} } + \phi \left( t_n \right) \max_{t_{n-1} \leqslant s\leqslant t_n} \vert g_{s} \vert \\ & \leqslant \phi \left(t_n \right) g^\ast_{ t_{n-1} } + \phi \left( t_n \right) \max_{t_{n-1} \leqslant s\leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert + \phi \left( t_n \right) \vert g_{t_{n-1}} \vert \\ & \leqslant 2 \phi \left(t_n \right) g^\ast_{ t_{n-1} } + \phi \left( t_n \right) \max_{t_{n-1} \leqslant s\leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert. 
\end{align*} Therefore by \eqref{e.almost.there} and \eqref{e.assumption} we have that \begin{align*} & c:= \liminf_{t\rightarrow \infty} \phi (t) g^{\ast}_{t} \leqslant \liminf_{n\rightarrow \infty} \phi \left( t_n \right) g^\ast_{ t_n } \\ & \leqslant \liminf_{n\rightarrow \infty} \left( 2 \phi \left(t_n \right) g^\ast_{t_{n-1} } + \phi \left( t_n \right) \max_{t_{n-1} \leqslant s\leqslant t_n} \vert g_{t_{n-1}}^{-1} g_{s} \vert \right) \leqslant \sqrt{c_{-}} \quad \text{ a.s.}, \end{align*} which proves \eqref{e.secondstep}. Let us now show \eqref{e.assumption}. By Lemma \ref{l.techincal} we have that for any $\varepsilon >0$ \begin{align*} & 1= \mathbb{P} \left( \bigcup_{k \geqslant 1 } \bigcap_{n \geqslant k} \left\{ \phi\left( t_n \right) g^\ast_{ t_{n-1} } < \varepsilon \right\} \right) \leqslant \mathbb{P} \left(\limsup_{n\rightarrow \infty } \phi\left( t_n \right) g^\ast_{ t_{n-1} }< \varepsilon \right) . \end{align*} So for every $\varepsilon$ we have that \[ \limsup_{n\rightarrow \infty } \phi\left( t_n \right) g^\ast_{ t_{n-1} } < \varepsilon \quad \text{ a.s.,} \] and hence $\limsup_{n\rightarrow \infty } \phi\left( t_n \right) g^\ast_{ t_{n-1} } =0$ a.s., which implies \eqref{e.assumption}. \end{proof}
\section{Introduction} To mitigate the devastating effects of the coronavirus disease 2019 (COVID-19) pandemic that was spreading rapidly across Europe in the first months of 2020, countries imposed the only available, centuries-old public health interventions for contagious infectious diseases of major public health importance: shutdowns and social distancing. The rapid spread and intensity of the pandemic in neighboring Italy prompted immediate and drastic measures in Greece, amid fears that the national health system would collapse. The mitigation measures taken early on proved effective at those early stages, resulting in low numbers of hospitalizations and deaths, in both relative and absolute terms. The situation changed in the autumn of 2020, following the lifting of containment measures and the opening of the country to tourists during the summer. The second COVID-19 epidemic wave that started in September 2020 and is currently reaching its peak hit Greece hard: cases rose sharply, hospitalizations strained the health system, and the death ratio spiked (indicatively, for the period November 1 to December 5, Greece recorded 30 deaths per 1,000 confirmed cases, compared to 38 in Belgium, 20.7 in the United Kingdom, 20 in France, 20 in Italy, 18.5 in Spain, 13.7 in Portugal and 13.2 in Austria) \cite{WHO2020dash}.\par The scientific response to the pandemic has been intense and multidisciplinary, in an effort to understand and assess a wide range of attributes, extending from the biological characteristics of the virus to the effects of social-distancing measures and human behavior. However, since COVID-19 is a new disease caused by a novel viral pathogen, several unknowns and uncertainties remain to be elucidated in order to fully comprehend its complexities and end the pandemic. Mathematical modelling provides indispensable tools in this respect.
One such tool, or compass with which to navigate the unknown waters of the pandemic as safely as possible, is the effective reproduction number $R_e$, i.e., the average number of secondary cases generated by a contagious person \cite{gunther2020nowcasting, Nishiura_2009}. The real-time estimation of $R_e$ has been key for the assessment of the evolution of the pandemic and the application of public health policy measures \cite{gunther2020nowcasting, Inglesby_2020}. Its estimation relies on both data availability and such epidemiological features as the incubation period, serial interval, generation time, delays between infection and case observation and, importantly, the interlinked issues of under-reporting and testing policy. However, there is no one-size-fits-all method for its estimation \cite{gostic2020practical}. Two main approaches are followed for the estimation of $R_e$: (a) the model-based approach, where one extracts $R_e$ by first fitting the parameters of compartmental dynamic models \cite{wallinga2007generation,bettencourt2008real,Diekmann_2009, Arias2009,Ridenhour_2014,Russo_2020}, and (b) the time-series-based analysis \cite{wallinga2004different,cori2013new,gostic2020practical}. For a review of the above methods and suggestions about their applicability, see \cite{gostic2020practical}.\par In this work, we used the two most widely used approaches, the one developed by Wallinga and Teunis \cite{wallinga2004different} and the one by Cori et al. \cite{cori2013new}, to estimate $R_e$ during the period February 26--May 15, 2020 in Attica (the metropolitan area of Athens and its suburbs). Lytras et al. \cite{lytras2020improved} have reported an estimation of $R_e$ for the whole country by extending the Cori et al. method using a Bayesian approach and investigated the evolution of the $R_e$ estimates with respect to the mitigation measures.
Here, we focus on Attica, thus relaxing the assumption of homogeneity across a country that comprises a few urban and many quite diverse rural areas, ranging from remote mountain villages to small seaside towns on the islands; the heterogeneity due to both geography and, more importantly, the mobility between regions introduces additional uncertainty into the estimations of time series analyses, especially during the unusual circumstances of a lockdown. Thus, our study attempts to provide an estimation of $R_e$ and capture the dynamics of the evolving epidemic during the period of the first lockdown in the specific metropolitan area of Athens (the region of Attica), where approximately half the population of Greece is concentrated. \section{The Epidemiological Data from Attica, Greece} \subsection{Surveillance of SARS-CoV-2 infection} SARS-CoV-2 infection is notifiable in Greece. Case-based data are collected from laboratories diagnosing SARS-CoV-2 infection by real-time reverse-transcription polymerase chain reaction (RT-PCR). In addition, physicians notify laboratory-confirmed SARS-CoV-2 cases via the mandatory notification form. A passive comprehensive system for hospitalized cases is also in place, collecting data on a daily basis about admission to the intensive care unit (ICU), intubation, complications, and outcome. Active, large-scale testing, regardless of symptoms, is also performed for containment purposes and in the context of investigation of clusters in closed settings or specific populations (long-term care facilities, Roma populations, refugee hosting centers, repatriates). \subsection{Contact tracing and measures implemented} Active and exhaustive contact tracing was implemented upon diagnosis of the first case and throughout the first epidemic wave in Greece. Close contacts of SARS-CoV-2-infected cases were instructed to stay isolated for 14 days after their last contact. In case of onset of symptoms, the contacts were advised to attend a healthcare facility for testing.
\subsection{Data collection} We retrieved data on SARS-CoV-2 infected cases permanently residing in Attica, a region with a total population of approximately 4 million people. Data were retrieved from the national database of SARS-CoV-2 infected cases. The study period extended from February 26, 2020 (diagnosis of the first COVID-19 case in Greece) through May 15, 2020. Data on the clinical course and outcome of patients were updated on June 5, 2020. The total number of cases was 1,645, out of which 268 were asymptomatic. \subsection{Definitions} SARS-CoV-2 infection was defined as a laboratory-confirmed infection with SARS-CoV-2 regardless of symptoms. COVID-19 was defined as a case with signs and symptoms compatible with COVID-19 and laboratory-confirmed SARS-CoV-2 infection. Laboratory-confirmed SARS-CoV-2 infection was defined as a case tested positive by RT-PCR. COVID-19 associated death was defined as death of a COVID-19 case with no period of complete recovery between the illness and death and in the absence of a clear alternative cause of death. \section{Methodology} Our aim was to provide estimations of the effective reproduction number $R_e(t)$ during the period February 26 -- May 15, for which detailed data for the greater metropolitan area of Athens were available from the national surveillance database of SARS-CoV-2 infections (National Public Health Organization, Athens). This period contains the lockdown period of March 23 -- May 4. It is well known that $R_e$ is determined by the serial-interval distribution, defined as the time from symptom onset in the primary case to symptom onset in the secondary case \cite{svensson2007note}.\par Here, as a first step, we performed an imputation of the data (regarding the date of symptom onset). Then, we used the two most common methods for the estimation of $R_e$, namely the Wallinga and Teunis \cite{wallinga2004different} and the Cori et al. \cite{cori2013new} methods.
Finally, we performed a sensitivity analysis with respect to the imputation of the data.\par \subsection{Methods for estimating $R_e$} \subsubsection{Approximation of the generation time distribution} For the estimation of the effective reproduction number it is necessary to know the distribution of generation times, defined as the time-lag between an infection in a primary case and an infection in a secondary case. However, this distribution is generally unknown. Thus, it is usually approximated by the serial interval distribution, i.e. the distribution of the time-lag between onset of symptoms in a primary case and onset of symptoms in a secondary case. For a discussion regarding the differences between the generation time and the serial interval we refer to Svensson \cite{svensson2007note}.\par Here, the reconstruction of the serial interval distribution for COVID-19 was based on the procedure described in Nishiura et al. \cite{nishiura2020serial}. In particular, for its derivation, we used a right-truncated (at 20 days) discretized lognormal distribution with mean and standard deviation equal to 4.7 and 2.9, respectively. The resulting serial interval distribution is given in Fig. \ref{fig4}. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.65]{sid-eps-converted-to} \caption{Serial interval distribution, describing the time between symptom onsets in primary and secondary cases. Its construction was based on a right-truncated discretized lognormal distribution with mean and standard deviation equal to 4.7 and 2.9, respectively \cite{nishiura2020serial}.
\label{fig4}} \end{figure} \subsubsection{The method of Wallinga and Teunis for estimating $R_e$} The method proposed by Wallinga \& Teunis (WT) \cite{wallinga2004different} estimates $R_e$ by averaging over all transmission networks compatible with the observations, providing likelihood-based estimates using only \textit{pairs} of cases rather than the whole infection network, which is usually unknown. According to the WT method, the ``who infected whom'' pair-wise infection network is approximated in a probabilistic manner from the given dates of symptom onset. In particular, $p_{ij}$ denotes the relative likelihood that a case $i$ has been infected by a case $j$, given the time-lag between the onsets of symptoms of the two cases ($t_i-t_j$), and it is expressed as: \begin{align} p_{ij}=\frac{w(t_i-t_j)}{\sum_{k\neq i}w(t_i-t_k)}, \end{align} where $w$ is the distribution of the generation time, which is specific to the disease. Based on the above, the effective reproduction number at time (day) $t$ is given by: \begin{align} R_e(t)=\frac{1}{n_t}\sum_{\{j:\,t_j=t\}}R_{e,j}, \end{align} where $R_{e,j}=\sum_{i}p_{ij}$ is the effective reproduction number of a single case $j$ and $n_t$ denotes the number of cases who show the first symptoms of illness on day $t$ \cite{wallinga2004different}. Here, we note that a key assumption of the WT model is that the infection network of the observed cases can be built by considering just the notified cases. This imposes an important limitation regarding its applicability when a large proportion of the population is infected or when there are many asymptomatic cases. We address this particular limitation in the discussion section.\par \subsubsection{The method of Cori et al. for estimating $R_e$} The method proposed by Cori et al. (2013) \cite{cori2013new} and implemented in \cite{coriepiestim} is a Bayesian method suitable for real-time estimation of $R_e$.
It was designed to overcome a main drawback of the WT method when a real-time estimation of $R_e$ is required. In particular, in the WT method the estimation of $R_e$ at a time $t$ may require incidence data from times later than $t$ (for a detailed discussion please refer to \cite{cori2013new}). In fact, the method of Cori et al. provides an estimation of the so-called instantaneous reproduction number, defined as the ratio of the expected number of new infections generated at time $t$, say $\mathbb{E}[I(t)]$, to the total infectiousness of previously infected individuals at time $t$, i.e.: \begin{align} R_{e}(t)=\frac{\mathbb{E}[I(t)]}{\sum_{s=1}^tI(t-s)w(s)}. \end{align} Here, $I(t)$ is the number of new infections at time $t$, treated as a random variable, and $w(s)$ represents the infectivity profile, that is, the distribution of infectiousness through time after infection. As the infectivity profile is usually not known, it is approximated by the distribution of the serial interval \cite{cori2013new}.\par Using a Gamma distribution as a prior in a Bayesian inference procedure, we obtain an estimation of the posterior distribution of $R_{e}(t)$. For more details please refer to the supplementary information in \cite{cori2013new}. As the estimates of $R_{e}(t)$ can vary significantly over time instances \cite{cori2013new}, what is usually done to obtain smoother results is to apply the above calculations not instantaneously, but within a sliding window of size $\tau$, i.e. $[t-\tau+1, \ldots, t]$, and calculate the average $R_{e,\tau}$ over this window. The selection of the window is a trade-off, as very long windows lead to oversmoothing and very short ones lead to high levels of noise. \subsection{Imputation of the Epidemiological Data} The estimation of $R_e$ in both approaches is based on the knowledge of the symptom onset dates. In the dataset there were 268 (out of 1,645) asymptomatic cases with, obviously, no symptom onset dates.
Thus, for the imputation of the unknown ``symptom'' onset dates for this category (which actually mark the onset of disease transmission), we used an approach based on fitting a generalized additive model for location, scale and shape (GAMLSS) \cite{stasinopoulos2007generalized}. For our computations, as suggested by \cite{gunther2020nowcasting,haw2020epidemiological}, it is assumed that the distribution of the delay times $t_d$ (delay from onset of symptoms to the reporting date) is a Weibull distribution of the form: $$ t_d \sim \mathcal{W}\left(\mu,\sigma\right),$$ with density $f(t_d ;\mu,\sigma)=\sigma\,\mu\,t_d^{\sigma-1}\exp\left(-\mu\,t_d^{\sigma}\right)$ and $\mu>0,\,\sigma>0$ the location and scale parameters, respectively. In the GAMLSS model both parameters of the Weibull distribution were estimated using the same model, reading: \begin{equation} \eta_j = \beta_{0,j} + \sum_{i=1}^{6}\beta_{i,j}\mathcal{I}\left(x_{\text{weekday}}=i\right) + f_{1,j}\left(x_{\text{week}}\right), \,\,\, \eta_{j=1,2} \in \left\{\mu,\sigma\right\}. \end{equation} In the above, we denote with $\beta_{0,j}$ the intercepts for the location and scale, while $\beta_{i,j},\,\, i=1,\ldots,6$, are the model parameters reflecting the (categorical) effects of the days of the week when each case was reported to the health authorities. The effect of the reporting week is modeled by smoothing penalized splines (P-splines), here denoted by $f_{1,j}(\cdot)$ \cite{eilers2015twenty, gunther2020nowcasting}. \section{Results} For the numerical simulations we have used the Python (package ``scipy.stats'') \cite{2020SciPy-NMeth} and R \cite{ihaka1996r} programming languages. Our code for the imputation step was based on the code provided in the public GitHub repository\footnote[2]{\url{https://github.com/FelixGuenther/nc_covid19_bavaria}} for the manuscript \cite{gunther2020nowcasting}.
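As a rough illustration of the imputation step, the Weibull delay model above can be sampled by inverting its CDF, $F(t_d)=1-\exp(-\mu\,t_d^{\sigma})$. The parameter values below are hypothetical placeholders; in the fitted GAMLSS model, $\mu$ and $\sigma$ vary per case through the weekday and reporting-week covariates.

```python
import math
import random

def sample_delay(mu, sigma, rng):
    """Draw one delay from the Weibull density
    f(t) = sigma * mu * t**(sigma - 1) * exp(-mu * t**sigma)
    by inverting its CDF F(t) = 1 - exp(-mu * t**sigma)."""
    u = rng.random()
    return (-math.log(1.0 - u) / mu) ** (1.0 / sigma)

def impute_onsets(report_days, mu, sigma, seed=0):
    """Impute symptom-onset days by subtracting a sampled reporting
    delay (rounded to whole days and floored at day 0)."""
    rng = random.Random(seed)
    return [max(0, round(d - sample_delay(mu, sigma, rng))) for d in report_days]

# Illustrative parameters and toy reporting days (not fitted values).
onsets = impute_onsets([10, 12, 15, 30], mu=0.2, sigma=1.1)
```

Each imputed onset lies on or before its reporting day, as a reporting delay must be nonnegative.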
For the computations regarding the $R_e$ estimation, we used the R packages ``R0'' \cite{obadia2012r0} and ``EpiEstim'' \cite{coriepiestim}. \subsection{Descriptive Statistical Analysis of the Epidemiological Data} The daily numbers of notified cases and reported symptom onsets are presented in Fig. \ref{fig1}, while the cumulative numbers of cases and deaths are shown in Figs. \ref{fig:cumulative_cases} and \ref{fig:cumulative_deaths}, respectively. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.6]{data-eps-converted-to} \caption{Daily numbers of notified cases and symptom onsets in the Attica region, between 25/02/2020 and 30/05/2020. With dashed lines we indicate the start (23/03) and end (04/05) dates of the general lockdown period.}\label{fig1} \end{figure} \begin{figure}[H] \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{cumulative_cases_image.png} \caption{Cumulative SARS-CoV-2 infected cases} \label{fig:cumulative_cases} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{cumulative_deaths_image.png} \caption{Cumulative Deaths} \label{fig:cumulative_deaths} \end{subfigure} \caption{Cumulative SARS-CoV-2 infected cases (1,645 total) and deaths (107 total) over time.} \label{fig:cumulative_figures} \end{figure} Following the analysis in other studies for the determination of statistical characteristics of COVID-19 (see e.g. \cite{doi:10.1063/5.0013031},\cite{PPR:PPR173058},\cite{Marsland2020.04.21.20073890}, \cite{Nishimoto2020.07.02.20144899}), we have fitted the distributions of the time delays between (a) symptom onset and test, (b) symptom onset and death, (c) hospitalization and death, (d) hospitalization and discharge, and (e) test and death, using several types of distributions available in the Python package ``scipy.stats'' \cite{2020SciPy-NMeth} ((a) was directly fitted to a Weibull distribution following the references on onset date imputation in Section 3.2).
In particular, we used the following distributions\footnote[3]{The associated density functions can be found at \url{https://docs.scipy.org/doc/scipy/reference/stats.html}.}: Beta, Cauchy, Exponential, EMG, F, Logistic, Gamma, Inverse Gamma, Student's T and non-central Student's T, truncated Exponential and truncated Normal. The parameters of the above distributions were found by minimizing the residual sum of squares (RSS) between the empirical histogram and the distribution's pdf. Additionally, there is no evidence that these distributions vary over time throughout our dataset, so we assumed that they are constant during the entire period.\par We first found the best-fit parameters and then evaluated the corresponding moments in closed form. We then used bootstrapping to compute their 95\% confidence intervals. In Fig. \ref{fig:distributions} we present the best-fit distributions and in Table \ref{table:dist_details} we present the fitted parameters along with some relevant statistics. \begin{figure}[h!] \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{onset_to_test_fig_v2.png} \caption{Days between Symptom Onset and Test} \label{fig:onset_to_test} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{onset_to_death_fig_v2.png} \caption{Days between Symptom Onset and Death} \label{fig:onset_to_death} \end{subfigure}\\ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{hosp_to_death_fig_v2.png} \caption{Days between Hospitalization and Death} \label{fig:hosp_to_death} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{hosp_to_discharge_fig_v2.png} \caption{Days between Hospitalization and Discharge} \label{fig:hosp_to_discharge} \end{subfigure}\\ \begin{center} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{test_to_death_fig_v2.png} \caption{Days between Test and Death} \label{fig:test_to_death} \end{subfigure} \end{center}
\caption{Normalized histograms of the characteristic time intervals for the Attica dataset; the red lines depict the best-fit distributions. The specifics about the distribution parameters are summarized in Table \ref{table:dist_details}.} \label{fig:distributions} \end{figure} \begin{table}[h!] \begin{tabular}{|p{3cm}||p{2.5cm}|p{2.3cm}|p{2.3cm}|p{2.3cm}|p{2cm}| } \hline \multicolumn{6}{|c|}{Summary of Fitted Distributions in Figure \ref{fig:distributions}}\\ \hline Data & Distribution&Mean&St. Dev. & Parameters& RSS Error\\ \hhline{|=|=|=|=|=|=|} Onset to Test (\ref{fig:onset_to_test})&Weibull& 5.6&7.4&$c=0.70$&0.015\\ \hline C.I. (95\%) & &(3.8, 6.2) & (4.4, 12.2)& (0.5, 0.8) &\\ \hhline{|=|=|=|=|=|=|} Onset to Death (\ref{fig:onset_to_death})&Inverse Gamma&19.0&14.6&$a=5.18$&0.006\\ \hline C.I. (95\%) & & (16.6, 21.3) & (11.8, 25.0) &(3.5, 10.1) &\\ \hhline{|=|=|=|=|=|=|} Hospitalization to Death (\ref{fig:hosp_to_death})&Gamma&16.6&15.8&$a=1.11$&0.008\\ \hline C.I. (95\%) & &(13.9, 19.6) & (13.4, 21.7) & (0.6, 1.8)&\\ \hhline{|=|=|=|=|=|=|} Hospitalization to Discharge (\ref{fig:hosp_to_discharge})&EMG& 17.6&12.5&$K=2.74$&0.004\\ \hline C.I. (95\%) & &(16.9, 18.6)&(11.3, 13.5)& (2.3, 3.6)&\\ \hhline{|=|=|=|=|=|=|} Test to Death (\ref{fig:test_to_death}) & Beta & 15.2 & 12.6 & $a=0.88$\newline $b=2.55$&$0.012$\\ \hline C.I. (95\%) & &$(12.2, 19.3)$& $(10.5, 15.4)$& $(0.7, 1.1)$\newline $(1.5,5.0)$ & \\ \hline \end{tabular} \caption{Summary of best-fit distributions. } \label{table:dist_details} \end{table} We note that some of the distributions are supported for ``negative'' day delays, as it was likely for a positive test to be performed before the onset of symptoms. From Fig. \ref{fig:distributions} and Table \ref{table:dist_details} we can see several significant trends in the course of the illness.
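The RSS-based fitting described above can be sketched as follows; the synthetic gamma-distributed sample stands in for the registry delays, and a single candidate pdf replaces the full list of candidate distributions.

```python
import numpy as np
from scipy import stats, optimize

def rss_fit(data, pdf, x0, bins=30):
    """Fit a candidate pdf by minimizing the residual sum of squares
    between the normalized histogram of the data and the pdf values
    evaluated at the bin centers."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def rss(params):
        # Guard: this candidate pdf assumes strictly positive parameters.
        if np.any(np.asarray(params) <= 0):
            return np.inf
        return np.sum((heights - pdf(centers, *params)) ** 2)

    res = optimize.minimize(rss, x0, method="Nelder-Mead")
    return res.x, res.fun

# Synthetic gamma-distributed delays standing in for, e.g., the
# hospitalization-to-death intervals (illustrative data only).
rng = np.random.default_rng(0)
delays = rng.gamma(shape=1.1, scale=15.0, size=5000)

# Candidate: two-parameter gamma pdf (shape a, scale s).
gamma_pdf = lambda x, a, s: stats.gamma.pdf(x, a, scale=s)
params, err = rss_fit(delays, gamma_pdf, x0=[1.0, 10.0])
```

Bootstrapped confidence intervals, as used in Table \ref{table:dist_details}, would repeat this fit on resampled data.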
Regarding hospitalizations, we see that regardless of the outcome of the illness, patients are likely to spend a long time in the hospital (likely between twenty days and a month), which suggests that the cumulative effect, should the virus spread to a large part of the population, might be detrimental to the healthcare system. Additionally, we see that patients are, on average, tested significantly later than their symptom onset. Given that one is most contagious during the first few days of exhibiting symptoms \cite{cevik2020sars}, it is up to the patient to quarantine themselves before the positive result, and from a policy perspective, lowering that delay (performing more tests sooner) may have a significant impact on the spread of the disease. Timing and availability of tests have become significantly better throughout the course of the pandemic, and future extended data are likely to reflect that trend. \subsection{Imputation Results} The association of the covariates with the median delay time of the fitted Weibull GAMLSS model is presented in Fig. \ref{fig2}. We find an increase of the median delay time from the 11th until the 15th week, followed by a decrease until the 20th week. We note that the estimated week effect is in agreement with the empirical median delay values. Moreover, we can see differences among the different days of the week (e.g. higher/lower median delays for cases reported on Tuesdays/Wednesdays). Details regarding the model diagnostics are presented in the supplementary material. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.6]{glm1-eps-converted-to.pdf} \caption{Results of the Weibull GAMLSS imputation model. Estimated median of the delay time given case-specific covariates (reporting week, weekday of reporting).}\label{fig2} \end{figure} We used the fitted Weibull GAMLSS model in order to impute the disease onsets, leaving the other observations unaffected. In Fig.
\ref{fig3} we present the reconstructed epidemic curve depicting both the observed and imputed numbers of cases with symptom onset for each day between 24/02 and 30/05. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.85]{imputed-eps-converted-to.pdf} \caption{Imputation of the Attica COVID-19 data, from 24/02/2020 until 30/05/2020. We present the observed (gray) and imputed (blue) numbers of symptom onset cases.}\label{fig3} \end{figure} \subsection{Results for the effective reproduction number $R_e$} \subsubsection{Estimation of $R_e$ by the Wallinga and Teunis method} Our estimations for $R_e(t)$ by the Wallinga and Teunis (Time-Dependent) method are presented in Table \ref{tab1} and Fig. \ref{fig:w&t}, together with their 95\% confidence intervals. For the calculations we have used the R package ``R0'' \cite{obadia2012r0}. The method was applied not only to the daily data, but also to the associated time series obtained by taking a weekly rolling average. In Table \ref{tab1} we present the effective reproduction number estimates at 15-day intervals together with their $95\%$ confidence intervals (CIs), for both applications of the method.
\begin{table} \begin{center} \begin{tabular}{|c | c c c c c c|} \hline Date $(t)$ & 01/03 & 15/03 & 01/04 & 15/04 & 01/05 & 15/05 \\ [0.5ex] \hline \hline $\hat{R}_e(t)\,\textlatin{(daily)}$ & 2.90 & 0.83 & 0.64 & 1.2 & 0.79 & 0.64 \\ [0.5ex] \hline $\left(\hat{R}^{l}_e(t),\hat{R}^{u}_e(t)\right)$ & (1.88,4.00) & (0.63,1.05) & (0.38,0.90) & (0.64,1.82) & (0.36,1.27) & (0.00, 1.51) \\ [0.5ex] \hline \hline $\hat{R}_e(t)\,\textlatin{(weekly r.av.)}$ & 3.05 & 0.87 & 0.67 & 1.11 & 0.85 & 0.62 \\ [0.5ex] \hline $\left(\hat{R}^{l}_e(t),\hat{R}^{u}_e(t)\right)$ & (1.83,4.33) & (0.66,1.07) & (0.36,0.98) & (0.48,1.69) & (0.27,1.51) & (0.00,1.32) \\ [0.5ex] \hline \end{tabular} \caption{Estimated $R_e(t)$ (effective reproduction number) with the associated $95\%$ confidence intervals using the WT method, for the daily data and their weekly rolling average.} \label{tab1} \end{center} \end{table} In Fig. \ref{fig:w&t} we present the estimates for $R_e(t)$, superimposed with the $95\%$ CIs, obtained by $5,000$ simulation runs. We indicate with dashed lines the dates corresponding to the closure of schools and educational institutions (11/03), the closure of restaurants, cafeterias and bars (13/03), as well as the beginning (23/03) and end (03/05) of the lockdown period. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.65]{wtrt.jpeg} \caption{Wallinga and Teunis (time-dependent) \cite{wallinga2004different} estimator of $R_e$ for the daily data (blue) and the weekly rolling average (red).} \label{fig:w&t} \end{figure} The estimated $R_e$ fell below 1 for the first time on 13/03 and remained below 1 until 13/04; a rise above 1 is seen from 14/04 to 19/04. From 20/04 onwards until 15/05, the estimated $R_e$ did not exceed 1. The estimated $R_e(t)$ is in agreement between the daily data and the weekly rolling averages. Notably, the rolling averages result in a smoother $R_e$ curve, with lower uncertainty compared to the daily counts.
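The WT computation behind these estimates can be sketched in a few lines, assuming onset days on an integer day grid and a toy serial-interval pmf in place of the fitted lognormal:

```python
from collections import defaultdict

def wallinga_teunis(onset_days, w):
    """Sketch of the Wallinga-Teunis estimator: onset_days[i] is the
    symptom-onset day of case i; w[d] is the serial-interval probability
    for a lag of d days (lags start at 1 day, so w[0] is unused)."""
    n = len(onset_days)

    def w_lag(d):
        return w[d] if 0 < d < len(w) else 0.0

    # R_{e,j} = sum_i p_{ij}, with p_{ij} = w(t_i - t_j) / sum_{k != i} w(t_i - t_k)
    R = [0.0] * n
    for i in range(n):
        denom = sum(w_lag(onset_days[i] - onset_days[k]) for k in range(n) if k != i)
        if denom == 0.0:
            continue  # no plausible infector for case i
        for j in range(n):
            if j != i:
                R[j] += w_lag(onset_days[i] - onset_days[j]) / denom

    # R_e(t): average R_{e,j} over all cases with onset on day t.
    by_day = defaultdict(list)
    for j, t in enumerate(onset_days):
        by_day[t].append(R[j])
    return {t: sum(v) / len(v) for t, v in sorted(by_day.items())}

# Toy example: uniform serial interval over lags of 1-4 days (hypothetical).
w = [0.0, 0.25, 0.25, 0.25, 0.25]
Re = wallinga_teunis([0, 2, 3, 3, 5], w)
```

Here `Re` maps each onset day to the average $R_{e,j}$ of the cases with onset on that day; the actual estimates and confidence intervals in Table \ref{tab1} come from the R package ``R0''.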
\subsubsection{Estimation of $R_{e}(t)$ by the Cori et al. method} For the $R_{e}(t)$ estimations using the Cori et al. method, we have used the R package ``EpiEstim'' \cite{coriepiestim}, choosing 3 different widths of the smoothing window, namely $\tau \in \left\{3,7,14\right\}$ days. We note that, following Cori et al. \cite{cori2013new}, we report $R_{e,\tau}(t)$ for $t$ corresponding to the end of the window. The obtained estimates and their 95\% credible intervals are presented in Table \ref{tab2} and Fig. \ref{irn2}. \begin{table}[H] \begin{center} \begin{tabular}{|c || c c c c c c|} \hline Date $(t)$ & 01/03 & 15/03 & 01/04 & 15/04 & 01/05 & 15/05 \\ [0.5ex] \hline \hline $\hat{R}_{e,\tau}(t),\, \tau=3$ & 4.67 & 1.63 & 0.78 & 0.73 & 0.78 & 0.72 \\ [0.5ex] \hline $\left(\hat{R}_{e,\tau}^{l}(t),\hat{R}_{e,\tau}^{u}(t)\right)$ & (2.67,7.47) & (1.44,1.83) & (0.64,0.94) & (0.52,1.00) & (0.54,1.09) & (0.43,1.12) \\ [0.5ex] \hline \hline $\hat{R}_{e,\tau}(t),\, \tau=7$ & - & 1.79 & 0.73 & 0.75 & 0.74 & 0.77 \\ [0.5ex] \hline $\left(\hat{R}_{e,\tau}^{l}(t),\hat{R}_{e,\tau}^{u}(t)\right)$ & - & (1.62,1.96) & (0.63,0.84) & (0.59,0.94) & (0.56,0.95) & (0.53,1.06) \\ [0.5ex] \hline \hline $\hat{R}_{e,\tau}(t),\, \tau=14$ & - & 1.96 & 0.77 & 0.68 & 0.9 & 0.84 \\ [0.5ex] \hline $\left(\hat{R}_{e,\tau}^{l}(t),\hat{R}_{e,\tau}^{u}(t)\right)$ & - & (1.80,2.13) & (0.71,0.84) & (0.59,0.78) & (0.83,1.15) & (0.67,1.04) \\ [0.5ex] \hline \end{tabular} \caption{Estimated $R_{e,\tau}(t)$ using the method of Cori et al. \cite{cori2013new} for different sizes $\tau$ of the sliding window.}\label{tab2} \end{center} \end{table} Increasing the window width leads to longer delays in the associated estimates and to smoother curves. Due to the trade-off between noise and oversmoothing, a reasonable choice is $\tau = 7$.
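A minimal sketch of the two ingredients used here, the discretized serial interval of Section 3.1.1 and a windowed Gamma posterior in the spirit of Cori et al., might look as follows. The Gamma prior parameters and the incidence series below are illustrative assumptions; the published estimates come from the ``EpiEstim'' package.

```python
import numpy as np
from scipy import stats

def discretized_serial_interval(mean=4.7, sd=2.9, max_days=20):
    """Right-truncated, discretized lognormal serial interval
    (mean 4.7, sd 2.9, following Nishiura et al.)."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    dist = stats.lognorm(s=np.sqrt(sigma2), scale=mean * np.exp(-sigma2 / 2.0))
    days = np.arange(1, max_days + 1)
    w = dist.cdf(days) - dist.cdf(days - 1)
    return w / w.sum()  # w[d-1] = P(serial interval = d days)

def cori_Rt(incidence, w, tau=7, a=1.0, b=5.0):
    """Windowed posterior mean and 95% credible interval for R_e(t),
    reported at the end day t of each window, assuming a Gamma(a, scale=b)
    prior (illustrative prior parameters, not EpiEstim's defaults)."""
    I = np.asarray(incidence, dtype=float)
    # Total infectiousness Lambda_t = sum_s I_{t-s} w(s)
    Lam = np.array([sum(I[t - s] * w[s - 1]
                        for s in range(1, min(t, len(w)) + 1))
                    for t in range(len(I))])
    out = {}
    for t in range(tau, len(I)):
        shape = a + I[t - tau + 1: t + 1].sum()
        rate = 1.0 / b + Lam[t - tau + 1: t + 1].sum()
        post = stats.gamma(shape, scale=1.0 / rate)
        out[t] = (post.mean(), post.ppf(0.025), post.ppf(0.975))
    return out

w = discretized_serial_interval()
est = cori_Rt([5, 8, 12, 18, 25, 30, 33, 35, 34, 30, 26, 22], w, tau=7)
```

With a flat incidence series, this posterior concentrates near 1, as expected of a reproduction number under a stationary epidemic.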
\begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.7]{cori_3t} \caption{$R_{e}(t)$ \cite{cori2013new} (median) estimator from the EpiEstim \cite{coriepiestim} package for different values of the smoothing window $\tau$, superimposed with the 95\% credible intervals.}\label{irn2} \end{figure} As we can see in Fig. \ref{irn2}, initially there is a high $R_{e,\tau}$ (probably a result of overestimation \cite{cori2013new,gostic2020practical}), followed by a gradual decrease below 1 before the lockdown. In particular, the estimated $R_{e,\tau}$ fell for the first time below 1 on 21/03 ($\tau=7$). For $\tau=3$ and $\tau=14$, the associated dates of the first fall of $R_e$ below 1 are 19/03 and 26/03, respectively. During the lockdown, we observe an increase of $R_{e,\tau}$ above 1 that comes before a stabilization of $R_{e,\tau}$ below 1 from 27/04 until 15/05 (with $\tau=7$). The same pattern is observed with the estimated $R_e$ by the Wallinga-Teunis method, with the difference that, as expected, the $R_{e,\tau}$ estimates follow the WT estimates in time. Specifically, the periods during the lockdown in which the estimated effective reproduction numbers exceed 1 are 14/04-19/04 for WT and 21/04-26/04 for Cori et al. (with $\tau=7$). The trade-off between high noise and oversmoothing is illustrated by the estimates obtained with $\tau=3$ and $\tau=14$. Specifically, for $\tau=3$ the $R_e$ curve is sensitive to small changes of the daily counts, leading to high fluctuations, especially at the beginning of May, where e.g. $\hat{R}_{e,\tau=3} = 1.07$ on 09/05, with CI: (0.68,1.24). On the contrary, the $\hat{R}_{e,\tau=14}$ estimates remain below 1 from 27/03 until 15/05, with the only exception being a value of $\hat{R}_{e,\tau=14} = 1.005$ on 23/04, with CI: (0.86,1.17). Two main observations are the decrease of $R_e(t)$ below $1$ before the lockdown measures were imposed, as well as the aforementioned increase in the middle of April.
The first observation is likely related to the decrease in population mobility, while the second is probably related to a peak in notified cases due to increased testing of asymptomatic cases in the second half of April (e.g. clusters at private healthcare facilities, testing of contacts). \subsection{Sensitivity Analysis} In order to assess the impact of imputation, we have repeated the computations with the WT and Cori et al. methods using as input data the available number of cases with known date of symptom onset, without performing imputation. The results of this analysis are shown in Fig. \ref{sa}, where we superimpose the WT and Cori et al. estimators (with a smoothing window width $\tau=7$). Indicative points together with their uncertainty intervals are reported in Table \ref{tab3}. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.8]{sa.jpeg} \caption{Sensitivity analysis: WT and Cori et al. estimations of $R_e$ with the associated 95\% uncertainty intervals, based on the imputed cases (blue), and non-imputed data (green).}\label{sa} \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{|c || c c c c c c|} \hline Date $(t)$ & 01/03 & 15/03 & 01/04 & 15/04 & 01/05 & 15/05 \\ [0.5ex] \hline \hline $\hat{R}_e(t),\,w_r=1$ & 2.82 & 0.84 & 0.59 & 1.07 & 1.02 & 0.53 \\ [0.5ex] \hline $\left(\hat{R}_e^{l}(t),\hat{R}_e^{u}(t)\right)$ & (1.88,4.00) & (0.63,1.05) & (0.38,0.93) & (0.64,1.82) & (0.36,1.27) & (0.00,1.51) \\ [0.5ex] \hline \hline $\hat{R}(t),\, w_s=7$ & - & 1.90 & 0.70 & 0.74 & 0.63 & 0.86 \\ [0.5ex] \hline $\left(\hat{R}^{l}(t),\hat{R}^{u}(t)\right)$ & - & (1.72,2.10) & (0.60,0.81) & (0.56,0.96) & (0.43,0.89) & (0.56,1.24) \\ [0.5ex] \hline \end{tabular} \caption{Estimated $R_e(t)$ (effective reproduction number) and $R(t)$ (instantaneous reproduction number) for the observed (not imputed) data, with 95\% uncertainty intervals.}\label{tab3} \end{center} \end{table} Regarding the non-imputed data WT estimations, we
observe a fall of $R_e$ below 1 at 13/03, followed by an increase above 1 during the interval from 15/04 until 18/04, which is similar to the imputation-based results. However, we notice an additional rise of $R_e$ between 29/04-04/05, followed by a steady decline. This is probably due to the small number of available cases, as indicated by the high level of uncertainty (e.g. at 29/04 the confidence interval ranges from 0.17 to 1.5). A similar behavior is also observed in the non-imputed estimates obtained with the Cori et al. method. While the estimated $R_e$ curve is in agreement with the imputation-based results, we find an additional increase above 1 during 07/05-12/05. In summary, the imputation process did not lead to qualitatively significant changes in the estimates, while it produced more consistent results with lower levels of uncertainty.\par While it is safe to say that the measures played a significant role in consistently keeping $R(t)$ below $1$ in the months following their implementation, data-driven estimators may have difficulty capturing their effect (in conjunction with estimating the overall $R_e(t)$) during the early stages of the spread. One way to retrospectively correct this would be to use data from the second and third waves of the pandemic in several countries (Greece included), during which testing was more widespread and accurate; making reversed-time projections might give us different estimates for the true number of cases in February or March.\par We are also interested in examining the effect of the uncertainty stemming from the imputation process on the obtained estimators. To this end, we generate 1,000 imputed data sets (sampling for each data set the delay times from the fitted Weibull distribution) and calculate the $R_e(t)$ estimations for both the WT and Cori et al. ($\tau=7$) methods. From the obtained 1,000 estimations for each day, we calculate the medians and present them in Fig. 
\ref{uq}, together with the uncertainty intervals constructed from the associated 2.5\% and 97.5\% quantiles (for both methods). \par \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.8]{re_medians_1000.jpeg} \caption{Sensitivity analysis: Medians of the WT and Cori et al. $R_e(t)$ estimations, with the associated 95\% uncertainty intervals, based on 1,000 imputed data sets.}\label{uq} \end{figure} As anticipated, the Cori et al. estimates follow the WT ones in time. In order to further examine this relation, we present in Fig. \ref{ccf} the cross-correlation function \cite{shumway2017time} between the WT and Cori et al. estimations of $R_e(t)$ (displayed in Fig. \ref{uq}). The peak at lag $-7$ with correlation 0.92 indicates a statistically significant, strong positive correlation between the two time series, with the Cori et al. estimates following the WT ones, which are 7 days ahead in time. One main reason is that the WT estimations are defined forward in time, in contrast to the Cori et al. $R_e(t)$ results, which use observations only up to time $t$. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.8]{ccf_wt_cori} \caption{Cross-correlation function between the WT and Cori et al. estimations of $R_e(t)$. We indicate with a red circle the value of maximum correlation (0.92) at lag $-7$.}\label{ccf} \end{figure} \section{Discussion and Conclusions} Since early 2020, the world has been facing a pandemic with unprecedented health, economic and societal consequences globally. In order to limit person-to-person transmission and contain the pandemic, strict social distancing measures, including nationwide lockdowns, were implemented in most countries. \par To the best of our knowledge, this is the first study to estimate the $R_e$ factor during the first COVID-19 epidemic wave in Greece. 
This knowledge is of utmost importance in order to guide future public health interventions, at both the national and the regional level. For this reason, we performed an analysis using data of notified cases in the Attica region, where almost half of the country's population resides. It is important to note that Greece is characterized by high geographic heterogeneity regarding the spread of SARS-CoV-2, especially in the early period. Thus, an analysis using data from the whole country may yield misleading or inaccurate results; many models are built around the assumption of a well-mixed population. We expect that the densely populated and relatively cohesive and homogeneous character of the population of the metropolitan region of Athens and its suburbs, compared to the rest of the country, is better suited for this type of analysis. Our analysis is based on the imputed number of cases with respect to dates of symptom onset and not on the number of notified cases.\par In this work, we implemented two of the most widely used methods (those of Wallinga \& Teunis and of Cori et al. \cite{cori2013new}) for the estimation of the effective reproduction number for the first wave of the pandemic in the greater metropolitan area of Athens, Greece. Similar studies regarding the estimation of the effective reproduction number have been performed for the cases of Bavaria \cite{gunther2020nowcasting}, Spain \cite{santamaria2020covid}, Latin American countries \cite{ochoa2020effective}, the Philippines \cite{haw2020epidemiological}, and South Korea \cite{ryu2020effect}, among others. \par As a first step, we imputed the dates of symptom onset based on the dates of notified cases. For that purpose, we used a GAMLSS model to fit a Weibull distribution to approximate the delay between the time of onset of symptoms and the time of reporting. The imputation provided evidence of significant effects of the reporting week and weekday. The Wallinga \& Teunis and Cori et al. 
methods gave qualitatively similar results. In the first period, from February 26 to early March, both methods estimated a relatively high $R_t$ with high uncertainty, which should be attributed to the very low numbers of notified cases. Thus, on March 1, the $R_e$ was estimated to be around 3 (95\% CI: 1.80,4.3). Our analysis suggests that $R_e$ dropped below 1 around March 15. In particular, on March 15 $R_e$ was around 0.85 (95\% CI: 0.65,1.05). This is also in line with the result reported by Lytras et al. \cite{lytras2020improved} for the whole country; in particular, they also find that $R_e$ dropped below 1 one week before the implementation of the full lockdown. However, both methods that we used for our analysis showed an increase of $R_t$ during the lockdown, which should be attributed to an increase of notified cases in specific clusters. On April 15, both methods resulted in an expected value of $R_e$ above 1, but the results are characterized by relatively high uncertainty. More specifically, the Wallinga \& Teunis method estimated $R_e=1.2$ (95\% CI: 0.64,1.82) and the Cori et al. method estimated $R_e=1.2$ (95\% CI: 0.48,1.69). After that date, both methods agreed in estimating $R_e$ below 1, although with upper bounds still above 1. The above findings were quantitatively similar when we considered just the notified cases, i.e. without imputation. \par We should note that the above analysis should be viewed critically, as one has to consider the validity of the application of both methods for the calculation of $R_e$. First, neither method takes imported cases into account (although both methods can be modified to do so). Thus, this assumption may have been violated for the first period before the lockdown. 
However, as (a) February is not a tourist season for Greece, and (b) the pandemic had already limited traveling by early February, we expect that the violation of this assumption is not critical for the results. In fact, based on the statistics released by Eurostat \cite{Eurostat}, the drop in the number of nights spent in tourist accommodation establishments in the period January-February 2020 (compared with the same period of 2019) was of the order of 43\% in Greece, the biggest drop among the EU27 countries. Another important assumption of both methods is that the infection network can be constructed based only on the notified cases. However, in general, due to under-reporting, this assumption can hardly be met. On the other hand, as the number of cases in Greece, and in the greater metropolitan area of Athens in particular, remained at very low levels during the first wave of the pandemic, the above assumption could be considered to be partly valid. Finally, we note that the results of such analyses for the estimation of $R_e$ should be taken with caution for policy making, as the assumptions underlying their implementation can easily be violated, mostly due to under-reporting. \section{Acknowledgments} We acknowledge the National Public Health Organization (EODY) for providing us with the detailed epidemiological data for Attica, Greece. \section*{Supplementary information} We provide here additional information related to the fitting of the GAMLSS model for the imputation step. Regarding the data used to fit the model, we use the notified cases with calendar week number ranging from 10 to 21 and non-negative delay time $t_d \leq 20$. For cases with $t_d=0$ days, we set $t_d=0.5$ days. This is not only due to the necessary log-transformation, but also because in practice the delay time cannot be exactly 0. 
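The imputation step itself, shifting each reporting date back by a delay drawn from the fitted Weibull distribution, can be sketched as below. This is a schematic stand-in for the GAMLSS-based procedure: the shape and scale values used are placeholders, not the fitted estimates, and the 0.5-day floor mirrors the $t_d=0.5$ convention above.

```python
import numpy as np

def impute_onsets(report_days, shape, scale, rng=None):
    """Impute symptom-onset days by subtracting Weibull-distributed delays.

    report_days  : integer day index of notification for each case
    shape, scale : Weibull parameters of the reporting delay (assumed given)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    report_days = np.asarray(report_days)
    # numpy's weibull() draws the standard (scale-1) variate; multiply by scale
    delays = scale * rng.weibull(shape, size=len(report_days))
    delays = np.maximum(delays, 0.5)  # a delay cannot be exactly 0 days
    return report_days - np.rint(delays).astype(int)
```

Repeating this draw many times (e.g. 1,000 times) and aggregating the imputed onset days into daily counts (e.g. with `np.bincount`) yields an ensemble of epidemic curves over which medians and quantile bands, as in Fig. \ref{uq}, can be computed.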
Our code was based on the code provided in the public GitHub repository\footnote{\url{https://github.com/FelixGuenther/nc_covid19_bavaria}} for the manuscript \cite{gunther2020nowcasting}. For the sake of completeness, we provide below the output produced by R. The associated p-values for the calendar week number (rep\_week\_local) and the specific weekdays (rep\_date\_local\_weekday) are presented in the last column. \verbatiminput{output.txt} In order to further examine the model fit, we present in Fig. \ref{resdiag} the kernel density estimate (KDE) and the qq-plot of the (normalized randomized) quantile residuals. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.8]{diagnostics-eps-converted-to.pdf} \caption{KDE (left) and qq-plot (right) of the (normalized quantile) residuals of the fitted GAMLSS model.}\label{resdiag} \end{figure} We note that the quantile residuals of this model behave well: their mean is nearly zero ($-0.002$), their variance nearly one (1.00), their coefficient of skewness near zero (0.0177), and their coefficient of kurtosis near 3 (2.44). Taking also into account the residual KDE and qq-plot presented in Fig. \ref{resdiag}, we conclude that the residuals follow an approximately normal distribution, indicative of an adequate model. Finally, in Fig. \ref{mu_sigma} we present the obtained parameters $\mu$ and $\sigma$ of the delay time Weibull distribution with respect to the weekday and the week number. \begin{figure}[H] \centering \includegraphics[keepaspectratio,scale=0.6]{weibull_ms-eps-converted-to} \caption{Fitted parameters $\mu$ and $\sigma$ of the delay time Weibull distribution with respect to the weekday and the week number.}\label{mu_sigma} \end{figure} \bibliographystyle{unsrt}
\section{\@startsection {section}{1}{\zeta@}% {-3.5ex \@plus -1ex \@minus -.2ex {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\zeta@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \let\non\nonumber \let\alpha=\alpha\let\beta=\beta\let\delta=\delta \let\eta=\eta\let\th=\theta\let\kappa=\kappa\let\lambda=\lambda \let\rho=\rho \let\sigma=\sigma\let\tau=\tau\let\upsilon=\upsilon \let\w=\wedge \let\xi=\xi\let\y=\psi \let\zeta=\zeta\let\Pi=\Pi\let\Sigma=\Sigma \let\Th=\Theta \newcommand{\partial}{\partial} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\pref}[1]{(\ref{#1})} \def\coeff#1#2{{\textstyle\frac{#1}{#2}}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\frac{1}{4}}{\frac{1}{4}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\rm R}}{{\mathbb R}} \newcommand{{\cal M}}{{\cal M}} \newcommand{\Theta}{{\mathbb Q}} \newcommand{\widetilde{X}}{\widetilde{X}} \newcommand{\Omega}{\Omega} \newcommand{{\mathbb J}}{{\mathbb J}} \newcommand{\operatorname{Spin}}{\operatorname{Spin}} \newcommand{\operatorname{SO}}{\operatorname{SO}} \renewcommand{\Omega}{\operatorname{O}} \newcommand{\Lambda}{\Lambda} \newcommand{\lambda}{\lambda} \newcommand{\theta}{\theta} \newcommand{\Gamma}{\Gamma} \newcommand{\Phi}{\Psi} \newcommand{\epsilon}{\epsilon} \newcommand{\overline{\epsilon}}{\overline{\epsilon}} \newcommand{\Lambda}{\Lambda} \newcommand{\delta}{\delta} \newcommand{\com}[2]{{ \left[ #1, #2 \right] }} \newcommand{\acom}[2]{{ \left\{ #1, #2 \right\} }} \newcommand{\rightarrow}{\rightarrow} \newcommand{\mu}{\mu} \newcommand{\nu}{\nu} \newcommand{\partial}{\partial} \newcommand{\widehat{A}}{\widehat{A}} \newcommand{\widehat{\F}}{\widehat{\Phi}} \newcommand{{\LL_\T}}{{\Lambda_\theta}} \def\com#1#2{{ \left[ #1, #2 \right] }} 
\def\acom#1#2{{ \left\{ #1, #2 \right\} }} \newcommand{\widetilde{q}}{\widetilde{q}} \newcommand{\widetilde{p}}{\widetilde{p}} \newcommand{\phi}{\psi} \newcommand{{\bar\psi}}{{\bar\psi}} \newcommand{\widetilde{\f}}{\widetilde{\phi}} \newcommand{\tilde z}{\tilde z} \newcommand{\tilde g}{\tilde g} \newcommand{\hat y}{\hat y} \newcommand{\hat z}{\hat z} \newcommand{\hat x}{\hat x} \newcommand{\hat{x}^-}{\hat{x}^-} \newcommand{\hat{x}^+}{\hat{x}^+} \newcommand{\hat{p}^+}{\hat{p}^+} \newcommand{\hat{p}^-}{\hat{p}^-} \newcommand{\hat \psi}{\hat{p}_x} \newcommand{\hat{p}_z}{\hat{p}_z} \newcommand{\widetilde{K}}{\widetilde{K}} \newcommand{\widehat M}{\widehat M} \newcommand{\hat w}{\hat w} \newcommand{\widehat \alpha}{\widehat \alpha} \newcommand{x^+}{x^+} \newcommand{x^-}{x^-} \newcommand{\alpha'}{\alpha'} \newcommand{\alpha}{\alpha} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\ell_s}{\ell_s} \newcommand{\ell_p}{\ell_p} \newcommand{\Om}[1]{\ensuremath{\mathrm{O}#1^-}} \newcommand{\Op}[1]{\ensuremath{\mathrm{O}#1^+}} \newcommand{\Omt}[1]{\ensuremath{\widetilde{\mathrm{O}#1}{}^-}} \newcommand{\Opt}[1]{\ensuremath{\widetilde{\mathrm{O}#1}{}^+}} \newcommand{\Delta}[1]{\ensuremath{\mathrm{D}#1}} \newcommand{\C}[1]{$(\ref{#1})$} \newcommand{\comment}[1]{{\bf #1}} \newcommand{\not\!\!X}{\not\!\!X} \newcommand{\not\!\!P}{\not\!\!P} \newcommand{{d}}{{d}} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{THIS IS A LATEX FILE: LATEX TWICE, AS USUAL. 
} \typeout{} \typeout{} \def{e.g.}{{\it e.g.}} \def{\it i.e.}{{\it i.e.}} \def\IZ{\relax\ifmmode\mathchoice {\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}} {\lower.9pt\hbox{\cmsss Z\kern-.4em Z}} {\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em Z}\fi} \def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}} \def{e.g.}{{e.g.}} \def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}} \def{\rm gh}{{\rm gh}} \def{\rm sgh}{{\rm sgh}} \def{\rm NS}{{\rm NS}} \def{\rm R}{{\rm R}} \def{\rm i}{{\rm i}} \def{\bar z}{{\bar z}} \def\comm#1#2{\left[ #1, #2\right]} \def\acomm#1#2{\left\{ #1, #2\right\}} \def{\rm tr\,}{{\rm tr\,}} \def{\rm Tr\,}{{\rm Tr\,}} \newcommand{{\cal N}}{{\cal N}} \newlength{\bredde} \def\slash#1{\settowidth{\bredde}{$#1$}\ifmmode\,\raisebox{.15ex}{/} \hspace*{-\bredde} #1\else$\,\raisebox{.15ex}{/}\hspace*{-\bredde} #1$\fi} \newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}} \newcommand {\Rbar} {{\mbox{\rm$\mbox{I}\!\mbox{R}$}}} \newcommand {\Hbar} {{\mbox{\rm$\mbox{I}\!\mbox{H}$}}} \newcommand {\Cbar} {\mathord{\setlength{\unitlength}{1em} \begin{picture}(0.6,0.7)(-0.1,0) \put(-0.1,0){\rm C} \thicklines \put(0.2,0.05){\line(0,1){0.55}} \end {picture}}} \newsavebox{\zzzbar} \sbox{\zzzbar} {\setlength{\unitlength}{0.9em} \begin{picture}(0.6,0.7) \thinlines \put(0,0){\line(1,0){0.6}} \put(0,0.75){\line(1,0){0.575}} \multiput(0,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \multiput(0.2,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \put(0,0.75){\line(0,-1){0.15}} \put(0.015,0.75){\line(0,-1){0.1}} \put(0.03,0.75){\line(0,-1){0.075}} \put(0.045,0.75){\line(0,-1){0.05}} \put(0.05,0.75){\line(0,-1){0.025}} \put(0.6,0){\line(0,1){0.15}} \put(0.585,0){\line(0,1){0.1}} \put(0.57,0){\line(0,1){0.075}} \put(0.555,0){\line(0,1){0.05}} \put(0.55,0){\line(0,1){0.025}} \end{picture}} \newcommand{\mathord{\!{\usebox{\zzzbar}}}}{\mathord{\!{\usebox{\zzzbar}}}} \def{\rm Im ~}{{\rm Im ~}} \def{\rm Re ~}{{\rm Re ~}} \newcommand{\bra}[1]{\langle{#1}|} 
\newcommand{\ket}[1]{|{#1}\rangle} \newcommand{\vev}[1]{\langle{#1}\rangle} \newcommand{\braket}[2]{\langle{#1}|{#2}\rangle} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\sect}[1]{Section~\ref{#1}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\eq}[1]{(\ref{#1})} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\chap}[1]{Chapter~\ref{#1}} \def\Gamma{\Gamma} \def{\cal K}{{\cal K}} \def{\cal N}{{\cal N}} \def{\cal H}{{\cal H}} \def{\cal V}{{\cal V}} \renewcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{{\bar z}}{{\bar z}} \newcommand{{\bar j}}{{\bar j}} \def{\cal S}{{\cal S}} \def\alpha{\alpha} \def\beta{\beta} \def\chi{\chi} \def\delta{\delta} \def\epsilon{\epsilon} \def\gamma{\gamma} \def\eta{\eta} \def\iota{\iota} \def\psi{\psi} \def\kappa{\kappa} \def\lambda{\lambda} \def\mu{\mu} \def\nu{\nu} \def\omega{\omega} \def\theta{\theta} \def\rho{\rho} \def\sigma{\sigma} \def\tau{\tau} \def\xi{\xi} \def\zeta{\zeta} \def\Delta{\Delta} \def\Phi{\Phi} \def\Gamma{\Gamma} \def\Psi{\Psi} \def\Lambda{\Lambda} \def\Omega{\Omega} \def\Pi{\Pi} \def\Theta{\Theta} \def\Sigma{\Sigma} \def\Upsilon{\Upsilon} \def\Xi{\Xi} \renewcommand{\operatorname{Spin}}{\operatorname{Spin}} \renewcommand{\operatorname{SO}}{\operatorname{SO}} \renewcommand{\Omega}{\operatorname{O}} \newcommand{\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.32cm} D}{\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.32cm} D} \newcommand{\left(}{\left(} \newcommand{\right)}{\right)} \newcommand{\left[}{\left[} \newcommand{\right]}{\right]} \newcommand{R}{R} \begin{document} \begin{titlepage} \begin{center} \vskip 2 cm {\Large \bf Integrating simple genus two string invariants over moduli space}\\ \vskip 1.25 cm { Anirban Basu\footnote{email address: anirbanbasu@hri.res.in} } \\ {\vskip 0.5cm Harish--Chandra Research Institute, HBNI, Chhatnag Road, Jhusi,\\ Prayagraj 211019, India} \end{center} \vskip 2 
cm \begin{abstract} \baselineskip=18pt We consider an $Sp(4,\mathbb{Z})$ invariant expression involving two factors of the Kawazumi--Zhang (KZ) invariant, each of which is a modular graph with one link, and four derivatives on the moduli space of genus two Riemann surfaces. Manipulating it, we show that the integral over moduli space of a linear combination of a modular graph with two links and the square of the KZ invariant reduces to a boundary integral. We also consider an $Sp(4,\mathbb{Z})$ invariant expression involving three factors of the KZ invariant and six derivatives on moduli space, from which we deduce that the integral over moduli space of a modular graph with three links reduces to a boundary integral. In both cases, the boundary term is completely determined by the KZ invariant. We show that both integrals vanish. \end{abstract} \end{titlepage} \section{Introduction} Multiloop scattering amplitudes involving massless external states in perturbative string theory contain very useful information about the structure of the effective action. While in general a detailed analysis of such amplitudes is difficult to perform, the analysis simplifies in compactifications which preserve a large amount of supersymmetry. The structure of the amplitudes involving gravitons as external states in toroidally compactified type II string theory, which preserves maximal supersymmetry, has been analyzed at genus one as well as at genus two, while very little is known beyond that. The $\alpha'$ expansion of these amplitudes yields terms that are analytic as well as non--analytic in the external momenta in the effective action. In order to obtain the precise coefficients of the various analytic terms, one has to integrate over the entire moduli space of the Riemann surface, where the integrand is built out of string invariants or modular graph forms~\cite{DHoker:2015gmr,DHoker:2015wxz}. 
These graphs have links given by the scalar Green function or its worldsheet derivative, while the vertices are the positions of insertions of the vertex operators on the worldsheet. Hence analyzing various properties of these string invariants plays a crucial role in determining terms in the effective action. This has led to a detailed analysis of their properties~\cite{Green:1999pv,DHoker:2005vch,DHoker:2005jhf,Berkovits:2005df,Berkovits:2005ng,Green:2008uj,Richards:2008jg,Green:2013bza,DHoker:2013fcx,DHoker:2014oxd,Pioline:2015qha,DHoker:2015gmr,DHoker:2015wxz,Basu:2015ayg,DHoker:2016mwo,Basu:2016xrt,Basu:2016kli,Basu:2016mmk,DHoker:2016quv,Kleinschmidt:2017ege,DHoker:2017pvk,DHoker:2018mys,DHoker:2019blr,Basu:2019idd,Gerken:2019cxz,Gerken:2020yii,DHoker:2020prr,Gerken:2020aju,DHoker:2020tcq,Basu:2020pey,DHoker:2020uid,Basu:2020iok,Gerken:2020xfv,DHoker:2020aex} revealing a rich underlying structure. In this paper, rather than directly analyze the string invariants, we shall be concerned with performing their integrals over moduli space at genus two. Such integrals have been considered at genus one and two leading to various interactions in the effective action~\cite{Green:1999pv,DHoker:2005vch,DHoker:2005jhf,Green:2008uj,Richards:2008jg,Green:2013bza,DHoker:2014oxd,DHoker:2015gmr,Basu:2015dqa,Basu:2016fpd,DHoker:2019blr,DHoker:2020tcq}. While at genus one, these integrals are over the fundamental domain of $SL(2,\mathbb{Z})$, at genus two, they are over the fundamental domain of $Sp(4,\mathbb{Z})$. Let us consider the terms in the effective action that arise in the low momentum expansion of the four graviton amplitude at genus two~\cite{DHoker:2005vch}. The leading contribution which is the $D^4\mathcal{R}^4$ interaction involves simply integrating over the volume element of the fundamental domain of $Sp(4,\mathbb{Z})$~\cite{DHoker:2005jhf}. 
At the next order in the $\alpha'$ expansion, we have the $D^6\mathcal{R}^4$ interaction, which involves integrating the Kawazumi--Zhang (KZ) invariant over moduli space~\cite{Kawazumi,Zhang,DHoker:2013fcx}; this is done by reducing it to a boundary term using an eigenvalue equation that the KZ invariant satisfies~\cite{DHoker:2014oxd}. While the structure of the string invariants that arise in the integrand of the $D^8\mathcal{R}^4$ interaction has been analyzed in asymptotic expansions around the degenerating nodes~\cite{DHoker:2017pvk,DHoker:2018mys}, their integrals over moduli space have not been performed. The status of the interactions that arise in the low momentum expansion of the five graviton amplitude~\cite{DHoker:2020prr,DHoker:2020tcq} is similar. While the KZ invariant yields a graph with only one link, the integrands for the amplitudes at higher orders in the $\alpha'$ expansion involve a sum of terms, each of which has graphs with a total of at least two links, where the number of links in such terms increases as one goes to higher and higher orders in the $\alpha'$ expansion. Thus it is important to understand how to perform the integrals over moduli space involving integrands given by string invariants for interactions that are $\alpha'$ suppressed compared to the $D^6\mathcal{R}^4$ interaction in the low momentum expansion of the four graviton amplitude. The graphs that arise in this expansion to all orders in the $\alpha'$ expansion have links that are given by the Green function. The situation gets more involved when one considers the graphs that arise in the low momentum expansion of the five graviton amplitude, where additional contributions arise involving graphs with links given by the worldsheet derivative of the Green function. Thus it is interesting in general to understand the issue of integrating various string invariants over moduli space. 
One of the major obstacles in performing these integrals over moduli space at genus two arises from the fact that beyond the graph for the $D^6\mathcal{R}^4$ interaction, there are no known eigenvalue equations involving the Laplacian operator on moduli space the string invariants satisfy that are useful in performing the integrals\footnote{The eigenvalue equation obtained in~\cite{Basu:2018bde} is trivially satisfied on using the identities derived in~\cite{DHoker:2020tcq}. I am thankful to Boris Pioline for useful comments on this issue.}. In this paper, we shall perform the integrals over moduli space for certain simple genus two graphs. They are simple in the sense that the links in them are disconnected\footnote{This does not mean that such graphs always factorize in terms of graphs with lesser number of links which follows from their detailed structure.} and hence they do not form closed loops on the worldsheet. We first consider the integral of a graph with two links that arises in the analysis of the $D^8\mathcal{R}^4$ term in the low momentum expansion. We show that the integral with the $Sp(4,\mathbb{Z})$ invariant measure of a linear combination of this graph and the square of the KZ invariant reduces to a boundary term on moduli space. Hence the integral can be evaluated based on only the knowledge of the asymptotic expansions around the separating and non--separating nodes of the boundary contribution, which turns out to be completely determined by the KZ invariant. We next perform a similar analysis for a simple graph with three links that should arise in the analysis of the $D^6\mathcal{R}^6$ term in the low momentum expansion of the six graviton amplitude. We evaluate the integral by reducing it to a boundary contribution, which again is entirely determined by the KZ invariant. In both cases, the integral vanishes. 
For the analysis of the graphs with two links, we start with an $Sp(4,\mathbb{Z})$ invariant expression involving two factors of the KZ invariant and four derivatives on moduli space. Using the differential equation satisfied by the KZ invariant, this expression reduces to a graph with two links and no worldsheet derivatives. Proceeding differently, we next manipulate this expression to reduce it to a boundary term along with an additional contribution involving the square of the KZ invariant. Equating the two results obtained by evaluating the same expression differently yields our desired answer. In obtaining these relations which are valid everywhere in the bulk of moduli space, we use the eigenvalue equation the KZ invariant satisfies, as well as the identities deduced in~\cite{DHoker:2020tcq}. We next perform a similar analysis using an $Sp(4,\mathbb{Z})$ invariant expression involving three factors of the KZ invariant and six derivatives on moduli space, leading to our desired answer. The intermediate steps involve obtaining several algebraic relations between simple graphs with three links which we obtain separately. We expect the analysis to generalize to simple graphs with arbitrary number of links. We begin by reviewing facts about genus two string amplitudes that are relevant for our purposes. We then perform the analysis everywhere in the bulk of moduli space for graphs with two links, and then for graphs with three links. Finally, we perform the integrals over moduli space by evaluating the boundary contributions. \section{Genus two string amplitudes and the Kawazumi--Zhang invariant} We denote the genus two worldsheet by $\Sigma_2$, and the conformally invariant Arakelov Green function by $G(z,w)$. The period matrix is defined by $\Omega_{IJ} = X_{IJ} + iY_{IJ}$ ($I,J=1,2$), where the matrices $X$ and $Y$ have real entries. 
Also we define $Y^{-1}_{IJ} = (Y^{-1})_{IJ}$, as well as the dressing factors \begin{equation} (z,\overline{w}) = Y^{-1}_{IJ} \omega_I (z) \overline{\omega_J (w)}, \quad \mu (z) = (z,\overline{z}), \quad P(z,w) = (z,\overline{w})(w,\overline{z}),\end{equation} where $\omega_I = \omega_I (z) dz$ is the Abelian differential one form. Every string invariant is given by a graph with links involving the Arakelov Green function, along with a specific choice of dressing factors for the integrated vertices. The integration measure over the worldsheet is given by $d^2 z = i dz \wedge d\overline{z} = 2 d({\rm Re}z)\wedge d({\rm Im}z)$. The Kawazumi--Zhang invariant which appears in the analysis of the $D^6\mathcal{R}^4$ interaction is given by \begin{equation} \label{KZ}\mathcal{B}_1 (\Omega,\overline\Omega) = \int_{\Sigma_2^2} \prod_{i=1}^2 d^2 z_i G(z_1,z_2) P(z_1,z_2)\end{equation} as depicted by figure 1\footnote{In the various figures depicting the graphs, the solid and dashed lines represent the Green function and the dressing factor connecting the vertices respectively.}. We now write down several expressions involving it that are satisfied everywhere in the bulk of moduli space, which are very useful for our purposes. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(130,45)(0,0) \includegraphics[scale=.6]{twoloopint1.eps} \end{picture}} \] \caption{The string invariant $\mathcal{B}_1$} \end{center} \end{figure} Defining \begin{equation} \label{pdef}\partial_{IJ} = \frac{1}{2} \Big(1+\delta_{IJ}\Big)\frac{\partial}{\partial\Omega_{IJ}}\end{equation} we see that \begin{equation} \label{bulkrel}\partial_{KL}\Omega_{IJ} = \frac{1}{2}\Big(\delta_{IK}\delta_{JL} + \delta_{IL}\delta_{JK}\Big)\end{equation} everywhere in the bulk of moduli space. 
The KZ invariant \C{KZ} satisfies the differential equation~\cite{DHoker:2014oxd} \begin{equation} \label{main}\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1 = \frac{1}{16}\Big(\Theta_{IJ;KL} + \Theta_{IJ;LK} + \Theta_{JI;KL} + \Theta_{JI;LK}\Big),\end{equation} where $\Theta_{IJ;KL}$ is defined by \begin{eqnarray} \label{Theta}\Theta_{IJ;KL} &=& 5 Y^{-1}_{AK} Y^{-1}_{IB} \int_{\Sigma_2^2} \prod_{i=1}^2 d^2 z_i G(z_1,z_2) \omega_A (z_1) \overline{\omega_B (z_2)} \non \\ && \times \Big(Y^{-1}_{JL}(z_2,\overline{z}_1)- Y^{-1}_{LC} \omega_C (z_2) Y^{-1}_{JD} \overline{\omega_D (z_1)}\Big). \end{eqnarray} We shall see that \C{main} will play a central role in our analysis. Defining the $Sp(4,\mathbb{Z})$ invariant Laplacian by \begin{equation} \label{L}\Delta = 4Y_{IK} Y_{JL}\partial_{IJ}\overline\partial_{KL}, \end{equation} we find that \C{KZ} satisfies the eigenvalue equation \begin{equation} \label{eigenKZ}\Big(\Delta -5\Big) \mathcal{B}_1 =0\end{equation} in the bulk of moduli space, which we shall often use. In obtaining \C{eigenKZ}, we have used the vanishing integral \begin{equation} \label{zero}\int_{\Sigma} d^2 z \mu(z) G(z,w)=0\end{equation} the Arakelov Green function satisfies. Now the Siegel upper half space $\mathcal{H}_2$, where the various amplitudes are naturally defined, is K\"ahler, and the $Sp(4,\mathbb{R})$ invariant K\"ahler metric is \begin{equation} ds^2 = Y^{-1}_{IJ} Y^{-1}_{KL} d\Omega_{IK} d\overline\Omega_{JL}.\end{equation} Thus from the inverse metric we see that $Y_{IJ}$ naturally contracts with one holomorphic and one anti--holomorphic index in moduli space. Let us try to construct $Sp(4,\mathbb{Z})$ invariants while insisting that we only allow such contractions. Hence using only $\partial\overline\partial \mathcal{B}_1$ involving derivatives over moduli space to construct an $Sp(4,\mathbb{Z})$ invariant, we end up with $\Delta \mathcal{B}_1$ using the definition \C{L}, which is the only possibility. 
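To make explicit how \C{eigenKZ} follows from \C{main} (a sketch of the contraction, which is not spelled out in this form above), contract \C{main} with $4 Y_{IK} Y_{JL}$ as in \C{L}. Using $Y_{IK} Y^{-1}_{AK} = \delta_{IA}$ and the definitions of $(z,\overline{w})$, $\mu(z)$ and $P(z,w)$, one finds
\begin{eqnarray}
Y_{IK} Y_{JL}\, \Theta_{IJ;KL} &=& 5 \int_{\Sigma_2^2} \prod_{i=1}^2 d^2 z_i \, G(z_1,z_2) \Big( 2 P(z_1,z_2) - P(z_1,z_2) \Big) = 5 \mathcal{B}_1, \non \\
Y_{IK} Y_{JL}\, \Theta_{IJ;LK} &=& 5 \int_{\Sigma_2^2} \prod_{i=1}^2 d^2 z_i \, G(z_1,z_2) \Big( P(z_1,z_2) - \mu(z_1) \mu(z_2) \Big) = 5 \mathcal{B}_1,
\end{eqnarray}
while the remaining two contractions $Y_{IK} Y_{JL} \Theta_{JI;KL}$ and $Y_{IK} Y_{JL} \Theta_{JI;LK}$ also equal $5 \mathcal{B}_1$ upon relabeling indices. The $\mu(z_1)\mu(z_2)$ term drops out precisely by \C{zero}, and summing the four contributions with the prefactor $\frac{4}{16}$ from \C{main} and \C{L} reproduces $\Delta \mathcal{B}_1 = 5 \mathcal{B}_1$, i.e. \C{eigenKZ}.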
\section{Analyzing simple string invariants with two links} Based on the discussion above, let us try to construct invariants of the form $(\partial\overline\partial \mathcal{B}_1)^2$. In fact, there are two possibilities for constructing invariants that do not factorize. We define one of them by\footnote{The other one is given by \begin{equation} \label{drop}Y_{AC}Y_{BK}Y_{DI}Y_{JL}\Big(\partial_{AB}\overline\partial_{CD}\mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\end{equation} which we do not consider.} \begin{equation} \label{defchi}\chi^{(2)} (\Omega,\overline\Omega)= 16 Y_{AK} Y_{BL} Y_{CI} Y_{DJ}\Big(\partial_{AB} \overline\partial_{CD} \mathcal{B}_1 \Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big).\end{equation} Thus preserving the symmetries, from \C{main} we have that \begin{eqnarray} \chi^{(2)} = \frac{1}{4}\Big(Y_{AK}Y_{BL}+ Y_{AL} Y_{BK}\Big) \Big( Y_{CI}Y_{DJ}+ Y_{CJ}Y_{DI}\Big) \Theta_{AB;CD} \Theta_{IJ;KL}.\end{eqnarray} Using the expression \C{Theta}, this gives us that \begin{eqnarray} \label{eval}\chi^{(2)} = \frac{25}{4}\Big(2 \mathcal{B}_2 + 2 \mathcal{B}_3 + \mathcal{B}_1^2\Big),\end{eqnarray} on using \C{zero}. In \C{eval}, the two string invariants are given by \begin{eqnarray} \mathcal{B}_2 &=& \int_{\Sigma^4} \prod_{i=1}^4 d^2 z_i G(z_1,z_2) G(z_3,z_4) P(z_1,z_3)P(z_2,z_4), \non \\ \mathcal{B}_3 &=& \int_{\Sigma^4} \prod_{i=1}^4 d^2 z_i G(z_1,z_2) G(z_3,z_4) (z_1,\overline{z_4})(z_4,\overline{z_2})(z_2,\overline{z_3})(z_3,\overline{z_1}). \end{eqnarray} In the intermediate steps of the analysis, we also come across the string invariant \begin{eqnarray} \label{defB4}\mathcal{B}_4 = \int_{\Sigma^4} \prod_{i=1}^4 d^2 z_i G(z_1,z_2) G(z_3,z_4) (z_1,\overline{z_3})(z_3,\overline{z_4})(z_4,\overline{z_2})(z_2,\overline{z_1})\end{eqnarray} which cancels in the final answer. 
While the graph $\mathcal{B}_2$ arises in the low momentum expansion of the four and five graviton amplitudes, the graphs $\mathcal{B}_3$ and $\mathcal{B}_4$ arise in the low momentum expansion of the five graviton amplitude. These three graphs, depicted by figure 2, differ only in their dressing factors. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(270,130)(0,0) \includegraphics[scale=.65]{twoloopint2.eps} \end{picture}} \] \caption{The string invariants (i) $\mathcal{B}_2$, (ii) $\mathcal{B}_3$ and (iii) $\mathcal{B}_4$} \end{center} \end{figure} Using the relation between the graphs $\mathcal{B}_2$ and $\mathcal{B}_3$ given by~\cite{DHoker:2020tcq} \begin{equation} \label{rel23}\mathcal{B}_3 = \mathcal{B}_2 - \frac{\mathcal{B}_1^2}{2},\end{equation} from \C{eval} we get that \begin{equation} \label{chirel}\chi^{(2)} = 25 \mathcal{B}_2\end{equation} yielding a simple string invariant with two links. This is the result of evaluating $\chi^{(2)}$ directly. Let us now evaluate $\chi^{(2)}$ in a different way.
First manipulating $\overline\partial_{CD}$ in \C{defchi}, we express $\chi^{(2)}$ as\footnote{In obtaining this as well as similar results below, we use \C{bulkrel} heavily.} \begin{eqnarray} \label{1}\frac{\chi^{(2)}}{16} &=& ({\rm det}Y)^3 \overline\partial_{CD}\Big[({\rm det}Y)^{-3}Y_{AK}Y_{BL}Y_{CI}Y_{DJ}\Big(\partial_{AB}\mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\Big]\non \\ &&-\frac{Y_{IK}Y_{JL}}{4} \Big(\partial_{IJ} \mathcal{B}_1 \Big)\Big(\overline\partial_{KL} \Delta \mathcal{B}_1\Big).\end{eqnarray} In fact using \C{eigenKZ}, the last term in \C{1} is equal to \begin{equation} \label{rem}-\frac{5Y_{IK}Y_{JL}}{4} \Big(\partial_{IJ} \mathcal{B}_1 \Big)\Big(\overline\partial_{KL} \mathcal{B}_1\Big).\end{equation} Now in \C{rem}, we further manipulate $\overline\partial_{KL}$ to obtain \begin{eqnarray} \label{2}Y_{IK}Y_{JL} \Big(\partial_{IJ} \mathcal{B}_1 \Big)\Big(\overline\partial_{KL} \mathcal{B}_1\Big) = ({\rm det}Y)^3 \overline\partial_{KL} \Big[ ({\rm det}Y)^{-3} Y_{IK} Y_{JL} \mathcal{B}_1\Big(\partial_{IJ}\mathcal{B}_1\Big)\Big] - \frac{1}{4} \mathcal{B}_1 \Delta \mathcal{B}_1.\end{eqnarray} The last term in \C{2} is equal to $-5\mathcal{B}_1^2/4$ on using \C{eigenKZ}. Hence putting the various contributions together, we obtain an alternate expression for $\chi^{(2)}$. Equating this expression with \C{chirel}, we get that \begin{eqnarray} \label{simplify1}&&\frac{25}{16} \Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big) = -\frac{5}{4}({\rm det}Y)^3 \overline\partial_{KL} \Big[ ({\rm det} Y)^{-3}Y_{IK}Y_{JL} \mathcal{B}_1 \Big(\partial_{IJ} \mathcal{B}_1\Big)\Big]\non \\ &&+ ({\rm det}Y)^3\overline\partial_{CD} \Big[({\rm det}Y)^{-3}Y_{AK} Y_{BL} Y_{CI} Y_{DJ} \Big(\partial_{AB} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{KL}\mathcal{B}_1\Big) \Big].\end{eqnarray} Thus we see that $\mathcal{B}_2$ is entirely determined by the KZ invariant $\mathcal{B}_1$. 
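The total-derivative manipulations of this kind rely on the fact that the moduli-dependent prefactor $({\rm det}Y)^{-3}Y_{IK}Y_{JL}$ is annihilated by the contracted symmetric derivative $\overline\partial_{KL}$; this identity is our own inference from the manipulations rather than a statement made explicitly in the text. It, together with the $2\times 2$ determinant identity $Y_{IJ}Y_{KL}-Y_{IL}Y_{JK}=\epsilon_{IK}\epsilon_{JL}\,{\rm det}Y$ used in the simplifications below, can be checked symbolically:

```python
import sympy as sp

# Symmetric 2x2 period matrix Omega and bar(Omega) as independent variables;
# Y = Im(Omega) = (Omega - bar(Omega))/(2i) as in the text.
o11, o12, o22 = sp.symbols('o11 o12 o22')
b11, b12, b22 = sp.symbols('b11 b12 b22')
Om = sp.Matrix([[o11, o12], [o12, o22]])
Ob = sp.Matrix([[b11, b12], [b12, b22]])
Y = (Om - Ob) / (2 * sp.I)
D = Y.det()

VAR = {(0, 0): 0, (0, 1): 1, (1, 1): 2}

def dbar(f, P, Q):
    # Anti-holomorphic symmetric derivative bar(d)_PQ, conventions of eq. (pdef)
    v = (b11, b12, b22)[VAR[tuple(sorted((P, Q)))]]
    return sp.Rational(1, 2) * (1 + int(P == Q)) * sp.diff(f, v)

# bar(d)_PQ [ (det Y)^{-3} Y_IP Y_JQ ] = 0 (summed over P, Q): the prefactor
# passes through the contracted derivative, so the bracketed combinations in
# the text are genuine total derivatives up to the overall (det Y)^3.
for I in range(2):
    for J in range(2):
        total = sum(dbar(D ** (-3) * Y[I, P] * Y[J, Q], P, Q)
                    for P in range(2) for Q in range(2))
        assert sp.simplify(total) == 0

# The 2x2 determinant identity Y_IJ Y_KL - Y_IL Y_JK = eps_IK eps_JL det Y,
# valid for symmetric Y.
eps = sp.Matrix([[0, 1], [-1, 0]])
for I in range(2):
    for J in range(2):
        for K in range(2):
            for L in range(2):
                lhs = Y[I, J] * Y[K, L] - Y[I, L] * Y[J, K]
                assert sp.simplify(lhs - eps[I, K] * eps[J, L] * D) == 0
```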
In fact the combination $(\mathcal{B}_2 - \mathcal{B}_1^2)/{({\rm det}Y)}^3$ is a total derivative in moduli space, which will be very helpful for us later. We can simplify \C{simplify1} by using \begin{equation} \label{YY}Y_{IJ} Y_{KL} - Y_{IL}Y_{JK} = \epsilon_{IK} \epsilon_{JL}({\rm det}Y)\end{equation} and \C{eigenKZ}. This leads to \begin{eqnarray} \label{g1}\frac{25\Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big)}{16({\rm det}Y)^3} &=& \epsilon_{AI}\epsilon_{BJ}\epsilon_{CK} \epsilon_{DL}\overline\partial_{CD} \Big[ \frac{\Big(\partial_{AB} \mathcal{B}_1\Big) \Big(\partial_{IJ} \overline\partial_{KL}\mathcal{B}_1\Big)}{{\rm det}Y}\Big]\non \\ &&- 2\epsilon_{AI}\epsilon_{CK}\overline\partial_{CD} \Big[\frac{Y_{BD} Y_{JL}}{({\rm det}Y)^2} \Big(\partial_{AB} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{KL}\mathcal{B}_1\Big) \Big].\end{eqnarray} It will be interesting to analyze the invariant \C{drop} by proceeding along the same lines to see what we obtain, though the details are going to be more involved. To see this, note that in the above analysis involving $\chi^{(2)}$, very schematically manipulating the derivatives we have obtained \begin{equation} \chi^{(2)} \sim \Big(Y_{AK} Y_{BL}\partial_{AB} \overline\partial_{KL}\mathcal{B}_1\Big)\Big( Y_{CI}Y_{DJ}\partial_{IJ} \overline\partial_{CD} \mathcal{B}_1\Big)+\ldots \sim \frac{(\Delta \mathcal{B}_1)^2}{16} +\ldots \sim \frac{25 \mathcal{B}_1^2}{16}+\ldots,\end{equation} leading to the final expression, where the terms we have ignored involve total derivatives on moduli space up to an overall factor of $({\rm det}Y)^3$. Such a simplification does not occur in the analysis of \C{drop} given the index structure of the invariant. \section{Analyzing simple string invariants with three links} Let us now perform a similar analysis involving graphs with three links, where we construct invariants of the type $(\partial\overline\partial \mathcal{B}_1)^3$. 
While there are four such invariants one can consider that do not factorize, we only focus on the one we define by\footnote{The other three invariants are given by \begin{eqnarray} \label{drop2}Y_{AK}Y_{BL}Y_{CI}Y_{JP}Y_{DM}Y_{NQ}\Big(\partial_{AB} \overline\partial_{CD} \mathcal{B}_1 \Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big)\Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big) , \non \\ Y_{AC}Y_{BK}Y_{DI}Y_{JP}Y_{LM}Y_{NQ}\Big(\partial_{AB} \overline\partial_{CD} \mathcal{B}_1 \Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big)\Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big), \non \\ Y_{AC}Y_{BK}Y_{IL}Y_{JP}Y_{DM}Y_{NQ}\Big(\partial_{AB} \overline\partial_{CD} \mathcal{B}_1 \Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big)\Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big).\end{eqnarray} Along the lines of the discussion in the previous section, we see that the analysis of these invariants is going to be more involved than the analysis of \C{firstdef}. As we shall soon analyze in detail, manipulating the derivatives in $\chi^{(3)}$ yields terms of the form $\mathcal{B}_1^3$ and $\mathcal{B}_1\mathcal{B}_2$ as well as other contributions that yield total derivatives on moduli space, up to an overall factor of $({\rm det}Y)^3$. This simplification does not happen for the invariants in \C{drop2} given their index structure. In fact, the natural generalization of $\chi^{(2)}$ and $\chi^{(3)}$ involving $n$ factors each of $\mathcal{B}_1$, $\partial_{AB}$ and $\overline\partial_{CD}$ is given by \begin{eqnarray} \label{chin}\chi^{(n)} (\Omega,\overline\Omega) = 4^n \prod_{i=1}^n \Big(Y_{A_i C_{i+1}} Y_{B_i D_{i+1}}\Big)\prod_{j=1}^n \Big(\partial_{A_j B_j} \overline\partial_{C_j D_j}\mathcal{B}_1 \Big),\end{eqnarray} where $C_{n+1} \equiv C_1$ and $D_{n+1} \equiv D_1$.
Among the several string invariants that arise at this order, we expect the manipulations involving \C{chin} to be the simplest given the index structure.} \begin{eqnarray} \label{firstdef}\chi^{(3)} (\Omega,\overline\Omega) = 64 Y_{AK}Y_{BL}Y_{IP}Y_{JQ}Y_{CM}Y_{DN}\Big(\partial_{AB} \overline\partial_{CD} \mathcal{B}_1 \Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big)\Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big).\end{eqnarray} We first calculate $\chi^{(3)}$ using \C{main}. Keeping the symmetries manifest, we get that \begin{eqnarray} \chi^{(3)} &=& \frac{1}{8} \Big(Y_{AK}Y_{BL}+Y_{AL}Y_{BK}\Big)\Big(Y_{IP}Y_{JQ}+Y_{IQ}Y_{JP}\Big)\Big(Y_{CM}Y_{DN}+Y_{CN}Y_{DM}\Big)\non \\ &&\times \Theta_{AB;CD} \Theta_{IJ;KL} \Theta_{MN;PQ}.\end{eqnarray} Then using \C{Theta} we obtain \begin{eqnarray} \label{finval}\chi^{(3)} = \frac{125}{8} \Big[12 \Big(\mathcal{B}_5 + \mathcal{B}_6 \Big)- 6 \Big(\mathcal{B}_7 + \mathcal{B}_8 \Big)-4\Big( \mathcal{B}_9 + \mathcal{B}_{10}\Big) + 3\mathcal{B}_1 \mathcal{B}_4 \Big],\end{eqnarray} where the various string invariants are given by\footnote{We expect the $\mathcal{R}^6$ and $D^2\mathcal{R}^6$ interactions to be related to the 1/4 BPS $D^4\mathcal{R}^4$ and 1/8 BPS $D^6\mathcal{R}^4$ interactions respectively, and hence have the same string invariants as their integrands. However, we expect that the integrands for the non--BPS $D^4\mathcal{R}^6$ and $D^6\mathcal{R}^6$ interactions should contain additional graphs beyond those that arise in the integrands of the non--BPS $D^8\mathcal{R}^4$ and $D^{10}\mathcal{R}^4$ interactions respectively. Thus we expect that the $D^6\mathcal{R}^6$ amplitude should contain in its integrand disconnected graphs with three links involving six vertices. 
Hence we refer to them as string invariants.} \begin{eqnarray} \label{defB}\mathcal{B}_5 &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_5,\overline{z_6})(z_6,\overline{z_3})(z_3,\overline{z_1})(z_1,\overline{z_5})P(z_2,z_4), \non \\ \mathcal{B}_6 &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_5,\overline{z_6})(z_6,\overline{z_3})(z_3,\overline{z_2})(z_2,\overline{z_4})(z_4,\overline{z_1})(z_1,\overline{z_5}), \non \\ \mathcal{B}_7 &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_1,\overline{z_3})(z_3,\overline{z_5})(z_5,\overline{z_6})(z_6,\overline{z_4})(z_4,\overline{z_2})(z_2,\overline{z_1}),\non \\ \mathcal{B}_8 &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_2,\overline{z_1})(z_1,\overline{z_3})(z_3,\overline{z_2})(z_6,\overline{z_5})(z_5,\overline{z_4})(z_4,\overline{z_6}),\non \\ \mathcal{B}_9 &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_1,\overline{z_5})(z_5,\overline{z_3})(z_3,\overline{z_1})(z_2,\overline{z_6})(z_6,\overline{z_4})(z_4,\overline{z_2}), \non \\ \mathcal{B}_{10} &=& \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_1,\overline{z_5})(z_5,\overline{z_3})(z_3,\overline{z_2})(z_2,\overline{z_6})(z_6,\overline{z_4})(z_4,\overline{z_1}).\non \\ \end{eqnarray} In obtaining them, we have often used \C{zero}. In fact the string invariant $\mathcal{B}_{11}$ defined by \begin{equation} \label{defB11}\mathcal{B}_{11} = \int_{\Sigma^6} \prod_{i=1}^6 d^2 z_i G(z_1,z_2)G(z_3,z_4)G(z_5,z_6)(z_5,\overline{z_6})(z_6,\overline{z_3})(z_3,\overline{z_4})(z_4,\overline{z_2})(z_2,\overline{z_1})(z_1,\overline{z_5})\end{equation} also arises in the intermediate stages of the calculation, but cancels in the final answer. Each of these simple string invariants involves three disconnected links, and they differ only in their dressing factors. They are depicted by figure 3.
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(270,330)(0,0) \includegraphics[scale=.7]{twoloopint3.eps} \end{picture}} \] \caption{The string invariants (i) $\mathcal{B}_5$, (ii) $\mathcal{B}_6$, (iii) $\mathcal{B}_7$, (iv) $\mathcal{B}_8$, (v) $\mathcal{B}_9$, (vi) $\mathcal{B}_{10}$ and (vii) $\mathcal{B}_{11}$} \end{center} \end{figure} To simplify the expression \C{finval}, we use the various relations between the graphs that have been deduced in appendix A, the relation \C{rel23} as well as the relation~\cite{DHoker:2020tcq} \begin{equation} \mathcal{B}_4= \frac{\mathcal{B}_1^2}{2}.\end{equation} Thus we see that \C{finval} yields \begin{eqnarray}\label{eq}\chi^{(3)} = \frac{125}{2} \Big[\mathcal{B}_1 \Big(3\mathcal{B}_2 -\mathcal{B}_1^2 \Big)- 2 \mathcal{B}_9\Big].\end{eqnarray} Hence evaluating $\chi^{(3)}$ using \C{main}, we see that apart from graphs $\mathcal{B}_1$ and $\mathcal{B}_2$ that arise in the integrands of terms at lower orders in the $\alpha'$ expansion, it only depends on a single simple graph $\mathcal{B}_9$ with three links. We now evaluate $\chi^{(3)}$ differently, where we often use \C{eigenKZ}. 
To start with, manipulating $\partial_{AB}$ in \C{firstdef} and adding the complex conjugate contribution, we get that \begin{eqnarray} \label{3one} &&\frac{\chi^{(3)}}{32} = ({\rm det}Y)^3 \partial_{AB} \Big[ ({\rm det}Y)^{-3} Y_{AK} Y_{BL}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big) \non \\ &&\times \Big(\partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\Big] - \frac{5}{4}Y_{IP}Y_{JQ}Y_{CM}Y_{DN}\Big(\overline\partial_{CD}\mathcal{B}_1\Big)\Big(\partial_{IJ}\mathcal{B}_1\Big)\Big(\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\non \\&&- Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)+ c.c. .\end{eqnarray} Let us consider the second term in \C{3one}. Manipulating $\overline\partial_{CD}$, we get that \begin{eqnarray} && Y_{IP}Y_{JQ}Y_{CM}Y_{DN}\Big(\overline\partial_{CD}\mathcal{B}_1\Big)\Big(\partial_{IJ}\mathcal{B}_1\Big)\Big(\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big) = \frac{25}{32} \mathcal{B}_1 \Big(\mathcal{B}_1^2 - 2 \mathcal{B}_2\Big)\non \\ && -\frac{5}{8}({\rm det}Y)^3 \overline\partial_{IJ} \Big[ ({\rm det}Y)^{-3}Y_{IK} Y_{JL}\mathcal{B}_1^2\Big(\partial_{KL}\mathcal{B}_1\Big)\Big]\non \\ && + ({\rm det}Y)^3\overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \mathcal{B}_1 \Big(\partial_{IJ}\mathcal{B}_1\Big)\Big(\partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\Big],\end{eqnarray} where we have used the identity \begin{eqnarray} Y_{IK}Y_{JL}\mathcal{B}_1 \Big(\partial_{IJ} \mathcal{B}_1\Big)\Big(\overline\partial_{KL} \mathcal{B}_1\Big) = \frac{1}{2}({\rm det}Y)^3 \overline\partial_{IJ} \Big[({\rm det}Y)^{-3}Y_{IK}Y_{JL}\mathcal{B}_1^2 \Big(\partial_{KL}\mathcal{B}_1\Big)\Big] - \frac{5}{8} \mathcal{B}_1^3,\end{eqnarray} and \C{chirel}.
We next consider the third term in \C{3one}. Manipulating $\overline\partial_{CD}$, we obtain \begin{eqnarray} \label{long} &&Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big) \non \\ &&= ({\rm det}Y)^3 \overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\mathcal{B}_1 \Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]\non \\&& -i Y_{BL} Y_{IP} Y_{JQ} \mathcal{B}_1 \Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1 \Big)\partial_{AB} \Big(Y_{AM} Y_{KN} \partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\non \\ &&-i Y_{AK} Y_{BL} Y_{JQ} \mathcal{B}_1 \Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1 \Big)\partial_{AB} \Big(Y_{IM} Y_{PN} \partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\non \\ &&+i Y_{AK} Y_{BL} Y_{IP} Y_{JQ} \mathcal{B}_1 \Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1 \Big)\partial_{AB} \Big(Y_{MN} \partial_{MP} \overline\partial_{NQ} \mathcal{B}_1\Big)\non \\&& +i Y_{AK} Y_{BL} Y_{IP} \mathcal{B}_1 \Big(\partial_{IK} \overline\partial_{CD} \mathcal{B}_1 \Big)\partial_{AB} \Big(Y_{CM} Y_{DN} \partial_{MN} \overline\partial_{PL} \mathcal{B}_1\Big)\non \\ &&-\frac{25}{32} \mathcal{B}_1\mathcal{B}_2 -Y_{AK}Y_{BL}\mathcal{B}_1\partial_{AB}\Big(Y_{CM}Y_{DN} \partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\overline\partial_{KL}\Big(Y_{IP}Y_{JQ}\partial_{IJ}\overline\partial_{CD}\mathcal{B}_1\Big).\end{eqnarray} In obtaining \C{long}, at an intermediate step we have manipulated an expression by using \begin{eqnarray} \overline\partial_{CD}\partial_{AB} \Big(Y_{CM}Y_{DN} \partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big) &= & \partial_{AB} \overline\partial_{PQ} \Big(Y_{CM} Y_{DN} \partial_{MN} \overline\partial_{CD} \mathcal{B}_1\Big)+\ldots \non 
\\ &=& \frac{5}{4} \partial_{AB} \overline\partial_{PQ} \mathcal{B}_1 +\ldots \end{eqnarray} to obtain a simplified expression. We next turn to the last term in \C{long}, which we express in a different form that will be very useful for our purposes. We start with the expression \begin{eqnarray} \label{long2}\frac{25}{64} \mathcal{B}_1 \Delta \mathcal{B}_2 = Y_{AK}Y_{BL}\mathcal{B}_1\partial_{AB} \overline\partial_{KL}\Big[Y_{CM}Y_{DN} Y_{IP}Y_{JQ} \Big(\partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{CD}\mathcal{B}_1\Big)\Big]\end{eqnarray} which directly follows from \C{defchi} and \C{chirel}. We first consider the term on the left hand side of \C{long2} which we manipulate to have the Laplacian acting on $\mathcal{B}_1$. Thus using \C{eigenKZ}, we get that \begin{eqnarray} \label{Long}\frac{25}{64} \mathcal{B}_1 \Delta \mathcal{B}_2 &=& \frac{25}{16} ({\rm det}Y)^3 \partial_{AB}\Big[({\rm det}Y)^{-3}Y_{AK}Y_{BL}\mathcal{B}_1\Big(\overline\partial_{KL}\mathcal{B}_2\Big)\Big] \non \\ &&- \frac{25}{16} ({\rm det}Y)^3\overline\partial_{KL}\Big[ ({\rm det} Y)^{-3}Y_{AK}Y_{BL}\Big(\partial_{AB}\mathcal{B}_1\Big)\mathcal{B}_2\Big]+\frac{125}{64}\mathcal{B}_1 \mathcal{B}_2.\end{eqnarray} Next we consider the right hand side of \C{long2}, which gives us \begin{eqnarray} \label{long3}&&\frac{25}{64} \mathcal{B}_1 \Delta \mathcal{B}_2 = 2Y_{AK}Y_{BL}\mathcal{B}_1\partial_{AB}\Big(Y_{CM}Y_{DN} \partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\overline\partial_{KL}\Big(Y_{IP}Y_{JQ}\partial_{IJ}\overline\partial_{CD}\mathcal{B}_1\Big)\non \\ &&+ 2i Y_{AI} Y_{BP} Y_{CM} Y_{DN}\mathcal{B}_1 \Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\partial_{AB} \Big(Y_{JQ} \partial_{IJ} \overline\partial_{CD} \mathcal{B}_1\Big)\non \\ &&+ 2i Y_{AP} Y_{CM} Y_{DN} Y_{KQ} \mathcal{B}_1\Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\overline\partial_{CD}\Big(Y_{BL} \partial_{AB} \overline\partial_{KL} \mathcal{B}_1\Big)\non \\ &&-2i
Y_{CM} Y_{DN} Y_{IK} Y_{JQ} Y_{LP} \mathcal{B}_1 \Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{CD} \overline\partial_{KL} \mathcal{B}_1\Big)\non \\ &&-2i Y_{AM} Y_{BL} Y_{IP} Y_{JQ} Y_{KN} \Big(\partial_{MN} \overline\partial_{PQ} \mathcal{B}_1\Big)\Big(\partial_{AB} \partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big)+\frac{125}{32}\mathcal{B}_1 \mathcal{B}_2.\end{eqnarray} In obtaining \C{long3}, at an intermediate step we have used \begin{eqnarray} Y_{AK}Y_{BL}\partial_{AB} \overline\partial_{KL} \Big(Y_{IP}Y_{JQ} \partial_{IJ} \overline\partial_{CD} \mathcal{B}_1\Big) &=& Y_{IP}Y_{JQ} \partial_{IJ}\overline\partial_{CD} \Big(Y_{AK}Y_{BL} \partial_{AB}\overline\partial_{KL} \mathcal{B}_1\Big)+\ldots \non \\ &=& \frac{5}{4}Y_{IP}Y_{JQ} \partial_{IJ}\overline\partial_{CD} \mathcal{B}_1 +\ldots \end{eqnarray} which simplifies the resulting expression. The first term on the right hand side of \C{long3} is precisely of the form of the last term on the right hand side of \C{long} which we want to express differently. Thus equating \C{Long} and \C{long3}, we get an expression for it in terms of the other terms in these equations, which we substitute in \C{long} along with its complex conjugate. Thus we consider \begin{eqnarray} \label{add}Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big) + c.c. .\end{eqnarray} Among other contributions, \C{add} contains terms of the form $i\mathcal{B}_1 (\partial \overline\partial \mathcal{B}_1)\partial\Big(\ldots \partial \overline\partial \mathcal{B}_1 \Big) + c.c.$ schematically, where the ignored factors can involve factors of $Y_{IJ}$. 
Thus they yield terms of the form $i\mathcal{B}_1 (\partial \overline\partial \mathcal{B}_1)\partial\Big(\partial \overline\partial \mathcal{B}_1 \Big) + c.c.$ as well as terms where the $\partial$ (or $\overline\partial$) acts on the factors of $Y_{IJ}$ in $\ldots \partial \overline\partial \mathcal{B}_1$. The total contribution of all terms of the form $i\mathcal{B}_1 (\partial \overline\partial \mathcal{B}_1)\partial\Big(\partial \overline\partial \mathcal{B}_1 \Big) + c.c.$ vanishes, leading to a striking simplification. This gives us that \begin{eqnarray} &&Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big) + c.c. \non \\ &&=({\rm det}Y)^3 \overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\mathcal{B}_1 \Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big] \non \\ &&-\frac{25}{32} ({\rm det}Y)^3 \partial_{AB}\Big[({\rm det}Y)^{-3}Y_{AK}Y_{BL}\mathcal{B}_1\Big(\overline\partial_{KL}\mathcal{B}_2\Big)\Big] \non \\ &&+ \frac{25}{32} ({\rm det}Y)^3\overline\partial_{KL}\Big[ ({\rm det} Y)^{-3}Y_{AK}Y_{BL}\Big(\partial_{AB}\mathcal{B}_1\Big)\mathcal{B}_2\Big]-\frac{125}{128}\mathcal{B}_1 \mathcal{B}_2 + c.c..\end{eqnarray} In the intermediate stages of the analysis, the expression \begin{equation} Y_{IK}Y_{JQ}Y_{ML}Y_{NP}\mathcal{B}_1\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\Big(\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\end{equation} arises, which apart from the factor of $\mathcal{B}_1$ involves \C{drop}. However, it vanishes in the final answer.
Thus adding the various contributions, we obtain the expression for $\chi^{(3)}$ given by \begin{eqnarray} \label{eq2}&&\frac{\chi^{(3)}}{32} = ({\rm det}Y)^3 \partial_{AB} \Big[ ({\rm det}Y)^{-3} Y_{AK} Y_{BL}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big) \non \\ &&\times \Big(\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]-({\rm det}Y)^3 \overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\mathcal{B}_1 \Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\non \\ &&\times \partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big] -\frac{5}{4} ({\rm det}Y)^3\overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \mathcal{B}_1 \Big(\partial_{IJ}\mathcal{B}_1\Big)\non \\ &&\times \Big(\partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\Big] +\frac{25}{32} ({\rm det}Y)^3 \partial_{AB}\Big[({\rm det}Y)^{-3}Y_{AK}Y_{BL}\mathcal{B}_1\Big(\overline\partial_{KL}\mathcal{B}_2\Big)\Big] \non \\ &&+\frac{25}{32}({\rm det}Y)^3 \overline\partial_{IJ} \Big[ ({\rm det}Y)^{-3}Y_{IK} Y_{JL}\mathcal{B}_1^2\Big(\partial_{KL}\mathcal{B}_1\Big)\Big]\non \\ &&- \frac{25}{32} ({\rm det}Y)^3\overline\partial_{KL}\Big[ ({\rm det} Y)^{-3}Y_{AK}Y_{BL}\Big(\partial_{AB}\mathcal{B}_1\Big)\mathcal{B}_2\Big]-\frac{125}{128} \mathcal{B}_1 \Big(\mathcal{B}_1^2 - 3 \mathcal{B}_2\Big)+c.c..\end{eqnarray} We now equate the two expressions \C{eq} and \C{eq2} which have been obtained by calculating $\chi^{(3)}$ in two different ways. The term involving $\mathcal{B}_1(\mathcal{B}_1^2-3\mathcal{B}_2)$ cancels leading to a reduction in the number of string invariants that appear in the final expression. 
We get that \begin{eqnarray} \label{simplE}&&-\frac{125\mathcal{B}_9}{32({\rm det}Y)^3} = \partial_{AB} \Big[ ({\rm det}Y)^{-3} Y_{AK} Y_{BL}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \Big(\overline\partial_{CD} \mathcal{B}_1\Big)\Big(\partial_{IJ} \overline\partial_{KL} \mathcal{B}_1\Big) \non \\ &&\times \Big(\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]- \overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{AK}Y_{BL}Y_{IP}Y_{JQ}\mathcal{B}_1 \Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\non \\ && \times \partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big] -\frac{5}{4} \overline\partial_{CD}\Big[ ({\rm det}Y)^{-3}Y_{IP}Y_{JQ}Y_{CM}Y_{DN} \mathcal{B}_1 \Big(\partial_{IJ}\mathcal{B}_1\Big)\non \\ && \times \Big(\partial_{MN} \overline\partial_{PQ}\mathcal{B}_1\Big)\Big] +\frac{25}{32} \partial_{AB}\Big[({\rm det}Y)^{-3}Y_{AK}Y_{BL}\mathcal{B}_1\Big(\overline\partial_{KL}\mathcal{B}_2\Big)\Big] \non \\ &&+\frac{25}{32} \overline\partial_{IJ} \Big[ ({\rm det}Y)^{-3}Y_{IK} Y_{JL}\mathcal{B}_1^2\Big(\partial_{KL}\mathcal{B}_1\Big)\Big]- \frac{25}{32} \overline\partial_{KL}\Big[ ({\rm det} Y)^{-3}Y_{AK}Y_{BL}\Big(\partial_{AB}\mathcal{B}_1\Big)\mathcal{B}_2\Big]+c.c..\non \\ \end{eqnarray} Thus we see that the string invariant $\mathcal{B}_9$ is completely determined by the KZ invariant $\mathcal{B}_1$ and $\mathcal{B}_2$, which in turn is determined by the KZ invariant using \C{simplify1}. Also $\mathcal{B}_9/({\rm det}Y)^3$ is a total derivative on moduli space which will be very useful for our purposes. To simplify \C{simplE}, along with \C{YY} we use \begin{eqnarray} \overline\partial_{CD} \Big[ \frac{\mathcal{B}_1^2}{({\rm det}Y)^3}Y_{AP}Y_{BQ} \partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ} \mathcal{B}_1\Big)\Big] = \frac{5}{4} \overline\partial_{CD} \Big[ \frac{\mathcal{B}_1^2}{({\rm det}Y)^3}Y_{CM}Y_{DN}\partial_{MN}\mathcal{B}_1\Big]. 
\non \\ \end{eqnarray} This gives us that \begin{eqnarray} \label{simpLe}&&-\frac{125\mathcal{B}_9}{32({\rm det}Y)^3} = \frac{25}{32} \overline\partial_{KL}\Big[ \frac{Y_{IK}Y_{JL}}{({\rm det}Y)^3}\Big(\mathcal{B}_1 \partial_{IJ}\mathcal{B}_2 - \mathcal{B}_2\partial_{IJ}\mathcal{B}_1 - \frac{1}{3} \partial_{IJ} \mathcal{B}_1^3\Big)\Big]\non \\ &&-\epsilon_{AI}\epsilon_{BJ}\epsilon_{KP}\epsilon_{LQ} \overline\partial_{CD}\Big[\frac{\mathcal{B}_1}{{\rm det}Y}\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]\non \\ &&-2\epsilon_{AI}\epsilon_{KP}\overline\partial_{CD}\Big[\frac{\mathcal{B}_1}{({\rm det}Y)^2} Y_{JL}Y_{BQ}\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\partial_{AB}\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]\non \\ &&+\epsilon_{AI}\epsilon_{BJ}\epsilon_{KP}\epsilon_{LQ} \partial_{AB}\Big[\frac{\overline\partial_{CD}\mathcal{B}_1}{{\rm det}Y}\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]\non \\ &&+2\epsilon_{AI}\epsilon_{KP}\partial_{AB}\Big[\frac{\overline\partial_{CD}\mathcal{B}_1}{({\rm det}Y)^2} Y_{JL}Y_{BQ}\Big(\partial_{IJ}\overline\partial_{KL}\mathcal{B}_1\Big)\Big(Y_{CM}Y_{DN}\partial_{MN}\overline\partial_{PQ}\mathcal{B}_1\Big)\Big]+c.c..\end{eqnarray} \section{Integrating simple string invariants over moduli space} We now consider integrating some simple string invariants we have discussed above over moduli space. Such integrals are of the form \begin{equation} \label{Sp}\int_{\mathcal{F}_2} d\mu \frac{\mathcal{B}}{({\rm det}Y)^3}.\end{equation} In \C{Sp}, $\mathcal{B}$ is constructed out of string invariants, and the integral is over $\mathcal{F}_2$, the fundamental domain of $Sp(4,\mathbb{Z})$. 
The $Sp(4,\mathbb{Z})$ invariant measure $d\mu/({\rm det}Y)^3$ involves \begin{equation} d\mu = \prod_{I\leq J} 2 d({\rm Re} \Omega_{IJ}) \wedge d ({\rm Im}\Omega_{IJ}).\end{equation} For generic $\mathcal{B}$, the integrals \C{Sp} are difficult to evaluate: they involve data over all of moduli space, and the fundamental domain $\mathcal{F}_2$ itself has an involved structure. However, if the integral can be reduced to a boundary term in moduli space, its evaluation becomes considerably simpler as it only involves boundary data. For example, this method facilitated the evaluation of the integral of the KZ invariant over moduli space~\cite{DHoker:2014oxd}. We now consider \C{Sp} when $\mathcal{B}$ is $\mathcal{B}_2 - \mathcal{B}_1^2$, and also when it is $\mathcal{B}_9$. Now from \C{g1} and \C{simpLe}, we see that both these integrals reduce to boundary terms in moduli space. To evaluate them, we first briefly describe the structure of the boundary of moduli space, and the asymptotic expansions of the relevant string invariants, and then proceed to evaluate the integrals. To analyze the boundary structure, we parametrize the period matrix $\Omega$ as \begin{equation} \label{parap}\Omega= \left( \begin{array}{cc} \tau & v \\ v & \sigma \end{array} \right).\end{equation} The boundary of moduli space involves contributions from the separating and the non--separating nodes, as well as their intersection. The separating node is obtained from \C{parap} by taking $v \rightarrow 0$, while keeping $\tau, \sigma$ fixed. At this node, an $SL(2,\mathbb{Z})_\tau \times SL(2,\mathbb{Z})_\sigma$ subgroup of $Sp(4,\mathbb{Z})$ survives with the action \begin{equation} \label{sepexp}v \rightarrow \frac{v}{(c\tau+d)(c'\sigma +d')}, \quad \tau \rightarrow \frac{a\tau+b}{c\tau+d}, \quad \sigma \rightarrow \frac{a'\sigma +b'}{c'\sigma +d'},\end{equation} where $a,b,c,d,a',b',c',d' \in \mathbb{Z}$ and $ad-bc = a'd' -b'c' =1$.
Thus $\tau$ and $\sigma$ parametrize the complex structure moduli of the resulting tori. The non--separating node is obtained by taking $\sigma \rightarrow i\infty$, while keeping $\tau, v$ fixed\footnote{Another contribution comes from taking $\tau \rightarrow i\infty$, while keeping $\sigma, v$ fixed. These two contributions are simply related by $\tau \leftrightarrow \sigma$ exchange, and hence we focus on only one of them.}. At this node, an $SL(2,\mathbb{Z})_\tau$ subgroup of $Sp(4,\mathbb{Z})$ survives whose action on $v, \tau$ and $\sigma$ is given by~\cite{Moore:1986rh,DHoker:2017pvk,DHoker:2018mys} \begin{equation} v \rightarrow \frac{v}{(c\tau+d)}, \quad \tau \rightarrow \frac{a\tau+b}{c\tau+d},\quad \sigma \rightarrow \sigma - \frac{cv^2}{c\tau+d},\end{equation} where $a,b,c,d \in \mathbb{Z}$ and $ad-bc=1$. At this node, $v$ parametrizes the coordinate on the torus with complex structure $\tau$, and thus \begin{equation} -\frac{1}{2} \leq v_1 \leq \frac{1}{2}, \quad 0 \leq v_2 \leq \tau_2.\end{equation} Also $\sigma_2$ along with $v_2$ and $\tau_2$ forms the $SL(2,\mathbb{Z})_\tau$ invariant quantity \begin{equation} \label{deft}t= \sigma_2 - \frac{v_2^2}{\tau_2}\end{equation} which we shall use later. We now consider the asymptotic expansion of various quantities around these nodes that will be relevant for our analysis. To start with, ${\rm det} Y$ behaves as \begin{equation} \label{det1} {\rm det} Y = \tau_2 \sigma_2 +O(v_2^2)\end{equation} at the separating node, and as \begin{equation} \label{det2}{\rm det} Y = \tau_2\sigma_2 + O(\sigma_2^0)\end{equation} at the non--separating node. We now state the expressions for the asymptotic expansions of the string invariants $\mathcal{B}_1$ and $\mathcal{B}_2$~\cite{Wentworth,Jong,DHoker:2013fcx,Pioline:2015qha,DHoker:2017pvk,DHoker:2018mys}. 
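The $SL(2,\mathbb{Z})_\tau$ invariance of $t$ in \C{deft} under the non--separating node action above can be verified numerically; a minimal sketch, with arbitrary sample moduli and an arbitrary choice of $SL(2,\mathbb{Z})$ element:

```python
# Numerical check that t = sigma_2 - v_2^2/tau_2 of eq. (deft) is invariant
# under the SL(2,Z)_tau action at the non-separating node:
#   v -> v/(c tau + d),  tau -> (a tau + b)/(c tau + d),
#   sigma -> sigma - c v^2/(c tau + d).
# The sample moduli and the SL(2,Z) element below are arbitrary test values.

def t_inv(tau, v, sigma):
    return sigma.imag - v.imag ** 2 / tau.imag

tau, v, sigma = 0.3 + 1.7j, 0.2 + 0.5j, 0.1 + 2.3j
a, b, c, d = 2, 1, 1, 1                       # ad - bc = 1

cd = c * tau + d
tau_p, v_p, sigma_p = (a * tau + b) / cd, v / cd, sigma - c * v ** 2 / cd

assert abs(t_inv(tau, v, sigma) - t_inv(tau_p, v_p, sigma_p)) < 1e-12
```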
The asymptotic expansions of $\mathcal{B}_1$ and $\mathcal{B}_2$ around the separating node are given by \begin{eqnarray} \label{sep}\mathcal{B}_1 &=& 4{\rm ln}\vert \lambda \vert + O(\vert \lambda \vert), \non \\ \mathcal{B}_2 &=& 16 {\rm ln}^2 \vert \lambda \vert +O(\vert \lambda \vert),\end{eqnarray} where\footnote{Note that $\vert \lambda \vert$ is $SL(2,\mathbb{Z})_\tau \times SL(2,\mathbb{Z})_\sigma$ invariant under the transformations \C{sepexp}.} \begin{equation} \lambda = 2\pi v \eta^2 (\tau) \eta^2 (\sigma),\end{equation} and $\eta (\tau)$ is the Dedekind eta function. On the other hand, the asymptotic expansions around the non--separating node are given by\footnote{In the asymptotic expansion of $\mathcal{B}_2$, apart from terms that are $O(e^{-t})$, we have also ignored the $O(t^0)$, $O(t^{-1})$ and $O(t^{-2})$ terms~\cite{DHoker:2017pvk,DHoker:2018mys}, as they are not relevant for our analysis. } \begin{eqnarray} \label{nonsep}\mathcal{B}_1 &=& -\frac{2\pi t}{3} -2 g(v) -\frac{5F_2 (v)}{\pi t} +O(e^{-t}), \non \\ \mathcal{B}_2 &=& \frac{4\pi^2t^2}{9}+\frac{8\pi t g(v)}{3} +O(t^0).\end{eqnarray} In \C{nonsep}, $g(v)$ is the genus one Green function given by \begin{equation} \label{Green}g(v)\equiv g(v;\tau) = \sum_{(m,n) \neq (0,0)}\frac{\tau_2}{\pi\vert m+n\tau\vert^2} e^{2\pi i(my-nx)},\end{equation} where we have parametrized $v$ as \begin{equation} \label{param}v = x+\tau y,\end{equation} with $x,y \in (0,1]$. The Green function is single--valued and doubly periodic on the torus. Thus we have that \begin{equation} \label{zerogreen}\int_{\Sigma} d^2 z g (z)= 0, \quad \int_{\Sigma} d^2 z \partial_z \overline\partial_z g (z)=0,\end{equation} which follows from \C{Green}, where $\Sigma$ is the toroidal worldsheet. 
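Since every $(m,n)$ mode in \C{Green} is separately invariant under $v \rightarrow v+1$ and $v \rightarrow v+\tau$, and the $(m,n)=(0,0)$ mode is excluded, both the double periodicity and the first identity in \C{zerogreen} can be checked mode by mode. The following truncated-sum sketch (ours; the truncation order and moduli values are arbitrary) verifies the periodicity numerically:

```python
import numpy as np

# Truncated Fourier sum for the genus-one Green function g(v; tau),
# with v = x + tau*y.  Each (m, n) mode is exactly periodic under
# x -> x + 1 and y -> y + 1, i.e. under v -> v + 1 and v -> v + tau,
# and the excluded (0, 0) mode is what makes g average to zero.
def g_trunc(x, y, tau, M=30):
    total = 0.0 + 0.0j
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            if (m, n) == (0, 0):
                continue
            total += tau.imag / (np.pi * abs(m + n * tau) ** 2) \
                * np.exp(2j * np.pi * (m * y - n * x))
    return total.real

tau = 0.2 + 1.3j
x, y = 0.31, 0.57
assert abs(g_trunc(x, y, tau) - g_trunc(x + 1.0, y, tau)) < 1e-9  # v -> v + 1
assert abs(g_trunc(x, y, tau) - g_trunc(x, y + 1.0, tau)) < 1e-9  # v -> v + tau
```

The periodicity holds term by term, independently of the truncation, which is why the comparison is clean even though the full sum converges only conditionally.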
In \C{nonsep}, we also have that\footnote{Note that $g_2 (v)$ and $E_2$ are $SL(2,\mathbb{Z})_\tau$ invariant.} \begin{equation} F_2 (v) = E_2 - g_2 (v) ,\end{equation} where $g_2 (v)$ is the iterated Green function defined by \begin{equation} g_2 (v) \equiv g_2 (v;\tau) = \int_{\Sigma} \frac{d^2 z}{2\tau_2} g(v-z;\tau)g(z;\tau)\end{equation} and \begin{equation} E_2 \equiv E_2 (\tau) = g_2 (0)\end{equation} is the non--holomorphic Eisenstein series. Now $g(v)$, $g_2 (v)$ and $E_2$ satisfy the differential equations \begin{eqnarray} \label{manyeqn}&& \Delta_\tau g(v) =0, \quad \Delta_v g(v) = -8\pi\tau_2 \delta^2 (v) + 4\pi, \non \\ && \Delta_\tau g_2 (v) = 2 g_2 (v), \quad \Delta_v g_2 (v) = -4\pi g(v), \quad \Delta_\tau E_2 = 2 E_2,\end{eqnarray} which are useful in our analysis. In \C{manyeqn} we have defined the $SL(2,\mathbb{Z})_\tau$ invariant operators\footnote{The $\tau$ derivative is taken at constant $x, y$ in \C{param}, and not at constant $v$.} \begin{equation} \Delta_\tau = 4\tau_2^2 \partial_\tau \overline\partial_\tau , \quad \Delta_v = 4\tau_2 \partial_v \overline\partial_v ,\end{equation} while the delta function is normalized to satisfy $\int_{\Sigma} d^2 z \delta^2 (z)=1$. In order to evaluate the integrals of $\mathcal{B}_2 - \mathcal{B}_1^2$ and $\mathcal{B}_9$ over moduli space, we shall think of the boundary contributions as limits of contributions in the bulk~\cite{DHoker:2014oxd}. 
To see the structure, consider the integral (or its complex conjugate) given by\footnote{We define \begin{equation} \partial_\tau = \frac{\partial}{\partial\tau}, \quad \partial_\sigma = \frac{\partial}{\partial\sigma}, \quad \partial_v= \frac{\partial}{\partial v}\end{equation} and similarly for their complex conjugates.} \begin{eqnarray} \label{boundary}\int_{\mathcal{F}_2} d\mu \overline\partial_{IJ} \Psi_{IJ} = \int_{\mathcal{F}_2} d\mu\Big[\overline\partial_{\tau} \Psi_{11} +\overline\partial_{\sigma} \Psi_{22} +\frac{1}{2}\overline\partial_v \Big(\Psi_{12} +\Psi_{21}\Big)\Big]\end{eqnarray} where we have used \C{pdef}. The first two terms receive contributions from the non--separating node while the remaining terms receive contributions from the separating node, and hence the integral is entirely determined by the asymptotic expansions of $\Psi_{IJ}$ around these nodes. In this limiting procedure, the contribution from the separating node is evaluated in the complex $v$ plane as $\vert v \vert = R \rightarrow 0$. Hence this is an integral in the complex $v$ plane on a circle around the origin with vanishing radius. On the other hand, the contribution from the non--separating node $\sigma_2 \rightarrow \infty$ is evaluated as $t= L\rightarrow \infty$ using \C{deft} in an $SL(2,\mathbb{Z})_\tau$ invariant way\footnote{This essentially reduces to neglecting $v_2 = y \tau_2$ contributions in the final answer that result from various expressions involving $Y_{IJ}$. After taking this limit, what remains is an $SL(2,\mathbb{Z})_\tau$ invariant integral over $v$ and $\tau$.}. We shall see that there are no divergences in this limit in the two cases we consider. For $\mathcal{B}_2 - \mathcal{B}_1^2$, this follows from \C{sep}, since $\mathcal{B}_2 - \mathcal{B}_1^2 \sim O(\vert \lambda \vert)$ at the separating node, and from \C{nonsep}, as $\mathcal{B}_2 - \mathcal{B}_1^2 \sim O(t)$ at the non--separating node.
Thus they lead to absolutely convergent integrals in \C{Sp} on simply using \C{det1} and \C{det2}. For $\mathcal{B}_9$, a similar conclusion should follow from its asymptotic expansions around the various nodes. Before we proceed to calculate these integrals, let us very briefly consider the case when $\mathcal{B} = \mathcal{B}_1$~\cite{DHoker:2014oxd} in \C{Sp}. Using \C{eigenKZ}, we have that \begin{equation} \int_{\mathcal{F}_2} d\mu \frac{\mathcal{B}_1}{({\rm det}Y)^3} =\frac{4}{5} \int_{\mathcal{F}_2} d\mu \partial_{IJ} \Big[ \frac{Y_{IK}Y_{JL}}{({\rm det}Y)^3}\overline\partial_{KL}\mathcal{B}_1\Big],\end{equation} hence reducing to an integral over the boundary of moduli space. Now from \C{nonsep}, we see that the contribution from the non--separating node vanishes. On the other hand, from \C{sep} we see that the contribution from the separating node is of the form \begin{equation} \label{nonvan}\int_{\mathcal{F}_1} \frac{d^2\tau}{\tau_2^2} \int_{\mathcal{F}_1}\frac{d^2\sigma}{\sigma_2^2}\oint \frac{d\overline{v}}{\overline{v}},\end{equation} where $\mathcal{F}_1$ is the fundamental domain of $SL(2,\mathbb{Z})$ and the contour has been mentioned above. Thus \C{nonvan} is non--vanishing because of the presence of the simple pole in the contour integral. We shall see this crucial feature is absent in the integrals which we now analyze. \subsection{Integral involving simple string invariants with two links} We first consider the integral \begin{equation} \int_{\mathcal{F}_2} d\mu \frac{\Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big)}{({\rm det}Y)^3}\end{equation} over moduli space, which using \C{g1} reduces to a boundary term which we evaluate based on the discussion above. Its evaluation requires the asymptotic expansions of $\mathcal{B}_1$ given by \C{sep} and \C{nonsep}. 
First let us consider the contribution to the integral from the separating node, where the only potentially non--vanishing contributions arise from the terms given in \C{sep} in the limit $R \rightarrow 0$. The first term in \C{g1} gives us \begin{eqnarray} \frac{1}{4} \overline\partial_v \Big[ \frac{\Big(\partial_v \mathcal{B}_1\Big) \Big(\partial_v \overline\partial_v \mathcal{B}_1\Big)}{\tau_2 \sigma_2}\Big],\end{eqnarray} which using $\partial_v \overline\partial_v \mathcal{B}_1 \sim \delta^2 (v)$, yields a divergent contribution of the form $\delta^2(v)/v$ on the boundary using \C{boundary}. However the second term in \C{g1} produces a cancelling contribution, and so the total contribution at the separating node vanishes. We next consider the contribution to the integral from the non--separating node using \C{nonsep} as the ignored terms do not contribute. While the first term in \C{g1} does not contribute, the second term gives us \begin{equation} \label{pre1}-\frac{\pi}{6} \partial_t \Big[\frac{\partial_v \overline\partial_v g(v)}{\tau_2^2}\Big].\end{equation} Thus using \C{boundary}, it yields an $SL(2,\mathbb{Z})_\tau$ invariant contribution proportional to\footnote{The integral over $\sigma_1$ simply yields \begin{equation} \int_0^1 d\sigma_1 = 1.\end{equation} The $\sigma_1$ dependence in the asymptotic expansions of the string invariants comes from terms of the form $e^{2\pi i \sigma}$ which are exponentially suppressed for large $t$ and do not contribute to the answer.} \begin{equation} \label{zerO}\int_{\mathcal{F}_1} \frac{d^2\tau}{\tau_2^2}\int_{\Sigma} \frac{d^2v}{\tau_2} \Delta_v g(v)\end{equation} in the final expression. Using \C{zerogreen}, the integral over $\Sigma$ vanishes. Hence there is no contribution from the non--separating node. Thus this yields \begin{equation} \int_{\mathcal{F}_2} d\mu \frac{\Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big)}{({\rm det}Y)^3} =0\end{equation} leading to a vanishing integral. 
\subsection{Integral involving a simple string invariant with three links} We next consider the integral \begin{equation} \int_{\mathcal{F}_2} d\mu \frac{\mathcal{B}_9 }{({\rm det}Y)^3}\end{equation} over moduli space, which using \C{simpLe} again reduces to a boundary term which we now evaluate. Note that its evaluation requires only the asymptotic expansions of $\mathcal{B}_1$ and $\mathcal{B}_2$ given by \C{sep} and \C{nonsep}, and does not require any information about $\mathcal{B}_9$, hence simplifying the analysis considerably. To begin with, consider the contribution to the integral from the separating node, where the potentially non--vanishing contributions arise from \C{sep}. Now using \begin{equation} \label{reduce}\mathcal{B}_1 \partial_{IJ}\mathcal{B}_2 - \mathcal{B}_2\partial_{IJ}\mathcal{B}_1 - \frac{1}{3} \partial_{IJ} \mathcal{B}_1^3 = \Big(\mathcal{B}_1^2 - \mathcal{B}_2\Big)\partial_{IJ} \mathcal{B}_1 +\mathcal{B}_1 \partial_{IJ} \Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big)\end{equation} and that $\mathcal{B}_2 - \mathcal{B}_1^2 \sim O(\vert \lambda \vert)$ at this node, we see that the contribution from the first term in \C{simpLe} vanishes. The second term contributes \begin{eqnarray} -\frac{1}{8} \overline\partial_v\Big[ \mathcal{B}_1\Big(\partial_v \overline\partial_v \mathcal{B}_1\Big)\partial_v \Big(\partial_v \overline\partial_v \mathcal{B}_1\Big)\Big]\end{eqnarray} which yields a contribution involving $\delta^2(v) \partial_v \delta^2 (v){\rm ln}\vert \lambda \vert$ on the boundary of moduli space. However, the third term produces a cancelling contribution. The fourth term contributes \begin{eqnarray} \frac{1}{8} \partial_v\Big[ \frac{\overline\partial_v\mathcal{B}_1}{\tau_2 \sigma_2}\Big(\partial_v \overline\partial_v \mathcal{B}_1\Big)^2 \Big]\end{eqnarray} which yields a contribution involving $(\delta^2(v))^2/\overline{v}$ on the boundary of moduli space. The fifth term produces a cancelling contribution. 
Thus the total contribution at the separating node vanishes. We next consider the contribution to the integral from the non--separating node. As $t= L \rightarrow \infty$, the relevant contributions from \C{simpLe} to the final expression must be $SL(2,\mathbb{Z})_\tau$ invariant, and hence we focus on them. The contribution from the first term in \C{simpLe} is of the form \begin{equation} \partial_t \Big[ \frac{1}{t\tau_2^3} \Big(\mathcal{B}_1 \partial_t \mathcal{B}_2 - \mathcal{B}_2 \partial_t \mathcal{B}_1 - \frac{1}{3}\partial_t \mathcal{B}_1^3 \Big)\Big],\end{equation} resulting in \begin{equation} \frac{1}{t}\int_{\mathcal{F}_1} \frac{d^2\tau}{\tau_2^2}\int_{\Sigma} \frac{d^2v}{\tau_2} \Big[\Big(\mathcal{B}_1^2 - \mathcal{B}_2\Big)\partial_t \mathcal{B}_1 +\mathcal{B}_1 \partial_t \Big(\mathcal{B}_2 - \mathcal{B}_1^2\Big)\Big]\Big\vert_{t=L\rightarrow \infty}\end{equation} in the final expression. From \C{nonsep} we see that $\mathcal{B}_2 - \mathcal{B}_1^2 \sim O(t^0)$ and hence the first term in \C{simpLe} does not contribute. The $SL(2,\mathbb{Z})_\tau$ invariant contributions from the second term in \C{simpLe} arise from \begin{equation} \label{2t}-\frac{i}{8}\partial_t \Big[ \frac{\mathcal{B}_1}{t\tau_2} \Big(\partial_v\overline\partial_v \mathcal{B}_1\Big) \partial_v\Big(Y_{2M} Y_{2N} \partial_{MN} \overline\partial_v \mathcal{B}_1\Big)\Big],\end{equation} which is exactly cancelled by a contribution coming from the third term in \C{simpLe}, much like the analysis for the separating node where competing contributions cancel. However, as we now show, \C{2t} produces only vanishing contributions by itself, so one need not even keep track of this cancellation. This is like the analysis of the non--separating node in the previous section.
To see this, from \C{nonsep} we have that \begin{eqnarray} \label{V1}\frac{\mathcal{B}_1}{t\tau_2} \Big(\partial_v\overline\partial_v \mathcal{B}_1\Big) \sim \frac{\partial_v\overline\partial_v g(v)}{\tau_2} +\frac{1}{t\tau_2}\Big(\frac{g(v)}{\tau_2}+ g(v)\partial_v\overline\partial_v g(v)\Big) + O(t^{-2}),\end{eqnarray} where we have used \begin{equation} \Delta_v F_2 (v) = 4\pi g(v).\end{equation} We also have that \begin{equation} \label{V2} \partial_v\Big(Y_{2M} Y_{2N} \partial_{MN} \overline\partial_v \mathcal{B}_1\Big)\sim \frac{t}{\tau_2}\end{equation} as the $O(t^0)$ contribution cancels. Thus from \C{V1}, \C{V2} and \C{2t}, we get a contribution of the form \begin{equation} \label{C1}\partial_t \Big[\frac{t \partial_v \overline\partial_v g(v)}{\tau_2^2}\Big] \end{equation} yielding a potentially linearly divergent contribution \begin{equation} L \int_{\mathcal{F}_1} \frac{d^2\tau}{\tau_2^2}\int_{\Sigma} \frac{d^2v}{\tau_2} \Delta_v g(v)\end{equation} at the boundary. However, the integral is the same as \C{zerO} and vanishes. We also get finite contributions as $L \rightarrow \infty$. One of them is of the form \begin{equation} \label{C2}\partial_t \Big[ \frac{g(v)}{\tau_2^3}\Big],\end{equation} which yields the boundary term \begin{equation} \int_{\mathcal{F}_1}\frac{d^2\tau}{\tau_2^2}\int_{\Sigma} \frac{d^2 v}{\tau_2} g(v),\end{equation} which vanishes using \C{zerogreen}. 
The other finite contribution is of the form \begin{equation} \label{C3}\partial_t \Big[ \frac{g(v) \Delta_v g(v)}{\tau_2^3}\Big]\end{equation} leading to the boundary term \begin{equation} \int_{\mathcal{F}_1}\frac{d^2\tau}{\tau_2^2}\int_{\Sigma} \frac{d^2 v}{\tau_2} g(v)\Delta_v g(v).\end{equation} However on using \C{manyeqn}, and setting $g(0)=0$\footnote{Coincident Green functions, resulting from colliding vertex operators, are not in the moduli space of these graphs as they produce other local operators using the operator product expansion, the propagation of which leads to kinematic poles, rather than contact interactions in the amplitude. These cannot be seen by a naive perturbative expansion in $\alpha'$ by keeping a finite number of terms. In fact, this follows from an analysis of the structure of the Koba--Nielsen factor in the string amplitude using the cancelled propagator argument (see \cite{DHoker:2020aex}, for example, for a recent discussion). }, we see this contribution vanishes using \C{zerogreen}. Proceeding similarly, we see that the remaining terms in \C{simpLe} do not give any additional non--vanishing contributions. Thus the total contribution from the non--separating node vanishes. Hence this leads to the vanishing integral \begin{equation} \int_{\mathcal{F}_2} d\mu \frac{\mathcal{B}_9 }{({\rm det}Y)^3} =0\end{equation} over moduli space. Thus we see that manipulating the expressions $\chi^{(i)} (\Omega,\overline\Omega)$ by evaluating them in two different ways leads to results for integrals of some simple string invariants over moduli space. We expect that generalizing this analysis for graphs with more links will prove useful in evaluating various integrals over moduli space by reducing them to boundary terms, which depend only on asymptotic data.
\section{Introduction} \IEEEPARstart{R}{econfigurable} intelligent surface (RIS) technology is a promising and cost-effective solution for improving the spectrum and energy efficiency of wireless communication systems {\cite{8936989, 8741198, Liu2012:Energy, 9366805}}. With the assistance of a smart controller, the RIS can adjust its reflection coefficients such that the desired signals are added constructively. The joint active and passive beamforming design has been studied in many existing works with continuous phase shifts (e.g., {\cite{8811733, 9133184}}) or discrete phase shifts (e.g., {\cite{8930608, 9013288}}) at the reflecting elements. Moreover, the RIS can also be used in millimeter wave (mmWave) systems {\cite{8485924}}. To achieve the above joint active and passive beamforming gains, the cascade channel should be estimated efficiently and accurately. However, for mmWave channels, it is difficult to establish a scheme that achieves high accuracy and high efficiency simultaneously. Consequently, efficiency has been the priority in existing work, and several strategies have been proposed to make such estimations efficiently. The authors of {\cite{8611231}} utilized the generalized approximate message passing (GAMP) algorithm to find the entries of the unknown mmWave channel matrix. Similarly, in {\cite{9133156}}, the authors adopted a message passing (MP) based algorithm to estimate the cascade channels. The orthogonal matching pursuit (OMP) method was used in {\cite{9103231}}. Nevertheless, the existing schemes cannot achieve the optimal estimation accuracy, i.e., the Cramér-Rao lower bound (CRLB) of channel estimation for RIS-aided mmWave systems. In contrast to these efficiency-oriented algorithms, we aim to establish a scheme that achieves the CRLB.
For this purpose, we first convert the channel estimation task into a noisy sparse signal recovery problem by utilizing the properties of the discrete Fourier transform (DFT) matrix and the Kronecker product. Then, a joint typicality-based estimator is proposed to carry out the recovery task and establish the asymptotic achievability of the CRLB when the product of the number of receiver antennas and the number of time slots approaches infinity. The correctness of our result is verified through both mathematical proofs and numerical simulations. In addition, based on the sparsity structure established in this letter, our analysis can also be applied to the conventional point-to-point mmWave system, which is a special case of RIS-assisted systems. However, it should be noted that although this is the first result establishing the achievability of the CRLB of channel estimation for RIS-assisted mmWave systems, our scheme is computationally complex and incurs substantial training overhead. Thus, finding a lower-complexity estimator that still achieves the CRLB is an important direction for future work. \section{System and Channel Model} \subsection{System Model} We consider an RIS-assisted mmWave system, as illustrated in \textcolor{black}{Fig. 1}, where the base station (BS) and the mobile station (MS) are equipped with $N_{\mathrm{s}}$ and $N_{\mathrm{d}}$ antennas, respectively, and the RIS is equipped with $N_{\mathrm{r}}$ reflecting elements. Although the BS and the MS are equipped with large numbers of antennas, they can fit within a compact form factor because of the small wavelength of mmWave signals. In this letter, to better illustrate our results, we neglect the direct link from the BS to the MS. Nevertheless, the extension to the scenario with the direct link is straightforward. In addition, due to the inherent sparsity of mmWave channels {\cite{6834753}}, there exist only a dominant line-of-sight path and very few non-line-of-sight paths in the BS-RIS link and the RIS-MS link.
Then, the elevation (azimuth) angle-of-departure (AoD) of the $i^{\textit{th}}$ path at the BS and the RIS are denoted as $\theta_{i}$ ($\phi_{i}$) and $\gamma_{i}^{\prime}$ ($\mu_{i}^{\prime}$), respectively, and the elevation (azimuth) angle-of-arrival (AoA) of the $i^{\textit{th}}$ path at the RIS and the MS are denoted as $\gamma_{i}$ ($\mu_{i}$) and $\vartheta_{i}$ ($\varphi_{i}$), respectively. \begin{figure}[htbp] \centerline{\includegraphics[width = 7 cm]{Fig1.eps}} \caption{The RIS-assisted mmWave communication system with an $N_{\mathrm{s}}$-antenna BS, an $N_{\mathrm{d}}$-antenna MS, and an RIS comprising $N_{\mathrm{r}}$ reflecting elements.} \label{fig 1} \vspace{-13 pt} \end{figure} \subsection{Channel Model} Due to the inherent sparse nature of mmWave channels, the number of paths between the BS and RIS is small relative to the dimensions of BS-RIS channel matrix ${\mathbf{G}^{\prime}}$, and we assume it is at most $L^{\prime}$. Then, $\mathbf{G}^{\prime}$ can be modeled as follows: \begin{equation} \label{BS-RIS channel matrix} {\mathbf{G}^{\prime}} = \sqrt{\frac{N_\text{s}N_\text{r}}{\rho^{\prime}}} \sum_{i=1}^{L^{\prime}} \alpha_{i} \mathbf{a}_{\mathrm{r}}(\gamma_{i}, \mu_{i}) \mathbf{a}_{\mathrm{s}}^{\mathrm{H}}(\theta_{i}, \phi_{i}) \text{,} \end{equation} where $\rho^{\prime}$ denotes the average path-loss between the BS and the RIS, $\alpha_{i}$ is the propagation gain associated with the $i^{\text{\textit{th}}}$ path, and $\mathbf{a}_{\mathrm{r}}(\gamma_{i}, \mu_{i})$ and $\mathbf{a}_{\mathrm{s}}(\theta_{i}, \phi_{i})$ are the array response vectors at the BS and RIS, respectively. We assume that the RIS deployed here is an $N_{\mathrm{r,h}} \times N_{\mathrm{r,w}}$ uniform planar array. 
Then, we have \begin{equation} \mathbf{a}_{\mathrm{s}}\left(\theta_{i}, \phi_{i}\right) = [e^{j(1-1)u_{\mathrm{s}}}, e^{j(2-1)u_{\mathrm{s}}}, \cdots, e^{j(N_{\mathrm{s}}-1)u_{\mathrm{s}}}]^{\mathrm{T}} \text{,} \quad \;\; \end{equation} \begin{equation} \label{3} \begin{aligned} \mathbf{a}_{\mathrm{r}}\left(\gamma_{i}, \mu_{i}\right) & = \mathbf{a}_{\mathrm{r,h}}\left(\gamma_{i}, \mu_{i}\right) \otimes \mathbf{a}_{\mathrm{r,w}}\left(\gamma_{i}, \mu_{i}\right) \\ & = [e^{j(1-1)u_{\mathrm{r,h}}}, e^{j(2-1)u_{\mathrm{r,h}}}, \cdots, e^{j(N_{\mathrm{r,h}}-1)u_{\mathrm{r,h}}}]^{\mathrm{T}} \\ & \otimes [e^{j(1-1)u_{\mathrm{r,w}}}, e^{j(2-1)u_{\mathrm{r,w}}}, \cdots, e^{j(N_{\mathrm{r,w}}-1)u_{\mathrm{r,w}}}]^{\mathrm{T}} \text{,} \end{aligned} \end{equation} where $\otimes$ represents the Kronecker product, the directional parameters: $u_{\mathrm{s}} = \frac{2\pi d}{\lambda}\sin(\theta_{i}) \cos (\phi_{i})$, $u_{\mathrm{r,h}} = \frac{2\pi d}{\lambda} \cos(\gamma_{i})$, and $u_{\mathrm{r,w}} = \frac{2\pi d}{\lambda}\sin(\gamma_{i}) \cos (\mu_{i})$, $d$ is the separation between antennas (reflecting elements) at the BS (RIS), and $\lambda$ is the wavelength of transmitted signal. Similarly, we assume that the number of paths between the RIS and MS is at most $L^{\prime\prime}$. 
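As a minimal illustration of the factorized structure of the array responses (our sketch; the angles are arbitrary illustrative values and we take $d = \lambda/2$), the uniform planar array response at the RIS is the Kronecker product of two uniform-linear-array responses:

```python
import numpy as np

# The RIS array response as a Kronecker product of ULA responses along
# the height and width of the panel.  Angles below are arbitrary
# illustrative values, with element spacing d = lambda/2.
N_rh, N_rw = 4, 5
gamma_i, mu_i = 0.7, 1.1                  # example elevation / azimuth (rad)
u_rh = np.pi * np.cos(gamma_i)            # 2*pi*d/lambda = pi for d = lambda/2
u_rw = np.pi * np.sin(gamma_i) * np.cos(mu_i)

a_rh = np.exp(1j * u_rh * np.arange(N_rh))
a_rw = np.exp(1j * u_rw * np.arange(N_rw))
a_r = np.kron(a_rh, a_rw)                 # length N_r = N_rh * N_rw

# Element (p, q) of the panel carries phase p*u_rh + q*u_rw:
assert np.allclose(a_r.reshape(N_rh, N_rw), np.outer(a_rh, a_rw))
```

The reshape check makes the indexing explicit: entry $(p,q)$ of the panel corresponds to entry $p N_{\mathrm{r,w}} + q$ of the Kronecker-product vector.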
Then, the RIS-MS channel matrix $\mathbf{G}^{\prime\prime}$ is modeled as follows: \begin{equation} \label{RIS-user channel matrix} \mathbf{G}^{\prime\prime} = \sqrt{\frac{N_\mathrm{r}N_\mathrm{d}}{\rho^{\prime\prime}}} \sum_{i=1}^{L^{\prime\prime}} \beta_{i} \mathbf{a}_{\mathrm{d}}(\vartheta_{i}, \varphi_{i}) \mathbf{a}_{\mathrm{r}}^{\mathrm{H}}(\gamma^{\prime}_{i}, \mu^{\prime}_{i}) \text{,} \end{equation} where $\rho^{\prime\prime}$ denotes the average path-loss between the RIS and the MS, $\beta_{i}$ is the propagation gain associated with the $i^{\text{\textit{th}}}$ path, and $\mathbf{a}_{\mathrm{d}}\left(\vartheta_{i}, \varphi_{i}\right)$ is the array response vector at the MS, which can be written as \begin{equation} \label{5} \mathbf{a}_{\mathrm{d}}\left(\vartheta_{i}, \varphi_{i}\right) = [e^{j(1-1)u_{\mathrm{d}}}, e^{j(2-1)u_{\mathrm{d}}}, \cdots, e^{j(N_{\mathrm{d}}-1)u_{\mathrm{d}}}]^{\mathrm{T}} \text{,} \end{equation} where $u_{\mathrm{d}} = \frac{2\pi d}{\lambda}\sin(\vartheta_{i}) \cos (\varphi_{i})$. Based on the BS-RIS and RIS-MS channel models established in (\ref{BS-RIS channel matrix}) and (\ref{RIS-user channel matrix}), the overall $N_{\mathrm{d}} \times N_{\mathrm{s}}$ channel matrix $\mathbf{H}$ can be expressed as \begin{equation} \label{channel matrix} {\mathbf{H}} = \mathbf{G}^{\prime\prime} \mathbf{\Phi} \mathbf{G}^{\prime} \text{,} \end{equation} where the diagonal matrix $\mathbf{\Phi}= \operatorname{diag}[e^{j\textcolor{black}{\boldsymbol{\varrho}}}]$ is the response at the RIS \footnote{ Since the RIS is a passive device, each reflecting element is usually designed to maximize the signal reflection. Thus, we set the amplitude of each reflection coefficient equal to one for simplicity in this letter. }, and the $N_{\mathrm{r}}$ dimensional vector $\textcolor{black}{\boldsymbol{\varrho}=[\varrho_1, \cdots, \varrho_{N_{\mathrm{r}}}]^{\mathrm{T}}}$ represents the phase shifts of the reflecting elements at the RIS.
Then, the received signals $\mathbf{Y} \in \mathbb{C}^{N_{\mathrm{s}} \times K}$ at the BS over $K$ time slots can be expressed as \begin{equation} \label{received signal} \begin{aligned} \mathbf{Y} & = \mathbf{U}_{\mathrm{s}}^{\mathrm{H}} \left[ \mathbf{H}^{\mathrm{H}} \left( \mathbf{U}_{\mathrm{d}}\mathbf{X}\right) + \mathbf{N}\right]\\ & = \mathbf{U}_{\mathrm{s}}^{\mathrm{H}} \mathbf{H}^{\mathrm{H}} \mathbf{U}_{\mathrm{d}}\mathbf{X} + \tilde{\mathbf{N}} \text{,} \end{aligned} \end{equation} where $\mathbf{U}_{\mathrm{d}}$ and $\mathbf{U}_{\mathrm{s}}^{\mathrm{H}}$ are the transmit beamforming and receive combining matrices, respectively, $\mathbf{X}$ represents the pilot signal transmitted by the MS, and $\tilde{\mathbf{N}}$ is the additive white Gaussian noise with elements independently drawn from $\mathcal{C}\mathcal{N} (0, \sigma^{2})$. The $i^{\textit{th}}$ columns of $\mathbf{X}$ and $\tilde{\mathbf{N}}$ correspond to the $i^{\textit{th}}$ time slot, and we denote the transmit power as $p_{\mathrm{MS}} = \mathbb{E}\lbrace{\boldsymbol{x}^{\mathrm{H}}[i]}\boldsymbol{x}[i]\rbrace $. \section{Sparse Structure of Cascade Channel} Before estimating the cascade channel $\mathbf{H}$, the first problem is how to convert the estimation task into a noisy sparse signal recovery problem, since the representation of $\mathbf{H}$ in (\ref{channel matrix}) is not visibly sparse. To this end, pre-discretized grids can be utilized to establish the sparse representation {\cite{9103231}}. However, this method may cause grid mismatch and reduce estimation accuracy. Another issue we should note is that even mildly ill-conditioned sensing matrices can lead to estimation failure in a compressed sensing problem {\cite{8698290, 6875146}}. To prevent these issues, we obtain the sparse representation by expressing the cascade channel in the angular domain based on suitable DFT bases.
Thus, the beamforming matrices $\mathbf{U}_{\mathrm{d}}$ and $\mathbf{U}_{\mathrm{s}}^{\mathrm{H}}$ are set as the $N_{\mathrm{d}} \times N_{\mathrm{d}}$ and $N_{\mathrm{s}} \times N_{\mathrm{s}}$ spatial unitary DFT matrices, respectively. A given path with the directional parameters $u_{\mathrm{s}}$ and $u_{\mathrm{d}}$, which are defined under (\ref{3}) and (\ref{5}), has almost all of its energy along the particular vectors $[\mathbf{U}_{\mathrm{s}}]_{:,m}$ and $[\mathbf{U}_{\mathrm{d}}]_{:,n}$, and very little along all the others, if $m$ and $n$ satisfy {\cite{8611231}}: \begin{equation} \left|u_{\mathrm{s}} - \frac{2\pi(m-1)}{N_{\mathrm{s}}} \right| < \frac{2\pi}{N_{\mathrm{s}}} \text{,} \end{equation} \begin{equation} \left|u_{\mathrm{d}} - \frac{2\pi(n-1)}{N_{\mathrm{d}}} \right| < \frac{2\pi}{N_{\mathrm{d}}} \text{.} \end{equation} To illustrate this visually, \textcolor{black}{Fig. \ref{fig 2}} plots a specific realization of the channel magnitude in the angular domain. As seen there, the true channel is indeed sparse in the angular domain, i.e., it exhibits only a few dominant coefficients. \begin{figure}[htbp] \centerline{\includegraphics[width = 6.5 cm]{Fig2.eps}} \caption{Angular-domain channel for $N_{\mathrm{s}} = 50$, $N_{\mathrm{d}} = 50$, and $N_{\mathrm{r}}=40$. BS-RIS channel has $2$ paths and RIS-MS channel has $2$ paths.} \vspace{-10 pt} \label{fig 2} \end{figure} Consequently, the RIS-assisted mmWave channel is inherently sparse in the angular domain when expressed in suitable DFT bases.
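The concentration described above can be reproduced in a few lines (our sketch, with hypothetical on-grid directions rather than the realization shown in Fig.~\ref{fig 2}): a single propagation path between two uniform linear arrays collapses onto essentially one angular-domain coefficient.

```python
import numpy as np

# Illustrative sketch: a one-path channel between two ULAs becomes
# (approximately) sparse when expressed in unitary DFT bases.  The
# directions are chosen on the DFT grid, so a single coefficient
# carries essentially all the energy.
Ns, Nd = 50, 50
u_s, u_d = 2 * np.pi * 7 / Ns, 2 * np.pi * 12 / Nd  # hypothetical on-grid directions
a_s = np.exp(1j * u_s * np.arange(Ns))
a_d = np.exp(1j * u_d * np.arange(Nd))
H = np.outer(a_d, a_s.conj())                       # rank-one path, Nd x Ns

Us = np.fft.fft(np.eye(Ns), norm="ortho")           # unitary DFT bases
Ud = np.fft.fft(np.eye(Nd), norm="ortho")
H_ang = Ud.conj().T @ H @ Us                        # angular-domain channel

energy = np.abs(H_ang) ** 2
assert energy.max() / energy.sum() > 0.99           # one dominant coefficient
```

For off-grid directions the energy leaks into neighboring bins, which is the grid-mismatch effect mentioned in the text.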
Utilizing the DFT beamforming matrices $\mathbf{U}_{\mathrm{d}}$ and $\mathbf{U}_{\mathrm{s}}^{\mathrm{H}}$ and vectorizing the received signals $\mathbf{Y}$ at the BS yields \begin{equation} \label{13} \begin{aligned} \boldsymbol{y}& = \operatorname{vec}\left\lbrace\mathbf{U}_{\mathrm{s}}^{\mathrm{H}} \mathbf{H}^{\mathrm{H}} \mathbf{U}_{\mathrm{d}} \mathbf{X} + \tilde{\mathbf{N}}\right\rbrace = \operatorname{vec} (\tilde{\mathbf{H}}^{\mathrm{H}} \mathbf{X}) + \operatorname{vec} (\tilde{\mathbf{N}}) \\ & \overset{(a)}{=} {\left(\mathbf{X}^{\mathrm{T}} \otimes \mathbf{I}_{N_{\mathrm{s}}} \right)} {\operatorname{vec} (\tilde{\mathbf{H}}^{\mathrm{H}})} + \operatorname{vec} (\tilde{\mathbf{N}}) = {\mathbf{\Upsilon}}{\boldsymbol{\upsilon}} + \boldsymbol{n} \text{,} \end{aligned} \end{equation} where $\tilde{\mathbf{H}}^{\mathrm{H}} = \mathbf{U}_{\mathrm{s}}^{\mathrm{H}} \mathbf{H}^{\mathrm{H}} \mathbf{U}_{\mathrm{d}}$ is the cascade channel represented in the angular domain, $\boldsymbol{\upsilon} = \operatorname{vec} (\tilde{\mathbf{H}}^{\mathrm{H}})$ is the sparse signal that we need to recover, $\mathbf{\Upsilon} = \mathbf{X}^{\mathrm{T}} \otimes \mathbf{I}_{N_{\mathrm{s}}}$ denotes the measurement matrix, $\boldsymbol{n} = \operatorname{vec} (\tilde{\mathbf{N}})$ is the additive Gaussian noise, and the equality (a) follows from the relation between the vectorization of a matrix product and the Kronecker product {\cite{zhang2017matrix}}. We assume that $\boldsymbol{\upsilon}$ is sparse with at most $L \propto L^{\prime} \times L^{\prime\prime}$ non-zero entries in unknown locations. The sparse-level $L$ is prior information and is related to the number of paths.
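The vectorization identity used in step (a) of (\ref{13}) can be checked directly (our sketch; the dimensions are arbitrary), keeping in mind that $\operatorname{vec}\{\cdot\}$ stacks columns, i.e., is column-major:

```python
import numpy as np

# Check of step (a) in eq. (13): vec(A X) = (X^T kron I) vec(A),
# with vec = column-major (Fortran-order) stacking.
rng = np.random.default_rng(0)
Ns, Nd, K = 4, 3, 5
A = rng.normal(size=(Ns, Nd)) + 1j * rng.normal(size=(Ns, Nd))  # plays Htilde^H
X = rng.normal(size=(Nd, K)) + 1j * rng.normal(size=(Nd, K))    # pilot block

vec = lambda M: M.flatten(order="F")     # column-major vec{.}
Upsilon = np.kron(X.T, np.eye(Ns))       # measurement matrix X^T kron I_Ns

assert np.allclose(vec(A @ X), Upsilon @ vec(A))
```

The `order="F"` flag matters: with the default row-major flattening the identity would require $\mathbf{I}_{N_{\mathrm{s}}} \otimes \mathbf{X}^{\mathrm{T}}$ instead.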
Once $\boldsymbol{\upsilon}$ is recovered, an estimate of $\mathbf{H}$ is readily obtained as follows: \begin{equation} \hat{\mathbf{H}} = \mathbf{U}_{\mathrm{d}} \hat{\operatorname{\tilde{\mathbf{H}}}} \mathbf{U}_{\mathrm{s}}^{\mathrm{H}} \text{,} \end{equation} where $\hat{\operatorname{\tilde{\mathbf{H}}}}^{\mathrm{H}} = \operatorname{unvec}\left(\hat{\boldsymbol{\upsilon}}\right)$ and $\hat{\boldsymbol{\upsilon}}$ is an estimate of ${\boldsymbol{\upsilon}}$. Moreover, the estimate of $\operatorname{vec} ({\mathbf{H}}) = (\mathbf{G}^{\prime \mathrm{T}} \otimes \mathbf{G}^{\prime\prime}) \operatorname{vec}(\mathbf{\Phi})$ {\cite{zhang2017matrix}} is enough to configure the phase shifts at the RIS because the beamforming problem can be converted to an optimization problem that maximizes $ \|\mathbf{H}\|^{2}_{\mathrm{F}} = \|\operatorname{vec}\left( \mathbf{H}\right)\|^{2}_{2}$ with respect to $\operatorname{vec}(\mathbf{\Phi})$. \section{Asymptotic Achievability of The Cramér-Rao Lower Bound via Joint Typicality Estimator} Many classical compressed sensing algorithms such as basis pursuit (BP) {\cite{1614066}} and orthogonal matching pursuit (OMP) {\cite{4385788}} can be utilized to recover the sparse signal $\boldsymbol{\upsilon}$. However, these algorithms tend to settle for locally optimal approximations to the actual sparse signal {\cite{1614066, 4385788, 5550495, 5419092, 4839056}}. Thus, in this section, we utilize Shannon theory and the notion of joint typicality {\cite{4694104}} to asymptotically achieve the CRLB of channel estimation for RIS-assisted mmWave systems, where the estimator has no knowledge of the actual locations of the non-zero entries in $\boldsymbol{\upsilon}$. To prove the asymptotic achievability of the CRLB, we first state the following lemma.
\begin{lemma} \label{fullrank} Let the set $\mathcal{J} \subset \left\lbrace 1, \cdots, N_{\mathrm{d}}N_{\mathrm{s}} \right\rbrace $ such that $|\mathcal{J}| = L$ and $\mathbf{\Upsilon}_{\mathcal{J}}$ be the sub-matrix of the measurement matrix $\mathbf{\Upsilon}$ with the columns corresponding to the index set $ \mathcal{J}$. Then, we have $\operatorname{rank}(\mathbf{\Upsilon}_{\mathcal{J}})= L$ with probability $1$. \end{lemma} \begin{IEEEproof} First, we consider the rank of $\mathbf{X}^{\mathrm{T}}$. The $(m,n)^{\textit{th}}$ entry of it represents the pilot symbol transmitted by the $n^{\textit{th}}$ antenna at the $m^{\textit{th}}$ time slot. Thus, all of its entries are independent and designable. For simplicity, we set them as independent and identically distributed (i.i.d.) according to $\mathcal{CN}(0,1)$. Let $\boldsymbol{x}_{i}$ and $\boldsymbol{x}_{j}$ be two distinct columns of $\mathbf{X}^{\mathrm{T}}$. Utilizing the law of large numbers yields \begin{equation} \frac{1}{K}\boldsymbol{x}_{i}^{\mathrm{H}} \boldsymbol{x}_{j} = \frac{1}{K}\sum_{k} x_{k,i}^{*}x_{k,j} \rightarrow 0 \text{,} \;\; i \neq j \text{,} \end{equation} as $K$ goes to infinity. Thus, the columns of $\mathbf{X}^{\mathrm{T}}$ are asymptotically mutually orthogonal, and $\mathbf{X}^{\mathrm{T}}$ has full column rank with probability $1$ when $K>N_{\mathrm{d}}$. Then, since $\mathbf{I}_{N_{\mathrm{s}}}$ is an identity matrix, it trivially has full column rank. By utilizing the rank property of the Kronecker product, $\operatorname{rank}(\boldsymbol{\Upsilon}) = \operatorname{rank}(\mathbf{X}^{\mathrm{T}})\operatorname{rank}(\mathbf{I}_{N_{\mathrm{s}}})$, we prove the statement of this lemma. \end{IEEEproof} \vspace{2 pt} Then, to establish the joint typicality-based channel estimator, we need to define the notion of joint typicality.
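A numerical sketch of the argument in Lemma~\ref{fullrank} (ours; the dimensions are arbitrary): the normalized Gram matrix of the i.i.d. $\mathcal{CN}(0,1)$ pilot columns approaches the identity as $K$ grows, so the inner products of distinct columns vanish after normalization by $K$.

```python
import numpy as np

# Normalized Gram matrix (1/K) (X^T)^H (X^T) of the K x N_d pilot matrix
# with i.i.d. CN(0, 1) entries: off-diagonal entries concentrate around 0
# as K grows (law of large numbers), supporting full column rank for K > N_d.
rng = np.random.default_rng(1)

def max_offdiag(K, Nd=6):
    Xt = (rng.normal(size=(K, Nd)) + 1j * rng.normal(size=(K, Nd))) / np.sqrt(2)
    G = Xt.conj().T @ Xt / K
    return np.abs(G - np.diag(np.diag(G))).max()

small_K, large_K = max_offdiag(100), max_offdiag(100_000)
assert large_K < small_K   # off-diagonal mass shrinks as K grows
assert large_K < 0.05
```

The off-diagonal entries scale like $1/\sqrt{K}$, which is why the normalization by $K$ (rather than the raw inner product) is the quantity that converges.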
We adopt the definition from {\cite{4694104}}, given as follows: \vspace{- 2 pt} \begin{definition}{($\delta$-Joint Typicality)} \label{Joint Typicality} The received signal ${\boldsymbol{y}}$ collected over $K$ time slots and the index set $\mathcal{J} \subset \left\lbrace 1, 2, \cdots, N_{\mathrm{d}}N_{\mathrm{s}} \right\rbrace $ with $|\mathcal{J}| = L$ are $\delta$-jointly typical if $\operatorname{rank}(\mathbf{\Upsilon}_{\mathcal{J}})= L$ and \begin{equation} \left| \frac{1}{K N_{\mathrm{s}}}\| \mathbf{\Pi}_{\mathbf{\Upsilon}_{\mathcal{J}}}^{\perp} {\boldsymbol{y}} \|^{2}-\frac{K N_{\mathrm{s}}-L}{K N_{\mathrm{s}}} \sigma^{2} \right| < \delta \text{,} \end{equation} where $\mathbf{\Upsilon}_{\mathcal{J}}$ is the sub-matrix of the measurement matrix $\mathbf{\Upsilon}$ with the columns corresponding to the index set $ \mathcal{J}$, and $\mathbf{\Pi}_{\mathbf{\Upsilon}_{\mathcal{J}}}^{\perp} = \mathbf{I}-\mathbf{\Upsilon}_{\mathcal{J}}(\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{J}})^{-1}\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}}$ is the orthogonal projection matrix onto the orthogonal complement of the column space of $\mathbf{\Upsilon}_{\mathcal{J}}$. \end{definition} Next, we establish the following proposition to show that the proposed estimator applies to the considered problem. \begin{proposition} The joint typicality-based estimator can be utilized to estimate the cascade channel in an RIS-assisted mmWave system, \textit{i.e.}, to solve the noisy sparse signal recovery problem in Eq. (\ref{13}). The detailed channel estimation steps are illustrated in Algorithm 1. \end{proposition} \vspace{-2 pt} \begin{IEEEproof} The measurement matrix $\boldsymbol{\Upsilon}$ in (\ref{13}) is proved to have full column rank in Lemma \ref{fullrank}, which ensures that the sub-spaces spanned by different choices of $L$ column vectors of the measurement matrix $\boldsymbol{\Upsilon}$ are distinct.
Based on Definition \ref{Joint Typicality}, if the $L$ column vectors are chosen correctly, only additive white Gaussian noise remains in the orthogonal complement. Thus, the joint typicality-based estimator can be utilized to solve the noisy sparse signal recovery problem in (\ref{13}). \end{IEEEproof} \vspace{2 pt} \begin{algorithm}[!h] \caption{Joint Typicality-Based Channel Estimator} \begin{algorithmic}[1] \STATE \textbf{Input:} The numbers of antennas $N_{\mathrm{s}}$ at the BS and $N_{\mathrm{d}}$ at the MS, the pilot signal $\mathbf{X}$, the received signal vector ${\boldsymbol{y}}$, and the maximal sparsity level $L$. \WHILE{index set $\mathcal{J}_{i-1}$ is not $\delta$-jointly typical with ${\boldsymbol{y}}$} \STATE $i^{\text{\textit{th}}}$ \textit{iteration over the $\binom{N_{\mathrm{d}}N_{\mathrm{s}}}{L}$ possible $L$-dimensional sub-spaces}: \STATE Determine whether the following inequality is satisfied: \begin{equation*} \left| \frac{1}{K N_{\mathrm{s}}}\| \mathbf{\Pi}_{\mathbf{\Upsilon}_{\mathcal{J}_{i}}}^{\perp} {\boldsymbol{y}} \|^{2}-\frac{K N_{\mathrm{s}}-L}{K N_{\mathrm{s}}} \sigma^{2} \right| < \delta \end{equation*} \STATE If it is satisfied, compute the estimate $\hat{\boldsymbol{\upsilon}}$ by projecting the received signal ${\boldsymbol{y}}$ onto the sub-space spanned by $\mathbf{\Upsilon}_{\mathcal{J}_{i}}$: \begin{equation*} \hat{\boldsymbol{\upsilon}} = (\mathbf{\Upsilon}_{\mathcal{J}_{i}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{J}_{i}})^{-1}\mathbf{\Upsilon}_{\mathcal{J}_{i}}^{\mathrm{H}}{\boldsymbol{y}} \end{equation*} \ENDWHILE \STATE If no set is $\delta$-jointly typical with $\boldsymbol{y}$, output the zero vector. \STATE \textbf{Output:} The channel estimate $\hat{\mathbf{H}}^{\mathrm{H}} = \mathbf{U}_{\mathrm{s}} \operatorname{unvec}(\hat{\boldsymbol{\upsilon}})\mathbf{U}_{\mathrm{d}}^{\mathrm{H}}$.
\end{algorithmic} \end{algorithm} To further prove that the CRLB on the estimation error can be asymptotically achieved when the estimator has no knowledge of the locations of the non-zero entries in $\boldsymbol{\upsilon}$, we state the following lemmas. \vspace{-2 pt} \begin{lemma} \label{CRLBlemma} For any unbiased estimate $\hat{\boldsymbol{\upsilon}}$ of $\boldsymbol{\upsilon}$, the Cramér-Rao lower bound on the MSE is given by \begin{equation} \label{CRLB-Lambda} \mathbb{E}\left\lbrace \| \hat{\boldsymbol{\upsilon}} - \boldsymbol{\upsilon}\|^{2} \right\rbrace \geq \sigma^{2} \operatorname{Tr}\left[(\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{I}})^{-1}\right] \text{.} \end{equation} \end{lemma} \begin{IEEEproof} The likelihood function of the random vector ${\boldsymbol{y}}$ conditioned on $\boldsymbol{\upsilon}$ is \begin{equation} p({\boldsymbol{y}}; \boldsymbol{\upsilon}) = \frac{\operatorname{exp}\left( -\frac{1}{2\sigma^2} \|{\boldsymbol{y}} - \mathbf{\Upsilon}_{\mathcal{I}} \boldsymbol{\upsilon}_{\mathcal{I}}\|^{2}\right)}{(2\pi)^{K N_{\mathrm{s}}/2}\sigma^{K N_{\mathrm{s}}}} \text{,} \end{equation} where $\boldsymbol{\upsilon}_{\mathcal{I}}$ is the subvector of $\boldsymbol{\upsilon}$ with elements corresponding to the index set $\mathcal{I}$. Then, by using (6) in {\cite{1519691}}, the CRLB can be written as (\ref{CRLB-Lambda}). \end{IEEEproof} \vspace{-2 pt} \begin{lemma}{\rm(Lemma 2.3 of {\cite{5361481}})} \label{lemma 2.3 of shannon limit} \\ Let $\mathcal{I} = \operatorname{supp}(\boldsymbol{\upsilon})$ and $\operatorname{rank}(\mathbf{\Upsilon}_{\mathcal{I}}) = L$.
Then, for $\delta > 0$, it holds that \begin{equation} \begin{aligned} & \mathbb{P}\left( \left| \frac{1}{K N_{\mathrm{s}}}\| \mathbf{\Pi}_{\mathbf{\Upsilon}_{\mathcal{J}}}^{\perp} {\boldsymbol{y}} \|^{2}-\frac{K N_{\mathrm{s}}-L}{K N_{\mathrm{s}}} \sigma^{2} \right| > \delta \right) \\ & \quad \leq 2 \operatorname{exp}\left( {-\frac{\delta^{2}}{4 \sigma^{4}} \frac{K^{2} N_{\mathrm{s}}^{2}}{K N_{\mathrm{s}}-L+\frac{2\delta}{\sigma^{2}}K N_{\mathrm{s}} }} \right) \text{.} \end{aligned} \end{equation} Let $\mathcal{J}$ be an index set such that $|\mathcal{J}|= L$, $|\mathcal{I} \cap \mathcal{J}| < L$, and $\operatorname{rank}(\mathbf{\Upsilon}_{\mathcal{J}}) = L$. Then, for $\delta > 0$, it holds that \begin{equation} \begin{aligned} &\mathbb{P}\left( \left| \frac{1}{K N_{\mathrm{s}}}\| \mathbf{\Pi}_{\mathbf{\Upsilon}_{\mathcal{J}}}^{\perp} {\boldsymbol{y}} \|^{2}-\frac{K N_{\mathrm{s}}-L}{K N_{\mathrm{s}}} \sigma^{2} \right| < \delta \right) \\ & \quad \leq \operatorname{exp}\left( \frac{{L - K N_{\mathrm{s}}}}{4} \left( \frac{\sum_{k\in\mathcal{I} \backslash \mathcal{J}}{|{\upsilon}_{k}|^{2}}-\delta^{\prime}}{\sum_{k\in\mathcal{I} \backslash \mathcal{J}}{|{\upsilon}_{k}|^{2}}+\sigma^{2}} \right)^{2} \right) \text{,} \end{aligned} \end{equation} where ${\upsilon}_{k}$ is the $k^{\textit{th}}$ entry in $\boldsymbol{\upsilon}$ and \begin{equation} \delta^{\prime} = \delta \frac{K N_{\mathrm{s}}}{K N_{\mathrm{s}}- L} \text{.} \end{equation} \end{lemma} \begin{IEEEproof} Please refer to {\cite{5361481}} for the proof. \end{IEEEproof} \vspace{5 pt} Finally, based on the above lemmas, we establish the asymptotic achievability of the CRLB in the following theorem. 
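Before stating the theorem, the exhaustive search of Algorithm 1 can be illustrated numerically. The sketch below is a simplified real-valued stand-in for the complex model of the letter: the helper `joint_typicality_estimate`, the dimensions ($M$ plays the role of $K N_{\mathrm{s}}$), the Gaussian measurement matrix, and the choice of $\delta$ are all our own illustrative assumptions, not the letter's settings.

```python
import numpy as np
from itertools import combinations

def joint_typicality_estimate(y, Ups, L, sigma, delta):
    """Exhaustive search over L-column index sets of Ups (cf. Algorithm 1):
    return the least-squares estimate on the first delta-jointly typical
    set, or the all-zero vector if no set is typical."""
    M, N = Ups.shape
    for J in combinations(range(N), L):
        A = Ups[:, J]
        if np.linalg.matrix_rank(A) < L:
            continue  # the definition requires rank(Ups_J) = L
        # Projector onto the orthogonal complement of span(A).
        P_perp = np.eye(M) - A @ np.linalg.solve(A.T @ A, A.T)
        # delta-joint typicality test on the residual energy.
        if abs(np.linalg.norm(P_perp @ y) ** 2 / M
               - (M - L) / M * sigma ** 2) < delta:
            v_hat = np.zeros(N)
            v_hat[list(J)] = np.linalg.solve(A.T @ A, A.T @ y)
            return v_hat
    return np.zeros(N)

rng = np.random.default_rng(0)
M, N, L, sigma = 200, 10, 2, 0.1           # illustrative sizes; M ~ K*N_s
Ups = rng.standard_normal((M, N)) / np.sqrt(M)
v_true = np.zeros(N)
v_true[2], v_true[7] = 2.0, -3.0            # sparse signal on support {2, 7}
y = Ups @ v_true + sigma * rng.standard_normal(M)
v_hat = joint_typicality_estimate(y, Ups, L, sigma, delta=0.5 * sigma**2)
```

With a well-separated signal and a small margin $\delta$, only the true support should pass the typicality test, and the projection step then returns the genie-aided least-squares estimate on that support.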
\vspace{-2 pt} \begin{theorem} \label{theorem 2} With the joint typicality-based channel estimator given in Algorithm 1, the MSE of cascade channel estimation in an RIS-assisted mmWave system asymptotically achieves the CRLB as the product of the number of receiver antennas and the number of time slots tends to infinity. This bound is asymptotically achieved whether or not the estimator knows the locations of the non-zero entries. \end{theorem} \begin{IEEEproof} The MSE of the joint typicality estimator (averaged over all possible measurement matrices) can be upper-bounded as follows: \begin{equation*} \begin{aligned} \varepsilon_{\delta}(K N_{\mathrm{s}}) = & \mathbb{E}\left\lbrace \|\hat{\boldsymbol{\upsilon}}-\boldsymbol{\upsilon} \|^{2} \right\rbrace \\ \leq & \int_{\boldsymbol{\Upsilon}} \|\boldsymbol{\upsilon}\|^{2} \mathbb{P}(\mathrm{E}_{0}) dP(\boldsymbol{\Upsilon}) \\ + & \int_{\boldsymbol{\Upsilon}} \mathbb{E}_{{\boldsymbol{n}}|\boldsymbol{\Upsilon}} \left\lbrace \| (\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{I}})^{-1}\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} {\boldsymbol{y}} - \boldsymbol{\upsilon}\|^{2} \right\rbrace \\ & \quad \; \times \mathbb{P}(\mathcal{I} \sim {\boldsymbol{y}}) dP(\boldsymbol{\Upsilon}) \end{aligned} \end{equation*} \begin{equation} \label{MSEjoint} \begin{aligned} + & \int_{\boldsymbol{\Upsilon}} \sum_{\mathcal{J} \neq \mathcal{I}} \mathbb{E}_{{\boldsymbol{n}}|\boldsymbol{\Upsilon}} \left\lbrace \| (\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{J}})^{-1}\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}} {\boldsymbol{y}} - \boldsymbol{\upsilon}\|^{2} \right\rbrace \\ & \quad \; \times \mathbb{P}(\mathcal{J} \sim {\boldsymbol{y}}) dP(\boldsymbol{\Upsilon}) \text{,} \end{aligned} \end{equation} where $\mathbb{P}(\cdot)$ represents the event probability defined over the noise density, the event $\mathrm{E}_{0}$ means that the estimator does not find any set $\delta$-jointly
typical to ${\boldsymbol{y}}$, $dP(\boldsymbol{\Upsilon})$ represents the probability measure of the matrix $\boldsymbol{\Upsilon}$, and the inequality follows from Boole's inequality. The second term corresponds to $\mathcal{I}$ and is the MSE of a genie-aided estimation in which the estimator knows $\operatorname{supp}(\boldsymbol{\upsilon})$. We rewrite it as follows: \begin{equation} \begin{aligned} & \int_{\boldsymbol{\Upsilon}} \mathbb{E}_{{\boldsymbol{n}}|\boldsymbol{\Upsilon}} \left\lbrace \| (\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{I}})^{-1}\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}}{\boldsymbol{y}} - \boldsymbol{\upsilon}\|^{2} \right\rbrace \mathbb{P}(\mathcal{I} \sim {\boldsymbol{y}}) dP(\boldsymbol{\Upsilon}) \\ & = \mathbb{E}_{{\boldsymbol{n}}, \boldsymbol{\Upsilon}} \left\lbrace \| (\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{I}})^{-1}\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} {\boldsymbol{n}}\|^{2} \right\rbrace = \mathbb{E}_{\boldsymbol{\Upsilon}} \left\lbrace \sigma^{2} \operatorname{Tr}(\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{I}})^{-1}\right\rbrace \text{.} \end{aligned} \end{equation} By using Lemma \ref{CRLBlemma}, we obtain that the second term in (\ref{MSEjoint}) is the CRLB of the genie-aided cascade channel estimation. Next, we show that the first and third terms in (\ref{MSEjoint}) converge to zero as $K N_{\mathrm{s}} \rightarrow \infty$.
By using Lemma \ref{lemma 2.3 of shannon limit}, the first term can be upper-bounded as \begin{equation} \begin{aligned} & \int_{\boldsymbol{\Upsilon}} \|\boldsymbol{\upsilon}\|^{2} \mathbb{P}(\mathrm{E}_{0}) dP(\boldsymbol{\Upsilon}) \\ & \quad \leq 2 \|\boldsymbol{\upsilon}\|^{2} \operatorname{exp}\left( {-\frac{\delta^{2}}{4 \sigma^{4}} \frac{K^{2}N_{\mathrm{s}}^{2}}{K N_{\mathrm{s}}-L+\frac{2\delta}{\sigma^{2}}KN_{\mathrm{s}}}} \right) \text{.} \end{aligned} \end{equation} This term approaches zero as $K N_{\mathrm{s}} \rightarrow \infty$, since $\|\boldsymbol{\upsilon}\|^{2}$ grows polynomially in $ N_{\mathrm{s}} $ while the exponent tends to negative infinity as $K N_{\mathrm{s}} \rightarrow \infty$. By using Lemma \ref{lemma 2.3 of shannon limit}, the third term can be upper-bounded as \begin{equation} \label{term 3} \begin{aligned} & \int_{\boldsymbol{\Upsilon}} \sum_{\mathcal{J} \neq \mathcal{I}} \mathbb{E}_{{\boldsymbol{n}}|\boldsymbol{\Upsilon}} \left\lbrace \| (\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}} \mathbf{\Upsilon}_{\mathcal{J}})^{-1}\mathbf{\Upsilon}_{\mathcal{J}}^{\mathrm{H}} {\boldsymbol{y}} - \boldsymbol{\upsilon}\|^{2} \right\rbrace \\ & \quad \quad \quad \; \times \mathbb{P}(\mathcal{J} \sim {\boldsymbol{y}}) dP(\boldsymbol{\Upsilon}) \\ &\quad \leq (L \sigma^{2}+\|\boldsymbol{\upsilon}\|^{2}) \int_{\boldsymbol{\Upsilon}} \sum_{\mathcal{J} \neq \mathcal{I}} \mathbb{E}_{{\boldsymbol{n}}|\boldsymbol{\Upsilon}} \mathbb{P}(\mathcal{J} \sim {\boldsymbol{y}}) dP(\boldsymbol{\Upsilon}) \\ & \quad \leq (L \sigma^{2}+\|\boldsymbol{\upsilon}\|^{2}) \; \times \\ & \quad \; \quad \sum_{\mathcal{J} \neq \mathcal{I}}\operatorname{exp}\left( \frac{{L-K N_{\mathrm{s}}}}{4} \left( \frac{\sum_{k\in\mathcal{I} \backslash \mathcal{J}}{|{\upsilon}_{k}|^{2}}-\delta^{\prime}}{\sum_{k\in\mathcal{I} \backslash \mathcal{J}}{|{\upsilon}_{k}|^{2}}+\sigma^{2}} \right)^{2} \right) \text{.} \end{aligned} \end{equation} This term tends to zero as $K
N_{\mathrm{s}} \rightarrow \infty$, since $(L \sigma^{2}+\|\boldsymbol{\upsilon}\|^{2})$ grows polynomially in $ N_{\mathrm{s}} $ while $(L - K N_{\mathrm{s}})$ tends to negative infinity as $K N_{\mathrm{s}} \rightarrow \infty$. \end{IEEEproof} \section{Numerical Results} In this section, we numerically illustrate the result of Theorem \ref{theorem 2}. To verify that the CRLB of cascade channel estimation for RIS-assisted mmWave communication systems can be asymptotically achieved as the product $K N_{\mathrm{s}}$ of the time slot number and the receiver antenna number tends to infinity, {Fig. \ref{fig 3}} plots the CRLB, the MSE upper bound, and the performance of the joint typicality estimator versus the time slot number $K$ for signal-to-noise ratios (SNRs) in the set $\left\lbrace20\text{ dB}, 30\text{ dB}, 40\text{ dB} \right\rbrace$. \begin{figure}[htbp] \centerline{\includegraphics[width = 9 cm]{Fig3.eps}} \caption{The performance of the joint typicality-based channel estimator versus the time slot number for different SNRs.} \label{fig 3} \end{figure} In this figure, the numbers of antennas at the BS and the MS are both set to $5$, and the number of reflecting elements at the RIS is set to $10$. The numbers of paths in the BS-RIS and RIS-MS channels are both set to $1$. In addition, the numerical results in {Fig. \ref{fig 3}} are obtained through $1,000$ Monte Carlo trials. It is observed that the CRLB is achieved as the time slot number tends to infinity, which confirms the result in Theorem \ref{theorem 2}. When we fix the time slot number $K$ and vary the receiver antenna number $N_{\mathrm{s}}$, the curves are similar to those in {Fig. \ref{fig 3}}; we omit them due to space limitations.
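The genie-aided term in the proof of Theorem \ref{theorem 2} can also be checked in isolation: with the support known, the least-squares estimator attains $\sigma^{2}\operatorname{Tr}[(\mathbf{\Upsilon}_{\mathcal{I}}^{\mathrm{H}}\mathbf{\Upsilon}_{\mathcal{I}})^{-1}]$ in expectation. Below is a minimal real-valued Monte Carlo sketch; the sizes, $\sigma$, and the Gaussian measurement matrix are illustrative assumptions, not the simulation settings above.

```python
import numpy as np

# Monte Carlo check of the genie-aided benchmark: with the support known,
# the least-squares estimate attains the CRLB sigma^2 * Tr[(A^T A)^{-1}]
# in expectation (real-valued stand-in for the complex model).
rng = np.random.default_rng(1)
M, L, sigma, trials = 100, 3, 0.2, 2000       # illustrative sizes
A = rng.standard_normal((M, L)) / np.sqrt(M)  # columns of Upsilon on the true support
crlb = sigma**2 * np.trace(np.linalg.inv(A.T @ A))
G = np.linalg.solve(A.T @ A, A.T)             # genie-aided LS: v_hat = G @ y
# The estimation error is G @ n, independent of v, so average ||G n||^2 directly.
noise = sigma * rng.standard_normal((trials, M))
mse = np.mean(np.sum((noise @ G.T) ** 2, axis=1))
```

Since $\mathbb{E}\|\mathbf{G}\boldsymbol{n}\|^{2}=\sigma^{2}\operatorname{Tr}(\mathbf{G}\mathbf{G}^{\mathrm{T}})=\sigma^{2}\operatorname{Tr}[(\mathbf{A}^{\mathrm{T}}\mathbf{A})^{-1}]$, the empirical `mse` matches `crlb` up to Monte Carlo error.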
This is encouraging not only because the CRLB of cascade channel estimation for RIS-assisted mmWave systems can be asymptotically achieved, but also because the number of time slots consumed in channel estimation can be decreased by increasing the number of receiver antennas. \section{Conclusion} In this letter, we considered the estimation of the cascade channel in an RIS-assisted mmWave communication system. With the joint typicality-based channel estimator, the MSE of the estimation asymptotically achieves the CRLB as the product of the number of receiver antennas and the number of time slots tends to infinity, and this bound is asymptotically achieved whether or not the estimator knows the locations of the non-zero entries. To the best of our knowledge, this is the first work that establishes the asymptotic achievability of the CRLB for cascade channel estimation in RIS-assisted mmWave systems. Our result also reveals that the training overhead can be reduced by deploying more receiver antennas. However, the established scheme has high computational complexity and overhead; finding a lower-complexity estimator that still achieves the CRLB for RIS-assisted mmWave systems is an important direction for future work. \bibliographystyle{IEEEtran}
\renewcommand\subset{\subseteq} \renewcommand\supset{\supseteq} \title{Diophantine sets in general are Cantor sets} \author{Fernando Argentieri} \begin{document} \maketitle \begin{abstract} Let $ {\gamma} \in(0;\su{2}),\t\geq 1$ and define the ``$ {\gamma} ,\t$ Diophantine set'' as: $$D_{\gamma,\tau}:=\{\a\in (0;1): ||q\a||\geq\frac{ {\gamma} }{q^{\t}}\quad\forall q\in\Bbb{N}\},\qquad||x||:=\inf_{p\in\Bbb{Z}}|x-p|. $$ In this paper we study the topology of these sets and show that, for large $\t$ and for almost all $ {\gamma} >0$, $D_{\gamma,\tau}$ is a Cantor set. \end{abstract} \section{Introduction} Diophantine sets play an important role in dynamical systems, in particular in small divisors problems, with applications to KAM theory, Aubry-Mather theory, conjugation of circle diffeomorphisms, etc. (see, for example, [3], [5], [9], [12], [13], [14], [16]). {\smallskip\noindent} The set $D_{\gamma,\tau}$ is compact and totally disconnected (since $D_{\gamma,\tau}\cap\Bbb{Q}=\emptyset$); however, these sets need not be Cantor sets. In fact, in \cite{17} we exhibited various examples in which $D_{\gamma,\tau}$ has isolated points. In this paper we prove the following:\linebreak {\bf Theorem} Let $\t>\frac{3+\sqrt{17}}{2}$. Then, for almost all $ {\gamma} >0$, $D_{\gamma,\tau}$ is a Cantor set. {\smallskip\noindent} By \cite{6}, for $\t=1$ and $\su{3}< {\gamma} <\su{2}$, $D_{\gamma,\tau}$ is countable (and nonempty for $ {\gamma} $ close enough to $\su{3}$ from above). In particular, this result does not hold for $\t=1$. We expect that the condition $\t>\frac{3+\sqrt{17}}{2}$ can be improved to $\t>3$; however, it is not clear what the best constant is.
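To make the definition concrete, membership in $D_{\gamma,\tau}$ can be probed numerically by a brute-force search over denominators. The sketch below is illustrative only: a finite search over $q\le Q$ can disprove membership or estimate the margin $\inf_{q} q^{\t}||q\a||$, but cannot certify it for all $q$; the helper name `dio_margin` and the sample values are our own choices.

```python
import math

def dio_margin(alpha, tau, Q):
    """Brute-force min over 1 <= q <= Q of q^tau * ||q*alpha||, a finite-Q
    proxy for gamma(alpha, tau) = inf_q q^tau * ||q*alpha||."""
    best = math.inf
    for q in range(1, Q + 1):
        frac = (q * alpha) % 1.0
        dist = min(frac, 1.0 - frac)        # ||q * alpha||, distance to Z
        best = min(best, q**tau * dist)
    return best

alpha = (math.sqrt(5) - 1) / 2    # 1/golden ratio: badly approximable
m_irr = dio_margin(alpha, 1, 2000)   # stays bounded away from 0 (near 0.3819...)
m_rat = dio_margin(0.5, 1, 2000)     # a rational: the margin hits 0 at q = 2
```

For this $\a$ and $\t=1$ the finite margin stays above $\su{3}$, consistent with $\a\in D_{ {\gamma} ,1}$ for $ {\gamma} \leq\su{3}$ at least over the tested range, while any rational is excluded from every $D_{\gamma,\tau}$.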
Following the same proof, we can also show that, for fixed $\t>\frac{3+\sqrt{17}}{2}$ and almost all $ {\gamma} >0$, if $\a\in D_{\gamma,\tau}$ and $U$ is an open neighborhood of $\a$, then $ {\mu} (D_{\gamma,\tau}\cap U)>0$.\linebreak {\smallskip\noindent} The paper is organized as follows: in the second section we give some basic definitions and remarks, in the third section we prove our result, and in the last section we present some natural questions. \section{Definitions and remarks} \subsection{Definitions} \begin{itemize} \item $\Bbb{N}:=\{1,2,3,...\}$, $\Bbb{N}_{0}:=\{0,1,2,3,...\}$ \item Given $a,b\in\Bbb{Z} -\{0\}$, we denote by $(a,b)$ the greatest common divisor of $a$ and $b$. \item Let $\a$ be a real number. We denote by $[\a]$ the integer part of $\a$ and by $\{\a\}$ the fractional part of $\a$. \item Given E$\subset{\Bbb{R}}$, we denote by $\mathcal{I}$(E) the set of isolated points of E. \item Given E$\subset{\Bbb{R}}$, we denote by $\mathcal{A}$(E) the set of accumulation points of E. \item We say that E$\subset{\Bbb{R}}$ is perfect if $\mathcal{A}$(E)=E. \item Given a Borel set E$\subset{\Bbb{R}}$, we denote by $ {\mu} $(E) the Lebesgue measure of E. \item A topological space X is totally disconnected if the points are the only connected subsets of X. \item $X\subset\Bbb{R}$ is a Cantor set if it is closed, totally disconnected and perfect. \item For $E\subset\Bbb{R}^{n}$, $\dim_{H}E$ is the Hausdorff dimension of $E$.
\item Given $\a\in\Bbb{R}$ we define: $$||\a||:=\min_{p\in\Bbb{Z}}|\a -p|$$ \item Given $ {\gamma} >0, \t\geq1$, we define the $( {\gamma} ,\t)$ Diophantine points in $(0;1)$ as the numbers in the set: $$ D_{\gamma,\tau}:=\{\a\in (0 ;1): ||q\a||\geq\frac{ {\gamma} }{q^{\t}}\quad\forall q\in\Bbb{N}\}$$ \item $$ D^{\Bbb{R}}_{ {\gamma} ,\t}:=\{\a\in\Bbb{R}:||q\a||\geq\frac{ {\gamma} }{q^{\tau}}\quad\forall q\in\Bbb{N}\},$$ $$D_{\t}:=\bigcup_{ {\gamma} >0}D_{ {\gamma} ,\t},\quad D:=\bigcup_{\t\geq 1}D_{\t}.$$ We call $D$ the set of Diophantine numbers. \item Given $\t\geq 1,\a\in \Bbb{R}$, we define: $$\gamma(\alpha,\tau):=\inf_{q\in\Bbb{N}}q^{\t}||q\a||$$ \item Given $\a\in\Bbb{R}$ we define: $$\t(\a):=\inf\{\t\geq 1:\gamma(\alpha,\tau)>0\}$$ \item Given an irrational number $\a=[a_{0};a_{1},...]:=a_0+\su{a_1+\su{a_2+...}}$, we denote by $\{\frac{p_n}{q_n}\}_{n\in\Bbb{N}_{0}}$ the convergents of $\a$, and set $\a_{n}:=[a_{n};a_{n+1},...]$\footnote{for information about continued fractions, see [4], [8], [15]}. \item We write $[a_1,a_2,a_3,...]:=\su{a_1+\su{a_2+\su{a_3+...}}}$. \item Let $\a$ be an irrational number.
We define: $$ {\gamma} _{n}(\a,\t):=q_n^{\t}||q_n\a||=q_n^{\t}|q_n\a-p_n|$$ \item Let $\t\geq 1$, $$ {\gamma} _{-}(\a,\t):=\inf_{n\in2 \Bbb{N}_{0}} {\gamma} _{n}(\a,\t),$$ $$ {\gamma} _{+}(\a,\t):=\inf_{n\in2\Bbb{N}_{0}+1} {\gamma} _{n}(\a,\t),$$ $${\mathcal{D}_{\t}}:=\{\a\in D_{\t} :\t(\a)=\t\},$$ $${\mathcal{I}}^{1}_{ {\gamma} ,\t}:=\{\a\in D_{ {\gamma} ,\t}: \exists n\not\equiv m\quad{(\rm{mod} 2)}, {\gamma} _{n}(\a,\t)= {\gamma} _{m}(\a,\t)=\gamma(\alpha,\tau)\},$$ $${\mathcal{I}}^{2}_{ {\gamma} ,\t}:=\{\a\in D_{ {\gamma} ,\t}: \exists n\in\Bbb{N}_{0}, {\gamma} _{n}(\a,\t)= {\gamma} (\a,\t)\}\cap({\mathcal{I}}^{1}_{ {\gamma} ,\t})^{c},$$ $${\mathcal{I}}^{3}_{ {\gamma} ,\t}:={\mathcal{I}}(D_{\gamma,\tau})\cap({\mathcal{I}}^{1}_{ {\gamma} ,\t}\cup{\mathcal{I}}^{2}_{ {\gamma} ,\t})^{c},$$ $${\mathcal{I}}^{1}_{\t}:=\bigcup_{ {\gamma} >0}{\mathcal{I}}^{1}_{ {\gamma} ,\t},$$ $${\mathcal{I}}^{2}_{\t}:=\bigcup_{ {\gamma} >0}{\mathcal{I}}^{2}_{ {\gamma} ,\t},$$ $${\mathcal{I}}^{3}_{\t}:=\bigcup_{ {\gamma} >0}{\mathcal{I}}^{3}_{ {\gamma} ,\t}.$$ \end{itemize} \subsection{Remarks} We list here some simple remarks. For a proof see \cite{17}. \begin{enumerate} \item $\a\in D_{\gamma,\tau}\iff 1-\a\inD_{\gamma,\tau}$. \item $\gamma(\alpha,\tau)\leq \min\{\a,1-\a\}.$ \item Fixed $\t\geq 1$, $ {\gamma} (.,\t):D_{\t} \rightarrow (0,\frac{1}{2})$. \item $D_{\gamma,\tau}^{\Bbb{R}}=\bigcup_{n\in\Bbb{Z}}(D_{\gamma,\tau}+n)$, thus we can restrict to study the Diophantine points in $(0,1)$. \item \beq{fond} \left\{ \begin{array}{l} {\gamma} _{n}(\a,\t)=\frac{q_{n}^{\t}}{\a_{n+1}q_{n}+q_{n-1}},\\ \su{ {\gamma} _{n}(\a,\t)}=\frac{q_{n+1}}{q_{n}^{\t}}+\frac{1}{\a_{n+2}q_{n}^{\t-1}} \end{array}\right. \end{equation} \item $\gamma(\alpha,\tau)=\inf_{n\in\Bbb{N}_{0}} {\gamma} _{n}(\a,\t)$. \item If $\t<\t(\a)$, then $\gamma(\alpha,\tau)=0$; if $\t>\t(\a)$ then $\gamma(\alpha,\tau)>0$. Moreover, for $\t>\t(\a)$ the inf is a minimum. 
\item $\a\in {\mathcal{D}_{\t}}\iff \t(\a)=\t$ and $\gamma(\alpha,\tau)>0$. \item If $\a\in{\mathcal{I}}^{1}_{ {\gamma} ,\t}$, then $\a$ is an isolated point of $D_{\gamma,\tau}$. \item The cardinality of ${{\mathcal{I}}^{1}_{\t}}$ is at most countable. \item $ {\mu} ({\mathcal{D}_{\t}})=0$ for all $\t\geq 1$. \item $ {\gamma} _{0}(\a,\t)=\left\{\a\right\}$; in particular, $ {\gamma} _{0}(\a,\t)$ does not depend on $\t$. \item Let $\frac{p}{q}$ be a rational number. \beq{} \a\in D_{\t}\iff \left\{\a+\frac{p}{q}\right\}\in D_\t, \end{equation} \beq{} \a\in{\mathcal{D}_{\t}}\iff \left\{\a+\frac{p}{q}\right\}\in{\mathcal{D}_{\t}}. \end{equation} \item If $\t>\t(\a)$ and $ {\gamma} _{-}(\a,\t)= {\gamma} _{+}(\a,\t)$, then $\a\in{\mathcal{I}_{\t}}$. \item $\a\in D_\t\iff q_{n+1}=O(q_{n}^{\t}).$ \end{enumerate} \section{Proof of the Theorem} In the first part of this section we suppose, without loss of generality, that $n$ is always even. In fact, for $n$ odd it suffices to consider $1-\a$ ($\a=[a_{0};...,a_{n},...]\in D_{\gamma,\tau}\iff 1-\a\in D_{\gamma,\tau}$, and the denominators of the odd convergents to $1-\a$ are the same as those of the even convergents to $\a$; hence, by symmetry, everything proved for $n$ even continues to hold for $n$ odd). Moreover, throughout this section $0< {\gamma} <\su{2}$ (otherwise $D_{\gamma,\tau}=\emptyset$). We want to prove that, for $\t>\frac{3+\sqrt{17}}{2}$: $$ {\mu} \left(\left\{0< {\gamma} <\frac{1}{2}:{\mathcal{I}}(D_{\gamma,\tau})\not=\emptyset\right\}\right)=0.$$ By Remark 10 it is enough to prove this for ${\mathcal{I}}^{2}_{ {\gamma} ,\t}$ and ${\mathcal{I}}^{3}_{ {\gamma} ,\t}$. Observe that the isolated points of type 2 and 3 are obtained by infinitely many intersections of intervals centered at rational numbers $\frac{p}{q}$ with length $\frac{2 {\gamma} }{q^{\t+1}}$.
Thus, the first step is to show that, given $\a\in D_{\gamma,\tau}$, it is enough (up to a set of measure zero and for $\t$ big enough) to control the intersection of intervals centered at the convergents. The second step will be to show that, if intervals centered at the convergents intersect, then the coefficients of the continued fraction cannot grow too fast. In the final step we prove that, when intervals centered at the convergents do not intersect and for big convergents, the interval between two subsequent convergents (with the same parity) contains a Diophantine set of positive measure. \lem{} \label{l8} Let $ {\gamma} >0, \t>1$, $\a\in D_{\gamma,\tau}$, let $\frac{p_{n}}{q_{n}}$ be the convergents to $\a$, and set $$I_{n}:=\left(\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}\right).$$ Suppose that $\exists N\in\Bbb{N}$ such that, for all $n>N$ even: \beq{2} \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}. \end{equation} For $n>N$ define $$A_{n}:=\left(\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}, \frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\right).$$ Moreover, suppose that for every $n$ (even): \beq{1} \a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}} \end{equation} Then, there exists $ N_{1}\in\Bbb{N}$ such that, for all $n>N_{1}$: $$\frac{p}{q}\not\in I_{n}\ \Longrightarrow\ \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}},\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}\not\in A_{n}.$$ \elem{} \par\medskip\noindent{\bf Proof\ } Note that it is enough to verify the inequality when $\frac{p}{q}<\a$. In fact the inequality is trivial if $\frac{p}{q}>\a$ (because $\a\in D_{\gamma,\tau}$ implies $\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}\geq\a>\frac{p_{n+2}}{q_{n+2}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}$ by (\ref{1})). By (\ref{2}) it follows that $A_{n}\cap A_{m}=\emptyset$ for $n\not=m$, with $n,m>N$ even.
From $$\a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}}$$ for $n$ even, we get $$\max_{2n\leq N} \frac{p_{2n}}{q_{2n}}+\frac{ {\gamma} }{q_{2n}^{\t+1}}=:C<\a,$$ from which it follows that there exists $N_{1}\in\Bbb{N}$ such that for $n$ even, $n>N_{1}$: $$\frac{p_{n}}{q_{n}}-\frac{ {\gamma} }{q_{n}^{\t+1}}>C.$$ If $\frac{p}{q}=\frac{p_{m}}{q_{m}}\not \in I_{n}$ is an even convergent to $\a$ with $n>N_{2}:=\max\{N,N_{1}\}$ then, for $m\leq N$ even: $$\frac{p_{m}}{q_{m}}<\frac{p_{n}}{q_{n}}.$$ Moreover, by definition of $N_1$ it follows that: $$\frac{p_{m}}{q_{m}}+\frac{ {\gamma} }{q_{m}^{\t+1}}\leq C<\frac{p_{n}}{q_{n}}-\frac{ {\gamma} }{q_{n}^{\t+1}},$$ from which it follows that the Lemma holds if $\frac{p}{q}=\frac{p_{m}}{q_{m}}$ is an even convergent to $\a$ with $m\leq N$. If $m>N$ and $n>m$ is even: $$\frac{p_{m}}{q_{m}}+\frac{ {\gamma} }{q_{m}^{\t+1}}<\frac{p_{m+2}}{q_{m+2}}-\frac{ {\gamma} }{q_{m+2}^{\t+1}}\leq \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}$$ while, for $n<m$ even: $$\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}>\frac{p_{m-2}}{q_{m-2}}+\frac{ {\gamma} }{q_{m-2}^{\t+1}}\geq \frac{p_{n+2}}{q_{n+2}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}.$$ So Lemma \ref{l8} is true if $\frac{p}{q}$ is an even convergent to $\a$. Thus, Lemma \ref{l8} remains to be verified when $\frac{p}{q}$ is not a convergent to $\a$. It is not restrictive to suppose that there exists $m\not=n$ even for which $\frac{p}{q}\in I_{m}$, otherwise Lemma \ref{l8} is trivial. Now we show that, for $m$ big enough: $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}, \frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}\in\left(\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}},\frac{p_{m+2}}{q_{m+2}}+\frac{ {\gamma} }{q_{m+2}^{\t+1}}\right)$$ from which Lemma \ref{l8} follows immediately by (\ref{1}).
By the properties of the Farey sequence, for the rationals $\frac{p}{q}\in I_{m}$ we have $q>q_{m}$, so the inequality: $$\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}>\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}$$ holds. It remains to show that: $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\frac{p_{m+2}}{q_{m+2}}+\frac{ {\gamma} }{q_{m+2}^{\t+1}}.$$ This inequality holds for $q\geq \frac{q_{m+2}}{2}$ and $m$ big enough. In fact, in that case: $$\frac{p_{m+2}}{q_{m+2}}-\frac{p}{q}\geq \frac{1}{q q_{m+2}}> \frac{ {\gamma} }{q^{\t+1}}-\frac{ {\gamma} }{q_{m+2}^{\t+1}},$$ which is true for $m$ big enough (since $\t>1$). So, we can assume that $q_{m}<q<\frac{q_{m+2}}{2}$. Since we have assumed that $\frac{p}{q}$ is not a convergent, by Legendre's Theorem (see \cite{8}) we have: $$\a-\frac{p}{q}>\frac{1}{2 q^{2}},$$ while, since $\frac{p_{m+2}}{q_{m+2}}$ is a convergent, we have: $$\a-\frac{p_{m+2}}{q_{m+2}}<\frac{1}{q_{m+2}^{2}}.$$ So, putting together the two inequalities, if $q<\frac{q_{m+2}}{2}$: $$\frac{p_{m+2}}{q_{m+2}}-\frac{p}{q}=\frac{p_{m+2}}{q_{m+2}}-\a+\a-\frac{p}{q}> \frac{1}{2q^{2}}-\frac{1}{q_{m+2}^{2}}>-\frac{ {\gamma} }{q_{m+2}^{\t+1}}+\frac{ {\gamma} }{q^{\t+1}}\iff$$ $$\frac{1}{2q^{2}}-\frac{ {\gamma} }{q^{\t+1}}>\frac{1}{q_{m+2}^{2}}-\frac{ {\gamma} }{q_{m+2}^{\t+1}},$$ which is true for $m$ big enough (it follows from $q_{m}<q<\frac{q_{m+2}}{2}$). So Lemma \ref{l8} is proved. \qed \\ {\smallskip\noindent} We know from the Farey sequence that for $\frac{p}{q}\in I_n$, $q>q_{n+1}$. So, there are finitely many $\frac{p}{q}\in I_n$ with $q<q_{n+2}$. In the next Lemma we want to control the distance between these numbers and $\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}$. \lem{} \label{l9} Let $ {\gamma} >0$, $\t>3$, $\a\in D_{\gamma,\tau}$, and let $\frac{p_{n}}{q_{n}}$ be the convergents to $\a$.
There exists $N_{1}\in\Bbb{N}$ such that, for $n>N_{1}$: $$\frac{p}{q}\in I_{n}, q<q_{n+2}\ \Longrightarrow\ \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}} -\frac{2 {\gamma} }{q_{n+2}^{\t-1}}.$$ \elem{} \par\medskip\noindent{\bf Proof\ } Let $n>N$, $\frac{p}{q}\in I_{n}$; by definition of convergents and the fact that $\frac{p_{n}}{q_{n}}<\frac{p}{q}<\frac{p_{n+2}}{q_{n+2}}$, we get that $\frac{p}{q}$ is not a convergent. If $q\geq\frac{q_{n+2}}{2}$ we get: $$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}\geq\frac{1}{q q_{n+2}}\geq \frac{1}{q_{n+2}^{2}}>\frac{ {\gamma} 2^{\t+1}}{q_{n+2}^{\t+1}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}\geq \frac{ {\gamma} }{q^{\t+1}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}$$ for $n$ big enough (since $\t>3$). So, for $n$ big enough, the inequality remains to be proved for $q<\frac{q_{n+2}}{2}$. In that case: $$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}=\frac{p_{n+2}}{q_{n+2}}-\a+\a-\frac{p}{q}>\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}>\frac{ {\gamma} }{q^{\t+1}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}\iff$$ $$\frac{1}{2q^{2}}-\frac{ {\gamma} }{q^{\t+1}}>\frac{1}{q_{n+2}^{2}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}.$$ From the fact that $$G(x):=\frac{1}{2x^{2}}-\frac{ {\gamma} }{x^{\t+1}}$$ is a decreasing function for $x$ big enough, it is enough to show the inequality for $q=[\frac{q_{n+2}}{2}]$. In this case we get: $$\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}\geq\frac{2}{q_{n+2}^{2}}-\frac{1}{q_{n+2}^{2}}=\frac{1}{q_{n+2}^{2}}>\frac{ {\gamma} }{q^{\t+1}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}$$ for $n$ big enough (since $\t>3$), so $\exists N_{1}\in\Bbb{N}$ such that the inequality holds for all even $n>N_{1}$.
\qed \\ {\smallskip\noindent} \lem{} \label{l10} Let $\t>3$, $ {\gamma} >0$, $\a=[a_{1},a_{2},...]\in D_{\gamma,\tau}$, and let $\frac{p_{n}}{q_{n}}$ be the convergents to $\a$. Then $\exists N\in\Bbb{N}$ such that for all $n>N$ even: $$ {\mu} \left(\bigcup_{\frac{p}{q}\in I_{n}, q\geq q_{n+2}}\left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}, \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\right)<\frac{2 {\gamma} }{q_{n+2}^{\t-1}}$$ \elem{} \par\medskip\noindent{\bf Proof\ } $$ {\mu} \left(\bigcup_{\frac{p}{q}\in I_{n}, q\geq q_{n+2}}\left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}, \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\right)$$ $$<\sum_{q\geq q_{n+2}}\sum_{q\frac{p_{n}}{q_{n}}<p<q\frac{p_{n+2}}{q_{n+2}}}\frac{2 {\gamma} }{q^{\t+1}}<2 {\gamma} \left(\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\right)\sum_{q\geq q_{n+2}}\su{q^{\t}}$$ $$<2 {\gamma} C\left(\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\right)\su{q_{n+2}^{\t-1}}=o\left(\frac{2 {\gamma} }{q_{n+2}^{\t-1}}\right)$$ for some constant $C>0$.\qed \\ {\smallskip\noindent} \lem{} \label{l11} Let $\t>1, {\gamma} >0$, $\a=[a_{1},a_{2},...]\in D_{\gamma,\tau}$, and let $\frac{p_{n}}{q_{n}}$ be the convergents to $\a$.
Then: \beq{*} \quad \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\iff \end{equation} \beq{} a_{n+2}>\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}} \quad \end{equation} \elem{} \par\medskip\noindent{\bf Proof\ } (\ref{*}) is true if and only if: $$\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}=\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n+1}}{q_{n+1}}+\frac{p_{n+1}}{q_{n+1}}-\frac{p_{n}}{q_{n}}=$$ $$\frac{1}{q_{n}q_{n+1}}-\frac{1}{q_{n+1}q_{n+2}}>\frac{ {\gamma} }{q_{n+2}^{\t+1}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\iff$$ $$\frac{1}{q_{n+2}q_{n+1}}<\frac{1}{q_{n}q_{n+1}}-\frac{ {\gamma} }{q_{n}^{\t+1}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\iff$$ $$\frac{1}{q_{n+2}q_{n+1}}<\frac{ {\gamma} }{q_{n}q_{n+1}}(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\iff$$ $$\su{q_{n+2}}<\frac{ {\gamma} }{q_{n}}(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-q_{n+1}\frac{ {\gamma} }{q_{n+2}^{\t+1}}\iff $$ \beq{} \left\{ \begin{array}{l} \displaystyle\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}}>\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}},\\ \ \\ \displaystyle q_{n+2}>\frac{q_{n}}{ {\gamma} }\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}} \\ \end{array}\right. \ \\ \end{equation} {} The first inequality is always true because of: $$\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}}>\frac{1}{\a_{n+2}q_{n}^{\t-1}}>\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}.$$ {\bigskip\noindent} So Lemma \ref{l11} follows from the fact that $q_{n+2}=a_{n+2}q_{n+1}+q_{n}$.\qed \lem{} \label{l12} Let $\t>1$, for almost all $ {\gamma} \in(0,\su{2})$ (for $ {\gamma} \geq\su{2}$ $D_{\gamma,\tau}=\emptyset$), given $\e>0$ there exists $C=C(\e, {\gamma} )>0$ such that: $$\left|\su{ {\gamma} }-\frac{p}{q^{\t}}\right|\geq \frac{C}{q^{\t+1+\e}}$$ for all $\frac{p}{q}\in\Bbb{Q}$. 
\elem{} \par\medskip\noindent{\bf Proof\ } Define $B_{C,k}:=\left\{\a:|\a-\frac{p}{q^{\t}}|\geq \frac{C}{q^{k}}\quad\forall \frac{p}{q}\in\Bbb{Q}\right\}$, so $\a\in B_{C,k}^{c}\iff$ there exists $\frac{p}{q}$ such that $\a\in \left(\frac{p}{q}-\frac{C}{q^{k}},\frac{p}{q}+\frac{C}{q^{k}}\right)$. So, given $N\in\Bbb{N}$ we get: $$ {\mu} \left(B_{C,k}^{c}\cap \left(-N,N\right)\right)<\sum_{q>0}\sum_{-N q^{\t}<p<N q^{\t}} \frac{2C}{q^{k}}<\sum_{q>0}\frac{4NC}{q^{k-\t}}$$ and for $k>\t+1$, as $C$ tends to zero, also $$ {\mu} \left(B_{C,k}^{c}\cap \left(-N,N\right)\right)$$ goes to zero. From the arbitrariness of $N$ we obtain: $$ {\mu} \left(\bigcap_{C>0} B_{C,k}^{c}\right)=0$$ for $k>\t+1$, from which Lemma \ref{l12} follows. \qed {\smallskip\noindent} \lem{} \label{l14} Let $\t>1$, $\a=[a_{1},a_{2},...]\in D_{\gamma,\tau}$, $\frac{p_{n}}{q_{n}}$ the convergents to $\a$. The inequality: \beq{**} \quad\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}-\frac{2 {\gamma} }{q_{n+2}^{\t-1}} \end{equation} is definitively verified if and only if definitively: \beq{ci} {a_{n+2}>\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{\t+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}}-\frac{q_{n}}{q_{n+1}}} \end{equation} {\smallskip\noindent} \elem{} \rem{} \label{r16} Observe that (\ref{ci}) is definitively true if: $$\limsup\frac{q_{n+1}}{q_{n}^{\t}}<\frac{1}{ {\gamma} },$$ because in that case: $$\limsup \frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}}-\frac{q_{n}}{q_{n+1}}<1.$$ Thus, if $(\ref{ci})$ is not verified for infinitely many $n$ even, then for these $n$, with $n$ big enough: $$\frac{q_{n+1}}{q_{n}^{\t}}\sim \frac{1}{ {\gamma} },$$ so $q_{n+1}\sim \frac{q_{n}^{\t}}{ {\gamma} }$. 
\erem{} \par\medskip\noindent{\bf Proof\ } In the same way as in Lemma \ref{l11}, (\ref{**}) is verified if and only if: \beq{} \left\{ \begin{array}{l} \displaystyle \frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}}>\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}},\\ \ \\ \displaystyle q_{n+2}>\frac{q_{n}}{ {\gamma} }\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}} \\ \end{array}\right. \ \\ \end{equation} {} Because of $\a\in D_{\gamma,\tau}$, the first of the two conditions is definitively verified; in fact, for $n$ big enough: $$\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}< \frac{1}{\a_{n+2}q_{n}^{\t-1}}<\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}}.$$ So, from the fact that $q_{n+2}=a_{n+2}q_{n+1}+q_{n}$ we are done. \qed {\smallskip\noindent} \lem{} \label{l15} Let $\t>\frac{3+\sqrt{17}}{2}$. For almost all $ {\gamma} \in(0,\frac{1}{2})$, if $\a=[a_{0},a_{1},...]\in D_{\gamma,\tau}$, for $n$ even big enough: (\ref{*}) is true if and only if (\ref{**}) is true. \elem{} \par\medskip\noindent{\bf Proof\ } If (\ref{**}) is true, then trivially (\ref{*}) is true. So we have to show that for almost all $ {\gamma} \in(0,\su{2})$ and for all $\a\in D_{\gamma,\tau}$ (with $\t>\frac{3+\sqrt{17}}{2}$) the converse holds. So, suppose by contradiction that there exists $A\subset \left(C_{1}, C_{2}\right)$, with $0<C_{1}<C_{2}<\su{2}$, $ {\mu} (A)>0$, such that, for all $ {\gamma} \in A$ there exists $\a\in D_{\gamma,\tau}$ that satisfies (\ref{*}) but not (\ref{**}) for infinitely many $n$ even. 
By Lemma \ref{l11} and Lemma \ref{l14} it follows that for all $ {\gamma} $ in $A$ there exists $\a\inD_{\gamma,\tau}$ such that for infinitely many $n$ even: $$\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}}-\frac{q_{n}}{q_{n+1}}\geq a_{n+2}>\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}},$$ and by Remark \ref{r16} it follows that, for this $n$: $$q_{n+1}\sim \frac{q_{n}^{\t}}{ {\gamma} }.$$ So, for $n$ big enough such that (\ref{*}) holds but (\ref{**}) doesn't hold we get: $$\frac{q_{n}^{\t}}{C_{2}}<q_{n+1}<\frac{q_{n}^{\t}}{C_{1}}.$$ Moreover: $$a_{n+2}>\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}} \iff$$ $$\frac{a_{n+2}q_{n+1}}{q_{n}}+1=\frac{q_{n+2}}{q_{n}}>\su{1-\frac{ {\gamma} q_{n+1}}{q_{n}^{\t}}-\frac{ {\gamma} q_{n}q_{n+1}}{q_{n+2}^{\t+1}}}\iff$$ $$ 1-\frac{ {\gamma} q_{n+1}}{q_{n}^{\t}}-\frac{ {\gamma} q_{n}q_{n+1}}{q_{n+2}^{\t+1}}>\frac{q_{n}}{q_{n+2}}\iff$$ $$ {\gamma} <\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{\t}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{\t+1}}}$$ In a similar way: $$\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}-\frac{2q_{n}q_{n+1}}{ q_{n+2}^{\t-1}}}-\frac{q_{n}}{q_{n+1}}\geq a_{n+2}\iff$$ $$ {\gamma} \geq\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{\t}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{\t+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}}.$$ Thus: $$\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{\t}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{\t+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{\t-1}}}\leq {\gamma} <\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{\t}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{\t+1}}}$$ for infinitely many $n$ even, so for all $ {\gamma} \in A$ there exist 
infinitely many $q\in\Bbb{N}$ such that: $$\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}+\frac{2q p}{ (N p+q)^{\t-1}}}\leq {\gamma} <\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}}$$ for some $N\in\Bbb{N}$ and some $\frac{q^{\t}}{C_{2}}<p<\frac{q^{\t}}{C_{1}}$. So for all $M\in \Bbb{N}$: $$A\subset \bigcup_{q>M}\bigcup_{\frac{q^{\t}}{C_{2}}<p<\frac{q^{\t}}{C_{1}}}\bigcup_{N>0}\left(\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}+\frac{2q p}{(N p+q)^{\t-1}}},\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}}\right),$$ moreover: $$\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}}-\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}+\frac{2q p}{(N p+q)^{\t-1}}}<$$ $$\frac{2q p}{ (N p+q)^{\t-1}}\left(\su{\frac{p}{q^{\t}}+\frac{q p}{(Np+q)^{\t+1}}}\right)^{2}<\frac{2q C_{2}^{2}}{N^{\t-1}p^{\t-2}}$$ so we obtain: $$ {\mu} (A)\leq \sum_{q>M}\sum_{\frac{q^{\t}}{C_{2}}<p<\frac{q^{\t}}{C_{1}}}\sum_{N>0}\frac{2q C_{2}^{2}}{N^{\t-1}p^{\t-2}}<$$ $$\b\sum_{q>M}\frac{q^{\t+1}}{q^{\t^{2}-2\t}}=\b\sum_{q>M}\su{q^{\t^{2}-3\t-1}}$$ for some constant $\b>0$. From the hypothesis ($\t>\frac{3+\sqrt{17}}{2}$) we have that the series converges, so, letting $M$ go to infinity, we get $ {\mu} (A)=0$, which contradicts the hypothesis $ {\mu} (A)>0$. Thus, for almost all $ {\gamma} \in(C_{1},C_{2})$ we have that: if (\ref{*}) holds, then (\ref{**}) holds, and from the arbitrariness of $C_{1}, C_{2}$ Lemma \ref{l15} follows.\qed \\ {\smallskip\noindent} \pro{} \label{p3} Let $\t>\frac{3+\sqrt{17}}{2}$. 
For almost every $0< {\gamma} <\su{2}$: if $\a\inD_{\gamma,\tau}$, $\frac{p_{n}}{q_{n}}$ are the convergents to $\a$, $\a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}}$, and definitively: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},$$ then $\a$ is an accumulation point of $D_{\gamma,\tau}$ and in particular, for $n$ even big enough: $$ {\mu} \left(D_{\gamma,\tau}\cap \left(\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}\right)\right)>0$$ \epro{} \par\medskip\noindent{\bf Proof\ } By Lemma \ref{l8} it follows that $\exists N_{1}\in\Bbb{N}$ such that for $n>N_{1}$ even: $$\frac{p}{q}\not\in I_{n}\ \Longrightarrow\ \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}},\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}\not\in A_{n},$$ and by Lemma \ref{l15} for almost all $ {\gamma} \in(0,\su{2})$: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\ \Longrightarrow\ \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}-\frac{2 {\gamma} }{q_{n+2}^{\t-1}},$$ therefore, up to a set of measure zero we can suppose that $ {\gamma} $ satisfies this property. 
Moreover, by Lemma \ref{l9}, for $n$ even big enough, if $\frac{p}{q}\in I_{n},$ $q<q_{n+2}$ then: $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}-\frac{2 {\gamma} }{q_{n+2}^{\t-1}}.$$ So, if we define: $$c_{n}:=\max_{\frac{p}{q}\in [\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}),q<q_{n+2}} \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}},$$ we obtain: $$c_{n}<\frac{p_{n+2}}{q_{n+2}}-\frac{2 {\gamma} }{q_{n+2}^{\t-1}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}.$$ By Lemma \ref{l8}, if $n>N_{1}$ is even and $\frac{p}{q}\not\in I_{n}$, then $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}},\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}\not\in A_{n},$$ so, if $$\frac{p}{q}<\frac{p_{n}}{q_{n}} \ \Longrightarrow\ \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\leq c_{n},$$ while for $\frac{p}{q}>\frac{p_{n+2}}{q_{n+2}}$ we get $q>q_{n+2}$, so: $$\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}>\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},$$ but from: $$\b\in D_{\gamma,\tau}^{c}\iff \exists \frac{p}{q}\in(0,1):\b\in \left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}},\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)$$ we get that for $n>N_{1}$ even, holds: $$ {\mu} \left(D_{\gamma,\tau}^{c}\cap I_{n}\right)\leq {\mu} \left(\bigcup_{\frac{p}{q}\in [\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}),q<q_{n+2}} \left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}},\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\cap I_{n} \right)$$ $$+ {\mu} \left(\bigcup_{\frac{p}{q} \in I_{n},q\geq q_{n+2}} \left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}, \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\right)+ {\mu} \left(\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},\frac{p_{n+2}}{q_{n+2}}\right).$$ So by Lemma \ref{l10}: $$ {\mu} (D_{\gamma,\tau}^{c}\cap I_{n})\leq c_{n}-\frac{p_{n}}{q_{n}}+\frac{2 {\gamma} }{q_{n+2}^{\t-1}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}< {\mu} (I_{n})=\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}} \iff$$ 
$$c_{n}<\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}-\frac{2 {\gamma} }{q_{n+2}^{\t-1}},$$ which follows from the definition of $c_{n}$. \qed So, given $\t>3$, for almost all $ {\gamma} >0$: if $\a\in D_{\gamma,\tau}$ is not an isolated point of the first type and definitively the intervals centered at the convergents have an empty intersection, then $\a$ is an accumulation point of $D_{\gamma,\tau}$. The second step is to show that if $\t>3$, $ {\gamma} >0$, $\a\in D_{\gamma,\tau}$ but $\a$ is not an isolated point of the first type and $\t>\t(\a)$, then $\a$ is an accumulation point of $D_{\gamma,\tau}$. {\smallskip\noindent} \lem{} \label{l16} Let $\t>3$. For almost all $ {\gamma} \in(0,\su{2})$: given $\a\in D_{\gamma,\tau}$, if for infinitely many $n$ even: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}>\frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},$$ then there exists $C>0$ such that for these $n$: $$a_{n+2}\leq C q_{n}^{2+\e},$$ with $\e>0$ arbitrarily small. 
\elem{} \par\medskip\noindent{\bf Proof\ } By Lemma \ref{l11} it follows that, given $\a\in D_{\gamma,\tau}$ that satisfies the hypothesis of Lemma \ref{l16}, for $n$ even big enough: $$a_{n+2}\leq\frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}},$$ so, up to a set of measure zero, by Lemma \ref{l12} we can suppose that there exist $\e>0, C>0$ such that $\su{ {\gamma} }\in B_{C,\t+1+\e}$ with $\t+1+\e<\t^{2}-1$, from which it follows that: $$ \frac{q_{n}}{ {\gamma} q_{n+1}}\su{(\frac{1}{ {\gamma} }-\frac{q_{n+1}}{q_{n}^{\t}})-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}}\leq \frac{q_{n}}{ {\gamma} q_{n+1}}\su{\frac{C}{q_{n}^{\t+1+\e}}-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}}-\frac{q_{n}}{q_{n+1}},$$ moreover, by Remark \ref{r16} it follows that $q_{n+1}\sim \frac{q_{n}^{\t}}{ {\gamma} }$, from which we obtain: $$\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}<\frac{q_{n}}{q_{n+1}^{\t}}\sim \frac{ {\gamma} ^{\t}}{q_{n}^{\t^{2}-1}},$$ so, if $n$ is big enough, by $\t+1+\e<\t^{2}-1$ we have: $$\frac{C}{q_{n}^{\t+1+\e}}-\frac{q_{n}q_{n+1}}{ q_{n+2}^{\t+1}}>\frac{C}{2q_{n}^{\t+1+\e}}.$$ So we obtain: $$a_{n+2}<\frac{q_{n}}{q_{n+1}}\frac{2 q_{n}^{\t+1+\e}}{C}\sim \frac{2 {\gamma} }{ C}q_{n}^{2+\e}<\frac{4 {\gamma} }{ C}q_{n}^{2+\e}=C' q_{n}^{2+\e}$$ definitively, from which we get Lemma \ref{l16}. \qed \\ {\smallskip\noindent} \lem{} \label{l17} Let $\t>\frac{3+\sqrt{17}}{2}$, $ {\gamma} >0$, $\a\in D_{\gamma,\tau}$. If for infinitely many $m$ even, for all $n<m$ even, the following holds: \beq{a} \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}<\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}, \end{equation} and $\a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}}$ for all $n$ even, then $\a$ is in ${\mathcal{A}}(D_{\gamma,\tau})$. 
\elem{} \par\medskip\noindent{\bf Proof\ } Let $\frac{p_{n}}{q_{n}}<\frac{p}{q}<\frac{p_{n+2}}{q_{n+2}}$ with $n$ even and $n<m-2$, for $\frac{q_{n+2}}{2}\leq q$: $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\frac{p_{n+2}}{q_{n+2}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}}$$ is definitively true, while for $q<\frac{q_{n+2}}{2}$: $$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}=\frac{p_{n+2}}{q_{n+2}}-\a+\a-\frac{p}{q}>\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}>\frac{ {\gamma} }{q^{\t+1}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}}\iff$$ $$\frac{1}{2q^{2}}-\frac{ {\gamma} }{q^{\t+1}}>\frac{1}{q_{n+2}^{2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},$$ that is true for $q$ big enough, so $\exists T\in\Bbb{N}$ such that the inequality is verified for $q\geq T$ (from the fact that $G(x):=\frac{1}{2x^{2}}-\frac{ {\gamma} }{x^{\t+1}}$ is definitively decreasing and $\t>3>1$). From the hypothesis that $\a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}}$ for all $n$ even: $$v:=\max_{\frac{p}{q}<\a,q\leq T}\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}<\a,$$ so there exists $T_{1}\in\Bbb{N}$ such that for $n>T_{1}$: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}>v.$$ By Lemma \ref{l9}, for $m$ big enough, $\frac{p}{q}\in I_{n}$, with $n<m-2$ even: $$\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\leq\max\left\{\frac{p_{n+2}}{q_{n+2}}+\frac{ {\gamma} }{q_{n+2}^{\t+1}},v\right\}\leq \frac{p_{m-2}}{q_{m-2}}+\frac{ {\gamma} }{q_{m-2}^{\t+1}}<\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}},$$ while by Lemma \ref{l9}, for $m$ big enough: $$\frac{p}{q}\in I_{m-2},q<q_{m}\ \Longrightarrow\ \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1} }<\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}},$$ so if we define: $$c_{m}:=\max\left\{\max_{\frac{p}{q}\in I_{m-2}, q<q_{m}} \left(\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right), \max_{\frac{p}{q}\leq \frac{p_{m-2}}{q_{m-2}}} \left(\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\right\},$$ for $m$ even big enough: 
$$c_{m}<\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}.$$ Moreover, by Lemma \ref{l10}, from $\t>3>2$, for $m$ even big enough: $$ {\mu} \left(\bigcup_{\frac{p}{q}\in I_{m-2}, q\geq q_{m}}\left(\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}, \frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)\right)<\frac{2 {\gamma} }{q_{m}^{\t-1}}.$$ Finally, if $\frac{p}{q}>\frac{p_{m}}{q_{m}}$, by the properties of continued fractions we obtain $q>q_{m}$, so $\frac{p}{q}-\frac{ {\gamma} }{q^{\t+1}}>\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}$. Thus: $$ {\mu} \left(D_{\gamma,\tau}^{c} \cap\left(\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}\right)\right)<c_{m}-\frac{p_{m-2}}{q_{m-2}}+\frac{2 {\gamma} }{q_{m}^{\t-1}}$$ $$<\frac{p_{m}}{q_{m}}-\frac{p_{m-2}}{q_{m-2}}-\frac{ {\gamma} }{q_{m}^{\t+1}}= {\mu} (\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}),$$ then $$D_{\gamma,\tau} \cap\left(\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}\right)\not =\emptyset,$$ and since this holds for infinitely many $m$ even, $\a$ is an accumulation point of $D_{\gamma,\tau}$. \qed \\ {\smallskip\noindent} \rem{} \label{r17} Let $\t>\frac{\sqrt{17}+3}{2}$, $ {\gamma} >0$, $\a\in D_{\gamma,\tau}$; if $\a\in {\mathcal{I}}^{2}_{ {\gamma} ,\t}$ or ${\mathcal{I}}^{3}_{ {\gamma} ,\t}$, then $\t(\a)=\t$. In fact if this doesn't hold, from $\a\not\in {\mathcal{I}}^{1}_{ {\gamma} ,\t}$ we get that for all $n$ even or for all $n$ odd: $$\left|\a-\frac{p_{n}}{q_{n}}\right|>\frac{ {\gamma} }{q_{n}^{\t+1}}.$$ Suppose for example that this property holds for all $n$ even. If on the contrary $\t(\a)<\t$, by Remark \ref{r16}, the hypotheses of Proposition \ref{p3} are satisfied, so $\a\in {\mathcal{A}}(D_{\gamma,\tau})$, a contradiction. 
\erem{} {\smallskip\noindent} \cor{} \label{c4} If $\t>\frac{3+\sqrt{17}}{2}$: $$ {\mu} \left(\left\{ {\gamma} >0: {\mathcal{I}}^{2}_{ {\gamma} ,\t}\not=\emptyset\right\}\right)=0.$$ \ecor{} \par\medskip\noindent{\bf Proof\ } Observe that, if $\a\in {\mathcal{I}}^{2}_{ {\gamma} ,\t}$, then there exists $n\in\Bbb{N}$ such that $$\left|\a-\frac{p_{n}}{q_{n}}\right|=\frac{ {\gamma} }{q_{n}^{\t+1}}.$$ Suppose for example that $n$ is even, thus: $$\a=\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}.$$ Moreover, for almost all $ {\gamma} \in(0,\su{2})$: $$\t\left(\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)=\t\left(\frac{ {\gamma} }{q^{\t+1}}\right)=1.$$ Taking the union over all $\frac{p}{q}$ we obtain that for almost all $ {\gamma} \in(0,\su{2})$ and for all $\frac{p}{q}\in\Bbb{Q}$, $$\t\left(\frac{p}{q}+\frac{ {\gamma} }{q^{\t+1}}\right)=1.$$ So Corollary \ref{c4} follows by Remark \ref{r17}.\qed {\smallskip\noindent} One last step remains, from which we obtain the Theorem. \lem{} \label{l18} Let $\t>3$. For almost all $ {\gamma} >0$, if $\a\in {\mathcal{I}}(D_{\gamma,\tau})$, there exists $N\in \Bbb{N}$ such that, for all $m>N$ even there is some $n<m$ even with: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}} \geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}.$$ \elem{} \par\medskip\noindent{\bf Proof\ } By Corollary \ref{c4} and Remark (j) it follows that, up to a set of measure zero, we can suppose that ${\mathcal{I}}^{1}_{ {\gamma} ,\t}={\mathcal{I}}^{2}_{ {\gamma} ,\t}=\emptyset$; observe that if the Lemma were not true, there would exist $\a\in {\mathcal{I}}^{3}_{ {\gamma} ,\t}$ whose even convergents satisfy the hypothesis of Lemma \ref{l17}, which implies $\a\in {\mathcal{A}}(D_{\gamma,\tau})$, a contradiction. \qed \\ {\smallskip\noindent} {\bf Theorem} Let $\t>\frac{3+\sqrt{17}}{2}$. Then, for almost all $ {\gamma} >0$, $D_{\gamma,\tau}$ is a Cantor set. 
\par\medskip\noindent{\bf Proof\ } By Corollary \ref{c4} and Remark (j) it follows that, up to a set of measure zero, we can suppose that ${\mathcal{I}}^{1}_{ {\gamma} ,\t}={\mathcal{I}}^{2}_{ {\gamma} ,\t}=\emptyset$. Suppose by contradiction that the statement doesn't hold, and take $0<C_{1}<C_{2}$ such that: $$ {\mu} \left(\left\{C_{1}< {\gamma} <C_{2}: {\mathcal{I}}(D_{\gamma,\tau})\not=\emptyset\right\}\right)>0,$$ and define $A:=\{C_{1}< {\gamma} <C_{2}: {\mathcal{I}} (D_{\gamma,\tau})\not=\emptyset\}$. By Lemma \ref{l18}, for almost all $ {\gamma} >0$ there exists $\a\in {\mathcal{I}}(D_{\gamma,\tau})$ and there exists $N\in\Bbb{N}$ such that for all $m>N$ even, there is some $n<m$ even, with: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}.$$ Now we want to show that, for almost every choice of $ {\gamma} \in A$ we have: $$\limsup \frac{q_{2k+2}}{q_{2k+1}^{\t}}<\frac{1}{ {\gamma} }.$$ In fact if it doesn't hold, by Remark \ref{r16} we get that for infinitely many $m$ even: $$ q_{m}\sim \frac{q_{m-1}^{\t}}{ {\gamma} },$$ and for $m>N$ there exists $n<m$ even, with: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}.$$ By Lemma \ref{l15}, up to a set of measure zero in $A$: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}}\iff \frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}.$$ By the properties of convergents: $$\a-\frac{p_{m}}{q_{m}}<\frac{1}{q_{m}^{2}},$$ from which we get: $$\frac{1}{q_{m}^{2}}>\a-\frac{p_{n}}{q_{n}}-\frac{ {\gamma} }{q_{n}^{\t+1}}-\frac{ {\gamma} }{q_{m}^{\t+1}}.$$ Moreover: $$\a-\frac{p_{n}}{q_{n}}=\su{q_{n}(q_{n+1}+\frac{\a_{n+2}}{q_{n}})},$$ so: 
$$\frac{1}{q_{m}^{2}}>\su{q_{n}(q_{n+1}+\frac{\a_{n+2}}{q_{n}})}-\frac{ {\gamma} }{q_{n}^{\t+1}}-\frac{ {\gamma} }{q_{m}^{\t+1}}.$$ For $m$ big enough: $$\frac{1}{q_{m}^{2}}+\frac{ {\gamma} }{q_{m}^{\t+1}}<\frac{2}{q_{m}^{2}},$$ so: $$\frac{2}{q_{m}^{2}}>\su{q_{n}(q_{n+1}+\frac{\a_{n+2}}{q_{n}})}-\frac{ {\gamma} }{q_{n}^{\t+1}}\iff$$ $$ {\gamma} >\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}-\frac{2q_{n}^{\t+1}}{q_{m}^{2}},$$ moreover: $$ {\gamma} \leq\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}.$$ So we obtain: $$\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}-\frac{2q_{n}^{\t+1}}{q_{m}^{2}}< {\gamma} \leq\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}.$$ From $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{m}}{q_{m}}-\frac{ {\gamma} }{q_{m}^{\t+1}}-\frac{2 {\gamma} }{q_{m}^{\t-1}},$$ we get: $$\frac{p_{n}}{q_{n}}+\frac{ {\gamma} }{q_{n}^{\t+1}}\geq \frac{p_{n+2}}{q_{n+2}}-\frac{ {\gamma} }{q_{n+2}^{\t+1}},$$ moreover, from $\a-\frac{p_{n}}{q_{n}}>\frac{ {\gamma} }{q_{n}^{\t+1}}$ for all $n$ even, as $m$ increases, $n$ increases as well, and by the last inequality and Remark \ref{r16} we get that $q_{n+1}\sim \frac{q_{n}^{\t}}{ {\gamma} }$. So $$q_{m}\sim \frac{q_{m-1}^{\t}}{ {\gamma} }\geq \frac{q_{n+1}^{\t}}{ {\gamma} }\sim \frac{q_{n}^{\t^{2}}}{ {\gamma} ^{\t}}\geq\frac{q_{n}^{\t^{2}}}{C_{2}^{\t}}.$$ So we obtain: $$\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}-\frac{C}{q_{n}^{2\t^{2}-\t-1}}< {\gamma} \leq\frac{q_{n}^{\t}}{q_{n+1}+\frac{\a_{n+2}}{q_{n}}}$$ with a constant $C>0$. 
By Lemma \ref{l16}, up to a set of measure zero, we can suppose that there exists $\e>0$ arbitrarily small such that, for $n$ big enough: $$a_{n+2}<q_{n}^{2+\e}.$$ So, up to a set of measure zero, we can suppose that for all $ {\gamma} \in A$, there exist infinitely many $q>0$, $\frac{q^{\t}}{2 C_{2}}<p<\frac{2q^{\t}}{C_{1}}$, $N<q^{2+\e}$ such that: $$\frac{q^{\t}}{p+\frac{N}{q}}-\frac{C}{q^{2\t^{2}-\t-1}}< {\gamma} \leq\frac{q^{\t}}{p+\frac{N}{q}}.$$ So, for all $M\in\Bbb{N}$: $$A\subset \bigcup_{q>M}\bigcup_{\frac{q^{\t}}{2C_{2}}<p<\frac{2q^{\t}}{C_{1}}}\bigcup_{N<q^{2+\e}} \left(\frac{q^{\t}}{p+\frac{N}{q}}-\frac{C}{q^{2\t^{2}-\t-1}}, \frac{q^{\t}}{p+\frac{N}{q}}\right).$$ Thus: $$ {\mu} (A)<\sum_{q>M}\sum_{\frac{q^{\t}}{2C_{2}}<p<\frac{2q^{\t}}{C_{1}}}\sum_{N<q^{2+\e}} \frac{C}{q^{2\t^{2}-\t-1}}$$ $$<\b \sum_{q>M}\su{q^{2\t^{2}-2\t-3-\e}}$$ with some constant $\b>0$. Because of $\t>\frac{3+\sqrt{17}}{2}$, for $\e$ small enough the series converges, so, letting $M$ tend to infinity, we obtain $ {\mu} (A)=0$, contradiction. So we have proved that: $$\limsup \frac{q_{2k+2}}{q_{2k+1}^{\t}}<\frac{1}{ {\gamma} }.$$ But, by Remark \ref{r16} and Proposition \ref{p3} (used with $n$ odd) we have that $\a\in {\mathcal{A}}(D_{\gamma,\tau})$, contradiction. So $ {\mu} (A)=0$.\qed {\smallskip\noindent} The estimate $\t>\frac{3+\sqrt{17}}{2}$ can be improved by putting a better inequality in Lemma 5. Probably the Proposition holds also for $\t>3$. \section{Questions} \begin{itemize} \item By \cite{17} we know that, for some choice of $ {\gamma} ,\t$, ${\mathcal{I}}^{1}_{ {\gamma} ,\t}\not=\emptyset$. What about ${\mathcal{I}}^{2}_{ {\gamma} ,\t},{\mathcal{I}}^{3}_{ {\gamma} ,\t}$? \item What is the best $\t>1$ such that the result holds? \item Is it true that, for all $\t\geq1$ there exists $ {\gamma} _\t\in(0,\su{2})$ such that $D_{\gamma,\tau}$ is a Cantor set for almost all $ {\gamma} \in(0, {\gamma} _\t)$? 
\end{itemize} \subsubsection*{Acknowledgement} I am very grateful to Prof. Luigi Chierchia for his suggestions and remarks, for his special support, and for encouraging me to complete this work.
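As a sanity check, the inequality (*) between consecutive even convergents can be tested numerically for specific continued fractions. The sketch below is purely illustrative and not part of the proofs: the helper names and the sample values of $\a$, $ {\gamma} $, $\t$ are arbitrary choices, and it only uses the standard recurrences $p_{n+1}=a_{n+1}p_{n}+p_{n-1}$, $q_{n+1}=a_{n+1}q_{n}+q_{n-1}$ together with exact rational arithmetic.

```python
from fractions import Fraction

def convergents(cf_terms):
    """Convergents p_n/q_n of the continued fraction [a0; a1, a2, ...]."""
    p_prev, p = 1, cf_terms[0]
    q_prev, q = 0, 1
    out = [(p, q)]
    for a in cf_terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

def star_holds(pn, qn, pm, qm, gamma, tau):
    """Check p_n/q_n + gamma/q_n^(tau+1) < p_m/q_m - gamma/q_m^(tau+1) exactly."""
    g = Fraction(gamma)
    lhs = Fraction(pn, qn) + g / qn ** (tau + 1)
    rhs = Fraction(pm, qm) - g / qm ** (tau + 1)
    return lhs < rhs

# Sample: alpha = [1; 2, 2, 2, ...] (i.e. sqrt(2)), gamma = 1/4, tau = 4.
cv = convergents([1] + [2] * 10)
for n in range(0, len(cv) - 2, 2):  # pairs of consecutive even convergents
    (pn, qn), (pm, qm) = cv[n], cv[n + 2]
    print(n, star_holds(pn, qn, pm, qm, Fraction(1, 4), 4))
```

Other choices of $ {\gamma} \in(0,\su{2})$ and $\t$ can be probed the same way; using \texttt{Fraction} avoids floating-point errors that would otherwise dominate the tiny terms $ {\gamma} /q_{n}^{\t+1}$.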
\section{Introduction} \label{intro} \blfootnote{ % % % % % \hspace{-0.65cm} This work is licensed under a Creative Commons Attribution 4.0 International License. License details: \url{http://creativecommons.org/licenses/by/4.0/}. } \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{graph_example.png} \caption{Example of a single question and ground-truth explanation facts in WorldTree V2 dataset.} \label{fig:graph_example} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{TextGraphs_Architecture.pdf} \caption{Proposed LIT architecture} \label{fig:architecture} \end{minipage} \end{figure} Complex question answering often requires reasoning over many evidence documents, which is known as multi-hop inference. Existing datasets such as Wikihop \cite{welbl-etal-2018-constructing}, OpenBookQA \cite{OpenBookQA2018} and QASC \cite{khot2020qasc} are limited due to artificial questions and short aggregation, requiring fewer than 3 facts. In comparison, TextGraphs \cite{textgraphs} uses WorldTree V2 \cite{xie-etal-2020-worldtree}, which is the largest available dataset that requires combining an average of 6 and up to 16 facts in order to generate an explanation for complex science questions. The dataset contains 5k questions that require knowledge in core science as well as common sense. Figure \ref{fig:graph_example} shows an example question from the WorldTree dataset. The evaluation for this dataset is framed as a ranking objective over a large set of 9k science facts, and models are scored based on the MAP metric over the predicted rank ordering. Multi-hop inference encounters significant noise from ``distraction'' documents in the process, a challenge known as semantic drift \cite{fried-etal-2015-higher}. 
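Since systems are scored by MAP over the predicted rank ordering, it helps to make the metric concrete. The following is a minimal sketch of one standard convention (it is not the official shared-task evaluator): each question's Average Precision is the mean of precision-at-$k$ taken over the ranks $k$ at which gold facts appear, divided by the total number of gold facts, and MAP averages this over questions.

```python
def average_precision(ranked_ids, gold_ids):
    """AP of one ranking: mean precision@k over ranks k that hold a gold fact."""
    gold = set(gold_ids)
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in gold:
            hits += 1
            precisions.append(hits / k)
    # Dividing by |gold| penalizes gold facts missing from the ranking.
    return sum(precisions) / len(gold) if gold else 0.0

def mean_average_precision(runs):
    """MAP over (ranked_ids, gold_ids) pairs, one pair per question."""
    return sum(average_precision(r, g) for r, g in runs) / len(runs)

# Toy example: two gold facts retrieved at ranks 1 and 3.
ap = average_precision(["f1", "f9", "f2"], ["f1", "f2"])
print(ap)  # (1/1 + 2/3) / 2
```

The official evaluator may differ in details such as tie-breaking, but this convention captures why ranking all gold facts near the top matters more than merely retrieving them.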
Compared to WorldTree V1 \cite{JANSEN18.81}, WorldTree V2 has more examples but is more challenging as the larger pool of science facts presents a greater risk of semantic drift. \\ \\ Neural information retrieval models such as DPR \cite{karpukhin2020dense}, RAG \cite{lewis2020retrieval}, and ColBERT \cite{khattab2020colbert} that assume query-document independence use a language model to generate sentence representations for the query and document separately. The advantage of this late-interaction approach is efficient inference, as the sentence representations can be computed beforehand, and optimized lookup methods such as FAISS \cite{JDH17} exist for this purpose. However, the late interaction compromises on the deeper semantic understanding possible with language models. Early-interaction approaches such as TFR-BERT \cite{han2020learning} instead concatenate the query and document before generating a unified sentence representation. This approach is more computationally expensive but is attractive for re-ranking over a limited number of documents. However, the previous approaches consider each query-document pair in isolation. This forgoes any cross-document interaction, which could leverage additional knowledge sources or benefit the ranking objective. Other works \cite{pasumarthi2019selfattentive,pobrotyn2020context,sun2020modeling} facilitate cross-document interactions through self-attention mechanisms. However, the cross-document interaction is only applied after the feature extraction step and cannot leverage the language understanding potential in earlier language model layers. \\ \\ The most straightforward loss for the document ranking objective is Binary Crossentropy, where each document is ranked according to the binary classification probability of being within the gold explanation set. 
However, there has been recent progress in differentiable losses that optimize directly for the ranking objective \cite{wang2018lambdaloss,revaud2019learning,engilberge2019sodeep}. In this work, we also compare the benefits of each loss for multi-hop ranking. \\ \\ The main contributions of this work are: \begin{enumerate} \item We show that conventional information retrieval-based methods are still a strong baseline and propose I-BM25, an iterative retrieval method that improves inference speed and recall by emulating multi-hop retrieval. \item We propose a hierarchical LSTM-interleaved transformer (LIT) architecture that maximizes early cross-document interactions for improved multi-hop re-ranking. \item We provide empirical comparisons of training with different loss functions and show that Binary Crossentropy loss is simple yet may outperform differentiable ranking losses. \end{enumerate} \section{Models} Three different system architectures are described here, with overall schemes illustrated in Figure \ref{fig:systems} for comparison. \subsection{Iterative BM25 Retrieval} Chia et al \shortcite{chia-etal-2019-red} showed that conventional information retrieval methods can be a strong baseline when modified to suit the multi-hop inference objective. However, this method is limited due to computationally expensive inference and sensitivity to noise and semantic drift. We propose an iterative retrieval method `I-BM25' that performs inference in a fraction of the time and reduces semantic drift, resulting in an even stronger baseline retrieval method. For preprocessing, we use spaCy \cite{spacy2} for tokenization, lemmatization and stopword removal. Compared to Chia et al \shortcite{chia-etal-2019-red} which processes each new candidate one at a time, I-BM25 processes $2^n$ candidates in the $n$-th iteration. The algorithm is as follows: \begin{enumerate} \item Sparse document vectors are pre-computed for all questions and explanation candidates. 
\item For each question, the closest $n$ explanation candidates by cosine proximity are selected and their vectors are aggregated by a $max$ operation. The aggregated vector is down-scaled and used to update the query vector through a $max$ operation. \item The previous step is repeated for increasing values of $n$ until there are no candidate explanations remaining. \end{enumerate} \subsection{LSTM-After Transformer for Re-Ranking} BERT is a pre-trained language model that is widely adapted and fine-tuned for many downstream NLP tasks. Due to computational constraints, we use DistilBERT \cite{sanh2020distilbert} which has 40\% fewer parameters and comparable performance. In sequence-level tasks such as text classification, a [CLS] token is a special token inserted at the front of the sequence. The latent representation of the token is passed to a feed-forward network for prediction. We append an LSTM \cite{HochSchm97} module with 2 layers that operate on the [CLS] vectors of the last layer of BERT (similar in principle to McCann et al \shortcite{mccann2018natural}). This hierarchical structure allows the transformer to perform cross-document reasoning and knowledge reference. The LSTM layers enable the model to be rank-aware when used in the re-ranking setting. For re-ranking, the top 128 predictions from I-BM25 are passed to the LSTM-After Transformer which performs binary classification for each document. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{TextGraphs_Architecture_-_Across_Nodes_-_sidebyside.pdf} \caption{Overview of 3 architectures} \label{fig:systems} \end{figure} \subsection{LSTM-Interleaved Transformer for Re-Ranking} TextGraphs is a challenging task which requires complex multi-hop reasoning, but information retrieval methods are surprisingly strong baselines. 
To enhance cross-document interaction and leverage language representations in earlier transformer layers, we interleave adapters \cite{houlsby2019parameter} into the architecture which are recurrent instead of merely feed-forward. The LSTM-adapter modules in Figure \ref{fig:architecture} operate on the latent representation at the [CLS] position of each document \textit{at each layer} of the transformer. After each transformer layer, the [CLS] latent representations for each input document are first down-projected, passed to the LSTM layers and finally up-projected and fed into the next transformer layer. Compared to \cite{houlsby2019parameter}, the LIT architecture is fully trainable and makes the transformer architecture more expressive by enabling cross-document reasoning which was previously not possible. Apart from LSTM, we also tested GCN \cite{gcn} and Self-Attention \cite{selfattention} layers but had limited success in achieving competitive performance from them. \section{Experiments} \begin{table}[h!] \centering \begin{tabular}{|l | r r |} \hline Model & Dev MAP & Test MAP \\ \hline BM25 & 0.4615 & \\ Iterative BM25 \cite{chia-etal-2019-red} & 0.4704 & \\ I-BM25 & 0.4861 & 0.4745 \\ I-BM25 + LSTM + Transformer & 0.5470 & 0.5294 \\ I-BM25 + LIT & 0.5680 & 0.5607 \\ \hline \end{tabular} \caption{Main score comparison on WorldTree V2 dataset} \label{table:1} \end{table} Table \ref{table:1} shows that I-BM25 is a strong information retrieval method that can be a drop-in replacement for previous information retrieval methods. The results also show the advantage of the LIT architecture in interleaving LSTM layers between transformer layers, rather than after the last transformer layer. \begin{table}[h!] 
\centering \begin{tabular}{|l | r |} \hline Loss Function & Dev MAP \\ \hline LambdaLoss & 0.4970 \\ APLoss & 0.5187 \\ Binary Crossentropy & 0.5680 \\ \hline \end{tabular} \caption{Loss function comparison on WorldTree V2 dataset} \label{table:2} \end{table} The results of optimization using 3 different loss objectives are shown in Table \ref{table:2}. Surprisingly, the direct ranking-loss oriented objectives were less effective in improving the final evaluation MAP score, which is potentially due to the bucketisation approximation used in the APLoss calculations not being appropriately pre-scaled in our experiments. In this case, the training may require different hyper-parameters to converge optimally. Another potential explanation is that these ranking losses may be sub-optimal (when used as a training objective) when many documents have very similar underlying scores, which is the case here. \subsection{Notes} Further to our experience last year, we included preprocessing steps to isolate the branching `combo' statements (which essentially contain OR clauses between different noun phrases, for instance). This step remains in our codebase, but we did not exploit it fully, since a full treatment would require the isolation of which `combo branch' is taken by each gold statement in the training set. \section{Discussion} Other architectures that we explored included graph neural network (GNN) methods; however, we had insufficient time to tune these for the multi-hop explanation task herein. Surprisingly, our simple LSTM methods (which can be viewed as a linear graph that performs message-passing along the list of results ordered by the I-BM25 method) already provided a competitive method. We estimate that next year's competition will \textit{require} the use of graph-based methods, due to their greater expressive power.
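To make the I-BM25 retrieval loop from the Models section concrete, the following is a minimal NumPy sketch of its query-expansion idea: in iteration $n$, select up to $2^n$ candidates, max-aggregate their vectors, and fold the down-scaled aggregate back into the query via an element-wise max. The cosine-similarity scoring, the value of the down-scaling factor, and ranking candidates by selection order are our assumptions for illustration, not details taken from the released system.

```python
import numpy as np

def i_bm25(query_vec, cand_vecs, downscale=0.5):
    """Sketch of iterative max-based query expansion (I-BM25 style).

    In iteration n, the 2**n closest remaining candidates (by cosine
    similarity) are selected; their vectors are aggregated with an
    element-wise max, down-scaled, and merged into the query via max.
    Returns candidate indices in the order they were selected.
    """
    remaining = list(range(len(cand_vecs)))
    ranked = []
    q = query_vec.astype(float)
    n = 0
    while remaining:
        k = min(2 ** n, len(remaining))
        sims = [
            q @ cand_vecs[i]
            / (np.linalg.norm(q) * np.linalg.norm(cand_vecs[i]) + 1e-9)
            for i in remaining
        ]
        # Pick the k most similar remaining candidates.
        top = [remaining[j] for j in np.argsort(sims)[::-1][:k]]
        for i in top:
            remaining.remove(i)
        ranked.extend(top)
        # Max-aggregate the selected vectors, down-scale, update query.
        agg = np.max([cand_vecs[i] for i in top], axis=0)
        q = np.maximum(q, downscale * agg)
        n += 1
    return ranked
```

In the full system, the selected order would then be passed to the re-ranker; here it simply serves to show how the query vector absorbs evidence from already-retrieved explanations, emulating multi-hop retrieval.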
\section{Conclusion} The LIT architecture is a simple yet powerful adaptation of the Transformer architecture to learn better cross-document interactions for multi-hop ranking. The structure can be easily integrated with any transformer language model to enable cross-referencing of knowledge statements and improved ranking performance. For example, LIT can be a drop-in encoder for other multi-hop question answering datasets such as HotPotQA \cite{hotpotqa}. When applied to the challenging WorldTree V2 dataset, LIT achieves competitive performance with current state-of-the-art models despite a smaller footprint. We envision that this architecture can be beneficial to many NLP tasks which require multi-hop reasoning over documents. \bibliographystyle{coling}
\section{Introduction}\label{Section:intro} For a profinite group $G$, let $\operatorname{Im}(G)$ be the set of isomorphism classes of finite quotients of $G$. We say that $G$ has the embedding property if for all $A,B\in \operatorname{Im}(G)$ and all epimorphisms $\Pi:A\rightarrow B$ and $\varphi:G\rightarrow B$, there is an epimorphism $\psi:G\rightarrow A$ such that $\Pi\circ\psi=\varphi$. In field arithmetic and the model theory of fields, the embedding property for profinite groups appears in surprising ways. Let $k^{ab}$ and $k^{sol}$ be the maximal abelian extension and the maximal solvable extension of a number field $k$, respectively. In \cite{Iwa}, Iwasawa showed that the Galois group $G(k^{sol}/k^{ab})$ has the embedding property. A Frobenius field is a PAC field whose absolute Galois group has the embedding property. For Frobenius fields containing a common subfield $K$ whose absolute Galois groups $G$ have the same $\operatorname{Im}(G)$, Fried, Haran, and Jarden in \cite{FHJ} developed the theory of Galois stratification of $K$-constructible sets and proved elimination of quantifiers for Galois formulas through Galois stratification. In \cite{HL}, Haran and Lubotzky gave a primitive recursive procedure to construct the universal embedding cover of a given finite group. Combining this with the elimination of quantifiers for Galois formulas and the universal Frattini cover, they showed that the theory of perfect Frobenius fields is primitive recursive, and the theory of all Frobenius fields is decidable. In \cite{C1}, Chatzidakis showed that the complete system of a profinite group having the embedding property is $\omega$-stable. Using this together with Chatzidakis' independence theorem in \cite[Theorem 3.1]{C2}, Ramsey in \cite[Theorem 3.9.31]{Ra} showed that the theory of a Frobenius field is NSOP$_1$. In this article, we study the embedding property in the category of sorted profinite groups.
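To fix ideas, here is a standard illustration of the embedding property (a well-known example, not taken from this article): the procyclic group $\widehat{\mathbb{Z}}$ has the embedding property.

```latex
% Standard toy example (not from this article): \widehat{\mathbb{Z}}
% has the embedding property.
\begin{example}
Every finite quotient of $\widehat{\mathbb{Z}}$ is cyclic, so consider
epimorphisms $\Pi:\mathbb{Z}/n\mathbb{Z}\rightarrow\mathbb{Z}/m\mathbb{Z}$
(hence $m\mid n$) and
$\varphi:\widehat{\mathbb{Z}}\rightarrow\mathbb{Z}/m\mathbb{Z}$.
Write $u:=\Pi(1)$ and $v:=\varphi(1)$; surjectivity forces
$u,v\in(\mathbb{Z}/m\mathbb{Z})^{\times}$. Since the reduction map
$(\mathbb{Z}/n\mathbb{Z})^{\times}\rightarrow(\mathbb{Z}/m\mathbb{Z})^{\times}$
is surjective, choose a unit $x$ modulo $n$ with
$x\equiv u^{-1}v \pmod{m}$, and let
$\psi:\widehat{\mathbb{Z}}\rightarrow\mathbb{Z}/n\mathbb{Z}$ send the
topological generator $1$ to $x$. Then $\psi$ is an epimorphism and
$\Pi\circ\psi=\varphi$, as required.
\end{example}
```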
In \cite{HoLee}, we introduced a notion of sorted profinite groups to study the (Shelah-)Galois groups of first order structures. The Galois groups of first order structures are typical examples of sorted profinite groups (see Example \ref{ex:sorted_profinite_group}). In \cite[Proposition 5.6]{HoLee}, Hoffmann and the author developed the independence theorem for PAC structures, analogous to \cite[Theorem 3.1]{C2}, and we proved that for a PAC structure $M$, if the sorted complete system of the Galois group of $M$ is $\omega$-stable, then the theory of $M$ is NSOP$_1$. This leads us to look for a class of sorted profinite groups having $\omega$-stable sorted complete systems. The natural candidates are the sorted profinite groups with ``the embedding property''. So, we introduce a notion of the {\em sorted embedding property} (SEP) for sorted profinite groups, exactly analogous to the embedding property for profinite groups. We also need an embedding property that is first order axiomatizable. For this, we introduce the weaker notion of the finitely sorted embedding property (FSEP) (see Definition \ref{def:Iwasawa_property}). Fortunately, the two notions of embedding property for sorted profinite groups are equivalent (see Proposition \ref{prop:FSIP=SIP}). We have two main results in this article. First, we show the existence and the uniqueness of the universal sorted embedding cover of a sorted profinite group (see Theorem \ref{thm:existence_universal_SIP_cover} and Theorem \ref{thm:uniquness_SIP cover}). Second, we show that the theory of the complete system of a sorted profinite group having the sorted embedding property is $\omega$-stable (see Theorem \ref{thm:description_complete_types}(2)). We would like to give some comments on our proofs of the existence and the uniqueness of the universal sorted embedding cover. We give an alternative proof working in both cases of profinite groups and sorted profinite groups.
The main motivational ideas of our proof of the existence of the universal sorted embedding cover are the following: \begin{itemize} \item An inverse limit of quasi-embedding covers of a given profinite group $G$ is again a quasi-embedding cover of $G$ in the proof of \cite[Theorem 1.12]{HL} (see Lemma \ref{lem:inverse_limit_qsc}). \item If a profinite group $G$ does not have the embedding property, then there is a quasi-embedding cover of $G$ in \cite[Lemma 24.4.4]{FJ} (see Lemma \ref{lem:char_no_SIP}). \end{itemize} \noindent For the uniqueness of the universal SEP-cover, we follow Chatzidakis' proof scheme of the uniqueness of the universal embedding cover of a profinite group in \cite[Theorem 2.4, Theorem 2.7]{C1}, which is, to our knowledge, the only known proof of the uniqueness of the embedding cover of an arbitrary profinite group. The sorted complete systems of sorted profinite groups having SEP can be first order axiomatized (the resulting theory is denoted by $SCS_{SEP}$). When the set of sorts is countable, we prove that complete theories extending $SCS_{SEP}$ are $\omega$-stable, and by the uniqueness of prime models for complete $\omega$-stable theories, we deduce the uniqueness of the universal SEP-cover.\\ In Section \ref{Section:preliminaries}, we introduce the category of sorted profinite groups whose morphisms are always epimorphisms, and we see that the category of sorted profinite groups is closed under inverse limits and fibre products. We also recall the category of sorted complete systems, which is equivalent to the category of sorted profinite groups via a natural contravariant functor. In Section \ref{Section:universal_SIP_cover}, we prove the existence of the universal SEP-cover of a given sorted profinite group.
In Section \ref{Section:model theory of sorted profinite group with SIP}, we prove that the theory of the sorted complete system of a sorted profinite group having SEP is $\omega$-stable under the assumption that the set of sorts is countable, and we describe the forking independence there. As a byproduct of $\omega$-stability, we prove the existence of a universal SEP-cover of a sorted profinite group, which comes from the prime model of the theory of its sorted complete system. \section{Preliminaries}\label{Section:preliminaries} \subsection{Sorted profinite group} For a profinite group $G$, we write ${\mathcal N}(G)$ for the set of open normal subgroups of $G$. Let ${\mathbb N}$ be the set of positive integers. \begin{definition} Let $G$ be a profinite group and let $e$ be the identity of $G$. \begin{enumerate} \item We say that a subset ${\mathcal B}\subset {\mathcal N}(G)$ is {\em a base at $e$} if for any $N\in {\mathcal N}(G)$, there is $N'\in {\mathcal B}$ such that $N'\subset N$. \item We say that a subset $X\subset {\mathcal N}(G)$ generates {\em a base at $e$} if the set ${\mathcal B}(X):=\{N_1\cap \cdots \cap N_k:N_i\in X\}$ forms a base at $e$, equivalently, $\bigcap X=\{e\}$. Indeed, if $X$ generates a base at $e$, then $\bigcap X=\{e\}$ because $G$ is a Hausdorff space. Conversely, suppose $\bigcap X=\{e\}$. Take $N\in {\mathcal N}(G)$ arbitrary. Suppose $N_1\cap \cdots \cap N_k\not\subset N$ for any $N_1,\ldots,N_k\in X$. Then, by compactness, we have that $\bigcap X\cap (G\setminus N)\neq \emptyset$, which contradicts $\bigcap X=\{e\}$ and $e\in N$. So, for some $N_1,\ldots,N_k\in X$, $N_1\cap\cdots\cap N_k\subset N$. \end{enumerate} \end{definition} \begin{notation} Let $I$ be a set. \begin{enumerate} \item For $n\in {\mathbb N}$, let $I^n$ be the set of $n$-tuples of elements in $I$, and let $I^{<{\mathbb N}}:=\bigcup_{n\in {\mathbb N}} I^n$. \item For $J,J'\in I^{<{\mathbb N}}$, we write $J\le J'$ if $J$ is a subtuple of $J'$.
\item For $J,J'\in I^{<{\mathbb N}}$, we write $J^{\frown} J'$ for the concatenation of $J$ and $J'$. \item For $J=(j_1,\ldots,j_n)\in I^n$, $|J|:=n$ and $\Vert J\Vert:=\{j_1,\ldots,j_n\}$. \item For $J=(j_1,\ldots,j_n)\in I^n$ and a permutation $\sigma\in \operatorname{Sym}(n)$, $\sigma(J)=(j_{\sigma(1)},\ldots,j_{\sigma(n)})$. \item For $J\in I^{<{\mathbb N}}$, $\sqrt{J}:=\{J'\in I^{<{\mathbb N}}:\Vert J'\Vert\subset \Vert J\Vert\}$. \end{enumerate} \end{notation} \noindent Next, we introduce a notion of a {\em sorting data} on a profinite group. Fix a set $\mathcal J$, called a set of {\em sorts}, and fix two functions \begin{itemize} \item $J_{\subset}^*:{\mathbb N}\times \mathcal J^{<{\mathbb N}}\rightarrow \mathcal J^{<{\mathbb N}}$; and \item $J_{\cap}^*:\mathcal J^{<{\mathbb N}}\times \mathcal J^{<{\mathbb N}}\rightarrow \mathcal J^{<{\mathbb N}},(J,J')\mapsto J^{\frown}J'$. \end{itemize} \begin{definition}\label{def:sorting_data} For a profinite group $G$, we associate a non-empty subset $F(N)$ of $\mathcal J^{<{\mathbb N}}$ to each $N\in {\mathcal N}(G)$ and consider the indexed family $F:=\{F(N):N\in {\mathcal N}(G)\}$. We say that the indexed family $F$ is {\em a sorting data} of $G$ if the following hold: For $N,N_1,N_2\in {\mathcal N}(G)$, \begin{enumerate} \item $F(G)=\mathcal J^{<{\mathbb N}}$. \item $J\in F(N)\Leftrightarrow \sqrt{J}\subset F(N)$. \item For $J\in \mathcal J^n$ and $\sigma \in \operatorname{Sym}(n)$, $J\in F(N)\Leftrightarrow \sigma(J)\in F(N)$. \item Suppose $N_1\subset N_2$ and $[G:N_1]\le k$. For $J\in \mathcal J^{<{\mathbb N}}$, $$J\in F(N_1)\Rightarrow J_{\subset}^*(k,J)\in F(N_2).$$ \item For $J_1\in F(N_1)$ and $J_2\in F(N_2)$, $J_{\cap}^*(J_1,J_2)\in F(N_1\cap N_2)$.
\item (Invariance under inner automorphisms) For $N_1,N_2\in {\mathcal N}(G)$ and $g\in G$, if $g^{-1}N_1g=N_2$, that is, $\varphi_g[N_1]=N_2$ for the inner automorphism $\varphi_g$, then $$F(N_1)=F(N_2).$$ \end{enumerate} We call the pair $(G,F)$ {\em a sorted profinite group}, and we say that the sorting data $F$ comes from $\mathcal J$. For sorting data $F$, $F'$ on $G$, we write $F\subset F'$ if $F(N)\subset F'(N)$ for any $N\in {\mathcal N}(G)$. A sorting data $F$ on $G$ is called {\em full} if $F(N)=\mathcal J^{<{\mathbb N}}$ for all $N\in {\mathcal N}(G)$. \end{definition} \begin{notation}\label{notation:union_intersection_presortingdata} For each $i\in I$, let $F_i:=\{F_i(N)\subset \mathcal J^{<{\mathbb N}}:N\in {\mathcal N}(G)\}$ be a ${\mathcal N}(G)$-indexed set of subsets of $\mathcal J^{<{\mathbb N}}$. \begin{enumerate} \item Let $\bigcap_{i\in I}F_i$ be the ${\mathcal N}(G)$-indexed family given as follows: For each $N\in {\mathcal N}(G)$, $(\bigcap_{i\in I}F_i)(N):=\bigcap_{i\in I}F_i(N)$. \item Let $\bigcup_{i\in I}F_i$ be the ${\mathcal N}(G)$-indexed family given as follows: For each $N\in {\mathcal N}(G)$, $(\bigcup_{i\in I}F_i)(N):=\bigcup_{i\in I}F_i(N)$. \end{enumerate} \end{notation} \begin{remark}\label{rem:minimal_sorting_data} \begin{enumerate} \item For an indexed family $\hat F=\{\hat F(N)(\neq \emptyset):N\in {\mathcal N}(G)\}$, by Zorn's lemma, there is a unique minimal sorting data $F$ on $G$ such that for each $N\in {\mathcal N}(G)$, $F(N)$ contains $\hat F(N)$; this sorting data is said to be {\em generated by $\hat F$}. We call an indexed family $\hat F$ with $\hat F(N)\neq \emptyset$ for each $N\in {\mathcal N}(G)$ {\em a pre-sorting data on $G$}. \item Let $F_i=\{F_i(N):N\in {\mathcal N}(G)\}$ be a sorting data of a profinite group $G$ for each $i\in I$. If $\bigcap_{i\in I}F_i(N)\neq \emptyset$ for each $N\in {\mathcal N}(G)$, then both of the pre-sorting data $\bigcap_{i\in I} F_i$ and $\bigcup_{i\in I} F_i$ are sorting data on $G$.
If $I$ is finite, then $\bigcap_{i\in I}F_i(N)\neq \emptyset$ for each $N\in {\mathcal N}(G)$, so that $\bigcap_{i\in I} F_i$ is a sorting data on $G$. Indeed, suppose $I=\{i_1,\ldots,i_n\}$. Fix $N\in {\mathcal N}(G)$. For each $i_j$, choose $J_j\in F_{i_j}(N)$. Then, $J_1^{\frown}J_2^{\frown}\cdots^{\frown}J_n$ is in $F_{i_1}(N)\cap\cdots\cap F_{i_n}(N)$. \end{enumerate} \end{remark} \begin{def/rem}\label{def/rem:generating_sording_data} Let $G$ be a profinite group. \begin{enumerate} \item Let ${\mathcal B}\subset {\mathcal N}(G)$ be a base at $e$. For each $N\in {\mathcal B}$, choose $F_{{\mathcal B}}(N)(\neq \emptyset)\subset \mathcal J^{<{\mathbb N}}$, and put $F_{{\mathcal B}}:=\{F_{{\mathcal B}}(N):N\in {\mathcal B}\}$, called {\em a pre-sorting data on ${\mathcal B}$}. Then, there is a unique minimal sorting data $F$ such that for each $N\in {\mathcal B}$, $F(N)$ contains $F_{{\mathcal B}}(N)$. In this case, we say that $F$ is generated by $F_{{\mathcal B}}$. \item Let $X\subset {\mathcal N}(G)$ generate a base at $e$. For each $N\in X$, choose $F_X(N)(\neq \emptyset)\subset \mathcal J^{<{\mathbb N}}$, and put $F_{X}:=\{F_X(N):N\in X\}$, called {\em a pre-sorting data on $X$}. Then, there is a unique minimal sorting data $F$ such that for each $N\in X$, $F(N)$ contains $F_X(N)$. In this case, we say that $F$ is generated by $F_{X}$. \item We say that a sorting data $F$ is {\em finitely generated} if there are a subset $X$ of ${\mathcal N}(G)$ generating a base at $e$ and a pre-sorting data $F_{X}$ such that \begin{itemize} \item $F_X$ generates $F$; and \item $F_X(N)$ is finite for each $N\in X$. \end{itemize} In this case, we say that $(G,F)$ is {\em finitely sorted}. \end{enumerate} \end{def/rem} \begin{proof} $(1)$ Let ${\mathcal B}\subset {\mathcal N}(G)$ be a base at $e$ and let $F_{{\mathcal B}}$ be a pre-sorting data on ${\mathcal B}$.
Define a pre-sorting data $\hat F$ on $G$ given as follows: For $N\in {\mathcal N}(G)$, \begin{itemize} \item if $N\in {\mathcal B}$, put $\hat F(N):=F_{{\mathcal B}}(N)$; and \item if $N\not\in {\mathcal B}$, put $\hat F(N):=\bigcup_{N'\subset N}\bigcup_{k\ge [G:N']}J_{\subset }^*[\{k\}\times F_{{\mathcal B}}(N') ]$. \end{itemize} Let $F$ be a sorting data of $G$ generated by $\hat F$, which exists by Remark \ref{rem:minimal_sorting_data}. Then, the sorting data $F$ is also generated by $F_{{\mathcal B}}$. \medskip $(2)$ Let $X\subset {\mathcal N}(G)$ generate a base at $e$. Put ${\mathcal B}:=\{N_1\cap \cdots \cap N_k:N_i\in X\}$, which is a base at $e$. Let $F_X$ be a pre-sorting data on $X$. Define a pre-sorting data $F_{{\mathcal B}}$ on ${\mathcal B}$ given as follows: For $N\in {\mathcal B}$, put $F_{{\mathcal B}}(N):=\bigcup_{N_1,\ldots,N_k\in X, N=N_1\cap \cdots \cap N_k}\{J_1^{\frown}\cdots^{\frown}J_k:J_1\in F_{X}(N_1),\ldots, J_k\in F_X(N_k)\}$. Let $F$ be a sorting data of $G$ generated by $F_{{\mathcal B}}$, which exists by $(1)$. Then, the sorting data $F$ is also generated by $F_{X}$. \end{proof} \subsection{The category of sorted profinite groups} The category $\operatorname{PG}$ of profinite groups consists of the following: \begin{itemize} \item $\operatorname{Ob}$ : The objects of $\operatorname{PG}$ are profinite groups. \item $\operatorname{Mor}$ : Let $G_1$ and $G_2$ be profinite groups. A morphism from $G_1$ to $G_2$ is a continuous homomorphism from $G_1$ to $G_2$. \end{itemize} We fix a set $\mathcal J$. We introduce a category of sorted profinite groups whose sorting data come from $\mathcal J$.
\begin{definition}\label{def:category_sorted_profinite_group} The category $\operatorname{SPG}_{\mathcal J}$ of sorted profinite groups with sorting data from $\mathcal J$ consists of the following: \begin{itemize} \item $\operatorname{Ob}$ : The objects are sorted profinite groups whose sorting data come from $\mathcal J$; and \item $\operatorname{Mor}$ : Let $(G_1,F_1)$ and $(G_2, F_2)$ be in $\operatorname{Ob}(\operatorname{SPG}_{\mathcal J})$. A morphism $f$ from $(G_1,F_1)$ to $(G_2,F_2)$ is an {\bf epimorphism} from $G_1$ to $G_2$, that is, a surjective continuous homomorphism, satisfying that for $N\in {\mathcal N}(G_2)$, $$F_2(N)\subset F_1(f^{-1}[N]).$$ \end{itemize} An epimorphism in the category of profinite groups is called {\em sorted} if it is a morphism in the category $\operatorname{SPG}_{\mathcal J}$. If there is no confusion, we omit $\mathcal J$. \end{definition} \begin{example}\label{ex:sorted_profinite_group}\cite[Subsection 3.1]{HoLee} Fix a first order language ${\mathcal L}$ with a set of sorts $\mathcal J$. Let $T$ be a complete ${\mathcal L}$-theory eliminating imaginaries and let $\mathfrak{C}\models T$ be a monster model. For each $J=(S_1,\ldots,S_k)\in \mathcal J^{<{\mathbb N}}$, write $S_J$ for the sort of $S_1\times\cdots\times S_k$. Let $M\subset \mathfrak{C}$ be a small definably closed substructure.
Consider the Galois group of $M$, $$G(M):=G(\operatorname{acl}(M)/M)=\{\varphi\restriction_{\operatorname{acl}(M)}:\varphi\in\operatorname{Aut}(\mathfrak{C}/M)\}.$$ Then, $G(M)$ is the sorted profinite group equipped with the following natural sorting data $F$: For $N\in {\mathcal N}(G(M))$ with $[G(M):N]=n$ and for $J\in \mathcal J^{<{\mathbb N}}$, $J\in F(N)$ if and only if there is a primitive element $a$ in $(S_J(\operatorname{acl}(M)))^n$ such that $\operatorname{dcl}(M,a)=\operatorname{acl}(M)^{N}$, where $\operatorname{acl}(M)^N$ is the substructure consisting of the elements in $\operatorname{acl}(M)$ fixed pointwise by each $\sigma\in N$. Note that there is a function $J_{\subset}^*:{\mathbb N}\times \mathcal J^{<{\mathbb N}}\rightarrow \mathcal J^{<{\mathbb N}}$ defined in \cite[Remark 3.1]{HoLee} such that for $N\subset N'\in {\mathcal N}(G(M))$ with $[G(M):N]\le k$ and for $J\in \mathcal J^{<{\mathbb N}}$, if $J\in F(N)$, then $J_{\subset}^*(k,J)\in F(N')$. Now let $M'\subset \mathfrak{C}$ be a small definably closed substructure which is a regular extension of $M$, that is, $M'\cap \operatorname{acl}(M)=M$. Then, the natural restriction map $\operatorname{res}:G(M')\rightarrow G(M)$ is a sorted epimorphism, so that the restriction map induces a morphism $\operatorname{res}:(G(M'),F')\rightarrow (G(M),F)$ where $F$ and $F'$ are the natural sorting data on $G(M)$ and $G(M')$ respectively. \end{example} \begin{def/rem}\label{def/rem:pushforward_sortingdata} Let $\varphi:G_1\rightarrow G_2$ be an epimorphism and let $F_1$ be a sorting data on $G_1$. Consider the pre-sorting data $\hat F_2$ on $G_2$ given as follows: For $N_2\in {\mathcal N}(G_2)$, $\hat F_2(N_2)=F_1(\varphi^{-1}[N_2])$. Then, the pre-sorting data $\hat F_2$ is a sorting data, and we call it the {\em push-forward sorting data} of $F_1$ along $\varphi$, denoted by $\varphi_*(F_1)$.
For any sorting data $F_2'$ on $G_2$, $\varphi:(G_1,F_1)\rightarrow (G_2,F_2')$ is sorted if and only if $F_2'\subset \varphi_*(F_1)$. \end{def/rem} We first show that the category $\operatorname{SPG}_{\mathcal J}$ is closed under the inverse limit. \begin{remark}\label{rem:inverselimit_sorted_prof_gp} The category is closed under the inverse limit. \end{remark} \begin{proof} Consider an inverse system $((G_i,F_i),f_j^i:(G_i,F_i)\rightarrow (G_j,F_j))_{j\le i\in I}$ of sorted profinite groups indexed by a directed poset $(I,\le)$. Let $G$ be the inverse limit of $G_i$ in the category of profinite groups, which is a profinite group. Then, for each $i\in I$, there is an epimorphism $f_i:G\rightarrow G_i$ such that for $i\ge j$, $f_j=f_j^i\circ f_i$. We consider a pre-sorting data $\hat F$ on $G$ given as follows: Let $N\in {\mathcal N}(G)$. Put $I_N:=\{i\in I:N=f_i^{-1}[f_i[N]]\}$; equivalently, $i\in I_N$ if and only if $\operatorname{Ker} f_i\subset N$. Note that $I_N\neq \emptyset$. Put $$\hat F(N):=\bigcup_{i\in I_N} F_i(f_i[N]).$$ Note that $F_j(f_j[N])\subset F_i(f_i[N])$ for $j\le i \in I_N$. \begin{claim}\label{claim:sorting_data_inverselimit} The pre-sorting data $\hat F$ is a sorting data. \end{claim} \begin{proof} It is enough to show that $\hat F$ satisfies the conditions $(4)$ and $(5)$ in Definition \ref{def:sorting_data}. We first show that the condition $(4)$ holds for $\hat F$. Take $N_1\subset N_2\in {\mathcal N}(G)$ and $k\in {\mathbb N}$ with $[G:N_1]\le k$. Take $J\in \mathcal J^{<{\mathbb N}}$ with $J\in \hat F(N_1)$. Then, by definition, there is $i\in I$ such that $\operatorname{Ker} f_i\subset N_1$ and $J\in F_i(f_i[N_1])$. We have that $J_{\subset}^*(k,J)\in F_i(f_i[N_2])$ because $F_i$ is a sorting data of $G_i$ and $f_i[N_1]\subset f_i[N_2]\in {\mathcal N}(G_i)$. Since $\operatorname{Ker} f_i\subset N_1\subset N_2$, we have that $i\in I_{N_2}$ and $J_{\subset}^*(k,J)\in F_i(f_i[N_2])\subset \hat F(N_2)$.
Next, we show that the condition $(5)$ holds. Take $J_1\in \hat F(N_1)$ and $J_2\in \hat F(N_2)$. Take $i\in I$ such that \begin{itemize} \item $\operatorname{Ker} f_i\subset N_1\cap N_2$; and \item $J_1\in F_i(f_i[N_1])$ and $J_2\in F_i(f_i[N_2])$. \end{itemize} We have that $f_i[N_1]\cap f_i[N_2]=f_i[N_1\cap N_2]$ because $\operatorname{Ker} f_i\subset N_1\cap N_2$. Also, since $F_i$ is a sorting data of $G_i$, we have that $$J_{\cap}^*(J_1,J_2)\in F_i(f_i[N_1]\cap f_i[N_2])=F_i(f_i[N_1\cap N_2]).$$ \end{proof} \begin{claim}\label{claim:inverselimit} The sorted profinite group $(G,\hat F)$ satisfies the following universal property: Let $(G',F')$ be a sorted profinite group and let $g_i :(G',F')\rightarrow (G_i,F_i)$ be a morphism for each $i\in I$. Then, there is a morphism $g:(G',F')\rightarrow (G,\hat F)$ such that for each $i$, $g_i=f_i\circ g$. \end{claim} \begin{proof} Since $G$ is the inverse limit of $G_i$ in the category of profinite groups, there is a morphism $g:G'\rightarrow G$ such that for each $i$, $g_i=f_i\circ g$. It is enough to show that $g$ is a morphism in the category $\operatorname{SPG}_{\mathcal J}$, that is, for each $N\in {\mathcal N}(G)$, $$\hat F(N)\subset F'(g^{-1}[N]).$$ Take $J\in \hat F(N)$. By definition, there is $i\in I$ such that $\operatorname{Ker} f_i\subset N$ and $J\in F_i(f_i[N])$. Since $\operatorname{Ker} f_i\subset N$ and $g_i=f_i\circ g$, we have that $$g^{-1}[N]=g^{-1}[f_i^{-1}[f_i[N]]]=g_i^{-1}[f_i[N]].$$ Since $g_i$ is a morphism in $\operatorname{SPG}_{\mathcal J}$ and $J\in F_i(f_i[N])$, we have that $$J\in F'(g_i^{-1}[f_i[N]])(=F'(g^{-1}[N])).$$ \end{proof} \noindent By Claim \ref{claim:inverselimit}, the sorted profinite group $(G,\hat F)$ is the inverse limit in the category $\operatorname{SPG}_{\mathcal J}$. \end{proof} Next, we define a notion of the fibre product in the category $\operatorname{SPG}_{\mathcal J}$.
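Before the formal sorted definition, a toy instance of the underlying group-level construction may be helpful (a standard example, not taken from this article).

```latex
% Standard toy example (not from this article): a fibre product of
% finite cyclic groups.
\begin{example}
Take $B_1=\mathbb{Z}/4\mathbb{Z}$, $B_2=\mathbb{Z}/2\mathbb{Z}$, and
$A=\mathbb{Z}/2\mathbb{Z}$, with $\Pi_1$ the reduction modulo $2$ and
$\Pi_2=\operatorname{id}$. Then
$$B_1\times_A B_2=\{(b_1,b_2):b_1 \bmod 2=b_2\}\cong \mathbb{Z}/4\mathbb{Z},$$
via $b\mapsto (b,\,b\bmod 2)$. Writing $p=\Pi_1\circ p_1=\Pi_2\circ p_2$,
one checks directly that
$\operatorname{Ker} p_1=\{(0,0)\}$,
$\operatorname{Ker} p_2=\{(0,0),(2,0)\}$, and
$\operatorname{Ker} p=\{(0,0),(2,0)\}
 =\operatorname{Ker} p_1\times \operatorname{Ker} p_2$.
\end{example}
```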
\begin{def/rem}\label{def/rem:fibre_product} Let $\Pi_1:(B_1,F_1)\rightarrow (A,F_A)$ and $\Pi_2:(B_2,F_2)\rightarrow (A,F_A)$ be morphisms of sorted profinite groups, so that they are epimorphisms. {\em The fibre product} of $B_1$ and $B_2$ over $A$ with respect to $\Pi_1$ and $\Pi_2$ is the following sorted profinite group $(B,F)$: \begin{itemize} \item $B=B_1\times_A B_2=\{(b_1,b_2)\in B_1\times B_2:\Pi_1(b_1)=\Pi_2(b_2)\}$, which is the fibre product in the category of profinite groups; and \item Let $p_1:B\rightarrow B_1$ and $p_2:B\rightarrow B_2$ be the projection maps. Let $X=\{p_1^{-1}[N_1]:N_1\in {\mathcal N}(B_1)\}\cup\{p_2^{-1}[N_2]:N_2\in {\mathcal N}(B_2)\}$, which generates a base at $e$ because $\bigcap X=\{(e_1,e_2)\}$ where $e_1$ and $e_2$ are the identities of $B_1$ and $B_2$ respectively. Let $F_X$ be a pre-sorting data on $X$ given as follows: For $N_1\in {\mathcal N}(B_1)$ and $N_2\in {\mathcal N}(B_2)$, $$F_X(p_1^{-1}[N_1])=F_1(N_1),\ F_X(p_2^{-1}[N_2])=F_2(N_2).$$ Let $F$ be the sorting data generated by $F_X$. \end{itemize} The sorting data $F$ is the minimal sorting data on $B$ making $p_1$ and $p_2$ morphisms in the category $\operatorname{SPG}_{\mathcal J}$, that is, the sorting data $F$ makes the projection maps $p_1$ and $p_2$ sorted, and for any such sorting data $F'$, $F\subset F'$. \end{def/rem} In the category of profinite groups, the fibre product is characterized by the following properties: \begin{remark}\label{rem:char_fired_prod}\cite[Lemma 1.1]{HL} Consider a commutative diagram of groups with epimorphisms in the category of profinite groups: $$ \begin{tikzcd} B \arrow[d, "p_1"'] \arrow[r, "p_2"] & B_2 \arrow[d, "\Pi_2"] \\ B_1 \arrow[r, "\Pi_1"']& A \end{tikzcd} $$ and put $p=\Pi_1\circ p_1=\Pi_2\circ p_2$. The following are equivalent: \begin{enumerate} \item $B$ is isomorphic to the fibre product of $B_1$ and $B_2$ over $A$.
\item $B$ with $p_1$ and $p_2$ is a pullback of the pair $(\Pi_1,\Pi_2)$, that is, for any morphisms $\psi_i:C\rightarrow B_i$ for $i=1,2$ with $\Pi_1\circ \psi_1=\Pi_2\circ \psi_2$, there is a unique morphism $\psi:C\rightarrow B$ such that $p_i\circ \psi=\psi_i$ for $i=1,2$. $$ \begin{tikzcd} C \arrow[dr, "\psi"] \arrow[drr, bend left, "\psi_2"] \arrow[ddr, bend right, "\psi_1"'] & & \\ & B \arrow[d, "p_1"'] \arrow[r, "p_2"] & B_2 \arrow[d, "\Pi_2"]\\ & B_1 \arrow[r, "\Pi_1"'] & A \end{tikzcd} $$ \item $\operatorname{Ker} p_1\cap \operatorname{Ker} p_2=\{e\}$, and $A$ with $\Pi_1, \Pi_2$ is a pushout of the pair $(p_1,p_2)$, that is, for any homomorphisms $\varphi_i:B_i\rightarrow G$ for $i=1,2$ with $\varphi_1\circ p_1=\varphi_2\circ p_2$, there is a unique homomorphism $\varphi:A\rightarrow G$ such that $\varphi \circ \Pi_i=\varphi_i$ for $i=1,2$. $$ \begin{tikzcd} B \arrow[d, "p_1"'] \arrow[r, "p_2"] & B_2 \arrow[d, "\Pi_2"] \arrow[ddr, bend left, "\varphi_2"] & \\ B_1 \arrow[r, "\Pi_1"'] \arrow[drr, bend right, "\varphi_1"'] & A \arrow[dr, "\varphi"] & \\ & & G \end{tikzcd} $$ \item $\operatorname{Ker} p=\operatorname{Ker} p_1 \times \operatorname{Ker} p_2$. \end{enumerate} Note that the universal property in $(2)$ does not make sense in the category $\operatorname{SPG}_{\mathcal J}$ because the map $\psi$ need not be surjective even though both $\psi_1$ and $\psi_2$ are surjective. \end{remark} \noindent We borrow the notion of a cartesian diagram in the category of profinite groups from \cite[p. 185]{HL}. \begin{definition}\label{def:cartesian_diagram} We say that a diagram of sorted profinite groups in the category $\operatorname{SPG}_{\mathcal J}$ $$ \begin{tikzcd} (B,F) \arrow[d, "p_1"'] \arrow[r, "p_2"] & (B_2,F_2) \arrow[d, "\Pi_2"] \\ (B_1,F_1) \arrow[r, "\Pi_1"']& (A, F_A) \end{tikzcd} $$ is called {\em cartesian} if $(B,F)$ is isomorphic to the fibre product of $(B_1,F_1)$ and $(B_2,F_2)$ over $(A,F_A)$.
\end{definition} \noindent We have the following result, analogous to \cite[Lemma 1.2]{HL}. \begin{remark}\label{rem:inducing_fibre_prod} Let $\psi_i:(C,F_C)\rightarrow (B_i,F_i)$ be a morphism for $i=1,2$. Then there is a commutative diagram: $$ \begin{tikzcd} (C,F_C) \arrow[dr, "\psi"] \arrow[drr, bend left, "\psi_2"] \arrow[ddr, bend right, "\psi_1"'] & & \\ & (B,F) \arrow[d, "p_1"'] \arrow[r, "p_2"] & (B_2,F_2) \arrow[d, "\Pi_2"]\\ & (B_1,F_1) \arrow[r, "\Pi_1"'] & (A,F_A) \end{tikzcd} $$ where the square is cartesian and $\psi$ is a morphism. \end{remark} \begin{proof} By \cite[Lemma 1.2]{HL}, there is a unique commutative diagram $$ \begin{tikzcd} C \arrow[dr, "\psi"] \arrow[drr, bend left, "\psi_2"] \arrow[ddr, bend right, "\psi_1"'] & & \\ & B \arrow[d, "p_1"'] \arrow[r, "p_2"] & B_2 \arrow[d, "\Pi_2"]\\ & B_1 \arrow[r, "\Pi_1"'] & A \end{tikzcd} $$ such that $B$ is the fibre product of $B_1$ and $B_2$ over $A$ with an epimorphism $\psi$ in the category of profinite groups. Let $F_A$ be an arbitrary sorting data on $A$ such that the epimorphism $\Pi_i:(B_i,F_i)\rightarrow (A,F_A)$ is sorted for each $i=1,2$. Take the sorting data $F$ such that $(B,F)$ is the fibre product of $(B_1,F_1)$ and $(B_2,F_2)$ over $(A,F_A)$, so that the square is cartesian in the category $\operatorname{SPG}_{\mathcal J}$. Note that the sorting data $F$ depends only on the two morphisms $p_1$ and $p_2$. It remains to show that the surjective homomorphism $\psi:(C,F_C)\rightarrow (B,F)$ is sorted. Put $X=\{p_1^{-1}[N_1]:N_1\in{\mathcal N}(B_1)\}\cup\{p_2^{-1}[N_2]:N_2\in {\mathcal N}(B_2)\}$. Take $N_1\in {\mathcal N}(B_1)$ and $J_1\in F_1(N_1)$. Then, we have that $$J_1\in F_C(\psi_1^{-1}[N_1])=F_C(\psi^{-1}[p_1^{-1}[N_1]])$$ because $\psi_1=p_1\circ \psi$. The same holds for $N_2\in {\mathcal N}(B_2)$ and $J_2\in F_2(N_2)$. Thus, by the minimality of $F$, we have that $F(N)\subset F_C(\psi^{-1}[N])(=\psi_*(F_C)(N))$ for any $N\in X$ and so $F\subset \psi_*(F_C)$.
So, by Remark \ref{def/rem:pushforward_sortingdata}(1), the epimorphism $\psi:(C,F_C)\rightarrow (B,F)$ is sorted. \end{proof} We introduce the {\em sorted embedding property} for sorted profinite groups, analogous to the embedding property for profinite groups in \cite{HL} (also called the Iwasawa property in \cite{C1}). We start with the definition of the {\em embedding condition} from \cite[p. 185]{HL}. Fix a set $\mathcal J$. Throughout this section, a sorted profinite group means a sorted profinite group in $\operatorname{SPG}_{\mathcal J}$. Let $(G,F)$ be a sorted profinite group. For a pair $((A,F_A),(B,F_B))$ of sorted profinite groups, the {\em sorted embedding condition}, denoted by $\operatorname{Emb}_{(G,F)}((A,F_A),(B,F_B))$, is defined as follows: If $(A,F_A)$ is a quotient of $(G,F)$, then for every pair of morphisms $\Pi:(A,F_A)\rightarrow (B,F_B)$ and $\varphi:(G,F)\rightarrow (B,F_B)$, there is a morphism $\psi:(G,F)\rightarrow (A,F_A)$ such that $\Pi\circ \psi=\varphi$. Let $\operatorname{SIm}(G,F)$ be the set of isomorphism classes of sorted finite quotients of $(G,F)$, and let $\operatorname{FSIm}(G,F)$ be the set of isomorphism classes of finitely sorted finite quotients of $(G,F)$. Then, clearly we have that $\operatorname{FSIm}(G,F)\subset \operatorname{SIm}(G,F)$. \begin{definition}\label{def:Iwasawa_property} Let $(G,F)$ be a sorted profinite group. \begin{enumerate} \item We say that $(G,F)$ satisfies the {\em sorted embedding property} (SEP) if for all $(A,F_A),(B,F_B)\in \operatorname{SIm}(G,F)$, the sorted embedding condition $\operatorname{Emb}_{(G,F)}((A,F_A),(B,F_B))$ holds. \item We say that $(G,F)$ satisfies the {\em finitely sorted embedding property} (FSEP) if for all $(A,F_A),(B,F_B)\in \operatorname{FSIm}(G,F)$, the sorted embedding condition $\operatorname{Emb}_{(G,F)}((A,F_A),(B,F_B))$ holds.
\end{enumerate} \end{definition} \begin{example}\label{ex:sorted_profinite_group_having_SEP} \begin{enumerate} \item If $G$ is a profinite group having the embedding property, then the sorted profinite group $(G,F)$ has SEP, where $F$ is the full sorting data on $G$. For example, the free profinite group has the embedding property. Also, any profinite group has a universal embedding cover and any finite group has a finite universal embedding cover (see \cite[Theorem 1.12]{HL}). \item Let $\mathcal J=\{s_1,s_2\}$. Put $J_{\subset}^*:{\mathbb N}\times \mathcal J^{<{\mathbb N}}\rightarrow {\mathbb N}\times \mathcal J^{<{\mathbb N}}, (k,J)\mapsto (k,J^{\frown}(s_1,s_2))$. Let $G={\mathbb Z}_2\times {\mathbb Z}_2$, which has the embedding property as a profinite group. Then ${\mathcal N}(G)=\{0, G,N_{(1,1)}, N_{(1,0)}, N_{(0,1)} \}$, where $N_a$ is the subgroup of $G$ generated by $a$ for $a\in G$. Define a sorting data $F$ on $G$ as follows: \begin{itemize} \item $F(G)=\mathcal J^{<{\mathbb N}}$; \item $F(N_{(1,0)})=F(N_{(0,1)})=F(N_{(1,1)})=\{J\in \mathcal J^{<{\mathbb N}}:s_1\in ||J||\}$; \item $F(0)=\{J\in \mathcal J^{<{\mathbb N}}:s_1,s_2\in ||J||\}$. \end{itemize} Then, $(G,F)$ has SEP. \end{enumerate} \end{example} We show that the weaker notion of FSEP is actually equivalent to the notion of SEP. \begin{lemma}\label{lem:sortingdata_lifting_under_FSIP} Suppose $(G,F)$ satisfies FSEP. Let $(B,F_B)\in \operatorname{SIm}(G,F)$. Take $F_B'\subset F_B$ finitely generated so that $(B,F_B')\in \operatorname{FSIm}(G,F)$. Let $\pi:(G,F)\rightarrow (B,F_B')$ be a morphism. Then, $F_B\subset \pi_*(F)$. \end{lemma} \begin{proof} For a contradiction, suppose $F_B\not\subset \pi_*(F)$. Then, there are $N\in {\mathcal N}(B)$ and $J_N\in F_B(N)\setminus \pi_*(F)(N)$.
Let $\hat F_B$ be a pre-sorting data given as follows: For $N'\in {\mathcal N}(B)$, $$\hat F_B(N'):=\begin{cases} F_B'(N') & N'\neq N\\ F_B'(N')\cup \{J_N\} & N'=N \end{cases}.$$ Let $F_B''$ be the sorting data generated by $\hat F_B$. Then, clearly $(B,F_B'')\in \operatorname{FSIm}(G,F)$. Since $(G,F)$ has FSEP, we have the following diagram: $$ \begin{tikzcd} & (G,F) \arrow[dl, dashrightarrow, "\pi"'] \arrow[d, "\pi"]\\ (B,F_B'') \arrow[r, "\operatorname{id}"'] & (B,F_B') \end{tikzcd} $$ so that $F_B''\subset \pi_*(F)$ and $J_N\in \pi_*(F)(N)$, a contradiction. \end{proof} \begin{corollary}\label{cor:descritpion_pushforward_sortingdata_for_sameimage} Let $(G,F)$ be a sorted profinite group having FSEP. For any morphisms $\varphi_1:(G,F)\rightarrow (A,F_1)$ and $\varphi_2:(G,F)\rightarrow (A,F_2)$, we have that $(\varphi_1)_*(F)=(\varphi_2)_*(F)$. \end{corollary} \begin{proof} There is a sorting data $F_A\subset F_1\cap F_2$. So, each $\varphi_i$ induces a morphism $\varphi_i:(G,F)\rightarrow (A,F_A)$. From the proof of Lemma \ref{lem:sortingdata_lifting_under_FSIP}, we have that $(\varphi_1)_*(F)=(\varphi_2)_*(F)$. \end{proof} \begin{proposition}\label{prop:FSIP=SIP} A sorted profinite group $(G,F)$ has SEP if and only if it has FSEP. \end{proposition} \begin{proof} It is enough to show the right-to-left implication. Suppose $(G,F)$ has FSEP. Take $(A,F_A)$ and $(B,F_B)$ in $\operatorname{SIm}(G,F)$, and take two morphisms $\pi_A:(G,F)\rightarrow (A,F_A)$ and $\pi:(B,F_B)\rightarrow (A,F_A)$. We want to find $\pi_B:(G,F)\rightarrow (B,F_B)$ with $\pi_A=\pi\circ \pi_B$. Take $F_A'\subset F_A$ and $F_B'\subset F_B$ finitely generated such that $\pi$ induces a morphism $\pi:(B,F_B')\rightarrow (A,F_A')$. Since $(G,F)$ has FSEP, there is a morphism $\pi_B:(G,F)\rightarrow (B,F_B')$ with $\pi_A=\pi\circ \pi_B$.
By Lemma \ref{lem:sortingdata_lifting_under_FSIP}, $F_B\subset (\pi_B)_*(F)$, so that $\pi_B$ induces a morphism $\pi_B:(G,F)\rightarrow (B,F_B)$, which is the desired morphism. \end{proof} \noindent The advantage of FSEP is that it is first-order axiomatizable in the language of sorted complete systems, which will be crucially used to show the uniqueness of a universal SEP-cover in Theorem \ref{thm:uniquness_SIP cover}. \subsection{Sorted complete system}\label{Section:sorted_complete_system} We recall the notion of a sorted complete system from \cite[Section 3.2]{HoLee}. To each sorted profinite group in the category $\operatorname{SPG}_{\mathcal J}$, we associate a dual object, called a {\em sorted complete system}. Consider the following first-order language ${\mathcal L}_{G}(\mathcal J)$ with the sorts $m(k,J)$ for each $(k,J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, consisting of \begin{itemize} \item a family of binary relations $\le_{k,k',J,J'}$ and $C_{k,k',J,J'}$ on $m(k,J)\times m(k',J')$; and \item a family of ternary relations $P_{k,J}$ on $m(k,J)^3$. \end{itemize} For a sorted profinite group $(G,F)$, the sorted complete system $S(G,F)$ is an ${\mathcal L}_{G}(\mathcal J)$-structure given as follows: \begin{itemize} \item For $(k,J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$m(k,J):=\bigcup_{N\in{\mathcal N}(G),[G:N]\le k,J\in F(N)} G/N\times \{k\}.$$ \item For $(k,J),(k',J')\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$\le_{k,k',J,J'}:=\{\left((gN,k),(g'N',k')\right)\in m(k,J)\times m(k',J'):k\ge k',N\subset N'\}.$$ \item For $(k,J),(k',J')\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$C_{k,k',J,J'}:=\{\left((gN,k),(g'N',k')\right)\in m(k,J)\times m(k',J'):k\ge k',gN\subset g'N'\}.$$ \item For $(k,J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$P_{k,J}=\{ \left((g_1N,k),(g_2N,k),(g_3N,k) \right)\in m(k,J)^3:g_3N=g_1g_2N\}.$$ \end{itemize} If there is no confusion, we omit the subscripts and write $\le$, $C$, and $P$.
We also write $gN$ for $(gN,k)$ and $S(G)$ for $S(G,F)$. Sorted complete systems are axiomatized by an ${\mathcal L}_G(\mathcal J)$-theory $SCS$ in \cite[Definition 3.7]{HoLee}, together with an additional axiom expressing invariance under inner automorphisms: For $a\in m(k,J)$, $b\in m(k',J')$, and $c\in m(kk',J_{\cap}^*((k,J),(k',J')))$ with $[a]\wedge [b]=[c]$, if there is $d\in [c]\cap m(kk',J_{\cap}^*((k,J),(k',J')))$ such that $\varphi_d[N_a^c]=N_b^c$, where $N_a^c$ is the kernel of $\pi_{c,a}:[c]_{kk',J_{\cap}^*((k,J),(k',J'))}\rightarrow [a]_{k,J}$, then for any $(k'',J'')\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$[a]\cap m(k'',J'')\neq \emptyset\Leftrightarrow [b]\cap m(k'',J'')\neq \emptyset.$$ \noindent Conversely, any model $S$ of $SCS$ is a sorted complete system of a sorted profinite group, denoted by $(G(S),F(S))$. Let $\sim$ be the equivalence relation on $S$ given as follows: For $a,b\in S$, $$a\sim b\Leftrightarrow a\le b\wedge b\le a.$$ For $a\in S$, let $[a]$ be the $\sim$-class of $a$. Then, for each $a\in m(k,J)$, $[a]\cap m(k,J)$ forms a group whose group operation is induced from $P$. The profinite group $G(S)$ is the inverse limit of the groups $[a]\cap m(k,J)$ with the transition maps induced from $C$. Note that for each $N\in {\mathcal N}(G(S))$, there is $a\in m(k,J)$ for some $(k,J)$ such that $N$ is the kernel of the projection from $G(S)$ to $[a]\cap m(k,J)$. In this case, we denote $N$ by $N_a$. We now associate the sorting data $F(S)$ on $G(S)$ as follows: For $N\in {\mathcal N}(G(S))$ and $J\in \mathcal J^{<{\mathbb N}}$, $$J\in F(S)(N)\Leftrightarrow \exists k\in{\mathbb N}\,\exists a\in m(k,J)\,(N=N_a).$$ Then, the sorted complete system of $(G(S),F(S))$ is naturally isomorphic to $S$. If there is no confusion, we write $G(S)$ for $(G(S),F(S))$.
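To make the definition of $S(G,F)$ concrete, the following sketch (our own toy illustration, not taken from \cite{HoLee}) writes out the sorts $m(k,J)$ for the smallest non-trivial case, namely $G={\mathbb Z}/2{\mathbb Z}$ equipped with the full sorting data $F$, so that $F(N)=\mathcal J^{<{\mathbb N}}$ for both $N\in{\mathcal N}(G)=\{0,G\}$.

```latex
% Toy illustration (ours): G = Z/2Z, N(G) = {0, G}, F the full sorting data.
% Only N = G has index at most 1, while both subgroups have index at most k
% for k >= 2, so for every J in J^{<N}:
\begin{align*}
m(1,J) &= G/G \times \{1\} = \{(G,1)\},\\
m(k,J) &= \bigl(G/0 \times \{k\}\bigr) \cup \bigl(G/G \times \{k\}\bigr)
        = \{(\bar 0,k),\,(\bar 1,k),\,(G,k)\} \quad \text{for } k\ge 2.
\end{align*}
% C records coset inclusion, e.g. ((\bar 1,k),(G,k')) lies in C for k >= k',
% and P restricted to G/0 x {k} is addition modulo 2.
```

Already in this small case one sees that the same quotient $G/N$ reappears at every level $k\ge[G:N]$ and every $J\in F(N)$; the relations $\le$ and $C$ keep track of these repetitions.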
Moreover, the associations $S$ and $G$ show that the category $\operatorname{SPG}_{\mathcal J}$ of sorted profinite groups and the category of sorted complete systems, whose morphisms are ${\mathcal L}_G(\mathcal J)$-embeddings, are equivalent via contravariant functors. For more detailed information, see \cite[Section 3.2]{HoLee}. \section{Universal SEP-cover}\label{Section:universal_SIP_cover} Since the category $\operatorname{SPG}_{\mathcal J}$ is closed under inverse limits and fibre products, we can transfer many arguments for profinite groups in \cite[Section 1]{HL} and in \cite[Section 24.4]{FJ} to the case of sorted profinite groups after modifying several notions properly. In this section, we aim to show that any sorted profinite group $(G,F)$ has a universal SEP-cover, generalizing \cite[Theorem 1.12]{HL} and \cite[Proposition 24.4.5]{FJ}. \begin{definition}\label{def:SIP_cover} We say that a morphism $p:(H,F_H)\rightarrow (G,F)$ is a {\em SEP-cover} if $(H,F_H)$ has SEP. \end{definition} \begin{definition}\label{def:universal_SIP_cover} Let $(G,F)$ be a sorted profinite group. A {\em universal SEP-cover} $p:(H,F_H)\rightarrow (G,F)$ is a SEP-cover satisfying the following property: For any SEP-cover $r:(H',F_{H'})\rightarrow (G,F)$, there is a morphism $q:(H',F_{H'})\rightarrow (H,F_{H})$ such that $p\circ q=r$. \end{definition} \begin{remark}\label{rem:reduct_universal_cover} Let $p:(G',F')\rightarrow (G,F)$ be a universal SEP-cover of a sorted profinite group $(G,F)$. Then, $p:G'\rightarrow G$ is the universal embedding cover of $G$ in the category of profinite groups. Indeed, let $q:G''\rightarrow G$ be an embedding cover. Let $F''$ be the full sorting data on $G''$. Then, $q:(G'',F'')\rightarrow (G,F)$ is a SEP-cover. Since $p:(G',F')\rightarrow (G,F)$ is the universal SEP-cover, there is a morphism $r:(G'',F'')\rightarrow (G',F')$ such that $q=p\circ r:(G'',F'')\rightarrow (G,F)$, which implies $q=p\circ r:G''\rightarrow G$.
Thus, $p:G'\rightarrow G$ is the universal embedding cover of $G$. \end{remark} Before showing that any sorted profinite group has a universal SEP-cover, we first introduce two posets ${\mathcal P}$ and ${\mathcal H}$ (see \cite[p. 188]{HL} or \cite[Section 24.4]{FJ}). Let $(G_1,F_1)$ and $(G_2,F_2)$ be sorted profinite groups. We consider the following class of pairs of morphisms with common image: \begin{align*} ({\mathcal P}&:=){\mathcal P}((G_1,F_1),(G_2,F_2))\\ &=\{(\Pi_1,\Pi_2):\Pi_1:(G_1,F_1)\rightarrow (A,F_A),\Pi_2:(G_2,F_2)\rightarrow (A,F_A)\}. \end{align*} We define a pre-order $\le$ on ${\mathcal P}$ as follows: For $(\Pi_1,\Pi_2), (\Pi_1',\Pi_2')\in {\mathcal P}$, we write $(\Pi_1,\Pi_2)\le (\Pi_1',\Pi_2')$ if there is a morphism $\Pi:(A',F_{A'})\rightarrow (A,F_A)$ such that the following diagram is commutative: $$ \begin{tikzcd} (G_1,F_1) \arrow[dr, "\Pi_1'"] \arrow[ddr, "\Pi_1"'] & & (G_2,F_2) \arrow[dl, "\Pi_2'"'] \arrow[ddl, "\Pi_2"]\\ & (A',F_{A'}) \arrow[d, "\Pi"]& \\ & (A,F_A)& \end{tikzcd} $$ We write $(\Pi_1,\Pi_2)\approx (\Pi_1',\Pi_2')$ if $(\Pi_1,\Pi_2)\le (\Pi_1',\Pi_2')$ and $(\Pi_1,\Pi_2)\ge (\Pi_1',\Pi_2')$. Then, the relation $\approx$ is an equivalence relation on ${\mathcal P}$ and $\le$ gives a partial order on the quotient set ${\mathcal P}/\approx$. \begin{remark} $(\Pi_1,\Pi_2)\approx (\Pi_1',\Pi_2')$ if and only if $\Pi$ is an isomorphism. \end{remark} \begin{proof} It is enough to show that the left-to-right implication holds. Suppose $(\Pi_1,\Pi_2)\approx (\Pi_1',\Pi_2')$ and $\Pi$ is not an isomorphism. First, note that $\Pi$ is bijective. Let $F:=\Pi_*(F_{A'})$ be the push-forward sorting data on $A$. Since $\Pi$ is not an isomorphism, $F_A\subsetneq F$. Put $\Pi_1'':=\Pi\circ \Pi_1':(G_1,F_1)\rightarrow (A,F)$ and $\Pi_2'':=\Pi\circ \Pi_2'$.
Since $\Pi:(A',F_{A'})\rightarrow (A,F)$ is an isomorphism, $$(\Pi_1,\Pi_2)\approx(\Pi_1',\Pi_2')\approx (\Pi_1'',\Pi_2'').$$ So, there is a morphism $\Pi':(A,F_A)\rightarrow (A,F)$ such that $\Pi'\circ \Pi_1=\Pi_1$ and $\Pi'\circ \Pi_2=\Pi_2$, which implies that $\Pi'$ is the identity map. Thus, we have that $F\subset F_A$, a contradiction. \end{proof} We introduce a dual notion to ${\mathcal P}$. Let $(G_1,F_1)\times (G_2,F_2)$ be the fibre product of $(G_1,F_1)$ and $(G_2,F_2)$ over the trivial group. Let $p_i:(G_1,F_1)\times (G_2,F_2)\rightarrow (G_i,F_i)$ for $i=1,2$ be the canonical projection. Put \begin{align*} ({\mathcal H}&:=){\mathcal H}((G_1,F_1),(G_2,F_2))\\ &=\{\left((H,F_H),\Pi_1,\Pi_2\right):H\le G_1\times G_2, p_i(H)=G_i, i=1,2\} \end{align*} such that \begin{itemize} \item $p_i:(H,F_H)\rightarrow (G_i,F_i)$ is a morphism for $i=1,2$; \item $(\Pi_1,\Pi_2)\in {\mathcal P}$; \item the following diagram is cartesian: $$ \begin{tikzcd} (H,F_H) \arrow[d, "p_1"'] \arrow[r, "p_2"]& (G_2,F_2) \arrow[d, "\Pi_2"]\\ (G_1,F_1) \arrow[r, "\Pi_1"']& (A,F_A) \end{tikzcd} $$ \end{itemize} By Remark \ref{rem:inducing_fibre_prod}, ${\mathcal H}$ is not empty. We define a pre-order $\le'$ on ${\mathcal H}$ as follows: $$\left((H,F_H),\Pi_1,\Pi_2\right)\le' \left((H',F_{H'}),\Pi_1',\Pi_2'\right)$$ if \begin{itemize} \item $(\Pi_1',\Pi_2')\le (\Pi_1,\Pi_2)$; \item $H\subset H'$. \end{itemize} Note that $F_H=F_{H'}$ if $H=H'$ because $H$ and $H'$ are fibre products. We write $\left((H,F_H),\Pi_1,\Pi_2\right)\approx' \left((H',F_{H'}),\Pi_1',\Pi_2'\right)$ if $H=H'$ and $(\Pi_1,\Pi_2)\approx (\Pi_1',\Pi_2')$, that is, $((H,F_H),\Pi_1,\Pi_2)\le' ((H',F_{H'}),\Pi_1',\Pi_2')$ and $((H',F_{H'}),\Pi_1',\Pi_2')\le' ((H,F_H),\Pi_1,\Pi_2)$. Then, the relation $\approx'$ is an equivalence relation on ${\mathcal H}$ and $\le'$ gives a partial order on the quotient set ${\mathcal H}/\approx'$.
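It may help to see the reversal between ${\mathcal P}$ and ${\mathcal H}$ in the smallest possible case. The following is our own toy computation, with $G_1=G_2={\mathbb Z}/2{\mathbb Z}$, both carrying the full sorting data $F_{\mathrm{full}}$.

```latex
% Toy computation (ours): two elements of P((G_1,F_full),(G_2,F_full)).
%   (t_1,t_2): the trivial epimorphisms onto the trivial group 1;
%   (id,id)  : the identity morphisms onto (Z/2Z, F_full).
% The quotient map Pi: Z/2Z -> 1 satisfies t_i = Pi o id, so
%   (t_1,t_2) <= (id,id).
% The corresponding subgroups H <= G_1 x G_2 are the fibre products
\begin{align*}
{\mathbb Z}/2{\mathbb Z}\times_{1}{\mathbb Z}/2{\mathbb Z}
  &={\mathbb Z}/2{\mathbb Z}\times{\mathbb Z}/2{\mathbb Z},\\
{\mathbb Z}/2{\mathbb Z}\times_{{\mathbb Z}/2{\mathbb Z}}{\mathbb Z}/2{\mathbb Z}
  &=\{(g,g):g\in {\mathbb Z}/2{\mathbb Z}\},
\end{align*}
% so moving up in P (from the trivial pair to the identity pair) moves
% down in H (from the full product to the diagonal), which matches the
% reversal built into the pre-order <='.
```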
Now we define a map $T:{\mathcal P}\rightarrow {\mathcal H}$ given as follows: For $(\Pi_1,\Pi_2)\in {\mathcal P}$ with $(A,F_A)=\operatorname{Im}\Pi_1=\operatorname{Im}\Pi_2$, let $$T(\Pi_1,\Pi_2):=\left((G_1,F_1)\times_{(A,F_A)}(G_2,F_2), \Pi_1,\Pi_2\right).$$ The map $T$ induces a map from ${\mathcal P}/\approx$ to ${\mathcal H}/\approx'$. By abusing notation, we denote ${\mathcal P}/\approx$, ${\mathcal H}/\approx'$, and the induced map by ${\mathcal P}$, ${\mathcal H}$, and $T$ respectively. Note that the map $T$ is an order-reversing injection by definition. Then, we have a result analogous to \cite[Lemma 1.7]{HL} using Remark \ref{rem:inducing_fibre_prod}. \begin{lemma}\label{lem:maximal_to_minimal}\cite[Lemma 1.7]{HL} The map $T$ induces an order-reversing bijection between the two posets ${\mathcal P}/\approx$ and ${\mathcal H}/\approx'$. \end{lemma} \begin{remark}\label{rem:extending_fibreprod_into_CH} Let $(G_1,F_1)$ and $(G_2,F_2)$ be sorted profinite groups. Let $p_i:G_1\times G_2\rightarrow G_i$ be the canonical projection for $i=1,2$. Let $H'\le H\le G_1\times G_2$ be such that $p_i[H]=G_i$ and $p_i[H']=G_i$ for $i=1,2$. For any $((H,F),\Pi_1,\Pi_2)\in {\mathcal H}$, there is $(\Pi_1',\Pi_2')$ in ${\mathcal P}$ such that $$((H',F'),\Pi_1',\Pi_2')\le'((H,F),\Pi_1,\Pi_2),$$ where $(H',F')$ is the fibre product of $(G_1,F_1)$ and $(G_2,F_2)$ along $\Pi_1'$ and $\Pi_2'$. \end{remark} \begin{proof} Since $H'\le H$, by \cite[Lemma 1.7]{HL}, there are epimorphisms $\Pi_1'$ and $\Pi_2'$ with $\operatorname{Im}(\Pi_1')=\operatorname{Im}(\Pi_2')(=:A')$ and an epimorphism $\Pi:A'\rightarrow A$ such that the following diagram is commutative: $$ \begin{tikzcd} H' \arrow[d, "p_1"'] \arrow[r, "p_2"] & G_2 \arrow[d, "\Pi_2'"] \arrow[ddr, bend left, "\Pi_2"]& \\ G_1 \arrow[r, "\Pi_1'"'] \arrow[drr, bend right, "\Pi_1"'] & A' \arrow[dr, "\Pi"] & \\ & & A \end{tikzcd} $$ where the square is cartesian.
Let $F_{A'}$ be a sorting data on $A'$ such that all the epimorphisms $\Pi_1'$, $\Pi_2'$, and $\Pi$ are sorted. Let $F'$ be the sorting data on $H'$ such that $(H',F')$ is the fibre product of $(G_1,F_1)$ and $(G_2,F_2)$ over $(A',F_{A'})$. Then, by the choice of $F'$ and $F_{A'}$, we have that $$((H',F'),\Pi_1',\Pi_2')\le' ((H,F),\Pi_1,\Pi_2).$$ \end{proof} \noindent Using Zorn's Lemma together with inverse limits, we have the following result: \begin{lemma}\label{lem:existing_maximal_in_CP}\cite[Lemma 1.8]{HL} For every $(\Pi_1,\Pi_2)\in {\mathcal P}$, there is a maximal element $(\Pi_1',\Pi_2')\in {\mathcal P}$ such that $(\Pi_1,\Pi_2)\le (\Pi_1',\Pi_2')$. \end{lemma} We introduce a notion of the {\em quasi SEP-cover} for sorted profinite groups, analogous to the quasi-embedding cover of profinite groups in \cite[p. 189]{HL} or the $I$-cover in \cite[Definition 24.4.3]{FJ}. \begin{definition}\label{def:quasi_embedding_cover} A morphism $p:(H,F_H)\rightarrow (G,F)$ is called a {\em quasi SEP-cover} (q.s.c.) if for every SEP-cover $\varphi :(E,F_E)\rightarrow (G,F)$, there is a morphism $\psi:(E,F_E)\rightarrow (H,F_H)$. \end{definition} \begin{remark}\label{rem:basic_property_qsc} Let $(G,F)$ be a sorted profinite group whose rank is $\kappa$. \begin{enumerate} \item For two morphisms $p:(H,F_H)\rightarrow (G,F)$ and $\Pi:(G,F)\rightarrow (A,F_A)$ of sorted profinite groups, if both $p$ and $\Pi$ are q.s.c., then $\Pi\circ p$ is a q.s.c. \item For any q.s.c. $p:(H,F_H)\rightarrow (G,F)$, the cardinality of $H$ is less than or equal to the cardinality of $\mathbb{F}_{\kappa}$. Furthermore, if $G$ is finite, then so is $H$ because the universal embedding cover of $G$ is finite (cf. Example \ref{ex:sorted_profinite_group_having_SEP}). \item Let $p:(H,F_H)\rightarrow (G,F)$ be a q.s.c. which is a SEP-cover. Then, $p$ is a universal SEP-cover. \end{enumerate} \end{remark} \noindent From the proof of \cite[Lemma 24.4.4]{FJ}, we have the following result.
\begin{lemma}\label{lem:maximal_in_P_qsc} Let $(G,F)$ be a sorted profinite group and let $(B,F_B)\in \operatorname{SIm}(G,F)$. Let $(\Pi_1,\Pi_2)\in {\mathcal P}((B,F_B),(G,F))$ be a maximal element. Consider the following cartesian diagram induced from $(\Pi_1,\Pi_2)$: $$ \begin{tikzcd} (H,F_H) \arrow[d, "p_1"'] \arrow[r, "p_2"] & (G,F) \arrow[d, "\Pi_2"]\\ (B,F_B) \arrow[r, "\Pi_1"'] & (A,F_A) \end{tikzcd} $$ Then, $p_2$ is a q.s.c. \end{lemma} \begin{proof} Let $\psi_2:(E,F_E)\rightarrow (G,F)$ be a SEP-cover. Then, $(B,F_B)$ is in $\operatorname{SIm}(E,F_E)$. Since $(E,F_E)$ has SEP, there is a morphism $\psi_1:(E,F_E)\rightarrow (B,F_B)$ such that $\Pi_1\circ \psi_1=\Pi_2 \circ \psi_2$. Since $H$ is a fibre product of $B$ and $G$ over $A$, by Remark \ref{rem:char_fired_prod}(2), there is a homomorphism $\psi:E\rightarrow H$ such that the following diagram is commutative: $$ \begin{tikzcd} (E,F_E) \arrow[dr, "\psi"] \arrow[drr, bend left, "\psi_2"] \arrow[ddr, bend right, "\psi_1"'] & & \\ & (H,F_H) \arrow[d, "p_1"'] \arrow[r, "p_2"] & (G,F) \arrow[d, "\Pi_2"]\\ & (B,F_B) \arrow[r, "\Pi_1"'] & (A,F_A) \end{tikzcd} $$ Since $\psi[E]\le H$ and $((H,F_H),\Pi_1,\Pi_2)$ is minimal, by Remark \ref{rem:extending_fibreprod_into_CH}, we have that $\psi[E]=H$, so that $\psi$ is an epimorphism. We will show that $\psi$ is sorted.
Since $(H,F_H)$ is the fibre product of $(B,F_B)$ and $(G,F)$, the sorting data $F_H$ is generated by the following pre-sorting data $F_X$ (see Definition/Remark \ref{def/rem:fibre_product}): \begin{itemize} \item $X=\{p_1^{-1}[N_1]:N_1\in {\mathcal N}(B)\}\cup\{p_2^{-1}[N_2]:N_2\in {\mathcal N}(G)\}$; \item For $N_1\in {\mathcal N}(B)$ and $N_2\in {\mathcal N}(G)$, $$F_X(p_1^{-1}[N_1])=F_B(N_1),\ F_X(p_2^{-1}[N_2])=F(N_2).$$ \end{itemize} Since $\psi_1$ and $\psi_2$ are sorted, for $N_1\in {\mathcal N}(B)$ and $N_2\in {\mathcal N}(G)$, $$F_B(N_1)\subset F_E(\psi_1^{-1}[N_1]),\ F(N_2)\subset F_E(\psi_2^{-1}[N_2]).$$ Since $\psi_1=p_1\circ \psi$ and $\psi_2=p_2\circ \psi$, we have that for $N_1\in {\mathcal N}(B)$ and $N_2\in {\mathcal N}(G)$, \begin{align*} F_E(\psi^{-1}[p_1^{-1}[N_1]])&=F_E(\psi_1^{-1}[N_1])\\ &\supset F_B(N_1)\\ &=F_X(p_1^{-1}[N_1]), \end{align*} and \begin{align*} F_E(\psi^{-1}[p_2^{-1}[N_2]])&=F_E(\psi_2^{-1}[N_2])\\ &\supset F(N_2)\\ &=F_X(p_2^{-1}[N_2]), \end{align*} which implies that $\psi$ is sorted because $F_H$ is generated by $F_X$. Therefore, we have that $\psi_2=p_2\circ \psi$ for a morphism $\psi$, and $p_2$ is a q.s.c. \end{proof} \noindent We have the following characterization of sorted profinite groups having no SEP, analogous to \cite[Lemma 24.4.4]{FJ}. \begin{lemma}\label{lem:char_no_SIP} If a sorted profinite group $(G,F)$ does not have SEP, then either \begin{enumerate} \item there exists a q.s.c. $p:(H,F_H)\rightarrow (G,F)$ with a non-trivial kernel, or \item there is a q.s.c. $\operatorname{id}:(G,F')\rightarrow (G,F)$ with $F\subsetneq F'$. \end{enumerate} \end{lemma} \begin{proof} Suppose a sorted profinite group $(G,F)$ has no SEP. So, there exist \begin{itemize} \item $(A,F_A),(B,F_B)\in \operatorname{SIm}(G,F)$; and \item morphisms $\pi_1:(B,F_B)\rightarrow (A,F_A)$ and $\pi_2 :(G,F)\rightarrow (A,F_A)$, \end{itemize} such that there is no morphism $p:(G,F)\rightarrow (B,F_B)$ with $\pi_2=\pi_1\circ p$.
By Lemma \ref{lem:existing_maximal_in_CP}, there is a maximal element $(\Pi_1,\Pi_2)\in {\mathcal P}((B,F_B),(G,F))$ such that $(\pi_1,\pi_2)\le (\Pi_1,\Pi_2)$. Then, we have the following diagram: $$ \begin{tikzcd} (H,F_H) \arrow[d, "p_1"'] \arrow[r, "p_2"] & (G,F) \arrow[d, "\Pi_2"] \arrow[ddr, bend left, "\pi_2"]& \\ (B,F_B) \arrow[r, "\Pi_1"'] \arrow[drr, bend right, "\pi_1"'] & (A',F_{A'}) \arrow[dr, "\pi"] & \\ & & (A,F_A) \end{tikzcd}, $$ where $(H,F_H)$ is the fibre product of $(B,F_B)$ and $(G,F)$ over $(A',F_{A'})$. Note that $p_2$ is a q.s.c. by Lemma \ref{lem:maximal_in_P_qsc}. Suppose $p_2$ is an isomorphism. Let $p=p_1\circ p_2^{-1}$. Then, we have that \begin{align*} \pi_1\circ p&=\pi_1\circ (p_1\circ p_2^{-1})\\ &=\left ((\pi\circ \Pi_1)\circ p_1\right) \circ p_2^{-1}\\ &=\pi\circ (\Pi_2\circ p_2)\circ p_2^{-1}\\ &=\pi\circ \Pi_2\\ &=\pi_2, \end{align*} which is a contradiction. So, $p_2$ is not an isomorphism. If $p_2$ has a non-trivial kernel, then $p_2$ is a desired one. Suppose $p_2$ has the trivial kernel. Consider a sorting data $F'$ on $G$ given as follows: For $N\in {\mathcal N}(G)$, $$F'(N):=F_H(p_2^{-1}[N]).$$ Then, $p_2 : (H,F_H)\rightarrow (G,F')$ is an isomorphism and $\operatorname{id} :(G,F')\rightarrow (G,F)$ is a q.s.c. with $F\subsetneq F'$. \end{proof} Motivated by the proof of \cite[Theorem 1.12]{HL}, we provide the following lemma. \begin{lemma}\label{lem:inverse_limit_qsc} For an ordinal $\alpha$, consider an inverse system $((G_i,F_i))_i$ indexed by ordinals $i< \alpha$ such that \begin{itemize} \item for each $j<i$, the transition map $\pi_i^j$ is a q.s.c.; \item for each limit ordinal $\beta$, $(G_{\beta},F_{\beta})$ is the inverse limit of the inverse system $((G_i,F_i))_{i<\beta}$ with transition maps $\pi_i^j$; \item for each limit ordinal $\beta$ and for $i<\beta$, the transition map $\pi_i^{\beta}$ is the natural projection from $(G_{\beta},F_{\beta})$ to $(G_i,F_i)$ coming from the inverse limit construction.
\end{itemize} Let $(G,F)$ be the inverse limit of $((G_i,F_i))_{i<\alpha}$ and let $\pi_i:(G,F)\rightarrow (G_i,F_i)$ be the canonical projection for each $i<\alpha$. Then, $\pi_0$ is a q.s.c. \end{lemma} \begin{proof} If $\alpha$ is a successor ordinal, that is, $\alpha=\alpha'+1$, then $(G,F)=(G_{\alpha'},F_{\alpha'})$ and we are done. We assume that $\alpha$ is a limit ordinal. Let $p:(G',F')\rightarrow (G_0,F_0)$ be a SEP-cover. To show that $\pi_0$ is a q.s.c., using transfinite induction, we will construct a sequence $(p_i:(G',F')\rightarrow (G_i,F_i))_{i<\alpha}$ of morphisms such that for each $i<j<\alpha$, $p_i=\pi^j_i\circ p_j$. Put $p_0:=p$. Suppose we have constructed $(p_i)_{i< \gamma}$ for some $\gamma<\alpha$. If $\gamma$ is a limit ordinal, there is a desired morphism $p_{\gamma}:(G',F')\rightarrow (G_{\gamma},F_{\gamma})$ because $(G_{\gamma},F_{\gamma})$ is the inverse limit of $(G_i,F_i)_{i<\gamma}$. If $\gamma=\gamma'+1$ is a successor ordinal, there is a morphism $r:(G',F')\rightarrow (G_{\gamma},F_{\gamma})$ such that $p_{\gamma'}=\pi^{\gamma}_{\gamma'}\circ r$ because $\pi^{\gamma}_{\gamma'}$ is a q.s.c. Put $p_{\gamma}:=r$, which is the desired morphism. Since $(G,F)$ is the inverse limit of $(G_i,F_i)_{i<\alpha}$, there is $q:(G',F')\rightarrow (G,F)$ such that for each $i<\alpha$, $p_i=\pi_i\circ q$. Thus, we have that $p=\pi_0\circ q$, and $\pi_0$ is a q.s.c. \end{proof} \begin{theorem}\label{thm:existence_universal_SIP_cover} Let $(G,F)$ be a sorted profinite group. Then, there is a universal SEP-cover $p:(H,F_H)\rightarrow (G,F)$. Furthermore, if $G$ is finitely generated, then $p$ is the unique universal SEP-cover (up to isomorphism). \end{theorem} \begin{proof} If $(G,F)$ has SEP, then $\operatorname{id} : (G,F)\rightarrow (G,F)$ is a universal SEP-cover. Suppose $(G,F)$ has no SEP. Let $\kappa_0$ be the rank of $G$ and let $\kappa_1$ be the cardinality of $\mathbb F_{\kappa_0}$.
Let $\aleph$ be a cardinal with $(\lambda:=)2^{\kappa_1}<\aleph$. Using transfinite induction, we will construct an inverse system $((G_i,F_i))_i$ indexed by ordinals $i\le \alpha$ for some ordinal $\alpha<\aleph$ such that \begin{itemize} \item for each $j<i$, the transition map $\pi_i^j$ is a q.s.c. with a non-trivial kernel; \item for each limit ordinal $\beta$, $(G_{\beta},F_{\beta})$ is the inverse limit of the inverse system $((G_i,F_i))_{i<\beta}$ with transition maps $\pi_i^j$; \item for each limit ordinal $\beta$ and for $i<\beta$, the transition map $\pi_i^{\beta}$ is the natural projection from $(G_{\beta},F_{\beta})$ to $(G_i,F_i)$ coming from the inverse limit construction; \item any q.s.c. to $(G_{\alpha},F_{\alpha})$ has the trivial kernel. \end{itemize} Put $(G_0,F_0)=(G,F)$. Suppose we have constructed such an inverse system $((G_i,F_i))_{i<\beta}$ for an ordinal $\beta$.\\ Case. $\beta$ is a successor ordinal, that is, $\beta=\beta'+1$. If $(G_{\beta'},F_{\beta'})$ has SEP, then we stop the process. Suppose $(G_{\beta'},F_{\beta'})$ has no SEP. By Lemma \ref{lem:char_no_SIP}, there is a q.s.c. to $(G_{\beta'},F_{\beta'})$, which is not an isomorphism. If any q.s.c. to $(G_{\beta'},F_{\beta'})$ is injective, we stop the process. So, we may assume that there is a q.s.c. $p:(G',F')\rightarrow (G_{\beta'},F_{\beta'})$ with a non-trivial kernel. Put $(G_{\beta},F_{\beta}):=(G',F')$ and $\pi^{\beta}_{\beta'}:=p$. For each $i<\beta$, put $\pi^{\beta}_i:=\pi^{\beta'}_i\circ p$. By Remark \ref{rem:basic_property_qsc}(1), each $\pi^{\beta}_i$ is a q.s.c.\\ Case. $\beta$ is a limit ordinal. Let $(G_{\beta},F_{\beta})$ be the inverse limit of $((G_i,F_i))_{i<\beta}$. For each $i<\beta$, let $\pi^{\beta}_i$ be the natural projection map from $(G_{\beta},F_{\beta})$ to $(G_i,F_i)$. By Lemma \ref{lem:inverse_limit_qsc}, each $\pi^{\beta}_i$ is a q.s.c.\\ For each $i<j$, $\pi^j_i$ has a non-trivial kernel. 
Indeed, suppose there are $i<j$ such that $\operatorname{Ker}(\pi^j_i)$ is trivial. Since $\pi^j_i=\pi^{i+1}_i\circ \pi^j_{i+1}$, where $\pi^k_k=\operatorname{id}$ for each $k$, $\operatorname{Ker}(\pi^{i+1}_i)$ is also trivial, which is a contradiction. In our construction, $\alpha$ must be less than $\aleph$. Suppose not, that is, $\alpha\ge \aleph$. By Remark \ref{rem:basic_property_qsc}(2), we have that $|G_{\alpha}|\le \kappa_1$. So, $|{\mathcal N}(G_{\alpha})|\le 2^{\kappa_1}=\lambda$. Since $|\alpha|\ge \aleph>\lambda$, by the pigeonhole principle, for some $i<j<\alpha$, $\operatorname{Ker}(\pi^{\alpha}_j)=\operatorname{Ker}(\pi^{\alpha}_i)$. Since $\pi^{\alpha}_i=\pi^j_i\circ \pi^{\alpha}_j$, we have that $\pi^j_i$ is injective, which is a contradiction. Therefore, we have a q.s.c. $\pi^{\alpha}_0:(G_{\alpha},F_{\alpha})\rightarrow (G,F)$ such that any q.s.c. to $(G_{\alpha},F_{\alpha})$ is injective.\\ Let $(E(G),E(F)):=(G_{\alpha},F_{\alpha})$ and put $\pi:=\pi_0^{\alpha}$. If $(E(G),E(F))$ has SEP, then $\pi$ is a universal SEP-cover of $(G,F)$. Suppose $(E(G),E(F))$ has no SEP. By transfinite induction, we will construct a sequence $(E^i(F))_{i\le \gamma'}$ of sorting data on $E(G)$ such that for $i<j\le \gamma'$ \begin{itemize} \item $E^i(F)\subsetneq E^j(F)$; \item $\operatorname{id}^j_i:=\operatorname{id}:(E(G),E^j(F))\rightarrow (E(G),E^i(F))$ is a q.s.c.; \item $(E(G),E^{\gamma'}(F))$ has SEP. \end{itemize} Suppose we have constructed such a sequence $(E^i(F))_{i<\alpha'}$ for an ordinal $\alpha'$. If $\alpha'$ is a limit ordinal, put $E^{\alpha'}(F):=\bigcup_{i<\alpha'}E^i(F)$. By Lemma \ref{lem:inverse_limit_qsc}, each $\operatorname{id}^{\alpha'}_i:(E(G),E^{\alpha'}(F))\rightarrow (E(G),E^i(F))$ is a q.s.c. Suppose $\alpha'$ is a successor ordinal, that is, $\alpha'=\alpha''+1$. If $(E(G),E^{\alpha''}(F))$ has SEP, put $\gamma'=\alpha''$ and stop the process.
If $(E(G),E^{\alpha''}(F))$ does not have SEP, then by Lemma \ref{lem:char_no_SIP}, there is a sorting data $F'$ on $E(G)$ such that $E^{\alpha''}(F)\subsetneq F'$ and $\operatorname{id}:(E(G),F')\rightarrow (E(G),E^{\alpha''}(F))$ is a q.s.c., and we put $E^{\alpha'}(F):=F'$. This transfinite inductive process must stop for some $\gamma'<|\mathcal J|^+$, so that $(E(G),E^{\gamma'}(F))$ has SEP. Thus, the q.s.c. $$\pi_0^{\alpha}\circ \operatorname{id}^{\gamma'}_0:(E(G),E^{\gamma'}(F))\rightarrow (E(G),E(F))\rightarrow(G,F)$$ is a SEP-cover, so that by Remark \ref{rem:basic_property_qsc}(3), the SEP-cover $\pi_0^{\alpha}\circ \operatorname{id}^{\gamma'}_0$ is a universal SEP-cover.\\ We now prove the furthermore part. Suppose $G$ is finitely generated. Let $p_i:(G_i,F_i)\rightarrow (G,F)$ be a universal SEP-cover of $(G,F)$ for $i=1,2$. By Remark \ref{rem:reduct_universal_cover} and \cite[Theorem 1.12]{HL}, $G_1$ and $G_2$ are finitely generated. By universality, there are morphisms $q:(G_2,F_2)\rightarrow (G_1,F_1)$ and $q':(G_1,F_1)\rightarrow (G_2,F_2)$ such that $p_2=p_1\circ q$ and $p_1=p_2\circ q'$. Since $G_1$ and $G_2$ are finitely generated, by \cite[Proposition 7.6]{R}, both $q$ and $q'$ are bijective. We want to show that $q_*(F_2)=F_1$. Since $q:(G_2,F_2)\rightarrow (G_1,F_1)$ is a morphism, $F_1\subseteq q_*(F_2)$. Also we have that $q:(G_2,F_2)\cong (G_1,q_*(F_2))$ and $p_2\circ q^{-1}:(G_1,q_*(F_2))\rightarrow (G,F)$ is a universal SEP-cover. So, there is a morphism $r:(G_1,F_1)\rightarrow (G_1,q_*(F_2))$ such that $p_1=(p_2\circ q^{-1})\circ r$. Since $p_2=p_1\circ q$, the morphism $r$ should be the identity map on $G_1$, and hence $q_*(F_2)\subseteq F_1$. Thus, $F_1=q_*(F_2)$. \end{proof} \begin{remark}\label{rem:full_subcategory_closed_under_a_universal_ec} Let $\mathcal C$ be a formation of finite groups, that is, a set of finite groups closed under taking quotients and fibre products (see \cite[Section 17.3]{FJ}).
Let $\operatorname{Pro}\mathcal C$ be the set of {\em pro-$\mathcal C$ groups}, that is, inverse limits of groups in $\mathcal C$. Let $\operatorname{Pro}\mathcal C_{\mathcal J}$ be the full subcategory of the category $\operatorname{SPG}_{\mathcal J}$ whose objects are of the form $(G,F)$, called {\em sorted pro-$\mathcal C$ groups}, for a pro-${\mathcal C}$ group $G$. By \cite[Lemma 24.4.6]{FJ}, any pro-$\mathcal C$ group $G$ has a universal embedding cover, which is also a pro-$\mathcal C$ group. So, by Remark \ref{rem:reduct_universal_cover}, any sorted pro-$\mathcal C$ group has a universal SEP-cover, which is also a pro-$\mathcal C$ group. Suppose $\mathcal C$ is closed under taking subgroups. For example, let $\mathcal C$ be the set of abelian groups, nilpotent groups, solvable groups, or $p$-groups for a fixed prime $p$. By \cite[Lemma 17.3.1]{FJ}, $\operatorname{Pro}\mathcal C$ is closed under taking quotients, inverse limits, and fibre products. Since our proof works for any full subcategory of $\operatorname{SPG}_{\mathcal J}$ closed under taking quotients, inverse limits, and fibre products, we also deduce from the proof of Theorem \ref{thm:existence_universal_SIP_cover} that any sorted pro-$\mathcal C$ group has a universal SEP-cover, which is also a sorted pro-$\mathcal C$ group. \end{remark} \begin{example} We continue working with the notations in Example \ref{ex:sorted_profinite_group_having_SEP}(2). Define a sorting data $F$ on $G={\mathbb Z}_2\times {\mathbb Z}_2$ as follows: \begin{itemize} \item $F(0)=F(G)=F(N_{(1,1)})=\mathcal J^{<{\mathbb N}}$; \item $F(N_{(1,0)})=F(N_{(0,1)})=\{J\in \mathcal J^{<{\mathbb N}}:s_1\in ||J||\}$. \end{itemize} Take $(\Pi_1,\Pi_2)\in {\mathcal P}((G,F),(G,F))$ such that \begin{itemize} \item $A:=G$, $F_A$ is a sorting data on $A$ given as follows: \begin{itemize} \item $F_A(A)=\mathcal J^{<{\mathbb N}}$; \item $F_A(0)=F_A(N_{(1,0)})=F_A(N_{(0,1)})=F_A(N_{(1,1)})=\{J\in \mathcal J^{<{\mathbb N}}:s_1\in ||J||\}$.
\end{itemize} \item $\Pi_1,\Pi_2:(G,F)\rightarrow(A,F_A)$ given as follows: \begin{itemize} \item $\Pi_2=\operatorname{id}$; \item $\Pi_1:(1,1)\mapsto (1,0),(1,0)\mapsto (1,1), (0,1)\mapsto(0,1)$. \end{itemize} \end{itemize} Then, $(\Pi_1,\Pi_2)$ is a non-trivial maximal element. The fibre product with respect to $(\Pi_1,\Pi_2)$ is $(G,E^0(F))$ where the sorting data $E^0(F)$ is given as follows: \begin{itemize} \item $E^0(F)(0)=E^0(F)(G)=E^0(F)(N_{(1,0)})=E^0(F)(N_{(1,1)})=\mathcal J^{<{\mathbb N}}$; \item $E^0(F)(N_{(0,1)})=\{J\in \mathcal J^{<{\mathbb N}}:s_1\in ||J||\}$. \end{itemize} Take $(\Pi_1',\Pi_2')\in {\mathcal P}((G,E^0(F)),(G,E^0(F)))$ such that \begin{itemize} \item $A':=G$, $F_{A'}$ is a sorting data on $A'$ given as follows: \begin{itemize} \item $F_{A'}(0)=F_{A'}(A')=F_{A'}(N_{(1,1)})=\mathcal J^{<{\mathbb N}}$; \item $F_{A'}(N_{(1,0)})=F_{A'}(N_{(0,1)})=\{J\in \mathcal J^{<{\mathbb N}}:s_1\in ||J||\}$. \end{itemize} \item $\Pi_1',\Pi_2':(G,E^0(F))\rightarrow (A',F_{A'})$ given as follows: \begin{itemize} \item $\Pi_2'=\operatorname{id}$; \item $\Pi_1':(1,1)\mapsto (0,1), (1,0)\mapsto (1,1), (0,1)\mapsto (1,0)$. \end{itemize} \end{itemize} Then, $(\Pi_1',\Pi_2')$ is a non-trivial maximal element. The fibre product with respect to $(\Pi_1',\Pi_2')$ is $(G,E^1(F))$ where the sorting data $E^1(F)$ is full. Thus, the identity map from $(G,E^1(F))$ to $(G,F)$ is a universal SEP-cover of $(G,F)$. \end{example} \section{Model theory of sorted profinite groups having SEP}\label{Section:model theory of sorted profinite group with SIP} In \cite[Theorem 2.4]{C1}, Chatzidakis showed that the theory of a complete system of a profinite group with the embedding property is $\omega$-stable, and in \cite[Theorem 2.7]{C1}, using $\omega$-stability, showed that a universal embedding cover of a profinite group is unique. In this section, our main goal is to generalize this phenomenon to sorted profinite groups with SEP when $\mathcal J$ is {\bf countable}.
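As a sanity check on the example above, one can verify mechanically that the maps $\Pi_1$ and $\Pi_1'$ are indeed bijective group homomorphisms of ${\mathbb Z}_2\times{\mathbb Z}_2$. The following Python sketch is purely illustrative (the dictionary encoding of the maps is ours, not notation from the text); both maps fix the identity $(0,0)$.

```python
from itertools import product

# Elements of G = Z/2 x Z/2.
G = list(product([0, 1], repeat=2))

def add(a, b):
    # Componentwise addition mod 2.
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

# The maps Pi_1 and Pi_1' from the example, extended by (0,0) -> (0,0).
pi1  = {(0, 0): (0, 0), (1, 1): (1, 0), (1, 0): (1, 1), (0, 1): (0, 1)}
pi1p = {(0, 0): (0, 0), (1, 1): (0, 1), (1, 0): (1, 1), (0, 1): (1, 0)}

def is_automorphism(f):
    # Bijective on G and compatible with the group law.
    bij = sorted(f.values()) == sorted(G)
    hom = all(f[add(a, b)] == add(f[a], f[b]) for a in G for b in G)
    return bij and hom

assert is_automorphism(pi1) and is_automorphism(pi1p)
```

Such a brute-force check is feasible here only because $G$ is a finite (indeed four-element) group.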
\begin{definition}\label{def:basic_terminologies_SCS} \begin{enumerate} \item A {\em subsystem} of $S$ is a substructure of $S$ which is a model of $SCS$. \item Let $X\subset S$. By Zorn's Lemma, there is the smallest subsystem $S_X$ containing $X$. In this case, we say that $S_X$ is {\em generated by $X$}. Note that $S_X\subset \operatorname{acl}(X)$. \item Let $X$ be a subset of $S$. We say that $X$ is {\em locally full} if for each $x\in X$ and for each $(k,J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$[x]\cap m(k,J)\subset X.$$ We say that $X$ is {\em full} if for each $s\in S$, if there is $x\in X$ such that $x\sim s$, then $s\in X$. We write $X^{full}:=\{s\in S:(\exists x\in X)(s\sim x)\}$. \item A subset $X$ of $S$ is called {\em dense} ({\em relatively dense}) if for each $s\in S$ ($s\in S_X$), there is $x\in X$ such that $x\le s$. \item A subset $X$ of $S$ is called a {\em presystem} if it is locally full and relatively dense. \end{enumerate} \end{definition} \noindent Note that any subsystem is already locally full because for a complete system $S$ and for $s\in S$ and $(k,J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$, $$s\in m(k,J)\Leftrightarrow 1\le |[s]\cap m(k,J)|\le k.$$ \begin{remark/def}\label{rem/def:smallest_subsystem} \begin{enumerate} \item If $X$ is locally full, then $S_X\subset \operatorname{dcl}(X)$, and if $X$ is a presystem, then any embedding from $S_X$ to $S$ is uniquely determined by its image on $X$. \item We say that a subsystem $S'$ is {\em finitely generated} if there is a finite subset $X'$ such that $S'=S_{X'}$. Also, we can take such an $X'$ to be locally full and dense in $S'$. Note that a subsystem $S'$ is finitely generated if and only if $G(S')$ is finitely sorted. \end{enumerate} \end{remark/def} \begin{definition}\label{def:meet_join} Let $S_1$ and $S_2$ be subsystems of $S$. \begin{enumerate} \item $\min S_1:=\{a\in S_1:\forall b\in S_1(a\le b) \}$.
\item $S_1\vee S_2:=\{c\in m(kk',J_{\cup}(k,J)):[c]=[a]\vee[b],a\in \min S_1\cap m(k,J),b\in \min S_2\cap m(k',J') \}$. \end{enumerate} \end{definition} \noindent Let $A$ and $B$ be subsets of $S$. We write $A\le B$ if $a\le b$ for every $a\in A$ and $b\in B$. We write $A\sim B$ if $a\sim b$ for every $a\in A$ and $b\in B$. Note that for any subsystems $S_1$ and $S_2$ of $S$, and for $S_3=S_{S_1\cup S_2}$, we have that $\min S_3\sim S_1\vee S_2$. From the fibre product in Definition/Remark \ref{def/rem:fibre_product}, we have the following lemma. \begin{lemma}\label{lem:fibre_prod_in_sortedcompletesystem} Let $S_0$, $S_1$, and $S_2$ be subsystems of $S$ such that $S_0\subset S_1\cap S_2$, and let $S_3=S_{S_1\cup S_2}$. Consider the following inclusions: \begin{itemize} \item $\iota_{S_0,S_1}:S_0\rightarrow S_1$, $\iota_{S_0,S_2}:S_0\rightarrow S_2$; \item $\iota_{S_1,S_3}:S_{1}\rightarrow S_{3}$; \item $\iota_{S_2,S_3}:S_{2}\rightarrow S_{3}$. \end{itemize} \begin{enumerate} \item Suppose $S_1\vee S_2\sim \min S_0$. Then, $S_3$ is the {\em co-fibre product} of $S_1$ and $S_2$ over $S_0$, that is, we have the following cartesian diagram: $$ \begin{tikzcd} G(S_3)\arrow[rr, "G(\iota_{S_2,S_3})"] \arrow[d, "G(\iota_{S_1,S_3})"']& & G(S_{2}) \arrow[d, "G(\iota_{S_0,S_2})"]\\ G(S_{1}) \arrow[rr, "G(\iota_{S_0,S_1})"'] & & G(S_0) \end{tikzcd} $$ \item Let $S_1'$ be a subsystem of $S$ such that $S_0\subset S_1'$ and $S_1'\vee S_2\sim \min S_0$. Suppose there is an isomorphism $f:S_{1}\rightarrow S_{1}'$ making the following diagram commute: $$ \begin{tikzcd} S_0 \arrow[rr, "\iota_{S_0,S_1}"] \arrow[d, "\operatorname{id}"'] && S_{1} \arrow[d, "f"]\\ S_0 \arrow[rr, "\iota_{S_0,S_1'}"'] && S_{1}' \end{tikzcd} $$ Then, there is an isomorphism $g:S_3\rightarrow S_3'$ extending $f\cup\operatorname{id}_{S_2}$, where $S_3'=S_{S_1'\cup S_2}$.
\end{enumerate} \end{lemma} \begin{remark}\label{rem:co-fibre product_and_smallestsubsystem} For any subsystems $S_1$ and $S_2$ of $S$, the subsystem $S_{S_1\cup S_2}$ is a co-fibre product of $S_1$ and $S_2$ over $S_0$, where $S_0$ is the subsystem generated by $(S_1\vee S_2)\cap S_1\cap S_2$. \end{remark} Next, we introduce the notion of the co-sorted embedding property, which is a notion dual to SEP for sorted profinite groups. Let $S$ be a sorted complete system. \begin{definition}\label{def:co-(F)SIP} We say that $S$ has the {\em co-sorted embedding property} (co-SEP) if for any finitely generated subsystems $S_1,S_2\subset S$, and for any embedding $\Pi:S_2\rightarrow S_1$ and any embedding $\Phi:S_2\rightarrow S$, there is an embedding $\Psi:S_1\rightarrow S$ making the following diagram commute: $$ \begin{tikzcd} S_2 \arrow[r, "\Pi"] \arrow[d, "\operatorname{id}"'] & S_1 \arrow[d, dashed, "\Psi"]\\ S_2 \arrow[r, "\Phi"']& S \end{tikzcd}. $$ \end{definition} \begin{remark}\label{rem:axiomatizability_co-SIP} By Remark/Definition \ref{rem/def:smallest_subsystem}(2), co-SEP is first-order axiomatizable in the language ${\mathcal L}_{G}(\mathcal J)$; let $SCS_{SEP}$ be the theory of sorted complete systems having co-SEP. \end{remark} \noindent Since the category $\operatorname{SPG}_{\mathcal J}$ of sorted profinite groups and the category of sorted complete systems are equivalent via contravariant functors, we have the following relationship between SEP and co-SEP. \begin{remark}\label{rem:(F)SIP_co-(F)SIP} Let $(G,F)$ be a sorted profinite group and let $S$ be the sorted complete system of $(G,F)$. By Proposition \ref{prop:FSIP=SIP}, we have that $(G,F)$ has SEP if and only if $(G,F)$ has FSEP if and only if $S$ has co-SEP. \end{remark} \noindent We have the following result dual to Corollary \ref{cor:descritpion_pushforward_sortingdata_for_sameimage}. \begin{proposition}\label{prop:uniquness_full_substructure} Let $S\models SCS_{SEP}$ and let $S'\models SCS$.
For any embeddings $\varphi_1:S'\rightarrow S$ and $\varphi_2:S'\rightarrow S$, the two substructures $\varphi_1[S']^{full}$ and $\varphi_2[S']^{full}$ of $S$ are isomorphic. \end{proposition} For a sorted complete system $S$, let $\operatorname{coFSIm}(S)$ be the set of isomorphism classes of finitely generated subsystems of $S$. We have the following result, analogous to \cite[Theorem 2.2]{C1}. \begin{lemma}\label{lem:key_lemma_for_alpeh_0_categoricity_for_SIP} Let $S_1$ and $S_2$ be sorted complete systems having co-SEP. Suppose $|S_1|=|S_2|=\aleph_0$ and $\operatorname{coFSIm}(S_1)=\operatorname{coFSIm}(S_2)$. Then, $S_1\cong S_2$. \end{lemma} \begin{proof} We follow the back-and-forth argument in the proof of \cite[Theorem 2.2]{C1}. List $S_1=\{\alpha_0,\alpha_1,\ldots\}$ and $S_2=\{\beta_0,\beta_1,\ldots\}$. Inductively, we construct an increasing sequence of isomorphisms $f_i:S_i^1\rightarrow S_i^2$ between finitely generated subsystems of $S_1$ and $S_2$ respectively such that, for $i\in \omega$, $\alpha_i\in S_i^1$ if $i$ is even, and $\beta_i\in S_i^2$ if $i$ is odd. For each $k=1,2$, let $S^k_{-1}$ be the trivial subsystem of $S_k$ such that $m(k',J)(S^k_{-1})$ consists of the $\le$-maximal element for each $(k',J)\in {\mathbb N}\times \mathcal J^{<{\mathbb N}}$. Let $f_{-1}:S^{1}_{-1}\rightarrow S^{2}_{-1}$ be the canonical isomorphism. Suppose that we have constructed $f_0,\ldots,f_i$. Without loss of generality, we may assume that $i$ is odd. Suppose $S_i^1$ is generated by a finite presystem $X_i$ of $S_1$. If $\alpha_{i+1}\in S_i^1$, put $S_{i+1}^1:=S_i^1$ and put $f_{i+1}:=f_i$. If $\alpha_{i+1}\notin S_i^1$, let $S_{i+1}^1$ be the subsystem generated by the finite subset $X_i\cup ([\alpha_{i+1}]\cap m(k,J))$, where $\alpha_{i+1}\in m(k,J)$. Since $\operatorname{coFSIm}(S_1)=\operatorname{coFSIm}(S_2)$, there is a subsystem $\bar S_{i+1}^2$ of $S_2$ which is isomorphic to $S_{i+1}^1$.
Let $\bar \psi :S_{i+1}^1\rightarrow \bar S_{i+1}^2$ be an isomorphism. Note that $\bar S_{i+1}^2$ is also finitely generated because $S_{i+1}^1$ is finitely generated. Since $S_2$ has co-SEP, there is an embedding $\psi :\bar S_{i+1}^2\rightarrow S_2$ making the following diagram commute: $$ \begin{tikzcd} S_i^2\arrow[r, "f_{i}^{-1}"] \arrow[d, "\operatorname{id}"'] & S_{i+1}^1 \arrow[r, "\bar\psi"] & \bar S_{i+1}^2 \arrow[d, dashed, "\psi"]\\ S_i^2 \arrow[rr, "\iota"'] & & S_2 \end{tikzcd} $$ Put $S_{i+1}^2:=\psi\circ \bar\psi[S_{i+1}^1]$ and put $f_{i+1}:=\psi\circ \bar\psi$. Note that $f_{i+1}$ extends $f_i$ because of the following diagram: $$ \begin{tikzcd} S_i^2\arrow[r, "f_{i}^{-1}"] \arrow[d, "\operatorname{id}"'] & S_{i+1}^1 \arrow[d, "f_{i+1}"]\\ S_i^2 \arrow[r, "\iota"'] & S_{i+1}^2 \end{tikzcd} $$ \end{proof} \subsection{$\omega$-stability of sorted complete systems with co-SEP} From now on, we assume that $\mathcal J$ is {\bf countable}. \begin{remark}\label{rem:firstorderproperty_SIM} Note that for a sorted complete system $S$ and for any finitely generated sorted complete system $S'$, there are finitely many $(k_1,J_1),\ldots,(k_n,J_n)$ in ${\mathbb N}\times \mathcal J^{<{\mathbb N}}$ and a positive integer $N$ such that $S'\in \operatorname{coFSIm}(S)$ if and only if there is a finite set $X\subset m(k_1,J_1)\cup \cdots \cup m(k_n,J_n)$ of size less than $N$ such that $S_X\cong S'$. \end{remark} As a corollary of Lemma \ref{lem:key_lemma_for_alpeh_0_categoricity_for_SIP}, we have the following result, analogous to \cite[Theorem 2.3]{C1}. \begin{theorem}\label{thm:alpeh_0_categoricity_for_SIP} Suppose that $S$ has co-SEP. Then, $\operatorname{Th}(S)$ is axiomatized by $SCS_{SEP}$ together with the following axioms: for every $S_1\in \operatorname{coFSIm}(S)$ and $S_2\notin \operatorname{coFSIm}(S)$, \begin{itemize} \item there is a finite presystem $X$ of $S$ such that $S_X\cong S_1$; \item for every finite presystem $Y$ of $S$, $S_Y\not\cong S_2$.
\end{itemize} \end{theorem} \noindent Next, we describe complete types in sorted complete systems with co-SEP, analogously to \cite[Theorem 2.4]{C1}. \begin{remark/def} For $a\in m(k,J)$ and for a subsystem $S'$ of $S$, we write $a\vee S':=\{c\in m(kk',J_{\subset}^*(k,J)):[c]=[a]\vee [b], b\in \min S'\cap m(k',J')\}$. Note that $a\vee S'$ is locally full, and $c_1\sim c_2$ for any $c_1,c_2\in a\vee S'$ so that $a\vee S'$ is relatively dense. \end{remark/def} \begin{theorem}\label{thm:description_complete_types} Let $S\models SCS_{SEP}$. \begin{enumerate} \item Let $A$ be a subsystem of $S$ and let $a,b\in m(k,J)$. $\operatorname{tp}(a/A)=\operatorname{tp}(b/A)$ if and only if $a\vee A=b\vee A$ and there is an isomorphism $f:S_{a\vee A\cup\{a\}}\rightarrow S_{b\vee A\cup \{b\}}$ such that \begin{itemize} \item $f\restriction_{S_{a\vee A}}=\operatorname{id}_{S_{a\vee A}}$; \item $f(a)=b$. \end{itemize} \item $\operatorname{Th}(S)$ is $\omega$-stable. \end{enumerate} \end{theorem} \begin{proof} We follow the proof scheme of \cite[Theorem 2.4]{C1}. $(1)$ Suppose $\operatorname{tp}(a/A)=\operatorname{tp}(b/A)$. Without loss of generality, we may assume that $S$ is saturated. Then, there is an automorphism $f$ of $S$ over $A$ sending $a$ to $b$. By the choice of $f$, we have that $a\vee A=b\vee A$. Also, the restriction of $f$ to $S_{a\vee A\cup \{a\}}$ is as desired. Suppose $a\vee A=b\vee A$ and there is an isomorphism $f:S_{a\vee A\cup \{a\}}\rightarrow S_{b\vee A\cup \{b\}}$ over $a\vee A$ sending $a$ to $b$. Take a locally full, relatively dense, and finite subset $X$ of $A$ such that $X$ contains an element $x_0\in \min A$. Take a locally full finite subset $Y$ of $a\vee A$. By the choice of $x_0$ and $Y$, we have that \begin{itemize} \item $S_{X\cup Y}\vee S_{Y\cup \{a\}}\sim \{x_0\}\vee \{a\}\sim S_{a\vee A}\sim S_Y$; \item $S_{X\cup Y}\vee S_{Y\cup \{b\}}\sim \{x_0\}\vee \{b\}\sim S_{b\vee A}\sim S_Y$.
\end{itemize} The restriction $f\restriction_{S_{Y\cup \{a\}}}:S_{Y\cup \{a\}}\rightarrow S_{Y\cup\{b\}}$ is an isomorphism over $S_Y$ with $f([a]\cap m(k,J))=[b]\cap m(k,J)$, so that we have the following diagram: $$ \begin{tikzcd} S_Y \arrow[rr, "\iota_{S_Y,S_{Y\cup \{a\}}}"] \arrow[d, "\operatorname{id}"'] && S_{Y\cup \{a\}} \arrow[d, "f\restriction_{S_{Y\cup \{a\}} }"]\\ S_Y \arrow[rr, "\iota_{S_Y,S_{Y\cup \{b\}}}"'] && S_{Y\cup \{b\}} \end{tikzcd} $$ By Lemma \ref{lem:fibre_prod_in_sortedcompletesystem}, there is an isomorphism $g:S_{X\cup Y\cup \{a\}}\rightarrow S_{X\cup Y\cup \{b\}}$ over $S_{X\cup Y}$. Take a countable elementary substructure $S'$ of $S$ containing $X,Y,a,b$ so that $S_{X\cup Y\cup\{a\}}$ and $S_{X\cup Y\cup\{b\}}$ are subsystems of $S'$. By the proof of Lemma \ref{lem:key_lemma_for_alpeh_0_categoricity_for_SIP}, $\operatorname{tp}(a/S_X)=\operatorname{tp}(b/S_X)$, and so by compactness, $\operatorname{tp}(a/A)=\operatorname{tp}(b/A)$.\\ $(2)$ As noted in the proof of \cite[Theorem 2.4]{C1}, it is enough to show that for a countable subset $A$ of $S$, there are only countably many unary types over $A$. Let $A$ be a countable subset of $S$. Without loss of generality, we may assume that $A$ is a subsystem of $S$ and $S$ is $\aleph_1$-saturated. By $(1)$, for each $a\in m(k,J)$, $\operatorname{tp}(a/A)$ is determined by the isomorphism class of $S_{a\vee A\cup \{a\}}$ over $S_{a\vee A}$. Since $A$ is countable and $\mathcal J$ is countable, there are only countably many possibilities for $a\vee A$ for each $a\in m(k,J)$. Also, $a\vee A\cup ([a]\cap m(k,J))$ is locally full and dense in $S_{a\vee A\cup \{a\}}$ because $a\le s$ for all $s\in S_{a\vee A\cup \{a\}}$. Thus, any isomorphism from $S_{a\vee A\cup\{a\}}$ is determined by its image on $a\vee A\cup ([a]\cap m(k,J))$.
Thus, for each $a\in m(k,J)$, the number of isomorphism classes of $S_{a\vee A\cup \{a\}}$ over $a\vee A$ is bounded by the number of isomorphism classes of $[a]$, which is finite. Therefore, there are only countably many types $\operatorname{tp}(a/A)$ for each $a\in m(k,J)$. \end{proof} \begin{theorem}\label{thm:uniquness_SIP cover} Any sorted profinite group $(H,F_H)$ has a universal SEP-cover $(G,F)$ which is unique up to isomorphism over $(H,F_H)$. \end{theorem} \begin{proof} Following the proof scheme of \cite[Theorem 2.7]{C1}, we will first identify the theory of $S(G)$. By Theorem \ref{thm:existence_universal_SIP_cover}, any $(A,F)\in \operatorname{FSIm}(H,F_H)$ has a universal SEP-cover. Let $\Gamma$ be the set of isomorphism classes of finitely sorted finite groups which are images of the universal SEP-cover of $(A,F)$ for some $(A,F)\in \operatorname{FSIm}(H,F_H)$. Let $T$ be the theory given by \begin{align*} T&=SCS_{SEP}\cup Diag(S(H))\\ &\cup\{\exists X( S_X\cong S(A)):(A,F)\in \Gamma\}\\ &\cup\{\forall X(S_X\not\cong S(B)):(B,F')\not\in \Gamma\}, \end{align*} which can be written as ${\mathcal L}_G(\mathcal J)(S(H))$-sentences by Remark \ref{rem:firstorderproperty_SIM}. \begin{claim}\label{claim:T_consistent} The theory $T$ is consistent and complete. \end{claim} \begin{proof} Let $Y$ be a finite subset of $Diag(S(H))$, and let $(A_1,F_1),\dots, (A_n,F_n)\in \Gamma$ and $(B_1,F_1'),\ldots,(B_m,F_m')\notin \Gamma$. By Remark \ref{rem:firstorderproperty_SIM}, there are finite subsets $X_1,\ldots,X_n$ of $S(H)$ such that $(A_i,F_i)$ is an image of the universal SEP-cover of $G(S_{X_i})$ for each $i$. Let $S':=S_{(\bigcup_{i\le n} X_i)\cup Y}$ be a subsystem of $S(H)$. Then, $G(S')$ is in $\operatorname{FSIm}(H,F_H)$, and for the universal SEP-cover $(E,F_E)$ of $G(S')$, $(A_1,F_1),\ldots,(A_n,F_n) \in \operatorname{FSIm}(E,F_E)(\subset \Gamma)$ and $(B_1,F_1'),\ldots,(B_m,F_m')\notin \operatorname{FSIm}(E,F_E)$.
Thus, $T$ is finitely consistent and so it is consistent. Also, it is complete by Lemma \ref{lem:key_lemma_for_alpeh_0_categoricity_for_SIP}. \end{proof} Since $T$ is $\omega$-stable, there is a prime model $S$ of $T$ over $S(H)$ which is unique up to isomorphism. Let $(G,F):=G(S)$ and let $\pi:(G,F)\rightarrow (H,F_H)$ be the epimorphism dual to the inclusion $\iota:S(H)\rightarrow S$. \begin{claim}\label{claim:G(S)_universal_SIP-cover} The epimorphism $\pi:(G,F)\rightarrow (H,F_H)$ is a universal SEP-cover of $(H,F_H)$. \end{claim} \begin{proof} Let $\pi':(G',F')\rightarrow (H,F_H)$ be a universal SEP-cover, which exists by Theorem \ref{thm:existence_universal_SIP_cover}. Since $\pi$ is a SEP-cover, we have that $\Gamma\subset \operatorname{FSIm}(G',F')\subset \operatorname{FSIm}(G,F)=\Gamma$ so that $\operatorname{FSIm}(G',F')=\Gamma$. Therefore, $S(G')$ is a model of $T$. Let $q:(M,F_M)\rightarrow (H,F_H)$ be a SEP-cover so that $S(M)\models SCS_{SEP}$. Since $\pi'$ is a universal SEP-cover, there is $p:(M,F_M)\rightarrow (G',F')$ such that $q=\pi'\circ p$. Put $S'=\operatorname{Im}(S(p))\subset S(M)$, which is a model of $T$. Since $S$ is a prime model of $T$, there is an embedding $\Phi:S\rightarrow S'$ such that $\Phi(x)=S(q)(x)$ for each $x\in S(H)$. Consider the embedding $\iota\circ \Phi:S\rightarrow S'\rightarrow S(M)$, where $\iota:S'\rightarrow S(M)$ is the inclusion. Then, the dual map $\varphi:=G(\iota\circ \Phi):(M,F_M)\rightarrow (G,F)$ gives a morphism such that $q=\pi\circ \varphi$. Therefore, $\pi:(G,F)\rightarrow (H,F_H)$ is a universal SEP-cover. \end{proof} Therefore, we conclude that the dual group of any prime model of $T$ over $S(H)$ gives a universal SEP-cover. Also, the proof of Claim \ref{claim:G(S)_universal_SIP-cover} shows that the sorted complete system of a universal SEP-cover of $(H,F_H)$ is a prime model of $T$ over $S(H)$. By uniqueness of the prime model of $T$ over $S(H)$, all universal SEP-covers of $(H,F_H)$ are isomorphic.
\end{proof} \subsection{Description of forking} In this subsection, we aim to describe forking independence and $U$-rank in sorted complete systems with co-SEP. We basically follow the proof scheme in \cite[Section 4]{C1}. We fix a complete extension $T$ of $SCS_{SEP}$, and $S\models T$. \begin{definition}\cite[Definition 1.10]{C1} Let $a\le b\in S$. The length of $a$ over $b$, denoted by $L(a/b)$ is the largest integer $n$ such that there exists a chain $$a=a_0<a_1<\cdots<a_n=b.$$ \end{definition} \begin{lemma}\label{lem:join_algebraic_closure}\cite[Lemma 4.1]{C1} Let $a<b\in S$ and let $A\subset S$. Suppose $b\in a\vee A$. Then, $$a\in \operatorname{acl}(A)\Leftrightarrow a\in \operatorname{acl}(b).$$ \end{lemma} \begin{proof} If $b\in \min A$, then it holds trivially. So, we assume that $\{b\}>\min A$. It is enough to show the left-to-right implication holds. Suppose $a\in \operatorname{acl}(A)$. We use induction on $L(a/b)$. Suppose $L(a/b)=1$. Without loss of generality, we may assume that $S$ is $(\aleph_0+|A|)^+$-saturated. Suppose $a\in \operatorname{acl}(A)\setminus\operatorname{acl}(b)$. Then, $\operatorname{tp}(a/\operatorname{acl}(b))$ has $(\aleph_0+|A|)^+$-many realizations in $S$ and so there are infinitely many realizations $a_0,a_1,\ldots$ of $\operatorname{tp}(a/\operatorname{acl}(b))$ outside of $A$. Since $a_i\models \operatorname{tp}(a/\operatorname{acl}(b))$, we have that $L(a_i/b)=1$. Since $a_i,c<b$ and $L(a_i/b)=1$ for any $c\in \min A$ and $i=0,1,\ldots$, we have $b\in a_i\vee A=a_j\vee A$ for any $i\neq j$. Therefore, by Theorem \ref{thm:description_complete_types}(1), $a_i\models \operatorname{tp}(a/A)$ for each $i$ because $a\vee A\subset \operatorname{acl}(b)$ and $S_{a\vee A}\subset \operatorname{acl}(b)$. This implies that $a\not\in \operatorname{acl}(A)$, a contradiction. Suppose $L(a/b)=n\ge 2$. Take $a=a_0<a_1<\ldots<a_n=b$. Note that each $a_i$ is in $\operatorname{acl}(A)$ because $a\in \operatorname{acl}(A)$. 
Since $\{b\}>\min A$, we have that $a_{i+1}\in a_i\vee S_{A\cup\{a_{i+1}\}}$ for each $i$. Since $L(a_i/a_{i+1})=1$, by induction, $a_i\in \operatorname{acl}(a_{i+1})$ for each $i$. Thus, $a=a_0\in \operatorname{acl}(a_n)=\operatorname{acl}(b)$. \end{proof} We now describe forking independence, analogously to \cite[Proposition 4.1]{C1}. \begin{proposition}\label{prop:forking_description} Let $A\subset B$ be substructures of $S$ and let $a\in S$. Then, $$a\mathop{\smile \hskip -0.9em ^| \ }_A B\Leftrightarrow a\vee B\subset \operatorname{acl}(A)\Leftrightarrow a\vee B\subset \operatorname{acl}(a\vee A).$$ \end{proposition} \begin{proof} Without loss of generality, we may assume that $A$ and $B$ are algebraically closed. Since $\min A\ge \min B$, we have that $a\vee A\ge a\vee B$. Since $\{a\}, \min B\le a\vee B$, we have that $a\vee B\subset \operatorname{acl}(a)\cap \operatorname{acl}(B)$. So, if $a\vee B\not\subset \operatorname{acl}(A)$, then $B\mathop{ \not \smile \hskip -0.9em ^| \ }_A a$. Suppose $a\vee B\subset \operatorname{acl}(A)=A$ so that $a\vee B=a\vee A$ and $S_{a\vee B}\subset A$. Consider the following partial type over $B$: $$\Sigma(x):=\operatorname{tp}(a/S_{a\vee B})\cup\{x\vee \delta\sim c:\delta\in B,c\in a\vee B, \delta\le c\}.$$ By Theorem \ref{thm:description_complete_types}(1), the partial type $\Sigma$ is consistent and $\Sigma\models \operatorname{tp}(a/B)$. Since $a\vee B=a\vee A\subset A$, the type $\operatorname{tp}(a/B)$ is definable over $A$ and it does not fork over $A$. The second equivalence comes from Lemma \ref{lem:join_algebraic_closure}. \end{proof} \noindent By the same proof as \cite[Theorem 4.2]{C1}, we have the following description of $U$-rank. \begin{theorem}\label{thm:U-rank} Let $a\in S$ and $A\subset S$. Let $n=L(a/b)$ for some (equivalently, any) $b\in a\vee S_A$. Choose a sequence $a=a_0<a_1<\ldots<a_n=b$.
Then, the $U$-rank of $\operatorname{tp}(a/A)$ is the number of indices $i<n$ such that $a_i\not\in \operatorname{acl}(a_{i+1})$. \end{theorem} We end our paper with the following question. \begin{question}\label{question:sorted_prof_gp=galois_gp?} Any profinite group is the Galois group of a field extension. For a fixed language ${\mathcal L}$ with a set $\mathcal J$ of sorts having a stable theory, is there a stable ${\mathcal L}$-theory $T$ having elimination of imaginaries and a monster model $\mathfrak{C}$ of $T$ such that any sorted profinite group in $\operatorname{SPG}_{\mathcal J}$, with the function $J^*_{\subset}$ defined in \cite[Remark 3.1]{HoLee}, arises as the Galois group of a substructure of $\mathfrak{C}$, after equipping it with a suitable sorting data? \end{question}
\section{Introduction} Bibliography on translation surfaces is immense; we cite here only the celebrated handbooks of dynamical systems (see for instance~\cite{esk,for,hub,masur,masur2}), the nice survey~\cite{wri}, as well as~\cite{yoc} and~\cite{zor}, and references therein. Also, we refer to Section~\ref{s2} for precise definitions, staying colloquial in this introduction. The translation surface obtained by gluing parallel sides of a regular octagon is commonly known as ``the octagon''. A fake octagon is a translation surface with one singular point and the same periods as the octagon. It is well known that periods are local coordinates for the moduli space of translation surfaces of fixed genus and singular divisor. Periods come in two flavours, absolute and relative: the former are translation vectors associated to closed loops, the latter those associated to saddle connections (i.e. paths connecting singular points). So-called isoperiodic deformations of a translation surface consist in changing relative periods without touching absolute ones. Isoperiodic loci are leaves of the isoperiodic foliation (also known as the absolute period foliation or kernel foliation). Local coordinates on isoperiodic leaves are given by positions of singular points with respect to a fixed singular point, chosen as origin. As a consequence, translation surfaces of the minimal stratum (that is, with a unique singular point) cannot be continuously and isoperiodically deformed in that stratum (all periods are absolute). A priori, it is not clear whether or not, given $X$ in the minimal stratum, there is a translation surface different from $X$, still in the minimal stratum, with the same periods as $X$. If such surfaces exist, they are called ``fake $X$''. In fact, the question of finding fakes of famous translation surfaces, as for instance the octagon, was a nice coffee-break problem in dynamical systems conferences some years ago. Nowadays, this is literature.
\medskip Fakes were introduced and studied by McMullen in~\cite{Mcm07,Mcm14} --- who gave a complete and detailed description of isoperiodic leaves in genus two --- and dynamical properties of the isoperiodic foliation were established in~\cite{CDF,ursula} in general (in particular ergodicity and classification of leaf-closures). From~\cite{Mcm07,Mcm14,CDF,ursula} it follows that if the periods of $X$ are not discrete (e.g. the octagon), then $X$ has infinitely many fakes. More precisely, the isoperiodic leaf through $X$ intersects the minimal stratum $\mathcal H_{2g-2}$ in a set whose closure has positive dimension. In particular, any such $X$ can be approximated by fakes. Moreover, in~\cite{Mcm14} McMullen showed that, in genus two, fakes are arranged in horizontal strips, and described all fake pentagons. \medskip The purpose of this note is to give easy proofs of such results for the particular case of the octagon by using elementary methods; where ``easy'' means ``explicable in a conference coffee-break''. The ``elementary methods'' we use are surgeries that are the topological viewpoint of the so-called Schiffer variations. Given the octagon, we describe a surgery (that we call ``left-surgery'') that produces a fake octagon and that can be iterated. We then prove that all fakes produced by iterating left-surgeries are in fact different from each other, thereby exhibiting an explicit infinite family of fake octagons. Also, we will show that any fake of the family can be arbitrarily approximated by iterates. We note that all our fakes lie along a ``horizontal'' line of the isoperiodic leaf of the octagon: the Schiffer variations are always in the horizontal direction. In this way we describe all fakes in a horizontal strip. Finally, we discuss ingredients needed for possible generalisations.
Our main result is summarised as follows: \begin{thm*}[Theorem~\ref{thm1}, Remark~\ref{r44}] Fake octagons obtained by iterated left-surgeries on the octagon are different from each other, and any such fake can be arbitrarily approximated by iterates. \end{thm*} \medskip \noindent{\bf Acknowledgements} This work originated from the master's thesis~\cite{dob} of the first named author. The second named author would like to thank the first named author for the genuine friendship born during the redaction of that thesis. \section{Isoperiodic foliation and fakes}\label{s2} Translation structures on closed, connected, oriented surfaces can be defined in many different ways, for instance: \begin{itemize} \item They can be viewed as Euclidean structures with cone-singularities of cone-angles multiple of $2\pi$, up to isometries that read as translations in local charts. Equivalently, they are branched $\mathbb C$-structures whose holonomy consists of translations, where ``branched'' means that the developing map is not just a local homeomorphism but can also be a local branched covering; \item or as pairs $(X,\omega)$ where $X$ is a Riemann surface and $\omega$ a holomorphic $1$-form, up to biholomorphisms; \item or as quotients of polygons in $\mathbb C$ via gluings that identify pairs of parallel edges via translations, up to suitable ``tangram'' relations. \end{itemize} The third construction clearly produces a Euclidean structure with cone-singularities, which, by pulling back the structure of $(\mathbb C,dz)$, produces a complex structure together with a $1$-form (whose zeroes correspond to cone-singularities). In fact, it turns out that all viewpoints are equivalent (we refer to~\cite{wri} for more details). Any singular point has an order: if viewed as a cone-point, then it has order $d$ if the total angle is $2\pi+2\pi d$; if viewed as a zero of $\omega$, then it has order $d$ if locally $\omega=z^d\,dz$.
As usual, we will refer to a surface endowed with a translation structure as a {\em translation surface}. Singular points are also referred to as {\em saddles}. If a translation surface has genus $g$, then by Gauss-Bonnet (or by an Euler characteristic count) the sum of the orders of singular points is $2g-2$. The moduli space of translation surfaces of genus $g$ --- which we denote simply by $\mathcal H$ if there is no ambiguity on the genus --- is naturally stratified by the singular divisor: if $\kappa$ is a partition of $2g-2$ (more precisely, a list of non-increasing positive integers summing up to $2g-2$) then the stratum $\mathcal H(\kappa)$ consists of all translation surfaces whose singular points have orders as prescribed by $\kappa$. For example, in genus $g=2$ there are only two strata: the principal, or generic, stratum $\mathcal H_{1,1}$ --- consisting of translation surfaces with two simple singular points (with cone-angles $4\pi$ each) --- and the minimal stratum $\mathcal H_2$ --- consisting of translation surfaces having only one singular point of cone-angle $6\pi$. It turns out that any stratum is a complex orbifold of dimension $2g+s-1$, where $s=|\kappa|$ is the number of singular points. Apart from obvious issues due to the orbifold structure, periods give coordinates on any stratum. More precisely, if $S$ is a translation surface with singular locus $\Sigma=\{x_1,\dots,x_s\}$, then we consider the relative homology $H_1(S,\Sigma;\mathbb Z)$. If $\gamma_1,\dots,\gamma_{2g}$ is a basis of $H_1(S;\mathbb Z)$ and $\eta_2,\dots, \eta_s$ are arcs connecting $x_1$ to $x_2,\dots, x_s$, then the family $\gamma_1,\dots,\gamma_{2g},\eta_2,\dots,\eta_{s}$ is a basis of $H_1(S,\Sigma;\mathbb Z)$. By using the $(X,\omega)$ viewpoint of translation surfaces, the period map $$(X,\omega)\mapsto (\int_{\gamma_1}\omega,\dots,\int_{\gamma_{2g}}\omega,\int_{\eta_2}\omega,\dots,\int_{\eta_s}\omega)$$ is a local chart $\mathcal H(\kappa)\to \mathbb C^{2g+s-1}$.
These are the so-called {\bf period coordinates}. In other words, we consider $[\omega]\in H^1(S,\Sigma;\mathbb C)$. Periods of the curves $\gamma_i$ are usually called {\bf absolute periods}, while those of the $\eta_i$ are {\bf relative periods}. There is a natural period map $Per:\mathcal H\to \mathbb C^{2g}=H^1(S;\mathbb C)$ that associates to any translation surface its absolute periods $$Per: (X,\omega)\mapsto (\int_{\gamma_1}\omega,\dots,\int_{\gamma_{2g}}\omega).$$ The so-called {\bf isoperiodic foliation} $\mathcal F$ (also known as the {\em kernel foliation} or {\em absolute period foliation}) is the foliation locally defined by the fibers of $Per$. Namely, two translation surfaces are in the same leaf of $\mathcal F$ if one can be continuously deformed into the other without changing absolute periods. The isoperiodic foliation is globally defined in $\mathcal H=\cup_\kappa\mathcal H(\kappa)$, and its leaves have dimension $2g-3$. The isoperiodic foliation has been extensively studied, for instance in~\cite{Mcm07,Mcm14,CDF,ursula,bain,lef,ygouf}. One of the problems in studying the isoperiodic foliation is to determine the foliation induced by $\mathcal F$ on each stratum. For instance, in the minimal stratum $\mathcal H_{2g-2}$ there is no room for deformations: locally, any leaf of $\mathcal F$ intersects such a stratum transversely in a single point. Given $X\in\mathcal H_{2g-2}$, a ``{\bf fake} $X$'' is a translation surface, different from $X$, but with the same absolute periods as $X$ (as a polarized module) and only one singular point; that is to say, if $F_X$ is the leaf of $\mathcal F$ through $X$, then a ``fake $X$'' is a point in $F_X\cap \mathcal H_{2g-2}$. \begin{ex} The so-called {\em octagon} is the translation surface obtained by gluing parallel sides of a regular octagon sitting in $\mathbb C$ with an edge in the segment $[0,1]$. It is a genus two surface with a single singular point.
A {\em fake octagon} is an intersection point of the isoperiodic leaf of the octagon with the minimal stratum $\mathcal H_2$, i.e. any translation surface with the same (absolute) periods as the octagon (hence the same area) and only one singular point. \end{ex} \section{Traveling on isoperiodic leaves by moving singular points} If $X$ has $s$ singular points, then there are $s-1$ degrees of freedom for perturbing $X$ without changing its absolute periods (we can change the relative periods of $\eta_2,\dots,\eta_s$). It turns out that local parameters are exactly the positions of the singular points; more precisely, the relative positions of $x_2,\dots, x_s$ with respect to $x_1$. So we can travel the isoperiodic leaf through $X$ by ``moving'' singular points. From an analytic viewpoint such moves are known as Schiffer variations (see~\cite{schi,BPS}). We adopt here a more topological cut-and-paste viewpoint. We briefly recall the basic construction, referring to~\cite{CDF,BPS} for a more detailed discussion. Let $x$ be a singular point and let $\gamma$ be a segment, or more generally a path, starting at $x$. If $x$ has degree $d$, then $\gamma$ has $d$ {\bf twins}, that is to say, paths starting at $x$ with the same developed image as $\gamma$ (for simplicity we assume here that none of these twins contains a saddle in its interior). Explicitly, if $\gamma$ is a segment, its twins are segments forming angles $2\pi, 4\pi,\dots,2\pi d$ with $\gamma$. For any twin of $\gamma$ we can perform a cut-and-paste surgery as follows: we cut along $\gamma$ and the chosen twin, and then we glue in the unique other way coherent with orientations. This is best described in Figure~\ref{fig:twins}.
\begin{figure}[h] \footnotesize \centering \begin{tikzpicture}[x=1ex,y=1ex] \draw[->] (0,0)--(-7,0);\draw(-7,0)--(-10,0); \foreach \a in{45,90,-45,-90,0,135,-135}{ \begin{scope}[rotate={\a}] \draw[dotted, ->] (0,0)--(7,0);\draw[dotted](7,0)--(10,0); \end{scope}} \draw (-3,0) arc (180:135:3); \draw[fill] (0,0) circle[radius=.3]; \node at (-5,2) {$2\pi$}; \node[below] at (-10,0) {$\gamma$}; \node[below] at (-1,-10) {\parbox{15ex}{$\gamma$ and its twins}}; \begin{scope}[shift={(27,0)}] \draw[->] (0,0)--(-7,0);\draw (-7,0)--(-10,0); \foreach \a in{90,-45,-90,0,135,-135}{ \begin{scope}[rotate={\a}] \draw[dotted, ->] (0,0)--(7,0);\draw[dotted](7,0)--(10,0); \end{scope}} \begin{scope}[rotate={45}] \draw[->] (0,0)--(7,0);\draw(7,0)--(10,0); \end{scope} \draw (-3,0) arc (180:45:3); \draw[fill] (0,0) circle[radius=.3]; \node at (-1.5,4) {$6\pi$}; \node[below] at (-10,0) {$\gamma$}; \node[below] at (-1,-10) {\parbox{22ex}{$\gamma$ and one chosen twin}}; \end{scope} \begin{scope}[shift={(54,0)}] \draw[fill, gray!7] (0,0)--(-10,0)--(-2,5)--(7,7)--(0,0); \draw(0,0)--(-10,0)--(-2,5)--(7,7)--(0,0); \draw[->] (0,0)--(-7.5,0) \draw[->] (-2,5)--(-8,1.25) \draw[->] (-2,5)-- (4.75,6.5) \foreach \a in{-45,-90,0,-135}{ \begin{scope}[rotate={\a}] \draw[dotted, ->] (0,0)--(7,0);\draw[dotted](7,0)--(10,0); \end{scope}} \foreach \a in{90,135}{ \begin{scope}[shift={(-2,5)},rotate={\a}] \draw[dotted, ->] (0,0)--(7,0);\draw[dotted](7,0)--(10,0); \end{scope}} \begin{scope}[rotate={45}] \draw[->] (0,0)--(7.5,0);\draw (7.5,0)--(10,0); \end{scope} \draw[fill] (0,0) circle[radius=.3]; \draw[fill] (-2,5) circle[radius=.3]; \draw (-10,0) circle[radius=.3]; \draw (7,7) circle[radius=.3]; \node[below] at (0,-10) {\parbox{7ex}{cutting\dots}}; \end{scope} \begin{scope}[shift={(81,0)}] \draw[->] (0,0) -- (-3,4.5);\draw(-3,4.5)--(-4,6); \draw[->] (-8,12) -- (-5,7.5);\draw(-5,7.5)--(-4,6); \foreach \a in{-45,-90,0,-135}{ \begin{scope}[rotate={\a}] \draw[dotted, ->] (0,0)--(3,0);\draw[dotted](3,0)--(4.5,0); 
\end{scope}} \foreach \a in{90,135}{ \begin{scope}[shift={(-8,12)},rotate={\a}] \draw[dotted, ->] (0,0)--(3,0);\draw[dotted](3,0)--(4,0); \end{scope}} \draw[fill] (0,0) circle[radius=.3]; \draw[fill] (-8,12) circle[radius=.3]; \draw (-4,6) circle[radius=.3]; \node[below] at (0,-10) {\parbox{15ex}{\dots and pasting}}; \end{tikzpicture} \caption{Moving singular points via cut-and-paste surgeries} \label{fig:twins} \end{figure} A first remark on this surgery is that the endpoints of $\gamma$ and the twin can be both regular, both singular, or one regular and the other singular. Given the angles at the endpoints, and the angle between $\gamma$ and its twin, we can easily recover the angles after the surgery (see Figure~\ref{fig:ang}): \begin{figure}[h] \footnotesize \centering \begin{tikzpicture}[x=1ex,y=1ex] \draw(-6,-6) circle[radius=.3]; \draw (6,6) circle[radius=.3]; \draw[fill] (0,0) circle[radius=.3]; \draw (-6,-6)--(6,6); \draw (6,6) circle [radius=1]; \draw (-6,-6) circle [radius=1]; \draw (.72,.72) arc (45:225:1); \draw (-.82,-.82) arc (-135:45:1.2); \node at (-8,-5) {$\alpha$}; \node at (8,5) {$\beta$}; \node at (-2,1) {$\theta$}; \node at (2,-1) {$\delta$}; \begin{scope}[shift={(35,0)}] \draw(-6,-6) circle[radius=.3]; \draw (6,6) circle[radius=.3]; \draw[fill] (3,-3) circle[radius=.3];\draw[fill] (-3,3) circle[radius=.3]; \draw[fill, gray!7] (-6,-6)--(-3,3)--(6,6)--(3,-3)--(-6,-6); \draw (-6,-6)--(-3,3)--(6,6)--(3,-3)--(-6,-6); \draw (5.66,5) arc (-108:198:1.05); \draw (-5.66,-5) arc (-288:18:1.05); \draw (2,-3.33) arc (-170:74:1.05); \draw (-2,3.33) arc (10:254:1.05); \node at (-8,-5) {$\alpha$}; \node at (8,5) {$\beta$}; \node at (-5,2) {$\theta$}; \node at (5,-2) {$\delta$}; \end{scope} \begin{scope}[shift={(70,0)}] \draw[fill](-6,6) circle[radius=.3]; \draw[fill] (6,-6) circle[radius=.3]; \draw (0,0) circle[radius=.3]; \draw (-6,6)--(6,-6); \draw (-6,6) circle [radius=1]; \draw (6,-6) circle [radius=1]; \draw (-.72,.72) arc (135:315:1); \draw
(.82,-.82) arc (-45:135:1.2); \node at (8,-5) {$\delta$}; \node at (-8,5) {$\theta$}; \node at (-2,-1) {$\alpha$}; \node at (2.5,1) {$\beta$}; \end{scope} \end{tikzpicture} \caption{Angles before and after surgery} \label{fig:ang} \end{figure} In Figure~\ref{fig:ang}, before the surgery the full-dotted singular point has total angle $\theta+\delta$, and after the surgery it splits into two points. The two empty-dotted points paste together to form a point of total angle $\alpha+\beta$. All of $\alpha,\beta,\theta,\delta$ are multiples of $2\pi$ (they equal $2\pi$ precisely when the corresponding point is regular). Note that our surgeries take place locally, near a singular point. It follows that they do not affect absolute periods (while clearly they affect relative periods). It turns out that these moves are the only way to isoperiodically deform a translation surface (see~\cite{CDF,BPS}). It may be useful to remark at this point that such surgeries may or may not preserve strata. With notation as in Figure~\ref{fig:ang}, if $\alpha,\beta,\theta,\delta$ are all $2\pi$, then what we are doing is to move a singular point from the starting point of $\gamma$ to its endpoint (in this case the stratum does not change). If $\delta,\theta >2\pi$ and $\alpha,\beta=2\pi$, then we are splitting a singular point into two separate singular points and creating a singular point of angle $4\pi$ (the sum of the resulting degrees equals that of the initial ones). So in this case we are changing stratum. Similarly, if for instance $\alpha=4\pi$ and $\theta,\delta,\beta=2\pi$, the surgery collapses two singular points together, hence again changing stratum. There are more possibilities, and other kinds of surgeries are possible (for instance, cutting and pasting along many twins simultaneously). We refer the interested reader to~\cite{BPS,CDF} for further details. The last needed remark is that it may happen that $\gamma$ is a loop, starting and ending at the same point.
In this case the twins of $\gamma$ may or may not be loops, and conversely. Also, it can even happen that $\gamma$ is embedded, but the twin is not. In such cases some topological disasters may happen (the surgery could for instance disconnect the surface) and one has to check what happens carefully. We will use surgeries where $\gamma$ is a closed {\bf saddle connection}, that is to say a straight segment starting and ending at the same singular point, but we will always require that twins of $\gamma$ are embedded segments. It is readily checked that in this case no disasters occur. We refer to such a cut-and-paste as {\bf saddle connection surgery}. See Figure~\ref{fig:s}. \begin{figure}[h] \footnotesize\centering \begin{tikzpicture}[x=1ex,y=1ex] \draw[->] (-10,0)--(0,0) to[in=90,out=60] (10,0); \draw (10,0)to[out=-90,in=-20] (0,0); \draw[->] (0,0)--(-7,0); \draw[fill] (0,0) circle [radius=.3]; \draw[red] (-10,0) circle [radius=.3]; \node[below] at (-5,0){$\eta$}; \node[below] at (6,3){$\gamma$}; \draw (-1.3,0) arc (180:50:1.3); \node[above left] at (0,0){$\theta$}; \begin{scope}[shift={(29,0)}] \draw[fill, gray!7] (-10,0)--(0,0) to[in=90,out=60] (10,0) to[out=-90,in=-20] (0,0) to[in=-90,out=-40] (13,0) to [out=90,in=30] (-1,3)--(-10,0); \draw[->] (-10,0)--(0,0) to[in=90,out=60] (10,0); \draw (10,0)to[out=-90,in=-20] (0,0); \draw[->] (0,0)--(-7,0); \draw (0,0)to[out=-40,in=-90] (13,0) to[out=90,in=30] (-1,3)--(-10,0)--(0,0); \draw[-<] (0,0)to[out=-40,in=-90] (13,0); \draw[->] (-1,3)--(-7,1); \draw[fill] (0,0) circle [radius=.3]; \draw[red] (-10,0) circle [radius=.3]; \draw[fill, blue] (-1,3) circle [radius=.3]; \node[below] at (-5,0){$\eta$}; \node[below] at (6,3){$\gamma$}; \node[above] at (-5.5,1.5){$\eta'$}; \node[above] at (8,4.5){$\gamma'$}; \end{scope} \begin{scope}[shift={(53,3)}] \draw[fill, gray!7](0,0) to[out=65,in=90] (13,0) to[out=-90,in=-30] (2,-2) to[out=-80,in=120] (3.5,-6) to[out=-60,in=160] (7,-8) to[out=160,in=-60] (2,-6)[in=-90,out=120] to (0,0)
to[in=-90,out=-20] (10,0) to[out=90,in=60] (0,0); \draw[->] (0,0) to[in=90,out=60] (10,0); \draw (10,0)to[out=-90,in=-20] (0,0); \draw[->] (0,0) to[out=65,in=90] (13,0); \draw (13,0) to[out=-90,in=-30] (2,-2); \draw[-<] (2,-2) to[out=-80,in=120] (3.5,-6); \draw (3.5,-6) to[out=-60,in=160] (7,-8); \draw[-<] (0,0) to[out=-90,in=120] (2,-6); \draw (2,-6) to[out=-60,in=160] (7,-8); \draw[fill] (0,0) circle [radius=.3]; \draw[fill, blue] (7,-8) circle [radius=.3]; \draw[red] (2,-2) circle [radius=.3]; \node[left] at (1.3,-5){$\gamma'$}; \node[right] at (3.5,-5.3){$\eta'$}; \node[below] at (6,3){$\gamma$}; \node[above] at (10,3.3){$\eta$}; \end{scope} \begin{scope}[shift={(77,5)}] \draw[->] (0,0) to[in=90,out=60] (10,0); \draw (10,0)to[out=-90,in=-20] (0,0); \draw[-<] (0,0) -- (2,-3); \draw (2,-3) --(6,-9); \draw[fill] (0,0) circle [radius=.3]; \draw[fill,blue] (6,-9) circle [radius=.3]; \end{scope} \end{tikzpicture} \caption{A saddle connection surgery along a closed saddle connection $\gamma$ and a twin $\eta$. The angle $\theta$ is responsible for the degree of the new full-dotted (blue) point.} \label{fig:s} \end{figure} \begin{rem}\label{remfa} If $X$ is in $\mathcal H_{2g-2}$, then a saddle connection surgery produces a translation surface with the same absolute periods as $X$. If in addition the angle between the closed saddle connection and the chosen twin is exactly $2\pi$, then the resulting surface is in $\mathcal H_{2g-2}$ (the full-dotted blue point in Figure~\ref{fig:s} is a regular point). So, if different from $X$, it is a fake $X$. Moreover, the closed saddle connection used by the surgery remains a closed saddle connection of the same length and direction after the surgery. \end{rem} \section{Iterated surgeries on the octagon} In this section we describe a sequence of fake octagons $\operatorname{Oct}_n$ obtained from the octagon $\operatorname{Oct}=\operatorname{Oct}_0$ via a sequence of saddle connection surgeries.
In particular, each surgery will be a saddle connection surgery along a fixed closed saddle connection. We will then prove that all fakes $\operatorname{Oct}_n$ are in fact different from each other. We parameterise our octagon by gluing parallel sides of two polygons as in Figure~\ref{figB1}. Edges have length one, all vertices are identified to each other and form the unique singular point. \begin{figure}[h] \footnotesize \centering \begin{tikzpicture}[x=.8ex,y=.8ex] \draw (17,17) -- (-7,17) -- (-7,7)--(0,0) -- (10,0) -- (17,7) -- (17,17); \draw (-7,23) -- (0,30)--(10,30)--(17,23)-- (-7,23); \foreach \a in{(0,0),(10,0),(17,7),(17,17), (-7,17), (-7,7),(-7,23), (0,30), (10,30), (17,23)} \draw[fill] \a circle [radius =.3]; \draw[dotted] (-7,7)-- (3,7);\draw (3.5,6.5)--(2.5,7.5);\draw(3.5,7.5)--(2.5,6.5); \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \node[right] at (30,15){\parbox{30ex}{ \underline{Initial identifications:}\\ \ \\ $\left.\begin{array}{l} AB\sim C'F\\ CD\sim EB'\\ A'E\sim D'F\end{array}\right\}\text{never touched}$\\ \ \\ \ \\ $\left.\begin{array}{l} BC\sim B'C'\\ AD\sim A'D' \end{array}\right\}\text{to be changed}$\\ \ \\ \ \\ The dotted line is the twin of $BC$ that will never be used }}; \end{tikzpicture} \caption{The octagon} \label{figB1} \end{figure} The octagon has three horizontal (closed) saddle connections. Only one, which in the picture is $BC$, has length $1$, and the other two, $AD$ and $EF$, have length $1+\sqrt2$. This property will be preserved by all saddle connection surgeries. We therefore describe our surgeries from an intrinsic viewpoint, exploiting this property. Let $\gamma$ be the unique unitary horizontal closed saddle connection, the other two having length $1+\sqrt2$.
By definition of twin, the two twins of $\gamma$ are sub-segments of those longer saddle connections. Since $\gamma$ is horizontal, the end of $\gamma$ forms with the start of $\gamma$ an angle which is an odd multiple of $\pi$. In fact, for the octagon that angle is $3\pi$. Since the total angle around the singular point is $6\pi$, the twins of $\gamma$ form angles $\pm \pi$ with respect to the end of $\gamma$. We orient $\gamma$ from left to right, and call $\gamma_L$ the twin on the ``left side'', that is to say, the angle measured clockwise from the end of $\gamma$ to $\gamma_L$ is $\pi$. Let $\gamma_R$ be the other twin. We call {\bf left surgery} the saddle connection surgery along $\gamma$ and $\gamma_L$, and {\bf right surgery} that along $\gamma$ and $\gamma_R$. (See also Figure~\ref{figBI}). The angle between $\gamma_L$ (or $\gamma_R)$ and $\gamma$ is exactly $2\pi$, so left and right surgeries produce elements of $\mathcal H_2$ (see Remark~\ref{remfa}). It is immediate to check that the inverse of a left surgery is a right surgery along $\gamma^{-1}$. It will be clear from what follows that left and right surgeries preserve the two properties of having one unitary horizontal saddle connection (and two of length $1+\sqrt2$), and of having angle $3\pi$ between the start and the end of $\gamma$. Therefore, we can iterate left and right surgeries. \begin{defn} For $n\in\mathbb Z$ we define $\operatorname{Oct}_n$ as the translation surface obtained from the octagon $\operatorname{Oct}_0$ by $n$ left surgeries (for negative $n$ we apply $|n|$ right surgeries). \end{defn} Before giving a global description of $\operatorname{Oct}_n$, we start by looking in detail at the first steps. Coming back to pictures, left surgeries will always affect the horizontal saddle connection $\gamma=BC$ and its twin on the line $AD$. In particular, the twin of $BC$ along $EF$ will never come into play.
Also, we never change diagonal identifications $AB\sim C'F, CD\sim EB'$, nor the vertical one $A'E\sim D'F$. Let's start. We cut and paste along $BC$ and its twin on the line $AD$. See Figure~\ref{figBI}. \begin{figure}[h] \footnotesize \centering \begin{tikzpicture}[x=.8ex,y=.8ex] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[dashed, very thick, green] (0,0)--(10,0); \draw[dashed, very thick, green] (0,30)--(10,30); \draw[dashed, very thick, red] (-7,17)--(3,17); \draw[dashed, very thick, red] (-7,23)--(3,23); \draw (3,17)--(17,17);\draw (3,23)--(17,23); \foreach \a in{(3,17),(3,23)} \draw[fill] \a circle [radius =.3]; \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \foreach \a in{(0,0),(10,0),(17,7),(17,17), (-7,17), (-7,7),(-7,23), (0,30), (10,30), (17,23)} \draw[fill] \a circle [radius =.3]; \node[below right] at (-10,-4){\parbox{30ex}{\footnotesize First cut along $BC$ and its twin $AP_1$}}; \begin{scope}[shift={(55,0)}] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[very thick, red] (0,0)--(10,0); \draw[very thick, green] (0,30)--(10,30); \draw[very thick, red] (-7,17)--(3,17); \draw[very thick, green] (-7,23)--(3,23); \draw (3,17)--(17,17);\draw (3,23)--(17,23); \foreach \a in{(3,17),(3,23)} \draw[fill] \a circle [radius =.3]; \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; 
\node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \foreach \a in{(10,0), (17,7), (-7,23), (-7,7), (0,30), (10,30)} \draw[fill] \a circle [radius =.3]; \foreach \a in{(0,0),(-7,17), (17,23), (17,17)} \draw \a circle [radius =.3]; \node[below right] at (-11,-4){\parbox{40ex}{\footnotesize New identifications: \\ $BC\sim AP_1, B'C'\sim A'P_1', P_1D\sim P_1'D'$}}; \end{scope} \end{tikzpicture} \caption{First left surgery: first fake $\operatorname{Oct}_1$.} \label{figBI} \end{figure} In that picture, dashed lines mean cuts, i.e. segments that were previously identified and are no longer identified. Colours visualise new identifications. Note that after the surgery, not all vertices are identified to each other. In particular, $A'\sim B'\sim D\sim D'$ is a regular point. All other vertices are identified, giving rise to the unique singular point, and the result is indeed a fake octagon: it is our $\operatorname{Oct}_1$. We will label with a full dot the singular point, and with other symbols those other vertices that are regular points (we use the same label for vertices that are identified). Also, we will use the ``dot'' notation for concatenation of segments, e.g. ``$XY\cdot ZT$'' denotes the concatenation of the segment $ZT$ after $XY$; clearly this makes sense only if $Y$ is identified with $Z$. When we cut the twin of $BC$ (oriented as $BC$) we see two avatars of it in the picture: one with the surface on its left, and one on its right. We denote by $P_1$ the endpoint of the cut having the surface on its left side, and by $P_1'$ the other. After the surgery, the saddle connection $BC$ has again two twins, one emanating from $P_1$ along the line $P_1D$ and another emanating from $E$. We then obtain $\operatorname{Oct}_2$ via a second left surgery, cutting and pasting along $BC$ and its twin on the line $P_1D$. See Figure~\ref{figBII} (left side).
As above, when cutting along that twin, we denote by $P_2$ the endpoint of the cut having the surface in its left side, and $P_2'$ the other. \begin{figure}[h] \tiny \centering \begin{tikzpicture}[x=.86ex,y=.86ex] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[very thick, red] (0,0)--(10,0); \draw[dashed, very thick, green] (0,30)--(10,30); \draw[very thick, red] (-7,17)--(3,17); \draw[dashed, very thick, green] (-7,23)--(3,23); \draw[dashed, very thick, blue] (3,23)--(13,23); \draw[dashed, very thick, blue] (3,17)--(13,17); \draw (13,23)--(17,23); \draw (13,17)--(17,17); \foreach \a in{(3,17),(3,23),(13,17),(13,23)} \draw[fill] \a circle [radius =.3]; \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[above] at (13,23) {$P_2$}; \node[below] at (13,17) {$P_2'$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \foreach \a in{(10,0),(17,7), (-7,23), (-7,7), (0,30), (10,30)} \draw[fill] \a circle [radius =.3]; \foreach \a in{(0,0),(-7,17), (17,23), (17,17)} \draw \a circle [radius =.3]; \begin{scope}[shift={(40,0)}] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[very thick, red] (0,0)--(10,0); \draw[very thick, green] (0,30)--(10,30); \draw[very thick, red] (-7,17)--(3,17); \draw[very thick, blue] (-7,23)--(3,23); \draw[very thick, green] (3,23)--(13,23); \draw[very thick, blue] (3,17)--(13,17); \draw (13,23)--(17,23); \draw (13,17)--(17,17); \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[above] at (13,23) {$P_2$}; \node[below] at (13,17) {$P_2'$}; \node[left] at (-7,7) 
{$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \foreach \a in{(17,7), (-7,7),(13,17), (0,30),(10,30),(3,23),(13,23)} \draw[fill] \a circle [radius =.3]; \foreach \a in{(10,0),(3,17),(-7,23)} {\draw \a circle [radius =.3];\draw\a circle[radius=.5];}; \foreach \a in{(-7,17), (17,23), (17,17),(0,0)} \draw \a circle [radius =.3]; \end{scope} \node[below] at (26,-5){\parbox{30ex}{\footnotesize Second surgery:}}; \node[below right] at (-13,-12){\parbox{60ex}{\footnotesize On the left the cut along $BC$ and its twin $P_1P_2$.}}; \node[below right] at (-13,-18){\parbox{60ex}{\footnotesize On the right the new identifications:\\ ($B'C'\sim A'P_1', P_2D\sim P_2'D'$, and)\\ $BC\sim P_1P_2, AP_1\sim P_1'P_2'$.}}; \begin{scope}[shift={(80,0)}] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[very thick, red] (6,0)--(10,0); \draw[dashed, very thick, green] (0,30)--(10,30); \draw[very thick, red] (-1,17)--(3,17); \draw[very thick, blue] (-7,23)--(3,23); \draw[dashed, very thick, green] (3,23)--(13,23); \draw[very thick, blue] (3,17)--(13,17); \draw[dashed, very thick, magenta] (13,23)--(17,23); \draw[dashed, very thick, magenta] (13,17)--(17,17); \draw[dashed, very thick, magenta] (-7,17)--(-1,17); \draw[dashed, very thick, magenta] (0,0)--(6,0); \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[above] at (13,23) {$P_2$}; \node[below] at (13,17) {$P_2'$}; \node[below] at (-1,17) {$P_3'$}; \node[below] at (6,0) {$P_3$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; 
\foreach \a in{(17,7), (-7,7),(13,17), (0,30),(10,30),(3,23),(13,23),(-1,17),(6,0)} \draw[fill] \a circle [radius =.3]; \foreach \a in{(10,0),(3,17),(-7,23)} {\draw \a circle [radius =.3]; \draw\a circle[radius=.5];}; \foreach \a in{(-7,17), (17,23), (17,17),(0,0)} \draw \a circle [radius =.3]; \begin{scope}[shift={(40,0)}] \draw (-7,17)--(-7,7)--(0,0); \draw (10,0)--(17,7)-- (17,17); \draw (-7,23) -- (0,30); \draw (10,30)--(17,23); \draw[very thick, red] (6,0)--(10,0); \draw[very thick, green] (0,30)--(10,30); \draw[very thick, red] (-1,17)--(3,17); \draw[very thick, blue] (-7,23)--(3,23); \draw[very thick, magenta] (3,23)--(13,23); \draw[very thick, blue] (3,17)--(13,17); \draw[very thick, green] (13,23)--(17,23); \draw[very thick, magenta] (13,17)--(17,17); \draw[very thick, magenta] (-7,17)--(-1,17); \draw[very thick, green] (0,0)--(6,0); \foreach \a in {(0,30),(4.5,30),(5.5,30),(13,23),(0.5,0),(1.5,0)} {\begin{scope}[shift={\a}] \draw (1.5,0.5)--(2.5,0)--(1.5,-0.5); \end{scope}}; \foreach \a in {(13,17),(-5,17),(-6,17),(3,23),(8,23),(9,23)} {\begin{scope}[shift={\a}] \draw (1.5,0.5)--(2.5,0)--(1.5,-0.5)--(1.5,0.5); \end{scope}}; \node[left] at (-7,17) {$A'$}; \node[left] at (-7,23) {$A$}; \node[above] at (3,23) {$P_1$}; \node[below] at (3,17) {$P_1'$}; \node[above] at (13,23) {$P_2$}; \node[below] at (13,17) {$P_2'$}; \node[below] at (-1,17) {$P_3'$}; \node[below] at (6,0) {$P_3$}; \node[left] at (-7,7) {$E$}; \node[right] at (17,7) {$F$}; \node[right] at (17,17) {$D'$}; \node[right] at (17,23) {$D$}; \node[below] at (0,0) {$B'$}; \node[below] at (10,0) {$C'$}; \node[above] at (0,30) {$B$}; \node[above] at (10,30) {$C$}; \foreach \a in{(17,7), (-7,7), (0,30),(10,30),(13,23),(-1,17),(6,0)} \draw[fill] \a circle [radius =.3]; \foreach \a in{(10,0),(3,17),(-7,23)} {\draw \a circle [radius =.3]; \draw\a circle[radius=.5];}; \foreach \a in{(-7,17), (17,17)} \draw \a circle [radius =.3]; \draw (12.5,17.5)--(13.5,16.5);\draw(12.5,16.5)--(13.5,17.5); \draw 
(2.5,23.5)--(3.5,22.5);\draw(2.5,22.5)--(3.5,23.5); \foreach \a in{(0,0),(17,23)} \draw \a circle [radius =.5]; \end{scope} \node[below] at (26,-5){\parbox{30ex}{\footnotesize Third surgery:}}; \node[below right] at (-17,-12){\parbox{65ex}{\footnotesize On the left the cut along $BC$ and its twin $P_2D\cdot B'P_3$.}}; \node[below right] at (-17,-18){\parbox{65ex}{\footnotesize On the right the new identifications:\\ ($AP_1\sim P_1'P_2', P_3'P_1'\sim P_3C'$, and)\\ $BC\sim P_2D\cdot B'P_3, P_1P_2\sim P_2'D'\cdot A'P_3'$.}}; \end{scope} \end{tikzpicture} \caption{Second and third fakes $\operatorname{Oct}_2$ and $\operatorname{Oct}_3$.} \label{figBII} \end{figure} One more left surgery, along $BC$ and its twin emanating from $P_2$, will produce $\operatorname{Oct}_3$. See Figure~\ref{figBII} (right side). Again, $P_3$ and $P_3'$ are the endpoints of the cut of the twin having the surface on the left and right side respectively. We are now ready to describe the gluing pattern of $\operatorname{Oct}_n$. For this purpose it is more convenient to pass to a simpler --- even if less ``octagonal'' --- viewpoint. Namely, we glue the upper quadrilateral to the bottom one, by identifying sides $AB$ and $C'F$. See Figure~\ref{figBconfnonoct}. (Compare also with~\cite[Figure 8]{Mcm14}). 
\begin{figure}[h] \footnotesize \centering \begin{tikzpicture}[x=.96ex,y=.96ex] \draw (-7,17)--(-7,7)--(0,0); \draw (17,7)-- (17,17);\draw(27,7)--(34,0); \draw[very thick, blue] (0,0)--(34,0); \draw[dashed, very thick, green] (17,7)--(27,7); \draw[dashed, very thick, green] (6,0.2)--(16,0.2); \draw[very thick, blue] (-7,17)--(13,17); \draw[very thick, blue] (13,17)--(17,17); \draw[dotted] (10,0)--(17,7); \foreach\a in{(26,0)} {\begin{scope}[shift={\a}] \draw (-.5,-.5)--(.5,.5);\draw(-.5,.5)--(.5,-.5); \end{scope}}; \node[below] at (6,0) {$P_{n-1}$}; \node[below] at (26,0) {$P_{n+1}$}; \node[below] at (16,0) {$P_n$}; \node[below] at (13,17) {$P_n'$}; \node[left] at (-7,7) {$E$}; \node[left] at (17,7) {$F\sim B$}; \node[right] at (17,17) {$D'$}; \node[right] at (34,0) {$D$}; \node[below] at (0,0) {$B'$}; \node[left] at (-7,17) {$A'$}; \node[above] at (27,7) {$C$}; \foreach \a in{(17,7), (-7,7),(13,17),(27,7),(6,0),(16,0)} \draw[fill] \a circle [radius =.5]; \foreach \a in{(-7,17), (17,17)} \draw \a circle [radius =.3]; \foreach \a in{(34,0),(0,0)} \draw \a circle [radius =.5]; \end{tikzpicture} \caption{A less ``octagonal'' viewpoint. $P_n$ is identified with $P_n'$. $P_{n-1}P_n$ is where $BC$ is glued at step $n$, while $P_nP_{n+1}$ is the next twin we cut at step $n+1$. Segment $P_nD\cdot B'P_{n-1}$ is identified with $P_n'D'\cdot A'P_n'$. This is the $n^{th}$ fake $\operatorname{Oct}_n$.} \label{figBconfnonoct} \end{figure} Horizontal gluings are determined, once we know positions of points $P_n$ and $P_n'$, as follows. Since $B'$ is identified with $D$, segment $B'D$ can be parameterised by a circle of length $2+\sqrt2$. Points $P_{n-1}$ and $P_{n+1}$ are the points of the circle $B'D$ at distance $1$ from $P_n$, respectively on the left and right side of $P_n$. 
At step $n$, segment $BC$ is identified with $P_{n-1}P_n$ --- this is the unique unitary horizontal saddle connection --- and segment $P_n'D'\cdot A'P_n'$ is identified with $P_nP_{n-1}$ (which, in Figure~\ref{figBconfnonoct}, is the concatenation of segments $P_nD\cdot B'P_{n-1}$), the latter being a horizontal saddle connection of length $1+\sqrt2$. The third horizontal saddle connection, namely $EF$, is never involved and always has length $1+\sqrt2$. The unique singular point is $P_{n-1}\sim P_n\sim P_n'\sim B\sim C\sim E$, and a quick check shows that the angle between the start and the end of the unitary closed horizontal saddle connection is $3\pi$. The twin of $BC$ that will be used in the next surgery is $P_nP_{n+1}$ (which is identified with the corresponding segment starting from $P_n'$), and it is readily checked that a left surgery along $BC$ and its twin $P_nP_{n+1}$ produces again a configuration of the same type, with different positions of $P_n$ and $P_n'$: if we parameterise $B'D$ with $[0,2+\sqrt2]$ and $A'D'$ with $[0,1+\sqrt2]$, we see that $P_1=2$ and $P_1'=1$, and in general we have $$P_n\equiv n+1 \mod (2+\sqrt2)\qquad\qquad P_n'\equiv n \mod (1+\sqrt2).$$ \begin{rem}\label{r1} Pictures only help in calculations, but left surgeries are intrinsically defined: any of our fakes has three horizontal saddle connections, and only one of them has length one. At any step we cut and paste along that saddle connection and its left twin. This recipe is ``picture free''. \end{rem} \begin{thm}\label{thm1} If $n\neq m$, then $\operatorname{Oct}_n\neq \operatorname{Oct}_m$. \end{thm} \begin{proof} The invariant that distinguishes fake octagons from each other is the systole, namely the (family of) shortest saddle connection(s). As the octagon has edges of length one, the systole is never longer than one. In fact, the shortest saddle connections for the true octagon all have length one, and because of the irrationality of $\sqrt2$ this never happens again.
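The role played here by the irrationality of $\sqrt 2$ is the standard fact that the orbit of an irrational rotation is injective and dense: the positions $P_n\equiv n+1 \bmod (2+\sqrt2)$ never repeat and fill the circle $B'D$. The following short Python sketch (our numerical illustration only, not part of the argument) checks this for the first $200$ positions:

```python
import math

# Length of the circle B'D on which the point P_n lives.
L = 2 + math.sqrt(2)

def P(n: int) -> float:
    """Position P_n = (n+1) mod (2+sqrt(2)) on the circle B'D."""
    return (n + 1) % L

# Since 1/L is irrational, n -> P(n) is an orbit of an irrational rotation:
# it never returns to a previous position, and its closure is the whole circle.
pts = sorted(P(n) for n in range(200))
min_gap = min(b - a for a, b in zip(pts, pts[1:]))
print(min_gap > 1e-9)   # positions are pairwise distinct
print(min_gap < 0.05)   # and already rather evenly spread along B'D
```

Of course this is only a sanity check; the proof below uses the exact statement.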
Looking at Figure~\ref{figBconfnonoct} we see that systoles are necessarily segments connecting some avatar of the singular point (i.e. $P_{n-1},P_n,P_n',E,B,C$). Point $P_n'$ always has distance at least one from the other singular points, so no systole starts from $P_n'$ in Figure~\ref{figBconfnonoct}. Moreover, since the quadrilateral $P_{n-1}P_nCB$ is a parallelogram, for $n\neq 0$, we have three possible families of fake octagons, determined by the position of $P_n$ in $B'D=[0,2+\sqrt2]$ (see Figure~\ref{fig:sis}): \begin{enumerate} \item $P_n\in(1,1+\frac{1+\sqrt2}{2})$. The unique systole is the segment $P_nB$. \item $P_n\in(1+\frac{1+\sqrt2}{2},2+\frac{1+\sqrt2}{2})$. There are two systoles: $P_{n-1}B$ and $P_nC$. \item $P_n\in(0,1)\cup(2+\frac{1+\sqrt2}{2},2+\sqrt2)$. In this case the unique systole is $P_{n-1}C$. \end{enumerate} \begin{figure}[h] \centering \begin{tikzpicture}[x=1ex,y=1ex] \footnotesize \node at (-20,5) {$(1)$}; \draw (-7,7)--(0,0); \draw(27,7)--(34,0); \draw(0,0)--(34,0); \draw (17,7)--(27,7); \draw[dotted] (10,0)--(17,7); \draw[dotted] (32,0)--(27,7)--(22,0)--(17,7)--(12,0); \foreach \a in {32,22,10,0} \draw (\a,0)--(\a,-1); \node[below] at (0,-1.5) {$0$}; \node[below] at (10,-1.5) {$1$}; \node[below] at (22,-1.5) {$1+\frac{1+\sqrt 2}{2}$}; \node[below] at (32,-1.5) {$2+\frac{1+\sqrt 2}{2}$}; \node[below] at (6,0) {$P_{n-1}$}; \node[below] at (16,0) {$P_n$}; \draw[very thick, red, dashed] (16,0)--(17,7); \node[left] at (-7,7) {$E$}; \node[above] at (17,7) {$B$}; \node[right] at (34,0) {$D=2+\sqrt 2$}; \node[left] at (-.5,0) {$B'$}; \node[above] at (27,7) {$C$}; \foreach \a in{(17,7), (-7,7),(27,7),(6,0),(16,0)} \draw[fill] \a circle [radius =.3]; \begin{scope}[shift={(0,-15)}] \node at (-20,5) {$(2)$}; \draw (-7,7)--(0,0); \draw(27,7)--(34,0); \draw(0,0)--(34,0); \draw (17,7)--(27,7); \draw[dotted] (10,0)--(17,7); \draw[dotted] (32,0)--(27,7)--(22,0)--(17,7)--(12,0); \foreach \a in {32,22,10,0} \draw (\a,0)--(\a,-1); \node[below] at
(0,-1.5) {$0$}; \node[below] at (10,-1.5) {$1$}; \node[below] at (22,-2) {$1+\frac{1+\sqrt 2}{2}$}; \node[below] at (32,-1.5) {$2+\frac{1+\sqrt 2}{2}$}; \node[below] at (15,0) {$P_{n-1}$}; \node[below] at (25,0) {$P_n$}; \draw[very thick, red, dashed] (15,0)--(17,7); \draw[very thick, red, dashed] (25,0)--(27,7); \node[left] at (-7,7) {$E$}; \node[above] at (17,7) {$B$}; \node[right] at (34,0) {$D=2+\sqrt 2$}; \node[left] at (-.5,0) {$B'$}; \node[above] at (27,7) {$C$}; \foreach \a in{(17,7), (-7,7),(27,7),(15,0),(25,0)} \draw[fill] \a circle [radius =.3]; \end{scope} \begin{scope}[shift={(0,-32)}] \node at (-20,5) {$(3)$}; \draw (-7,7)--(0,0); \draw(27,7)--(34,0); \draw(0,0)--(34,0); \draw (17,7)--(27,7); \draw[dotted] (10,0)--(17,7); \draw[dotted] (32,0)--(27,7)--(22,0)--(17,7)--(12,0); \foreach \a in {32,22,10,0} \draw (\a,0)--(\a,-1); \node[below] at (0,-1.5) {$0$}; \node[below] at (10,-1.5) {$1$}; \node[below] at (22,-1.5) {$1+\frac{1+\sqrt 2}{2}$}; \node[below] at (32,-1.5) {$2+\frac{1+\sqrt 2}{2}$}; \node[below] at (28,0) {$P_{n-1}$}; \node[below] at (4,0) {$P_n$}; \draw[very thick, red, dashed] (28,0)--(27,7); \node[left] at (-7,7) {$E$}; \node[above] at (17,7) {$B$}; \node[right] at (34,0) {$D=2+\sqrt 2$}; \node[left] at (-.5,0) {$B'$}; \node[above] at (27,7) {$C$}; \foreach \a in{(17,7), (-7,7),(27,7),(4,0),(28,0)} \draw[fill] \a circle [radius =.3]; \end{scope} \end{tikzpicture} \caption{The three possible systole configurations.} \label{fig:sis} \end{figure} Since $2+\sqrt2$ is irrational and $P_n\equiv n+1 \mod(2+\sqrt2)$, the possible positions of $P_n$ on $B'D$ identified with $[0,2+\sqrt2]$ form an infinite dense set. It follows that the set of lengths of systoles of the family $\{\operatorname{Oct}_n;n\in\mathbb Z\}$ is an infinite set. Hence, the family of fakes $\{\operatorname{Oct}_n:n\in\mathbb Z\}$ contains infinitely many different fakes. Suppose now that there are $n\neq m$ such that $\operatorname{Oct}_n=\operatorname{Oct}_m$.
Then (by Remark~\ref{r1}) also $\operatorname{Oct}_{n+i}=\operatorname{Oct}_{m+i}$ for any $i$, and so we would observe an $(m-n)$-periodic behaviour. In particular we would have only finitely many fakes among our $\operatorname{Oct}_n$'s. But, since we already proved that we have infinitely many different fakes, this cannot happen. It follows that for any $n\neq m$ we have $\operatorname{Oct}_n\neq \operatorname{Oct}_m$. \end{proof} \begin{rem}\label{r44} The fact that the possible positions of $P_n$ in $[0,2+\sqrt2]$ form an infinite dense set implies in particular that all possibilities described in Theorem~\ref{thm1} actually arise. Another consequence is that we can find fakes $\operatorname{Oct}_n$ arbitrarily close to the octagon $\operatorname{Oct}_0$, and in general that for any $\operatorname{Oct}_m$ there is a fake $\operatorname{Oct}_n$ arbitrarily close to, but different from, $\operatorname{Oct}_m$. This is nothing but a manifestation of the general density phenomena described in~\cite{CDF} and anticipated in the Introduction. \end{rem} \begin{rem} Even though the $\operatorname{Oct}_n$ are pairwise different, their systoles may have the same length. For instance, if $1+\frac{\sqrt2 -1}{2}<P_n<1+\frac{\sqrt2 +1}{2} \mod (2+\sqrt2)$, then $\operatorname{Oct}_n,\operatorname{Oct}_{n+1},$ and $\operatorname{Oct}_{n+2}$ have systole(s) of the same length (the three being in families $(1),(2),(3)$ respectively). \end{rem} This is basically all that can happen. \begin{prop} For any $\operatorname{Oct}_m$ (with $m\neq 0$) there is $\operatorname{Oct}_n$ with the same systole length and in family $(1)$, more precisely with $P_n\equiv x\in(1,1+\frac{\sqrt2}{2})\mod(2+\sqrt2)$.
Moreover, \begin{itemize} \item if $P_n\in(\frac{1+\sqrt2}{2},1+\frac{\sqrt2}{2}) \mod(2+\sqrt2)$, then $\operatorname{Oct}_m$ has the same systole-length as $\operatorname{Oct}_n$ if and only if $m=\pm n, \pm n+1,\pm n+2$; \item if $P_n\in(1,\frac{1+\sqrt2}{2}) \mod(2+\sqrt2)$, then $\operatorname{Oct}_m$ has the same systole-length as $\operatorname{Oct}_n$ if and only if $m=n$ or $m=-n+2$. \end{itemize} \end{prop} \begin{proof} \begin{figure}[h] \centering \begin{tikzpicture}[x=1.5ex,y=1.5ex] \footnotesize \draw (-7,7)--(0,0); \draw(27,7)--(34,0); \draw(0,0)--(34,0); \draw (17,7)--(27,7); \draw[dotted] (10,0)--(17,7)--(17,0); \draw[dashed, very thick, red] (14,0)--(17,7)--(20,0); \draw[dashed, very thick, red] (24,0)--(27,7)--(30,0); \draw[dotted] (32,0)--(27,7)--(22,0)--(17,7)--(12,0); \draw[dotted] (27,7)--(27,0); \foreach \a in {34,32,22,10,0,12,17,27} \draw (\a,0)--(\a,-1); \node[below] at (0,-1.5) {\tiny $0$}; \node[below] at (10,-1.5) {\tiny $1$}; \node[below] at (22,-1.5) {\tiny $1+\frac{1+\sqrt 2}{2}$}; \node[below] at (32,-1.5) {\tiny $2+\frac{1+\sqrt 2}{2}$}; \node[below] at (12,-1.5) {\tiny $\frac{1+\sqrt 2}{2}$}; \node[below] at (17,-1.5) {\tiny $1+\frac{\sqrt 2}{2}$}; \node[below] at (27,-1.5) {\tiny $2+\frac{\sqrt 2}{2}$}; \node[below] at (14,0) {\tiny $x$}; \node[below] at (20,0) {\tiny $y$}; \node[below] at (24,0) {\tiny $z$}; \node[below] at (30,0) {\tiny $t$}; \node[left] at (-7,7) {$E$}; \node[above] at (17,7) {$B$}; \node[right] at (34,0) {\tiny $D=2+\sqrt 2$}; \node[left] at (-.5,0) {$B'$}; \node[above] at (27,7) {$C$}; \foreach \a in{(17,7), (-7,7),(27,7)} \draw[fill] \a circle [radius =.3]; \end{tikzpicture} \caption{Positions having the same distance from $B$ or $C$.} \label{fig:sl} \end{figure} For $x\in [0,2+\sqrt2]$ let $y=y(x)$ be its symmetric point with respect to $1+\frac{\sqrt2}{2}$. This is the unique other point so that $d(x,B)=d(y,B)$.
Explicitly, $y$ is determined by $$\frac{x+y}{2}=1+\frac{\sqrt2}{2}\qquad \text{whence} \qquad x+y=2+\sqrt2.$$ Let $z=z(x)=x+1$ and $t=t(x)=y(x)+1$. Those are the unique points so that $d(x,B)=d(z,C)=d(t,C)$. Note that \begin{equation} \label{eq:1} x\equiv -y\equiv z-1\equiv -t+1 \mod(2+\sqrt2). \end{equation} Such equations have integer coefficients and $2+\sqrt2$ is irrational. So, if we want to solve them in $\mathbb Z$, they reduce to genuine equalities. Namely, if $x\equiv P_n\equiv n+1\mod(2+\sqrt2)$ and $y\equiv P_m\equiv m+1\mod(2+\sqrt2)$, then $x\equiv -y\mod(2+\sqrt2)$ if and only if $m=-n$, and similarly for points $z,t$. The first consequence of this fact is that if $P_m$ is placed in $(1+\frac{\sqrt2}{2},2+\sqrt2)$, then there is $n$ such that $P_n$ is placed at $x\in(1,1+\frac{\sqrt2}{2})$ (hence $\operatorname{Oct}_n$ is in family $(1)$) and $P_m$ is either a $y$- or $z$- or $t$-point for $x$. In particular, this proves the first claim. We may therefore assume that we have $\operatorname{Oct}_n$ in family $(1)$ and search for all possible $\operatorname{Oct}_m$ with the same systole-length. From the fact that the congruences~\eqref{eq:1} reduce to genuine identities on $\mathbb Z$, we can now deduce the second claim. If $P_n\equiv x\in(\frac{1+\sqrt2}{2},1+\frac{\sqrt2}{2})\mod(2+\sqrt2)$, then there are two possibilities, in each family, for $\operatorname{Oct}_m$ to have the same systole-length as $\operatorname{Oct}_n$, precisely: \begin{itemize} \item $\operatorname{Oct}_m$ is in family $(1)$: \begin{itemize} \item $P_m$ coincides with $x$. This is possible only if $m=n$; \item $P_m\equiv y= -x \mod (2+\sqrt2)$, which happens if and only if $m=-n$; \end{itemize} \item $\operatorname{Oct}_m$ is in family $(2)$: \begin{itemize} \item $P_m\equiv z\equiv x+1 \mod (2+\sqrt2)$, which happens if and only if $m=n+1$. In this case $P_{m-1}\equiv x\mod (2+\sqrt2)$; \item $P_m\equiv t\equiv -x+1 \mod (2+\sqrt2)$, which happens if and only if $m=-n+1$.
In this case $P_{m-1}\equiv y\equiv -x\mod (2+\sqrt2)$; \end{itemize} \item $\operatorname{Oct}_m$ is in family $(3)$: \begin{itemize} \item $P_{m-1}\equiv z\equiv x+1\mod (2+\sqrt2)$, which happens if and only if $m=n+2$; \item $P_{m-1}\equiv t\equiv -x+1\mod (2+\sqrt2)$, which happens if and only if $m=-n+2$. \end{itemize} \end{itemize} If $P_n\equiv x\in(1,\frac{1+\sqrt2}{2})\mod(2+\sqrt2)$, some possibilities disappear because in this case $d(y,B)>d(y,C)$ and $d(z,C)>d(z,B)$. Apart from the case $\operatorname{Oct}_m=\operatorname{Oct}_n$ (if and only if $m=n$), the only possibility that remains is when $\operatorname{Oct}_m$ belongs to family $(3)$ and $P_m$ is the $t$-point of $x\equiv P_n\mod (2+\sqrt2)$, namely: \begin{itemize} \item $P_{m-1}\equiv t\equiv -x+1\mod (2+\sqrt2)$, and this happens if and only if $m=-n+2$. \end{itemize} \end{proof} \begin{rem}[Generalisations] The construction of the sequence $(\operatorname{Oct}_n)_{n\in \mathbb Z}$ used only the existence of a (horizontal) saddle connection $\gamma$ having an embedded twin such that: \begin{itemize} \item The angle from the start of the twin to the start of $\gamma$ is $2\pi$. (So that the saddle connection surgery produces a point in the minimal stratum, see Remark~\ref{remfa}.) \item The angle from the end of $\gamma$ to the start of the twin is $\pi$. \item If the (horizontal) continuation of the twin is a saddle connection (which is longer than $\gamma$ because the twin is embedded), then the angle from its start to its end is $\pi$ (hence it bounds a cylinder). \end{itemize} The second condition implies that the first one is preserved by the surgery; the third condition is preserved by the surgery and guarantees that the length of the twin saddle connection does not change under the surgery (to see this, just draw the twin and angles in Figure~\ref{fig:s}). Therefore the sequence of (putative) fakes can be constructed in any such situation via left surgeries. \end{rem}
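The arithmetic behind the construction is easy to check numerically. The following sketch (our own illustration, not part of the construction; function names are ours) verifies the position formula $P_n\equiv n+1 \bmod (2+\sqrt2)$, reads off the systole family of $\operatorname{Oct}_n$ from the case split in the proof of Theorem~\ref{thm1}, and confirms that distinct $n$ give distinct positions, as guaranteed by the irrationality of $2+\sqrt2$:

```python
import math

SQRT2 = math.sqrt(2.0)
L = 2 + SQRT2              # length of the circle B'D
HALF = (1 + SQRT2) / 2     # the half-length appearing in the case split

def pos(n):
    """Position of P_n on B'D = [0, 2 + sqrt 2): (n + 1) mod (2 + sqrt 2)."""
    return (n + 1) % L

def family(n):
    """Systole family (1), (2) or (3) of Oct_n, read off from pos(n)."""
    p = pos(n)
    if 1 < p < 1 + HALF:
        return 1
    if 1 + HALF < p < 2 + HALF:
        return 2
    return 3   # p in (0, 1) or (2 + HALF, 2 + sqrt 2)

# P_1 = 2, as computed after the first surgery.
assert abs(pos(1) - 2) < 1e-9

# Irrationality of 2 + sqrt 2: distinct n give distinct positions,
# so infinitely many distinct systole lengths occur.
positions = {round(pos(n), 6) for n in range(1, 1000)}
assert len(positions) == 999
```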
\section{Introduction} Benefiting from the great progress in visual-tactile fused sensing~\cite{liu2018robotic,8460494,DBLP:conf/icra/LeeBL19}, researchers~\cite{zhang2019visual} have begun to focus on visual-tactile fused clustering (VTFC), which aims to group similar objects together in an unsupervised manner. \begin{figure}[htbp] \centering \centerline{\includegraphics[width =1.0\columnwidth]{png//FirstPage.png}} \caption{Diagram of our proposed method, which first encodes the original partial visual and tactile data in modality-specific subspaces, \emph{i.e.,} a visual subspace and a tactile subspace. Then, we perform visual-tactile fused clustering after completing the missing data. In this way, similar objects are clustered into a group. } \label{fig:1} \end{figure} An illustrative example: when robots employ visual and tactile information to explore an unknown environment (e.g., many objects cluttered in an unstructured scene), recognizing the objects in this scene by collecting and annotating many samples is time-consuming and expensive~\cite{Zhaoyangyang2021,wei2019adversarial,ZhaoWYZHW20,ijcai2020-77,sun2020continual}. An alternative solution is to group these objects in an unsupervised manner. In this setting, previous VTFC methods provide a feasible solution by employing fused visual-tactile information in an unsupervised manner to group objects with the same identity into the same group (i.e., object clustering). Fusing visual and tactile information can effectively improve clustering performance, since the two modalities provide complementary information. Generally, most existing VTFC methods build on the idea of multi-view clustering~\cite{dang2020multi,hushizheDMIB,PR2020DAMC}; \emph{e.g.}, Zhang et al.~\cite{zhang2019visual} propose a VTFC model based on non-negative matrix factorization (NMF) as well as consensus clustering and achieve great progress. As far as we know, this is the first work on visual-tactile fused clustering.
However, the task of VTFC has not been well settled due to the following challenges: \textbf{partial data} and \textbf{heterogeneous modality}. \textbf{Partial data:} Existing visual-tactile fused object clustering methods~\cite{zhang2019visual} make the strong assumption that all the visual-tactile modalities are well aligned and complete. However, visual-tactile data usually tend to be incomplete in real-world applications. For instance, when a robot grasps an apple, the visual information of the apple becomes unobservable due to the occlusion of the robot hand. Moreover, noise, signal loss and malfunctions in the data collecting process can cause instances to be missing. For instance, in special situations (\emph{e.g.,} underwater scenes), the visual modality can easily be missing due to the turbidity of the water. These cases lead to the incompleteness of multi-modality data, which further hurts clustering performance. \textbf{Heterogeneous modality:} Most previous partial multi-view clustering methods use different feature description methods (\emph{e.g.}, SIFT, LBP, HOG) to extract different view features from visual data, which are essentially homogeneous. Therefore, directly employing these methods on heterogeneous data (\emph{i.e.}, visual and tactile data) can induce a negative effect and even cause the clustering task to fail, since they ignore the distinct properties of the visual and tactile modalities. To solve the problems mentioned above, as shown in Figure~\ref{fig:1}, we propose a Generative Partial Visual-Tactile Fused (\emph{i.e.}, GPVTF) framework for object clustering, which aims to obtain better clustering results by adopting generative adversarial learning as well as simple yet effective KL-divergence losses. Specifically, we first extract partial visual and tactile features from the raw input data, and employ two modality-specific encoders to project the extracted features into a visual subspace and a tactile subspace, respectively.
Then visual (or tactile) conditional cross-modal clustering generative networks are trained to reproduce tactile (or visual) latent representations in the modality-specific subspaces. In this way, our proposed approach is able to effectively leverage the complementary information, and learns pairwise cross-modal knowledge among visual-tactile data at the latent-subspace level. The conditional clustering generative adversarial networks can not only complete the missing data, but also force the heterogeneous modalities to be similar and further align them. With the completed and aligned visual and tactile subspaces, we can obtain expressive representations of the raw visual-tactile data. Moreover, two pseudo-label-based fused KL-divergence losses are employed to update the encoders, which further helps to obtain better representations and better clustering performance. Finally, extensive experimental results on three real-world visual-tactile datasets demonstrate the superiority of our proposed framework. We summarize the contributions of our work as follows: \begin{itemize} \item We put forward a Generative Partial Visual-Tactile Fused (GPVTF) framework for partial visual-tactile clustering. To the best of our knowledge, this is among the earliest work on visual-tactile fused clustering that tackles the problem of incomplete data. \item A conditional cross-modal clustering generative adversarial learning scheme is encapsulated in our model to complete the missing data and align visual-tactile data, which further helps to explore the shared complementary information among multi-modality data. \item We conduct comparisons and experiments on three benchmark real-world visual-tactile datasets, which show the superiority of the proposed GPVTF framework.
\end{itemize} \section{Related Work} \subsection{Visual-Tactile Fused Sensing} Significant progress has been made on visual-tactile fused sensing~\cite{liu2018robotic} in recent years, \emph{e.g.}, object recognition, cross-modal matching and object clustering. For example, Liu et al.~\cite{liu2016visual} develop an effective fusion strategy for weakly paired visual-tactile data based on joint sparse coding, which achieves great success in household object recognition. Wang et al.~\cite{wang20183d} predict the shape prior of an object from a single color image and then achieve accurate 3D object shape perception by actively touching the object. Yuan et al.~\cite{yuan2017connecting} show that there is an intrinsic connection between the visual and tactile modalities through the physical properties of materials. Li et al.~\cite{li2019connecting} use a conditional generative adversarial network to generate pseudo visual (or tactile) outputs based on tactile (or visual) inputs, and then apply the generated data to classification tasks. Zhang et al.~\cite{zhang2019visual} first propose a visual-tactile fused object clustering framework based on non-negative matrix factorization (NMF). However, all of these methods assume that the data are well aligned and complete, which is unrealistic in practical applications. Thus, in this paper we design the GPVTF framework to address these problems for object clustering. \begin{figure*}[htbp] \centering \centerline{\includegraphics[width=1.0\columnwidth]{png//Model.png}} \caption{ Illustration of the proposed generative partial visual-tactile fused object clustering framework. Firstly, partial visual and tactile features are extracted from the raw partial visual and tactile data. Then two modality-specific encoders, \emph{i.e.}, the visual encoder $E_1(\cdot)$ and the tactile encoder $E_2(\cdot)$, are introduced to obtain distinctive representations in the visual subspace and the tactile subspace.
Two cross-modal clustering generators $G_1(\cdot)$ and $G_2(\cdot)$ generate representations conditioned on the other subspace, which not only reduces the distance between the visual and tactile subspaces but also completes the missing items. Finally, both real and generated fake representations are fused to predict the clustering labels. Meanwhile, the modality-specific encoders are updated by the KL-divergence losses $\mathcal{L}_{E_1}$ and $\mathcal{L}_{E_2}$, which are calculated from the predicted pseudo-labels. } \label{fig:2} \end{figure*} \subsection{Partial Multi-View Clustering} Partial multi-view clustering~\cite{SunCWLF20,li2014partial,Wang2020icmsc,wang2018partial}, which provides a framework to handle incomplete (partial) input data, can be divided into two categories. The first category is based on traditional techniques, such as NMF and kernel learning. For example, Li et al.~\cite{li2014partial} propose an incomplete multi-view clustering framework by establishing a latent subspace based on NMF, where incomplete multi-view information is maximized. Shao et al.~\cite{shao2013clustering} propose a collective kernel learning method to complete the missing data and then perform clustering. The second category utilizes generative adversarial networks (GANs) to complete the missing data, because GANs can align heterogeneous data and complete partial data~\cite{What_Transferred_Dong_CVPR2020,Semantic_Transferable_Dong_ICCV2019,yang2020adversarial,JiangXYCH19}. For instance, Xu et al.~\cite{xu2019adversarial} propose an adversarial incomplete multi-view clustering method, which performs missing data inference via GANs and simultaneously learns the common latent subspace of the multi-view data. All the methods mentioned above are developed for homogeneous data and thus ignore the huge gap between heterogeneous data (\emph{i.e.}, visual and tactile data).
\section{The Proposed Method} In this section, the proposed Generative Partial Visual-Tactile Fused (GPVTF) framework is presented in detail, together with its implementation. \subsection{Details of the Model Pipeline} We are given visual-tactile data $V$ and $T$, where $V$ denotes the visual data (\emph{i.e.}, RGB images) and $T$ denotes the tactile data. Note that the visual and tactile data, collected from different sensors, lie in different data spaces. Our proposed GPVTF model consists of two partial feature extraction processes, \emph{i.e.,} visual feature extraction and tactile feature extraction, which extract partial visual features $X_n^{(1)} \in \mathbb{R}^{d_1 \times n} $ from $V$ and tactile features $X_n^{(2)} \in \mathbb{R}^{d_2 \times n} $ from $T$, where $d_1$ and $d_2$ are the feature dimensions and $n$ is the number of samples; two modality-specific encoders, ${E_1(\cdot)}$ and ${E_2(\cdot)}$; two generators, $G_1(\cdot)$ and $G_2(\cdot)$, and their corresponding discriminators, $D_1(\cdot)$ and $D_2(\cdot)$; and two KL-divergence based losses, as illustrated in Figure~\ref{fig:2}. More details are provided in the following sections. In particular, since each dataset has a different feature extraction process, the details of these processes are given in the ``Experiments" section. \textbf{Encoders and Clustering Module:} Modality-specific encoders $E_1(\cdot)$ and $E_2(\cdot)$ are introduced to project the partial visual and tactile features into the modality-specific subspaces, \emph{i.e.,} the visual subspace and the tactile subspace, respectively. Specifically, the latent subspace representations are learned via $Z_{n}^{(m)} = E_m(X_{n}^{(m)};{\theta}_{E_m})$, where $m=1$ denotes the visual modality, $m=2$ denotes the tactile modality, and ${\theta}_{E_m}$ denotes the network parameters of the $m$-th encoder.
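As a rough sketch of such a modality-specific encoder (a two-layer fully-connected network, as described in this paper; the layer widths and random inputs below are our own illustrative placeholders, not the actual architecture or data):

```python
import numpy as np

def make_encoder(d_in, d_hidden, d_out, rng):
    """Two-layer fully-connected encoder E_m; the weight matrices play
    the role of the parameters theta_{E_m}."""
    W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
    W2 = rng.normal(scale=0.1, size=(d_hidden, d_out))
    def encode(X):
        H = np.maximum(X @ W1, 0.0)   # ReLU hidden layer
        return H @ W2                 # latent representations Z_n^{(m)}
    return encode

rng = np.random.default_rng(0)
E1 = make_encoder(4096, 256, 64, rng)   # e.g. 4096-D visual features
Z1 = E1(rng.normal(size=(8, 4096)))     # 8 samples -> 8 x 64 latent codes
```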
Then the fused representation (\emph{i.e.}, $m=3$) can be obtained by: \begin{equation}\label{eq:Fusion} \begin{aligned} Z_n^{(3)} = (1-\alpha)Z_n^{(1)} + \alpha Z_n^{(2)}, \end{aligned} \end{equation} where $\alpha > 0 $ is a weighting coefficient that balances the ratio of the tactile and visual modalities. Next, the K-means method is employed on $Z^{(m)}_n$ to get the initial clustering centers $\{ {\mu}^{(m)}_j \}_{j=1}^k$, where $k$ is the number of clusters\footnote{Since we cluster according to object identity, $k$ is set equal to the number of object types in each dataset. Specifically, $k$ is set to $53$, $119$ and $108$ for the \textbf{PHAC-2}, \textbf{GelFabric}, and \textbf{LMT} datasets, respectively.}. Inspired by~\cite{xie2016unsupervised}, we employ Student's t-distribution to measure the similarity of the latent subspace representation $Z^{(m)}_n$ and the clustering center ${\mu}^{(m)}_j$: \begin{equation}\label{eq:Q} \begin{aligned} q^{(m)}_{nj} = \frac{ {(1+\|Z^{(m)}_n-\mu^{(m)}_j\|^2/\gamma)}^{-\frac{\gamma+1}{2}}} { \sum_{j'}(1+\|Z^{(m)}_n-\mu_{j'}^{(m)}\|^2 / \gamma)^{- \frac{\gamma+1}{2}}}, \end{aligned} \end{equation} where $\gamma$ is the degrees of freedom of the Student's t-distribution and is set to $1$ in this paper; $q^{(m)}_{nj}$ are the pseudo-labels, which denote the probability of assigning sample $n$ to cluster $j$ for the $m$-th modality. To improve cluster compactness, we pay more attention to data points that are assigned with high confidence, by computing the target distribution $p_{nj}^{(m)}$ as follows: \begin{equation}\label{eq:P} \begin{aligned} p^{(m)}_{nj} = \frac{{q^{(m)}_{nj}}^2 \big/ \sum_nq^{(m)}_{nj}}{\sum_{j'}\big({q^{(m)}_{nj'}}^2 \big/ \sum_nq^{(m)}_{nj'}\big)}.
\end{aligned} \end{equation} Then the encoders are trained with fused KL-divergence losses, which are defined as follows: \begin{equation}\label{eq:E1_KL} \begin{aligned} \mathcal{L}_{E_m}& \!=\!KL \big(P^{(m)}\vert\vert Q^{(m)}\big) + \beta KL \big(P^{(3)}\vert\vert Q^{(3)}\big) \\ & \!=\! \sum_{n}\sum_{j}p^{(m)}_{nj}\log\frac{p^{(m)}_{nj}}{q^{(m)}_{nj}} \!+\! \beta \sum_{n}\sum_{j}p^{(3)}_{nj}\log\frac{p^{(3)}_{nj}}{q^{(3)}_{nj}}, \end{aligned} \end{equation} where $m =1$ and $m =2$ correspond to the losses of encoders $E_1(\cdot)$ and $E_2(\cdot)$, and $\beta$ is a trade-off parameter. The encoders are implemented by two-layer fully-connected networks. \textbf{Conditional Cross-Modal Clustering GANs:} Note that the gap between the visual and tactile modalities is very large, since their frequencies, formats and receptive fields are quite different. Thus, directly employing GANs in the original space $X_n^{(m)}$ might increase the difficulty of training or even lead to non-convergence. To address this challenge, we develop conditional cross-modal clustering GANs, which generate one latent space conditioned on the other. Specifically, the conditional cross-modal clustering GANs consist of $G_m(\cdot)$ and $D_m(\cdot)$, where $G_m(\cdot)$ competes with $D_m(\cdot)$ to generate samples that are as real as possible, and the loss function is given as: \begin{equation}\label{eq:G1d} \begin{aligned} \mathcal{L}_{G_{md}}= -E_{\omega \sim P_\omega(\omega)}\log\big( 1 \!-\! D_m (G_m(\omega|Z_n^{(m)})) \big), \end{aligned} \end{equation} where $\omega$ is the noise matrix. Since our goal is clustering rather than generation, we sample a prior that consists of normal random variables cascaded with one-hot noise, which is different from traditional GANs. More specifically, $\omega = (\omega_n,\omega_c)$, $\omega_n \sim N(0,\sigma^2I_{dn})$, $\omega_c = e_k$, where $e_k$ is the $k$-th elementary vector in $\mathbb{R}^k$ and $k$ is the number of clusters.
We choose $\sigma = 0.1$ in all our experiments. In this way, a latent subspace with non-smooth geometry is created, and $G_m(\cdot)$ can generate more distinctive and robust representations that benefit clustering performance: not only is the gap between the visual and tactile modalities mitigated, but the missing data are also completed naturally. Moreover, since training the GANs in Eq.~\eqref{eq:G1d} is not trivial~\cite{wang2019generative}, a regularizer, which forces the real samples and the generated fake samples to be similar, is introduced to obtain stable generative results; it is defined as: \begin{equation}\label{eq:G1s} \begin{aligned} \mathcal{L}_{G_{ms}} = E_{\omega \sim P_\omega(\omega)} \big( \|G_m(\omega|Z_n^{(m)}) - Z_n^{(m)}\|^2 \big). \end{aligned} \end{equation} Then, the overall loss function of $G_m(\cdot)$ is given as follows: \begin{equation}\label{eq:G} \begin{aligned} \mathcal{L}_{G_m} = \mathcal{L}_{G_{md}} + \lambda \mathcal{L}_{G_{ms}}, \end{aligned} \end{equation} where $\lambda$ is a trade-off parameter that balances the two losses and is set to 0.1 in this paper. $G_m(\cdot)$ is a three-layer network. The discriminator $D_m(\cdot)$ is designed to discriminate between the fake representations generated by $G_m(\cdot)$ and the real representations in the modality-specific subspaces. The objective function for $D_m(\cdot)$ is given as: \begin{equation}\label{eq:D1} \begin{aligned} \mathcal{L}_{D_m}= &E_{Z \sim P_Z(Z)}\log D_m( E_m(X_n^{(m)};\theta_{E_m})) + \\ & E_{\omega \sim P_\omega(\omega)}\log \big(1 \!-\! D_m( G_m(\omega| E_m(X_n^{(m)};\theta_{E_m})) ) \big). \end{aligned} \end{equation} The proposed $D_m(\cdot)$ is mainly made up of a fully connected layer with ReLU activation, a mini-batch layer~\cite{salimans2016improved} that increases the diversity of the fake representations, and a sigmoid function that outputs the probability that an input representation is real.
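A minimal NumPy sketch of the adversarial loss terms above, mirroring Eqs.~\eqref{eq:G1d}--\eqref{eq:D1} as printed; the toy arrays below stand in for actual network outputs and are purely illustrative:

```python
import numpy as np

def generator_loss(D_fake, Z_fake, Z_real, lam=0.1):
    """Generator objective of Eq. (7): the adversarial term of Eq. (5)
    plus lambda times the similarity regulariser of Eq. (6)."""
    adv = -np.mean(np.log(1.0 - D_fake + 1e-12))              # Eq. (5)
    sim = np.mean(np.sum((Z_fake - Z_real) ** 2, axis=1))     # Eq. (6)
    return adv + lam * sim

def discriminator_loss(D_real, D_fake):
    """Discriminator objective of Eq. (8), to be maximised by D_m."""
    return np.mean(np.log(D_real + 1e-12)) + np.mean(np.log(1.0 - D_fake + 1e-12))

rng = np.random.default_rng(0)
Z_real = rng.normal(size=(16, 64))                 # real latent codes
Z_fake = Z_real + 0.1 * rng.normal(size=(16, 64))  # generated completions
D_real = rng.uniform(0.6, 0.9, size=16)            # toy discriminator outputs
D_fake = rng.uniform(0.1, 0.4, size=16)
g = generator_loss(D_fake, Z_fake, Z_real)
d = discriminator_loss(D_real, D_fake)
```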
Then, both the real and the generated fake representations are fused; thus, the fused representation in Eq.~\eqref{eq:Fusion} is modified to: \begin{equation}\label{eq:Fusion_fake} \begin{aligned} Z^{(3)}_n = (1-\alpha)Z_n^{(1)} + \alpha Z_n^{(2)} + \sum_{m=1}^{2}\varphi_{m}Z_{\mathrm{fake}}^{(m)}, \end{aligned} \end{equation} where $\varphi_{m}$ is the weighting coefficient of the generated fake representation for the $m$-th modality ($m=1$ for the visual modality and $m=2$ for the tactile modality). The overall loss function of our model is summarized as follows: \begin{equation}\label{eq:Total_Loss} \begin{aligned} \mathcal{L}_{total} = \min_{E_{m},G_m}\max_{D_m}\mathcal{L}_{E_{m}} + \mathcal{L}_{G_{m}}+ \mathcal{L}_{D_{m}}, \end{aligned} \end{equation} where $\mathcal{L}_{E_{m}}$ are the KL-divergence losses, and $\mathcal{L}_{G_{m}}$ and $\mathcal{L}_{D_{m}}$ are the conditional cross-modal clustering GAN losses. \begin{algorithm}[htbp] \caption{Training Process of the Proposed Framework} \label{alg:updateALL} \begin{algorithmic}[1] \STATE \textbf{Input:} Visual-tactile data \{$V$, $T$\}; number of clusters $k$; maximum number of iterations MaxIter; hyper-parameters $\alpha$, $\beta$, $\varphi_1$ and $\varphi_2$. \STATE \textbf{Initialization:} Project \{$V$, $T$\} into the feature subspaces \{$X_n^{(1)}$, $X_n^{(2)}$\}. Initialize the parameters of the networks with the Xavier initializer. Calculate the initial fused representation $Z_n^{(3)}$ and the clustering centers $\{\mu_j^{(m)}\}_{j=1}^k$. \FOR{iter $\leq$ MaxIter} \STATE Train the encoders $E_m{(\cdot)}$ with the corresponding KL-divergence losses $\mathcal{L}_{E_m}$, $\forall m = 1,2$. \STATE Train the generators $G_m(\cdot)$ with $\mathcal{L}_{G_m}$, $\forall m = 1,2$. \STATE Train the discriminators $D_m(\cdot)$ with $\mathcal{L}_{D_m}$, $\forall m = 1,2$.
\STATE Update the fused representation $Z_n^{(3)}$ and the clustering centers $\{\mu_j^{(m)}\}_{j=1}^k$, $\forall m = 1,2$. \ENDFOR \STATE Obtain the updated fused representation $Z_n^{(3)}$, the fused clustering centers $\{\mu_j^{(3)}\}_{j=1}^k$ and the pseudo-labels $q_{nj}^{(3)}$. \STATE Predict the clustering labels according to $q_{nj}^{(3)}$. \RETURN Predicted cluster labels. \end{algorithmic} \end{algorithm} \subsection{Training} The whole training process of the proposed GPVTF framework is summarized below. \emph{\textbf{Step 1} Initialization}: We feed the partial visual and tactile features $X_n^{(1)}$ and $X_n^{(2)}$ into $E_1(\cdot)$ and $E_2(\cdot)$ to obtain the initial latent subspace representations $Z_n^{(m)}$. Then the standard K-means method is applied on $Z_n^{(m)}$ to get the initial clustering centers $\{\mu_j^{(m)}\}_{j=1}^k, \forall m = 1,2,3$. \emph{\textbf{Step 2} Training encoders}: Eq.~\eqref{eq:Q} is employed to calculate the pseudo-labels $q_{nj}^{(m)}$; $p_{nj}^{(m)}$ and the KL-divergence losses $\mathcal{L}_{E_m}$ are computed by Eq.~\eqref{eq:P} and Eq.~\eqref{eq:E1_KL}, respectively. Then each $\mathcal{L}_{E_m}$ is fed to its corresponding Adam optimizer to train the encoders, with the learning rates set to 0.0001. \emph{\textbf{Step 3} Training conditional cross-modal clustering GANs}: In this step, we employ the generator losses, \emph{i.e.}, Eq.~\eqref{eq:G1d} and Eq.~\eqref{eq:G1s}, with Adam optimizers to update the parameters of the two generators, with learning rates of 0.000003 and 0.000004 for $G_1(\cdot)$ and $G_2(\cdot)$, respectively. Next, the two discriminators $D_1(\cdot)$ and $D_2(\cdot)$ are optimized by Eq.~\eqref{eq:D1} with Adam optimizers, with the learning rate set to 0.000001 for both $D_1(\cdot)$ and $D_2{(\cdot)}$. We update the generators five times for every discriminator update.
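Steps 1 and 2 above revolve around Eqs.~\eqref{eq:Fusion}--\eqref{eq:E1_KL}; a minimal NumPy sketch of that clustering module (the shapes, number of clusters and random data below are illustrative assumptions, not the actual setup):

```python
import numpy as np

def soft_assign(Z, mu, gamma=1.0):
    """Student's t soft assignments q_nj of Eq. (2)."""
    d2 = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # squared distances
    q = (1.0 + d2 / gamma) ** (-(gamma + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_dist(q):
    """Sharpened target distribution p_nj of Eq. (3)."""
    w = q ** 2 / q.sum(axis=0)          # q^2 normalised by cluster frequency
    return w / w.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(P || Q), one term of the fused loss in Eq. (4)."""
    return float((p * np.log((p + eps) / (q + eps))).sum())

def fuse(Z_v, Z_t, alpha=0.5):
    """Fused representation of Eq. (1)."""
    return (1 - alpha) * Z_v + alpha * Z_t

rng = np.random.default_rng(0)
Z_v, Z_t = rng.normal(size=(10, 8)), rng.normal(size=(10, 8))
mu = rng.normal(size=(4, 8))           # k = 4 toy cluster centers
q = soft_assign(fuse(Z_v, Z_t), mu)
loss = kl(target_dist(q), q)           # drives the encoder updates
labels = q.argmax(axis=1)              # predicted cluster per sample
```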
\emph{\textbf{Step 4}} After the framework is optimized, we feed the original data to the model and obtain the completed fusion representations $Z_n^{(3)}$ as well as the updated clustering centers $\{\mu_j^{(m)}\}_{j=1}^k$. Then the soft assignments $q^{(3)}_{nj}$ are calculated by Eq.~\eqref{eq:Q}. Finally, the index of the maximum value of $q^{(3)}_{nj}$ over $j$ is taken as the predicted cluster label. We implement the model with TensorFlow 1.12.0, and set the batch size to 64. We summarize the overall training process of the proposed framework in \textbf{Algorithm~\ref{alg:updateALL}}. \section{Experiments} In this section, the used datasets, comparison methods, evaluation metrics and experimental results are given. \subsection{Datasets and Partial Data Generation} The \textbf{PHAC-2}~\cite{gao2016deep} dataset consists of color images and tactile signals of 53 household objects, where each object has 8 color images and 10 tactile signals. We use all the images and the first 8 tactile signals to build the initial paired visual-tactile dataset in this paper. The feature extraction process for the tactile modality is similar to that in~\cite{gao2016deep,zhang2019visual}, and the visual features are extracted by AlexNet~\cite{krizhevsky2012imagenet}, which is pre-trained on ImageNet. After feature extraction, 4096-D visual and 2048-D tactile features are obtained. The \textbf{LMT}~\cite{zheng2016deep,strese2015surface} dataset consists of 10 color images and 30 haptic acceleration signals for each of 108 different surface materials. The first 10 haptic acceleration signals and all the images are used. We extract 1024-D tactile features similarly to~\cite{liu2019lifelong} and 4096-D visual features with the pre-trained AlexNet. The \textbf{GelFabric}~\cite{yuan2017connecting} dataset includes visual data (\emph{i.e.,} color and depth images) and tactile data of 119 kinds of fabrics. Each fabric has 10 color images and 10 tactile images, which are used in this paper.
Since both the visual and tactile data are in image format, we extract 4096-D visual and tactile features with the pre-trained AlexNet. Some examples from the used datasets are given in Figure~\ref{fig:3}. \textbf{Partial data generation}: The partial visual-tactile datasets are generated in a manner similar to partial multi-view clustering settings, \emph{e.g.}, Xu et al.~\cite{xu2019adversarial}. Supposing that the total number of paired visual and tactile samples in each dataset is $N$, we randomly select $\tilde{N}$ samples as the missing data points. Then, the Missing Rate (\emph{i.e.}, $\mathcal{MR}$) is defined as $\mathcal{MR} = \frac{\tilde{N}}{N}$. \begin{figure}[htbp] \centering \centerline{\includegraphics[width =1.0\columnwidth]{png/Dataset.png}} \caption{Example visual images and tactile data from the used datasets, \emph{i.e.,} the GelFabric dataset~\cite{yuan2017connecting}, the LMT dataset~\cite{strese2015surface,Strese2014A}, and the PHAC-2 dataset~\cite{gao2016deep}. Intuitively, there are intrinsic differences between visual and tactile data. It is worth noting that the tactile signals in the PHAC-2 dataset consist of multiple components; we only visualize the electrode impedance component for simplicity. More details of these datasets can be found in the corresponding references.
} \label{fig:3} \end{figure} \begin{table*}[htbp] \centering \caption{ACC and NMI performance on the three visual-tactile datasets, when the missing rate is set to 0.1.} \begin{tabular}{|ccc|cc|cc|} \hline \multicolumn{3}{|c|}{ PHAC-2 Dataset} & \multicolumn{2}{c|}{ LMT Dataset} & \multicolumn{2}{c|}{GelFabric Dataset} \\ \hline Method & ACC($\%$) & NMI($\%$) & ACC($\%$) & NMI($\%$) & ACC($\%$) & NMI($\%$) \\ \hline\hline SC1& 40.62$\pm$0.64 & 67.05$\pm$0.60 &51.32$\pm$1.19 & 76.07$\pm$0.32 &49.50$\pm$0.69 & 72.98$\pm$0.31 \\ SC2& 30.20$\pm$0.95 & 56.67$\pm$0.60 & 15.02$\pm$0.26 & 42.61$\pm$0.27 & 45.87$\pm$0.76 & 72.92$\pm$0.34 \\ ConcatPCA& 45.38$\pm$1.04 & 69.17$\pm$0.64 & 40.78$\pm$0.48 & 68.16$\pm$0.21 & 47.95$\pm$1.64 & 74.56$\pm$0.84 \\ GLMSC& 37.38$\pm$0.17 & 64.57$\pm$0.47 & 41.30$\pm$1.11 & 68.37$\pm$0.83 & 50.88$\pm$1.01 & 75.55$\pm$0.14 \\ VTFC& 51.41$\pm$0.63 & 70.85$\pm$0.32 & 43.94$\pm$0.16 & 51.03$\pm$0.22 & 55.72$\pm$1.04 & 74.76$\pm$0.38 \\ IMG& 37.90$\pm$0.92 & 49.79$\pm$0.14 & 41.66$\pm$1.68 & 67.45$\pm$0.93 & 37.39$\pm$2.10 & 66.06$\pm$0.48 \\ GRMF& 33.16$\pm$1.62 & 60.54$\pm$0.73 & 26.59$\pm$0.71 & 57.89$\pm$0.37 & 40.97$\pm$0.99 & 72.69$\pm$0.37 \\ UEAF& 40.56$\pm$0.06 & 63.20$\pm$0.39 & 47.78$\pm$0.19 & 74.09$\pm$0.60 & 51.26$\pm$0.05 & 72.36$\pm$0.72 \\ \textbf{OURS}& \textbf{53.30$\pm$0.69} & \textbf{74.47$\pm$0.18} & \textbf{54.81$\pm$1.36} & \textbf{80.37$\pm$0.40} & \textbf{59.89$\pm$0.42} & \textbf{81.60$\pm$0.37}\\ \hline \end{tabular}% \label{tab:tb1}% \end{table*}% \subsection{Comparison Methods and Evaluation Metrics} We compare our GPVTF model with the following baseline methods. We first apply standard spectral clustering to the modality-specific features, \emph{i.e.}, the visual features $X_n^{(1)}$ and the tactile features $X_n^{(2)}$, which are termed \textbf{SC1} and \textbf{SC2}. \textbf{ConcatPCA} concatenates the feature vectors of the different modalities, reduces them via PCA and then performs standard spectral clustering.
\textbf{GLMSC}~\cite{zhang2018generalized} proposes a subspace multi-view clustering model under the assumption that each single feature view originates from one comprehensive latent representation. \textbf{VTFC}~\cite{zhang2019visual} is a pioneering work that incorporates the visual modality with the tactile modality in object clustering tasks based on auto-encoders and NMF. \textbf{IMG}~\cite{zhao2016incomplete} performs incomplete multi-view clustering by transforming the original partial data into complete representations. \textbf{GRMF}~\cite{wen2018incomplete} exploits the complementary and local information among all views and samples based on graph regularized matrix factorization. \textbf{UEAF}~\cite{wen2019unified} performs missing data inference with a locality-preserving constraint. \textbf{Evaluation Metrics:} Two widely used clustering evaluation metrics, \emph{i.e.}, Accuracy (ACC) and Normalized Mutual Information (NMI), are employed to assess the clustering performance. For both metrics, higher values indicate better performance. More details of these metrics can be found in~\cite{schutze2008introduction}. \begin{figure*}[!t] \centering \centerline{\includegraphics[width =1.0\columnwidth]{png//AAAI2021_NMI.png}} \caption{The average clustering NMI performance with respect to different missing rates on the (a) PHAC-2 dataset, (b) LMT dataset and (c) GelFabric dataset. } \label{fig:4} \end{figure*} \begin{figure*}[!t] \centering \centerline{\includegraphics[width =1.0\columnwidth]{png//AAAI2021_ACC.png}} \caption{The average clustering ACC performance with respect to different missing rates on the (a) PHAC-2 dataset, (b) LMT dataset and (c) GelFabric dataset. } \label{fig:5} \end{figure*} \subsection{Experimental Results} In this subsection, experimental results on the three public visual-tactile datasets are reported and compared with the state-of-the-art methods.
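A self-contained sketch of the two metrics above (not necessarily the implementation used in the experiments): ACC is the best accuracy over one-to-one matchings of cluster ids to class ids (brute-force over permutations, fine for small $k$), and NMI is computed from the empirical joint distribution with arithmetic-mean normalization.

```python
import numpy as np
from itertools import permutations

def clustering_acc(y_true, y_pred):
    """ACC: best accuracy over one-to-one matchings of cluster ids to class ids."""
    ids = sorted(set(y_true) | set(y_pred))
    best = 0.0
    for perm in permutations(ids):
        mapping = dict(zip(ids, perm))
        acc = sum(mapping[p] == t for p, t in zip(y_pred, y_true)) / len(y_true)
        best = max(best, acc)
    return best

def nmi(y_true, y_pred):
    """NMI with arithmetic-mean normalization, from the empirical joint law."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    def entropy(y):
        p = np.unique(y, return_counts=True)[1] / len(y)
        return float(-(p * np.log(p)).sum())
    mi = 0.0
    for a in np.unique(y_true):
        for b in np.unique(y_pred):
            pab = np.mean((y_true == a) & (y_pred == b))
            if pab > 0:
                mi += pab * np.log(pab / (np.mean(y_true == a) * np.mean(y_pred == b)))
    h = entropy(y_true) + entropy(y_pred)
    return 2.0 * mi / h if h > 0 else 1.0

# A perfect clustering up to a label permutation scores 1.0 on both metrics.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
print(clustering_acc(y_true, y_pred), round(nmi(y_true, y_pred), 6))   # -> 1.0 1.0
```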
Due to the randomness of the missing-data generation, all experiments are repeated ten times and the mean values are reported. Generally, the observations are summarized as follows: \textbf{1}) As shown in Table~\ref{tab:tb1}, where the missing rate is set to 0.1, our GPVTF model consistently outperforms the other methods with a clear improvement. \begin{figure}[htbp] \centering \subcaptionbox{ACC ($\%$) performance}{\includegraphics[width =.49\columnwidth] {png/ACC_MR_01.png}} \subcaptionbox{NMI ($\%$) performance}{\includegraphics[width =.49\columnwidth] {png/NMI_MR_01.png}} \caption{Effectiveness of the cross-modal clustering GANs, encoders and fusion KL-divergence losses, when the missing rate $\mathcal{MR}$ is 0.1.} \label{fig:6} \end{figure} For instance, compared with the single-modality methods (\emph{i.e.}, \textbf{SC1} and \textbf{SC2}), the performance is raised by $12.68\%$ in ACC and $7.42\%$ in NMI on the PHAC-2 dataset, which demonstrates that fusing the visual and tactile modalities does improve the clustering performance. The results also show that our model is able to learn complementary information among the heterogeneous data. Compared with the partial multi-view clustering method \textbf{UEAF} and the visual-tactile fusion clustering method \textbf{VTFC}, the performance is raised by $1.89\%$ and $3.62\%$ in ACC and NMI, respectively. The reason our GPVTF model achieves such improvements is that it can not only complete the missing data but also align the heterogeneous data well. \textbf{2}) As shown in Figure~\ref{fig:4} and Figure~\ref{fig:5}, our GPVTF model outperforms the other methods under different missing rates ($0.1\sim0.5$) on all three datasets. Moreover, our model can also achieve competitive results on the PHAC-2 and LMT datasets even when the missing rate is very large. This observation indicates the effectiveness of the proposed conditional cross-modal clustering GANs.
Besides, although the performance of \textbf{SC2} drops more slowly than ours, its performance is very low in most cases. We also find an interesting phenomenon: some multi-view clustering methods (\emph{i.e.}, GRMF, IMG and GLMSC) even perform worse than single-view methods. The possible reason is that these methods do not take the gap between visual and tactile data into account; directly fusing the heterogeneous data in a brute-force way would inevitably lead to performance degradation. \subsection{Ablation Study} The effects of the proposed cross-modal clustering GANs and fusion KL-divergence losses are analyzed first. Then we report the analyses of the most important parameters $\alpha$, $\beta$, $\varphi_1$ and $\varphi_2$. \textbf{Effectiveness of Cross-Modal Clustering GANs and Fusion KL-Divergence Losses:} As shown in Figure~\ref{fig:6}, we first conduct an ablation study to illustrate the effect of the proposed conditional cross-modal clustering GANs and fusion KL-divergence losses when the missing rate is set to 0.1, where ``None GANs'' means the proposed conditional cross-modal clustering GANs are not employed and ``None Fusion KL'' means the proposed fusion KL-divergence losses are not employed, respectively. We can observe that ``Ours'' outperforms ``None GANs'' on all the datasets, which proves that the proposed conditional cross-modal clustering GANs help achieve better performance. That ``Ours'' outperforms ``None Fusion KL'' proves that the proposed fusion KL-divergence losses can better discover the information hidden in the multi-modality data and further enhance the performance.
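For concreteness, the partial-data protocol used throughout these experiments (randomly marking $\mathcal{MR} \cdot N$ paired samples as missing) can be sketched as follows; the dataset size and the random seed are illustrative assumptions.

```python
import numpy as np

def make_partial(n_samples, missing_rate, seed=0):
    """Mark round(MR * N) randomly chosen paired samples as missing.
    A sketch of the protocol; the paper's exact scheme may differ in detail."""
    rng = np.random.default_rng(seed)
    n_missing = round(missing_rate * n_samples)
    missing = np.zeros(n_samples, dtype=bool)
    missing[rng.choice(n_samples, size=n_missing, replace=False)] = True
    return missing

mask = make_partial(424, 0.1)   # e.g. PHAC-2: 53 objects x 8 pairs = 424 samples
print(mask.sum(), mask.size)    # -> 42 424
```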
\begin{figure}[htbp] \centering \subcaptionbox{ACC ($\%$) with different $\alpha$}{\includegraphics[width =.49\columnwidth] {png/ACC_Alpha_01.png}} \subcaptionbox{ACC ($\%$) with different $\beta$}{\includegraphics[width =.49\columnwidth] {png/ACC_Beta_01.png}} \caption{ACC ($\%$) performance with different $\alpha$ and $\beta$ when the missing rate $\mathcal{MR}$ is 0.1.} \label{fig:7} \end{figure} \begin{figure}[htbp] \centering \subcaptionbox{NMI ($\%$) performance}{\includegraphics[width =.49\columnwidth] {png/NMI_Phi_Gel_01.png}} \subcaptionbox{ACC ($\%$) performance}{\includegraphics[width =.49\columnwidth] {png/ACC_Phi_Gel_01.png}} \caption{Performance with different $\varphi_1$ and $\varphi_2$ when the missing rate $\mathcal{MR}$ is 0.1.} \label{fig:8} \end{figure} \textbf{Parameter Analysis:} To explore the effect of the important weight coefficient $\alpha$, which controls the proportion of the visual and tactile modalities, we tune $\alpha$ over the set \{$0.1,0.2,0.3,0.4,0.5$\} and report the clustering performance in Figure~\ref{fig:7}. Our model achieves the best clustering results when the value of $\alpha$ is set to 0.2, 0.2 and 0.1 on the PHAC-2, GelFabric and LMT datasets, respectively. Then, the parameter $\beta$ is tuned over the set \{$0.01,0.1,1,10,100$\}, and the ACC performance is plotted in Figure~\ref{fig:7}. In fact, $\beta$ controls the effect of the common component, which helps to update the encoders $E_1(\cdot)$ and $E_2(\cdot)$ simultaneously and to bridge the gap between the visual and tactile modalities. It can be seen that when $\beta$ is set to $1$, we obtain the best performance; thus we empirically choose $\beta=1$ as the default in this paper. Finally, we tune the trade-off parameters $\varphi_1$ and $\varphi_2$ in the same way as $\beta$. As shown in Figure~\ref{fig:8}, our proposed GPVTF model performs best when $\varphi_1$ and $\varphi_2$ are set to $0.01$.
Thus, we empirically choose $\beta=1$, $\varphi_1=0.01$ and $\varphi_2=0.01$ as the defaults in this paper. \section{Conclusion} In this paper, we put forward a Generative Partial Visual-Tactile Fused (GPVTF) framework, which addresses the problem of partial visual-tactile object clustering. GPVTF completes the partial visual-tactile data via two generators, each of which synthesizes the missing samples conditioned on the other modality. In this way, the clustering performance can be improved via the completed missing data and the aligned heterogeneous data. Moreover, pseudo-label based fusion KL-divergence losses are leveraged to explicitly encapsulate the clustering task in our network and further update the modality-specific encoders. Extensive experimental results on three public real-world benchmark visual-tactile datasets demonstrate the superiority of our framework compared with several advanced methods. \bibliographystyle{aaai21}
\section{Introduction and main results} \subsection{Introduction} We consider the self-adjoint Dirac operator $H$ on $L^2(\R_+,\C^2)$ given by $$ H y = -i \s_3 y' + i \s_3 Q y,\qq y = \ma y_1 \\ y_2 \am,\qq \s_3 = \ma 1 & 0 \\ 0 & -1 \am, $$ with the Dirichlet boundary condition $$ y_1(0) - y_2(0) = 0. $$ The potential $Q$ has the following form: $$ Q = \ma 0 & q \\ \overline{q} & 0 \am,\qq q \in \cP, $$ where the class $\cP$ is defined for some $\g > 0$ fixed throughout this paper by \begin{definition*} $\cP = \cP_{\g}$ is the metric space of all functions $q \in L^2(\R_+)$ such that $\sup \supp q = \g$ equipped with the metric $$ \rho_{\cP}(q_1,q_2) = \|q_1 - q_2\|_{L^2(0,\g)},\qq q_1,q_2 \in \cP. $$ \end{definition*} It is well known that $\s(H) = \s_{ac}(H) = \R$ (see, e.g., \cite{LS91}). For any $z \in \C$, we introduce the $2 \times 2$ matrix-valued Jost solution $f(x,z) = \left( \begin{smallmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{smallmatrix} \right) (x,z)$ of the Dirac equation $$ f'(x,z) = Q(x) f(x,z) + i z \s_3 f(x,z),\qq (x,z) \in \R_+ \ts \C, $$ which satisfies the standard condition for compactly supported potentials: $$ f(x,z) = e^{i z x \s_3},\qq \forall \qq (x,z) \in [\g,+\iy) \ts \C. $$ We define the Jost function $\psi: \C \to \C$ for the operator $H$ by $$ \psi(z) = f_{11} (0,z) - f_{21} (0,z),\qq z \in \C. $$ It is well known that $\psi$ is entire, $\psi(z) \neq 0$ for any $z \in \ol \C_+$, and it has zeros in $\C_-$, which are called \textit{resonances}; the multiplicity of a resonance is the multiplicity of the corresponding zero of $\psi$ (see, e.g., \cite{IK14b}). Moreover, it was shown in \cite{IK14b} that the resonances are also zeros of the Fredholm determinant and poles of the resolvent of the operator $H$. We introduce the scattering matrix $S:\R \to \C$ for the operator $H$ by $$ S(z) = \frac{\ol\psi(z)}{\psi(z)} = e^{-2 i \arg \psi(z)},\qq z \in \R. $$ The function $S$ admits a meromorphic continuation from $\R$ onto $\C$, since $\psi$ is entire.
Moreover, poles of $S$ are zeros of $\psi$ and hence they are resonances. Note that we sometimes write $\psi(\cdot,q)$, $S(\cdot,q)$, $\ldots$ instead of $\psi(\cdot)$, $S(\cdot)$, $\ldots$ when several potentials are being dealt with. An inverse problem for the operator $H$ consists of recovering the potential $q$ in a specified class from some spectral data. As spectral data one can consider, for instance, the scattering matrix $S$, the Jost function $\psi$ or the resonances. The inverse problem for the operator $H$ in terms of the scattering matrix is intensively studied in connection with the nonlinear Schr{\"o}dinger equation (see, e.g., \cite{ZS71, APT04, DEGM82, FT07}). Inverse problems for the operator $H$ in terms of resonances and the Jost function have been studied much less. For potentials $q \in \cP$ these problems were considered in \cite{KM20a}. Moreover, in that paper, the characterization of the scattering matrix for $q \in \cP$ was given. In order to present these results, we introduce the Fourier transform $\cF$ on $L^2(\R)$ by $$ (\cF g)(k) = \int_{\R} g (s) e^{2iks} ds,\qq k \in \R. $$ Then its inverse $\cF^{-1}$ on $L^2(\R)$ is given by $$ (\cF^{-1} g)(s) = \frac{1}{\pi} \int_{\R} g(k) e^{-2iks} dk,\qq s \in \R. $$ We will use the notation $\hat g = \cF^{-1} g$. Note that if we apply the direct or inverse Fourier transform to a function $g$ defined on $I \ss \R$, then we extend $g$ to the whole line by making it zero outside $I$. Now, we introduce a class of Jost functions from \cite{KM20a}. \begin{definition*} $\cJ = \cJ_{\g}$ is the metric space of all entire functions $\psi$ such that \[ \label{p2e1} \psi(z) = 1 + \cF g(z),\qq z \in \C, \] for some $g \in \cP$ and $\psi(z) \neq 0$ for any $z \in \ol \C_+$, equipped with the metric $$ \rho_{\cJ}(\psi_1,\psi_2) = \| \cF^{-1}(\psi_1 - \psi_2) \|_{\cP},\qq \psi_1,\psi_2 \in \cJ. $$ \end{definition*} We define the circle $\S^1 = \{\,z \in \C \, \mid \, |z| = 1 \, \}$.
Let $g:\R \to \S^1$ be a continuous function such that $g(x) = C + o(1)$ as $x \to \pm \iy$ for some $C \in \S^1$. Then $g = e^{-2i\phi}$ for some continuous $\phi:\R \to \R$. We introduce the winding number $W(g) \in \Z$ by $$ W(g) = \frac{1}{\pi}\left(\lim_{x \to +\iy} \phi(x) - \lim_{x \to -\iy} \phi(x) \right), $$ i.e., $W(g)$ is the number of revolutions of $g(x)$ around $0$ as $x$ runs through $\R$. Now, we introduce the class of scattering matrices from \cite{KM20a}. \begin{definition*} $\cS = \cS_{\g}$ is the metric space of all continuous functions $S:\R \to \S^1$ such that: \begin{enumerate}[label={\roman*)}] \item $W(S) = 0$; \item $S(z) = 1 + \cF F(z)$, $z \in \R$, for some $F \in L^1(\R) \cap L^2(\R)$ such that $\inf \supp F = -\g$; \end{enumerate} equipped with the metric $$ \rho_{\cS}(S_1,S_2) = \|\cF^{-1}(S_1 - S_2)\|_{L^2(-\g,+\iy)} + \|\cF^{-1}(S_1 - S_2)\|_{L^1(-\g,+\iy)},\qq S_1,S_2 \in \cS. $$ \end{definition*} We need the following results from \cite{KM20a} about the inverse problem for the operator $H$ in terms of the Jost function, the scattering matrix and the resonances. \begin{theorem} \label{thm:inv_jost} \begin{enumerate}[label={\roman*)}] \item The mapping $q \mapsto S(\cdot,q)$ from $\cP$ to $\cS$ is a homeomorphism; \item The mapping $q \mapsto \psi(\cdot,q)$ from $\cP$ to $\cJ$ is a homeomorphism; \item The potential $q \in \cP$ is uniquely determined by the zeros of $\psi(\cdot,q) \in \cJ$. \end{enumerate} \end{theorem} This theorem solves the inverse problem in terms of resonances (\textit{uniqueness} and \textit{characterization}) in the following way: the potential $q \in \cP$ is uniquely determined by its resonances, which are the zeros of some $\psi \in \cJ$. That is, a sequence $(k_n)_{n \geq 1}$ with $k_n \in \C_-$, $n \geq 1$, is the sequence of resonances for some $q \in \cP$ if and only if $(k_n)_{n \geq 1}$ are the zeros of some $\psi \in \cJ$.
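As an informal numerical check (not part of the theory), the winding number $W(g)$ defined above can be approximated by sampling $g$ on a large interval and unwrapping one continuous branch of the phase; the toy example $g(x) = e^{-2i\arctan x}$, for which $\phi = \arctan x$ and $W(g) = 1$, is an illustration.

```python
import numpy as np

def winding_number(g_vals):
    """W(g) for dense samples of a continuous g: R -> S^1 with limits at +-infinity.
    Writes g = exp(-2 i phi) and unwraps one continuous branch of phi."""
    phi = -0.5 * np.unwrap(np.angle(g_vals))
    return float(phi[-1] - phi[0]) / np.pi

# Toy example: g = exp(-2 i arctan x), so phi = arctan x and W(g) = 1.
x = np.linspace(-1e4, 1e4, 200_001)
print(round(winding_number(np.exp(-2j * np.arctan(x))), 3))   # -> 1.0
```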
Moreover, it was shown in \cite{KM20a} how to recover the Jost function and the scattering matrix from the resonances, and then the potential can be recovered using the Gelfand-Levitan-Marchenko equations. The next question that can be posed is the \textit{stability} of this inverse problem: how can the resonances of some $q_o \in \cP$ be perturbed so that we again obtain the sequence of resonances of some $q \in \cP$? A related question is the \textit{continuity} of this inverse problem: does the potential depend continuously, in some sense, on the perturbation of the resonances? In \cite{KM20a}, these problems were solved for the perturbation of a finite number of resonances. Namely, it was shown that if we take the sequence of resonances of some $q \in \cP$ and arbitrarily shift a finite number of them, then we obtain the sequence of resonances associated with another potential from $\cP$, and the potential depends continuously on the distance between the resonances. Our main goal is to solve the global stability and continuity problems for the resonances of the operator $H$, when infinitely many resonances are involved. \subsection{Main result} In order to formulate the main result, we introduce the Banach space $\ell^1$ as the set of all sequences of complex numbers $\z = (\z_n)_{n \geq 1}$ equipped with the norm $\displaystyle \|\z\|_{\ell^1} = \sum_{n \geq 1} |\z_n|$. Let $\k = (k_n)_{n \geq 1}$ be a sequence of numbers from $\C_-$ such that $|k_1| \leq |k_2| \leq \ldots$. Then, by $q(\cdot,\k)$, we denote the potential such that $(k_n)_{n \geq 1}$ are its resonances, if such a potential exists. Now, we give our main result. \begin{theorem} \label{thm:main_thm} Let $\k^o = (k_n^o)_{n \geq 1}$ be the zeros of $\psi(\cdot,q_o)$ for some $q_o \in \cP$, arranged so that $0 < |k^o_1| \leq |k^o_2| \leq \ldots$, and let $\r = (\r_n)_{n \geq 1} \in \ell^1$ be such that $k_n = k_n^o + \r_n \in \C_-$ for each $n \geq 1$.
Then there exists a unique $q \in \cP$ such that $\k = (k_n)_{n \geq 1}$ are the zeros of $\psi(\cdot,q)$. Moreover, if $\|\r\|_{\ell^1} \to 0$, then $\|q-q_o\|_{\cP} \to 0$. \end{theorem} \begin{remark} 1) Firstly, this theorem solves the global stability problem for resonances. Namely, it shows that the set of resonances for $q \in \cP$ is closed with respect to $\ell^1$ perturbations. Secondly, it shows that the potential depends continuously on such perturbations. 2) In the case of Schr{\"o}dinger operators on the half-line with compactly supported potentials, a similar result was obtained by Korotyaev in \cite{K04b}. It was shown there that the space of resonances is closed under perturbations $(\r_n)_{n \geq 1}$ such that $\sum_{n \geq 1} |\r_n|^2 n^{2\ve} < \iy$ for some $\ve > 1$. \end{remark} Albeit the methods of this paper are similar to those of \cite{K04b}, they need some adaptation; in particular, we use methods from the theory of Banach algebras. This is due to the following differences between the Dirac and Schr{\"o}dinger cases: \begin{enumerate}[label={(\roman*)}] \item The resonances of Dirac operators are not symmetric with respect to the imaginary line. \item Roughly speaking, the spectral problem for Dirac operators corresponds to the spectral problem for Schr{\"o}dinger operators with distributional potentials. \item The second term in the asymptotic expansion of the Jost function of Dirac operators decreases more slowly as the spectral parameter goes to infinity; this is perhaps the main point. \end{enumerate} The stability of the inverse problem in terms of resonances for Schr{\"o}dinger operators on the half-line with compactly supported potentials was also considered, in a different form, by Marletta, Shterenberg and Weikard \cite{MSW10}. They showed that two potentials are close to each other if their resonances in a circle of radius $R$ are close to each other.
Namely, in this work, the norm $\sup_{x \in [0,\g]} \left| \int_{x}^{\g} (q(t) - \tilde{q}(t)) dt\right|$ was estimated in terms of $R$ and $\ve$, where $|z_n(q) - z_n(\tilde{q})| < \ve$ for any $n \geq 1$ such that $|z_n(q)| < R$. Such results are possibly preferable for numerical applications, since they answer the question of how many resonances we need to know in order to recover the potential with a given accuracy. An extension of this method to Schr{\"o}dinger operators on the real line with compactly supported potentials was obtained by Bledsoe in \cite{B12}. The inverse resonance problem in this case was studied by Korotyaev in \cite{K05}, where it was shown that in this case resonances do not uniquely determine the potential, so some additional data must be added to obtain uniqueness. \subsection{Canonical systems} It is well known that Dirac operators are associated with canonical systems (see, e.g., \cite{GK67}). We consider a canonical system given by \[ \label{p1e7} Jy'(x,z) = z \gh(x) y(x,z),\qq (x,z) \in \R_+ \ts \C,\qq J = \ma 0 & 1 \\ -1 & 0 \am, \] with the Dirichlet boundary condition \[ \label{p1e8} y_1(0,z) = 0,\qq y(x,z) = \ma y_1 \\ y_2 \am(x,z), \] where $\gh: \R_+ \to \cM^+_2(\R)$ is a Hamiltonian and by $\cM^+_2(\R)$ we denote the set of $2 \times 2$ positive-definite self-adjoint matrices with real entries. Now, we introduce the class of Hamiltonians associated with the Dirac operators. By $\cM_2(\R)$ we denote the set of $2 \times 2$ matrices with real entries. \begin{definition*} $\cG = \cG_{\g}$ is the set of functions $\gh:\R_+ \to \cM^+_2(\R)$ such that $$ \gh' \in L^2(\R_+,\cM_2(\R)),\qq \sup \supp \gh' = \g,\qq \gh(0) = I_2 $$ and $\gh$ has the following form \[ \label{p1e12} \gh = \ma \ga & \gb \\ \gb & \frac{1+\gb^2}{\ga} \am, \] where $\ga : \R_+ \to \R_+$ and $\gb : \R_+ \to \R$.
\end{definition*} \begin{remark} If $\gh \in \cG$, then it follows from (\ref{p1e12}) that $$ \gh^*(x) = \gh(x),\qq \det \gh(x) = 1,\qq \ga(x) > 0,\qq \forall x \in \R_+ $$ and $\gh(x)$ is a constant matrix for any $x \geq \g$. \end{remark} The canonical system (\ref{p1e7}), (\ref{p1e8}) with a Hamiltonian $\gh \in \cG$ corresponds to a self-adjoint operator $$ \cK_{\gh} = \gh^{-1} J \frac{d}{dx} $$ in the weighted Hilbert space $L^2(\R_+, \C^2, \gh)$ equipped with the norm $$ \|f\|^2_{L^2(\R_+, \C^2, \gh)} = \int_{\R_+} (\gh(x) f(x), f(x)) dx,\qq f \in L^2(\R_+, \C^2, \gh), $$ where $(\cdot,\cdot)$ is the standard scalar product in $\C^2$ (see, e.g., \cite{R14}). Moreover, we need the following result from \cite{KM20a} (see also \cite{KM20b}). \begin{theorem} \label{thm3} For any $q \in \cP$ there exists a unique $\gh \in \cG$, and for any $\gh \in \cG$ there exists a unique $q \in \cP$, such that the operators $H(q)$ and $\cK(\gh)$ are unitarily equivalent. \end{theorem} Recall the well-known result about the inverse problem for the canonical system (\ref{p1e7}), (\ref{p1e8}) in terms of de Branges spaces (see Theorem 40 in \cite{dB} or Theorems 10, 13 in \cite{R14}). For any canonical system there exists a Hermite-Biehler function $E$, i.e., an entire function such that $|E(z)| > |E(\ol z)|$ for each $z \in \C_+$, and the associated de Branges space $B(E)$ is given by $$ B(E) = \Big\{ F:\C \to \C \, \mid \, \text{$F$ is entire},\, \frac{F}{E},\, \frac{F^{\#}}{E} \in \mH^2(\C_+)\, \Big\}, $$ where $F^{\#}(z) = \ol{F(\ol z)}$ and $\mH^2(\C_+)$ is the Hardy space in the upper half-plane. Moreover, from a de Branges space, one can recover the associated canonical system. We say that a Hermite-Biehler function is \textit{Dirac-type} if it is associated with a canonical system with Hamiltonian $\gh \in \cG$. It follows from Theorem \ref{thm3} that there exists a correspondence between Dirac-type Hermite-Biehler functions and Jost functions.
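For Hamiltonians of the form (\ref{p1e12}), the identity $\det \gh = 1$ stated in the remark above follows by direct computation:

```latex
\det \gh \;=\; \ga \cdot \frac{1+\gb^2}{\ga} - \gb^2 \;=\; (1+\gb^2) - \gb^2 \;=\; 1 .
```

Positive definiteness of $\gh(x)$ then reduces to the single condition $\ga(x) > 0$, since both the determinant and the upper-left entry of $\gh$ are positive.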
We need the following result from \cite{KM20a}. \begin{theorem} \label{thm4} A Hermite-Biehler function $E$ is Dirac-type if and only if \[ \label{p1e10} E(k) = -ie^{-i\g k} \psi(k),\qq k \in \C, \] for some $\psi \in \cJ$. \end{theorem} \begin{remark} Recall that the Jost function is uniquely determined by its zeros. Thus, it follows from (\ref{p1e10}) that a Dirac-type Hermite-Biehler function is also uniquely determined by its zeros. Moreover, other properties of zeros of Jost functions hold true for Dirac-type Hermite-Biehler functions; see \cite{KM20a} for details. \end{remark} Now, using this correspondence, we show that the zeros of a Dirac-type Hermite-Biehler function can be perturbed by an $\ell^1$ sequence and that the function depends continuously on this perturbation. \begin{theorem} \label{thm:herm_pert} Let $E_o$ be a Dirac-type Hermite-Biehler function with zeros $\k^o = (k_n^o)_{n \geq 1}$ arranged so that $0 < |k^o_1| \leq |k^o_2| \leq \ldots$, and let $\r = (\r_n)_{n \geq 1} \in \ell^1$ be such that $k_n = k_n^o + \r_n \in \C_-$ for each $n \geq 1$. Then there exists a unique Dirac-type Hermite-Biehler function $E$ such that $\k = (k_n)_{n \geq 1}$ are the zeros of $E$. Moreover, if $\|\r\|_{\ell^1} \to 0$, then we have $\|\cF^{-1}(E - E_o)\|_{L^2(-\frac{\g}{2},\frac{\g}{2})} \to 0$. \end{theorem} \begin{remark} Note that it follows from the Plancherel theorem (see, e.g., Theorem IX.6 in \cite{RS80}) that $\|\cF^{-1}(E - E_o)\|_{L^2(-\frac{\g}{2},\frac{\g}{2})} \to 0$ if and only if $\|E - E_o\|_{L^2(\R)} \to 0$. \end{remark} \subsection{Literature survey} We briefly discuss known results about resonances. Resonances are considered in various settings; see the articles \cite{F97, H99, K04a, S00, Z87}, the book \cite{DZ19} and the references therein.
Recall that the inverse resonance problem for Schr{\"o}dinger operators with compactly supported potentials was solved by Korotyaev in \cite{K05} for the case of the real line and in \cite{K04a} for the case of the half-line; see also Zworski \cite{Z02} and Brown, Knowles and Weikard \cite{BKW03} concerning uniqueness. Moreover, there are other results about perturbations of the following model (unperturbed) potentials by compactly supported potentials: step potentials were considered by Christiansen \cite{C06}, and periodic and linear potentials were considered by Korotyaev in \cite{K11h} and \cite{K17}. Note also that Schr{\"o}dinger operators with linear potentials are one-dimensional Stark operators, and in \cite{K17} the inverse resonance problem for Stark operators perturbed by compactly supported potentials was solved. The asymptotics of the counting function of resonances for Schr{\"o}dinger operators on the real line with compactly supported potentials was first obtained by Zworski in \cite{Z87}. Results about Carleson measures for resonances were obtained by Korotyaev in \cite{K16}. Global estimates of resonances for the massless Dirac operators on the real line were obtained by Korotyaev in \cite{K14}. Resonances for Dirac operators were also studied by Iantchenko and Korotyaev in \cite{IK14b} for the massive Dirac operators on the half-line and in \cite{IK14a} for the massless Dirac operators on the real line under the condition $q' \in L^1(\R)$. In \cite{IK15}, Iantchenko and Korotyaev considered the radial Dirac operator. The inverse resonance problem for the massless Dirac operators with compactly supported potentials was solved by Korotyaev and Mokeev in \cite{KM20a} for the case of the half-line and in \cite{KM20b} for the case of the real line.
There are a number of papers dealing with other related problems for one-dimensional Dirac operators; for instance, resonances for Dirac fields in black holes were described by Iantchenko, see, e.g., \cite{I18}. As we have shown above, Dirac operators can be rewritten as canonical systems, and for these systems the inverse problem can be solved in terms of de Branges spaces, which can be parametrized by Hermite-Biehler functions (see \cite{dB, R14}). There exist many papers devoted to de Branges spaces and canonical systems. In particular, they are used in the inverse spectral theory of Schr{\"o}dinger and Dirac operators (see, e.g., \cite{R02}). In our paper we have used the connection between Jost and Hermite-Biehler functions. A similar connection in the case of Schr{\"o}dinger operators was given by Baranov, Belov and Poltoratski in \cite{BBP} (see also Makarov and Poltoratski \cite{P}). In \cite{KM20a,KM20b}, the canonical systems associated with Dirac operators on the half-line and on the real line were considered. In particular, it was shown how to recover the potential of the Dirac operator from the Hamiltonian of the unitarily equivalent canonical system. \section{Preliminaries} Before we prove the main theorem, we recall some well-known facts about entire functions and Banach algebras, prove several technical lemmas and recall properties of the resonances of the operator $H$. \subsection{Entire functions} Recall that an entire function $f(k)$ is said to be of \textit{exponential type} if there exist constants $\t,C > 0$ such that $|f(k)| \leq C e^{\t |k|}$, $k \in \C$. We introduce the Cartwright classes of entire functions $\cE_{Cart}(\a,\b)$ by \begin{definition*} For any $\a,\b \in \R$, $\cE_{Cart}(\a,\b)$ is the class of entire functions $f$ of exponential type such that $$ \int_{\R} \frac{\log(1+|f(k)|)dk}{1 + k^2} < \iy,\qq \t_+(f) = \a,\qq \t_-(f) = \b, $$ where $\displaystyle \t_{\pm}(f) = \lim \sup_{r \to +\iy} \frac{\log |f(\pm i r)|}{r}$.
Let also $\cE_{Cart} = \cE_{Cart}(0,2\g)$. \end{definition*} If $f \in \cE_{Cart}(\a,\b)$ for some $\a,\b \in \R$, then it has the Hadamard factorization (see, e.g., pp. 127-130 in \cite{L96}). Let $p \geq 0$ be the multiplicity of the zero $k = 0$ of $f$. We denote by $(k_n)_{n \geq 1}$ the zeros of $f$ in $\C \sm \{ 0 \}$, counted with multiplicity and arranged so that $0 < |k_1| \leq |k_2| \leq \ldots$. Then $f$ has the Hadamard factorization \[ \label{p2e11} f(k) = C k^p e^{i \varkappa k} \lim_{r \to +\iy} \prod_{|k_n| \leq r} \left(1 - \frac{k}{k_n}\right),\qq k \in \C, \] where the product converges uniformly on compact subsets of $\C$ and \[ \label{p2e12} \varkappa = \frac{\b - \a}{2},\qq C = \frac{f^{(p)}(0)}{p!},\qq \sum_{n \geq 1} \frac{|\Im k_n|}{|k_n|^2} < +\iy,\qq \exists \lim_{r \to +\iy} \sum_{|k_n| \leq r} \frac{1}{k_n} \neq \iy. \] For functions from $\cE_{Cart}(\a,\b)$ there is Levinson's theorem about the distribution of their zeros (see, e.g., p. 58 in \cite{Koo98}). For any $f \in \cE_{Cart}(\a,\b)$, $\a,\b \in \R$, we introduce the counting function $n(r,f)$ of its zeros in the disc of radius $r \geq 0$ by $$ n(r,f) = \# \{ \, k \in \C \, \mid \,f(k) = 0,\, |k| \leq r \,\}. $$ \begin{theorem}[Levinson] \label{thm:lev} Let $f \in \cE_{Cart}(\a,\b)$ for some $\a,\b \in \R$. Then we have \[ \label{p2e13} n(r,f) = \frac{\a + \b}{\pi} r + o(r) \] as $r \to +\iy$. \end{theorem} We also need the following Lindel{\"o}f theorem (see, e.g., p. 21 in \cite{Koo98}). \begin{theorem}[Lindel{\"o}f] \label{thm:lind} Let $(k_n)_{n \geq 1}$ be arranged so that $0 < |k_1| \leq |k_2| \leq |k_3| \leq \ldots$ and let $$ n(r) = \# \{\, k_m,\, m \geq 1\, \mid \, |k_m| \leq r \,\}. $$ Suppose that $n(r) \leq Kr$ for some $K \geq 0$ and for any $r \geq 0$, and suppose that $$ \Bigl| \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr| $$ remains bounded as $r \to \iy$. 
Then the product $$ C(k) = \lim_{r \to +\iy} \prod_{|k_n| \leq r} \left(1 -\frac{k}{k_n}\right),\qq k \in \C, $$ is an entire function of exponential type. \end{theorem} \begin{remark} Using this theorem, we can construct an entire function of exponential type from its zeros. Note that the Lindel{\"o}f theorem is usually formulated for the canonical product of the form $$ C_1(k) = \prod_{n \geq 1} \left(1 -\frac{k}{k_n}\right) e^{\frac{k}{k_n}},\qq k \in \C. $$ We have replaced $C_1$ by $C$ using the standard arguments (see, e.g., p. 130 in \cite{L96}). \end{remark} We also need the following simple lemma about the asymptotics of the counting function of a sequence under bounded perturbation. \begin{lemma} \label{lm:count_func} Let the sequences $(k_n)_{n \geq 1}$ and $(k^o_n)_{n \geq 1}$ be arranged so that $$ 0 < |k_1| \leq |k_2| \leq |k_3| \leq \ldots,\qq 0 < |k^o_1| \leq |k^o_2| \leq |k^o_3| \leq \ldots, $$ and let $$ n(r) = \# \{\, k_m,\, m \geq 1\, \mid \, |k_m| \leq r \,\},\qq n_o(r) = \# \{\, k^o_m,\, m \geq 1\, \mid \, |k^o_m| \leq r \,\}. $$ Suppose also that \begin{enumerate}[label = {(\roman*)}] \item $\displaystyle \sup_{n \geq 1} |k_n - k_n^o| = s < \iy$, \item $n_o(r) = Cr + o(r)$ as $r \to \iy$ for some $C \in \R$. \end{enumerate} Then we have $n(r) = Cr + o(r)$ as $r \to \iy$. \end{lemma} \begin{proof} Since $\displaystyle \sup_{n \geq 1} |k_n - k_n^o| = s$, we have $$ n_o(r-2s) \leq n(r) \leq n_o(r+2s) $$ for any $r > 0$. Using $n_o(r) = Cr + o(r)$ as $r \to \iy$, we get $$ C(r-2s) + o(r) \leq n(r) \leq C(r+2s) + o(r) $$ as $r \to \iy$, which yields that $n(r) = Cr + o(r)$ as $r \to \iy$. \end{proof} \subsection{Resonances} Recall that the resonances of the operator $H$ are the zeros of the associated Jost function, which is an entire function of exponential type. Moreover, using the Paley-Wiener theorem (see, e.g., p. 30 in \cite{Koo98}), we have that an entire function of the form (\ref{p2e1}) belongs to the Cartwright class. 
Thus, the resonances and the Jost function of the operator $H$ have all the properties discussed above, and we have the following corollary (see Corollary 1.2 in \cite{KM20a}). \begin{corollary} \label{cor:jost_cart} Let $q \in \cP$. Then we have $\psi(\cdot,q) \in \cE_{Cart}$ and it satisfies (\ref{p2e11}-\ref{p2e13}). \end{corollary} We also need the following result about the position of resonances (see Theorem 1.3 in \cite{KM20a}). \begin{theorem} \label{thm:res_pos} Let $q \in \cP$ and let $(k_n)_{n \geq 1}$ be its resonances. Let $\ve > 0$. Then there exists a constant $C = C(\ve,q) \geq 0$ such that the following inequality holds true for each $n \geq 1$: \[ \label{p1e1} 2 \g \Im k_n \leq \ln \left( \ve + \frac{C}{|k_n|} \right). \] In particular, for any $A > 0$, there are only finitely many resonances in the strip \[ \label{p1e2} \{ \, k \in \C \, \mid \, 0 > \Im k > -A \, \}. \] \end{theorem} \begin{remark} This theorem describes the so-called forbidden domain for resonances. Moreover, if $q' \in L^1(\R_+)$, then estimate (\ref{p1e1}) and the forbidden domain (\ref{p1e2}) can be given in a more detailed form (see Theorem 2.7 in \cite{IK14b}). \end{remark} Using Theorem \ref{thm:res_pos}, we obtain the following useful corollary. \begin{corollary} \label{cor:im_part} Let $q \in \cP$ and let $(k_n)_{n \geq 1}$ be its resonances. Then we have $\Im k_n \to -\iy$ as $n \to \iy$. \end{corollary} \subsection{Banach algebras} Recall that we have introduced the Fourier transform $\cF$ and its inverse $\cF^{-1}$ on $L^2(\R)$ by $$ \begin{aligned} (\cF g)(k) &= \int_{\R} g (s) e^{2iks} ds,\qq k \in \R,\\ (\cF^{-1} g)(s) &= \frac{1}{\pi} \int_{\R} g(k) e^{-2iks} dk,\qq s \in \R. \end{aligned} $$ Moreover, we have introduced the notation $\hat g = \cF^{-1} g$. 
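The normalization of this transform pair (the factor $\frac{1}{\pi}$ and the frequency $2k$) can be checked numerically. The following sketch is our own illustration, not part of the paper: the Gaussian test function and the quadrature grids are arbitrary choices, and it verifies that $\cF^{-1}\cF g = g$ with these conventions.

```python
import numpy as np

# Transform pair from the text:
#   (F g)(k)      = int_R g(s) e^{2iks} ds
#   (F^{-1} h)(s) = (1/pi) int_R h(k) e^{-2iks} dk
# Test function: g(s) = exp(-s^2), whose transform is
# (F g)(k) = sqrt(pi) * exp(-k^2).
s = np.arange(-6.0, 6.0, 0.01)
k = np.arange(-4.0, 4.0, 0.01)
ds, dk = 0.01, 0.01

g = np.exp(-s**2)

# Forward transform by a simple Riemann sum (the integrand decays fast,
# so the truncation and discretization errors are far below the tolerance).
Fg = (g[None, :] * np.exp(2j * np.outer(k, s))).sum(axis=1) * ds
assert np.allclose(Fg.real, np.sqrt(np.pi) * np.exp(-k**2), atol=1e-5)

# Inverse transform with the 1/pi normalization recovers g.
g_rec = (Fg[None, :] * np.exp(-2j * np.outer(s, k))).sum(axis=1) * dk / np.pi
assert np.allclose(g_rec.real, g, atol=1e-5)
```

With the usual pair $\tilde g(\omega) = \int g(s)e^{i\omega s}ds$, the substitution $\omega = 2k$ produces exactly the extra factor $\frac{1}{\pi}$ in the inverse, which is what the check above confirms.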
We introduce the following Banach space $$ \begin{aligned} \cL_{+} &= L^2(\R_{+}) \cap L^1(\R_{+}),\qq \| \cdot \|_{\cL_{+}} = \| \cdot \|_{L^2(\R_{+})} + \| \cdot \|_{L^1(\R_{+})} \end{aligned} $$ and the following Banach algebras with pointwise multiplication $$ \begin{aligned} \hat\cL_{+} &= \{ \,\cF g \, \mid \, g \in \cL_{+} \,\},\qq \| \cF g \|_{{\hat \cL}_{+}} = \| g \|_{\cL_{+}},\\ {\cW}_{+} &= \{ \, c + g \, \mid \, (c,g) \in \C \ts \hat \cL_{+} \,\},\qq \| c + g \|_{{\cW}_{+}} = |c| + \| g \|_{\hat \cL_{+}}. \end{aligned} $$ It is well-known that ${\cW}_+$ is a unital Banach algebra (see, e.g., Chapter 17 in \cite{GRS64}). Moreover, due to the Paley-Wiener theorem and the Riemann-Lebesgue lemma (see, e.g., Theorem IX.7 in \cite{RS80}), each element of $\hat \cL_+$ or $\cW_+$ is a bounded continuous function on $\ol \C_+$. The spectrum of $f \in \cW_+$ is given by $$ \s(f) = \{\, f(k) \,\mid\, k \in \ol \C_+ \,\} \cup \{\,\lim_{k \to \iy} f(k)\,\}. $$ Thus, $f \in \cW_+$ is invertible in $\cW_+$ if and only if $f(k) \neq 0$ for any $k \in \ol \C_+$ and $\lim_{k \to \iy} f(k) \neq 0$. Recall that in each Banach algebra there exists a holomorphic functional calculus, that is, the following theorem holds true (see, e.g., Chapter 6 in \cite{GRS64}). \begin{theorem} \label{thm5} Let $\vp$ be an analytic function on some open domain $D$ and let $f \in \cW_+$ be such that $\s(f) \ss D$. Suppose that $\G \ss D$ is a closed rectifiable curve such that $\s(f)$ is contained in the interior of the domain bounded by $\G$. Then there exists a unique $\vp(f) \in \cW_+$ given by $$ \vp(f) = \frac{1}{2\pi i} \int_{\G} (\l - f)^{-1} \vp(\l) d\l, $$ where the integral does not depend on the choice of $\G$, subject only to the conditions stated, and $\vp(f)$ depends continuously on $f \in \cW_+$ such that $\s(f)$ is contained in the interior of the domain bounded by $\G$. 
\end{theorem} \begin{remark} The continuity of this mapping follows from the resolvent identity $$ (\l - f_1)^{-1} - (\l - f_2)^{-1} = (\l - f_1)^{-1}(f_1 - f_2)(\l - f_2)^{-1},\qq f_1,f_2 \in \cW_+. $$ \end{remark} Using Theorem \ref{thm5}, we can consider analytic functions on $\cW_+$. In particular, we introduce the logarithm and the exponential mappings on some subspaces of $\cW_+$. We introduce the following subspaces of $\cW_+$: $$ \begin{aligned} \cW_{log} &= \{\, f = 1 + g \, \mid \, g \in \hat \cL_+,\, f(x) \notin \R_- \cup \{0\},\, x \in \ol \C_+ \, \},\\ \cW_{exp} &= \{\, f = 1 + g \, \mid \, g \in \hat \cL_+,\, |f(x)| > 0,\, x \in \ol \C_+ \, \}. \end{aligned} $$ Let $\exp: f \mapsto e^{f(\cdot)}$, $f \in \hat \cL_+$, and $\log : f \mapsto \log(f(\cdot))$, where the branch of the logarithm is fixed by the condition $\log(x) \in \R$ for any $x > 0$. Thus, we get the following corollary of Theorem~\ref{thm5}. \begin{corollary} \label{p3c1} The mappings $\log : \cW_{log} \to \hat \cL_+$ and $\exp : \hat \cL_+ \to \cW_{exp}$ are continuous. \end{corollary} Moreover, we estimate the norm of the logarithm mapping. \begin{lemma} \label{lm:log_estimate} Let $f \in \hat \cL_+$ be such that $\| f \|_{\hat \cL_+} < \frac{1}{4}$. Then we have $\log(1 + f) \in \hat \cL_+$ and $$ \| \log (1+f) \|_{\hat \cL_+} < C \| f \|_{\hat \cL_+}, $$ where $C > 0$ does not depend on $f$. \end{lemma} \begin{proof} Let $\|f\|_{\hat \cL_+} = r < \frac{1}{4}$ and let $\G = \{\, z \in \C \, \mid \, |z| = 2r \,\}$. Since $|\l| \leq \|f\|_{\hat \cL_+}$ for any $\l \in \s(f)$, we have that $\s(f)$ is contained in the interior of the domain bounded by $\G$. Recall that the analytic branch of the logarithm $\log(\cdot)$ on $\C \sm (-\iy,0]$ is fixed by the condition $\log(z) \in \R$ for any $z > 0$. Due to $r < \frac{1}{4}$, we have $1 + \l \in \C \sm (-\iy,0]$ for any $\l \in \G$. 
Thus, using Theorem \ref{thm5}, we have $$ \log(1 + f) = \frac{1}{2\pi i} \int_{\G} (\l - f )^{-1} \log(1 + \l) d\l, $$ which yields \[ \label{log_estimate:eq1} \|\log(1 + f)\|_{\hat \cL_+} \leq \frac{1}{2\pi} \int_{\G} \|(\l - f )^{-1}\|_{\hat \cL_+} |\log(1 + \l)| |d\l|. \] Firstly, we estimate $\|(\l - f )^{-1}\|_{\hat \cL_+}$. Since $|\l| = 2r$ for any $\l \in \G$, we have \[ \label{log_estimate:eq2} \|(\l - f )^{-1}\|_{\hat \cL_+} = \frac{1}{|\l|} \Bigl\|\left(1 - \frac{f}{\l} \right)^{-1}\Bigr\|_{\hat \cL_+} \leq \frac{1}{|\l|} \frac{1}{1 - \frac{\left\| f\right\|_{\hat \cL_+}}{|\l|}} = \frac{1}{2r} \frac{1}{1 - \frac{r}{2r}} = \frac{1}{r}, \qq \l \in \G. \] Secondly, we estimate $|\log(1 + \l)|$. Since $\log(1+\l) = \l + o(\l)$ as $\l \to 0$, there exists a constant $C > 0$ such that \[ \label{log_estimate:eq3} |\log(1 + \l)| \leq C|\l| \] for any $\l \in \C$ such that $|\l| < 1/2$. Substituting (\ref{log_estimate:eq2}) and (\ref{log_estimate:eq3}) in (\ref{log_estimate:eq1}) and using $|\l| = 2r$ for any $\l \in \G$, we get $$ \|\log(1 + f)\|_{\hat \cL_+} \leq \frac{1}{2\pi} \int_{\G} \frac{1}{r} C r |d\l| = \frac{C}{2\pi} \int_{\G} |d\l| = 2Cr = 2C\| f \|_{\hat \cL_+}. $$ \end{proof} \section{Proof of the main theorem} First, we consider a simple function which will be used in the proof of the main theorem. \begin{lemma} \label{lm:wiener_estimate} Let $k_1,k_2 \in \C_-$, $\r = k_2 - k_1$ and let $$ g(k) = 1 + \frac{\r}{k_1 - k},\qq k \in \C \sm \{k_1\}. $$ Then we have $g \in \cW_+$, $g(k) \notin (-\iy,0]$ for any $k \in \ol \C_+$ and $$ \|g-1\|_{\hat \cL_+} = \frac{|\r|}{|\Im k_1|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k_1|^{\frac{1}{2}}} \right). $$ \end{lemma} \begin{proof} Firstly, we show by contradiction that $g(k) \notin (-\iy,0]$ for any $k \in \ol \C_+$. Let $$ g(k) = 1 + \frac{\r}{k_1 - k} = \frac{k_2-k}{k_1 - k} = -c $$ for some $c \in [0,+\iy)$ and $k \in \ol \C_+$. Then we have $$ k_2 - k = -c(k_1-k). 
$$ Considering the imaginary part of this identity and using $c \in \R$, we get $$ \Im k_2 - \Im k = -c(\Im k_1 - \Im k). $$ Due to $c, \Im k \geq 0$ and $\Im k_1, \Im k_2 < 0$, we have $$ \Im k_2 - \Im k < 0,\qq -c(\Im k_1 - \Im k) \geq 0, $$ which yields $$ \Im k_2 - \Im k \neq -c(\Im k_1 - \Im k), $$ and then we have a contradiction. Secondly, we show that $g \in \cW_+$. Using the Jordan lemma and $k_1 \in \C_-$, we obtain $$ \cF^{-1} (g-1)(s) = \frac{\r}{\pi} \int_{\R} \frac{e^{-2iks}}{k_1 - k} dk = 2i\r e^{-2ik_1s}\vt(s),\qq s \in \R. $$ Let $k_1 = x_1 + iy_1$. Then we have \[ \label{wiener_estimate:eq1} |\cF^{-1} (g-1)(s)| = 2|\r|e^{2y_1s} \vt(s),\qq s \in \R. \] Using (\ref{wiener_estimate:eq1}) and the fact that $y_1 < 0$, we get \[ \label{wiener_estimate:eq2} \|\cF^{-1} (g-1)\|_{L^1(\R)} = 2|\r|\int_0^{\iy} e^{2y_1s} ds = \frac{|\r|}{|y_1|}. \] Similarly, we obtain \[ \label{wiener_estimate:eq3} \|\cF^{-1} (g-1)\|_{L^2(\R)} = 2|\r|\left|\int_0^{\iy} e^{4y_1s} ds\right|^{\frac{1}{2}} = \frac{|\r|}{|y_1|^{\frac{1}{2}}}. \] Combining (\ref{wiener_estimate:eq2}) and (\ref{wiener_estimate:eq3}), we have $$ \|g-1\|_{\hat \cL_+} = \frac{|\r|}{|y_1|^{\frac{1}{2}}}\left(1 + \frac{1}{|y_1|^{\frac{1}{2}}} \right). $$ \end{proof} Secondly, we show the stability of Jost functions in $\cJ$ under $\ell^1$ perturbations of their zeros. \begin{theorem} \label{thm:jost_pert} Let $f_o \in \cJ$ with zeros $(k_n^o)_{n \geq 1}$ arranged so that $0 < |k_1^o| \leq |k_2^o| \leq \ldots$ and let $\r = (\r_n)_{n \geq 1} \in \ell^1$ be a sequence of complex numbers such that $k_n = k_n^o + \r_n \in \C_-$ for any $n \geq 1$. Then there exists a unique $f \in \cJ$ such that $(k_n)_{n \geq 1}$ are its zeros. Moreover, if $\| \r \|_{\ell^1} \to 0$, then we have $\r_{\cJ}(f,f_o) \to 0$. \end{theorem} \begin{proof} We show that there exists an entire function whose zeros are $(k_n)_{n \geq 1}$. Let $$ n(r) = \# \{\, k_m,\, m \geq 1\, \mid \, |k_m| \leq r \,\}. 
$$ Due to Corollary \ref{cor:jost_cart}, we have $n_o(r) = \frac{2\g}{\pi} r + o(r)$ as $r \to \iy$. Since $\r \in \ell^1$, we have $$ \sup_{m \geq 1} |k_m -k_m^o| = \sup_{m \geq 1} |\r_m| = s < \iy. $$ Thus, using Lemma \ref{lm:count_func}, we get $n(r) = \frac{2\g}{\pi} r + o(r)$ as $r \to \iy$. Now, we show that $\displaystyle \Bigl| \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr|$ is bounded as $r \to \iy$. Firstly, we consider the imaginary part of this sum. Using $k_n = k^o_n + \r_n$, we get \[ \label{jost_pert:eq1} \begin{aligned} \Bigl| \Im \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr| &\leq \sum_{|k_n| \leq r} \frac{|\Im k_n|}{|k_n|^2} = \sum_{|k_n| \leq r} \frac{|\Im k_n^o|}{|k_n^o|^2} \frac{|\Im k_n|}{|\Im k_n^o|}\frac{|k_n^o|^2}{|k_n|^2}\\ &= \sum_{|k_n| \leq r} \frac{|\Im k_n^o|}{|k_n^o|^2} \left| 1 + \frac{\Im \r_n}{\Im k_n^o}\right| \left|1 + \frac{\r_n}{k_n^o}\right|^{-2} = \sum_{|k_n| \leq r} \frac{|\Im k_n^o|}{|k_n^o|^2} \z_n, \end{aligned} \] where we have introduced $\displaystyle \z_n = \left| 1 + \frac{\Im \r_n}{\Im k_n^o}\right| \left|1 + \frac{\r_n}{k_n^o}\right|^{-2}$, $n \geq 1$. Due to Corollary \ref{cor:im_part}, we have $|k_n^o| \to \iy$ and $|\Im k_n^o| \to \iy$ as $n \to \iy$. Recall that $\displaystyle \sup_{n \geq 1} |\r_n| = s < \iy$. Hence, we obtain $$ \left|\frac{\Im \r_n}{\Im k_n^o}\right| \to 0,\qq \left|\frac{\r_n}{k_n^o}\right| \to 0 $$ as $n \to \iy$, and then there exists $C \in \R$ such that $|\z_n| < C$ for any $n \geq 1$. Substituting this estimate in (\ref{jost_pert:eq1}), we get $$ \Bigl| \Im \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr| < C \sum_{|k_n| \leq r} \frac{|\Im k_n^o|}{|k_n^o|^2}. $$ Using (\ref{p2e12}) and Corollary \ref{cor:jost_cart}, we have $\displaystyle \sum_{n \geq 1} \frac{|\Im k_n^o|}{|k_n^o|^2} < \iy$ and then the sum $\displaystyle \sum_{|k_n| \leq r} \frac{|\Im k_n^o|}{|k_n^o|^2}$ is bounded as $r \to \iy$. Thus, the sum $\displaystyle \Bigl| \Im \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr|$ is bounded as $r \to \iy$. 
Secondly, we consider the real part of this sum. We have $$ \Bigl| \Re \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr| \leq \Bigl| \Re \sum_{|k_n| \leq r} \frac{1}{k^o_n} \Bigr| + \sum_{|k_n| \leq r} \left| \frac{1}{k_n} - \frac{1}{k_n^o} \right|. $$ Due to (\ref{p2e12}) and Corollary \ref{cor:jost_cart}, the sum $\displaystyle \sum_{|k_n| \leq r} \frac{1}{k^o_n}$ converges as $r \to \iy$ and then $\displaystyle \Bigl| \Re \sum_{|k_n| \leq r} \frac{1}{k^o_n} \Bigr|$ is bounded as $r \to \iy$. Using $k_n = k_n^o + \r_n$, we get $$ \sum_{|k_n| \leq r} \left| \frac{1}{k_n} - \frac{1}{k_n^o} \right| = \sum_{|k_n| \leq r} \frac{|\r_n|}{|k_n||k_n^o|} \leq \sum_{|k_n| \leq r} \frac{|\r_n|}{\left|1 + \frac{\r_n}{k_n^o}\right|} \frac{1}{|k_n^o|^2}. $$ Since $\displaystyle \sup_{n \geq 1} |\r_n| = s < \iy$ and $|k_n^o| \to \iy$ as $n \to \iy$, we have $\frac{|\r_n|}{|k_n^o|} \to 0$ as $n \to \iy$ and then there exists $C \in \R$ such that $$ \frac{|\r_n|}{\left|1 + \frac{\r_n}{k_n^o}\right|} < C,\qq n \geq 1. $$ By Corollary \ref{cor:im_part}, we have $\Im k_n^o \to -\iy$ as $n \to \iy$. Hence, it follows from $\displaystyle \sum_{n \geq 1} \frac{|\Im k_n^o|}{|k_n^o|^2} < \iy$ that the series $\displaystyle \sum_{n \geq 1} \frac{1}{|k_n^o|^2}$ converges and then $\displaystyle \sum_{|k_n| \leq r} \frac{1}{|k_n^o|^2}$ is bounded as $r \to \iy$. Using these estimates, we see that $\displaystyle \sum_{|k_n| \leq r} \left| \frac{1}{k_n} - \frac{1}{k_n^o} \right|$ is bounded as $r \to \iy$, which yields that $\displaystyle \Bigl| \Re \sum_{|k_n| \leq r} \frac{1}{k_n} \Bigr|$ is bounded as $r \to \iy$. Now, it follows from Theorem \ref{thm:lind} that the function $f:\C \to \C$ given by $$ f(k) = f(0) e^{ik\g} \lim_{r \to \iy} \prod_{|k_n| \leq r} \left( 1 - \frac{k}{k_n}\right),\qq k \in \C, $$ is an entire function of exponential type, where $\displaystyle f(0) = f_o(0) \lim_{r \to +\iy} \prod_{|k_n| \leq r} \frac{k_n}{k^o_n}$. 
The last product converges, since $$ \sum_{n \geq 1} \Bigl| 1 - \frac{k_n}{k^o_n} \Bigr| = \sum_{n \geq 1} \Bigl| \frac{\r_n}{k^o_n} \Bigr| \leq \Bigl( \sum_{n \geq 1} |\r_n|^2 \Bigr)^{\frac{1}{2}} \Bigl(\sum_{n \geq 1} \frac{1}{|k_n^o|^2} \Bigr)^{\frac{1}{2}} \leq \sum_{n \geq 1} |\r_n| \Bigl(\sum_{n \geq 1} \frac{1}{|k_n^o|^2} \Bigr)^{\frac{1}{2}} < \iy. $$ Here we used the H{\"o}lder inequality, $\r \in \ell^1 \ss \ell^2$ and $\displaystyle \sum_{n \geq 1} \frac{1}{|k_n^o|^2} < \iy$. Now, we show that $f \in \cW_+$. Since $f_o \in \cW_+$ and $f = \frac{f}{f_o} f_o$, it is sufficient to show that $\frac{f}{f_o} \in \cW_+$. In order to get this result, we introduce \[ \label{jost_pert:eq10} F(k) = \log \left( \frac{f(k)}{f_o(k)} \right),\qq k \in \R. \] Using the Hadamard factorization for $f$ and $f_o$, we have \[ \label{jost_pert:eq4} \begin{aligned} F(k) &= \log\left( \frac{f(0)}{f_o(0)} \lim_{r \to +\iy} \prod_{|k_n| \leq r} \frac{1 - \frac{k}{k_n}}{1 - \frac{k}{k^o_n}} \right) = \log\left( \lim_{r \to +\iy} \prod_{|k_n| \leq r} \left(1 + \frac{\r_n}{k^o_n - k} \right)\right)\\ &= \lim_{r \to +\iy} \sum_{|k_n| \leq r} \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \end{aligned} \] for any $k \in \R$. Now, we show that this series converges absolutely in $\hat \cL_+$. By Lemma \ref{lm:wiener_estimate}, we have \[ \label{jost_pert:eq3} \left\| \frac{\r_n}{k^o_n - k} \right\|_{\hat \cL_+} = \frac{|\r_n|}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right). \] Recall that $|\Im k^o_n| \to \iy$ and $\r_n \to 0$ as $n \to \iy$. Thus, it follows from (\ref{jost_pert:eq3}) that $\left\| \frac{\r_n}{k^o_n - k} \right\|_{\hat \cL_+} \to 0$ as $n \to \iy$. Let $N \in \N$ be such that $\left\| \frac{\r_n}{k^o_n - k} \right\|_{\hat \cL_+} < \frac{1}{4}$ for any $n > N$. 
Thus, using Lemma \ref{lm:log_estimate} and estimate (\ref{jost_pert:eq3}), we obtain $$ \left\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \right\|_{\hat \cL_+} < \frac{C|\r_n|}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right) $$ for any $n > N$, where the constant $C > 0$ does not depend on $n$. Using (\ref{jost_pert:eq4}), we get \begin{multline} \label{jost_pert:eq6} \sum_{n \geq 1} \Bigl\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \Bigr\|_{\hat \cL_+} \leq\\ \sum_{n = 1}^{N} \Bigl\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \Bigr\|_{\hat \cL_+} + \sum_{n > N} \frac{C|\r_n|}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right). \end{multline} Since $|\Im k^o_n| \to \iy$ as $n \to \iy$ and $|\Im k^o_n| \neq 0$ for any $n \geq 1$, there exists a constant $C_1 > 0$ such that \[ \label{jost_pert:eq7} \frac{1}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right) < C_1 \] for any $n \geq 1$. Due to Lemma \ref{lm:wiener_estimate} and Corollary \ref{p3c1}, we have $\log \left(1 + \frac{\r_n}{k^o_n - k} \right) \in \hat \cL_+$ and then there exists a constant $C_2 > 0$ such that \[ \label{jost_pert:eq8} \Bigl\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \Bigr\|_{\hat \cL_+} < C_2 \] for any $1 \leq n \leq N$. Substituting (\ref{jost_pert:eq7}) and (\ref{jost_pert:eq8}) in (\ref{jost_pert:eq6}), we obtain \[ \label{jost_pert:eq9} \sum_{n \geq 1} \Bigl\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \Bigr\|_{\hat \cL_+} \leq N C_2 + C C_1 \sum_{n > N} |\r_n|. \] Since $(\r_n)_{n \geq 1} \in \ell^1$, we have $\sum_{n > N} |\r_n| < \iy$ and then the series in (\ref{jost_pert:eq9}) converges. Thus, we have $F \in \hat \cL_+$. Now, it follows from Corollary \ref{p3c1} that $\exp(F) \in \cW_{exp}$. Recall that $f_o \in \cW_{exp}$. Thus, using (\ref{jost_pert:eq10}), we get $f = \exp(F) f_o \in \cW_{exp}$. Now, we show that $f \in \cJ$. 
Since $f \in \cW_{exp}$, it is bounded on $\R$ and then $$ \int_{\R} \frac{\log(1 + |f(k)|) dk}{1 + k^2} < \iy. $$ Recall that $f$ is an entire function of exponential type, which yields that $f \in \cE_{Cart}(\a,\b)$ for some $\a,\b \in \R$. Moreover, we have $n(r,f) = \frac{2\g}{\pi} r + o(r)$ as $r \to \iy$. Due to Theorem \ref{thm:lev}, we have $\t_+(f) + \t_-(f) = 2\g$. Since $f \in \cW_+$, it follows from the Paley-Wiener theorem that $\t_+(f) = 0$ and then we have $\t_-(f) = 2\g$. Hence, we get $\t_+(f-1) = 0$ and $\t_-(f-1) = 2\g$. Moreover, we have shown above that $f - 1 \in \hat \cL_+$ and then, by the Plancherel theorem, we get $f - 1 \in L^2(\R)$. Thus, using the Paley-Wiener theorem, we obtain $$ f(k) - 1 = \int_0^{\g} g(s) e^{2iks} ds,\qq k \in \R, $$ for some $g \in \cP$. Due to $f \in \cW_{exp}$, we have $f(k) \neq 0$ for any $k \in \ol \C_+$, which yields that $f \in \cJ$. Finally, we show that $\r_{\cJ}(f,f_o) \to 0$ as $\| \r \|_{\ell^1} \to 0$. If $\| \r \|_{\ell^1} < \ve$, then we have $|\r_n| < \ve$ for any $n \geq 1$. Using Lemma \ref{lm:wiener_estimate} and estimate (\ref{jost_pert:eq7}), we get $$ \left\| \frac{\r_n}{k^o_n - k} \right\|_{\hat \cL_+} = \frac{|\r_n|}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right) < \frac{\ve}{|\Im k^o_n|^{\frac{1}{2}}}\left(1 + \frac{1}{|\Im k^o_n|^{\frac{1}{2}}} \right) < C_1 \ve $$ for any $n \geq 1$ and then we get $$ \left\| \frac{\r_n}{k^o_n - k} \right\|_{\hat \cL_+} < \frac{1}{4} $$ for any $n \geq 1$ and $\ve < \frac{1}{4C_1}$. Thus, we can choose $N = 0$ in (\ref{jost_pert:eq9}) for $\ve < \frac{1}{4C_1}$, which yields \[ \label{jost_pert:eq11} \left\| F \right\|_{\hat \cL_+} \leq \sum_{n \geq 1} \Bigl\| \log \left(1 + \frac{\r_n}{k^o_n - k} \right) \Bigr\|_{\hat \cL_+} \leq C C_1 \sum_{n \geq 1} |\r_n| = C C_1 \| \r \|_{\ell^1}. \] It follows from Corollary \ref{p3c1} that the mapping $F \mapsto \exp(F)$ from $\hat \cL_+$ to $\cW_{exp}$ is continuous. 
Since $\exp(0) = 1$, we have $\|\exp(F) - 1\|_{\hat \cL_+} \to 0$ as $\left\| F \right\|_{\hat \cL_+} \to 0$, and then it follows from inequality (\ref{jost_pert:eq11}) that $\|\exp(F) - 1\|_{\hat \cL_+} \to 0$ as $\| \r \|_{\ell^1} \to 0$. Now, we consider $\r_{\cJ}(f,f_o)$. By the definition of the metric $\r_{\cJ}$, we have $$ \r_{\cJ}(f,f_o) = \|f-f_o\|_{\hat \cL_+} = \|\exp(F) f_o - f_o\|_{\hat \cL_+} \leq \|f_o\|_{\cW_+} \|\exp(F) - 1\|_{\hat \cL_+}. $$ Thus, it follows from this inequality that $\r_{\cJ}(f,f_o) \to 0$ as $\| \r \|_{\ell^1} \to 0$. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:main_thm}}] Let $\k^o = (k_n^o)_{n \geq 1}$ be the zeros of $\psi_o = \psi(\cdot,q_o)$ for some $q_o \in \cP$, arranged so that $0 < |k^o_1| \leq |k^o_2| \leq \ldots$, and let $\r = (\r_n)_{n \geq 1} \in \ell^1$ be such that $k_n = k_n^o + \r_n \in \C_-$ for each $n \geq 1$. Due to Theorem \ref{thm:inv_jost}, we have $\psi_o \in \cJ$ and then, by Theorem \ref{thm:jost_pert}, there exists a unique $\psi_1 \in \cJ$ such that $(k_n)_{n \geq 1}$ are the zeros of $\psi_1$. Now, it follows from Theorem \ref{thm:inv_jost} that there exists a unique $q \in \cP$ such that $\psi_1 = \psi(\cdot,q)$. Moreover, if $\r_{\cJ}(\psi_1,\psi_o) \to 0$, then we have $\r_{\cP}(q,q_o) \to 0$. Due to Theorem \ref{thm:jost_pert}, we have $\r_{\cJ}(\psi_1,\psi_o) \to 0$ as $\|\r\|_{\ell^1} \to 0$, which yields $\r_{\cP}(q,q_o) \to 0$. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:herm_pert}}] Let $E_o$ be a Dirac-type Hermite-Biehler function, let $\k^o = (k_n^o)_{n \geq 1}$ be its zeros arranged so that $0 < |k^o_1| \leq |k^o_2| \leq \ldots$, and let $\r = (\r_n)_{n \geq 1} \in \ell^1$ be such that $k_n = k_n^o + \r_n \in \C_-$ for each $n \geq 1$. We introduce $\psi_o(k) = i e^{i\g k} E_o(k)$, $k \in \C$. 
Due to Theorem \ref{thm4}, the function $\psi_o \in \cJ$ and then, by Theorem \ref{thm:jost_pert}, there exists a unique $\psi \in \cJ$ such that $(k_n)_{n \geq 1}$ are the zeros of $\psi$ and $\r_{\cJ}(\psi,\psi_o) \to 0$ as $\|\r\|_{\ell^1} \to 0$. Thus, it follows from Theorem \ref{thm4} that there exists a unique Dirac-type Hermite-Biehler function $E$ such that $\k = (k_n)_{n \geq 1}$ are its zeros, given by $E(k) = -ie^{-i\g k}\psi(k)$, $k \in \C$. Now, we show the continuity. Using well-known properties of the Fourier transform, we get $$ \|\cF^{-1}(E - E_o)\|_{L^2(-\frac{\g}{2},\frac{\g}{2})} = \|\cF^{-1}(e^{-i \g k} (\psi - \psi_o))\|_{L^2(-\frac{\g}{2},\frac{\g}{2})} = \|\cF^{-1}(\psi - \psi_o)\|_{L^2(0,\g)} = \r_{\cJ}(\psi,\psi_o), $$ which yields that $\|\cF^{-1}(E - E_o)\|_{L^2(-\frac{\g}{2},\frac{\g}{2})} \to 0$ as $\|\r\|_{\ell^1} \to 0$. \end{proof} \footnotesize \no {\bf Acknowledgments.} D. M. is supported by the RFBR grant No. 19-01-00094. \medskip
\section{Introduction} There is an increasing interest in deep learning for different pattern classification and recognition tasks \cite{sahbiicassp2015,sahbiicassp2019,sahbiacm2000}. These parametric models rely on deep neural networks, composed of several convolutional, pooling and fully connected layers, that capture different levels of abstraction in the analyzed patterns \cite{goodfellow2016deep}. These models have been popular in the analysis of vectorial data, i.e., data sitting on top of regular domains such as images \cite{alexnet2012,inception2015,sahbicassp2016a,sahbiccv2017,sahbiPR2019,resnet2016,mobilenet2017,squeezenet2016}. However, the extension of these models to non-regular domains, such as graphs, remains a major challenge even though interesting solutions are currently emerging~\cite{kipf2016semi,bruna2013spectral,sahbiBMVC2019,defferrard2016convolutional,gao2018large,huang2018adaptive,monti2017geometric}. Indeed, the difficulty in analyzing non-vectorial data stems from the ambiguity in defining usual operations, namely convolutions. Whereas achieving convolution using sliding windows in regular domains, such as images, is a well-defined operation, there is no clear definition of sliding windows in general graphs \cite{monti2017geometric}; besides, the number and the order of nodes that intervene in the receptive fields of convolutions may change dramatically across different graph instances. \\ \indent Early graph convolutional network (GCN) methods \cite{gori2005new,micheli2009neural,scarselli2008graph,wu2019comprehensive} and their variants (see for instance \cite{li2015gated,dai2018learning,bacciu2018,zhang2018,ying2018,genie2019,xu2018}) are rather spatial and seek to learn graph representations by iteratively propagating node features (a.k.a.\ representations, descriptions or signals) through their neighbors using recurrent neural architectures until a stationary point is reached. 
These spatial methods also include recurrent gated networks \cite{scarselli2008graph,li2015gated,dai2018learning} that share the same convolutional parameters through layers, and composition-based convolutional networks \cite{hamilton2017inductive} that consider different parameters. However, on highly irregular graphs, convolutions are ill-posed as the notions of translation and filter support (i.e., receptive field) cannot be consistently defined. Existing attempts to address these issues achieve node sorting and efficient sampling of neighboring nodes in order to define the receptive field during graph convolutions \cite{chen2017stochastic} and to make it similar to regular (grid-like) domains \cite{atwood2016diffusion,niepert2016learning,gao2018large}. Other solutions operate differently \cite{hamilton2017inductive,monti2017geometric,niepert2016learning,gao2018large,zhang2018gaan}; first, they describe nodes by aggregating their neighbors into fixed-length features prior to applying convolution (based on inner products) to the aggregated features. \\ \noindent On the other hand, spectral methods provide interesting alternatives that make convolutions well defined \cite{bruna2013spectral,defferrard2016convolutional,kipf2016semi,henaff2015deep,li2018adaptive,levie2018cayleynets,dual2018}. These methods rely on the Fourier transform, which projects the signal of a given graph using the spectral decomposition of its Laplacian prior to performing convolution in the Fourier domain, and then back-projects the result into the input domain; in particular, the method in \cite{defferrard2016convolutional} makes it possible to project graph signals onto an orthogonal Chebyshev basis prior to achieving convolution. An extension in \cite{kipf2016semi} reduces the Chebyshev polynomial using a first-order approximation, which provides a spatially localized convolution that is equivalent to spatial methods. 
A variant in \cite{chen2018fastgcn} interprets the graph convolutions in \cite{kipf2016semi} as integral transforms of embedding functions under probability measures and uses Monte Carlo sampling to efficiently and consistently estimate the integrals. Huang et al.\ \cite{huang2018adaptive} propose an adaptive layer-wise sampling approach based on variance reduction in order to accelerate the training of ChebyshevNet~\cite{kipf2016semi}, where sampling for a lower layer is conditioned on a top one. Nonetheless, most of these spectral methods suffer from several drawbacks: the eigendecomposition of the Laplacian, besides being computationally expensive, is sensitive to any small perturbation of the input graphs (which may result from intra-class variability). Moreover, the learned filters are domain dependent and cannot be transferred to graphs with high topological variations.\\ \noindent Besides the aforementioned issues, the accuracy of both spatial and spectral GCNs also relies on the discrimination power of the input graph signal. For highly nonlinear graph signals, relying on convolutions in the input space may limit the discrimination power of the learned convolutional representations and may result in limited accuracy. Furthermore, sorting using automorphisms is not always consistent across different graph instances, while aggregation based on averaging (when achieved in the input space) may dilute input node representations prior to convolution. An explicit expansion of the input node representations may enhance the discrimination power, but comes at the expense of a substantial increase in the number of training parameters (and thereby the risk of overfitting) as well as an increase in the computational complexity, both in space and time. 
Therefore, one should consider, instead, an {\it implicit} mapping of the input graph signal into a (high or possibly infinite dimensional) reproducing kernel Hilbert space ({\it RKHS})~\cite{vapnik1998} and achieve averaging aggregation and convolution in that space, in order to enhance the representational power of nodes as well as the learned graph representations, while remaining permutation agnostic. This mapping scheme has been proven effective in kernel methods, and particularly in support vector machines (SVMs) (see for instance~\cite{sahbirr2002,kernels2004,sahbirr2004,sahbivisual2004,sahbisoft2008,sahbitnnls2017,sahbijstars2017,sahbiccv2013,sahbineuro2007,sahbicip2014,sahbipami2011}), and it is extended, in our paper, to GCNs.\\ \indent Considering these challenges, we introduce in this paper a {\it dual} formulation of GCNs based on kernels, which maps graph signals from an input space into a high dimensional Hilbert space. This mapping is implicitly defined using positive semi-definite kernels that enhance the discrimination power of the learned graph representations, without explicitly increasing the dimensionality of the input graph signals nor the number of training parameters\footnote{In contrast to \cite{hamid2014compact,kar2012random,le2013fastfood,vedaldi2012efficient,rahimi2008random,dai2014scalable}, which may increase the number of parameters in the model and the risk of overfitting.}. This is beneficial when handling low-dimensional raw signals, such as 3D skeleton graphs in action recognition~\cite{ntu2016,SBU12}; indeed, the low dimensionality of these data makes the Bayes risk of the underlying classification task intrinsically high, and this requires increasing the dimensionality of the input raw signal. Moreover, our GCN achieves convolutions without explicitly realigning nodes in the receptive fields of the learned graph filters with those of the input graphs, thereby making convolutions permutation agnostic. 
We cast the problem of filter design as kernel learning with the particularity of using standard kernels while training only their support vectors; this scheme of learning the support vectors (as a part of kernel design) is conceptually different from the two major families of kernel learning techniques, namely non-parametric \cite{vo2012transductive,sahbiPR2012} and parametric ones~\cite{gonen2011multiple,cortes2009learning,sahbiTIP2017}\footnote{In "non-parametric" training, the number of parameters follows exactly the size of training data (e.g., nonlinear SVMs) while in "parametric" training, this number is fixed independently (e.g., linear SVMs). In "semi-parametric" training, only a fraction of the parameters follows proportionally the size of training data.}. Finally, extensive experiments on the challenging task of action recognition show the high gain of our kernel-based GCNs w.r.t standard baselines as well as the related work. \def{\bf A}{{\bf A}} \def{\bf A}^k{{\bf A}^k} \def{\bf I}{{\bf I}} \def{\bf X}{{\bf X}} \def{\bf B}{{\bf B}} \def{\cal K}{{\bf K}} \def{\bf U}{{\bf U}} \def{\bf W}{{\bf W}} \def{\cal S}{{\cal S}} \def{\cal N}{{\cal N}} \def{\cal G}{{\cal G}} \def{\cal V}{{\cal V}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def{\bf z}{{\bf z}} \section{Graph convolutional networks} Let ${\cal S}=\{{\cal G}_i=({\cal V}_i,{\cal E}_i)\}_i$ denote a collection of graphs with ${\cal V}_i$, ${\cal E}_i$ being respectively the nodes and the edges of ${\cal G}_i$. Each graph ${\cal G}_i$ (denoted for short as ${\cal G}=({\cal V},{\cal E})$) is endowed with (i) a signal $\{s(u) \in {\cal X}: \ u \in {\cal V}\}$ (with ${\cal X}=\mathbb{R}^D$ being an input space) and (ii) a row-stochastic adjacency matrix ${\bf A}$ with each entry ${\bf A}_{uu'}>0$ iff $(u,u') \in {\cal E}$ and $0$ otherwise. Our goal is to design a novel graph convolutional network that returns both the representation and the classification of ${\cal G}$. 
\def{\cal K}{{\cal K}} \subsection{Standard graph convolutional networks}\label{standardGCN} Consider ${\cal G}=({\cal V},{\cal E})$, $g_\theta=({\cal V}_\theta,{\cal E}_\theta)$ as two graphs with $|{\cal V}_\theta| \ll |{\cal V}|$ and $|{\cal E}_\theta| \ll |{\cal E}|$. Following standard GCNs (see for instance~\cite{monti2017geometric}), the spatial convolution of ${\cal G}$ with a graph $g_\theta$ at a given node $u \in {\cal V}$ is defined as \begin{equation}\label{initial00} ({\cal G} \star g_\theta)_u = \sigma( {\cal K}_\theta(u)), \end{equation} with \begin{equation}\label{initial0} {\cal K}_\theta(u) = \bigg \langle \sum_{u'} s(u') . [{\bf A}^r]_{uu'}, w_\theta \bigg \rangle, \end{equation} here $\sigma(.)$ is a nonlinear activation (taken in practice as ReLU), $w_\theta \in {\cal X}$ corresponds to the filter parameters of the graph $g_\theta$ (also referred to as a graphlet) and $[{\bf A}^r]_{uu'}$ is the ${u'}^{th}$ column of the ${u}^{th}$ row of the $r$-hop adjacency matrix ${\bf A}^r$. In this definition, the left-hand side term of the inner product in Eq.~\ref{initial0} aggregates the neighbors of $u$ into a single vector prior to multiplying this vector by $w_\theta$. \\ In spite of being agnostic to any arbitrary permutation of nodes in ${\cal G}$, the above definition suffers from limited discrimination power, as the signal information in the neighborhood system $\{{\cal N}_r(u)\}_u$ of ${\cal G}$ is mixed during convolution. In what follows, we consider a dual convolutional operator, based on kernels, that overcomes this limitation and provides more discriminating convolutional representations while still being agnostic to any arbitrary permutation of nodes in graphs. 
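As a quick illustration, the aggregation-then-inner-product form of Eq.~\ref{initial0} can be sketched in a few lines of NumPy; the function name and the dense-matrix representation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def standard_gcn_conv(S, A, W, r=1):
    """Spatial graph convolution of Eq. (initial0): for each node u,
    aggregate the signals of its r-hop neighbors (weighted by the rows
    of A^r) into a single vector, then take inner products with the K
    filters w_theta and apply a ReLU activation.

    S : (n, D) node signals, A : (n, n) row-stochastic adjacency,
    W : (K, D) filter parameters; returns (n, K) activations."""
    Ar = np.linalg.matrix_power(A, r)   # r-hop adjacency matrix
    agg = Ar @ S                        # sum_{u'} s(u') . [A^r]_{uu'}
    return np.maximum(agg @ W.T, 0.0)   # ReLU(<aggregate, w_theta>)
```

Note that the aggregation mixes the neighbors' signals before the inner product, which is exactly the loss of discrimination power discussed above.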
\subsection{Our kernel-based graph convolutional networks} \indent Considering $\kappa$ as a symmetric positive definite function (i.e., $\exists \psi: {\cal X} \rightarrow {\cal H}$, with $\psi$ being an implicit mapping that takes graph signals from an input space $\cal X$ to a high dimensional Hilbert space $\cal H$, s.t., $\kappa(s(u'),s(v))=\langle \psi(s(u')), \psi(s(v))\rangle$) and for a particular setting of $w_\theta$ as $\frac{1}{|{\cal V}_\theta|}\sum_{i=1}^N \alpha_i^\theta \psi(s(v_i^\theta))$\footnote{This setting is related to the representer theorem widely used in kernel methods~\cite{representer2001,wahba1971}. The latter states that many problems have optimal solutions that live in a finite dimensional span of training data mapped into a high dimensional Hilbert space, and this makes it possible to define kernel-based algorithms independently of the (high or infinite) dimensionality of these Hilbert spaces.}, with $\{v_i^\theta\}_i \subset {\cal V}_\theta$, $\{\alpha_i^\theta\}_i \subset \mathbb{R}$; the convolutional operator defined in Eq.~\ref{initial0} can be rewritten as \begin{equation}\label{initial} {\cal K}_\theta(u)= \frac{1}{|{\cal N}_r(u)|.|{\cal V}_\theta|} \sum_{u' \in {\cal N}_r(u)} \bigg(\sum_{i=1}^N \alpha_i^\theta \ \kappa(u',v_i^\theta)\bigg), \end{equation} here ${\cal N}_r(u)$ is the set of $r$-hop neighbors of $u$ and $\kappa(s(.),s(.))$, $\psi(s(.))$ are written for short as $\kappa(.,.)$ and $\psi(.)$ respectively. In the above definition, $\{v_i^\theta\}_{i,\theta} $ are referred to as support vectors and $\alpha=\{\alpha_i^\theta\}_{i,\theta} $ as the underlying mixing parameters. 
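A direct (naive) evaluation of Eq.~\ref{initial} may be sketched as follows; the function and argument names are hypothetical, and the explicit double loop makes the quadratic cost discussed next visible:

```python
import numpy as np

def kernel_conv(S, A_r_mask, V, alpha, kappa):
    """Kernel-based convolution of Eq. (initial): K_theta(u) is the
    average, over the r-hop neighbors u' of u and the N support
    vectors v_i of the filter, of alpha_i * kappa(s(u'), s(v_i)).

    S : (n, D) node signals, A_r_mask : (n, n) boolean r-hop adjacency,
    V : (N, D) support vectors, alpha : (N,) mixing parameters."""
    n = S.shape[0]
    out = np.zeros(n)
    for u in range(n):
        nbrs = np.flatnonzero(A_r_mask[u])
        vals = [alpha[i] * kappa(S[up], V[i])
                for up in nbrs for i in range(len(alpha))]
        # normalize by |N_r(u)| . |V_theta| (here N = |V_theta|)
        out[u] = np.sum(vals) / (len(nbrs) * len(alpha))
    return out
```

Since only sums of kernel values are taken, no alignment between node pairs is needed, which is the permutation-agnostic property used below.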
Since ${\cal K}_\theta(u)$ is defined as the sum of the kernel values between all possible signal pairs taken from ${\cal N}_r(u) \times {\cal V}_\theta$, its evaluation does not require any explicit alignment between these pairs and it is thereby still invariant to any arbitrary permutation (including rotations) of nodes in ${\cal V}$ and ${\cal V}_\theta$. \\ \indent The strength of this kernel trick resides in its capacity to handle nonlinear data, as node representations are mapped into a high dimensional (and more discriminating) space ${\cal H}=\mathbb{R}^H$. For instance, when using the polynomial kernel $\kappa(s(u),s(v)) = \langle s(u),s(v)\rangle^p$, its underlying mapping is explicitly defined as $\psi(s(u))=s(u) \otimes \dots \otimes s(u)$ (with $\otimes$ being the Kronecker tensor product applied $p-1$ times); see also \cite{sahbiijmir2016,maji2012efficient,vedaldi2012efficient}. As the dimensionality $H$ of this explicit map grows exponentially w.r.t $p$ and polynomially w.r.t $D$, the kernel form is computationally more efficient. Indeed, considering a non-parametric setting with a fixed set of support vectors $\{v_i^\theta\}_i$ taken from the training set (i.e., $\cup \ {\cal V}_j$), when only $\{\alpha_i^\theta\}_i$ are allowed to vary in $w_\theta=\frac{1}{|{\cal V}_\theta|}\sum_{i} \alpha_i^\theta \psi(v_i^\theta)$ and when $H \gg |\{\alpha_i^\theta\}_i|$, the kernel trick presented earlier provides a computational and a generalization advantage (i.e., the convolution in Eq.~\ref{initial} has fewer parameters compared to the one in Eq.~\ref{initial0}). However, this may still come at the expense of a {\it quadratic} complexity when naively evaluating $\{\kappa(.,.)\}$; for mid (and even small) scale training problems with a large number of nodes in $\cup \ {\cal V}_j$, this complexity becomes clearly intractable. 
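The equivalence between the kernel evaluation in $\mathbb{R}^D$ and the inner product of the explicit map in $\mathbb{R}^{D^p}$ can be checked numerically for the degree-2 polynomial kernel, whose explicit map is the Kronecker product mentioned above (the function name is an illustrative assumption):

```python
import numpy as np

def poly_feature_map(x):
    """Explicit map of the degree-2 polynomial kernel: the Kronecker
    product x (x) x, of dimensionality D^2 (D^p in general, i.e.,
    exponential in p)."""
    return np.kron(x, x)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
lhs = poly_feature_map(x) @ poly_feature_map(y)  # inner product in R^{D^2}
rhs = (x @ y) ** 2                               # kernel evaluation in R^D
assert np.isclose(lhs, rhs)
```

The right-hand side costs $O(D)$ per evaluation, while the explicit map costs $O(D^p)$, which is the computational argument for the kernel form.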
\\ \noindent One question that arises is how to make this approach parametric (or at least semi-parametric); in other words, how to maintain the kernel trick advantage (in Eq.~\ref{initial}) without significantly increasing the computational cost w.r.t the total number of nodes in $\cup \ {\cal V}_j$. Solutions such as sampling and the reduced set technique \cite{burges1997improving} are both limited; on the one hand, sampling may generate a smaller fixed set of support vectors $\{v_i^\theta\}_i$, but a biased one (i.e., too limited to comprehensively make $w_\theta$ a universal filter approximator). On the other hand, the reduced set technique requires first building an initial expensive model $w_\theta$ before reducing its complexity by solving a difficult pre-image optimization problem \cite{burges1997improving,sahbiphd2003,sahbijmlr2006}. Our alternative, in this work, is to control the size of $\{v_i^\theta\}_i$ while allowing the entries of $\{v_i^\theta\}_i$ to vary as a part of the end-to-end GCN (and also kernel) learning; this makes it possible to model a larger class of filters $\{w_\theta\}$ that better fit the classification task at hand (see later experiments). \\ \indent Note that one may consider a kernel approximation $\hat{\psi}(.)$ s.t. $\kappa(.,.)\approx \langle \hat{\psi}(.),\hat{\psi}(.) \rangle$ (as done in \cite{lu2014scale,sahbijiuicassp2016,hamid2014compact,kar2012random,le2013fastfood,vedaldi2012efficient,rahimi2008random,dai2014scalable,cho2009kernel}, which seek to handcraft or learn shallow/deep explicit maps whose inner products approximate the original kernel values) and use Eq.~\ref{initial0} instead. However, this approximation usually results in very high dimensional mappings (and hence in a lot of training parameters in $w_\theta$), especially when considering highly nonlinear (and also discriminative) kernels such as the gaussian, histogram intersection and triangular \cite{sahbistcv2003,sahbiicip2002}. 
Put differently, even when learning both $\{v_i^\theta\}_i$ and $\{\alpha_i^\theta\}_i$, the dual formulation in Eq.~\ref{initial} is computationally more efficient and less subject to overfitting, as the dimensionality $H$ of $\hat{\psi}$ is often $\gg |{\cal V}_\theta| \times D$ for the widely used kernels, including the gaussian and histogram intersection. In sum, our method targets learning kernels in a (semi-)parametric setting by allowing the support vectors of these kernels to be learned (instead of being taken from training data), and this is also conceptually very different from multiple kernel learning \cite{gonen2011multiple}. \subsection{Neural consistency and architecture design} In contrast to usual convolutional operators on graphs (including Eq.~\ref{initial0}), the one in Eq.~\ref{initial} cannot be straightforwardly evaluated using standard neural units\footnote{i.e., those based on the standard perceptron (inner product operators) followed by nonlinear activations.} as kernels may have general forms. Hence, modeling Eq.~\ref{initial} requires a careful design; our goal in this paper is not to change the definition of neural units, but instead to adapt Eq.~\ref{initial} in order to make it consistent with the usual definition of neural units. In what follows, we introduce the overall architecture associated to ${\cal K}_\theta(.)$ (and the whole GCN) for different kernels including linear, polynomial, gaussian and histogram intersection, as well as a more general class of shift invariant kernels.\\ \def{\bf x}{{\bf x}} \def{\bf y}{{\bf y}} \begin{definition}[Neural consistency] Let $u_{.,d}$ (resp. $v_{.,d}$) denote the $d^{th}$ dimension of the signal in a given node $u$ (resp. $v$). 
For a given (fixed or learned) $v$, a kernel $\kappa$ is referred to as ``neural-consistent'' if \begin{equation}\label{expansion0} \kappa(u,v) = \sigma_3\bigg(\sum_d \sigma_2(\sigma_1(u_{.,d}).\omega_d)\bigg), \end{equation} with $\omega_d=\sigma_4(v_{.,d})$ and $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$ being arbitrary real-valued activation functions.\label{define000} \end{definition} Considering the above definition, the following kernels are neural consistent: linear $\langle u,v\rangle$, polynomial $\langle u,v\rangle^p$, and $\tanh(a \langle u,v\rangle+b)$. Neural consistency is straightforward for inner product-based kernels (namely linear, polynomial and tanh), while for shift-invariant ones such as the gaussian, one may obtain neural consistency by rewriting $\exp(-\beta \|u-v\|_2^2) = \sigma_3\big(\sum_d \sigma_2(\sigma_1(u_{.,d}).\omega_d)\big)$ with $\sigma_1(.)=\exp(.)$, $\sigma_2(.) = \log(.)^2$, $\sigma_3(.)= \exp(-\beta (.))$ and $\omega_d=\exp(-v_{.,d})$. Other kernels (including Laplacian, inverse multiquadric, power, log, Cauchy\footnote{See for instance \cite{genton2001classes} for a taxonomy of the widely used functions in kernel machines.}) are also neural consistent (see table~\ref{taxi} for the setting of their $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$). \\ For the histogram intersection kernel, $\sum_d \min(u_{.,d},v_{.,d}) = \sum_d 1-\max(1-u_{.,d},1-v_{.,d})$ and one may easily obtain $\sum_d 1-\max(1-u_{.,d},1-v_{.,d}) \approx \sigma_3\big(\sum_d \sigma_2(\sigma_1(u_{.,d}).\omega_d)\big)$ using $\sigma_1(.)=\exp(\exp(\beta(1-(.))))$, $\sigma_2(.) = -\frac{1}{\beta}\log(\log(.))+1$, $\sigma_3(.)=(.)$ and $\omega_d=\sigma_1(v_{.,d})$ (for a sufficiently large $\beta$). In the following section, we discuss the implementation details of our global GCN architecture built on top of these neural consistent kernels. 
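The neural-consistent factorization of the gaussian kernel given above can be verified numerically: with $\sigma_1 = \exp$, $\omega_d = \exp(-v_{.,d})$, one has $\sigma_2(\sigma_1(u_{.,d})\,\omega_d) = (u_{.,d}-v_{.,d})^2$, and $\sigma_3$ turns the sum into the gaussian value. A small sketch (variable names are illustrative):

```python
import numpy as np

beta = 0.5
sigma1 = np.exp                         # applied to each signal dimension
sigma2 = lambda t: np.log(t) ** 2       # recovers (u_d - v_d)^2
sigma3 = lambda t: np.exp(-beta * t)    # final activation
sigma4 = lambda t: np.exp(-t)           # reparameterized weight omega_d

rng = np.random.default_rng(1)
u, v = rng.normal(size=8), rng.normal(size=8)
neural = sigma3(np.sum(sigma2(sigma1(u) * sigma4(v))))
gauss = np.exp(-beta * np.sum((u - v) ** 2))
assert np.isclose(neural, gauss)
```

The same pattern (with the settings of table~\ref{taxi}) applies to the other distance-based kernels.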
\begin{table*} \begin{center} \resizebox{1.02\textwidth}{!}{ \begin{tabular}{cc||c|cccc} & & $\kappa(u,v)$ & $\sigma_1(t)$ & $\sigma_2(t)$ & $\sigma_3(t)$ & $\sigma_4(t)$ \\ \hline \hline \multirow{4}{*}{\rotatebox{38}{\scriptsize Inner product based}} & Linear & $\langle u,v \rangle$ & $t$ & $t$ & $t$ & $t$ \\ & Polynomial &$\langle u,v \rangle^p$ & $t$ & $t$ & $t^p$ & $t$ \\ &Sigmoid & $\frac{1}{1+\exp(-\beta \langle u,v\rangle )}$ & $t$ & $t$ & $\frac{1}{1+\exp(-\beta t)}$ & $t$ \\ &tanh &$\tanh (a \langle u,v \rangle + b$) & $t$ & $t$ & $\tanh(a t + b)$ & $t$ \\ \hline \multirow{7}{*}{\rotatebox{38}{\scriptsize Distance based}} & Gaussian & $\exp(-\beta \|u-v \|^2)$ & $\exp(t)$ & $\log(t)^2$& $\exp(-\beta t)$ & $\exp(-t)$ \\ & Laplacian &$\exp(-\beta \|u-v \|)$& $\exp(t)$ & $\log(t)^2$& $\exp(-\beta \sqrt{t})$ & $\exp(-t)$ \\ &Power & $-\|u-v\|^p$ &$\exp(t)$ & $\log(t)^2$& $-t^{p/2}$ & $\exp(-t)$ \\ & Inverse Multi-quadric &$\frac{1}{\sqrt{\|u-v\|^2+b^2}}$ & $\exp(t)$ & $\log(t)^2$ & $\frac{1}{\sqrt{t+b^2}}$ & $\exp(-t)$ \\ &Log & $-\log(\|u-v\|^p+1)$ & $\exp(t)$ & $\log(t)^2$ & $-\log(t^{p/2}+1)$ & $\exp(-t)$ \\ & Cauchy & $\frac{1}{1+\frac{\|u-v\|^2}{\sigma^2}}$ & $\exp(t)$ & $\log(t)^2$ & $\frac{1}{1+\frac{t}{\sigma^2}}$ & $\exp(-t)$ \\ \hline & Histogram intersection & $\sum_d \min(u_{.,d},v_{.,d})$ & $\exp(\exp(\beta(1-t)))$& $-\frac{1}{\beta} \log(\log(t))+1$ & $t$ & $\sigma_1(t)$ \end{tabular}} \end{center} \caption{This table shows the setting of $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$ for different kernel functions.} \label{taxi} \end{table*} \begin{figure*}[hpbt] \centering \resizebox{0.95\textwidth}{!}{\input{archi.pdf_t}} \caption{This figure shows the architecture of our kernel-based graph convolutional network; see a detailed description of this architecture in the ``Implementation Part'' of section~\ref{impl} {\bf (better to zoom the PDF version).}} \label{fig01} \end{figure*} \subsubsection*{\bf Implementation}\label{impl} 
Fig.~\ref{fig01} shows the architecture of our deep net including kernel evaluation and the weighted convolution blocks. The former block is fed with the input graph signal $s({\cal V})$ (denoted for short as ${\cal V}$) and the adjacency matrix ${\bf A}$ following the same arbitrary order both in ${\cal V}$ and ${\bf A}$. In the first layer, the $\sigma_1$ activation is first applied to all the dimensions of the signal ${\cal V}$, then each dimension of the resulting activated signal $\sigma_1({\cal V})$ is multiplied, in the second layer, by the $K\times N$ (reparameterized) weights of the node filters $\{\sigma_4({\cal V}_\theta)\}_\theta$ (as shown in Eqs.~\ref{initial}, ~\ref{expansion0}) prior to applying the $\sigma_2$ activation; here $K$ corresponds to the number of filters and $N$ to the number of nodes in the expansion of each filter. Note that these weights are shared across the different nodes in ${\cal V}$. In the third layer, the results of the previous one are pooled across dimensions, resulting in $N\times K$ kernel values per node in ${\cal V}$. These kernel values are activated by $\sigma_3()$ and fed to the weighted convolutional block in order to evaluate their weighted linear combinations, in the fourth layer, resulting in $K$ pooled kernel values per node (see again Eqs.~\ref{initial},~\ref{expansion0}). These pooled kernel values are crossed, in the fifth layer, with the nonzero entries of the adjacency matrix ${\bf A}$ in order to make the receptive field of the convolutional operation local. Note also that the activation functions $\log()$ and $\exp()$ are successively applied in the fourth and fifth layers in order to make this crossing operation neural consistent. 
Indeed, one may rewrite Eq.~\ref{initial} as \begin{equation}\label{initial3} \small \begin{array}{lll} \displaystyle {\cal K}_\theta(u) &= & \displaystyle \frac{1}{|{\cal V}_\theta|} \sum_{u'} \exp\bigg (\log {\bf A}_{uu'} + \log \sum_{i=1}^N \alpha_i^\theta \ \kappa(u',v_i^\theta)\bigg), \end{array} \end{equation} which corresponds to the neural consistent form shown in Eq.~\ref{expansion0}. The results of this fifth layer are pooled, in the sixth layer, through the neighborhood systems $\{{\cal N}_r(u)\}_u$ and fed to the ReLU activation, resulting in $K$ features per node in ${\cal V}$. Finally, these node features are used for final pooling and softmax classification. \section{Experimental validation}\label{section4} We evaluate the performance of our kernel-based GCN (KGCN) on the challenging task of action recognition, using the SBU Kinect dataset \cite{SBU12}. The latter is an interaction dataset acquired using the Microsoft Kinect sensor; it includes in total 282 video sequences\footnote{In contrast to other visual analysis tasks (e.g.,\cite{sahbicip2014,sahbiigarss2012,sahbiicip2009,sahbiECCV2014,sahbicassp2013,sahbigarss11,sahbiTIP2013,sahbicip2001,sahbicisp2001}), images/videos are already processed and skeleton data are available.} belonging to $C=8$ categories: ``approaching'', ``departing'', ``pushing'', ``kicking'', ``punching'', ``exchanging objects'', ``hugging'', and ``hand shaking'', with variable durations, viewpoint changes and interacting individuals (see examples in Fig. \ref{fig1}). In all these experiments, we use the same evaluation protocol as the one suggested in \cite{SBU12} (i.e., train-test split) and we report the average accuracy over all the classes of actions. \def{\hat{\w}}{{\hat{\w}}} \subsection{Video skeleton description}\label{graphc} \indent Given a video ${\cal V}$ in SBU as a sequence of skeletons, each keypoint in these skeletons defines a labeled trajectory through successive frames (see Fig.~\ref{fig1}). 
Considering a finite collection of trajectories $\{v_j\}_j$ in ${\cal V}$, we process each trajectory using {\it temporal chunking}: first we split the total duration of a video into $M$ equally-sized temporal chunks ($M=8$ in practice), then we assign the keypoint coordinates of a given trajectory $v_j$ to the $M$ chunks (depending on their time stamps) prior to concatenating the averages of these chunks; this produces the description of $v_j$ (again denoted as $s(v_j) \in \mathbb{R}^{D}$, with $D=3 \times M$), and $\{s(v_j)\}_j$ constitutes the raw description of nodes in a given video ${\cal V}$. Note that two trajectories $v_j$ and $v_k$, with similar keypoint coordinates but arranged differently in time, will be considered as very different when using temporal chunking. Note also that, besides being compact and discriminant, this temporal chunking gathers the advantages -- while discarding the drawbacks -- of two widely used families of techniques, namely {\it global averaging techniques} (invariant but less discriminant) and {\it frame resampling techniques} (discriminant but less invariant). Put differently, temporal chunking produces discriminant raw descriptions that preserve the temporal structure of trajectories while being {\it frame-rate} and {\it duration} agnostic. \begin{figure}[hpbt] \begin{center} \centerline{\scalebox{0.29}{\input{processing.pdf_t}}} \vspace{-0.75cm}\caption{ This figure shows the whole keypoint tracking and description process.} \label{fig1} \end{center} \end{figure} \subsection{Performances and comparison} We trained our kernel-based GCN end-to-end for 3000 epochs with a batch size equal to $50$, a momentum of $0.9$, and we set the learning rate (denoted as $\nu$) to be iteratively inversely proportional to the speed of change of the cross-entropy loss used to train our network; when this speed increases (resp. decreases), $\nu$ decreases as $\nu \leftarrow \nu \times 0.99$ (resp. increases as $\nu \leftarrow \nu \slash 0.99$). 
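The adaptive learning-rate rule described above can be sketched as follows; the function name and the way the loss "speed" is compared between consecutive iterations are illustrative assumptions:

```python
def update_lr(lr, prev_loss_speed, curr_loss_speed, factor=0.99):
    """Adaptive schedule used during training: when the speed of change
    of the cross-entropy loss increases, the learning rate nu is
    decreased (nu <- nu * 0.99); when it decreases, nu is increased
    (nu <- nu / 0.99)."""
    if curr_loss_speed > prev_loss_speed:
        return lr * factor
    return lr / factor
```

This keeps the step size large while the loss decreases steadily and shrinks it when training becomes unstable.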
All these experiments are run on a GeForce GTX 1070 GPU device (with 8 GB memory) and no data augmentation is achieved. Table~\ref{table1} shows a comparison of action recognition performances (and also runtime per epoch during training), using our KGCN (with different kernels) against standard GCN (referred to as SGCN), shown in section~\ref{standardGCN}, with precomputed node representations based on kernel principal component analysis (KPCA) achieved on $\{s(u): u \in \cup {\cal V}_i\}$ using different kernels; in these results, we consider different numbers of eigenvectors (projection axes) corresponding to the largest eigenvalues of KPCA.\\ \begin{table*} \begin{center} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{c||cccccccccc|c} \backslashbox{kernels}{GCNs} & \multicolumn{10}{c|}{Standard GCN with different \# of KPCA dimensions ($H$) } & Our KGCN\\ &10 & 50 & 100& 200 & 300 & 400 & 500& 1000 & 2000& 3000 \\ \hline \hline Linear & 92.3077 & overdim&overdim &overdim &overdim & overdim & overdim & overdim & overdim & overdim & 90.7692 \\ Poly &89.2308&95.3846& 92.3077 & 93.8462 & 93.8462 & 93.8462 & 93.8462 & overdim & overdim & overdim & 93.8462 \\ tanh & 89.2308 &93.8462& 90.7692 &93.8462 & 90.7692 & 92.3077 & 93.8462 & 92.3077 & 93.8462 & 92.3077 & 96.9231 \\ sigmoid &93.8462 &90.7692& 93.8462 &92.3077 & 92.3077 & 92.3077 & 92.3077 & 96.9231 & 93.8462 & 92.3077 & 95.3846 \\ Gaussian &92.3077& 92.3077 &92.3077 &92.3077 & 96.9231 & 93.8462 & 93.8462 & 93.8462 & 93.8462 & 93.8462 & 98.4615 \\ Laplacian & 92.3077 &93.8462& 95.3846 & 92.3077 & 90.7692 & 90.7692 & 95.3846 & 93.8462 & 90.7692 & 90.7692 & 98.4615 \\ Power &90.7692 &92.3077&95.3846 & 92.3077 &92.3077 &95.3846 &95.3846 &93.8462 &93.8462 & 92.3077 & 96.9231 \\ IMQ & 87.6923&92.3077& 95.3846 & 95.3846 & 93.8462 & 93.8462 & 90.7692 & 95.3846 & 93.8462 & 93.8462 & 95.3846 \\ Log & 93.8462 &92.3077& 92.3077 & 95.3846 & 93.8462 & 93.8462 & 95.3846 & 90.7692 & 95.3846 & 90.7692 & 96.9231 \\ Cauchy & 
93.8462 &95.3846&95.3846 & 92.3077& 96.9231 & 93.8462 & 92.3077 & 95.3846 & 92.3077 & 93.8462 & 98.4615 \\ HI & 93.8462 &92.3077&89.2308 & 90.7692 & 92.3077 & 92.3077 & 87.6923 & 87.6923 & 90.7692 & 87.6923 & 96.9231 \\ \hline time/epoch (s) &0.032&0.057& 0.072 &0.113 & 0.150 & 0.190 & 0.229 & 0.440 & 0.840 & 1.252 & 0.210 \end{tabular}} \end{center} \caption{ This table shows a comparison of our KGCN against SGCN (with different numbers of KPCA dimensions). Note that SGCN performances are not necessarily increasing w.r.t $H$; indeed, while more dimensions capture more statistical variance, this also increases the number of training parameters and hence the risk of overfitting. Note that for linear and polynomial kernels, the number of dimensions ($H$) cannot exceed $D$ and $D^2$ respectively, with $D=24$ in practice (the Kronecker tensor product defining the map of the --order 2-- polynomial kernel has $D^2$ dimensions while the map of the linear kernel obviously has $D$ dimensions).}\label{table1} \end{table*} \indent From all these results in table~\ref{table1}, we observe a clear and consistent gain of KGCN w.r.t the linear version (i.e., KGCN with the linear kernel), as well as w.r.t SGCN combined with different KPCA features; we observe an increase in the accuracy of the SGCN baseline when the dimension of KPCA (again denoted as $H$) is sufficiently large (without being able to overtake KGCN for most of the kernels), and performances decrease again when $H$ grows further, as the underlying number of training parameters grows with $H$, which may lead to overfitting. 
Besides, the average runtime per epoch, with SGCN, increases substantially when $H$ grows, as the number of training parameters in the underlying network (equal to $H \times K + C \times K$) depends on $H$, while in KGCN the number of training parameters (equal to $(D + 1) \times N \times K + C \times K$) depends only on the dimension $D$ of the original signal, despite the latter being implicitly mapped into a high dimensional space ${\cal H} = \mathbb{R}^{H}$. In particular, $H \gg N\times (D+1)$ makes KGCN clearly more efficient and still more effective compared to SGCN (see again table~\ref{table1} and also table~\ref{variation} and Fig.~\ref{variation11}); this performance improves further as $N$ (the number of learned support vectors $\{ v_i^\theta\}_i$ per filter in Eq.~\ref{initial}) and $K$ (the number of convolutional filters) reach reasonably (but not very) large values, and this results from the flexibility of the filters, which learn --- with few support vectors --- relevant {\it representatives} of nodes in training data. These performances consistently improve for all the kernels, and this is again explained by the representational power of the maps of these kernels. Moreover, the ablation study in table~\ref{table3} shows that KGCN with learned support vectors captures the nodes in graph data better, while KGCN is clearly limited when the support vectors are fixed (and thereby biased, i.e., not sufficiently representative of the actual distribution of the nodes; see again table~\ref{table3}); hence, learning the KGCN parameters (i.e., with learned $\alpha$ and fixed support vectors) is not enough to recover from this bias. 
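The parameter-count comparison above can be made concrete with the paper's $D=24$ and $C=8$; the chosen $H$, $N$, $K$ values below are just examples taken from the experimental ranges:

```python
def sgcn_params(H, K, C):
    """Trainable parameters of the SGCN baseline: K filters of the
    H-dimensional (KPCA) signal plus the softmax classifier."""
    return H * K + C * K

def kgcn_params(D, N, K, C):
    """Trainable parameters of KGCN: per filter, N support vectors of
    dimension D plus one mixing weight alpha_i each, plus the softmax
    classifier; independent of the RKHS dimensionality H."""
    return (D + 1) * N * K + C * K

print(sgcn_params(H=3000, K=10, C=8))      # -> 30080 (grows with H)
print(kgcn_params(D=24, N=4, K=10, C=8))   # -> 1080 (independent of H)
```

With $H \gg N \times (D+1)$, the gap between the two counts is exactly the efficiency argument made above.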
In sum, the gain of our KGCN results from the {\it complementary aspects of the used (implicit) kernel maps and also the modeling capacity of our KGCN when the support vectors of these kernels (that define the convolutional filters) are also allowed to vary}.\\ \begin{table}[!htb] \begin{center} \resizebox{0.59\linewidth}{!}{ \begin{tabular}{c||cccc} \backslashbox{\# of Filters ($K$)}{\# of SVs ($N$)} & $1$ & $4$ & $8$ \\ \hline \hline $1$ & 84.6154 & 84.7552 & 85.1748 \\ $5$ & 93.1469 & {\bf 95.3846} & 92.8671 \\ $10$ & 92.1678 & 95.1049 &95.1049 \end{tabular}} \end{center} \caption{ Average accuracy (w.r.t all the used kernels in KGCN) for different numbers ($K$) and sizes ($N$) of filters. SVs stands for support vectors.} \label{variation} \end{table} \begin{figure}[hpbt] \begin{center} \centerline{\scalebox{0.46}{\includegraphics{figures/figure1.png}}} \vspace{-0.3cm} \centerline{\scalebox{0.46}{\includegraphics{figures/figure2.png}}} \caption{Accuracy of KGCN w.r.t five examples of kernels and filter sizes $N$. Top figure corresponds to $K=5$ filters while bottom one to $K=10$. { \bf (Best viewed in color)}.} \label{variation11} \end{center} \end{figure} \begin{table} \begin{center} \resizebox{0.59\columnwidth}{!}{ \begin{tabular}{c||c|c|c} \backslashbox{kernels}{KGCNs} & F-SV / L-$\alpha$ & L-SV / F-$\alpha$ & L-SV / L-$\alpha$ \\ \hline \hline Linear & 89.2308 & 90.7692 & 90.7692 \\ Polynomial & 84.6154 & 90.7692 & 93.8462 \\ tanh & 87.6923 & 90.7692 & 96.9231 \\ Sigmoid & 95.3846 & 95.3846 & 95.3846 \\ Gaussian & 84.6154 & 93.8462 & 98.4615 \\ Laplacian & 84.6154 & 93.8462 & 98.4615 \\ Power & 92.3077 & 95.3846 & 96.9231 \\ I. 
Multi-quadric & 81.5385 & 93.8462 & 95.3846 \\ Log & 84.6154 & 90.7692 & 96.9231 \\ Cauchy &86.1538 & 92.3077 & 98.4615 \\ HI & 86.1538& 95.3846 & 96.9231 \end{tabular}} \end{center} \caption{ This table shows an ablation study; F-SV, F-$\alpha$ stand respectively for fixed support vectors and fixed mixing parameters $\alpha$ while L-SV, L-$\alpha$ stand for learned ones. Note that the results, when learning the support vectors using the linear kernel, are identical (both with fixed and learned $\alpha$) as one may include the multiplicative factors $\alpha$ in the learned support vectors (the converse is not true).} \label{table3} \end{table} \noindent Finally, we compare the classification performances of our KGCN against other related methods in action recognition ranging from sequence based such as LSTM and GRU \cite{DeepGRU,GCALSTM,STALSTM} to deep graph (non-vectorial) methods based on spatial and spectral convolution \cite{kipf17,SGCCONV19,Bresson16}. From the results in table \ref{compare}, our KGCN brings a substantial gain w.r.t state of the art methods, and provides comparable results with the best vectorial methods. 
\begin{table}[!htb] \begin{center} \resizebox{0.38\linewidth}{!}{ \begin{adjustbox}{angle=-90} \setlength\tabcolsep{2.4pt} \def\arraystretch{0.7} \begin{tabular}{c||ccccccccccccccccccc} \rotatebox{90}{Perfs} & \rotatebox{90}{90.00} & \rotatebox{90}{96.00} & \rotatebox{90}{94.00}& \rotatebox{90}{96.00}& \rotatebox{90}{49.7 }& \rotatebox{90}{80.3 }& \rotatebox{90}{86.9 }& \rotatebox{90}{83.9 }& \rotatebox{90}{80.35 }& \rotatebox{90}{90.41}& \rotatebox{90}{93.3 } & \rotatebox{90}{90.5}& \rotatebox{90}{91.51}& \rotatebox{90}{94.9}& \rotatebox{90}{97.2}& \rotatebox{90}{95.7}& \rotatebox{90}{93.7 } & \rotatebox{90}{{{\bf 98.46} } }\\ & & & & & & & & & & & & & & & & & & & \\ \rotatebox{90}{Methods} & \rotatebox{90}{ GCNConv \cite{kipf17}} & \rotatebox{90}{ArmaConv \cite{ARMACONV19}} & \rotatebox{90}{ SGCConv \cite{SGCCONV19}} & \rotatebox{90}{ ChebyNet \cite{Bresson16}}& \rotatebox{90}{ Raw coordinates \cite{SBU12}} & \rotatebox{90}{Joint features \cite{SBU12}} & \rotatebox{90}{Interact Pose \cite{InteractPose}} & \rotatebox{90}{CHARM \cite{CHARM15}} & \rotatebox{90}{ HBRNN-L \cite{HBRNNL15}} & \rotatebox{90}{Co-occurrence LSTM \cite{CoOccurence16}} & \rotatebox{90}{ ST-LSTM \cite{STLSTM16}} & \rotatebox{90}{ Topological pose ordering\cite{velocity2}} & \rotatebox{90}{ STA-LSTM \cite{STALSTM}} & \rotatebox{90}{ GCA-LSTM \cite{GCALSTM}} & \rotatebox{90}{ VA-LSTM \cite{VALSTM}} & \rotatebox{90}{DeepGRU \cite{DeepGRU}} & \rotatebox{90} {Riemannian manifold trajectory\cite{RiemannianManifoldTraject}} & \rotatebox{90}{Our best KGCN model} \\ \end{tabular} \end{adjustbox} } \caption{ Comparison against state-of-the-art methods.} \label{compare} \end{center} \end{table} \section{Conclusion} We introduce in this paper a novel GCN formulation based on kernel machines. 
The method defines convolutional graph filters in the span of nodes in a (high or potentially infinite dimensional) reproducing kernel Hilbert space ({\it RKHS}), with the particularity that node representations, in the {\it RKHS}, are learned instead of being taken from training data. This makes the proposed approach (semi-)parametric and tractable, while also being effective and less subject to overfitting. Indeed, the proposed GCN formulation is dual and requires few parameters; it also provides an effective way to enhance the discrimination power of the learned graph representations, and it overtakes standard (primal) GCN approaches as well as the related work. \\ As future work, we are currently investigating the combination of explicit node expansion with implicit kernel mapping, in order to further enhance the generalization performance on other pattern recognition tasks.
\section{Conclusions} In this work, we propose a situation model with attributed grammar for algebra story problems. The experimental results on ASP6.6k indicate that our model outperforms state-of-the-art models and shows better interpretability and generalization ability. Future work includes creating an intelligent tutor that helps students develop problem-solving skills. \newpage \section{Ethical Impact} This paper implements, in an artificial intelligence setting, the situation model applied by humans to solve algebra story problems. Based on the model, educators can design intelligent tutors to guide students in mathematical learning. \section{Acknowledgement} The work reported herein is supported by ARO W911NF1810296, DARPA XAI N66001-17-2-4029, and ONR MURI N00014-16-1-2007. \section{The \textit{ASP6.6k} Dataset} We curate the new dataset \textit{ASP6.6k} from the widely used Math23K dataset. To select and categorize algebra story problems, we first compute the term frequency–inverse document frequency (TF-IDF) features for each problem in the dataset and then use k-means clustering to group problems into different categories. We use the elbow method \cite{Thorndike1953WhoBI} to find the optimal $K$ for the clustering. To further remove noise, we manually select certain keywords to filter out problems that do not belong to their group. We select a subset of problems from Math23K following the criteria from \cite{mayer1981frequency,nathan1992theory}: \begin{itemize} \item The problem asks for numerical answers rather than translating a story into equations. \item The problem has a story-line consisting of characters, objects, and/or actions. \end{itemize} As a result, we obtain a dataset of 6666 problems spanning four typical types of algebra story problems: motion, price, relation, and task. Here, we provide a brief summary of the problem types: \begin{itemize} \item \textbf{Motion}: problems involve traveling and require understanding of rates per unit time. 
\item \textbf{Task}: problems involve completion of tasks and require understanding of the relations between fractions. \item \textbf{Price}: problems involve purchasing items and require understanding of unit price and total price. \item \textbf{Relation}: problems involve a description of the relationship between two objects. \end{itemize} \begin{table}[htbp] \centering \small \begin{tabular}{ |c|c|c|c|c|c| } \hline & \textbf{Motion} & \textbf{Task} & \textbf{Relation} & \textbf{Price} & \textbf{Total} \\ \hline Problems & 1687 & 1158 & 1915 & 1908 & 6666\\ \hline Avg. Length & 37.0 & 33.6 & 28.8 & 26.7 & 31.1\\ \hline Avg. Agents & 1.85 & 1.13 & 2.34 & 1.96 & 1.90\\ \hline Avg. Events & 1.93 & 2.21 & 2.58 & 2.30 & 2.27\\ \hline Avg. Relations & 3.07 & 3.32 & 3.53 & 2.98 & 3.22\\ \hline \end{tabular} \caption{Dataset statistics. Length is the number of tokens.} \label{tab:stats} \end{table} \begin{table*}[t] \centering \small \begin{tabularx}{\textwidth}{|l|X|} \hline Problem Type & Sample Problem\\ \hline Task & The engineering team built a viewing trail and completed 30\% of the full length in the first week and 45\% of the full length in the second week. 150 meters in two weeks, how long is the length of this trail?\\ \hline Motion & Mingming’s family went to travel, they took a 14-hour train ride, and then a 5-hour car ride before reaching their destination. It is known that the speed of the train is 120 kilometers/hour and the speed of the car is 60 kilometers/hour. How long is this journey?\\ \hline Relation & Xiaogang's weight is 28.4 kg, Xiaoqiang's weight is 1.4 times that of Xiaogang, Xiaoqiang's weight = how many kilograms? \\ \hline Price & The school bought 45 sets of desks and chairs at 128 yuan per desk and 52 yuan per chair. How much did it spend?\\ \hline \end{tabularx} \caption{An example of each problem type.} \label{tab:sample} \end{table*} Details of the dataset statistics are listed in Table \ref{tab:stats}.
Examples for each problem type are shown in Table \ref{tab:sample}. See the supplementary materials for more details about dataset preprocessing and more statistics. \section{Experiments} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/qualitative.pdf} \caption{Qualitative study of the GTS model and our SMART model. For the first two instances, we visualize the results on the original test set. The last two instances are from the OOD dataset, where the test set has greater problem length in general compared to the training set.} \label{fig:qualitative} \end{figure*} \subsection{Experimental Setup} \paragraph{Dataset} We evaluate our SMART model on the newly curated ASP6.6k dataset. The final dataset is randomly divided into training and test sets of 5,332 and 1,334 problems (i.e., approximately an 80/20 split). \paragraph{Evaluation Metric} We report the answer accuracy of the models: a generated solution is considered correct if it executes to the ground-truth answer. Furthermore, we design an out-of-distribution (OOD) evaluation to examine the models' generalization ability. \paragraph{Baselines} We compare the proposed SMART model with several state-of-the-art neural models for math word problems: MathEN~\cite{Wang2018TranslatingMW}, Group-ATT~\cite{Wang_Zhang_Zhang_Xu_Gao_Dai_Shen_2019}, GTS~\cite{Xie2019AGT}, and Graph2Tree~\cite{zhang2020graph2tree}. \subsection{Results and Analyses} \paragraph{Comparison with State-of-the-art Models} \autoref{tab:acc} summarizes the comparison of the answer accuracy on the test set with regard to problem types. The proposed SMART model significantly outperforms all the neural models and beats the state-of-the-art model by nearly 3\% in terms of overall accuracy. More specifically, SMART outperforms the neural models by 3\% and 4\% on the Motion and Relation problems, while it achieves comparable performance on the Task problems and lower performance on the Price problems.
\begin{table}[htbp] \centering \small \begin{tabular}{c|c|cccc} \hline \textbf{Model} & \textbf{Overall} & \textbf{Motion} & \textbf{Task} & \textbf{Relation} & \textbf{Price}\\ \hline MathEN & 67.8 & 68.3 & 70.2 & 63.3 & 70.5 \\ Group-ATT & 67.4 & 65.2 & 70.7 & 63.6 & 71.5\\ GTS & 76.8 & 73.2 & 72.1 & 76.0 & \textbf{83.6} \\ Graph2Tree & 76.8 & 76.9 & \textbf{79.0} & 73.8 & 78.7\\ \hline SMART & \textbf{79.5} & \textbf{79.8} & \textbf{79.0} & \textbf{77.9} & 81.8\\ \hline \end{tabular} \caption{The answer accuracy on the test set (\%).} \label{tab:acc} \end{table} \begin{table}[htbp] \centering \small \begin{tabular}{c|c|cccc} \hline \textbf{Model} & \textbf{Overall} & \textbf{Motion} & \textbf{Task} & \textbf{Relation} & \textbf{Price}\\ \hline MathEN & 31.7 & 22.6 & 28.9 & 39.9 & 33.2\\ Group-ATT & 35.0 & 24.0 & 42.2 & 42.6 & 32.7\\ GTS & 45.8 & 44.5 & 41.9 & 49.9 & 45.3\\ Graph2Tree & 45.1 & 34.1 & 47.4 & 55.1 & 41.9\\ \hline SMART & \textbf{63.2} & \textbf{65.0} & \textbf{64.8} & \textbf{62.9} & \textbf{60.8}\\ \hline \end{tabular} \caption{The answer accuracy in the OOD evaluation (\%). The test set consists of the 20\% longest problems of each type.} \label{tab:ood} \end{table} \paragraph{Out-of-distribution Evaluation} To measure the models' generalization ability, we conduct an out-of-distribution (OOD) evaluation, where the test set contains more complex problems than the training set. The length of an algebra story problem is a good proxy for its solving complexity. Therefore, we select the longest 20\% of problems of each type as the test set and the rest as the training set. \autoref{tab:ood} summarizes the answer accuracy of all models in the OOD evaluation. The results show that SMART has better generalization ability. It outperforms the neural models by 17\%, revealing SMART's strong ability to reason about more complicated situations even when trained on much simpler problems. \paragraph{Iterative Learning} \autoref{fig:curve} shows the test accuracy versus iterations.
Here an iteration is one update of the SMART model based on the successful samples collected before that iteration, as illustrated in Algorithm~\ref{alg:iterative}. We can see that the model improves during iterative learning and begins to converge at the third iteration. \paragraph{Ablative Study} The ablative study in \autoref{fig:curve} analyzes the effect of each module in the inference procedure on the test set. We observe that using NER alone achieves only around 20\% accuracy. This indicates that reasoning about relations between attributes is required in most problems. The final result with both the original relation extraction (RE) module and Seq2Seq outperforms the model with just the RE module. This suggests that Seq2Seq does detect relations not extracted by RE. However, Seq2Seq alone is inferior to RE, which suggests that explicit mapping of relations works better than implicit learning by a neural module. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Figures/test_accuracy.pdf} \caption{The answer accuracy on the test set across iterations of the iterative learning. RE denotes the relation extraction in the initial parser. NER denotes the Named Entity Recognition system. Seq2Seq is the model used to detect relations not extracted by the initial parser.} \label{fig:curve} \end{figure} \paragraph{Qualitative Study} To further analyze our model's interpretability and generalization ability, we visualize several examples from the test set, as shown in \autoref{fig:qualitative}. The first two instances are from the original set, while the last two are from the OOD split. Both models work well on the first instance since there are similar samples in the training set. On the third instance, the neural network fails since it has never observed a problem with more than two events and does not comprehend the relations, whereas the situation model handles it well thanks to event independence.
In the fourth instance, the neural network fails to determine the speed of the next car and the relation ``20\% of the total distance'', whereas SMART handles both. However, SMART also makes wrong predictions. In the second instance, ``the two vehicles were 35 kilometers apart'' is ambiguous since we do not know whether the cars have met each other yet. Therefore, SMART gives a negative value for the total length. In the future, error analysis is needed in the situation model so that, when an implausible answer is produced, the situation model can seek an alternative solution. \section{Introduction} \begin{figure}[t] \includegraphics[width=\linewidth]{Figures/Teaser_AAAI2021.pdf} \caption{The process of humans solving algebra story problems: we first hallucinate a situation model from the text and then perform arithmetic reasoning on the situation model to compute an answer. If we fail to generate a correct solution, we can adjust our situation model accordingly.} \label{fig:teaser_aog} \end{figure} \textit{Algebra Story Problems}, depicted by~\citet{hinsley1977} as ``twentieth-century fables'', remain a critical challenge in modern artificial intelligence. An algebra story problem typically describes a real-world situation and inquires about an unknown fact in the situation. It goes beyond arithmetic since one has to first comprehend the situation, recognize the goal in the problem, and then develop a solution for it~\cite{nathan1992theory}. Psychology studies~\cite{bjork2013cognitive,abedi2001language} also indicate that algebra story problems can serve as a test of children's cognitive skills to perform arithmetic reasoning on real-world tasks.
However, although algebra story problems are distinguished \textit{per se}, related works from the artificial intelligence and natural language processing communities often mix them with other types of problems, such as number problems and geometry problems, into one umbrella task called \textit{Math Word Problems} (MWPs)~\cite{wang-etal-2017-deep, Huang2016HowWD, Amini2019MathQATI}. Recent works on Math Word Problems~\cite{wang-etal-2017-deep, Huang2018NeuralMW, Wang2018TranslatingMW, Xie2019AGT, hong2021lbf} have focused on using end-to-end neural networks (\textit{e.g.}, Seq2Seq, Seq2Tree) to directly translate a problem text into an expression, which is then executed to get the final answer. Although they seem to obtain satisfactory performance, such end-to-end neural models suffer from the following drawbacks: \begin{itemize} \item Lack of interpretability. The expressions generated by neural networks are hard to interpret without the intermediate problem-solving process. An exemplary expression from~\autoref{fig:qualitative} is ``(24+60)/[1-(1-(2/5))*(3/10)-(2/5)*(3/4)-(2/5)]'', which makes no sense to humans, even though it generates the correct answer. \item Lack of generalization ability. These neural solvers usually fail in scenarios that are more sophisticated than those seen in training. \end{itemize} To address these issues in current research on Math Word Problems, we make the following efforts in this work. First, we curate a new benchmark named \textit{ASP6.6k}, which contains four canonical types of algebra story problems: motion, price, relation, and task. We build our dataset upon Math23K~\cite{Wang2017DeepNS}, the most frequently used MWP dataset in recent years, and categorize algebra story problems by following precisely the criteria set by \citet{mayer1981frequency}.
Second, we introduce the cognitive concept of a \textit{situation model}~\cite{dijk1983}, which is widely used in psychology studies to model the mental states of humans in problem-solving~\cite{Reusser1990FromTT, greeno1989, nathan1990, CoquinViennot2007ArithmeticPA, Leiss2010TheRO}. It is believed that problem-solving techniques, such as mathematics and logic, are applied to the hallucinated situation model instead of the problem text. As shown in \autoref{fig:teaser_aog}, the situation model interacts with mathematical concepts to derive a solution for the problem. To efficiently represent the situation model, we propose \textit{SMART}, which utilizes attributed grammar \cite{knuth1990genesis, Liu2014SingleView3S, Park2015AttributedGF, Park2016AttributeAG} as the representation of a situation model in algebra story problems. More specifically, the world, agents, and events depicted in an algebra story problem are represented as nodes in a hierarchical parse graph, derived from the problem text with a context-free grammar. The parse graph nodes are further augmented with attributes to represent quantities in the problem, and the relations between these quantities are encoded as numerical constraints on the corresponding attributes. The construction of these constraints usually requires both commonsense knowledge and mathematical knowledge. Therefore, the parse graph generated by an attributed grammar can capture the situation model's desired characteristics for algebra story problems. With the parse graph, the problem solving is equivalent to seeking one unknown attribute in the graph and can be formulated as an attribute propagation process guided by the constraints on these attributes. To automatically construct parse graphs for problems, we first train an information extraction module to extract nodes, attributes, and relations from problem texts and then generate parse graphs based on a carefully designed attributed grammar. 
The learning of SMART is nontrivial. Since the grammar parsing and the problem solving are non-differentiable, we cannot use back-propagation to learn SMART in an end-to-end fashion under the supervision of final answers. Therefore, we propose a two-stage learning strategy: First, we manually design a text parser to generate initial supervision on parse graphs and use them to train the information extraction module in SMART. Second, we adopt an iterative learning method to strengthen the information extraction module, where pseudo-gold parse graphs at each iteration augment the supervision for the next learning iteration. We conduct experiments on the newly curated benchmark ASP6.6k, and the proposed SMART model outperforms all neural network baselines by a large margin. Moreover, it demonstrates stronger generalization ability in an out-of-distribution evaluation, where the test problems are more complex than those in the training set. A qualitative study also suggests that SMART achieves better interpretability and generalization ability than the neural models. \section{SMART} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Figures/Framework.pdf} \caption{Overview of our SMART model. The Named Entity Recognition (NER) module extracts the spans of nodes, attributes, and relations from the text and constructs a parse graph using the attributed grammar. The Relation Extraction module uses the relation spans and the parse graph already constructed to embed relations into the parse graph. In the updated graph parser, Relation Extraction corresponds to Seq2Seq. The relations are then executed to get the final answer. If the answer is correct, it is added to the buffer of pseudo-gold parse graphs to train NER and Seq2Seq.
If not, it is added to the failure set to be updated in the following iterations.} \label{fig:framework} \end{figure*} In this section, we introduce SMART, a situation model for algebra story problems via attributed grammar. \subsection{Attributed Grammar} Inspired by \citet{qi2018human}, we design an attributed grammar for the domain of algebra story problems, as shown in \autoref{tab_attributed_grammar}. \begin{table}[H] \centering \begin{tabular}{l} \hline $G = (S, V, A, E, R)$ \\ \hline $S$ is the start symbol. \\ $V$ = \{S, World, Agents, Agent, Events, Event\} \\ $A$ = \{rate, amount, total\} \\ $E$ = \{$e$: $e$ is a valid equation on attributes.\}\\ $R$ = \{$S$ $\to$ World\\ ~~~~~~~~World $\to$ Agents\\ ~~~~~~~~Agents $\to$ Agents Agent $|$ Agent\\ ~~~~~~~~Agent $\to$ Events\\ ~~~~~~~~Events $\to$ Events Event $|$ Event\}\\ \hline \end{tabular} \caption{The attributed grammar for algebra story problems.} \label{tab_attributed_grammar} \end{table} In the attributed grammar, the production rules $R$ are motivated by the following observation: a problem usually depicts a \textit{world}, where several \textit{agents} perform several \textit{events}. Inspired by \cite{Roy2017UnitDG,nathan1992theory,mayer1981frequency}, we design three types of attributes to augment the nodes: i) \textit{rate}: a quantity that gives a certain measure per unit of some other quantity, indicated by phrases like ``A per B'' and ``each A has B'' (\textit{e.g.}, speed, price); ii) \textit{amount}: a measurement of the number of units over which a rate applies (\textit{e.g.}, hour); iii) \textit{total}: a quantity that equals the product of rate and amount (\textit{e.g.}, distance). The relations $E$ represent possible constraints on the attributes in the form of equations. These constraints can be either explicitly stated in the text, such as ``it travels 1/3 of the distance'', or implied by commonsense knowledge, such as ``distance = speed $\times$ time''.
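The three attribute types and the commonsense constraint ``total = rate $\times$ amount'' can be illustrated with a small sketch (a hypothetical minimal implementation for illustration, not the authors' code; the class and method names are our own):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an event node carrying the three attribute types
# from the grammar; the commonsense constraint total = rate * amount is
# used to fill in whichever attribute is still unknown.
@dataclass
class EventNode:
    rate: Optional[float] = None    # e.g. speed (km/h) or unit price
    amount: Optional[float] = None  # e.g. hours travelled or items bought
    total: Optional[float] = None   # e.g. distance covered or money spent

    def propagate(self) -> "EventNode":
        """Apply total = rate * amount to resolve the missing attribute."""
        if self.total is None and None not in (self.rate, self.amount):
            self.total = self.rate * self.amount
        elif self.rate is None and None not in (self.total, self.amount):
            self.rate = self.total / self.amount
        elif self.amount is None and None not in (self.total, self.rate):
            self.amount = self.total / self.rate
        return self

# The train leg of the motion example: 120 km/h for 14 hours
print(EventNode(rate=120, amount=14).propagate().total)  # 1680
```

Attribute propagation over the full parse graph amounts to repeating this local resolution until the goal attribute is determined.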
See \autoref{fig:framework} for an exemplary parse graph generated from the attributed grammar. \subsection{Grammar Parsing} The construction of the situation model for an algebra story problem is equivalent to parsing the problem text into a parse graph. Formally, given the problem $x$ and the attributed grammar $G$, the parsing process is formulated as: \begin{equation} pg^{\ast} = \arg\max_{pg \in \mathcal{L}(G)} p(pg \mid x), \end{equation} where $\mathcal{L}(G)$ denotes the language of the attributed grammar. The probability of a parse graph $pg$ given $x$ can be written as a joint probability of its nodes $V_{pg}$, attributes $A_{pg}$ and relations $E_{pg}$: \begin{align} p(pg \mid x) &= p(V_{pg}, A_{pg}, E_{pg}|x) \\ &= p(V_{pg}|x) \cdot p(A_{pg}|x) \cdot p(E_{pg}|x) \end{align} Here we assume the independence of these nodes, attributes, and relations to simplify our model and leave the exploration of their dependency for future work. The extraction of nodes, attributes and relations is achieved by a three-step process. First, we define seven categories of entities: nodes (\textsc{World, Agent, Event}), attributes (\textsc{Rate, Amount, Total}), and \textsc{Rel} (which denotes a text span that indicates a relation). We train a named entity recognition (NER) system to recognize these entities from the text. Specifically, we have one NER model to extract the attributes and another for the extraction of the nodes and \textsc{Rel}. We use Nested NER \cite{Strakov2019NeuralAF} for the second model. We use the pre-trained BERT-chinese-base model and fine-tune it on our NER task. We then have: \begin{align} p(V_{pg}|x) &= \frac{1}{|V_{pg}|} \sum_{w \in V_{pg}} p_{ner}(w) \\ p(A_{pg}|x) &= \frac{1}{|A_{pg}|} \sum_{w \in A_{pg}} p_{ner}(w) \end{align} where $|V_{pg}|$ is the length of a node span, $|A_{pg}|$ is the length of an attribute span, and $w$ is a word in the node or attribute span. $p_{ner}(w)$ is the probability of a word being labelled as a specific category.
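Under the independence assumption, scoring a candidate parse graph reduces to combining per-span averages of NER probabilities with the relation probabilities; a schematic sketch (our own illustration of the factorization, not the authors' implementation) is:

```python
# Schematic scoring of a candidate parse graph under the independence
# assumption p(pg|x) = p(V_pg|x) * p(A_pg|x) * p(E_pg|x): node and
# attribute terms average per-word NER probabilities over each span,
# and the relation term sums the Seq2Seq equation probabilities.
def span_prob(word_probs):
    """(1/|span|) * sum of p_ner(w) over the words of one span."""
    return sum(word_probs) / len(word_probs)

def parse_graph_prob(node_word_probs, attr_word_probs, relation_probs):
    # node_word_probs / attr_word_probs: per-word NER probabilities of the
    # node span and the attribute span; relation_probs: the probabilities
    # p(e | Rel, V_pg, A_pg) of each extracted relation equation.
    return span_prob(node_word_probs) * span_prob(attr_word_probs) * sum(relation_probs)

p = parse_graph_prob([0.9, 0.7], [0.8], [0.6])
```

Maximizing this score over candidate graphs corresponds to the $\arg\max$ in the parsing formulation above.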
Second, we connect these nodes and attributes into a parse graph based on two distances: the word distance between two nodes in the problem text, and the distance (number of links) between them in the dependency parse. Some constraints from the dependency parsing are also imposed, \textit{e.g.}, the noun word representing an agent is preferred to be the \textsc{nsubj} of the verb word in the event node. Please refer to the supplementary materials for the complete list of rules and constraints used in the parsing. Attributes not extracted by NER are marked as unknown. Third, we train a Seq2Seq model to translate the \textsc{Rel} entity from a text span into an equation $e$, which consists of attributes in the parse graph and arithmetic operators ($\{+,-,\times,\div, \wedge, =\}$). To include attributes in the equation, the input to the Seq2Seq model is the concatenation of \textsc{Rel}, the nodes, and the attributes of each node. See the supplementary materials for examples of inputs and outputs of the Seq2Seq model. The probability of the relations is defined as: \begin{equation} p(E_{pg}|x) = \sum_{e \in E_{pg}} p_{seq2seq}(e|\text{\textsc{Rel}},V_{pg}, A_{pg}) \end{equation} where $p_{seq2seq}$ is the Seq2Seq output probability. \subsection{Goal Recognition} The goal of a problem is usually an interrogative word extracted by NER (\textit{e.g., how many}). It can appear in an attribute or in \textsc{Rel}. We transform the interrogative word into an unknown in the equations. \subsection{Problem Solving} After building the parse graph, the problem can be solved by feeding the relation equations and the goal into a mathematical solver~\cite{10.7717/peerj-cs.103}. This stage requires mathematical reasoning skills, which a stand-alone solver readily provides. \section{The Learning of SMART} The learning objective of SMART is to optimize the information extraction module, including both the NER system and the Seq2Seq models for relations and the goal.
Since the grammar parsing and the problem solving are non-differentiable, we cannot use back-propagation to learn SMART in an end-to-end fashion under the supervision of final answers. Therefore, we propose a two-stage learning strategy: First, we manually design a text parser to generate initial supervision on parse graphs and use them to train the information extraction module in SMART. Second, we adopt an iterative learning method to strengthen the information extraction module. \subsection{Initial Supervision} Since we do not have ground-truth parse graphs, we generate parse graph proposals using a hand-designed text parser. The parser extracts nodes, attributes and relations from the text to construct a parse graph. \paragraph{Attribute Extraction} We first extract all numbers of \textit{rate}, \textit{amount} and \textit{total} using the POS tagger from spaCy\footnotemark. \footnotetext{https://spacy.io/} Similar to \citet{Roy2017UnitDG}, we refer to the unit of the attribute \textit{total} as ``Num Unit'' (short for Numerator Unit) and the unit of the attribute \textit{amount} as ``Den Unit'' (short for Denominator Unit). The unit of the attribute \textit{rate} is therefore ``Num Unit per Den Unit''. \autoref{tab:ruq} shows the attributes and attribute units of an exemplar problem. \begin{table}[htbp] \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|c|c} \hline \multirow{4}{*}{\shortstack{\textbf{Problem}: Each kilogram of pears\\ cost 3.65 dollars.
How many\\ dollars does mom have to\\ pay for 13 kilograms of pears?}} &\textbf{\textit{rate}} & \textbf{\textit{amount}} & \textbf{\textit{total}}\\ \cline{2-4} &3.65 & 13 & how many\\ \cline{2-4} &\textbf{Num Unit} & \textbf{Den Unit} &\\ \cline{2-3} & dollar & kilogram&\\ \hline \end{tabular}} \caption{The attributes (\textit{rate}, \textit{amount}, \textit{total}), Num Unit, and Den Unit of an exemplar problem.} \label{tab:ruq} \end{table} Generally, when we have a word marked as ``NUM'' or ``X'' (in the case of fractions) by the POS tagger, or when the word is ``how many'' (in the case of an interrogative phrase), we check whether there is a word such as ``per'' or ``each'' nearby. If so, we mark the number as a \textit{rate} and extract the Num Unit and Den Unit. We then extract the \textit{amount} and \textit{total}, which are followed by the Den Unit and Num Unit, respectively. \paragraph{Node Extraction} The next step is to extract the nodes and link attributes to nodes, based on the POS tagging and the dependency parsing. Specifically, we extract the nouns in the text and treat them as agent nodes. We extract the verbs and their dependents as event nodes. The world node represents the scope of a problem and is associated with a \textit{total} attribute denoting the total quantities to be covered in the problem. The extraction is conditioned on the problem type: for the motion type, it means the total distance within the scope of the problem; for the price type, it is the total money that can be spent; for the task-completion type, it is usually the total amount of tasks to be completed. The world node is extracted based on rules using the POS tagger, the dependency parser and regular expressions. If there is no ``scope'' information in the problem, we place a default world node in the parse graph. We connect nodes and attributes based on distance and dependency parsing.
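The cue-based rule for marking a number as a \textit{rate} can be illustrated with a toy version (the keyword list and window size are our own illustrative choices; the actual parser relies on the spaCy POS tagger and dependency parse):

```python
# Toy sketch of the hand-designed rate rule: a number is marked as a rate
# when a unit-rate cue word ("per", "each", ...) occurs within a few tokens
# of it. The cue list and window size are illustrative assumptions.
RATE_CUES = ("per", "each", "every")

def classify_number(tokens, idx, window=5):
    """Label tokens[idx] (a number) as 'rate' if a cue word appears within
    `window` tokens on either side; otherwise leave it as a plain number."""
    lo, hi = max(0, idx - window), min(len(tokens), idx + window + 1)
    nearby = [t.lower() for t in tokens[lo:hi]]
    return "rate" if any(cue in nearby for cue in RATE_CUES) else "number"

tokens = "Each kilogram of pears cost 3.65 dollars".split()
print(classify_number(tokens, tokens.index("3.65")))  # rate
```

In the full parser, the words adjacent to the cue then supply the Num Unit and Den Unit of the detected rate.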
\paragraph{Relation Extraction} We represent the relations explicitly stated in the problem text via first-order logic: \begin{itemize} \item \textbf{Variables:} We define a node $v$ to be a variable. \item \textbf{Functions:} We consider an attribute to be a function of a node, \textit{i.e.}, Rate($v$), Amount($v$), Total($v$). Moreover, we define two extra functions: a sum function, which takes the same attribute of several nodes and returns their sum, e.g., {Sum(Total($v_i$), Total($v_j$))}; and a left function, which computes the quantities that have not been covered by events so far, e.g., {Left(Total($S$), Total($v_i$), Total($v_j$))}. \item \textbf{Predicates:} We view relations as predicates. Predicates take in functions $F$, and sometimes a value $n$ representing a numerical relation (if $n$ is detected by relation extraction, we exclude it from the attribute set). These include: {Equal}($F(v_i)$, $F(v_j)$); {More\_than($F(v_i)$, $F(v_j)$, $n$)}; {Less\_than($F(v_i)$, $F(v_j)$, $n$)}; {Times\_of($F(v_i)$, $F(v_j)$, $n$)}. \end{itemize} Please refer to the supplementary materials for the complete definitions of functions and predicates. To mine relations from the text, we first use keyword matching (e.g., more than, less than, equals to, times of, left) to identify text spans that indicate relations and then represent each relation as a first-order logic predicate. The predicates are transformed into equations. \subsection{Iterative Learning} To further improve the performance of SMART, we propose an iterative learning method: we keep a success buffer, which stores the pseudo-gold parse graphs generating correct answers, and a failure buffer, which keeps track of the problems not yet solved. The pseudo-gold parse graphs provided by the initial supervision serve as the initialization of the success buffer for the first iteration.
At each iteration, we first use the success buffer to update the model and then apply the updated model to the instances in the failure buffer to check whether it can solve new problems. The details of the iterative learning method are illustrated in Algorithm~\ref{alg:iterative}. \begin{algorithm} [H] \begin{algorithmic}[1] \caption{Iterative Learning}\label{alg:iterative} \STATE \textbf{Input}: training set $\mathcal{D}=\{(x_i, y_i)\}_{i=1}^N$ \STATE Success buffer $\mathcal{B}$, Failure buffer $\mathcal{F}$, updated parser $\theta$ \STATE \hfill$\triangleright$\textit{Parse Graph Proposal} \FOR {$x_i, y_i\in \mathcal{D}$} \STATE ${pg}_i$ = initial\_parser ($x_i$) \IF {execute(${pg}_i$) = $y_i$ } \STATE $\mathcal{B} \leftarrow \mathcal{B} \cup \{x_i, pg_i\}$ \ELSE \STATE $\mathcal{F} \leftarrow \mathcal{F} \cup \{x_i, y_i\}$ \ENDIF \ENDFOR \STATE \hfill$\triangleright$\textit{Iterative Learning} \WHILE {not converge} \FOR {$x_i, pg_i\in \mathcal{B}$} \STATE $\theta = \theta - \nabla_{\theta} J(x_i, pg_i)$ \ENDFOR \FOR {$x_i, y_i\in \mathcal{F}$} \STATE ${pg}_i$ = updated\_parser ($x_i$) \IF {execute(${pg}_i$) = $y_i$ } \STATE $\mathcal{B} \leftarrow \mathcal{B} \cup \{x_i, pg_i\}$ \STATE remove $\{x_i, y_i\}$ from $\mathcal{F}$ \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} \section{Related Works} \paragraph{Math Word Problems} Solving math word problems has attracted researchers for decades. Early solvers \cite{Fletcher1985UnderstandingAS, Bakman2007RobustUO, Yuhui2010FrameBasedCO} use rule-based methods, which are generally fixed and only work on single-step word problems in one category. The next generation of solvers uses semantic parsing techniques \cite{Hosseini2014LearningTS, KoncelKedziorski2015ParsingAW, Shi2015AutomaticallySN, Huang2017LearningFE}. These methods attempt to parse the problem text into an intermediate structured representation, usually annotated in the training set.
Specifically, \citet{Shi2015AutomaticallySN} use a context-free grammar to solve number problems, which is quite different from our grammar for story problems. Statistical learning methods~\cite{Kushman2014LearningTA, Zhou2015LearnTS, Mitra2016LearningTU, Roy2017UnitDG, Huang2016HowWD} attempt to boost semantic parsing techniques, e.g., by choosing the most probable template to use \cite{Mitra2016LearningTU}. However, these templates are still fixed before training, leading to inflexibility in solving more sophisticated problems. Researchers have recently focused on solving math word problems using neural networks \cite{Ling2017ProgramIB, wang-etal-2017-deep, Huang2018NeuralMW, Robaidek2018DataDrivenMF, Wang2018TranslatingMW, Wang_Zhang_Zhang_Xu_Gao_Dai_Shen_2019, Chiang2019SemanticallyAlignedEG, Xie2019AGT, zhang2020graph2tree, hong2021lbf}. The mere translation from a text to an equation neglects the intermediate process required by problem solving and thus lacks interpretability. In this paper, we seek to combine the strengths of both symbolic reasoning and neural networks, using neural modules to update symbolic representations. \paragraph{Situation Model} Situation models have been proven crucial in human discourse comprehension and problem solving~\cite{zwaan1995dimensions,nesher2003situation}. Researchers have long believed that text comprehension is a process of construction and integration~\cite{gernsbacher2013language, kintsch1998comprehension}. \citet{hegarty1995comprehension} indicate that without a situation model, problem solvers using a direct translation approach are more likely to fail on math problems. A recent study \cite{raudszus2019situation} also shows that the ability to build a situation model is a strong indicator of cognitive and linguistic skills. There is a long history of situation model construction for algebra story problem solving~\cite{Reusser1990FromTT, greeno1989, nathan1990, CoquinViennot2007ArithmeticPA, Leiss2010TheRO}.
\citet{kintsch1985understanding} use a situation model to analyze the processing requirements and difficulties of algebra word problems. \citet{nathan1992theory} build a situation model to predict students' mental states and decide how to tutor them based on interaction history. In contrast, this paper builds a situation model for the machine, which uses attributed grammar to model the problem solving of algebra story problems. A situation model should satisfy the following properties \cite{dijk1983}: \begin{itemize} \item Reference: The situation model should represent the world the text describes. \item Coherence: All facts, implicit or explicit, need to be connected as long as the relations are indicated by the text. \item Situational Parameters: It should include the parameters and attributes of the world and events in the text. \item Event Independence: It needs to be invariant regardless of the number of events and their order. \end{itemize} We argue that a parse graph derived from attributed grammar can capture the above properties of a situation model. \paragraph{Attributed Grammar} Attributed grammar was proposed by Knuth to handle the semantics of programming languages~\cite{knuth1990genesis}. In recent years, researchers have used attributed grammar to represent hierarchical grammar structures for images \cite{han2005bottom,wang2013weakly}, video events~\cite{lin2009semantic}, human poses~\cite{park2016attribute}, indoor scene understanding~\cite{qi2018human,jiang2018configurable,huang2018holistic,chen2019holistic++}, \textit{etc}. The attributes are assigned to terminal and non-terminal nodes of a grammar based on commonsense knowledge. Attributes of terminal and non-terminal nodes are related by soft or hard constraints, depending on the specific task. We utilize hard constraints in SMART for mathematical reasoning. In addition, attributes in a parse graph can be propagated in a controlled and formal way.
\section{Introduction} \input{parts/01-introduction} \section{Related works} \label{related_work} \input{parts/02-related_works} \section{Model} \label{model} \input{parts/03-model} \section{Data} \label{data} \input{parts/04-data} \section{Comparison to state-of-the-art} \label{comparison_SOTA} \input{parts/05-comparison_sota} \section{Ablation study} \label{ablation_study} \input{parts/06-ablation_study} \section{Conclusion} \input{parts/07-conclusion} \section*{Acknowledgments} This work benefited from the support of the French National Research Agency (ANR) through the project HORAE ANR-17-CE38-0008. It is also part of the \textit{HOME History of Medieval Europe} research project supported by the European JPI Cultural Heritage and Global Change (Grant agreements No. ANR-17-JPCH-0006 for France, MSMT-2647\\/2018-2/2, id. 8F18002 for Czech Republic and PEICTI Ref. PCI2018-093122 for Spain). Mélodie Boillet is partly funded by the CIFRE ANRT grant No. 2020/0390. \bibliographystyle{IEEEtran} \subsection{Training} Our model is implemented in PyTorch. We trained it with the Adam optimizer, an initial learning rate of \textit{5e-3} and the cross-entropy loss. The weights are initialized using Glorot initialization. In addition, we used mini-batches of size 4 to reduce the training time. We tested different dropout probabilities and decided to keep the model with \textit{p\_dilated = p\_conv = 0.4} since it yielded the best average performance on the validation set. The model is trained for a maximum of 200 epochs and early stopping is used to halt training once the model converges. In the end, we keep the model with the lowest validation loss. We also trained dhSegment on our data with the same splits for a maximum of 60 epochs, since the model is pre-trained and converges faster than ours. We used mini-batches of size 4 and trained on patches of shape \textit{400$\times$400 px}.
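The model-selection rule described above (cap training at 200 epochs, stop once the validation loss stops improving, keep the model with the lowest validation loss) can be sketched framework-agnostically; the patience value below is an illustrative assumption, not a value stated in the paper.

```python
def early_stopping_schedule(val_losses, patience=10, max_epochs=200):
    """Return (best_epoch, stop_epoch) for a run capped at max_epochs.

    Training stops once the validation loss has not improved for
    `patience` epochs (the patience value is an assumption), and the
    model kept is the one with the lowest validation loss so far.
    """
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            return best_epoch, epoch  # stop early
    return best_epoch, min(len(val_losses), max_epochs) - 1
```

In an actual training loop, the same bookkeeping would decide after each epoch whether to checkpoint the current weights and whether to break out of the loop.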
The initial learning rate is \textit{5e-5} and we chose to use a ResNet50 \cite{resnet} as the pre-trained encoder. Early stopping is also used and the best model obtained during training is selected. Both models have the same post-processing step with the same hyper-parameters. After testing thresholds within a range from \textit{0.5} to \textit{0.9}, we kept \textit{t = 0.7} since it shows the best results on the validation set, allowing the expected pixels to be predicted as text lines while rejecting those belonging to the background. Lastly, the small connected components with less than \textit{min\_cc = 50} pixels are discarded. Several values have also been tested for this parameter; however, it did not significantly impact the results. \subsection{Results} \label{results} We trained the two networks on the four datasets and now report the scores obtained for both of them. Most of the existing methods are evaluated using the Intersection-over-Union (IoU) metric. The IoU measures the average similarity between the predicted and the ground truth pixels. Alberti et al. \cite{albertiDIVA} designed a tool to evaluate the performance of a model by calculating the IoU, precision, recall and F-measure. It provides more information about the model's performance at pixel level than the IoU alone. Therefore, to evaluate the models, we computed various pixel-level metrics. We first report the Intersection-over-Union (IoU) as well as the Precision (P), Recall (R) and F1-score (F) in Table \ref{tab:results}. For comparability, the images predicted by dhSegment are resized to \textit{384$\times$384 px} before computing the metrics. In addition, the values are only presented for the text line class (the background is not considered here). The results obtained by our method are often better than those obtained by dhSegment. On the Balsac dataset, our model outperforms dhSegment by up to 6 percentage points on the F1-score metric.
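The pixel-level metrics reported above (IoU, precision, recall, F1) can be computed for the text-line class with a few lines of NumPy; this is a generic sketch of the definitions, not the evaluation tool of Alberti et al.

```python
import numpy as np

def pixel_metrics(pred, gt):
    """IoU, precision, recall and F1 for the positive (text-line) class,
    computed at pixel level. `pred` and `gt` are binary arrays of the
    same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # correctly predicted line pixels
    fp = np.logical_and(pred, ~gt).sum()      # background predicted as line
    fn = np.logical_and(~pred, gt).sum()      # missed line pixels
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1
```

Averaging these per-image values over a test set yields the dataset-level figures reported in the tables.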
This is due to a better separation of close text lines that are often predicted as one single line by dhSegment. Our model separates these lines where dhSegment fails. It also produces smoother and more accurate contours. So far, our model has shown better performance than dhSegment while having no pre-trained encoder. Another interesting point is that our model is much lighter than dhSegment. It has only 4.1M parameters to be learned whereas dhSegment has 32.8M parameters, including 9.36M that have to be fully trained. This leads to a reduced prediction time. Indeed, our model is up to 16 times faster than dhSegment, as shown in Table \ref{tab:prediction_time}. \\ \begin{table}[htbp!] \begin{center} \begin{tabular}{c|cc|c} \hline \multirow{2}{*}{\textsc{Dataset}}&\multicolumn{2}{c|}{\textsc{Mean prediction time$^1$}}&\multirow{2}{*}{\textsc{Ratio}} \\ & dhSegment & Our & \\ \hline Balsac & 2.95 & 0.41 & 7.20 \\ Horae & 7.87 & 0.97 & 8.11 \\ READ-Simple & 3.73 & 0.45 & 8.29 \\ READ-Complex & 4.70 & 0.59 & 7.97 \\ DIVA-HisDB & 12.90 & 0.80 & 16.13 \\ \hline \multicolumn{4}{l}{\footnotesize{$^1$ Predictions made on a GPU GeForce RTX 2070 8G.}} \\[-0.1ex] \end{tabular} \end{center} \caption{Prediction times (s / image) reported for the two networks for the experiments presented in Section \ref{results}.
The ratio column contains the improvement ratios (dhSegment / our times).} \label{tab:prediction_time} \end{table} \begin{table*}[htbp] \begin{center} \begin{tabular}{c|cc|cc|cc|cc} \hline \multirow{2}{*}{\textsc{Dataset}}&\multicolumn{2}{c}{\textsc{Mean IoU (\%)}}&\multicolumn{2}{|c}{\textsc{Precision (\%)}}&\multicolumn{2}{|c}{\textsc{Recall (\%)}}&\multicolumn{2}{|c}{\textsc{F1-score (\%)}} \\ & dhSegment & Our & dhSegment & Our & dhSegment & Our & dhSegment & Our \\ \hline Balsac & 74.02 & \textbf{84.87} & 91.89 & \textbf{94.25} & 79.09 & \textbf{89.49} & 84.95 & \textbf{91.75} \\ Horae & 60.69 & \textbf{68.81} & \textbf{80.94} & 80.31 & 73.65 & \textbf{84.80} & 81.99 & \textbf{88.62} \\ READ-Simple & 65.07 & \textbf{68.14} & \textbf{88.34} & 83.19 & 71.56 & \textbf{78.05} & \textbf{80.72} & 79.45 \\ READ-Complex & 53.34 & \textbf{60.28} & \textbf{85.51} & 81.03 & 57.80 & \textbf{68.17} & 68.47 & \textbf{78.30} \\ DIVA-HisDB & 73.00 & \textbf{74.72} & \textbf{91.56} & 89.43 & 78.28 & \textbf{82.20} & 84.32 & \textbf{85.44} \\ \hline Balsac Fine-tuning & 74.52 & \textbf{85.73} & 91.48 & \textbf{92.90} & 80.03 & \textbf{91.70} & 85.29 & \textbf{92.24} \\ Horae Fine-tuning & 62.79 & \textbf{68.00} & \textbf{86.91} & 79.51 & 71.12 & \textbf{84.51} & 79.91 & \textbf{87.97} \\ READ-Simple Fine-tuning & 64.39 & \textbf{68.14} & \textbf{86.22} & 83.19 & 71.29 & \textbf{78.05} & 77.39 & \textbf{79.45} \\ READ-Complex Fine-tuning & 52.96 & \textbf{60.28} & \textbf{85.63} & 81.03 & 57.43 & \textbf{68.17} & 68.95 & \textbf{78.30} \\ DIVA-HisDB Fine-tuning & 74.24 & \textbf{74.72} & \textbf{92.83} & 89.43 & 78.79 & \textbf{82.20} & 85.18 & \textbf{85.44} \\ \hline \end{tabular} \end{center} \caption{Comparison of the results obtained by the two networks at pixel level. The two models have been trained on the \textit{Multiple document dataset}. 
In the second part of the table, the models have been fine-tuned on the corresponding dataset.} \label{tab:results_pre-training} \end{table*} \subsection{Pre-training} We have shown that pre-training on natural scene images is not required to obtain good results on document images; such pre-training sometimes even performs worse than a model without any pre-trained components. We now want to see if pre-training on document images instead of natural scene images can have a positive impact on the performance. Therefore, in addition to the previous experiments, we trained dhSegment and our model on a mixture of all the datasets presented before. This dataset is denoted in the following as the \textit{Multiple document dataset}. The split obtained by mixing these images is shown in Table \ref{tab:split}. \begin{table}[htbp] \begin{center} \begin{tabular}{lcc} \hline & \sc{Pages} & \sc{Text lines} \\ \hline \sc{Train} & \textbf{1688} & \textbf{77454} \\ \hspace{0.4cm} Balsac & \hspace{0.4cm} 730 & \hspace{0.4cm} 37191 \\ \hspace{0.4cm} Horae & \hspace{0.4cm} 510 & \hspace{0.4cm} 11341 \\ \hspace{0.4cm} READ-Simple & \hspace{0.4cm} 172 & \hspace{0.4cm} 5117 \\ \hspace{0.4cm} READ-Complex & \hspace{0.4cm} 216 & \hspace{0.4cm} 17768 \\ \hspace{0.4cm} DIVA-HisDB & \hspace{0.4cm} 60 & \hspace{0.4cm} 6037 \\ \sc{Validation} & \textbf{188} & \textbf{10562} \\ \hspace{0.4cm} Balsac & \hspace{0.4cm} 92 & \hspace{0.4cm} 4612 \\ \hspace{0.4cm} Horae & \hspace{0.4cm} 17 & \hspace{0.4cm} 251 \\ \hspace{0.4cm} READ-Simple & \hspace{0.4cm} 22 & \hspace{0.4cm} 540 \\ \hspace{0.4cm} READ-Complex & \hspace{0.4cm} 27 & \hspace{0.4cm} 2160 \\ \hspace{0.4cm} DIVA-HisDB & \hspace{0.4cm} 30 & \hspace{0.4cm} 2999 \\ \sc{Test} & \textbf{200} & \textbf{10573} \\ \hspace{0.4cm} Balsac & \hspace{0.4cm} 91 & \hspace{0.4cm} 4356 \\ \hspace{0.4cm} Horae & \hspace{0.4cm} 30 & \hspace{0.4cm} 839 \\ \hspace{0.4cm} READ-Simple & \hspace{0.4cm} 22 & \hspace{0.4cm} 723 \\ \hspace{0.4cm} READ-Complex &
\hspace{0.4cm} 27 & \hspace{0.4cm} 1758 \\ \hspace{0.4cm} DIVA-HisDB & \hspace{0.4cm} 30 & \hspace{0.4cm} 2897 \\ \hline \sc{Total} & \textbf{2076} & \textbf{98589} \\ \hline \end{tabular} \end{center} \caption{Split of the \textit{Multiple document dataset}.} \label{tab:split} \end{table} \begin{figure}[htbp] \centerline{ \includegraphics[width=0.16\textwidth]{resources/Allemagne_Munchen_BayerischeStaatsbibliothek_4Inc.c.a.756g-0018.png} \includegraphics[width=0.16\textwidth]{resources/dhsegment.png} \includegraphics[width=0.16\textwidth]{resources/U-FCN.png} } \caption{Page from the Horae dataset with the results of a line segmentation made by dhSegment (middle) and our model (right). dhSegment merges some lines and fails to detect vertical text lines, whereas our model correctly detects them.} \label{fig:visual_results} \end{figure} These generic models have then been tested on each dataset. The results are reported in Table \ref{tab:results_pre-training}. We also fine-tuned the models on each single dataset. To do so, we continued the training of our model for 80 epochs and of dhSegment for 40 epochs. Without any fine-tuning, our architecture is almost always better than dhSegment's. One can see that our architecture lacks precision, indicating that our model sometimes predicts text line pixels that belong to the background. However, its recall is higher than dhSegment's, which indicates that more of the expected text line pixels are found. This is more interesting for us since it means that we do not miss characters. Figure \ref{fig:visual_results} shows the results of the two models for an image from the Horae dataset. Fine-tuning on each single dataset is not required to obtain good results with either model. With our architecture, only the model trained on Balsac benefited from this fine-tuning. For the three other datasets, fine-tuning did not improve the results, since the best model remains the one obtained before re-training.
These results show that our model outperforms dhSegment regardless of the dataset, with or without fine-tuning. Adding this pre-training step to our model has improved the results, mostly on the READ datasets. This impact is smaller on Balsac, mainly because this dataset represents 43 \% of the \textit{Multiple document dataset}. DIVA-HisDB is also less impacted by the pre-training. This is due to the small amount of training data it has and the high complexity of its pages. \subsection{Comparison of architectures} In this section, we detail the two architectures, dhSegment and Yang et al.'s, and explain the choices we made to design our own model. \subsubsection{dhSegment} dhSegment is the state-of-the-art method for multiple layout analysis tasks on historical documents. It has shown various advantages, such as working with little training data and a reduced training time. In addition, the code to train and test the model is open-source\footnote{https://github.com/dhlab-epfl/dhSegment} and can easily be trained under the same conditions as our model for a fair comparison. \begin{figure*} \centering \begin{subfigure}[hbtp]{0.75\textwidth} \centerline{\includegraphics[width=1\textwidth]{resources/dhSegment-min.pdf}} \caption{Architecture of dhSegment. The encoding path corresponds to a modified version of the ResNet-50\cite{resnet} architecture.} \label{fig:dhsegment_archi} \end{subfigure} \begin{subfigure}[hbtp]{0.50\textwidth} \centerline{\includegraphics[width=1\textwidth]{resources/Yang-min.pdf}} \caption{Architecture of Yang et al.'s model.} \label{fig:yang_archi} \end{subfigure} \hspace{2cm} \begin{subfigure}[hbtp]{0.2\textwidth} \centerline{\includegraphics[width=1\textwidth]{resources/legend.pdf}} \end{subfigure} \begin{subfigure}[hbtp]{0.65\textwidth} \centerline{\includegraphics[width=1\textwidth]{resources/UFCN-min.pdf}} \caption{Architecture of our model Doc-UFCN.
The encoding path is represented in red and the decoding path in blue.} \label{fig:ufcn_archi} \end{subfigure} \caption{Architectures of the different models.} \end{figure*} dhSegment's architecture is presented in Figure \ref{fig:dhsegment_archi}. This model is deeper than Yang et al.'s and can have up to \textit{2048} feature maps. The encoder is composed of convolution (light blue and orange in Figure \ref{fig:dhsegment_archi}) and pooling layers. This encoder is first pre-trained on natural scene images \cite{ImageNet}, and both the encoder and decoder are then trained on document images. The decoder is quite similar to the one used by Yang et al. and consists of successive blocks composed of one standard convolution and one upscaling layer. \subsubsection{Yang et al.} Yang et al.'s model is a multimodal fully convolutional network. It takes into account both the visual and textual contents for the segmentation task. It has shown good performance on synthetic and real datasets of modern document images. The code to train the model is also open-source\footnote{http://personal.psu.edu/xuy111/projects/cvpr2017\_doc.html}. The model is presented in Figure \ref{fig:yang_archi}. It is made of 4 parts: an encoder (red blocks in Figure \ref{fig:yang_archi}), a first decoder outputting a segmentation mask, a second decoder for the reconstruction task and a bridge (red arrows) used for the textual content. The Text Embedding Map and the bridge are used to encode the textual content of the images and then to add the text information to the visual one before the last convolution. To have a fair comparison with dhSegment, only the visual content is used. Therefore, the Text Embedding Map, the bridge and the second decoder for the reconstruction task are removed. \subsection{Description of Doc-UFCN} Recent systems can exhibit long inference times, which can have significant financial and ecological impacts.
Indeed, dhSegment takes up to 66 days to detect the lines of the whole Balsac corpus (almost 2 million pages) on a GeForce RTX 2070. With this in mind, we want to show the impact of the pre-trained parts on the segmentation results while keeping a small network and a reduced prediction time. To design our model, we chose to use the core of Yang et al.'s network since it has a reduced number of parameters and contains no pre-trained parts. Therefore, our architecture is a Fully Convolutional Network (FCN) composed of an encoder (red blocks in Figure \ref{fig:ufcn_archi}) followed by a decoder (blue blocks) and a final convolution layer. Using an FCN without any dense layer has many advantages. First, it greatly reduces the number of parameters since there is no dense connection. In addition, it allows the network to deal with variable input image sizes and to keep the spatial information as is. To keep a light model, the second decoder used by Yang et al. is not used in our architecture. \subsubsection{Contracting path} The contracting path (encoder) consists of 4 dilated blocks. These dilated blocks are slightly different from those presented by Yang et al. since they consist of 5 consecutive dilated convolutions. Using dilated convolutions instead of standard convolutions enlarges the receptive field and gives the network more context information. Each block is followed by a max-pooling layer except for the last one. \subsubsection{Expanding path} The goal of the expanding path (decoder) is to reconstruct the input image with a pixel-wise labeling at the original input image resolution. This deconvolution is usually done using transposed convolutions or upscaling. As suggested by Mechi et al. \cite{mechi2019}, we decided to replace the unpooling layers of Yang et al.'s model with transposed convolutions in order to keep the same resolution for both the input and output.
Therefore, the decoding path is composed of 3 convolutional blocks, each consisting of a standard convolution followed by a transposed convolution. In addition, the features computed during the encoding step are concatenated with those of the decoding stage (purple arrows in Figure \ref{fig:ufcn_archi}). \subsubsection{Last convolution} The last convolutional layer outputs full-resolution feature maps. It returns \textit{c} feature maps with the same size as the input image, \textit{c} being the number of classes involved in the experiment. A softmax layer is then applied to transform these feature maps into probability maps. \subsection{Implementation details} We now present the implementation details of our model. \subsubsection{Input image size} Since our model is inspired by Yang et al. \cite{yang2018}, we decided to use the same input image size. We thus resized the input images and their corresponding label maps into smaller images of size \textit{384$\times$384 px}, adding padding to keep the original image ratio. This reduces the training time without losing too much information. We also tested another input size to assess the impact of this choice (see Section \ref{input_size}). \subsubsection{Dilated block} As stated before, all the dilated blocks are composed of 5 consecutive dilated convolutions with dilation rates \textit{d = 1, 2, 4, 8} and \textit{16}. The blocks respectively have \textit{32, 64, 128} and \textit{256} filters. Each convolution has a \textit{3$\times$3} kernel, a stride of \textit{1} and an adapted padding to keep the same tensor shape throughout the block. All the convolutions of the blocks are followed by a Batch Normalization layer, a ReLU activation and a Dropout layer with probability \textit{p\_dilated}. \subsubsection{Convolutional block} The convolutional blocks are used during the decoding step.
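The dilated block described above can be sketched in PyTorch as follows. This is a minimal sketch based on the description (5 consecutive 3$\times$3 dilated convolutions with rates 1, 2, 4, 8, 16, each followed by Batch Normalization, ReLU and Dropout); the class name and exact layer ordering are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """One encoder block: 5 consecutive 3x3 dilated convolutions
    (rates 1, 2, 4, 8, 16), each followed by BatchNorm, ReLU and
    Dropout(p_dilated). Padding equals the dilation rate so the
    spatial shape is preserved throughout the block."""

    def __init__(self, in_ch, out_ch, p_dilated=0.4):
        super().__init__()
        layers = []
        for i, d in enumerate((1, 2, 4, 8, 16)):
            layers += [
                nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                          kernel_size=3, stride=1, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Dropout(p_dilated),
            ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)
```

With padding equal to the dilation rate, each 3$\times$3 convolution keeps the input height and width unchanged, so the four blocks (32, 64, 128 and 256 filters) only change the channel dimension; max-pooling between blocks does the downsampling.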
The expanding path is composed of 3 convolutional blocks, each composed of a standard convolution followed by a transposed convolution. The blocks respectively have \textit{128, 64} and \textit{32} filters. Each standard convolution has a \textit{3$\times$3} kernel, a stride and a padding of \textit{1}. Each transposed convolution has a \textit{2$\times$2} kernel and a stride of \textit{2}. As for the dilated blocks, all the standard and transposed convolutions are followed by a Batch Normalization layer, a ReLU activation and a Dropout layer with probability \textit{p\_conv}. \subsubsection{Last convolution} The last convolution layer is parametrized as follows: \textit{c} (number of classes) filters, a \textit{3$\times$3} kernel, a stride and a padding of \textit{1}. It is followed by a softmax layer that computes each pixel's class-conditional probabilities. \subsubsection{Post-processing} As a post-processing step, we apply the same operations as those applied by dhSegment: pixels with a confidence score higher than a threshold \textit{t} are kept and connected components with less than \textit{min\_cc} pixels are removed. \subsubsection{Balsac} The Balsac dataset consists of 913 images extracted from 74 registers selected among 44742 registers in total. The images represent pages of acts written in French and are annotated at line level. Two example images are shown in Figure \ref{fig:balsac}. \begin{figure}[htbp] \centerline{ \includegraphics[width=0.185\textwidth]{resources/CE501S02_1906_2_line.png} \hspace{0.1cm} \includegraphics[width=0.25\textwidth]{resources/CE502S61_1878_3_line.png} } \caption{Two pages from the Balsac dataset with annotated text lines: three full acts on the left and one full act on the right.} \label{fig:balsac} \end{figure} \subsubsection{Horae} This dataset consists of 557 annotated pages of books of hours.
These pages have been selected among 500 manuscripts as they represent the variety of layouts and contents \cite{horae2019}. The pages have been annotated at different levels and with various classes such as simple initials, decorated initials or ornamentations. Figure \ref{fig:horae} shows two pages annotated for text line segmentation, selected from two different manuscripts. \subsubsection{READ-BAD} This dataset \cite{gruning_cbad} is composed of 2036 annotated archival images of documents and has been used during the cBAD: ICDAR2017 \cite{cBAD2017} competition. The images have been extracted from 9 archives and the dataset is split into Simple and Complex subsets. Each image has its corresponding ground truth in PAGE XML format. For the line segmentation task, we used the bounding boxes of the \textit{TextLine} objects as labels. \subsubsection{DIVA-HisDB} This last dataset \cite{diva} contains 120 annotated pages extracted from 3 different manuscripts. Each manuscript has 30 training, 10 validation and 10 testing images. \subsection{Batch Normalization} As stated in \cite{batchnorm}, Batch Normalization has a great impact on the convergence speed during training but can also impact the results. Indeed, our model converged more than twice as fast with Batch Normalization. In addition, as shown in Table \ref{tab:ablation}, Batch Normalization has a significant impact on the F1-score, in particular for Horae, READ-Complex and DIVA-HisDB. Beyond the quantitative results, we observed that the visual results with Batch Normalization are also improved. It helps separate close regions but also join regions that would otherwise be split. In addition, the contours of the predicted regions are often more accurate and smoother. \subsection{Dropout} We tested two configurations with dropout layers. The first one (\textit{Drop1}) consists of applying a dropout with \textit{p\_dilated = p\_conv = 0.4} only after the dilated blocks.
The second one (\textit{Drop2}) consists of applying the same dropout after every convolution of the model, and not only after the last one of each dilated block. Applying dropout layers has a positive impact on performance most of the time. Even if the first configuration gives better results on the Horae and READ-Simple datasets, the impact is greater with the second configuration. \subsection{Dilation} When implementing the model, we chose to use a modified version of the dilated block proposed by Yang et al. \cite{yang2018} to have more context information to predict the text lines. To justify our choice of dilation rates, we tested 4 different configurations on the Balsac dataset. We tested blocks with only one convolution and a dilation rate of 1 ($[$\textit{1}$]$) and blocks with a dilation rate of 16 ($[$\textit{16}$]$). We also tested blocks with 5 convolutions with different rates ($[$\textit{1, 1, 1, 1, 1}$]$ and $[$\textit{1, 2, 4, 8, 16}$]$). The results obtained are presented in Table \ref{tab:dilation}. \begin{table}[htbp!] \begin{center} \begin{tabular}{c|c|c|c|c} \hline \sc{Dilation}&\textsc{IoU (\%)}&\textsc{P (\%)}&\textsc{R (\%)}&\textsc{F (\%)} \\ \hline $[$\textit{1}$]$ & 75.94 & \textbf{95.02} & 79.07 & 86.20 \\ $[$\textit{1, 1, 1, 1, 1}$]$ & 79.93 & 92.02 & 85.77 & 88.57 \\ $[$\textit{16}$]$ & 77.45 & 91.68 & 83.22 & 87.13 \\ $[$\textit{1, 2, 4, 8, 16}$]$ & \textbf{83.79} & 94.80 & \textbf{87.86} & \textbf{91.11} \\ \hline \end{tabular} \end{center} \caption{Impact of the dilation rates.} \label{tab:dilation} \end{table} The results with the last configuration are better than any of the others since the receptive field is much larger and the model has more context to predict the text lines. Figure \ref{fig:receptive_field} shows the receptive field growth through the network.
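The receptive field growth can be computed with the standard recurrence over a stack of layers. The layer list below encodes an encoder of four dilated blocks (rates 1, 2, 4, 8, 16) with a 2$\times$2 max-pooling after each of the first three blocks, as described in this paper; the resulting values are consistent with the roughly 1000-pixel versus 200-pixel figures quoted in the text.

```python
def receptive_field(layers):
    """Receptive field of a stack of layers given as
    (kernel, stride, dilation) tuples, via the standard recurrence:
    rf += dilation * (kernel - 1) * jump; jump *= stride."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += d * (k - 1) * jump  # each layer widens the field
        jump *= s                 # strides compound the step size
    return rf

# Encoder: 4 dilated blocks, 2x2 max-pooling after the first three.
encoder = []
for b in range(4):
    encoder += [(3, 1, d) for d in (1, 2, 4, 8, 16)]
    if b < 3:
        encoder.append((2, 2, 1))

rf_dilated = receptive_field(encoder)              # 938 px, ~1000
rf_plain = receptive_field([(k, s, 1) for k, s, _ in encoder])  # 158 px, ~200
```

A single block of five dilated convolutions already reaches a 63-pixel field, against 11 pixels for five standard convolutions, which is why the stacked-rate configuration dominates Table \ref{tab:dilation}.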
The receptive field with the single dilation rate ($[$\textit{16}$]$) corresponds to that of Yang et al.'s model, in which the dilated convolutions are not consecutive. Having dilated convolutions instead of standard ones greatly increases the receptive field size (about 1000 pixels instead of 200), which provides more context to predict the text lines and yields higher performance. \begin{figure}[htbp] \centerline{ \includegraphics[width=0.50\textwidth]{resources/Receptive_field.png} } \caption{Receptive field growth through the network.} \label{fig:receptive_field} \end{figure} \subsection{Training set size} In addition to the ablation study, we analyzed the impact of the training set size on the performance. Therefore, we trained our model on 4 subsets of the Balsac training set and report the results in Table \ref{tab:training_size}. \begin{table}[htbp!] \begin{center} \begin{tabular}{c|c|c|c|c} \hline \sc{Number of images}&\textsc{IoU (\%)}&\textsc{P (\%)}&\textsc{R (\%)}&\textsc{F (\%)} \\ \hline 90 (12\%) & 77.42 & 92.40 & 82.52 & 87.00 \\ 182 (25\%) & 78.64 & 95.17 & 81.85 & 87.91 \\ 365 (50\%) & 80.58 & \textbf{95.69} & 83.58 & 89.16 \\ 731 (100\%) & \textbf{81.95} & 94.53 & \textbf{85.92} & \textbf{89.89} \\ \hline \end{tabular} \end{center} \caption{Impact of the training set size.} \label{tab:training_size} \end{table} The more training data, the higher the IoU. However, this progression does not have the same effect on the precision metric: the model trained with 365 images even has a higher precision than the one trained with 731 images. Moreover, we see that training on only 90 images (12 \% of the training set) gives quite good results, which are even better than those obtained by dhSegment when trained on the whole dataset. \subsection{Input image size} \label{input_size} As we wanted to follow the model proposed in \cite{yang2018}, we decided to train our models on images resized to \textit{384$\times$384 px}.
However, we want to assess the impact of this choice on our results. Therefore, we trained one model on Balsac and one on DIVA-HisDB with images resized to \textit{768$\times$768 px}. Table \ref{tab:image_size} shows that training on larger images slightly improves the results. However, this impact is greater when the training set contains many images: the Balsac dataset contains 731 training images and is more affected than the DIVA-HisDB dataset, which contains only 60 training images. \begin{table}[htbp!] \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline \sc{Dataset}&\sc{Size}&\textsc{IoU (\%)}&\textsc{P (\%)}&\textsc{R (\%)}&\textsc{F (\%)} \\ \hline \multirow{2}{*}{Balsac} & 384 & 83.79 & \textbf{94.80} & 87.86 & 91.11 \\ & 768 & \textbf{86.50} & 94.57 & \textbf{91.06} & \textbf{92.69} \\ \hline \multirow{2}{*}{DIVA-HisDB} & 384 & 75.71 & 92.14 & 80.88 & 86.09 \\ & 768 & \textbf{76.55} & \textbf{93.33} & \textbf{80.98} & \textbf{86.67} \\ \hline \end{tabular} \end{center} \caption{Impact of the input image size.} \label{tab:image_size} \end{table}
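The aspect-preserving resize with padding used to prepare the fixed-size inputs can be sketched as follows; nearest-neighbour interpolation is used here purely for illustration, as the actual interpolation method is not specified in the paper.

```python
import numpy as np

def resize_with_padding(img, size=384):
    """Resize an H x W (x C) image so its longest side equals `size`
    (nearest-neighbour, for illustration), then zero-pad to size x size.
    This preserves the original aspect ratio as described above."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h = max(1, round(h * scale))
    new_w = max(1, round(w * scale))
    # Nearest-neighbour index maps for rows and columns.
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    padded = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    padded[:new_h, :new_w] = resized  # pad with background (zeros)
    return padded
```

Applying the same transform to the label maps keeps images and annotations aligned, and the padding region is simply labeled as background.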
\section{Introduction}\label{intro} More than 1.7 million people have died of COVID-19 \cite{covidmap}. With such catastrophic loss of life, it is a top priority to identify effective measures to prevent the spread of COVID-19. Preventing the spread of COVID-19 poses several challenges. Firstly, 44\% of COVID-19 transmissions occur during the asymptomatic stage, with infected individuals being most infectious 1 to 2 days before symptom onset \cite{ViralShedding}. Secondly, according to the CDC Planning Scenarios report \cite{CDCScenarios}, 40\% of all COVID-19 cases remain asymptomatic, and this number could be as high as 79\% for those under 20 \cite{ChildrenAsymp}. The same report also estimates that asymptomatic individuals are 75\% as infectious as symptomatic individuals. Finally, the false-negative rate of RT-PCR tests on presymptomatic individuals ranges from 67\% to 100\% \cite{testSensitivity}. All of these factors make it very difficult to prevent COVID-19 with symptom-based measures. Contact tracing has existed for decades, helping to reduce tuberculosis \cite{Tuberculosis}, sexually transmitted diseases \cite{STD}, and Ebola \cite{Ebola}. However, manual contact tracing relies on substantial human labor. According to a recent survey by NPR, 39 states do not have enough contact tracers \cite{NPRSurvey}. Additionally, of the contacts reported, only around 50\% were reachable by contact tracers \cite{ManualCTHard}. Health officials estimate that an additional \$12 billion is needed to fund the 180,000 manual tracers required \cite{MCTCost}. Manual contact tracing is also susceptible to errors, with the United Kingdom's Test and Trace service losing 15,000 positive COVID-19 cases between September 25 and October 2 \cite{UKBlunder}. Digital contact tracing is a relatively new method of fighting pandemics.
Although the Bluetooth technology for digital contact tracing was first validated in 2014 \cite{BluetoothTech}, contact tracing apps have only recently been implemented to fight the COVID-19 pandemic. Since digital contact tracing apps require a critical mass of a population in order to be effective, a key goal is to convince enough people to use the app. In \cite{targetcommunity}, it is suggested that targeting small communities like universities first will allow the app to be used by enough people within that community. This is more feasible than expecting a significant proportion of the population at large to use the app. Many universities have experienced outbreaks \cite{nytuniv} and are looking for ways to prevent further spread. Currently, Georgia Tech, Carnegie Mellon, Grand Valley State University, and even the city of Santa Fe are beginning to adopt contact tracing apps like NOVID \cite{novid}. However, there are not yet enough users within these communities to prove the effectiveness of contact tracing apps. We will use simulation to evaluate the effectiveness of digital contact tracing. Efforts to model diseases date back to the 1920s with the creation of the SIR (Susceptible, Infectious, Recovered) model \cite{OGSIR}, where people in a fully mixed population are modeled as being Susceptible, Infectious, or Recovered. The SEIR (Susceptible, Exposed, Infectious, Recovered) model \cite{SEIR} is a variant of the SIR framework used for diseases with longer incubation periods. It includes an Exposed stage representing infected individuals who do not yet show symptoms. More recently, network based models, where humans are vertices and contacts are edges, have been adopted. Epidemiological models have been instrumental in encouraging preventative measures in the COVID-19 pandemic, such as masking \cite{dekaiMaskPaper,maskPaper1}, social distancing \cite{distIndia,TestandDistanceLockdownPaper}, and testing \cite{screenTesting,testingNotLockdown}.
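The classical fully mixed SEIR dynamics referenced above can be sketched as a discrete-time (forward Euler) update; the parameter values used in any demonstration are illustrative only, not fitted COVID-19 values.

```python
def seir_step(s, e, i, r, beta, sigma, gamma):
    """One day of the classical fully mixed SEIR model:
    S -> E at rate beta * S * I / N (new exposures),
    E -> I at rate sigma (incubation ends),
    I -> R at rate gamma (recovery).
    Parameter values are illustrative only."""
    n = s + e + i + r
    new_exposed = beta * s * i / n
    new_infectious = sigma * e
    new_recovered = gamma * i
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)
```

The network-based models discussed next replace the fully mixed term $\beta S I / N$ with per-edge transmission along actual contacts, which is what makes them suitable for studying contact tracing.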
In particular, \cite{dekaiMaskPaper,screenTesting} are network based models. Since digital contact tracing relies on the exact contacts that occur in a population, it is best modeled with a contact network where people are vertices and contacts are edges. Recent efforts to simulate digital contact tracing \cite{NHSXReport,Dig2} have used synthetic networks where households and communities are constructed with random processes such as full mixing, and each pair of people has the same probability of contact. However, very little is known about the exact structure of human contacts and how these structures affect digital contact tracing. Additionally, those models assume perfect COVID-19 tests, which changes the optimal strategies for digital contact tracing and testing. This paper presents an enhanced network based SEIR model of the COVID-19 pandemic, with digital contact tracing and testing strategies that incorporate variations in infectivity and test sensitivity. In contrast to previous work on digital contact tracing, the networks in the model are generated from a real world data set of interactions among 180 students of a high school in France \cite{multiDayData}. Since the data set was only recorded over 7 days, we use a MUNGE-like heuristic to generate additional days for the model. We present a new method to extend temporal weighted graphs in order to perform simulations on a larger population of 5000 people. Our model incorporates test sensitivity and shows that it has a significant impact on digital contact tracing strategies. The model simulates a new digital contact tracing strategy developed by NOVID \cite{novid} called the pre-exposure notification system, which acts like a ``social radar'' telling app users how close they are to COVID-19, allowing them to take extra precautionary measures.
In contrast with traditional SEIR models, test sensitivity is modeled to change over time depending on the time of symptom onset \cite{testSensitivity}. The incubation period is sampled from the COVID-19 incubation period distribution \cite{incubation}. The infectiousness of an individual is modeled to change over the infectious period and is fitted to the function in \cite{ViralShedding}. The model shows that the traditional strategy of quarantining direct contacts reduces infections by less than 10\% when more than half the population is asymptomatic. Testing second and third degree contacts reduces infections by up to 40\% when 70\% of the population uses the app. The pre-exposure notification system reduces infections by an additional 43\% and reduces the number of quarantines required by 51\%. Quarantining second degree contacts reduces infections but leads to a high number of quarantines. If large proportions of the population are asymptomatic, periodic testing reduces infections by an additional 41\%. However, periodic testing without tracing reduces infections by only 3\%. The most effective strategy discussed in this work combined the pre-exposure notification system with testing of second and third degree contacts. This strategy reduces infections by 18.3\% when 30\% of the population uses the app, 45.2\% at 50\% app usage, 72.1\% at 70\% app usage, and 86.8\% at 95\% app usage. When simulating the model on an extended network of 5000 students, the results are very similar, with the contact tracing app reducing infections by up to 79\%. \subsection{Paper Outline} In Section \ref{network}, we present the contact network generation process. Section \ref{covid} outlines the SEIR based model of COVID-19 spread. In Section \ref{app}, we create the model of the contact tracing app. In Section \ref{results}, we present the results of 5 simulated scenarios. 
In Section \ref{S1}, the traditional strategy of quarantining first degree contacts is investigated. In Section \ref{S2}, testing of second and third degree contacts is incorporated. In Section \ref{S6}, the pre-exposure notification system is investigated. Section \ref{S7} investigates periodic testing. Section \ref{S8} simulates the model on an extended version of the graph. In Section \ref{econ}, we estimate the economic value of the contact tracing app. In Section \ref{conclusion}, we provide a summary and concluding remarks. \section{Model of Contact Network}\label{network} The first component of the model is the contact network, which encapsulates the interactions between individuals in the simulation. The model iterates through discrete timestamps, each representing a day in the simulation. Each day in the model, individuals come into contact, and these contacts ultimately determine how the virus transmits. The contact network on day $t$, denoted $G_t$, is defined to be a graph in which the vertices $v_1,\ldots,v_n$ each represent an individual in the model and the edge weight $e_{ij}$ between $v_i$ and $v_j$ represents the total amount of time persons $i$ and $j$ have spent in contact with each other during this time step. We do not distinguish between the times of day of the contacts or the number of separate contacts between the same individuals. This is in accordance with a recent CDC policy change that defines a close contact using cumulative contact time over the course of a day \cite{CDCAppendix}. Traditional SEIR models assume uniform mixing, where each individual has the same probability of coming into contact with every other individual. Since digital contact tracing involves the exact contacts that occur in a population, a realistic contact network is needed. 
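The daily contact graphs described above can be stored as per-day edge-weight maps. The following sketch (class and method names are our own illustrations, not taken from the simulation code) accumulates contact time per pair per day, matching the cumulative-time convention:

```python
from collections import defaultdict

# Minimal sketch of the daily contact graphs G_t. Edge weights accumulate the
# total contact time between a pair of individuals over one day, with no
# distinction between separate contacts (per the CDC cumulative-time rule).
class TemporalContactNetwork:
    def __init__(self):
        # days[t][(i, j)] = cumulative contact duration between persons i and j on day t
        self.days = defaultdict(dict)

    def add_contact(self, t, i, j, duration):
        """Accumulate contact time; pairs are stored unordered."""
        key = (min(i, j), max(i, j))
        self.days[t][key] = self.days[t].get(key, 0.0) + duration

    def weight(self, t, i, j):
        key = (min(i, j), max(i, j))
        return self.days[t].get(key, 0.0)

net = TemporalContactNetwork()
net.add_contact(0, 1, 2, 40)   # 40 seconds of contact on day 0
net.add_contact(0, 2, 1, 20)   # same pair later that day: durations accumulate
assert net.weight(0, 1, 2) == 60
```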
Temporal networks in the model are generated using a publicly available high-resolution data set \cite{multiDayData}, which used RFID (Radio Frequency Identification) devices to record all contacts between 180 students from Lyc\'ee Thiers high school in France over the course of 7 days. Since the simulation runs over longer periods of time, we present a heuristic for generating additional days. Define $H_a$ to be the contact network on day $a$ of the data set, where $1 \le a \le 7$. The edge weight $H_{a,i,j}$ is the duration of contact between persons $i$ and $j$ on day $a$ from the data set. The heuristic is similar to the MUNGE algorithm \cite{MUNGE}, which generates synthetic training data by picking an initial data value and, for each feature, swapping that feature with the corresponding feature of its nearest neighbor with a certain probability. In this heuristic, we use a random day rather than the nearest neighbor. Initially, 2 distinct days $a$ and $b$ are chosen at random from the data set. Starting with the contact graph for day $a$, each contact duration between $v_i$ and $v_j$ is replaced with $H_{b,i,j}$ with probability $0.5$. On average, the generated contact network $G_t$ has half of its edges equal to the corresponding edges in $H_a$ and the other half equal to those in $H_b$. To simulate the app on larger networks, we present a method to generate larger networks from the original data set. We create a modified version of the Albert-Barab\'asi process that is adjusted for constructing weighted temporal graphs and maintains the average degree of the vertices. The principal idea of the Albert-Barab\'asi process is that nodes of high degree are more likely to interact with new nodes. Given a temporal graph $G$ with $n$ vertices whose graph on timestamp $t$ is $G_t$, extra people are added one at a time with the following process: a new vertex $v$ is added, and for each existing vertex $w$, a random third vertex $u$ is chosen. 
The temporal edge weight between $v$ and $w$ is set to be the same as the edge weight between $w$ and $u$: $G_{t,v,w}=G_{t,u,w}$ for all $t$. This step is the same as in the Albert-Barab\'asi process, adjusted to fit a weighted temporal graph. To preserve the average degree of the vertices, each of the original edges of $G$ is deleted with probability $\frac{1}{n-1}$. Note that the generated graph no longer has the scale-free property. The extended network is initialized to be the original graph in the data set and extended in accordance with the above procedure to a total of 5000 individuals. \section{Model of COVID-19 Spread}\label{covid} The SEIR framework organizes the people in the simulation into one of the following four states: Susceptible (S), Exposed (E), Infectious (I), or Recovered (R). Individuals in the Susceptible stage have not been infected and therefore are susceptible to the virus. Exposed individuals have been infected with the virus but do not yet show symptoms. The Infectious stage begins when infected individuals show symptoms. Finally, the Recovered stage contains individuals who have recovered or have otherwise been removed from the model and are now immune from further spread. Individuals in both the Exposed and Infectious stages can infect others. An individual moves on to the next stage according to the following rules: \begin{itemize} \item $S\rightarrow E$: Individuals in the susceptible stage can only be exposed if they come into contact with an infected individual. Infections will be discussed in later sections. \item $E\rightarrow I$: After being exposed to the virus, the period of time before symptom onset is called the incubation period, typically 2 to 14 days. According to \cite{incubation}, the distribution of incubation periods is approximately log-normal with parameters $\mu=1.621$, $\sigma=0.418$. 
The log-normal distribution is defined as follows: if $Z$ is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, then $X=e^{Z}$ is log-normally distributed with parameters $\mu$ and $\sigma$. To determine the incubation period of an individual, we take a random sample from this distribution rounded to the nearest positive integer. \item $I\rightarrow R$: Individuals with moderate symptoms stop being infectious around 10 days from symptom onset \cite{CovidLength}. As in traditional SEIR models, we assume that at every time stamp after symptom onset, there is a probability $\lambda=\frac{1}{9}\approx 0.11$ that an Infectious person recovers or is removed from the model. \end{itemize} As in traditional SEIR models, the functions $S(t),E(t),I(t),R(t)$ are the numbers of individuals in the compartments Susceptible, Exposed, Infectious, and Recovered, respectively, at time $t$. We define $Q(t)$ to be the number of individuals quarantined at time $t$ who have not received a positive test result, and $T(t)$ to be the number of individuals who have received a positive test result before or during time $t$ but have not recovered. Individuals counted in $T(t)$ are in quarantine as well. Quarantines will be discussed in Section \ref{app}. In the rest of the paper, for each individual $v_i$, we will refer to $E_{v_i}$, $I_{v_i}$, and $R_{v_i}$ as the times of exposure, symptom onset, and recovery, respectively, of $v_i$. Additionally, $T_{v_i}$ is the time of the first positive test of $v_i$. In realistic scenarios, individuals can infect others before symptom onset, with the level of infectiousness changing throughout the infection. The relative infectiousness of an individual is calculated in \cite{ViralShedding}. We will call this function $ID(t)$, where the input $t$ is the number of days since symptom onset. 
$ID(t)$ is taken to be the function in \cite{ViralShedding}, where infectiousness was assumed to start 5 days before symptom onset. Individuals are shown to be more infectious before symptom onset than after. We assume the infectiousness of an individual, which is defined to be the probability of infection given a 20-second contact, is some constant multiple, $p$, of this function. Consider the graph $G_t$ to be the contact network during timestamp $t$ in the model, with vertices $v_1,\ldots,v_n$ representing people and edge weight $e_{ij}$ the total contact duration between $v_i$ and $v_j$ during day $t$. Treating infection as an event that can happen during each 20-second unit of a contact, if $v_i$ is in either the exposed or infectious compartment and $v_j$ is susceptible, we model the probability that $v_j$ becomes exposed as $$1-(1-p \, ID(t-I_{v_i}))^d,$$ where $t-I_{v_i}$ is the number of days since symptom onset of $v_i$ and $d=e_{ij}$ is the contact duration measured in units of 20 seconds. The basic reproduction number, or $R_0$, is the expected number of secondary infections caused by a single infectious individual. Estimates of the value of $R_0$ range from $1.5$ to $6.7$ with a median value of $2.8$ \cite{R0Estimate}, and the methods used to estimate $R_0$ vary widely between studies \cite{R0Conundrum}. Through simulation, we calculate the value of $R_0$ as a function of $p$, as shown in Figure \ref{pR0}. The value of $p$ is varied in increments of $0.0025$ from $0$ to $0.225$ for a total of 100 values. For each value of $p$, we run the simulation 1,800 times, where each individual is the seed infection 10 times, and the $R_0$ value is measured as the average number of secondary infections caused by the seed infection. As shown in Figure \ref{pR0}, the value $p=0.10$ yields an $R_0$ value of $2.8$. 
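The exposure probability and incubation sampling above can be sketched in code. This is our own illustration: the `relative_infectiousness` argument stands in for the fitted $ID(t)$ curve, which we do not reproduce here, and the value $0.11$ used in the example is an illustrative placeholder rather than a point on that curve.

```python
import random

MU, SIGMA = 1.621, 0.418   # log-normal incubation parameters from the cited study
P = 0.10                   # calibrated per-20-second infection multiplier (R0 ~ 2.8)

def sample_incubation_period(rng=random):
    """Log-normal incubation period, rounded to the nearest positive integer."""
    return max(1, round(rng.lognormvariate(MU, SIGMA)))

def exposure_probability(relative_infectiousness, duration_units, p=P):
    """P(susceptible contact becomes exposed): 1 - (1 - p*ID)^d, where d is the
    contact duration in 20-second units and ID is the relative infectiousness
    given the infector's days since symptom onset."""
    return 1.0 - (1.0 - p * relative_infectiousness) ** duration_units

# Example: a 30-minute contact (90 units) at an illustrative ID value of 0.11.
prob = exposure_probability(relative_infectiousness=0.11, duration_units=90)
assert 0.6 < prob < 0.65
```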
In the rest of this work, the value of $p$ is set to $0.10$. \begin{figure}[H] \centering \includegraphics[width=8cm]{BasicR0.png} \caption{Probability of infection vs. individual reproduction number} \label{pR0} \end{figure} According to \cite{CDCScenarios}, 40\% of the population remains asymptomatic throughout their infectious periods. Again, estimates vary widely, and it is noted that this value could range from 10\% to 70\%. In \cite{80Kids}, it is estimated that over 80\% of young individuals are asymptomatic. Asymptomatic proportions of 20\%, 40\%, 60\%, and 80\% are tested in the results. Asymptomatic individuals have viral loads similar to those of symptomatic individuals \cite{AsympInf}; thus, we assume that asymptomatic individuals are as infectious as symptomatic individuals. Testing is a key component of contact tracing. The contact tracing app relies on positive tests to identify cases of COVID-19. We adjust test sensitivities based on days from symptom onset according to the distribution calculated in \cite{testSensitivity}. The false-negative rate of RT-PCR COVID-19 tests is between $67\%$ and $100\%$ before symptom onset and falls to $20\%$ to $40\%$ after symptom onset \cite{testSensitivity}. The median number of days from symptom onset to taking a test is 3 days, with interquartile range 1 to 6 \cite{CDCScenarios}. We approximate this distribution by assuming that each day there is a probability of 19\% that a symptomatic individual is tested. This preserves the median of 3 days and the interquartile range of 1 to 6. We do not account for false-positive tests or individuals symptomatic with a disease unrelated to COVID-19. Test results are received after a delay of 1 day. This is similar to the delay on university campuses \cite{UMichTest}. 
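The claim that a 19\% daily testing probability preserves the cited median and interquartile range can be checked with the geometric distribution. This is a sketch; counting the day of symptom onset as day 0 is our assumption:

```python
# Check that a 19% daily testing probability reproduces the cited delay
# distribution: median 3 days, interquartile range 1 to 6.
q = 0.19

def cdf(k):
    """P(a symptomatic individual has been tested by day k after onset, day 0 = onset)."""
    return 1.0 - (1.0 - q) ** (k + 1)

def quantile(p):
    """Smallest day k whose cumulative probability reaches p."""
    k = 0
    while cdf(k) < p:
        k += 1
    return k

assert quantile(0.5) == 3                          # median delay: 3 days
assert (quantile(0.25), quantile(0.75)) == (1, 6)  # interquartile range: 1 to 6
```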
After receiving a positive COVID-19 test result, we assume the person will remain in quarantine until recovery. \section{Model of Contact Tracing App}\label{app} The contact tracing app has incomplete information about the contact network and the states of the individuals. Ultrasound apps such as NOVID can measure distances to the resolution of inches and detect contacts with accuracy over 99.6\% \cite{novid}. We will assume all contacts between individuals with the app are sensed. Thus, the contact tracing app can detect the subgraph of the contact network induced by the set of app users. Contact tracing apps support the consideration of degree $k$ contacts rather than just direct contacts; this is not possible with manual contact tracing. A contact is defined to be a tuple $(p_1,p_2,t,d)$ that contains the following four pieces of information: $p_1,p_2$ are the people involved in the contact, $t$ is the day of the contact, and $d$ is the duration of the contact measured in units of 20 seconds. Since the digital contact tracing app cannot determine the time of symptom onset, the estimated transmission probability of a contact is $1-(1-p')^{d}$. The value $p'=\widehat{ID}\, p$ is substituted for $p\,ID(t)$, where $\widehat{ID}=0.11$ is the average value of $ID(t)$ during the 6 most infectious days. In addition to contacts, users report all positive test results. If a user has tested positive, we assume that this user will report recovery. In the contact tracing app, we define a degree $k$ contact to be a sequence of contacts $c_1,\ldots,c_k$ that satisfies the following properties: \begin{enumerate} \item The individuals $p_1,\ldots,p_{k+1}$ involved in the contacts form a chain: $c_i=(p_i,p_{i+1},t_i,d_i)$. \item $p_1$ has reported a positive test: $t_1 \le T_{p_1} \le t_1+10$. Through simulation of 1800 outbreaks without interventions, 95\% of all symptomatic individuals took a test within 8 days of symptom onset. 
Since infectiousness becomes significant 2 days before symptom onset, the contacts from 10 days before the test must be recalled. \item $p_i,p_{i+1}$ have not reported recovery and thus can transmit or catch the virus: $R_{p_i},R_{p_{i+1}}>t_i$ for each $i$. \item The serial interval is between 1 and 10 days: $t_i+1\le t_{i+1}\le t_i+10$. The serial interval is defined to be the duration from the exposure time of the infector ($t_i$) to the exposure time of the infected ($t_{i+1}$). Through simulation of the model 1800 times, 94\% of all serial intervals are within 10 days. \end{enumerate} The weight of this contact chain is the product of the estimated transmission probabilities of each contact: $$\prod_{i=1}^k (1-(1-p')^{d_i}).$$ First and second degree contacts are computed using contact data from previous days directly after $p_1$ reports a positive test. Larger degree contacts are computed recursively. For each person $v_i$ in the model, we keep track of a matrix $M_i$, where $M_{i,l,t}$ is the total weight of contact chains of length $l$ that affect person $v_i$ at time $t$. Using the contacts, the app can calculate $M_i$ recursively. On day $t$, if $v_i$ has contacts with $v_{a_1},\ldots,v_{a_k}$ with durations $d_1,\ldots,d_k$ respectively, where $v_i,v_{a_1},\ldots,v_{a_k}$ have not reported recovery by day $t$, then: $$M_{i,l,t}=\sum_{s=1}^k \left( \left(1-(1-p')^{d_s}\right)\sum_{r=1}^{10} M_{a_s,l-1,t-r} \right).$$ For each of $v_i$'s contacts, we take the sum of all contact chains of length $l-1$ and multiply by the estimated transmission probability $1-(1-p')^{d_s}$ to obtain the sum of all contact chains of length $l$. A person $x$ is a degree $k$ contact on day $t$ if the sum of the weights of all their contact chains of degree $\le k$ is at least the cutoff $C$: $$\sum_{i=1}^k\sum_{j=1}^{10} M_{x,i,t-j} \ge C.$$ The contact cutoff value, $C$, only affects the quarantine rules and not the simulation of disease transmission. 
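The recursive update of the chain-weight matrices can be sketched as follows. Names and data layout are our own illustrations; the seeding of degree-1 chains from a reported positive test is shown only schematically, with $p' = \widehat{ID}\,p = 0.011$:

```python
from collections import defaultdict

# Sketch of the recursive chain-weight bookkeeping. M[i][l][t] holds the summed
# weight of degree-l contact chains reaching person i at time t; only the last
# 10 days are considered, matching the serial-interval window in the text.
P_PRIME = 0.11 * 0.10   # p' = ID_hat * p

def transmission_weight(duration_units):
    """Estimated transmission probability of one contact: 1 - (1 - p')^d."""
    return 1.0 - (1.0 - P_PRIME) ** duration_units

def update_chains(M, i, contacts, t, max_degree=3, window=10):
    """Fold person i's contacts on day t (list of (person, duration) pairs)
    into chains of degree 2..max_degree."""
    for l in range(2, max_degree + 1):
        total = 0.0
        for s, d in contacts:
            recent = sum(M[s][l - 1].get(t - r, 0.0) for r in range(1, window + 1))
            total += transmission_weight(d) * recent
        M[i][l][t] = M[i][l].get(t, 0.0) + total

def is_degree_k_contact(M, x, t, k, cutoff=0.10, window=10):
    """Degree-k contact rule: summed chain weights of degree <= k reach the cutoff C."""
    weight = sum(M[x][l].get(t - j, 0.0)
                 for l in range(1, k + 1) for j in range(1, window + 1))
    return weight >= cutoff

M = defaultdict(lambda: defaultdict(dict))
M['alice'][1][5] = transmission_weight(30)      # degree-1 chain: contact with a reported case
update_chains(M, 'bob', [('alice', 45)], t=7)   # bob met alice two days later
assert M['bob'][2][7] > 0
assert is_degree_k_contact(M, 'bob', 8, 2)
```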
Essentially, direct contacts that are calculated to be infected with probability at least $C$ are quarantined. The value of $C$ is set to $10\%$ by default. By default, first degree contacts are quarantined. The pre-exposure notification system, as implemented in NOVID \cite{novid}, acts like a ``social radar'' telling users how close they are to COVID-19 by showing the number of positive cases at each distance in their social network. App users can see the number of neighbors of distance $d$ that are COVID-19 positive. We assume that first, second, and third degree neighbors will take precautionary measures. For second degree contacts, we assume a 75\% reduction in contacts. For third degree contacts, we assume a 50\% reduction in contacts. \section{Results}\label{results} At the beginning of the simulation, exactly one individual is exposed while the rest are susceptible. The simulation runs for 120 days. There are 180 individuals, and for each configuration we run 1800 trials where each individual starts as the seed infection 10 times. We simulate the cases when $0\%$, $30\%$, $50\%$, $70\%$, and $95\%$ of the population use the app. We simulate the following 5 scenarios: \begin{enumerate} \item Quarantine of first degree contacts \item Quarantine of first degree contacts with followup testing of second and third degree contacts \item Pre-exposure notification system \item Pre-exposure notification system with periodic testing \item Scenario 4 on the extended graph \end{enumerate} We measure each contact tracing app and COVID-19 testing configuration by 3 metrics: total infected, total days spent in quarantine, and total tests used. In the tables below, the App Proportion column shows the proportion of individuals in the model that use the app. The Asymptomatic column shows the proportion of individuals that are asymptomatic. The Infected column shows the average number of individuals infected after 120 days. 
Quarantines are split into 3 categories: False, True, and Tested. False quarantines are quarantined individuals who do not have COVID-19. Individuals in the True quarantine category are infected with COVID-19 but have not received a positive test result. Tested quarantines are those who have tested positive. We distinguish the Tested quarantines because these infections have been confirmed by a test, while True and False quarantines are predicted by the quarantine rules. Each of the quarantine columns shows the total number of days spent in quarantine across all individuals over 120 days. The Tests Used column shows the number of tests used in the simulation after 120 days. In Scenario 1, the most basic strategy of quarantining first degree contacts is not effective at high asymptomatic levels, suggesting that a more comprehensive testing strategy is required. Scenario 2 shows that testing second and third degree contacts greatly increases the effectiveness of the app. The pre-exposure notification system, which warns second and third degree contacts to take extra precautionary measures, can reduce infections by an additional 40\% while also reducing the number of quarantines by up to 50\%. In Scenario 4, it is shown that periodic testing without contact tracing reduces infections by less than 3\%. Each app user decreases the economic cost of COVID-19 by \$2,841 at 50\% app usage and \$4,185 at 70\% app usage. \subsection{Scenario 1 --- Quarantine of first degree contacts (Traditional Strategy)}\label{S1} This is the most basic strategy, where only first degree contacts are quarantined. Every day, symptomatic individuals have a 19\% chance of getting a test, as discussed in Section \ref{covid}. The simulation is performed on the original network of 180 students. 
\begin{table}[H] \caption{Quarantine direct contacts, no followup testing} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&106.05 & 0 & 0 & 291.87 & 53.57 \\ 30\%&40\%&101.20 & 78.78 & 39.73 & 278.42 & 50.97\\ 50\%&40\%&97.15 & 195.71 & 94.06 & 265.96 & 48.97\\ 70\%&40\%& 87.48 & 329.17 & 149.91 & 242.71 & 44.41 \\ 95\%&40\%& 70.25 & 477.30 & 200.56 & 194.77 & 35.39\\ \hline 0\%&80\%& 110.43 & 0 & 0 & 101.43 & 18.43 \\ 30\%&80\%&107.99 & 33.53 & 19.23 & 98.83 & 18.09\\ 50\%&80\%&108.12 & 88.47 & 52.20 & 97.12 & 17.96\\ 70\%&80\%& 101.82 & 159.04 & 91.53 & 94.11 & 17.04 \\ 95\%&80\%& 99.73 & 282.78 & 156.70 & 90.58 & 16.76\\ \hline \end{tabular} \label{S1R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S1Linegraph120.pdf}\label{S1R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S1Infections120.pdf}\label{S1R2Inf}} \caption{Quarantine direct contacts, no followup testing} \end{figure} From Table \ref{S1R2}, we find that the effectiveness of the app depends on the percentage of people who use the app. At 40\% asymptomatic ratio, the contact tracing app reduces infections by 8.4\% at 50\% app usage and 17.5\% at 70\% app usage. The effectiveness of the app roughly doubles with the extra 20\% of app users, which shows that it is very important to have a majority of app users. A limitation of this strategy is that it is ineffective at high asymptomatic ratios: at 80\% asymptomatic ratio, the contact tracing app reduces infections by less than 10\% in all cases. Since COVID-19 testing relies on symptomatic individuals, the effectiveness of the app is greatly reduced at high asymptomatic levels. 
Fewer than 20 tests are used while over 100 students are infected, which indicates that the majority of infections are undetected. The results in this scenario suggest that more comprehensive testing strategies are needed, especially when large proportions of the population are asymptomatic. The cost of using the app is that quarantines increase as more people use the app. In all cases, the number of false quarantines is approximately double the number of true quarantines. The time of peak infections is similar for each app usage level. In Figure \ref{S1R2Line}, infections rise dramatically, peaking at around 30 days after the initial infection and then falling as herd immunity is reached. Because the quarantine rules require a history of contacts to be recalled, the initial 10 days show no difference between the graphs of each app usage level. At 60\% and 80\% asymptomatic ratios, the app is unable to reduce infections significantly. \subsection{Scenario 2 --- Testing of second and third degree contacts}\label{S2} In this scenario, first, second, and third degree contacts are tested every 3 days. All other parameters are the same as in Scenario 1. The simulation is performed on the original network of 180 students. 
\begin{table}[h] \caption{Quarantine direct contacts, test second, third degree contacts} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%& 104.61 & 0 & 0 & 286.37 & 52.68 \\ 30\%&40\%&98.53 & 113.23 & 32.84 & 325.58 & 107.60\\ 50\%&40\%&83.02 & 259.09 & 73.70 & 340.81 & 234.49\\ 70\%&40\%& 63.33 & 395.14 & 101.05 & 315.52 & 470.46 \\ 95\%&40\%& 27.87 & 314.2 & 77.42 & 172.38 & 670.94\\ \hline 0\%&80\%& 111.52 & 0 & 0 & 100.72 & 18.71 \\ 30\%&80\%&106.79 & 88.73 & 23.59 & 156.34 & 62.29\\ 50\%&80\%&95.37 & 253.73 & 71.30 & 250.42 & 188.05\\ 70\%&80\%& 78.03 & 418.75 & 110.40 & 304.72 & 400.50 \\ 95\%&80\%& 44.10 & 423.31 & 105.73 & 238.87 & 645.00\\ \hline \end{tabular} \label{S2R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S2Linegraph120.pdf}\label{S2R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S2Infections120.pdf}\label{S2R2Inf}} \caption{Test second, third degree contacts} \end{figure} As in Scenario 1, the effectiveness of the app increases greatly as the number of app users increases. In Table \ref{S2R2}, at 40\% asymptomatic ratio, the contact tracing app reduces infections by 20.6\% at 50\% app usage and 39.5\% at 70\% app usage. These reductions are over 2 times greater than in Scenario 1. Notably, at 95\% app usage, infections are reduced by 73.4\%, which is almost twice the reduction at 70\% app usage. Compared to Scenario 1, the number of true quarantines decreases while the number of tested quarantines increases at 40\% asymptomatic ratio. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 14.5\% at 50\% app usage and 30.0\% at 70\% app usage. 
Again, this strategy is significantly more effective than Scenario 1. However, the number of quarantines increases significantly in all categories compared to Scenario 1. In particular, the number of tested quarantines rises by 224\% at 70\% app usage. This is to be expected since the number of tests used rises significantly, from fewer than 20 to over 400. Again, the app is significantly more effective at 95\% app usage, with infections reduced by 60.05\%. Although results for 40\% and 80\% asymptomatic ratios are similar for app proportions below 70\%, at 95\% app proportion the 80\% asymptomatic case yields 58\% more infections. As in Scenario 1, the graphs are similar during the first 10 days and then diverge significantly. However, at high app usage, infections peak before 20 days, while no app usage yields peak infections at around 30 days. In conclusion, testing second and third degree contacts greatly increases the effectiveness of the app, especially at higher asymptomatic proportions. Thus, the increase in test usage and quarantines is justified. \subsection{Scenario 3 --- Pre-exposure notification system}\label{S6} In this scenario, we simulate the pre-exposure notification system. For second degree contacts, we assume a 75\% reduction in contacts. For third degree contacts, we assume a 50\% reduction in contacts. All other parameters are the same as those in Scenario 2. The simulation is performed on the original network of 180 students. 
\begin{table}[H] \caption{Quarantine direct contacts, pre-exposure notification system} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&106.99 & 0 & 0 & 295.84 & 54.24\\ 30\%&40\%&93.48 & 92.80 & 27.40 & 303.44 & 101.73\\ 50\%&40\%&67.25 & 162.16 & 44.36 & 254.02 & 161.40\\ 70\%&40\%&36.21 & 185.80 & 45.24 & 162.85 & 213.24\\ 95\%&40\%&16.61 & 190.22 & 41.60 & 98.10 & 279.80\\ \hline 0\%&80\%&110.90 & 0 & 0 & 101.08 & 18.69\\ 30\%&80\%&103.19 & 87.01 & 22.46 & 150.39 & 64.83\\ 50\%&80\%&81.68 & 187.45 & 48.33 & 189.58 & 142.75\\ 70\%&80\%&54.06 & 251.14 & 61.42 & 179.84 & 220.68\\ 95\%&80\%&29.99 & 287.72 & 65.62 & 156.12 & 309.38\\ \hline \end{tabular} \label{S6R2} \end{table} As in the previous scenarios, the effectiveness of the app increases as the number of app users increases. In Table \ref{S6R2}, at 40\% asymptomatic ratio, the contact tracing app reduces infections by 37.1\% at 50\% app usage and 66.1\% at 70\% app usage. At 95\% app usage, infections are reduced by 84.5\%, an additional 40.2\% reduction compared to Scenario 2. Compared to Scenario 2, the total number of quarantines is reduced by 51.4\% at 70\% app usage. There is a 54.7\% reduction in the number of tests used, which is caused by the reduced interactions of second and third degree contacts. This strategy is more effective even though the numbers of true and tested quarantines are reduced by 55.2\% and 48.4\%, respectively. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 26.3\% at 50\% app usage and 51.3\% at 70\% app usage. Again, this strategy is significantly more effective than the previous scenarios. At 95\% app usage, the 80\% asymptomatic case yields 80.6\% more infections than the 40\% case. 
This is an even more pronounced gap than in Scenario 2. \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S6Linegraph120.pdf}\label{S6R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S6Infections120.pdf}\label{S6R2Inf}} \caption{Quarantine direct contacts, pre-exposure notification system} \end{figure} \subsection{Scenario 4 --- Pre-exposure notification system with periodic testing}\label{S7} In Scenario 4, the pre-exposure notification system combined with periodic testing every 14 days is investigated. As in Scenario 3, second and third degree contacts take extra precautionary measures. Individuals in the model are tested every 14 days. The simulation is performed on the original network of 180 students. At 40\% asymptomatic ratio, the contact tracing app reduces infections by 45.2\% at 50\% app usage and 72.1\% at 70\% app usage. At 95\% app usage, infections are reduced by 86.8\%. Scenarios 3 and 4 show similar results at higher app usage. In this scenario, the app is effective even at 30\% app usage: the 30\% app usage case reduces infections by 18.3\%, which is much more effective than in previous scenarios. Additionally, results are similar at all asymptomatic ratios. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 37.8\% at 50\% app usage and 66.4\% at 70\% app usage. Thus, periodic testing is especially helpful when larger proportions of individuals are asymptomatic. In Figure \ref{S7R2Inf}, the number of infected but untested individuals drops sharply on days when the population is tested. The graph also displays bumps that come from testing the newly identified second and third degree contacts every 3 days. Since the curves are averaged over 1800 trials, this jaggedness is not noise; it is likely caused by many second and third degree contacts being detected at the same time, which synchronizes the followup testing. 
Surprisingly, periodic testing at 0\% app usage reduces infections from 106.05 in Scenario 1 to 102.83, only a 3.0\% decrease. This is an insignificant change given the population of 180 students. Thus, periodic testing without contact tracing is not effective. \begin{table}[H] \caption{Pre-exposure notification system with periodic testing} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&102.83 & 0 & 0 & 465.62 & 1021.34\\ 30\%&40\%&84.05 & 94.78 & 27.03 & 403.80 & 1125.57\\ 50\%&40\%&56.32 & 153.39 & 37.74 & 288.68 & 1286.85\\ 70\%&40\%&28.73 & 166.27 & 38.08 & 159.21 & 1440.73\\ 95\%&40\%&13.61 & 173.85 & 35.31 & 85.44 & 1571.49\\ \hline 0\%&80\%&103.95 & 0 & 0 & 375.20 & 1044.10\\ 30\%&80\%&90.64 & 104.25 & 28.62 & 358.05 & 1117.67\\ 50\%&80\%&64.69 & 174.51 & 44.14 & 281.18 & 1266.00\\ 70\%&80\%&34.95 & 195.32 & 45.23 & 172.00 & 1426.44\\ 95\%&80\%&17.66 & 208.92 & 43.91 & 105.50 & 1562.05\\ \hline \end{tabular} \label{S7R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S7Linegraph120.pdf}\label{S7R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S7Infections120.pdf}\label{S7R2Inf}} \caption{Pre-exposure notification system with periodic testing} \end{figure} \subsection{Graph Extension}\label{S8} In this section, we simulate the strategy of Scenario 4 on an extended version of the graph. By the process described in Section \ref{network}, the original weighted temporal graph is extended to 5000 individuals. This population size is closer to larger communities such as schools or universities. 
\begin{table}[h] \caption{Graph Extension} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&2429.94 & 0 & 0 & 10991.94 & 31001.67\\ 30\%&40\%&1941.70 & 5858.34 & 869.33 & 10054.09 & 36557.43\\ 50\%&40\%&1753.64 & 15410.51 & 1686.31 & 9847.13 & 45625.01\\ 70\%&40\%&1301.11 & 26188.07 & 2111.04 & 7902.39 & 62003.10 \\ 95\%&40\%&501.21 & 22809.78 & 1438.89 & 3319.28 & 81969.96\\ \hline 0\%&80\%&2433.38 & 0 & 0 & 8759.07 & 31576.44\\ 30\%&80\%&2253.48 & 6640.62 & 1017.38 & 10021.54 & 36332.13\\ 50\%&80\%&1888.32 & 16593.59 & 1851.73 & 9563.09 & 45640.46\\ 70\%&80\%&1341.07 & 26338.16 & 2191.90 & 7671.17 & 61131.04\\ 95\%&80\%&591.61 & 26708.16 & 1723.15 & 3861.01 & 86490.97\\ \hline \end{tabular} \label{S8R2} \end{table}

At 40\% asymptomatic ratio, the contact tracing app reduces infections by 27.8\% at 50\% app usage and 46.5\% at 70\% app usage. This is less effective than scenario 4. At 95\% app usage, infections are reduced by 79.4\%, which is similar to scenario 4. Since the average degree is preserved on the network of 5000 students, the similar results are not surprising. As in scenario 4, the results are similar for all asymptomatic ratios. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 22.4\% at 50\% app usage and 44.9\% at 70\% app usage. At 95\% app usage, infections are reduced by 75.7\%. These numbers are very similar to the corresponding values at 40\% asymptomatic ratio. As in scenario 4, Figure \ref{S8R2Inf} shows noticeable drops in infections on days when the population is tested. Interestingly, at 95\% app usage, the number of infected individuals never rises above 100 people, or 2\% of the population. The graph does not exhibit an obvious peak but rather a sustained level of infections.
This reflects the greater population size. \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S8Linegraph120.pdf}\label{S8R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S8Infections120.pdf}\label{S8R2Inf}} \caption{Graph Extension} \end{figure}

\subsection{Economic Impact}\label{econ} In this section, we estimate the economic impact of COVID-19 and preventative measures. According to a report by the Kaiser Family Foundation, most COVID-19 tests cost between \$100 and \$200, with the median cost being \$127 \cite{TestCost}. Universities are charging students up to \$1000 to quarantine for 2 weeks \cite{PurdueQ,SyracuseQ}, around \$71 a day. For students, we estimate a 20 hour work week with an hourly wage of \$20, which puts daily income at \$57. The average tuition cost at universities can be as high as \$41,411 \cite{Tuition}. Assuming 180 days in a school year and that quarantined students miss classes and coursework, the average tuition per day is \$230. We roughly estimate the social cost of a single day in quarantine to be \$358, the sum of the previous three numbers. The economic cost of a single COVID-19 infection, or the societal willingness-to-pay threshold for avoiding one COVID-19 infection, is estimated to be \$8,500 \cite{screenTesting}. We calculate the total economic cost of COVID-19 for scenarios 3 and 4. The decrease in economic cost is computed in comparison to the total economic cost at 0\% app usage in scenario 1. The average economic impact per app user is the decrease in the total economic cost of COVID-19 divided by the number of app users.
\begin{table}[h] \caption{Average Economic Impact Per App User (Scenario 3)} \centering \begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{2}{*}{Impact Per User}\\ \cline{1-2} App Proportion & Asymptomatic &\\ \hline 30\% & 40\% & \$991\\ 50\% & 40\% & \$2,841\\ 70\% & 40\% & \$4,260\\ 95\% & 40\% & \$4,198\\ 30\% & 80\% & -\$19\\ 50\% & 80\% & \$1,251\\ 70\% & 80\% & \$2,678\\ 95\% & 80\% & \$2,928\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Average Economic Impact Per App User (Scenario 4)} \centering \begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{2}{*}{Impact Per User}\\ \cline{1-2} App Proportion & Asymptomatic &\\ \hline 30\% & 40\% & -\$607\\ 50\% & 40\% & \$2,208\\ 70\% & 40\% & \$3,614\\ 95\% & 40\% & \$3,461\\ 30\% & 80\% & -\$2,052\\ 50\% & 80\% & \$974\\ 70\% & 80\% & \$2,788\\ 95\% & 80\% & \$2,927\\ \hline \end{tabular} \end{table}

The economic benefit per app user can reach as high as \$4,260 in scenario 3. In the majority of cases, the strategy in scenario 3 yields a higher per-user benefit. This shows that testing the entire population every 2 weeks is not cost effective. The economic impact per user is actually negative at low app usage in scenario 4. This is because the periodic testing creates significant cost while, as mentioned in Section \ref{S7}, being ineffective without sufficient contact tracing.

\section{Discussion and Conclusion}\label{conclusion} Given the significant loss of life at stake, finding effective measures to prevent the spread of COVID-19 is a top priority. Challenges in COVID-19 prevention include significant pre-symptomatic transmission, high proportions of asymptomatic cases, and inaccurate tests during the pre-symptomatic stage. This work focuses on addressing these challenges by presenting effective strategies for digital contact tracing and testing.
The parameters in this model can be easily changed as more precise values of COVID-19 parameters are measured. As more data on human social patterns are collected, larger social networks can be constructed leading to more accurate predictions in the model. Additionally, as more accurate COVID-19 tests are developed, model results and optimal strategies could change. Finally, this model only considers infections based on close contacts. Although COVID-19 mainly spreads through close contacts \cite{CDCFAQ}, the significance of spread through indirect contacts is unclear and could be investigated in future models. In conclusion, by simulating a variety of tracing and testing strategies, we found that digital contact tracing can be very effective when combined with testing. In scenario 1, the most basic strategy of quarantining first degree contacts is not effective at high asymptomatic levels, suggesting that a more comprehensive testing strategy is required. Testing second and third degree contacts greatly increases the effectiveness of the app. The pre-exposure notification system, which warns second and third degree contacts to take extra precautionary measures, can reduce infections by an additional 40\% while also reducing the number of quarantines by up to 50\%. While periodic testing with contact tracing is effective, periodic testing without contact tracing reduces infections by less than 3\%. We find the results on the extended network of 5000 students to be similar. \section{Acknowledgements} I would like to thank my mentor Dr. Jesse Geneson for his invaluable advice and guidance throughout the project. This would not be possible without him. Thank you to Dr. Tanya Khovanova and Alexander Vitanov for reviewing the paper draft. This work was completed under the MIT PRIMES-USA program. \section{Appendix A: App Usage} In the following, we present figures for the number of infections as a function of the app usage. 
Simulations are run with the proportion of app users ranging from 0\% to 100\% in increments of 5\%. The app becomes much more effective as more of the population uses it. As shown in Section \ref{S7}, scenario 4 shows very close results for all asymptomatic levels. Scenarios 3 and 4 show that the effectiveness of the app begins to level off after more than 80\% of the population uses the app. \begin{figure}[H] \renewcommand\thefigure{A.1} \centering \subfloat[Traditional App Configuration]{\includegraphics[width=8.5cm]{S1AppvsInf.png}\label{S1B}} \subfloat[Testing second and third degree contacts]{\includegraphics[width=8.5cm]{S2AppvsInf.png}\label{S2B}} \newline \subfloat[Pre-exposure notification system]{\includegraphics[width=8.5cm]{S6AppvsInf.png}\label{S3B}} \subfloat[Periodic testing]{\includegraphics[width=8.5cm]{S7AppvsInf.png}\label{S4B}} \caption{App Usage vs Infections} \end{figure} \bibliographystyle{plain}

\section{Introduction}\label{intro} More than 1.7 million people have died of COVID-19 \cite{covidmap}. With such catastrophic loss of life at risk, it is a top priority to identify effective measures to prevent the spread of COVID-19. Preventing the spread of COVID-19 presents several challenges. Firstly, 44\% of COVID-19 transmissions occur during the asymptomatic stage, with infected individuals being most infectious 1 to 2 days before symptom onset \cite{ViralShedding}. Secondly, according to the CDC Planning Scenarios report \cite{CDCScenarios}, 40\% of all COVID-19 cases remain asymptomatic, and this number could be as high as 79\% for those under 20 \cite{ChildrenAsymp}. The CDC Planning Scenarios report also estimates that asymptomatic individuals are 75\% as infectious as symptomatic individuals. Finally, the false-negative rate of RT-PCR tests on presymptomatic individuals ranges from 67\% to 100\% \cite{testSensitivity}. All of these factors make it very difficult to prevent COVID-19 with symptom-based measures.
Contact tracing has existed for decades, helping to reduce tuberculosis \cite{Tuberculosis}, sexually transmitted diseases \cite{STD}, and Ebola \cite{Ebola}. However, manual contact tracing relies on substantial human labor. According to a recent survey by NPR, 39 states do not have enough contact tracers \cite{NPRSurvey}. Additionally, of the contacts reported, only around 50\% were reachable by contact tracers \cite{ManualCTHard}. Health officials estimate that an additional \$12 billion is needed to fund the 180,000 manual tracers required \cite{MCTCost}. Manual contact tracing is also susceptible to errors, with the United Kingdom's Test and Trace service losing 15,000 positive COVID-19 cases between September 25 and October 2 \cite{UKBlunder}.

Digital contact tracing is a relatively new method of fighting pandemics. Although the Bluetooth technology for digital contact tracing was first validated in 2014 \cite{BluetoothTech}, contact tracing apps have only recently been implemented to fight the COVID-19 pandemic. Since digital contact tracing apps require a critical mass of a population in order to be effective, a key goal is to convince enough people to use the app. In \cite{targetcommunity}, it is suggested that targeting small communities like universities first will allow the app to be used by enough people within that community. This is more feasible than expecting a significant proportion of the population at large to use the app. Many universities have experienced outbreaks \cite{nytuniv} and are looking for ways to prevent further spread. Currently, Georgia Tech, Carnegie Mellon, Grand Valley State University, and even the city of Santa Fe are beginning to adopt contact tracing apps like NOVID \cite{novid}. However, there are not yet enough users within these communities to demonstrate the effectiveness of contact tracing apps. We will use simulation to evaluate the effectiveness of digital contact tracing.
Efforts to model diseases date back to the 1920s with the creation of the SIR (Susceptible, Infectious, Recovered) model \cite{OGSIR}, where people in a fully mixed population are modeled as being Susceptible, Infectious, or Recovered. The SEIR (Susceptible, Exposed, Infectious, Recovered) model \cite{SEIR} is a variant of the SIR framework used for diseases with longer incubation periods. It includes the Exposed stage, which contains infected individuals who do not yet show symptoms. More recently, network based models, where humans are vertices and contacts are edges, have been adopted. Epidemiological models have been instrumental in encouraging preventative measures in the COVID-19 pandemic such as masking \cite{dekaiMaskPaper,maskPaper1}, social distancing \cite{distIndia,TestandDistanceLockdownPaper}, and testing \cite{screenTesting,testingNotLockdown}. In particular, \cite{dekaiMaskPaper,screenTesting} are network based models.

Since digital contact tracing relies on the exact contacts that occur in a population, it is best modeled with a contact network where people are vertices and contacts are edges. Recent efforts to simulate digital contact tracing \cite{NHSXReport,Dig2} have used synthetic networks where households and communities are constructed with random processes such as full mixing, in which each pair of people has the same probability of contact. However, very little is known about the exact structure of human contacts and how these structures affect digital contact tracing. Additionally, those models assume perfect COVID-19 tests, which changes the optimal strategies of digital contact tracing and testing. This paper presents an enhanced network based SEIR model of the COVID-19 pandemic with digital contact tracing and testing strategies that incorporates variations in infectivity and test sensitivity.
In contrast to previous work on digital contact tracing, the networks in the model are generated from a real-world data set of interactions among 180 students of a high school in France \cite{multiDayData}. Since the data set was only recorded over 7 days, we use a MUNGE-like heuristic to generate additional days for the model. We present a new method to extend temporal weighted graphs in order to perform simulations on a larger population of 5000 people. Our model incorporates test sensitivity and shows that it has a significant impact on digital contact tracing strategies. The model simulates a new digital contact tracing strategy developed by NOVID \cite{novid} called the pre-exposure notification system. The pre-exposure notification system acts like a ``social radar'' telling app users how close they are to COVID-19, allowing them to take extra precautionary measures. In contrast with traditional SEIR models, test sensitivity is modeled to change over time depending on the time of symptom onset \cite{testSensitivity}. The incubation period is sampled from the COVID-19 incubation period distribution \cite{incubation}. The infectiousness of an individual is modeled to change over the infectious period and is fitted to the function in \cite{ViralShedding}.

The model shows that the traditional strategy of quarantining direct contacts reduces infections by less than 10\% when more than half the population is asymptomatic. Testing second and third degree contacts reduces infections by up to 40\% when 70\% of the population uses the app. The pre-exposure notification system reduces infections by an additional 43\% and reduces the number of quarantines required by 51\%. Quarantining second degree contacts reduces infections but leads to a high number of quarantines. If large proportions of the population are asymptomatic, periodic testing reduces infections by an additional 41\%. However, periodic testing without tracing reduces infections by only 3\%.
The most effective strategy discussed in this work combined the pre-exposure notification system with testing of second and third degree contacts. This strategy reduces infections by 18.3\% when 30\% of the population uses the app, 45.2\% when 50\% of the population uses the app, 72.1\% when 70\% of the population uses the app, and 86.8\% when 95\% of the population uses the app. When simulating the model on an extended network of 5000 students, the results are very similar, with the contact tracing app reducing infections by up to 79\%.

\subsection{Paper Outline} In Section \ref{network}, we present the contact network generation process. Section \ref{covid} outlines the SEIR based model of COVID-19 spread. In Section \ref{app}, we create the model of the contact tracing app. In Section \ref{results}, we present the results of the simulated scenarios. In Section \ref{S1}, the traditional strategy of quarantining first degree contacts is investigated. In Section \ref{S2}, testing of second and third degree contacts is incorporated. In Section \ref{S6}, the pre-exposure notification system is investigated. Section \ref{S7} investigates periodic testing. Section \ref{S8} simulates the model on an extended version of the graph. In Section \ref{econ}, we estimate the economic value of the contact tracing app. In Section \ref{conclusion}, we provide a summary and concluding remarks.

\section{Model of Contact Network}\label{network} The first component of the model is the contact network, which encapsulates the interactions between individuals in the simulation. The model iterates through discrete timestamps, each representing a day in the simulation. Each day in the model, individuals come into contact, and these contacts ultimately determine how the virus transmits.
The contact network on day $t$, denoted $G_t$, is defined to be a graph whose vertices $v_1,\ldots,v_n$ represent the individuals in the model and whose edge weight $e_{ij}$ between $v_i$ and $v_j$ represents the total amount of time person $i$ and person $j$ have spent in contact with each other during this time step. We do not distinguish between the times of day of the contacts or the number of contacts between the same individuals. This is in accordance with a recent CDC policy change that defines a close contact to be measured using cumulative contact time over the course of a day \cite{CDCAppendix}.

Traditional SEIR models assume uniform mixing, where each individual has the same probability of coming into contact with every other individual. Since digital contact tracing involves the exact contacts that occur in a population, a realistic contact network is needed. Temporal networks in the model are generated using a publicly available high-resolution data set \cite{multiDayData}, which used RFID (Radio Frequency Identification Devices) to record all contacts between 180 students from Lyc\'ee Thiers high school in France over the course of 7 days. Since the simulation runs over longer periods of time, we present a heuristic for generating additional days. Define $H_a$ to be the contact network on day $a$ of the data set, where $1 \le a \le 7$. The edge weight $H_{a,i,j}$ is the duration of contact between people $i$, $j$ on day $a$ from the data set. The heuristic is similar to the MUNGE algorithm \cite{MUNGE}, which generates synthetic training data. That algorithm picks an initial data value and, for each feature, with a certain probability, swaps that feature with the corresponding feature of its nearest neighbor. In our heuristic, we use a random day rather than the nearest neighbor. Initially, 2 distinct days $a$, $b$ are chosen at random from the data set.
Starting with the contact graph for day $a$, we replace the contact duration between $v_i$ and $v_j$ with $H_{b,i,j}$ with probability $0.5$, independently for each pair. On average, the generated contact network $G_t$ has half of its edges equal to the corresponding edges in $H_a$ and the other half equal to those in $H_b$.

To simulate the app on larger networks, we present a method to generate larger networks from the original data set. We create a modified version of the Albert-Barab\'asi process that is adjusted for constructing weighted temporal graphs and maintains the average degree of the vertices. The principal idea of the Albert-Barab\'asi process is that nodes of high degree are more likely to interact with new nodes. Given a temporal graph $G$ with $n$ vertices, where the graph at timestamp $t$ is $G_t$, extra people are added one at a time with the following process: a new vertex $v$ is added, and for each existing vertex $w$, a random third vertex $u$ is chosen. The temporal edge weight between $v$ and $w$ is set to be the same as the edge weight between $w$ and $u$: $G_{t,v,w}=G_{t,u,w}$ for all $t$. This step is the same as in the Albert-Barab\'asi process, adjusted to fit a weighted temporal graph. To preserve the average degree of the vertices, each of the original edges of $G$ is deleted with probability $\frac{1}{n-1}$. Note that the generated graph no longer has the scale-free property. The extended network is initialized to be the original graph in the data set and extended in accordance with the above procedure to a total of 5000 individuals.

\section{Model of COVID-19 Spread}\label{covid} The SEIR framework organizes the people in the simulation into one of the following four states: Susceptible (S), Exposed (E), Infectious (I), or Recovered (R). Individuals in the Susceptible stage have not been infected and therefore are susceptible to the virus. Exposed individuals have been infected with the virus but do not yet show symptoms.
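The day-generation heuristic of Section \ref{network} can be sketched as follows. This is a minimal illustration, not the original implementation: we assume each daily contact network is stored as a symmetric matrix of contact durations, and the function name and array layout are our own.

```python
import numpy as np

def mix_days(H_a, H_b, swap_prob=0.5, rng=None):
    """MUNGE-like heuristic: start from day a's contact matrix and,
    independently for each unordered pair, swap in day b's contact
    duration with probability swap_prob (0.5 in the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    n = H_a.shape[0]
    G = H_a.copy()
    # one coin per unordered pair, mirrored so the matrix stays symmetric
    mask = np.triu(rng.random((n, n)) < swap_prob, k=1)
    mask = mask | mask.T
    G[mask] = H_b[mask]
    return G
```

Drawing one coin per unordered pair (rather than per directed entry) keeps the generated contact matrix symmetric, matching the undirected contacts in the data set.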
The Infectious stage begins when infected individuals show symptoms. Finally, the Recovered stage contains individuals who have recovered or have otherwise been removed from the model and are now immune to further spread. Individuals in both the Exposed and Infectious stages can infect others. An individual moves on to the next stage according to the following rules: \begin{itemize} \item $S\rightarrow E$: Individuals in the Susceptible stage can only be exposed if they come into contact with an infected individual. Infections are discussed in later sections. \item $E\rightarrow I$: After exposure to the virus, the period of time before symptom onset is called the incubation period, typically 2 to 14 days. According to \cite{incubation}, the distribution of incubation periods is approximately log-normal with parameters $\mu=1.621$, $\sigma=0.418$. The log-normal distribution is defined as follows: if $Z$ is a random variable with normal distribution with mean $\mu$ and standard deviation $\sigma$, then $X=e^{Z}$ has the log-normal distribution with parameters $\mu,\sigma$. To determine the incubation period of an individual, we take a random sample from this distribution rounded to the nearest positive integer. \item $I\rightarrow R$: Individuals with moderate symptoms stop being infectious around 10 days from symptom onset \cite{CovidLength}. As in traditional SEIR models, we assume that at every time stamp after symptom onset, there is a probability $\lambda=\frac{1}{9}\approx 0.11$ that an Infectious person recovers or is removed from the model. \end{itemize} As in traditional SEIR models, the functions $S(t),E(t),I(t),R(t)$ are the number of individuals in the Susceptible, Exposed, Infectious, and Recovered compartments, respectively, at time $t$.
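The incubation-period sampling in the $E\rightarrow I$ rule above can be sketched directly from the stated log-normal parameters; a minimal sketch assuming NumPy, with the function name our own.

```python
import numpy as np

def sample_incubation(n, mu=1.621, sigma=0.418, rng=None):
    """Sample n incubation periods (in days) from the log-normal fit
    cited in the text, rounded to the nearest positive integer."""
    rng = np.random.default_rng() if rng is None else rng
    days = np.rint(rng.lognormal(mean=mu, sigma=sigma, size=n)).astype(int)
    # rounding can produce 0 for very small draws; clamp to 1 day
    return np.maximum(days, 1)
```

Since the median of this log-normal is $e^{1.621}\approx 5.1$, sampled incubation periods cluster around 5 days, consistent with the cited distribution.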
We define $Q(t)$ to be the number of individuals quarantined at time $t$ who have not received a positive test result, and $T(t)$ to be the number of individuals who have received a positive test result before or during time $t$ but have not recovered. Individuals counted in $T(t)$ are in quarantine as well. Quarantines are discussed in Section \ref{app}. In the rest of the paper, for each individual $v_i$, we refer to $E_{v_i}$, $I_{v_i}$, and $R_{v_i}$ as the times of exposure, symptom onset, and recovery, respectively, of $v_i$. Additionally, $T_{v_i}$ is the time of the first positive test of $v_i$.

In realistic scenarios, individuals can infect others before symptom onset, with the level of infectiousness changing throughout the infection. The relative infectiousness of an individual is calculated in \cite{ViralShedding}. We call this function $ID(t)$, where the input $t$ is the number of days since symptom onset. $ID(t)$ is taken to be the function in \cite{ViralShedding}, where infectiousness was assumed to start 5 days before symptom onset. Individuals are shown to be more infectious before symptom onset than after. We assume the infectiousness of an individual, defined to be the probability of infection given a 20-second contact, is some constant multiple, $p$, of this function.

Consider the graph $G_t$ to be the contact network during timestamp $t$ in the model, with vertices $v_1,\ldots,v_n$ representing people and edge weight $e_{ij}$ the total contact duration between $v_i,v_j$ during day $t$. Modeling infection as an event that can happen at any point during the course of a contact, if $v_i$ is in either the Exposed or Infectious compartment and $v_j$ is susceptible, we model the probability that $v_j$ becomes exposed as $$1-(1-p\, ID(t-I_{v_i}))^d$$ where $t-I_{v_i}$ is the number of days since symptom onset of $v_i$ and $d=e_{ij}$ is the contact duration measured in units of 20 seconds. The basic reproduction number, or $R_0$, is the expected number of secondary infections caused by a single infectious individual.
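The per-day exposure probability above can be written as a small function. Because the fitted curve $ID(t)$ from the cited work is not reproduced here, the sketch below takes the relative infectiousness as an argument; for illustration we plug in the average value $0.11$ that the paper later uses for the app's estimate.

```python
def infection_prob(p, infectiousness, duration):
    """Probability that a susceptible contact becomes exposed:
    1 - (1 - p * ID)^d, where `infectiousness` stands in for
    ID(t - I_{v_i}) and `duration` d is contact time in 20-second units."""
    per_unit = p * infectiousness  # infection prob. per 20-second unit
    return 1.0 - (1.0 - per_unit) ** duration
```

As expected, the probability is 0 for zero-duration contacts and increases monotonically with contact duration.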
Estimates of the value of $R_0$ range from $1.5$ to $6.7$, with the median value being $2.8$ \cite{R0Estimate}, and the methods used to estimate $R_0$ vary widely between studies \cite{R0Conundrum}. Through simulation, we calculate the value of $R_0$ as a function of $p$, as shown in Figure \ref{pR0}. The value of $p$ is varied in increments of $0.0025$ from $0$ to $0.225$. For each value of $p$, we run the simulation 1,800 times, where each individual is the seed infection 10 times, and the $R_0$ value is measured as the average number of secondary infections caused by the seed infection. As shown in Figure \ref{pR0}, setting $p=0.10$ yields an $R_0$ value of $2.8$. In the rest of this work, the value of $p$ is set to $0.10$.

\begin{figure}[H] \centering \includegraphics[width=8cm]{BasicR0.png} \caption{Probability of infection vs individual reproduction number}\label{pR0} \end{figure}

According to \cite{CDCScenarios}, 40\% of the population remain asymptomatic throughout their infectious periods. Again, estimates vary widely, and it is noted that this value could range from 10\% to 70\%. In \cite{80Kids}, it is estimated that over 80\% of young individuals are asymptomatic. Asymptomatic ratios of 20\%, 40\%, 60\%, and 80\% are tested in the results. Asymptomatic individuals have similar viral loads to symptomatic individuals \cite{AsympInf}. Thus, we assume that asymptomatic individuals are as infectious as symptomatic individuals.

Testing is a key component of contact tracing. The contact tracing app relies on positive tests to identify cases of COVID-19. We adjust test sensitivities based on days from symptom onset according to the distribution calculated in \cite{testSensitivity}. The false-negative rate of RT-PCR COVID-19 tests is between $67\%$ and $100\%$ before symptom onset, falling to $20\%$ to $40\%$ after symptom onset \cite{testSensitivity}.
The median number of days from symptom onset to taking a test is 3 days, with interquartile range 1 to 6 \cite{CDCScenarios}. We approximate this distribution by assuming that each day there is a probability of 19\% that a symptomatic individual is tested. This preserves the median of 3 days and the interquartile range of 1 to 6. We do not account for false-positive tests or individuals symptomatic with a disease unrelated to COVID-19. Test results are received after a delay of 1 day. This is similar to the delay on university campuses \cite{UMichTest}. After receiving a positive COVID-19 test result, we assume the person remains in quarantine until recovery.

\section{Model of Contact Tracing App}\label{app} The contact tracing app has incomplete information about the contact network and the states of the individuals. Ultrasound apps such as NOVID can measure distances to the resolution of inches and detect contacts with accuracy over 99.6\% \cite{novid}. We assume all contacts between individuals with the app are sensed. Thus, the contact tracing app can detect the subgraph of the contact network induced by the set of app users. Contact tracing apps support the consideration of degree $k$ contacts rather than just direct contacts. This is not possible with manual contact tracing.

A contact is defined to be a tuple $(p_1,p_2,t,d)$ that contains the following four pieces of information: $p_1,p_2$ are the people involved in the contact, $t$ is the day of the contact, and $d$ is the duration of the contact measured in units of 20 seconds. Since the digital contact tracing app cannot determine the time of symptom onset, the estimated transmission probability of the contact is $1-(1-p')^{d}$. The value $p'=\widehat{ID}\, p$ is substituted for $p\, ID(t)$, where $\widehat{ID}=0.11$ is the average value of $ID(t)$ during the 6 most infectious days. In addition to contacts, users report all positive test results.
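The claim in Section \ref{covid} that a 19\% daily testing probability preserves the median of 3 days and the interquartile range of 1 to 6 can be checked numerically. The sketch below assumes days are counted from symptom onset (so the day of onset is day 0), which is the convention under which the quantiles match.

```python
def quantile_day(p_test, q):
    """Smallest day k (counting from symptom onset, day 0) such that
    P(tested by day k) = 1 - (1 - p_test)^(k + 1) >= q, i.e. the
    q-quantile of the geometric test-delay distribution."""
    day, cdf = 0, p_test
    while cdf < q:
        day += 1
        cdf = 1.0 - (1.0 - p_test) ** (day + 1)
    return day
```

With $p_{\text{test}} = 0.19$, the lower quartile, median, and upper quartile of the geometric delay come out to days 1, 3, and 6, matching the cited distribution.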
If a user has tested positive, we assume that this user will report recovery. In the contact tracing app, we define a degree $k$ contact to be a sequence of contacts $c_1,\ldots,c_k$ that satisfies the following properties: \begin{enumerate} \item The individuals $p_1,\ldots,p_{k+1}$ involved in the contacts form a chain: $c_i=(p_i,p_{i+1},t_i,d_i)$. \item $p_1$ has reported a positive test: $t_1 \le T_{p_1} \le t_1+10$. Through simulation of 1800 outbreaks without interventions, 95\% of all symptomatic individuals took a test within 8 days of symptom onset. Since infectiousness becomes significant 2 days before symptom onset, the contacts from the 10 days before the test must be recalled. \item $p_i,p_{i+1}$ have not reported recovery and thus can transmit or catch the virus: $R_{p_i},R_{p_{i+1}}>t_i$ for each $i$. \item The serial interval is between 1 and 10 days: $t_i+1\le t_{i+1}\le t_i+10$. The serial interval is defined to be the duration from the exposure time of the infector ($t_i$) to the exposure time of the infected ($t_{i+1}$). Through simulation of the model 1800 times, 94\% of all serial intervals are within 10 days. \end{enumerate} The weight of this contact chain is the product of the estimated transmission probabilities of each contact: $$\prod_{i=1}^k (1-(1-p')^{d_i}).$$ First and second degree contacts are computed using contact data from previous days directly after $p_1$ reports a positive test. Higher degree contacts are computed recursively. For each person $v_i$ in the model, we keep track of a matrix $M_i$ where $M_{i,l,t}$ is the total sum of the weights of contact chains of length $l$ that affect person $v_i$ at time $t$. Using the contacts, the app can calculate $M_i$ recursively.
On day $t$, if $v_i$ has contacts with $v_{a_1},\ldots,v_{a_k}$ with durations $d_1,\ldots,d_k$ respectively, where $v_i,v_{a_1},\ldots,v_{a_k}$ have not reported recovery by day $t$, then: $$M_{i,l,t}=\sum_{s=1}^k \left( \left(1-(1-p')^{d_s}\right)\sum_{r=1}^{10} M_{a_s,l-1,t-r} \right).$$ For each of $v_i$'s contacts, we take the sum of all contact chains of length $l-1$ and multiply by the estimated transmission probability $1-(1-p')^{d_s}$ to obtain the sum of all contact chains of length $l$. A person $x$ is a degree $k$ contact on day $t$ if the sum of the weights of all their contact chains of degree $\le k$ is above the contact cutoff $C$: $$\sum_{i=1}^k\sum_{j=1}^{10} M_{x,i,t-j} \ge C.$$ The contact cutoff value $C$ only affects the quarantine rules, not the simulation of disease transmission. Essentially, direct contacts that are calculated to be infected with probability at least $C$ are quarantined. The value of $C$ is set to $10\%$ by default. By default, first degree contacts are quarantined.

The pre-exposure notification system, as implemented in NOVID \cite{novid}, acts like a ``social radar'' telling users how close they are to COVID-19 by showing the number of positive cases at each distance in their social network. App users can see the number of neighbors at distance $d$ that are COVID-19 positive. We assume that first, second, and third degree neighbors will take precautionary measures. For second degree contacts, we assume a 75\% reduction in contacts. For third degree contacts, we assume a 50\% reduction in contacts.

\section{Results}\label{results} At the beginning of the simulation, exactly one individual is exposed while the rest are susceptible. The simulation runs for 120 days. There are 180 individuals, and for each simulation we run 1800 trials where each individual starts as the seed infection 10 times. We simulate the cases when $0\%$, $30\%$, $50\%$, $70\%$, and $95\%$ of the population use the app.
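The recursive update for $M$ in Section \ref{app} can be sketched as follows. This is a minimal sketch with hypothetical data structures, not the paper's implementation: $M$ is stored as a dictionary per person keyed by (degree, day) rather than a dense matrix, degree-0 entries are assumed to be seeded when a person reports a positive test, and the caller supplies both directions of each undirected contact.

```python
# p' = p * average ID over the 6 most infectious days, as in the text
P_PRIME = 0.10 * 0.11

def update_chain_sums(M, contacts, day, p_prime=P_PRIME, max_degree=3):
    """One day of the recursive bookkeeping
    M[i][(l, t)] = sum over contacts (i, s, d) of
    (1 - (1 - p')^d) * sum_{r=1..10} M[s][(l-1, t-r)].
    `contacts` is a list of directed tuples (i, s, d): person i met
    person s for d twenty-second units."""
    new = {}
    for (i, s, d) in contacts:
        trans = 1.0 - (1.0 - p_prime) ** d  # estimated transmission prob.
        for l in range(1, max_degree + 1):
            # weight of chains of length l-1 that reached s in the last 10 days
            prior = sum(M.get(s, {}).get((l - 1, day - r), 0.0)
                        for r in range(1, 11))
            if prior:
                new[(i, l)] = new.get((i, l), 0.0) + trans * prior
    for (i, l), w in new.items():
        M.setdefault(i, {})[(l, day)] = M[i].get((l, day), 0.0) + w
    return M
```

For a single contact with a freshly reported positive case, the degree-1 entry reduces to the estimated transmission probability of that contact, as expected.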
We simulate the following 5 scenarios: \begin{enumerate} \item Quarantine of first degree contacts \item Quarantine of first degree contacts with followup testing of second and third degree contacts \item Pre-exposure notification system \item Pre-exposure notification system with periodic testing \item Scenario 4 on the extended graph \end{enumerate} We measure each contact tracing and COVID-19 testing configuration by three metrics: total infections, total days spent in quarantine, and total tests used. In the tables below, the App Proportion column shows the proportion of individuals in the model that use the app. The Asymptomatic column shows the proportion of individuals that are asymptomatic. The Infected column shows the average number of individuals infected after 120 days. Quarantines are split into 3 categories: False, True, and Tested. False quarantines are quarantined individuals who do not have COVID-19. Individuals in the True quarantine category are infected with COVID-19 but have not received a positive test result. Tested quarantines are those who have tested positive. We distinguish the Tested quarantines because these infections have been confirmed by test, while True and False quarantines are predicted by the quarantine rules. Each of the quarantine columns shows the total number of days spent in quarantine across all individuals over 120 days. The Tests Used column shows the number of tests used in the simulation after 120 days. In scenario 1, the most basic strategy of quarantining first degree contacts is not effective at high asymptomatic levels, suggesting that a more comprehensive testing strategy is required. Scenario 2 shows that testing second and third degree contacts greatly increases the effectiveness of the app. The pre-exposure notification system, which warns second and third degree contacts to take extra precautionary measures, can reduce infections by an additional 40\% while also reducing the number of quarantines by up to 50\%.
In Scenario 4, it is shown that periodic testing without contact tracing reduces infections by less than 3\%. Each app user decreases the economic cost of COVID-19 by \$2,841 at 50\% app usage and \$4,185 at 70\% app usage. \subsection{Scenario 1 --- Quarantine of first degree contacts (Traditional Strategy)}\label{S1} This is the most basic strategy where only first degree contacts are quarantined. Every day, symptomatic individuals have a 19\% chance of getting a test as discussed in Section \ref{covid}. The simulation is performed on the original network of 180 students. \begin{table}[H] \caption{Quarantine direct contacts, no followup testing} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&106.05 & 0 & 0 & 291.87 & 53.57 \\ 30\%&40\%&101.20 & 78.78 & 39.73 & 278.42 & 50.97\\ 50\%&40\%&97.15 & 195.71 & 94.06 & 265.96 & 48.97\\ 70\%&40\%& 87.48 & 329.17 & 149.91 & 242.71 & 44.41 \\ 95\%&40\%& 70.25 & 477.30 & 200.56 & 194.77 & 35.39\\ \hline 0\%&80\%& 110.43 & 0 & 0 & 101.43 & 18.43 \\ 30\%&80\%&107.99 & 33.53 & 19.23 & 98.83 & 18.09\\ 50\%&80\%&108.12 & 88.47 & 52.20 & 97.12 & 17.96\\ 70\%&80\%& 101.82 & 159.04 & 91.53 & 94.11 & 17.04 \\ 95\%&80\%& 99.73 & 282.78 & 156.70 & 90.58 & 16.76\\ \hline \end{tabular} \label{S1R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S1Linegraph120.pdf}\label{S1R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S1Infections120.pdf}\label{S1R2Inf}} \caption{Quarantine direct contacts, no followup testing} \end{figure} Table \ref{S1R2} shows that the effectiveness of the app depends on the percentage of people who use the app.
At 40\% asymptomatic ratio, the contact tracing app reduces infections by 8.4\% at 50\% app usage and 17.5\% at 70\% app usage. The effectiveness of the app roughly doubles when adding the extra 20\% of app users. This shows that it is very important to have a majority of app users. A limitation of this strategy is that it is ineffective at high asymptomatic ratios. At 80\% asymptomatic ratio, the contact tracing app reduces infections by less than 10\% in all cases. This is because COVID-19 testing in this scenario relies on symptomatic individuals, so the effectiveness of the app is greatly reduced at high asymptomatic levels. Less than 20 tests are used while over 100 students are infected, which indicates that the majority of infections are undetected. The results in this scenario suggest that more comprehensive testing strategies are needed, especially when large proportions of the population are asymptomatic. The cost of using the app is that quarantines increase as more people use the app. In all cases, the number of false quarantines is approximately double the number of true quarantines. The time of peak infections is similar at each app usage level. In Figure \ref{S1R2Line}, infections rise dramatically, peaking around 30 days after the initial infection, and then fall as herd immunity is reached. Because quarantine rules require a history of contacts to be recalled, the initial 10 days show no difference between the graphs of each app usage level. At 60\% and 80\% asymptomatic ratio, the app is unable to reduce infections significantly. \subsection{Scenario 2 --- Testing of second and third degree contacts}\label{S2} In this scenario, first, second, and third degree contacts are tested every 3 days. All other parameters are the same as in scenario 1. The simulation is performed on the original network of 180 students.
\begin{table}[h] \caption{Quarantine direct contacts, test second, third degree contacts} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%& 104.61 & 0 & 0 & 286.37 & 52.68 \\ 30\%&40\%&98.53 & 113.23 & 32.84 & 325.58 & 107.60\\ 50\%&40\%&83.02 & 259.09 & 73.70 & 340.81 & 234.49\\ 70\%&40\%& 63.33 & 395.14 & 101.05 & 315.52 & 470.46 \\ 95\%&40\%& 27.87 & 314.2 & 77.42 & 172.38 & 670.94\\ \hline 0\%&80\%& 111.52 & 0 & 0 & 100.72 & 18.71 \\ 30\%&80\%&106.79 & 88.73 & 23.59 & 156.34 & 62.29\\ 50\%&80\%&95.37 & 253.73 & 71.30 & 250.42 & 188.05\\ 70\%&80\%& 78.03 & 418.75 & 110.40 & 304.72 & 400.50 \\ 95\%&80\%& 44.10 & 423.31 & 105.73 & 238.87 & 645.00\\ \hline \end{tabular} \label{S2R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S2Linegraph120.pdf}\label{S2R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S2Infections120.pdf}\label{S2R2Inf}} \caption{Test second, third degree contacts} \end{figure} As in scenario 1, the effectiveness of the app increases greatly as the number of app users increases. In Table \ref{S2R2}, at 40\% asymptomatic ratio, the contact tracing app reduces infections by 20.6\% at 50\% app usage and 39.5\% at 70\% app usage. These reductions are over 2 times greater than in scenario 1. Notably, at 95\% app usage, infections are reduced by 73.4\%, which is almost twice the reduction at 70\% app usage. Compared to scenario 1, the number of true quarantines decreases while the number of tested quarantines increases at 40\% asymptomatic ratio. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 14.5\% at 50\% app usage and 30.0\% at 70\% app usage.
Again, this strategy is significantly more effective than scenario 1. However, the number of quarantines increases significantly in all categories compared to scenario 1. In particular, the number of tested quarantines rises by 224\% at 70\% app usage. This is to be expected since the number of tests used rises significantly, from less than 20 to over 400. Again, the app is significantly more effective at 95\% app usage, with infections being reduced by 60.5\%. Although results for 40\% and 80\% asymptomatic ratio are similar for app proportions less than 70\%, at 95\% app proportion, the case of 80\% asymptomatic yields 58\% more infections. As in scenario 1, the graphs are similar during the first 10 days and then diverge significantly. However, at high app usage, infections peak before 20 days, while no app usage yields peak infections at around 30 days. In conclusion, testing second and third degree contacts greatly increases the effectiveness of the app, especially at higher asymptomatic proportions. Thus, the increase in test usage and quarantines is justified. \subsection{Scenario 3 --- Pre-exposure notification system}\label{S6} In this scenario, we simulate the pre-exposure notification system. For second degree contacts, we assume a 75\% reduction in contacts. For third degree contacts, we assume a 50\% reduction in contacts. All other parameters are the same as those in scenario 2. The simulation is performed on the original network of 180 students.
\begin{table}[H] \caption{Quarantine direct contacts, pre-exposure notification system} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&106.99 & 0 & 0 & 295.84 & 54.24\\ 30\%&40\%&93.48 & 92.80 & 27.40 & 303.44 & 101.73\\ 50\%&40\%&67.25 & 162.16 & 44.36 & 254.02 & 161.40\\ 70\%&40\%&36.21 & 185.80 & 45.24 & 162.85 & 213.24\\ 95\%&40\%&16.61 & 190.22 & 41.60 & 98.10 & 279.80\\ \hline 0\%&80\%&110.90 & 0 & 0 & 101.08 & 18.69\\ 30\%&80\%&103.19 & 87.01 & 22.46 & 150.39 & 64.83\\ 50\%&80\%&81.68 & 187.45 & 48.33 & 189.58 & 142.75\\ 70\%&80\%&54.06 & 251.14 & 61.42 & 179.84 & 220.68\\ 95\%&80\%&29.99 & 287.72 & 65.62 & 156.12 & 309.38\\ \hline \end{tabular} \label{S6R2} \end{table} As in the previous scenarios, the effectiveness of the app increases as the number of app users increases. In Table \ref{S6R2}, at 40\% asymptomatic ratio, the contact tracing app reduces infections by 37.1\% at 50\% app usage and 66.1\% at 70\% app usage. At 95\% app usage, infections are reduced by 84.5\% which is an additional 40.2\% reduction compared to scenario 2. Compared to scenario 2, the total number of quarantines is reduced by 51.4\% at 70\% app usage. There is a 54.7\% reduction in the number of tests used which is caused by the reduced interactions for second and third degree contacts. This strategy is more effective even though the number of true and tested quarantines are reduced by 55.2\% and 48.4\% respectively. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 26.3\% at 50\% app usage and 51.3\% at 70\% app usage. Again, this strategy is significantly more effective than scenarios 1 and 2. At 95\% app usage, the 80\% asymptomatic case yields 80.6\% more infections than the 40\% case.
This is an even more pronounced gap than in scenario 2. \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S6Linegraph120.pdf}\label{S6R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S6Infections120.pdf}\label{S6R2Inf}} \caption{Quarantine direct contacts, pre-exposure notification system} \end{figure} \subsection{Scenario 4 --- Pre-exposure notification system with periodic testing}\label{S7} In scenario 4, we simulate the pre-exposure notification system with periodic testing every 14 days. As in scenario 3, second and third degree contacts take extra precautionary measures. Individuals in the model are tested every 14 days. The simulation is performed on the original network of 180 students. At 40\% asymptomatic ratio, the contact tracing app reduces infections by 45.2\% at 50\% app usage and 72.1\% at 70\% app usage. At 95\% app usage, infections are reduced by 86.8\%. Scenarios 3 and 4 show similar results at higher app usage. In this scenario, the app is effective even at 30\% app usage. The 30\% app usage case reduces infections by 18.1\%, which is much more effective than in previous scenarios. Additionally, results are similar at all asymptomatic ratios. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 37.8\% at 50\% app usage and 66.4\% at 70\% app usage. Thus, periodic testing is especially helpful when larger proportions of individuals are asymptomatic. In Figure \ref{S7R2Inf}, the number of infected but not tested individuals drops sharply during days when the population is tested. The graph displays bumps that come from testing the new second and third degree contacts every 3 days. Since the graphs are averaged over 1800 trials, the jaggedness is significant rather than random noise. It is likely caused by many second and third degree contacts being detected at the same time, so that the followup testing becomes synchronized.
Surprisingly, periodic testing at 0\% app usage reduces infections from 106.05 in scenario 1 to 102.83, only a 3.0\% decrease. This is an insignificant change given the population of 180 students. Thus, periodic testing without contact tracing is not effective. \begin{table}[H] \caption{Pre-exposure notification system with periodic testing} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&102.83 & 0 & 0 & 465.62 & 1021.34\\ 30\%&40\%&84.05 & 94.78 & 27.03 & 403.80 & 1125.57\\ 50\%&40\%&56.32 & 153.39 & 37.74 & 288.68 & 1286.85\\ 70\%&40\%&28.73 & 166.27 & 38.08 & 159.21 & 1440.73\\ 95\%&40\%&13.61 & 173.85 & 35.31 & 85.44 & 1571.49\\ \hline 0\%&80\%&103.95 & 0 & 0 & 375.20 & 1044.10\\ 30\%&80\%&90.64 & 104.25 & 28.62 & 358.05 & 1117.67\\ 50\%&80\%&64.69 & 174.51 & 44.14 & 281.18 & 1266.00\\ 70\%&80\%&34.95 & 195.32 & 45.23 & 172.00 & 1426.44\\ 95\%&80\%&17.66 & 208.92 & 43.91 & 105.50 & 1562.05\\ \hline \end{tabular} \label{S7R2} \end{table} \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S7Linegraph120.pdf}\label{S7R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S7Infections120.pdf}\label{S7R2Inf}} \caption{Pre-exposure notification system with periodic testing} \end{figure} \subsection{Graph Extension}\label{S8} In this section, we simulate the strategy in scenario 4 on an extended version of the graph. By the process described in Section \ref{network}, the original weighted temporal graph is extended to 5000 individuals. This population size is closer to larger communities such as schools or universities.
\begin{table}[h] \caption{Graph Extension} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{ 2}{*}{Infected} & \multicolumn{3}{c|}{Quarantine} & \multirow{ 2}{*}{Tests Used}\\ \cline{1-2} \cline{4-6} App Proportion & Asymptomatic &&False&True&Tested&\\ \hline 0\%&40\%&2429.94 & 0 & 0 & 10991.94 & 31001.67\\ 30\%&40\%&1941.70 & 5858.34 & 869.33 & 10054.09 & 36557.43\\ 50\%&40\%&1753.64 & 15410.51 & 1686.31 & 9847.13 & 45625.01\\ 70\%&40\%&1301.11 & 26188.07 & 2111.04 & 7902.39 & 62003.10 \\ 95\%&40\%&501.21 & 22809.78 & 1438.89 & 3319.28 & 81969.96\\ \hline 0\%&80\%&2433.38 & 0 & 0 & 8759.07 & 31576.44\\ 30\%&80\%&2253.48 & 6640.62 & 1017.38 & 10021.54 & 36332.13\\ 50\%&80\%&1888.32 & 16593.59 & 1851.73 & 9563.09 & 45640.46\\ 70\%&80\%&1341.07 & 26338.16 & 2191.90 & 7671.17 & 61131.04\\ 95\%&80\%&591.61 & 26708.16 & 1723.15 & 3861.01 & 86490.97\\ \hline \end{tabular} \label{S8R2} \end{table} At 40\% asymptomatic ratio, the contact tracing app reduces infections by 27.8\% at 50\% app usage and 46.5\% at 70\% app usage. This is less effective than scenario 4. At 95\% app usage, infections are reduced by 79.4\%, which is similar to scenario 4. Since the average degree is preserved on the network of 5000 students, the similar results are not surprising. As in scenario 4, the results are similar for all asymptomatic ratios. At 80\% asymptomatic ratio, the contact tracing app reduces infections by 22.4\% at 50\% app usage and 44.9\% at 70\% app usage. At 95\% app usage, infections are reduced by 75.7\%. These numbers are very similar to the corresponding values at 40\% asymptomatic ratio. As in scenario 4, Figure \ref{S8R2Inf} shows noticeable drops in infections on days when the population is tested. Interestingly, at 95\% app usage, the number of infected individuals never rises to more than 100 people, or 2\% of the population. The graph does not exhibit an obvious peak but rather a sustained amount of infections.
This is a reflection of the greater population size. \begin{figure}[H] \centering \subfloat[Total infected.]{\includegraphics[width=8.5cm]{S8Linegraph120.pdf}\label{S8R2Line}} \subfloat[Total infected but not recovered or tested.]{\includegraphics[width=8.5cm]{S8Infections120.pdf}\label{S8R2Inf}} \caption{Graph Extension} \end{figure} \subsection{Economic Impact}\label{econ} In this section, we estimate the economic impact of COVID-19 and preventative measures. According to a report by the Kaiser Family Foundation, most COVID-19 tests cost between \$100 and \$200, with the median cost being \$127 \cite{TestCost}. Universities are charging students up to \$1000 to quarantine for 2 weeks \cite{PurdueQ,SyracuseQ}, around \$71 a day. For students, we estimate a 20 hour work week with an hourly wage of \$20 to put daily income at \$57. The average tuition cost at universities can be as high as \$41,411 \cite{Tuition}. Assuming 180 days in a school year and that quarantined students miss classes and coursework, the average tuition per day is \$230. We roughly estimate the social cost of a single day in quarantine to be \$358, which is the sum of the previous numbers. The economic cost of a single COVID-19 infection, or the societal willingness-to-pay threshold for avoiding one COVID-19 infection, is estimated to be \$8,500 \cite{screenTesting}. We calculate the total economic cost of COVID-19 for scenarios 3 and 4. The decrease in economic cost is computed in comparison to the total economic cost at 0\% app usage in scenario 1. The average economic impact per app user is the decrease in the total economic cost of COVID-19 divided by the number of app users.
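As a check on these estimates, the per-user benefit can be reproduced from the scenario tables with a short calculation (a sketch; the constant and function names are ours, the values are from the text and the tables):

```python
# Assumed cost constants from the text
COST_PER_INFECTION = 8500       # willingness-to-pay per avoided infection
COST_PER_QUARANTINE_DAY = 358   # $71 housing + $57 lost income + $230 tuition
COST_PER_TEST = 127             # median COVID-19 test cost

def total_cost(infected, quarantine_days, tests):
    """Total economic cost of a simulated 120-day outbreak."""
    return (infected * COST_PER_INFECTION
            + quarantine_days * COST_PER_QUARANTINE_DAY
            + tests * COST_PER_TEST)

def benefit_per_user(baseline_cost, scenario_cost, app_users):
    """Decrease in total cost divided by the number of app users."""
    return (baseline_cost - scenario_cost) / app_users

# Scenario 3 at 50% app usage, 40% asymptomatic, versus the
# scenario 1 baseline at 0% app usage (values from the tables):
baseline = total_cost(106.05, 291.87, 53.57)
scenario3 = total_cost(67.25, 162.16 + 44.36 + 254.02, 161.40)
print(round(benefit_per_user(baseline, scenario3, 0.5 * 180)))  # → 2841
```

This recovers the \$2,841 per-user figure reported for scenario 3 at 50\% app usage, where quarantine days are summed over the False, True, and Tested categories.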
\begin{table}[h] \caption{Average Economic Impact Per App User (Scenario 3)} \centering \begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{2}{*}{Impact Per User}\\ \cline{1-2} App Proportion & Asymptomatic &\\ \hline 30\% & 40\% & \$991\\ 50\% & 40\% & \$2,841\\ 70\% & 40\% & \$4,260\\ 95\% & 40\% & \$4,198\\ 30\% & 80\% & -\$19\\ 50\% & 80\% & \$1,251\\ 70\% & 80\% & \$2,678\\ 95\% & 80\% & \$2,928\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Economic Value of Contact Tracing App Per User (Scenario 4)} \centering \begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Situation} & \multirow{2}{*}{Impact Per User}\\ \cline{1-2} App Proportion & Asymptomatic &\\ \hline 30\% & 40\% & -\$607\\ 50\% & 40\% & \$2,208\\ 70\% & 40\% & \$3,614\\ 95\% & 40\% & \$3,461\\ 30\% & 80\% & -\$2,052\\ 50\% & 80\% & \$974\\ 70\% & 80\% & \$2,788\\ 95\% & 80\% & \$2,927\\ \hline \end{tabular} \end{table} The economic benefit per app user can reach as high as \$4,260 in scenario 3. In the majority of cases, the strategy in scenario 3 yields a higher benefit per app user. This shows that testing the entire population every 2 weeks is not cost effective. The economic impact per user is actually negative at low app usage in scenario 4. This is because the periodic testing creates significant cost while, as mentioned in Section \ref{S7}, being ineffective without sufficient contact tracing. \section{Discussion and Conclusion}\label{conclusion} Given the significant loss of life at risk, finding effective measures to prevent the spread of COVID-19 is a top priority. Challenges in COVID-19 prevention include significant pre-symptomatic transmission, high proportions of asymptomatic cases, and inaccurate tests during the pre-symptomatic stage. This work focuses on solving these challenges by presenting effective strategies in digital contact tracing and testing.
The parameters in this model can be easily changed as more precise values of COVID-19 parameters are measured. As more data on human social patterns are collected, larger social networks can be constructed leading to more accurate predictions in the model. Additionally, as more accurate COVID-19 tests are developed, model results and optimal strategies could change. Finally, this model only considers infections based on close contacts. Although COVID-19 mainly spreads through close contacts \cite{CDCFAQ}, the significance of spread through indirect contacts is unclear and could be investigated in future models. In conclusion, by simulating a variety of tracing and testing strategies, we found that digital contact tracing can be very effective when combined with testing. In scenario 1, the most basic strategy of quarantining first degree contacts is not effective at high asymptomatic levels, suggesting that a more comprehensive testing strategy is required. Testing second and third degree contacts greatly increases the effectiveness of the app. The pre-exposure notification system, which warns second and third degree contacts to take extra precautionary measures, can reduce infections by an additional 40\% while also reducing the number of quarantines by up to 50\%. While periodic testing with contact tracing is effective, periodic testing without contact tracing reduces infections by less than 3\%. We find the results on the extended network of 5000 students to be similar. \section{Acknowledgements} I would like to thank my mentor Dr. Jesse Geneson for his invaluable advice and guidance throughout the project. This would not be possible without him. Thank you to Dr. Tanya Khovanova and Alexander Vitanov for reviewing the paper draft. This work was completed under the MIT PRIMES-USA program. \section{Appendix A: App Usage} In the following, we present figures for the number of infections as a function of the app usage. 
Simulations are run with the proportion of app users ranging from 0\% to 100\% in increments of 5\%. The app becomes much more effective as more people use it. As shown in Section \ref{S7}, scenario 4 shows very close results for all asymptomatic levels. Scenarios 3 and 4 show that the effectiveness of the app begins to level off after more than 80\% of the population uses the app. \begin{figure}[H] \renewcommand\thefigure{A.1} \centering \subfloat[Traditional App Configuration]{\includegraphics[width=8.5cm]{S1AppvsInf.png}\label{S1B}} \subfloat[Testing second and third degree contacts]{\includegraphics[width=8.5cm]{S2AppvsInf.png}\label{S2B}} \newline \subfloat[Pre-exposure notification system]{\includegraphics[width=8.5cm]{S6AppvsInf.png}\label{S3B}} \subfloat[Periodic testing]{\includegraphics[width=8.5cm]{S7AppvsInf.png}\label{S4B}} \caption{App Usage vs Infections} \end{figure} \bibliographystyle{plain}
\section{Abstract object semantics} \label{sec:abstr-object-semant} The rules in \reffig{fig:surrey-opsem} provide a semantics for read, write and update operations for component programs within an executing context and can be used to model clients and libraries under RC11 RAR. These rules do not cover the behaviour of abstract objects, which we now consider. There have been many different proposals for specifying and verifying concurrent objects in weak memory~\cite{DBLP:conf/pldi/Kokologiannakis19,DBLP:conf/popl/BattyDG13,DBLP:journals/pacmpl/EmmiE19,DBLP:conf/esop/KrishnaEEJ20,DBLP:journals/pacmpl/RaadDRLV19,ifm18,DongolJRA18}, since there are several different objectives that must be addressed. These objectives are delicately balanced in linearizability~\cite{HeWi90}, the most widely used consistency condition for concurrent objects. Namely, linearizability ensures: \begin{enumerate*} \item The abstract specification is explainable with respect to a \emph{sequential} specification. \item Correctness is \emph{compositional}, i.e., any concrete execution of a system comprising two linearizable objects is itself linearizable. \item Correctness ensures \emph{contextual (aka observational) refinement}, i.e., the use of a linearizable implementation within a client program in place of its abstract specification does not induce any new behaviours in the client program. \end{enumerate*} There is however an inherent cost to linearizability stemming from the fact that the effect of each method call must take place \emph{before} the method call returns. In the context of weak memory, this restriction induces additional synchronisation that may not necessarily be required for correctness~\cite{DBLP:journals/cacm/SewellSONM10}.
Therefore, over the years, several types of relaxations to the above requirements have been proposed~\cite{DBLP:conf/pldi/Kokologiannakis19,DBLP:conf/popl/BattyDG13,DBLP:journals/pacmpl/EmmiE19,DBLP:conf/esop/KrishnaEEJ20,DBLP:journals/pacmpl/RaadDRLV19,ifm18,DongolJRA18}. General data structures present many different design choices at the abstract level~\cite{DBLP:journals/pacmpl/RaadDRLV19}, but discussing these now detracts from our main contribution, i.e., the integration and verification of clients and libraries in a weak memory model. Therefore, we restrict our attention to an abstract lock object, which is sufficient to highlight the main ideas. Locks have a clear ordering semantics (each new lock ${\it acquire}$ and lock ${\it release}$ operation must have a larger timestamp than all other existing operations) and synchronisation requirements (there must be a release-acquire synchronisation from the lock \emph{{\it release}} to the lock \emph{{\it acquire}}). To enable proofs of contextual refinement (see \refsec{sec:cont-refin}), we must ensure corresponding method calls return the same value at the abstract and concrete levels. To this end, we introduce a special variable $rval$ to each local state that stores the value that each method call returns. \begin{example}[Abstract lock] \label{ex:abs-lock} Consider the specification of a lock with methods {\tt Acquire} and {\tt Release}. Each method call of the lock is indexed by a subscript to uniquely identify the method call. For the lock, the subscript is a counter indicating how many lock operations have been executed and is used in the example proof in \refsec{sec:example-verification}. \[ \inference[{\sc Acquire}] {a = l.{\it acquire}_n \quad ls' = ls[rval := {\it true}] } {({\tt l.Acquire()}, ls) \trans a ({\it true}, ls')} \] \[ \inference[{\sc Release}] {a = l.{\it release}_n \quad ls' = ls[rval := \bot]} {({\tt l.Release()}, ls) \trans a (\bot, ls')} \] Locks, by default, are synchronising.
That is, in the memory semantics, a (successful) \emph{{\it acquire}} requires the operation to synchronise with the most recent lock \emph{{\it release}} (in a manner consistent with release-acquire semantics), so that any writes that are happens-before ordered before the \emph{{\it release}} are visible to the thread that acquires the lock. The initial state of an abstract lock $l$, $\gamma_\mathbf{Init}$, is given by: \begin{align*} \gamma_\mathbf{Init}.\mathtt{ops} & = \{(l.init_0,0)\} & \gamma_\mathbf{Init}.{\tt tview}_t(l) & = (l.init_0,0) \\ \gamma_\mathbf{Init}.\mathtt{cvd} & = \emptyset \end{align*} We also obtain the rules below, where we assume $\gamma$ is the state of the lock and $\beta$ is the state of the client. \begin{figure*}[t] \centering \small $ \inference[{\sc Acquire}] { a = l.{\it acquire}_n \qquad b = l.{\it acquire}_n(t) \qquad (w, q) \in \gamma.\mathtt{ops} \qquad w \in \{l.init_0, l.{\it release}_{n-1}\} \\ q = {\it maxTS}(l,\gamma) \quad q < q' \\ \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(b, q')\} \qquad {\tt mview}' = {\tt tview}' \cup {\tt ctview}' \qquad \mathtt{cvd}' = \gamma.\mathtt{cvd} \cup \{(w,q)\} \\ {\tt tview}' = \gamma.{\tt tview}_t[l \ensuremath{:=} (b, q')]\otimes\gamma.{\tt mview}_{(w,q)} \quad {\tt ctview}' = \beta.{\tt tview}_t \otimes\gamma.{\tt mview}_{(w,q)} \\ } {\gamma, \beta\ \strans{a}_{t}\ \gamma\left[ \begin{array}[c]{@{}l@{}} \mathtt{ops} := \mathtt{ops}', {\tt tview}_t := {\tt tview}', \\ {\tt mview}_{(b, q')} := {\tt mview}', \mathtt{cvd} := \mathtt{cvd}' \end{array}\right] , \beta[{\tt tview}_t \ensuremath{:=} {\tt ctview}']} $ \bigskip $ \inference[{\sc Release}] { a = l.{\it release}_n \qquad w = l.{\it acquire}_{n-1}(t) \qquad (w, q) \in \gamma.\mathtt{ops} \qquad q = {\it maxTS}(l,\gamma) \qquad q < q' \\ \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, q')\} \qquad {\tt tview}' = \gamma.{\tt tview}_t[l \ensuremath{:=} (a, q')] \qquad {\tt mview}' = {\tt tview}' \cup \beta.{\tt tview}_t } {\gamma, \beta\ 
\strans{a}_{t}\ \gamma\left[ \begin{array}[c]{@{}l@{}} \mathtt{ops} := \mathtt{ops}', {\tt tview}_t := {\tt tview}', {\tt mview}_{(a, q')} := {\tt mview}' \end{array}\right] , \beta} $ \caption{Operational semantics for lock acquire and release } \label{fig:acq-rel-sem} \end{figure*} To record the thread that currently owns the lock, we derive a new action, $b$, from the action $a$ of the program semantics. Action $(w, q)$ represents the method that is observed by the ${\it acquire}$ method, which must be an operation in $\gamma.\mathtt{ops}$ such that $q$ has the maximum timestamp for $l$ (i.e., $q = {\it maxTS}(l,\gamma)$). The new timestamp $q'$ must be larger than $q$. We create a new component state $\gamma'$ from $\gamma$ by \begin{itemize}[leftmargin=*] \item inserting $(b, q')$ into $\gamma.\mathtt{ops}$; \item updating ${\tt tview}_t$ to ${\tt tview}'$, where ${\tt tview}'$ synchronises with the previous thread view in $\gamma$ to include information from the modification view of $(w, q)$, and updates $t$'s view of $l$ to include the new operation $(b, q')$; \item updating the contextual thread view for $t$ to ${\tt ctview}'$, where ${\tt ctview}'$ synchronises with the previous thread view in the context state $\beta$ to include information from the modification view of $(w, q)$; and \item updating the modification view for the new operation $(b, q')$ to ${\tt mview}'$, where ${\tt mview}'$ contains the view of $t$. \end{itemize} Finally, the context state $\beta'$ updates the thread view of $t$ to ${\tt ctview}'$ since synchronisation with a release may cause the view to be updated. A lock release simply introduces a new operation with a maximal timestamp, provided that the thread executing the release currently holds the lock. \end{example} \section{Appendix} \begin{theorem} If $R$ is a forward simulation between $AO$ and $CO$, then for any client that only synchronises through $AO$ (and $CO$) we have $C[AO] \sqsubseteq C[CO]$.
\end{theorem} \begin{proof}[Proof (sketch)] Given a trace of $C[CO]$, it is possible to construct a corresponding trace of $C[AO]$ using $R$. Correspondence between the initial states of $C[CO]$ and $C[AO]$ is trivial by the initialisation condition in \refdef{def:fsim}. For any purely local step of the client of $C[CO]$, the same step is available in $C[AO]$. For any read of a global variable in $C[CO]$, the same read is available in $C[AO]$ by the client observation property in \refdef{def:fsim}, and these update the local state in the same way. Moreover, since the client does not synchronise outside $CO$, the library states of $\Omega$ and $\Pi$ are equivalent. A similar argument applies to client writes. For each library transition, by the stuttering and non-stuttering step rules in \refdef{def:fsim}, we have that $R$ holds in the post state (where the abstract may not have taken a step). Importantly, by the client observation we have the conditions required for \refdef{def:cont-refin-1}. Then, after removing stuttering, one obtains the required trace refinement relationship (\refdef{def:prog-ref}). Finally, since we chose the trace of $C[CO]$ arbitrarily, we obtain program refinement as defined in \refdef{def:prog-ref}. \end{proof} \subsection{Sequence Lock} \label{sec:sequence-lock} The first example of our contextual refinement is the refinement of a sequence lock. An implementation of the concrete sequence lock is given in \reffig{seqlock_imp}. The lock operates over one shared variable ($glb$). If {\bf Acquire} is invoked, it waits until it sees an even value for $glb$ and then tries to increase the value of $glb$ by one using a compare and swap (\textbf{CAS}) operation. If the \textbf{CAS} is successful, then the lock is acquired, the outer loop is terminated and the thread can enter its critical section.
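For intuition, this acquire protocol can be sketched in executable form. The Python sketch below uses a mutex-protected helper in place of the hardware \textbf{CAS}; it models only the control flow of \reffig{seqlock_imp}, not the weak-memory semantics or the verified implementation:

```python
import threading

class SeqLock:
    """Sketch of a sequence lock: even glb means unlocked, odd means locked."""

    def __init__(self):
        self.glb = 0
        self._atomic = threading.Lock()  # stands in for an atomic CAS instruction

    def _cas(self, expected, new):
        """Compare-and-swap on glb; models the release-acquire update."""
        with self._atomic:
            if self.glb == expected:
                self.glb = new
                return True
            return False  # a failed CAS behaves like a relaxed read

    def acquire(self):
        while True:
            v = self.glb  # loop until an even (unlocked) value is observed
            if v % 2 == 0 and self._cas(v, v + 1):
                return    # lock acquired; caller may enter its critical section

    def release(self):
        self.glb += 1     # releasing write: glb becomes even again
```

Only the lock holder writes $glb$ in `release`, so the plain increment there mirrors the single releasing write of the implementation.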
Once the thread exits its critical section it should release the lock by invoking {\bf Release}, which will increase the value of $glb$ by one, making the last value written to $glb$ an even number again. We have implemented the $\textsf{\textbf{CAS}}(x,u,v)$ operation using the semantics given in \reffig{fig:surrey-opsem}, where a successful operation (i.e., the operation observes a write with value $u$) is modelled by a release-acquire update and an unsuccessful one (i.e., the operation does not observe a write with value $u$) by a relaxed read. It is worth noting that a successful $\textsf{\textbf{CAS}}$ operation is both releasing and acquiring, meaning that if it observes a releasing write written by the release operation of the previous lock holder, it synchronises the acquiring thread with the thread previously holding the lock. The operation could become more relaxed if the semantics provided an acquiring-only update operation. Following the definition of forward simulation (\refdef{def:fsim}), we provide a simulation relation between the abstract lock object and the given concrete implementation of the sequence lock. For the client observation relation we have: \begin{equation} \begin{split} \forall t, x~. {\tt tst} (\gamma_C.{\tt tview}_t(x)) \ge {\tt tst} (\gamma_A.{\tt tview}_t(x)) \land \gamma_C.\mathtt{cvd} = \gamma_A.\mathtt{cvd} \wedge \gamma_C.\mathtt{ops} = \gamma_A.\mathtt{ops} \end{split} \label{seqlock-forwardsim_client} \end{equation} \noindent where $\gamma_A$ and $\gamma_C$ represent abstract and concrete client states, respectively. We define $\lhd$ to be the domain restriction operator where $S \lhd R = \{(x, y) \in R \mid x \in S\}$, and use it to give the following relation between abstract and concrete library states, where $glb$ and $l$ are variables: \begin{equation} \begin{split} &\forall w_c . ~~ w_c\in\beta.\mathtt{Obs}(t,glb)~ \implies (\exists w_a .
{\it wrval}(w_c) = {\it wrval}(w_a) \land w_a\in\alpha.\mathtt{Obs}(t, l) \\ &\land ({\it GVar}_C \lhd \beta.{\tt mview}_{w_c}) = ({\it GVar}_C \lhd \alpha.{\tt mview}_{w_a}) \\ & \land \mathit{isRel}(w_c) = \mathit{isRel}(w_a) \land (w_c \in \beta.\mathtt{cvd} \iff w_a \in \alpha.\mathtt{cvd})) \end{split} \label{seqlock-forwardsim} \end{equation} \noindent where $\beta$ is the concrete library state (the sequence lock state in this case), $\alpha$ is the abstract object state, and $\mathit{isRel}$ is a predicate used to determine whether an acquiring read of the corresponding write induces a synchronisation. We observe that the ${\bf Acquire}$ operation can successfully acquire the lock only if the $\textsf{\textbf{CAS}}$ on line~2 is successful. Therefore, in order to prove the refinement, we need to show that whenever the $\textsf{\textbf{CAS}}$ operation is successful, the abstract object can also successfully acquire the lock, maintaining the simulation relation. Also, the read on line~1 and an unsuccessful $\textsf{\textbf{CAS}}$ are stuttering steps, and we need to show that when those steps are taken the abstract state remains unchanged and the new concrete state preserves the simulation relation. The ${\bf Release}$ operation contains only one releasing write on variable $glb$, which is considered to be a refining step. It is straightforward to show that this operation refines the abstract object release operation. \begin{lemma} Relations (\ref{seqlock-forwardsim_client}) and (\ref{seqlock-forwardsim}) together form a forward simulation for synchronisation-free clients (\refdef{def:fsim}) between the abstract lock object and the sequence lock. \end{lemma} \begin{proof} In Isabelle~\cite{Mech}.
\end{proof} \begin{figure*}[t] \hfill \begin{minipage}[b]{0.45\textwidth} $\textbf{Init:} \ \ glb = 0$ \\[2pt] \begin{minipage}[t]{\textwidth} \small \textbf{Acquire()}: \begin{algorithmic}[1] \small \Statex \textbf{do} \Statex \quad \textbf{do} \State \quad \quad $r \leftarrow^{A} glb$ \Statex \quad \textbf{until} $(even (r))$ \State \quad $loc \gets \textsf{\textbf{CAS}}(glb, r, r+1)$ \Statex \textbf{until} $(loc)$ \end{algorithmic} \end{minipage} \\[5pt] \begin{minipage}[t]{\textwidth} \small \textbf{Release()}: \begin{algorithmic}[1] \small \State $glb :=^{R} r + 2$ \end{algorithmic} \end{minipage} \vspace{-1em} \caption{Implementation of a Sequence Lock} \label{seqlock_imp} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} $\textbf{Init:} \ \ nt = 0, \ \ sn = 0$\\[2pt] \begin{minipage}[t]{\columnwidth} \small \textbf{Acquire()}: \begin{algorithmic}[1] \small \State $my\_ticket \leftarrow \textsf{\textbf{fetch\_and\_inc}}(nt)$ \Statex \textbf{do} \State \quad $serving\_now \leftarrow^{A} sn$ \Statex \textbf{until} $(my\_ticket = serving\_now)$ \end{algorithmic} \end{minipage} \\[5pt] \begin{minipage}[t]{\columnwidth} \small \textbf{Release()}: \begin{algorithmic}[1] \small \State $sn :=^{R} serving\_now + 1$ \end{algorithmic} \end{minipage} \vspace{-1em} \caption{Implementation of a Ticket Lock} \label{ticketlock_imp} \end{minipage} \hfill \end{figure*} \subsection{Ticket Lock} \label{sec:ticket-lock} Our second case study is the refinement of an implementation of a ticket lock (given in Figure \ref{ticketlock_imp}). Unlike the previous example, the ticket lock has two shared variables $nt$ (next ticket) and $sn$ (serving now). Invocation of {\bf Acquire} loads the next available ticket into a local register ($my\_ticket$) and increases the value of $nt$ by one using a fetch-and-increase ($\textsf{\textbf{fetch\_and\_inc}}$) operation. 
It then enters a busy loop and reads $sn$ until it sees its own ticket value in $sn$, at which point it can enter its critical section. Similar to $\textsf{\textbf{CAS}}$, the $\textsf{\textbf{fetch\_and\_inc}}(x)$ operation is defined using the basic operations of the semantics given in \reffig{fig:surrey-opsem}. The operation reads the value of variable $x$, increases it by one and returns the old value. The operation is defined using a release-acquire update and forces a synchronisation if it reads from a releasing write on $x$. In the implementation of the ticket lock, the $\textsf{\textbf{fetch\_and\_inc}}$ operation does not need to be synchronising and could be relaxed if the semantics supported a relaxed update. While defining a relaxed update is straightforward, it falls outside the scope of this work. Lock synchronisation should only happen when the lock is acquired by reading a value of $sn$ that is equal to $my\_ticket$ on line~2. Here, as in the previous example, we provide a simulation relation between the abstract lock object and the concrete implementation. The client observation relation is the same as (\ref{seqlock-forwardsim_client}) given in the previous section. We give the following relation between abstract and concrete ticket lock states: \begin{equation} \begin{split} \forall~wnt~wsn~. &~~(wnt\in \beta.{\tt writes\_on}(nt)\land cls.pc_t\in~\{1,2\}~\\ & \land wsn\in \beta.\mathtt{Obs}(t,sn) \land ~{\it wrval}(wnt)~=~{\it wrval}(wsn))~\implies~ \\ & (\exists~wl~.~wl\in\alpha.\mathtt{Obs}(t,l)~\land ~even~({\it wrval}(wl))~\land wl \notin \alpha.\mathtt{cvd}\\ &\land ~({\it GVar}_C \lhd \beta.{\tt mview}_{wsn}) = ({\it GVar}_C \lhd \alpha.{\tt mview}_{wl}) \land \mathit{isRel}(wsn) = ~\mathit{isRel}(wl)) \end{split} \label{ticketlock-simrel} \end{equation} \noindent where ${\tt writes\_on}(x)$ is the set of writes on variable $x$, $\beta$ is the concrete library state, and $\alpha$ is the abstract object state.
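Under a sequentially consistent execution, the protocol above can be modelled directly. The following Python sketch (class and method names are ours, and a `threading.Lock` merely stands in for the atomicity of the release-acquire update) captures the ticket/serving-now discipline but none of the weak-memory view machinery:

```python
import threading

class TicketLock:
    """Minimal sequentially consistent model of the ticket lock above."""
    def __init__(self):
        self.nt = 0   # next ticket
        self.sn = 0   # serving now
        self._mutex = threading.Lock()  # models atomicity of fetch_and_inc

    def fetch_and_inc(self):
        # reads nt, increments it, and returns the old value in one atomic step
        with self._mutex:
            old = self.nt
            self.nt = old + 1
            return old

    def acquire(self):
        my_ticket = self.fetch_and_inc()
        while self.sn != my_ticket:   # busy loop on serving-now (line 2)
            pass
        return my_ticket

    def release(self):
        self.sn += 1   # models the releasing write sn :=^R serving_now + 1
```

In the weak-memory setting, the acquiring read in the busy loop and the releasing write in `release` are exactly the points at which the simulation relation above must transfer view information.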
If the read on line~2 of the {\bf Acquire} operation reads from a write whose value is equal to the value of $my\_ticket$, then the lock is acquired. Therefore we need to show that if this situation arises, the abstract lock object can also take a step and successfully acquire the lock. We consider the $\textsf{\textbf{fetch\_and\_inc}}$ operation on line~1, and the read on line~2 if it reads a value that is not equal to $my\_ticket$, to be stuttering steps. We prove that each of the stuttering and non-stuttering steps preserves the simulation relation. Similar to the previous example, the {\bf Release} operation consists of only one releasing write to variable $sn$ and it is straightforward to show that this operation refines the abstract release operation. \begin{lemma} Relations (\ref{seqlock-forwardsim_client}) and (\ref{ticketlock-simrel}) together form a forward simulation for synchronisation-free clients (\refdef{def:fsim}) between the abstract lock object and the ticket lock. \end{lemma} \begin{proof} In Isabelle~\cite{Mech}. \end{proof} \section{Client-library composition} Key to our paper is the observation that one can compose clients and objects. \section{Conclusions} \label{sec:conclusion} In this paper, we present a new approach to specifying and verifying abstract objects over weak memory by extending an existing operational semantics for RC11 RAR (which is a fragment of the C11 memory model). We show that our methodology supports two types of verification: (1) proofs of correctness of client programs that \emph{use} abstract libraries and (2) refinement proofs between abstract libraries and their implementations. Moreover, the operational semantics allows one to execute programs in thread order and accommodates weak memory behaviours via a special encoding of the state.
To exploit this operational semantics, we develop an assertion language that describes a thread's observations of client-library states, which is in turn used to verify program invariants and proofs of refinement. The operational semantics, proof rules and example verifications have been mechanised in Isabelle/HOL. There are now several different approaches to program verification that support different aspects of weak memory using pen-and-paper proofs (e.g.,~\cite{DBLP:conf/icalp/LahavV15,DBLP:conf/oopsla/TuronVD14,DBLP:conf/popl/AlglaveC17,doko2017tackling}), model checking (e.g.,~\cite{DBLP:conf/pldi/Kokologiannakis19,DBLP:conf/pldi/AbdullaAAK19}), specialised tools (e.g.,~\cite{DBLP:conf/pldi/TassarottiDV15,DBLP:conf/esop/KrishnaEEJ20,DBLP:conf/esop/SvendsenPDLV18,DBLP:conf/tacas/Summers018}), and generalist theorem provers (e.g.,~\cite{ECOOP20}). These cover a variety of (fragments of) memory models and proceed via exhaustive checking, specialist separation logics, or Hoare-style calculi. The idea that abstract methods should specify synchronisation guarantees has been established in earlier work~\cite{ifm18,DongolJRA18}, where it has been shown to be necessary for contextual refinement~\cite{DongolJRA18} and compositionality~\cite{ifm18}. Raad et al.~\cite{DBLP:journals/pacmpl/RaadDRLV19} have tackled the problem of client-library programs and also consider the C11 memory model. Krishna et al.~\cite{DBLP:conf/esop/KrishnaEEJ20} have developed an approach to verifying implementations of weakly consistent libraries~\cite{DBLP:journals/pacmpl/EmmiE19}. They account for weak memory relaxations by transitioning over a generic happens-before relation encoded within a transition system. On the one hand, this means that their techniques apply to any memory model, but on the other hand, such a happens-before relation must ultimately be supplied.
In future work, it would be interesting to further investigate implementations of other concurrent data types and transactional memory within this operational framework. \section{Contextual Refinement} \label{sec:cont-refin} We now describe what it means to \emph{implement} a specification so that any client property that is preserved by the specification is not invalidated by the implementation. We define and prove contextual refinement directly, i.e., without appealing to external correctness conditions over libraries, cf.\ linearizability~\cite{HeWi90,ifm18,GotsmanY11,DongolJRA18,DBLP:journals/tcs/FilipovicORY10}. \subsection{Refinement and Simulation for Weak Memory} \label{sec:refin-simul-weak} Since we have an interleaving operational semantics over weak memory states, the development of our refinement theory closely follows the standard approach under SC~\cite{DBLP:books/cu/RoeverE1998}. Suppose $P$ is a program with initialisation $\mathbf{Init}$. An \emph{execution} of $P$ is defined by a possibly infinite sequence $\Pi_0\, \Pi_1\, \Pi_2\,\dots$ such that \begin{enumerate}[leftmargin=*] \item each $\Pi_i$ is a 4-tuple $(P_i, ls_i, \gamma_i, \beta_i)$ comprising a program, local state, global client state and global library state, \item $(ls_0, \gamma_0, \beta_0) = (\mathit{ls}_\mathbf{Init}, \gamma_\mathbf{Init}, \beta_\mathbf{Init})$, and \item for each $i$, we have $\Pi_i \Longrightarrow \Pi_{i+1}$ as defined in \refsec{sec:program-semantics}. \end{enumerate} A \emph{client trace} corresponding to an execution $\Pi_0\,\Pi_1\,\Pi_2\dots$ is a sequence ${\it ct} \in \Sigma_C^*$ such that ${\it ct}_i = (\pi_{2}(\Pi_i)_{|C}, \pi_{3}(\Pi_i))$, where $\pi_n$ is a projection function that extracts the $n$th component of a given tuple and $ls_{|C}$ restricts the given local state $ls$ to the variables in $\mathit{LVar}_C$. Thus each ${\it ct}_i$ comprises the client-visible local state and the global client state of $\Pi_i$.
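As a concrete reading of this projection, the following sketch (our encoding: configurations as 4-tuples, local states as per-thread dicts, and an assumed set `vars_c` of client registers) extracts the client trace from an execution:

```python
def restrict(ls, vars_c):
    """ls|C: restrict a local-state map to the client's local variables."""
    return {t: {r: v for r, v in regs.items() if r in vars_c}
            for t, regs in ls.items()}

def client_trace(execution, vars_c):
    """ct_i = (pi_2(Pi_i)|C, pi_3(Pi_i)): keep the client-visible local
    state and the global client state of each configuration."""
    return [(restrict(ls, vars_c), gamma)
            for (_prog, ls, gamma, _beta) in execution]
```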
After a projection, the concrete implementation may contain (finite or infinite) stuttering~\cite{DBLP:books/cu/RoeverE1998}, i.e., consecutive states in which the client state is unchanged. We let ${\it rem\_stut}({\it ct})$ be the function that removes all stuttering from the trace ${\it ct}$, i.e., each consecutively repeating state is replaced by a single instance of that state. We let ${\it Tr_{SF}}(P)$ denote the set of \emph{stutter-free traces} of a program $P$, i.e., the stutter-free traces generated from the set of all executions of $P$. Below we refer to the client that uses the abstract object as the \emph{abstract client} and the client that uses the object's implementation as the \emph{concrete client}. The notion of contextual refinement that we develop ensures that a client is not able to distinguish the use of a concrete implementation in place of an abstract specification. In other words, each thread of the concrete client should only be able to observe the writes (and updates) in the client state (i.e., the $\gamma$ component) that the thread could already observe in the corresponding client state of the abstract client. First we define trace refinement for weak memory states. \begin{definition}[State and Trace Refinement] \label{def:cont-refin-1} We say a concrete state $(ls_C, \gamma_C)$ is a \emph{refinement} of an abstract state $(ls_A, \gamma_A)$, denoted $(ls_A, \gamma_A) \sqsubseteq (ls_C, \gamma_C)$, iff $ls_A = ls_C$, $\gamma_A.\mathtt{cvd} = \gamma_C.\mathtt{cvd}$ and for all threads $t$ and $x \in {\it GVar}$, we have $\gamma_C.\mathtt{Obs}(t, x) \subseteq \gamma_A.\mathtt{Obs}(t, x)$. We say a concrete trace ${\it ct}$ is a \emph{refinement} of an abstract trace ${\it at}$, denoted ${\it at} \sqsubseteq {\it ct}$, iff ${\it at}_i \sqsubseteq {\it ct}_i$ for all $i$. \end{definition} This now leads to a natural definition of contextual refinement that is based on the refinement of traces.
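Both ${\it rem\_stut}$ and the state-refinement check of \refdef{def:cont-refin-1} have a direct functional reading; a minimal sketch, with a state modelled as a triple of local state, observable-write sets and covered set (an encoding of ours):

```python
def rem_stut(ct):
    """Replace each run of consecutively repeating states by one instance."""
    out = []
    for s in ct:
        if not out or out[-1] != s:
            out.append(s)
    return out

def state_refines(abs_state, con_state):
    """(ls_A, gamma_A) ⊑ (ls_C, gamma_C): equal local states, equal covered
    sets, and every concrete observation is also an abstract observation."""
    (ls_a, obs_a, cvd_a), (ls_c, obs_c, cvd_c) = abs_state, con_state
    return (ls_a == ls_c and cvd_a == cvd_c and
            all(obs_c[k] <= obs_a[k] for k in obs_c))  # Obs_C(t,x) ⊆ Obs_A(t,x)
```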
\begin{definition}[Program Refinement] \label{def:prog-ref} A concrete program $P_C$ is a \emph{refinement} of an abstract program $P_A$, denoted $P_A \sqsubseteq P_C$, iff for any (stutter-free) trace ${\it ct} \in {\it Tr_{SF}}(P_C)$ there exists a (stutter-free) trace ${\it at} \in {\it Tr_{SF}}(P_A)$ such that ${\it at} \sqsubseteq {\it ct}$. \end{definition} Finally, we obtain a notion of contextual refinement for abstract objects. Suppose $P$ is a program with holes. We let $P[O]$ be the program in which the holes are filled with the operations from object $O$. Note that $O$ may be an abstract object, in which case execution of each method call follows the abstract object semantics (\refsec{sec:abstr-object-semant}), or a concrete implementation, in which case execution of each method call follows the semantics of reads, writes and updates (\refsec{sec:program-semantics}). \begin{definition}[Contextual refinement] \label{def:cref} We say a concrete object $CO$ is a \emph{contextual refinement} of an abstract object $AO$ iff for any client program $C$, we have $C[AO] \sqsubseteq C[CO]$. \end{definition} To verify contextual refinement, we use a notion of \emph{simulation}, which once again is a standard technique from the literature. The difference in a weak memory setting is the fact that the refinement rules must relate more complex configurations, i.e., tuples of the form $(P, {\it lst}, \gamma, \alpha)$. The simulation relation, $R$, relates triples $(als, \gamma_A, \alpha)$, comprising an abstract local state $als$, client state $\gamma_A$ and library state $\alpha$, with triples $(cls, \gamma_C, \beta)$ comprising a concrete local state $cls$, a client state $\gamma_C$ and concrete library state $\beta$. The simulation condition must ultimately ensure $(als_{|C}, \gamma_A) \sqsubseteq (cls_{|C}, \gamma_C)$ at each step as defined in \refdef{def:cont-refin-1}. 
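For finite sets of stutter-free traces, \refdef{def:prog-ref} can be checked by brute force. The following toy sketch (names ours; we assume equal-length stutter-free traces and take the state-refinement check as a parameter) is illustrative only:

```python
def trace_refines(at, ct, state_refines):
    """at ⊑ ct: pointwise state refinement over stutter-free traces,
    assuming the traces have equal length."""
    return len(at) == len(ct) and all(
        state_refines(a, c) for a, c in zip(at, ct))

def program_refines(abs_traces, con_traces, state_refines):
    """P_A ⊑ P_C: every concrete trace is matched by some abstract trace."""
    return all(any(trace_refines(at, ct, state_refines) for at in abs_traces)
               for ct in con_traces)
```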
However, since client synchronisation can affect the library state, a generic forward simulation rule is non-trivial to define, since it requires one to describe how client steps affect the simulation relation. We therefore present a simpler use case for libraries that are used by clients that do not perform any synchronisation outside the library itself (e.g., the client in \reffig{fig:lockmp-proof}). If $\Pi = (P, lst, \gamma, \alpha)$, we let ${\it state}(\Pi) = (lst, \gamma, \alpha)$ be the state corresponding to $\Pi$. \begin{definition}[Forward simulation for synchronisation-free clients] \label{def:fsim} For an abstract object $AO$ and a concrete object $CO$ and a client $C$ that only synchronises through $AO$ (and $CO$), $C[AO] \sqsubseteq C[CO]$ holds if there exists a relation $R$ such that \begin{enumerate}[leftmargin=*] \item $R((als, \gamma_A, \alpha), (cls, \gamma_C, \beta)) \Rightarrow $ $ \begin{array}[c]{l} als_{|C} = cls_{|C} \wedge \gamma_A.\mathtt{cvd} = \gamma_C.\mathtt{cvd} \wedge {} \\ \forall t, x.\ \begin{array}[t]{@{}l@{}} \gamma_C.\mathtt{Obs}(t, x) \subseteq \gamma_A.\mathtt{Obs}(t, x) \wedge {} \\ als(t)(rval) = cls(t)(rval) \end{array} \end{array} $ \hfill (client observation) \item $R({\it state}(\Omega_\mathbf{Init}), {\it state}(\Pi_\mathbf{Init}))$ \hfill (initialisation) \item For any concrete configurations $\Pi$, $\Pi'$ and abstract configuration $\Omega$, if $\Pi \Longrightarrow \Pi'$ via a step corresponding to $CO$, and $R({\it state}(\Omega), {\it state}(\Pi))$, then either \begin{itemize} \item $R({\it state}(\Omega), {\it state}(\Pi'))$, or \hfill (stuttering step) \item there exists an abstract configuration $\Omega'$ such that $\Omega \Longrightarrow \Omega'$ and $R({\it state}(\Omega'), {\it state}(\Pi'))$.
\hfill (non-stuttering step) \end{itemize} \end{enumerate} \end{definition} \begin{theorem} If $R$ is a forward simulation between $AO$ and $CO$, then for any client $C$ that only synchronises through $AO$ (and $CO$) we have $C[AO] \sqsubseteq C[CO]$. \end{theorem} \section{Generalised operational semantics} \label{sec:gener-weak-memory} We now present a simple program syntax that allows one to write open programs whose holes can be filled by an abstract method or a concrete implementation of a method. \subsection{Program Syntax} \label{sec:program-syntax} We start by defining a syntax of concurrent programs, starting with the structure of sequential programs (single threads). A thread may use {\em global} shared variables (from ${\it GVar}$) and local registers (from $\mathit{LVar}$). We let $\mathit{Var} = {\it GVar} \cup \mathit{LVar}$ and assume ${\it GVar} \cap \mathit{LVar} = \emptyset$. For client-library programs, we partition ${\it GVar}$ into ${\it GVar}_C$ (the global client variables) and ${\it GVar}_L$ (the global library variables), and similarly $\mathit{LVar}$ into $\mathit{LVar}_C$ and $\mathit{LVar}_L$. In an implementation, global variables can be accessed in three different {\em synchronisation modes}: acquire ({\sf A}, for reads), release ({\sf R}, for writes) and relaxed (no annotation). The annotation {\sf RA} is employed for {\em update} operations, which read from and write to a shared variable in a single atomic step. We let $\mathit{Obj}$ and $\mathit{Meth}$ be the sets of all objects and method calls, respectively. We assume that $\ominus$ is a unary operator (e.g., $\neg$), $\oplus$ is a binary operator (e.g., $\land$, $+$, $=$) and $n$ is a value (of type $\mathit{Val}$). Expressions must only involve local variables.
The syntax of sequential programs, ${\it Com}$, is given by the following grammar with $r \in \mathit{LVar}, x \in {\it GVar}, o \in \mathit{Obj}, m \in \mathit{Meth}, u, v \in \mathit{Val}$: \medskip \noindent \begin{tabular}[t]{r@{~}l} ${\it Exp}_L$ ::= & $\mathit{Val} \mid \mathit{LVar} \mid \ominus {\it Exp}_L \mid {\it Exp}_L \oplus {\it Exp}_L$ \\[1mm] ${\it CExp}_L$ ::= & $\bullet \mid {\it Exp}_L$ \\[1mm] $\bullet$ ::= & $\mathit{Val} \mid o.m([u]) \mid {\it Com} $, where ${\it Com}$ contains no holes \\[1mm] ${\it ACom}$ ::= & $ \bullet \mid \textsf{\textbf{skip}} \mid r \gets \textsf{\textbf{CAS}}(x, u, v)^{\sf RA} \mid r \gets \textsf{\textbf{fetch\_and\_inc}}(x)^{\sf RA} \mid r := {\it CExp}_L \mid x :=^{\sf [R]} {\it Exp}_L \mid r \gets^{\sf [A]} x$ \\[6pt] ${\it Com}$ ::= & ${\it ACom} \mid {\it Com} ; {\it Com} \mid \textsf{\textbf{if}}~B\ \textsf{\textbf{then}}\ {\it Com}\ \textsf{\textbf{else}}\ {\it Com} \mid \textsf{\textbf{while}}\ B\ \textsf{\textbf{do}}\ {\it Com}$ \end{tabular} \medskip \noindent where we assume $B$ to be an expression of type ${\it CExp}_L$ that evaluates to a boolean. We allow programs with holes, denoted $\bullet$, which may be filled by an abstract or concrete method call. During a program's execution, the hole may also be filled by the null value $\bot \notin \mathit{Val}$, or by the return value of the method call. The notation ${\sf [X]}$ denotes that the annotation ${\sf X}$ is optional, where ${\sf X} \in \{{\sf A}, {\sf R}\}$, enabling one to distinguish relaxed, acquiring and releasing accesses. Within a method call, the argument $u$ is optional. Later, we will also use $\textsf{\textbf{do}}$-$\textsf{\textbf{until}}$ loops, which are straightforward to define in terms of the syntax above. \subsection{Program Semantics} \label{sec:program-semantics} For simplicity, we assume concurrency at the top level only.
We let $\mathit{Tid}$ be the set of all thread identifiers and use a function ${\it Prog}: \mathit{Tid} \to {\it Com}$ to model a program comprising multiple threads. In examples, we typically write concurrent programs as $C_1 || \ldots || C_n$, where $C_i \in {\it Com}$. We further assume some initialisation of variables. The structure of our programs is thus $\mathbf{Init}; \big( C_1 || \ldots || C_n \big) $. The operational semantics for this language is defined in three parts. The \emph{program semantics} fixes the steps that the concurrent program can take. This gives rise to transitions $(P,\mathit{lst}) \trans {a}_t (P',\mathit{lst}')$ of a thread $t$, where $P$ and $P'$ are programs, $\mathit{lst}$ and $\mathit{lst}'$ are the states of local variables, and $a$ is an action (possibly the silent action $\epsilon$, see below). The program semantics is combined with a {\em memory semantics}, which reflects the C11 state, and in particular the write actions from which a read action can read. Finally, there is the \emph{object semantics}, which defines the abstract semantics of the object at hand. We assume that the set of actions is given by ${\sf Act}$. We let $\epsilon \notin {\sf Act}$ be a silent action and let ${\sf Act}_\epsilon = {\sf Act} \cup \{\epsilon\}$. In the program semantics, we assume a function $\mathit{lst} \in \mathit{Tid} \rightarrow (\mathit{LVar} \pfun \mathit{Val})$, which returns the local state for the given thread. We assume that the local variables of threads are disjoint, i.e., if $t \neq t'$, then $\operatorname{\mathbf{dom}}(\mathit{lst}(t)) \cap \operatorname{\mathbf{dom}}(\mathit{lst}(t')) = \emptyset$. For an expression $E$ over local variables, we write $\llbr{E}_{\mathit{ls}}$ for the value of $E$ in local state $\mathit{ls}$; we write $\mathit{ls}[r := v]$ for the state that is identical to $\mathit{ls}$ except for the value of local variable $r$, which becomes $v$.
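The local-state operations $\llbr{E}_{\mathit{ls}}$ and $\mathit{ls}[r := v]$ have an obvious functional reading; a small sketch with expressions encoded as nested tuples (an encoding of ours, covering only a few operators):

```python
def eval_exp(e, ls):
    """[[E]]_ls: evaluate an expression over local variables only."""
    if isinstance(e, str):          # a local register name
        return ls[e]
    if isinstance(e, tuple):        # ('+', e1, e2), ('=', e1, e2), ...
        op, e1, e2 = e
        v1, v2 = eval_exp(e1, ls), eval_exp(e2, ls)
        return {'+': v1 + v2, '=': v1 == v2, 'and': v1 and v2}[op]
    return e                        # a literal value

def update(ls, r, v):
    """ls[r := v]: ls unchanged except register r, which becomes v."""
    new = dict(ls)
    new[r] = v
    return new
```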
\begin{figure*}[t]\small \centering % $\inference{r \in \mathit{LVar} \quad v = \llbr{E}_{{\it ls}}}{(r := E,{\it ls}) \trans{\epsilon} (\textsf{\textbf{skip}},\mathit{ls}[r := v]) } \qquad \inference{x \in {\it GVar} \quad a = wr^{\sf [R]}(x, \llbr{E}_{{\it ls}}) }{(x :=^{\sf [R]} E,ls) \trans{a} (\textsf{\textbf{skip}},\mathit{ls}) }$ \bigskip $\inference{a = rd^{\sf [A]}(x,v) \quad v \in \mathit{Val}}{ (r \gets^{\sf [A]} x, \mathit{ls}) \trans a (\textsf{\textbf{skip}},\mathit{ls}[r := v]) } $ \bigskip $ \inference{(C_1,\mathit{ls}) \trans{a} (C_1',ls')}{(C_1 ; C_2,\mathit{ls}) \trans{a} (C_1' ; C_2,\mathit{ls}')} \qquad \inference{v \in \mathit{Val} \cup \{\bot\}}{(v ; C_2,\mathit{ls}) \trans{\epsilon} (C_2,\mathit{ls})}$ \bigskip $ \inference {\llbr{B}_{\mathit{ls}}} {({\it IF}, \mathit{ls}) \trans{\epsilon} (C_1,\mathit{ls})} \quad \inference {\neg \llbr{B}_{\mathit{ls}}} {({\it IF}, \mathit{ls}) \trans{\epsilon} (C_2,\mathit{ls}) } $ \bigskip $ \inference{\llbr{B}_{\mathit{ls}}}{ \begin{array}[t]{@{}l@{}} ({\it WHILE}, \mathit{ls}) \trans{\epsilon} (C; {\it WHILE}, \mathit{ls}) \end{array} } \qquad \inference{\neg \llbr{B}_{\mathit{ls}}} { \begin{array}[t]{@{}l@{}} ({\it WHILE}, \mathit{ls}) \trans{\epsilon} (\textsf{\textbf{skip}}, \mathit{ls}) \end{array} } $\bigskip $ \inference{a = rd(x,v') \quad v'\neq u \quad u,v,v' \in \mathit{Val}}{ (r \gets \textsf{\textbf{CAS}}(x, u, v), \mathit{ls}) \trans a (\textsf{\textbf{skip}},\mathit{ls}[r := {\it false}]) } $ \bigskip $ \inference{a = upd^{\sf RA}(x,u,v) \quad u,v \in \mathit{Val}}{ (r \gets \textsf{\textbf{CAS}}(x, u, v), \mathit{ls}) \trans a (\textsf{\textbf{skip}},\mathit{ls}[r := {\it true}]) } \qquad \inference{a = upd^{\sf RA}(x,u,u+1) \quad u \in \mathit{Val}}{ (r \gets \textsf{\textbf{fetch\_and\_inc}}(x), \mathit{ls}) \trans a (\textsf{\textbf{skip}},\mathit{ls}[r := u]) } $ \bigskip $ \inference{}{ (\COSem{C}{\textsf{\textbf{skip}}}, \mathit{ls}) \trans \epsilon (C,\mathit{ls}) } \qquad 
\inference{(D, ls) \trans a (D', ls')}{ (\COSem{C}{D}, \mathit{ls}) \trans{a}_L (\COSem{C}{D'},\mathit{ls}') } $\bigskip $ \inference[\sc Cli]{(P(t),\mathit{lst}(t)) \trans{a} (C,\mathit{ls}) \quad a \in {\sf Act}_\epsilon} {(P,\mathit{lst}) \trans{a}_t (P[t := C],\mathit{lst}[t := \mathit{ls}])} \qquad \inference[\sc Lib]{(P(t),\mathit{lst}(t)) \trans{a}_L (C,\mathit{ls}) \quad a \in {\sf Act}_\epsilon} {(P,\mathit{lst}) \trans{a}_{L, t} (P[t := C],\mathit{lst}[t := \mathit{ls}])} $ \caption{Program semantics, where ${\it IF} = \textsf{\textbf{if}} \ B\ \textsf{\textbf{then}}\ C_1\ \textsf{\textbf{else}}\ C_2$ and ${\it WHILE} = \textsf{\textbf{while}}\ B\ \textsf{\textbf{do}}\ C$} \label{fig:comm-sem} \end{figure*} We use $C[D]$ to denote the program $C$ with the leftmost innermost hole filled by $D$. If $D = \bot$, we proceed with the execution of $C$, otherwise we execute $D$. Note that if $D$ terminates with a value (due to a method call that returns a value), then the hole contains a value and execution may proceed by either using the rule for $r \ensuremath{:=} v$ or the rule for $v ; C_2$, both of which are present in \reffig{fig:comm-sem}. The last two rules, {\sc Cli} and {\sc Lib}, lift the transitions of threads to a transition of a client and library program, respectively. These are distinguished by the subscript $L$, which only appears in transitions corresponding to the library. The rules in \reffig{fig:comm-sem} allow for {\em all} possible values for any read. We constrain these values with respect to a {\em memory semantics} (formalised by $\strans{a}_t$), which is described for reads, writes and updates in \refsec{sec:memory-semantics} and for abstract objects in \refsec{sec:abstr-object-semant}. The combined semantics brings together a client state $\gamma$ and library state $\beta$ as follows. 
\begin{gather*} \small \inference{(P,\mathit{lst}) \trans {\epsilon }_t (P',\mathit{lst}')} {(P,\mathit{lst},\gamma, \beta) \ltsArrow (P', \mathit{lst}',\gamma, \beta)} \small \ \ \ \inference{(P,\mathit{lst}) \trans {\epsilon }_{L,t} (P',\mathit{lst}')} {(P,\mathit{lst},\gamma, \beta) \ltsArrow (P', \mathit{lst}',\gamma, \beta)} \\[5pt] \small \inference{(P,\mathit{lst}) \trans {a}_t (P',\mathit{lst}') \\ \gamma, \beta \strans{a}_{t} \gamma', \beta'} {(P,\mathit{lst},\gamma, \beta) \ltsArrow (P', \mathit{lst}',\gamma', \beta')} \hfill \inference{(P,\mathit{lst}) \trans {a}_{L,t} (P',\mathit{lst}') \\ \beta, \gamma \strans{a}_{t} \beta', \gamma'} {(P,\mathit{lst},\gamma, \beta) \ltsArrow (P', \mathit{lst}',\gamma', \beta')} \end{gather*} These rules ensure, for example, that a read only returns a value allowed by the underlying memory model. In \refsec{sec:abstr-object-semant}, we introduce additional rules so that the memory model also contains actions corresponding to method calls on an abstract object. Note that the memory semantics (see \refsec{sec:memory-semantics} and \refsec{sec:abstr-object-semant}) defined by $\gamma, \beta \strans{a}_t \gamma', \beta'$ assumes that $\gamma$ is the state of the component being executed and $\beta$ is the state of the context. For a client step, we have that $\gamma$ is the executing component state and $\beta$ is the context state, whereas for a library step, these parameters are swapped. \subsection{Memory Semantics} \label{sec:memory-semantics} Next, we detail the modularised memory semantics, which builds on an earlier monolithic semantics~\cite{ECOOP20}, itself a timestamp-based revision of a prior operational semantics~\cite{DBLP:conf/ppopp/DohertyDWD19}. Our present extension is a semantics that copes with client-library interactions in weak memory. Namely, it describes how synchronisation (in our example release-acquire synchronisation) in one component affects thread views in another component.
The semantics accommodates both client synchronisation affecting a library, and vice versa. \smallskip\noindent {\bf Component State.} We assume ${\sf Act}$ denotes the set of actions. Following~\cite{ECOOP20}, each global write is represented by a pair $(a, q) \in {\sf Act} \times \mathbb{Q}$, where $a$ is a write action, and $q$ is a rational number that we use as a {\em timestamp} corresponding to modification order (cf. \cite{DBLP:conf/ecoop/KaiserDDLV17,Dolan:2018:LDRF,DBLP:journals/corr/PodkopaevSN16}). The set of modifying operations within a component that have occurred so far is recorded in $\mathtt{ops} \subseteq {\sf Act} \times \mathbb{Q}$. Unlike prior works, to accommodate (abstract) method calls of data structures, we record abstract operations in general, as opposed to writes only. Each state must record the operations that are observable to each thread. To achieve this, we use two families of functions from global variables to writes~(cf. \cite{DBLP:journals/corr/PodkopaevSN16,DBLP:conf/popl/KangHLVD17}). \begin{itemize}[leftmargin=*] \item A \emph{thread view} function ${\tt tview}_t \in {\it GVar} \rightarrow \mathtt{ops}$ that returns the \emph{viewfront} of thread $t$. The thread $t$ can read from any write to variable $x$ whose timestamp is not earlier than that of ${\tt tview}_t(x)$. Accordingly, we define, for each state $\gamma$, thread $t$ and global variable $x$, the set of {\em observable writes}, where ${\tt tst}(w) = q$ denotes $w$'s timestamp: \smallskip\hfill$\gamma.\mathtt{Obs}(t, x) = \{(a, q) \in \gamma.\mathtt{ops} \mid \begin{array}[t]{@{}l@{}} \mathit{var}(a) = x \\ {}\wedge {\tt tst}(\gamma.{\tt tview}_t(x)) \leq q\} \end{array}$\hfill{} \item A \emph{modification view} function ${\tt mview}_w \in {\it GVar} \rightarrow {\sf Act} \times \mathbb{Q}$ that records the \emph{viewfront} of write $w$, i.e., the viewfront of the thread that executed $w$ immediately after $w$'s execution.
We use ${\tt mview}_w$ to compute a new value for ${\tt tview}_t$ if a thread $t$ \emph{synchronises} with $w$, i.e., if $w \in \mathsf{W_R}$ and another thread executes an $e \in \mathsf{R_A}$ that reads from $w$. \end{itemize} The client cannot directly access writes in the library; therefore, the thread view function must map to writes within the same component. On the other hand, synchronisation in a component can affect thread views in another (as discussed in \refsec{sec:message-passing-via}); thus, the modification view function may map to operations across the system. Finally, our semantics maintains a set $\mathtt{cvd} \subseteq \mathtt{ops}$ of {\em covered} operations. In C11 RAR, each update action occurs in modification order immediately after the write that it reads from \cite{DBLP:conf/ppopp/DohertyDWD19}. This property ensures the atomicity of updates. We disallow any newer modifying operation (write or update) from intervening between any update and the write or update that it reads from. As we explain below, covered writes are those that are immediately prior to an update in modification order, and new write actions never interact with a covered write. \smallskip \noindent{\bf Initialisation.} Suppose ${\it GVar}_C = \{x_1, \ldots, x_n\}$, ${\it GVar}_L = \{y_1, \ldots, y_n\}$, $\mathit{LVar} = \{r_1, \ldots, r_m\}$, $k_1, \dots, k_n, l_1, \dots, l_m \in \mathit{Val}$, and $\mathbf{Init} = x_1:=k_1; \ldots; x_n:=k_n; [r_1 := l_1 ;] \dots [r_m := l_m;]$, where we use the notation $[r_i := l_i;]$ to mean that the assignment $r_i := l_i$ may optionally appear in $\mathbf{Init}$. Thus each shared variable is initialised exactly once and each local variable is initialised at most once.
The initial values of the state components are then as follows, where we assume $0$ is the initial timestamp, $t$ is a thread, $x_i \in {\it GVar}_C$ and $y_i \in {\it GVar}_L$: \begin{align*} \gamma_\mathbf{Init}.\mathtt{ops} & = \{(wr(x_1,k_1),0), \ldots, (wr(x_n,k_n),0)\} \\ \beta_\mathbf{Init}.\mathtt{ops} & = \{(wr(y_1,k_1),0), \ldots, (wr(y_n,k_n),0)\} \\ \gamma_\mathbf{Init}.{\tt tview}_t(x_i) & = (wr(x_i,k_i),0)\\ \beta_\mathbf{Init}.{\tt tview}_t(y_i) & = (wr(y_i,k_i),0) \\ \gamma_\mathbf{Init}.{\tt mview}_{x_i} &=\beta_\mathbf{Init}.{\tt mview}_{y_i}= \gamma_\mathbf{Init}.{\tt tview}_t \!\cup\! \beta_\mathbf{Init}.{\tt tview}_t \\ \gamma_\mathbf{Init}.\mathtt{cvd} &= \beta_\mathbf{Init}.\mathtt{cvd}= \emptyset \end{align*} The local state component of each thread must also be compatible with $\mathbf{Init}$, i.e., for each $t$, if $r_i \in \operatorname{\mathbf{dom}}({\it lst}(t))$, then $({\it lst}(t))(r_i) = l_i$ provided $r_i := l_i$ appears in $\mathbf{Init}$. We let ${\it lst}_\mathbf{Init}$ be the local state compatible with $\mathbf{Init}$ and let $\Gamma_\mathbf{Init} = ({\it lst}_\mathbf{Init},\gamma_\mathbf{Init}, \beta_\mathbf{Init})$.
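To make the timestamped state model concrete, the following Python sketch (our own illustration, not the Isabelle mechanisation; names mirror the paper's notation) encodes a component state and the observable-writes set $\gamma.\mathtt{Obs}(t,x)$:

```python
# Illustrative model of a component state: 'ops' is a set of
# (action, timestamp) pairs, with actions of the form ('wr', var, value);
# 'tview' maps each thread and variable to its viewfront write.
from dataclasses import dataclass, field

@dataclass
class State:
    ops: set = field(default_factory=set)
    tview: dict = field(default_factory=dict)

def var(action):  # variable of an action ('wr', var, value)
    return action[1]

def tst(op):      # timestamp of an (action, timestamp) pair
    return op[1]

def obs(state, t, x):
    """Obs(t, x): writes to x whose timestamp is no earlier than the
    timestamp of thread t's viewfront for x."""
    return {(a, q) for (a, q) in state.ops
            if var(a) == x and tst(state.tview[t][x]) <= q}

# Two writes to a client variable d: the initial write and a later
# write of 5 that thread 2 has not yet observed.
w0 = (('wr', 'd', 0), 0)
w1 = (('wr', 'd', 5), 1)
gamma = State(ops={w0, w1},
              tview={1: {'d': w1},    # thread 1 executed w1
                     2: {'d': w0}})   # thread 2 still views the initial write
```

Here thread 2 may read either write (stale reads are permitted), whereas thread 1, whose viewfront is the newer write, can only read that one.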
\begin{figure*}[t] \centering \small $\inference[{\sc Read}] {a \in \{rd(x, n), rd^\mathsf{A}(x, n) \} \qquad (w, q) \in \gamma.\mathtt{Obs}(t, x) \qquad {\it wrval}(w) = n \\ {\tt tview}' = \ensuremath{\begin{cases} \gamma.{\tt tview}_t\otimes\gamma.{\tt mview}_{(w,q)}&\mbox{if $(w, a) \in \mathsf{W_R} \times \mathsf{R_A}$ }\\ \gamma.{\tt tview}_t[x := (w, q)]&\mbox{otherwise} \end{cases}} \\ {\tt ctview}' = \ensuremath{\begin{cases} \beta.{\tt tview}_t\otimes\gamma.{\tt mview}_{(w,q)}&\mbox{if $(w, a) \in \mathsf{W_R} \times \mathsf{R_A}$ }\\ \beta.{\tt tview}_t &\mbox{otherwise} \end{cases}} } {\gamma, \beta\ \strans{a}_{t}\ \gamma[{\tt tview}_t \ensuremath{:=} {\tt tview}'], \beta[{\tt tview}_t \ensuremath{:=} {\tt ctview}'] }$ \bigskip $ \inference[{\sc Write}] { a \in \{ wr(x,n), wr^{\sf R}(x,n)\} \qquad (w, q) \in \gamma.\mathtt{Obs}(t, x) \setminus \gamma.\mathtt{cvd} \\ \mathit{fresh}_\gamma(q,q') \qquad \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, q')\} \\ {\tt tview}' = \gamma.{\tt tview}_t[x := (a, q')] \qquad {\tt mview}' = {\tt tview}' \cup \beta.{\tt tview}_t } {\gamma, \beta\ \strans{a}_{t}\ \gamma[{\tt tview}_t \ensuremath{:=} {\tt tview}', {\tt mview}_{(a,q')} \ensuremath{:=} {\tt mview}', \mathtt{ops} \ensuremath{:=} \mathtt{ops}'], \beta }$ \bigskip $ \inference[{\sc Update}] { a = upd^{\sf RA}(x,m,n) \qquad (w, q) \in \gamma.\mathtt{Obs}(t, x) \setminus \gamma.\mathtt{cvd} \qquad {\it wrval}(w) = m \\ \mathit{fresh}_\gamma(q,q') \qquad \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, q')\} \\ \mathtt{cvd}' = \gamma.\mathtt{cvd} \cup \{(w, q)\} \qquad {\tt mview}' = {\tt tview}' \cup {\tt ctview}' \\ {\tt tview}' = \ensuremath{\begin{cases} \gamma.{\tt tview}_t[x \ensuremath{:=} (a, q')]\otimes\gamma.{\tt mview}_{(w,q)}&\mbox{if $w \in \mathsf{W_R}$ }\\ \gamma.{\tt tview}_t[x \ensuremath{:=} (a, q')]&\mbox{otherwise} \end{cases}} \\ {\tt ctview}' = \ensuremath{\begin{cases} \beta.{\tt tview}_t \otimes\gamma.{\tt mview}_{(w,q)}&\mbox{if $w \in
\mathsf{W_R}$ }\\ \beta.{\tt tview}_t &\mbox{otherwise} \end{cases}} } {\gamma, \beta\ \strans{a}_{t}\ \gamma\left[ \begin{array}[c]{@{}l@{}} {\tt tview}_t \ensuremath{:=} {\tt tview}', {\tt mview}_{(a,q')} \ensuremath{:=} {\tt mview}', \\ \mathtt{ops} \ensuremath{:=} \mathtt{ops}', \mathtt{cvd} \ensuremath{:=} \mathtt{cvd}' \end{array}\right], \beta[{\tt tview}_t\ensuremath{:=} {\tt ctview}']}$ \caption{Transition relation for reads, writes and updates of the memory semantics} \label{fig:surrey-opsem} \end{figure*} \smallskip \noindent{\bf Transition semantics.} The transition relation of our semantics for global reads and writes is given in \reffig{fig:surrey-opsem} and builds on an earlier semantics that does not distinguish the state of the context~\cite{ECOOP20}. Each transition $\gamma, \beta \strans{a}_{t} \gamma', \beta'$ is labelled by an action $a$ and a thread $t$, and updates the target state $\gamma$ (the state of the component being executed) and the context $\beta$. \smallskip \noindent{\bf {\sc Read} transition by thread $t$.} Assume that $a$ is either a relaxed or an acquiring read of variable $x$, $w$ is a write to $x$ that $t$ can observe (i.e., $(w, q) \in \gamma.\mathtt{Obs}(t,x)$), and the value read by $a$ is the value written by $w$. Each read causes the viewfront of $t$ to be updated. For an unsynchronised read, ${\tt tview}_t$ is simply updated to include the new write. A synchronised read causes the executing thread's views of both the executing component and the context to be updated. In particular, for each variable $x$, the new view of $x$ will be the later (in timestamp order) of ${\tt tview}_t(x)$ and ${\tt mview}_w(x)$.
To express this, we use an operation that combines two views $V_1$ and $V_2$, constructing a new view from $V_1$ by taking the later write for each variable: \medskip \hfill $ \begin{array}[t]{@{}l@{}l} V_1 \otimes V_2 = \lambda x \in \operatorname{\mathbf{dom}}(V_1).\ & \textbf{if}~{\tt tst}(V_2(x)) \leq {\tt tst}(V_1(x))\ \textbf{then}\ V_1(x)\ \textbf{else}\ V_2(x) \end{array} $\hfill {} \smallskip \noindent{\bf {\sc Write} transition by thread $t$.} A write transition must identify the write $(w,q)$ after which $a$ occurs. This $w$ must be observable and must {\em not} be covered --- the second condition preserves the atomicity of read-modify-write updates. We must choose a fresh timestamp $q' \in \mathbb{Q}$ for $a$, which for a C11 state $\gamma$ is formalised by $\mathit{fresh}_\gamma(q,q') = q < q' \wedge \forall w' \in \gamma.\mathtt{ops}.\ q < {\tt tst}(w') \Rightarrow q' < {\tt tst}(w')$. That is, $q'$ is a fresh timestamp such that $(a,q')$ occurs immediately after $(w,q)$ in timestamp order. The new write is added to the set $\mathtt{ops}$. We update $\gamma.{\tt tview}_t$ to include the new write, which ensures that $t$ can no longer observe any writes prior to $(a, q')$. Moreover, we set the viewfront of $(a, q')$ to be the new viewfront of $t$ in $\gamma$ together with the thread viewfront of the environment state $\beta$. If some other thread synchronises with this new write in some later transition, that thread's view will become at least as recent as $t$'s view at this transition. Since ${\tt mview}$ keeps track of the executing thread's view of both the component being executed and its context, any synchronisation through this new write will update views across components. \smallskip \noindent{\bf {\sc Update} transition by thread $t$.} These transitions are best understood as a combination of the read and write transitions.
As with a write transition, we must choose a valid fresh timestamp $q'$, and the state component $\mathtt{ops}$ is updated in the same way. The state component ${\tt mview}$ includes information from the new view of the executing thread $t$. As discussed earlier, in {\sc Update} transitions it is necessary to record that the write that the update interacts with is now covered, which is achieved by adding that write to $\mathtt{cvd}$. Finally, we must compute a new thread view, which is similar to a {\sc Read} transition, except that the thread's new view always includes the new write introduced by the update. \section{Introduction} An effective technique for reasoning about weak memory models is to consider the observations that a thread can make of the writes within a system. For example, for certain subsets of C11 (the 2011 C standard), reasoning about per-thread observations has led to operational characterisations of the memory model, high-level predicates for reasoning about such observations, and deductive verification techniques applied to standard litmus tests and synchronisation algorithms~\cite{ECOOP20}. Current verification techniques are, however, focussed on (closed) programs, and hence do not provide any mechanism for (de)composing clients and libraries. This problem requires special consideration under weak memory since the execution of a library method induces synchronisation. That is, a thread's observations of a system (including of client variables) can change when executing library methods. This paper addresses several questions surrounding client-library composition in a weak memory context.
{\bf (1)} {\em How can a client \emph{use} a weak memory library, i.e., what abstract guarantees can a library provide a client program?} Prior works~\cite{DongolJRA18,DBLP:conf/popl/BattyDG13} describe techniques for \emph{specifying} the behaviour of abstract objects, which are in turn related to their implementations using causal relaxations of linearizability. However, these works do not provide a mechanism for reasoning about the behaviour of client programs that {\em use} abstract libraries. In this paper, we address this gap by presenting a modular operational semantics that combines weak memory states of clients and libraries. {\bf (2)} {\em What does it mean to \emph{implement} an abstract library?} To ensure that behaviours of client programs using an abstract library are preserved, we require \emph{contextual refinement} between a library implementation and its abstract specification. This guarantees that no new client behaviours are introduced when a client uses a (concrete) library implementation in place of its (abstract) library specification. Under sequential consistency (SC), it is well known that linearizable libraries guarantee (contextual) refinement~\cite{DBLP:conf/icfem/DongolG16,GotsmanY11,DBLP:journals/tcs/FilipovicORY10}. However, under weak memory, a generic notion of linearizability is difficult to pin down~\cite{DongolJRA18,ifm18}. We therefore present a direct technique for establishing contextual refinement under weak memory. A key innovation is the development of context-sensitive simulation rules that ensure that each client thread that uses the implementation observes a subset of the values seen by the abstraction. {\bf (3)} {\em Can the same abstract library specify \emph{multiple} implementations?} A key benefit of refinement is the ability to use the same abstract specification for multiple implementations, e.g., to fine-tune clients for different concurrent workload scenarios.
To demonstrate applicability of our framework, we provide a proof-of-concept example for an abstract lock and show that the same lock specification can be implemented by a sequence lock and a ticket lock. The theory itself is generic and can be applied to concurrent objects in general. {\bf (4)} {\em How can we support verification? Can the verification techniques be mechanised?} Assuming the existence of an operational semantics for the underlying memory model, we aim for \emph{deductive} verification of both client-library composition and contextual refinement. We show that this can be supported by prototyping the full verification stack in the Isabelle/HOL theorem prover\footnote{Our Isabelle theories may be accessed as ancillary material in the arXiv submission.}. \section{Message passing via library objects} \label{sec:message-passing-via} In this section, we illustrate the basic principles of client-object synchronisation in weak memory. \begin{figure}[t] \begin{minipage}[b]{0.45\columnwidth} \begin{center} {\bf Init: } $d:=0;$ $s.init();$ \\ $\begin{array}{@{}l@{\ }||@{\ }l} \text{\bf Thread } 1 & \text{\bf Thread } 2\\ d := 5; \qquad & \text{\bf do } r_1 := s.pop() \\ s.push(1); & \text{\bf until}\ r_1 = 1; \\ & r_2 \gets d; \\ \end{array}$ {\color{red!70!black} $\{r_2 = 0 \lor r_2=5\}$} \qquad \quad \quad \end{center} \vspace{-1em} \caption{Unsynchronised message passing} \label{fig:po-message-bad} \end{minipage} \hfill \begin{minipage}[b]{0.47\columnwidth} \begin{center} {\bf Init:} $d:=0;$ $s.init();$ \\ $\begin{array}{@{}l@{\ }||@{\ }l} \text{\bf Thread } 1 & \text{\bf Thread } 2\\ d := 5; \qquad & \text{\bf do } r_1 := s.pop^{\sf A}() \\ s.push^{\sf R}(1); & \text{\bf until}\ r_1 = 1; \\ & r_2 \gets d; \\ \end{array}$ {\color{green!40!black} $\{ r_2=5\}$} \qquad \ \ \ \end{center} \vspace{-1em} \caption{Publication via a synchronising stack } \label{fig:publication} \end{minipage} \vspace{-1.3em} \end{figure} \smallskip \noindent{\bf Client-object
message passing.} Under SC, all threads have a single common view of the shared state. When a new write is executed, the ``views'' of all threads are updated so that they are guaranteed to only see this new write. In contrast, each thread in a C11 program has its own view of each variable. Views may not be updated when a write occurs, allowing threads to read stale writes. To enforce view updates, additional synchronisation (e.g., release-acquire) must be introduced~\cite{DBLP:conf/popl/BattyOSSW11,DBLP:conf/ecoop/KaiserDDLV17,DBLP:conf/pldi/LahavVKHD17}. Now consider a generalisation of this idea to (client) programs that use library objects. The essence of the problem is illustrated by the message-passing programs in Figures~\ref{fig:po-message-bad}~and~\ref{fig:publication}. Under SC, when the program in \reffig{fig:po-message-bad} terminates, the value of $r_2$ is guaranteed to be $5$. However, this is not necessarily true in a weak memory setting. Even if the $pop$ operation in thread~2 returns 1, it may be possible for thread~2 to observe the stale value 0 for $d$. Therefore the program only guarantees the weaker postcondition $r_2 = 0 \lor r_2 = 5$. To address this problem, the library operations in \reffig{fig:publication} are annotated with release-acquire annotations. In particular, the client assumes the availability of a ``releasing push'' ($push^{\sf R}(1)$), which is to be used for message passing. Thread~2 pops from $s$ using an ``acquiring pop'' ($pop^{\sf A}()$). If this pop returns 1, the stack operations induce a happens-before synchronisation in the client, which in turn means that it is now impossible for thread~2 to read the stale initial write for $d$. \smallskip\noindent{\bf Verification strategy.} Our aim is to enable {\em deductive verification} of such programs by leveraging a recently developed operational semantics, assertion language and Owicki-Gries style proof strategy for RC11 RAR~\cite{ECOOP20}.
We show that these existing concepts generalise naturally to the client-object setting, in a manner that enables modular proofs. The assertion language of~\cite{ECOOP20} enables reasoning about a thread's views, e.g., in \reffig{fig:mp-proof}, after initialisation, thread $t \in \{1,2\}$ has \textit{definite value} $0$ for $d$ (denoted $[d = 0]_t$). In this paper, we extend such assertions to capture thread views over library objects. E.g., after initialisation, the only value a pop by thread $t$ can return is $Empty$, and this is captured by the assertion $[s.pop_{emp}]_t$. The precondition of $d := 5$ states that thread 2 cannot pop value $1$ from $s$ (as captured by the assertion $\neg \langle s.pop_1\rangle_2$). The precondition of the $\textsf{\textbf{until}}$ loop in thread 2 contains a {\em conditional observation} assertion (i.e., $\langle s.pop_1 \rangle [d = 5]_2$), which states that if thread~2 pops value 1 from $s$ then it will subsequently be in a state where it will definitely read $5$ for $d$. A key benefit of the logic in~\cite{ECOOP20} is that it enables use of {\em standard} Owicki-Gries reasoning and straightforward mechanisation~\cite{DBLP:journals/corr/abs-2004-02983}. As we shall see (\refsec{sec:example-client-lbjec}), we maintain these benefits in the context of client-object programs.
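The view-update effect of the synchronising pop can be sketched concretely. In the following Python fragment (our own illustration; `merge` plays the role of the $\otimes$ operator from the memory semantics, views are dictionaries from variables to (write-label, timestamp) pairs, and the stack is abstracted to a single releasing push), an acquiring pop that reads from the releasing push pulls thread 2's view of $d$ forward:

```python
def merge(v1, v2):
    """V1 (x) V2: per variable, keep V1's entry unless V2's is later
    (compare timestamps, the second tuple component)."""
    return {x: v1[x] if v2[x][1] <= v1[x][1] else v2[x] for x in v1}

# Modification view of thread 1's releasing push: it saw d = 5 and the push.
mview_push = {'d': ('wr(d,5)', 1), 's': ('push(1)', 1)}
# Thread 2's view before popping: it still views the initial write to d.
tview_2 = {'d': ('wr(d,0)', 0), 's': ('s.init', 0)}

# A relaxed pop would leave tview_2['d'] unchanged, so the stale read of 0
# stays possible.  An acquiring pop that reads from the releasing push
# merges in the push's modification view:
tview_2_sync = merge(tview_2, mview_push)
```

After the merge, thread 2's viewfront for $d$ is the write of 5, so the stale read is no longer observable, mirroring the happens-before synchronisation in \reffig{fig:publication}.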
\begin{figure}[t] \centering \begin{minipage}[t]{0.9\columnwidth} \begin{center} {\bf Init: } $d:=0;$ $s.init();$ \qquad \qquad \qquad \qquad\\ {\color{green!40!black} $\{[d = 0]_1 \wedge [d = 0]_2 \wedge [s.pop_{emp}]_1 \wedge [s.pop_{emp}]_2\}$} \\ $\begin{array}{l@{\quad}||@{\quad }l} \text{\bf Thread } 1 & \text{\bf Thread } 2\\ \begin{array}[t]{@{}l@{}} {\color{blue} \{ \neg \langle s.pop_1 \rangle_2 \wedge [d = 0]_1\}} \\ 1: d := 5; \\ {\color{blue} \{ \neg \langle s.pop_1 \rangle_2 \wedge [d = 5]_1\}} \\ 2: s.push^{\sf R}(1); \\ {\color{blue} \{ true\}} \end{array} & \begin{array}[t]{@{}l@{}} {\color{blue} \{ \langle s.pop_1 \rangle [d= 5]_2 \}} \\ 3: \text{\bf do } r_1 := s.pop^{\sf A}()\ \\ \text{\bf until}\ r_1 = 1; \\ {\color{blue} \{ [d = 5]_2\}} \\ 4: r_2 \gets d; \\ {\color{blue} \{r_2 = 5\} } \end{array} \end{array} $ {\color{green!40!black} $\{ r_2=5\}$} \end{center} \vspace{-1em} \caption{A proof outline for message passing} \label{fig:mp-proof} \end{minipage} \end{figure} \smallskip\noindent{\bf Contextual refinement.} \label{sec:cont-refin-pre} Contextual refinement relates a client using an abstract object with a client that uses a concurrent implementation of the object. More precisely, we say that a concrete object $CO$ is a \emph{contextual refinement} of an abstract object $AO$ iff for any client $C$, every behaviour of $C$ when it uses $CO$ is a possible behaviour of $C$ when it uses $AO$. Thus, there is no observable difference to any client when it uses $CO$ in place of $AO$. In a weak memory setting, to enable a client to \emph{use} an object, one must \emph{specify} how synchronisation between object method calls affects the client state. To {\em implement} such a specification, we must describe how the abstract synchronisation guarantees are represented in the implementation. 
Prior works have appealed to extensions of notions such as linearizability to ensure contextual guarantees \cite{DongolJRA18,DBLP:journals/pacmpl/EmmiE19,DBLP:journals/pacmpl/RaadDRLV19}. In this paper, we instead consider contextual refinement directly. \subsection{Case studies} \input{seqlock} \input{ticketlock} \subsection{Sequence Lock} \label{sec:sequence-lock} The first refinement example is a sequence lock, which operates over a single shared variable ($glb$). \smallskip \begin{minipage}[b]{0.80\textwidth} $\textbf{Init:} \ \ glb = 0$ \\[6pt] \begin{minipage}[t]{\textwidth} \begin{minipage}[b]{0.80\textwidth} \textbf{Acquire()}: \begin{algorithmic}[1] \small \State \textbf{do} \quad \textbf{do} $r \leftarrow^{\sf A} glb$ \textbf{until} $even (r)$ ; \State \qquad\ \ $loc \gets \textsf{\textbf{CAS}}(glb, r, r+1)$ \State \textbf{until} $(loc)$ \end{algorithmic} \end{minipage} \end{minipage} \\[6pt] \begin{minipage}[t]{\textwidth} \begin{minipage}[b]{0.8\textwidth} \textbf{Release()}: \begin{algorithmic}[1] \small \State $glb :=^{\sf R} r + 2$ \end{algorithmic} \end{minipage} \end{minipage} \end{minipage}\smallskip \noindent The ${\bf Acquire}$ operation returns true if, and only if, the $\textsf{\textbf{CAS}}$ on line~2 is successful. Therefore, in order to prove the refinement, we need to prove that whenever the $\textsf{\textbf{CAS}}$ operation is successful, the abstract object can also successfully acquire the lock while maintaining the simulation relation. Also, the read on line 1 and an unsuccessful $\textsf{\textbf{CAS}}$ are stuttering steps; we need to show that, when these steps are taken, the abstract state remains unchanged and the new concrete state preserves the simulation relation. The ${\bf Release}$ operation contains only one releasing write on variable $glb$, which is considered to be a refining step. It is straightforward to show that this operation refines the abstract object's release operation.
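The control flow of the sequence lock can be sketched in Python (an SC sketch only: the acquire/release annotations and weak-memory views are not modelled, and a Python lock merely stands in for the atomicity of \textsf{CAS}):

```python
import threading

class SeqLock:
    """SC sketch of the sequence lock: even glb means free, odd means held.
    Weak-memory effects and the R/A annotations are not modelled."""
    def __init__(self):
        self.glb = 0
        self._atomic = threading.Lock()  # models CAS atomicity only

    def _cas(self, expect, new):
        with self._atomic:
            if self.glb == expect:
                self.glb = new
                return True
            return False

    def acquire(self):
        while True:
            r = self.glb                       # the (acquiring) read on line 1
            if r % 2 == 0 and self._cas(r, r + 1):
                return r                       # successful CAS: lock acquired

    def release(self, r):
        self.glb = r + 2                       # the (releasing) write
```

A successful `_cas` corresponds to the refining step discussed above; the read and a failed `_cas` are the stuttering steps.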
The following proposition has been verified in Isabelle/HOL. \begin{proposition} For synchronisation-free clients, there exists a forward simulation between the abstract lock object and the sequence lock. \end{proposition} \subsection{Ticket Lock} \label{sec:ticket-lock} Our second refinement example is the ticket lock:\medskip \begin{minipage}[b]{0.48\textwidth} $\textbf{Init:} \ \ nt = 0, \ \ sn = 0$\\[6pt] \begin{minipage}[t]{\columnwidth} \begin{minipage}[b]{0.80\textwidth} \textbf{Acquire()}: \begin{algorithmic}[1] \small \State $m\_t \leftarrow \textsf{\textbf{fetch\_and\_inc}}(nt)$ \State \textbf{do} $s\_n \leftarrow^{\sf A} sn$ \textbf{until} $m\_t = s\_n$ \end{algorithmic} \end{minipage} \end{minipage} \\[8pt] \begin{minipage}[t]{\columnwidth} \begin{minipage}[b]{0.80\textwidth} \textbf{Release()}: \begin{algorithmic}[1] \small \State $sn :=^{\sf R} s\_n + 1$ \end{algorithmic} \end{minipage} \end{minipage} \end{minipage}\medskip \noindent The ticket lock has two shared variables: $nt$ (next ticket) and $sn$ (serving now). Invocation of {\bf Acquire} loads the next available ticket into a local register ($m\_t$) and increases the value of $nt$ by one using a fetch-and-increment ($\textsf{\textbf{fetch\_and\_inc}}$) operation. It then enters a busy loop, reading $sn$ until it sees its own ticket value, at which point it may enter its critical section. If the read on line 2 of the {\bf Acquire} operation reads from a write whose value is equal to the value of $m\_t$, then the lock is acquired. Therefore, we need to show that if this situation arises, the abstract lock object can also take a step and successfully acquire the lock. We consider the $\textsf{\textbf{fetch\_and\_inc}}$ operation on line 1, as well as the read on line 2 when it returns a value different from $m\_t$, to be stuttering steps. We prove that each of the stuttering and non-stuttering steps preserves the simulation relation.
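Analogously, the ticket lock admits a minimal SC sketch in Python (our own illustration; `fetch_and_inc` is modelled with a Python lock, and the acquiring/releasing annotations are elided):

```python
import threading

class TicketLock:
    """SC sketch of the ticket lock; weak-memory views are not modelled."""
    def __init__(self):
        self.nt = 0          # next ticket
        self.sn = 0          # serving now
        self._atomic = threading.Lock()  # models fetch_and_inc atomicity only

    def _fetch_and_inc(self):
        with self._atomic:
            m_t = self.nt
            self.nt += 1
            return m_t

    def acquire(self):
        m_t = self._fetch_and_inc()   # line 1: take a ticket
        while self.sn != m_t:         # line 2: busy loop (stuttering reads)
            pass
        return m_t                    # sn == m_t: lock acquired

    def release(self, m_t):
        self.sn = m_t + 1             # the releasing write to sn
```

The read of `sn` that returns the thread's own ticket is the non-stuttering step that matches the abstract acquire.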
Similar to the previous example, the {\bf Release} operation consists of only one releasing write to variable $sn$ and it is straightforward to show that this operation refines the abstract release operation. This proof has been mechanised in Isabelle/HOL. \begin{proposition} For synchronisation-free clients, there exists a forward simulation between the abstract lock object and the ticket lock. \end{proposition} \section{Client-library verification} \label{sec:example-verification} Having formalised the semantics of clients and libraries in a weak memory setting, we now work towards verification of (client) programs that use such libraries. \subsection{Assertion language} \label{sec:assertion-language} In our proof, we use \emph{observability assertions}, which describe conditions for a thread to observe a specific value for a given variable. Unlike earlier works, our operational semantics covers clients and their libraries, and hence operates over pairs of states. \smallskip \noindent {\bf Possible observation}, denoted $\langle x = u\rangle_t$, means that thread $t$ \emph{may} observe value $u$ for $x$~\cite{ECOOP20}. We extend this concept to cope with abstract method calls as follows. In particular, for an object $o$ and method $m$, we use $\langle o.m \rangle_t$ to denote that thread $t$ can observe $o.m$. 
\begin{align*} \langle x = n \rangle_t (\sigma) & \ \ \equiv \ \ \exists w\in \sigma.\mathtt{Obs}(t,x).\ {\it wrval}(w) = n\\ \langle o.m \rangle_t (\sigma) & \ \ \equiv \ \ \exists q.\ (o.m, q) \in \sigma.\mathtt{ops} \wedge q \geq {\tt tst}(\sigma.{\tt tview}_t(o)) \end{align*} \noindent To distinguish possible observation in clients and libraries, we introduce the following notation, where $\gamma$ and $\beta$ are the client and library states, respectively, and $p$ is either a valuation (i.e., $x = n$) or an abstract method call (i.e., $o.m$): \begin{align*} \langle p \rangle_t^C (\gamma, \beta) & \ \ \equiv \ \ \langle p \rangle_t (\gamma) & \langle p \rangle_t^L (\gamma, \beta) & \ \ \equiv \ \ \langle p \rangle_t(\beta) \end{align*} \smallskip \noindent {\bf Definite observation}, denoted $[x = u]_t$, means that thread $t$ can only see the last write to $x$, and that this write has written value $u$. We define the \emph{last write} to $x$ in a set of writes $W$ as follows, where $W_{|x} = \{w' \in W \mid \mathtt{var}(w') = x\}$ is the set of writes to $x$ in $W$: \begin{align*} \mathit{last}(W, x) = w \ \ \equiv \ \ & \begin{array}[t]{@{}l@{}} w \in W_{|x} \wedge {} \\ (\forall w' \in W_{|x}.\ {\tt tst}(w') \leq {\tt tst}(w)) \end{array} \end{align*} We define the definite observation of a view function $view$ with respect to a set of writes $W$ as follows: \begin{align*} \begin{array}[t]{@{}r@{}l@{}} & {\it dview} (view, W, x) = n \\ \ \ \equiv\ \ & view(x) = \mathit{last}(W, x) \wedge {\it wrval}(\mathit{last}(W, x)) = n \end{array} \end{align*} The first conjunct ensures that the viewfront of $view$ for $x$ is the last write to $x$ in $W$, and the second conjunct ensures that the value written by the last write to $x$ in $W$ is $n$. For a variable $x$, thread $t$ and value $n$, we define: \begin{align*} [x = n]_t (\sigma) \ \ & \equiv \ \ {\it dview}(\sigma.{\tt tview}_t, \sigma.\mathtt{ops} \cap \mathsf{W}, x) = n \end{align*} The extension of definite observation assertions to abstract method calls is straightforward to define.
Namely, we have: \begin{align*} [o.m]_t (\sigma) \ \ & \equiv \ \ \begin{array}[t]{@{}l@{}} \sigma.{\tt tview}_t(o) = {\it maxTS}(o, \sigma) \wedge {} \\ (o.m, {\it maxTS}(o, \sigma)) \in \sigma.\mathtt{ops} \end{array} \end{align*} As with possible observations, we lift definite observation predicates to state spaces featuring clients and libraries: \begin{align*} [p]^C_t (\gamma, \beta) & \ \ \equiv \ \ [p]_t(\gamma) & [p]^L_t (\gamma, \beta) & \ \ \equiv \ \ [p]_t(\beta) \end{align*} \smallskip \noindent {\bf Conditional observation}, denoted $\CObs{x}{u}{t}{y}{v}$, means that if thread $t$ synchronises with a write to variable $x$ with value $u$, it \emph{must} subsequently observe value $v$ for $y$. For variables $x$ and $y$, thread $t$ and values $u$ and $v$, we define: \begin{align*} \begin{array}[t]{@{}r@{~}l@{}} & \CObs{x}{u}{t}{y}{v} (\sigma) \\ \ \ \equiv \ \ & \forall w \in \sigma.\mathtt{Obs}(t,x) .\ {\it wrval}(w) = u \Rightarrow \\ & \quad {\it act}(w) \in \mathsf{W_R} \wedge {\it dview}(\sigma.{\tt mview}_w, \sigma.\mathtt{ops}, y) = v \end{array} \end{align*} This is a key assertion used in message passing proofs \cite{ECOOP20,DBLP:conf/ecoop/KaiserDDLV17} since it guarantees an observation property on a variable, $y$, via a synchronising read of another variable, $x$. As with possible and definite assertions, conditional assertions can be generalised to objects and extended to pairs of states describing a client and its library. However, unlike possible and definite observation assertions, conditional observation enables one to describe view synchronisation across different states. For example, consider the following, which uses conditional observation of an abstract method to establish a definite observation assertion for the thread view of the client. We assume a set $Sync \subseteq {\sf Act}$ that identifies the synchronising abstract actions.
\begin{align*} \begin{array}[t]{@{}r@{~}l@{}} & \langle o.m \rangle^L[y = v]_t^C (\gamma, \beta)\\ \ \ \equiv & o.m \in Sync \wedge \forall q.\ (o.m, q) \in \beta.\mathtt{ops} \wedge q \geq {\tt tst}(\beta.{\tt tview}_t(o)) \Rightarrow \\ & {} \qquad\qquad {\it dview}(\beta.{\tt mview}_{(o.m, q)}, \gamma.\mathtt{ops}, y) = v \end{array} \end{align*} It is possible to define other variations, e.g., conditional observation synchronisation from clients to libraries, but we leave out the details of these since they are straightforward to construct. \smallskip \noindent {\bf Covered operations}, denoted $\cvd{o.m}{}$, state that the only uncovered operation on object $o$ is the most recent one, namely $o.m$. Recall from the {\sc Acquire} rule that a new acquire operation causes the immediately prior (release) operation $l.release_{n-1}$ to be covered so that no later acquire can be inserted between $l.release_{n-1}$ and the new acquire. To reason about this phenomenon over states, we use: \begin{align*} \cvd{o.m}{} (\sigma) &\ \ \equiv\ \ \begin{array}[t]{@{}l@{}} \forall (w, q) \in \sigma.\mathtt{ops}_{|o} \setminus \sigma.\mathtt{cvd}.\ \\ w = o.m \wedge q = {\it maxTS}(o, \sigma) \end{array} \end{align*} where $\sigma.\mathtt{ops}_{|o}$ is the set of operations over object $o$. \smallskip \noindent {\bf Hidden value}, denoted $\cvv{o.m}{}$, states that the operation $o.m$ exists, but every occurrence of it is covered and hence hidden from interaction. In proofs, such assertions limit the values that can be returned. \begin{align*} \cvv{o.m}{}(\sigma) &\ \ \equiv\ \ \begin{array}[t]{@{}l@{}} (\exists q.\ (o.m, q) \in \sigma.\mathtt{ops}) \wedge {}\\ (\forall q.\ (o.m, q) \in \sigma.\mathtt{ops} \Rightarrow (o.m, q) \in \sigma.\mathtt{cvd}) \end{array} \end{align*} Both covered and hidden-value assertions can be lifted to pairs of states and can be used to reason about standard writes, as opposed to method calls (details omitted).
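The possible- and definite-observation predicates can be prototyped directly over a toy state (a Python sketch under our own simplified dict encoding, not the Isabelle definitions; `ops` holds (action, timestamp) pairs with actions of the form `('wr', var, value)`, and `tview` maps thread and variable to the viewfront write):

```python
def wrval(action):
    """Value written by an action ('wr', var, value)."""
    return action[2]

def possible_obs(state, t, x, n):
    """<x = n>_t: some write observable to t writes n to x."""
    return any(wrval(a) == n
               for (a, q) in state['ops']
               if a[1] == x and state['tview'][t][x][1] <= q)

def definite_obs(state, t, x, n):
    """[x = n]_t: t's viewfront for x is the last write to x, with value n."""
    writes_x = [(a, q) for (a, q) in state['ops'] if a[1] == x]
    last = max(writes_x, key=lambda w: w[1])  # latest timestamp
    return state['tview'][t][x] == last and wrval(last[0]) == n

# Initial write of 0 to d and a later write of 5; thread 1 views the
# latest write, thread 2 still views the initial one.
w0 = (('wr', 'd', 0), 0)
w1 = (('wr', 'd', 5), 1)
state = {'ops': {w0, w1}, 'tview': {1: {'d': w1}, 2: {'d': w0}}}
```

On this state, thread 2 may observe either value while thread 1 definitely observes 5, matching the intended readings of $\langle \cdot \rangle_t$ and $[\cdot]_t$.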
\subsection{Hoare Logic for C11 and Abstract Objects} \label{sec:hoare-logic-c11} Since we have an operational semantics, the assertions in \refsec{sec:assertion-language} can be integrated into a standard Hoare-style proof calculus in a straightforward manner~\cite{ECOOP20,DBLP:journals/corr/abs-2004-02983}. The only differences are the state model (which is a weak memory state, as opposed to a mapping from variables to values) and the atomic components (which may include reads of global variables, and, in this paper, abstract method calls). Following \cite{ECOOP20,DBLP:journals/corr/abs-2004-02983}, we let $\Sigma_C$ and $\Sigma_{L}$ be the sets of all possible global state configurations of the client and library, respectively, and let $\Sigma_{C11} = (\mathit{LVar} \to \mathit{Val}) \times \Sigma_C \times \Sigma_{L}$ be the set of all possible client-library C11 states. Predicates over $\Sigma_{C11}$ are therefore of type $\Sigma_{C11} \to \mathbb{B}$. This leads to the following definition of a Hoare triple, which we note is the same as the standard definition --- the only difference is that the state component is of type $\Sigma_{C11}$. \begin{definition} \label{def:soundn-class-rules-1}Suppose $p, q \in \Sigma_{C11} \to \mathbb{B}$, $P \in {\it Prog}$ and $\textsf{\textbf{E}} = \lambda t : \mathit{Tid}.\ \textsf{\textbf{skip}}$.
The semantics of a Hoare triple under partial correctness is given by: $\begin{array}{r@{~}l} \{p\} \mathbf{Init} \{q\} & = q(\Gamma_\mathbf{Init}) \\[2pt] \{p\} \mathbf{Init} ; P \{q\} & = \exists r.\,\{p\} \mathbf{Init} \{r\} \wedge \{r\} P \{q\} \\[2pt] \multicolumn{2}{l}{ \{p\}P \{q\} = \begin{array}[t]{@{}l@{}} \forall \mathit{lst},\gamma, \beta ,\mathit{lst}',\gamma', \beta'.\ p(\mathit{lst}, \gamma, \beta) \wedge \\ \ (P, \mathit{lst}, \gamma, \beta) \ltsArrow{}^* (\textsf{\textbf{E}}, \mathit{lst}', \gamma', \beta') \Rightarrow q(\mathit{lst}',\gamma', \beta') \end{array}} \\ \end{array}$ \end{definition} This definition (in the context of RC11 \cite{DBLP:conf/pldi/LahavVKHD17}) allows all standard Hoare logic rules for compound statements to be reused \cite{ECOOP20}. Due to concurrency, following Owicki and Gries, one must prove \emph{local correctness} and \emph{interference freedom} (or stability)~\cite{DBLP:journals/acta/OwickiG76,ECOOP20,DBLP:journals/corr/abs-2004-02983,DBLP:conf/icalp/LahavV15}. This is also defined in the standard manner. Namely, a statement $R \in {\it ACom}$ with precondition $pre(R)$ (in the standard proof outline) does {\em not interfere} with an assertion $p$ iff $\{ p \wedge pre(R)\} \ R \ \{ p\}.$ Proof outlines of concurrent programs are {\em interference free} if no statement in one thread interferes with an assertion in another thread. The only additional properties that one must define are on the interaction between atomic commands and predicates over assertions defined in \refsec{sec:assertion-language}. A collection of rules for reads, writes and updates have been given in prior work~\cite{DBLP:journals/corr/abs-2004-02983,ECOOP20}. Here, we present rules for method calls of the abstract lock object defined in \refex{ex:abs-lock}. In proofs, it is often necessary to reason about particular versions of the lock (i.e., the lock counter). 
Therefore, we use ${\tt l.Acquire(\mbox{$v$})}$ and ${\tt l.Release(\mbox{$v$})}$ to denote the transitions that set the lock version to $v$. Also note that in our example proof, it is clear from context whether an assertion refers to the client or library state, and hence we drop the superscripts $C$ and $L$ as used in \refsec{sec:assertion-language}. The lemma below has been verified in Isabelle/HOL. \begin{lemma} Each of the following holds, where the statements are decorated with the identifier of the executing thread, assuming ${\tt m} \in \{{\tt Acquire}, {\tt Release}\}$ and $t \neq t'$. \begin{small} \begin{align} & \{\cvv{l.{\it release}_u}{}\}\ {\tt l.Acquire(\mbox{$v$})_t} \ \{ v > u + 1 \} \\ & \{\cvv{l.{\it release}_u}{}\}\ {\tt l.m(\mbox{$v$})_t}\ \{\cvv{l.{\it release}_u}{}\} \\ & \{[l.{\it release}_u]_t\} \ {\tt l.Acquire(\mbox{$v$})_t}\ \{[l.{\it acquire}_{u+1}]_t\}\\ & \{[x = u]_t\}\ {\tt l.m(\mbox{$v$})_{t'}}\ \{[x = u]_t\} \\ & \{ \langle l.{\it release}_u \rangle [x = n]_t \}\ {\tt l.Acquire(\mbox{$v$})_t}\ \{v = u + 1 \Rightarrow [x = n]_t\} \\ & \left\{ \begin{array}[c]{@{}l@{}} \neg \langle l.{\it release}_u \rangle_{t'} {}\wedge{} {[}x = v]_t \end{array} \right\}\ {\tt l.Release(\mbox{$u$})_{t}}\ \{\langle l.{\it release}_u \rangle [x = v]_{t'}\} \label{co-intro} \end{align} \end{small} \end{lemma} \subsection{Example Client-Library Verification} \label{sec:example-client-lbjec} To demonstrate the use of our logic in verification, consider the simple program in \reffig{fig:lockmp-proof}, which comprises a lock object $l$ and shared client variables $d_1$ and $d_2$ (both initially~$0$). Thread~1 writes 5 to both $d_1$ and $d_2$ after acquiring the lock, while thread~2 reads $d_1$ and $d_2$ (into local registers $r_1$ and $r_2$) also after acquiring the lock. Under SC, it is a standard exercise to show that the program terminates with $r_1 = r_2$, where each $r_i$ is either $0$ or $5$.
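The SC claim above can be checked exhaustively. The following Python sketch is illustrative only: the encoding of the two threads and of the blocking acquire is ours, not the operational semantics of this paper. It enumerates every interleaving of the two threads under sequential consistency and collects the final register values.

```python
# Toy SC model checker for the lock-based message-passing program:
# Thread 1: acquire; d1 := 5; d2 := 5; release
# Thread 2: acquire; r1 <- d1; r2 <- d2; release
T1 = [("acquire",), ("write", "d1", 5), ("write", "d2", 5), ("release",)]
T2 = [("acquire",), ("read", "d1", "r1"), ("read", "d2", "r2"), ("release",)]

def step(state, instr):
    """Execute one atomic instruction; None means the thread is blocked."""
    s = dict(state)
    if instr[0] == "acquire":
        if s["lock"]:              # lock held: this thread cannot move
            return None
        s["lock"] = True
    elif instr[0] == "release":
        s["lock"] = False
    elif instr[0] == "write":
        s[instr[1]] = instr[2]
    elif instr[0] == "read":
        s[instr[2]] = s[instr[1]]
    return s

def explore(pc1, pc2, state, outcomes):
    """Depth-first enumeration of all SC interleavings."""
    if pc1 == len(T1) and pc2 == len(T2):
        outcomes.add((state["r1"], state["r2"]))
        return
    for pc, prog, bump in ((pc1, T1, (1, 0)), (pc2, T2, (0, 1))):
        if pc < len(prog):
            s = step(state, prog[pc])
            if s is not None:
                explore(pc1 + bump[0], pc2 + bump[1], s, outcomes)

outcomes = set()
explore(0, 0, {"d1": 0, "d2": 0, "r1": 0, "r2": 0, "lock": False}, outcomes)
print(sorted(outcomes))            # -> [(0, 0), (5, 5)]
```

Under the weak memory semantics, this exhaustive argument is no longer available, which is why the proof outline in \reffig{fig:lockmp-proof} is needed.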
We show that the lock specification in \refsec{sec:abstr-object-semant} together with the assertion language from \refsec{sec:assertion-language} and the Owicki-Gries logic from \refsec{sec:hoare-logic-c11} is sufficient to prove this property. In particular, the specification guarantees {\em adequate synchronisation} so that if Thread~2's lock acquire sees the lock release in Thread~1, it also sees the writes to $d_1$ and $d_2$. The proof relies on two distinct types of properties: \begin{itemize}[leftmargin=*] \item \emph{Mutual exclusion}: As in SC, no two threads should execute their critical sections at the same time. \item \emph{Write visibility}: If thread 1 enters its critical section first, its writes to both $d_1$ and $d_2$ must be visible to thread 2 after thread 2 acquires the lock. Note that this property is not necessarily guaranteed in a weak memory setting since all accesses to $d_1$ and $d_2$ in \reffig{fig:lockmp-proof} are relaxed. \end{itemize} \begin{figure*}[t] \centering \fbox{ \begin{minipage}[b]{0.7\textwidth} \begin{center} \small {\bf Init: } $d_1:=0;$ $d_2:=0;$ ${\tt l.init()};$ \qquad \qquad \qquad \qquad\\ {\color{green!40!black} $\{Inv \wedge [d_1 = 0]_1 \wedge [d_2 = 0]_1 \wedge [d_1 = 0]_2 \wedge [d_2 = 0]_2\}$} \\ $\begin{array}{l@{\quad}||@{\quad }l} \text{\bf Thread } 1 & \text{\bf Thread } 2\\ \begin{array}[t]{@{}l@{}} 1: \{Inv \wedge \mathbf{P_1}\}\ {\bf if}\ {\tt l.Acquire()} \\ 2: \{Inv \wedge \mathbf{P_2}\}\ \quad d_1 := 5 ; \\ 3: \{Inv \wedge \mathbf{P_3}\}\ \quad d_2 := 5 ; \\ 4: \{Inv \wedge \mathbf{P_4}\}\ \quad {\tt l.Release()} \end{array} & \begin{array}[t]{@{}l@{}} 1: \{Inv \wedge \mathbf{Q_1}\}\ {\bf if}\ {\tt l.Acquire(\mbox{$rl$})} \\ 2: \{Inv \wedge \mathbf{Q_2}\}\ \quad r_1 \gets d_1 ; \\ 3: \{Inv \wedge \mathbf{Q_3}\}\ \quad r_2 \gets d_2 ; \\ 4: \{Inv \wedge \mathbf{Q_4}\}\ \quad{\tt l.Release()} \end{array} \end{array}$ {\color{green!40!black} $5: \{(r_1=0 \wedge r_2=0) \vee (r_1=5 \wedge r_2=5)\}$} \qquad
\qquad \qquad \end{center} \end{minipage}} \medskip \raggedright \small where assuming $P_{po} = (pc_2 = 1 \Rightarrow \neg \langle l.{\it release}_2 \rangle_2)\wedge \cvv{l.{\it init}_0}{}$, we have \centering $ \begin{array}[t]{@{}l@{\qquad}l@{}} \begin{array}[t]{@{}r@{}l@{}} P_1 = & [d_1 = 0]_1 \wedge [d_2 = 0]_1 \wedge \\ & (pc_2 = 1 \Rightarrow [l.init_0]_{1} \wedge [l.init_0]_{2})\\ & \wedge (pc_2 \in \{2,3,4\} \Rightarrow \cvd{l.{\it acquire}_1}{}) \\ P_2 = & [d_1 = 0]_1 \wedge [d_2 = 0]_1 \wedge P_{po} \\ P_3 = & [d_1 = 5]_1 \wedge [d_2 = 0]_1 \wedge P_{po} \\ P_4 = & [d_1 = 5]_1 \wedge [d_2 = 5]_1 \wedge P_{po} \\ \end{array} & \begin{array}[t]{@{}r@{}l@{}} & Q_1' = pc_1 = 5 \wedge \langle l.{\it release}_2 \rangle [d_1 = 5]_2 {} \wedge \langle l.{\it release}_2\rangle [d_2 = 5]_2 \\ & Q_1 = \begin{array}[t]{@{}l@{}} \left( \begin{array}[c]{@{}l@{}l@{}} pc_1 \notin \{2,3,4\} \Rightarrow & ([l.init_0]_2 \wedge [d_1 = 0]_2 \wedge [d_2 = 0]_2) \lor Q_1' {} \\ \end{array}\right) \\ {} \wedge (pc_1 = 1 \Rightarrow [l.init_0]_1) \wedge (pc_1 = 5 \Rightarrow \cvv{l.{\it init}_0}{}) \end{array} \\ & Q_2 = (rl = 1 \Rightarrow [d_1 = 0]_2 \wedge [d_2 = 0]_2) \\ & \qquad\ \ {} \wedge (rl = 3 \Rightarrow [d_1 = 5]_2 \wedge [d_2 = 5]_2) \\ & Q_3 = (rl = 1 \Rightarrow r_1 = 0 \wedge [d_2 = 0]_2) \\ & \qquad\ \ {} \wedge (rl = 3 \Rightarrow r_1 = 5 \wedge [d_2 = 5]_2) \\ & Q_4 = (rl = 1 \Rightarrow r_1 = 0 \wedge r_2 = 0) \wedge (rl = 3 \Rightarrow r_1 = 5 \wedge r_2 = 5) \end{array} \end{array} $ \caption{Proof outline for lock-synchronisation} \label{fig:lockmp-proof} \end{figure*} Our proof is supported by the following global invariant: \begin{align*} Inv \ \ \equiv{}\ \ & \neg (pc_1 \in \{2,3,4\} \wedge pc_2 \in \{2,3,4\}) \wedge (rl \in \{1, 3\}) \end{align*} The first conjunct establishes mutual exclusion, while the second ensures that the lock version written by the acquire in thread 2 is either 1 or 3, depending on which thread enters its critical section first. 
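The two Owicki-Gries obligations behind such an outline (local correctness of each triple and interference freedom) can also be phrased as executable checks over a finite toy state space. The Python sketch below is illustrative only and is unrelated to the Isabelle/HOL mechanisation; the state space and commands are our own toy assumptions.

```python
# Toy state space: x ranges over 0..3; atomic commands are state -> state.
STATES = [{"x": v} for v in range(4)]

def valid(p, cmd, q):
    """Hoare triple {p} cmd {q} over the finite state space (partial correctness)."""
    return all(q(cmd(s)) for s in STATES if p(s))

def non_interfering(p, pre, cmd):
    """cmd with precondition pre does not interfere with assertion p
    iff {p and pre} cmd {p} holds (the Owicki-Gries stability check)."""
    return valid(lambda s: p(s) and pre(s), cmd, p)

incr = lambda s: {"x": min(s["x"] + 1, 3)}
# local correctness: {x = 0} incr {x = 1}
assert valid(lambda s: s["x"] == 0, incr, lambda s: s["x"] == 1)
# the assertion x >= 1 in one thread is stable under incr in the other
assert non_interfering(lambda s: s["x"] >= 1, lambda s: True, incr)
# but the assertion x = 1 is not stable under incr
assert not non_interfering(lambda s: s["x"] == 1, lambda s: True, incr)
```

In the proof of \reffig{fig:lockmp-proof}, the same two checks are discharged symbolically using the rules of the lemma above rather than by state enumeration.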
The main purpose of the definite and possible observation assertions is to establish $Q_1'$ (which appears in $Q_1$) using rule \refeq{co-intro}. This predicate helps establish $[d_1 = 5]_2$ and $[d_2 = 5]_2$ in thread 2 whenever thread 2 acquires the lock after thread~1. The most critical of these assertions is $Q_1$, which states that if thread 1 is not executing its critical section, then we either have \begin{itemize}[leftmargin=*] \item $[l.init_0]_2 \wedge [d_1 = 0]_2 \wedge [d_2 = 0]_2$, i.e., thread 2 can definitely see the lock initialisation and definitely observes both $d_1$ and $d_2$ to have value $0$, or \item $Q_1'$ holds, i.e., thread 1 has released the lock and has established a state whereby if thread 2 acquires the lock, it will be able to establish the definite value assertions $[d_1 = 5]_2$ and $[d_2 = 5]_2$. \end{itemize} Note that $Q_1$ also includes a conjunct $pc_1 = 5 \Rightarrow \cvv{l.{\it init}_0}{}$, which ensures that if thread 2 enters its critical section after thread 1 has terminated, then it does so because it sees $l.{\it release}_2$ (as opposed to $l.{\it init}_0$). This means that we can establish $rl = 1 \Rightarrow [d_1 = 0]_2 \wedge [d_2 = 0]_2$ (i.e., thread 2 has acquired the lock first) and $rl = 3 \Rightarrow [d_1 = 5]_2 \wedge [d_2 = 5]_2$ (i.e., thread 2 has acquired the lock second) in $Q_2$. Using these definite value assertions, we can easily establish the particular values that are loaded into registers $r_1$ and $r_2$. The following lemma has been verified in Isabelle/HOL. \begin{lemma} \label{thm:lockmp-proof} The proof outline in \reffig{fig:lockmp-proof} is valid.
\end{lemma}

\section{Exploiting View-based relaxation}
\label{sec:view-based-relax} The modelling approach in \refsec{sec:abstr-object-semant} provides a flexible framework for specifying abstract objects and enables specification of synchronised and unsynchronised method calls (as detailed in Figs.~\ref{fig:publication} and \ref{fig:rfs}). We show that the framework also provides a mechanism for specifying concurrent objects whose linearisation order is different from real-time order (cf.~\cite{DBLP:journals/pacmpl/RaadDRLV19,DongolJRA18,DBLP:conf/esop/KrishnaEEJ20,DBLP:journals/pacmpl/EmmiE19}). We argue that such specifications are a natural fit with the weak memory operational semantics, where new writes may be introduced in the ``middle'' of modification order~\cite{DBLP:conf/ppopp/DohertyDWD19,Dolan:2018:LDRF,DBLP:conf/popl/KangHLVD17}. Of course, such relaxations do not apply to every shared object, e.g., locks (\refex{ex:abs-lock}). As with writes, timestamps may be used to define a linear order of the method calls of the object in question. However, \emph{different} threads may hold \emph{different} views of the object being implemented, even at the level of the abstraction. For example, consider the state of a shared queue depicted below \begin{center} \begin{tikzpicture}[scale=0.65] \draw[thick,->] (-1,0) -- (7,0); \coordinate (init) at (-1,0); \coordinate (enq1) at (1,0); \coordinate (enq2) at (3.5,0); \coordinate (deq2) at (6,0); \draw (init) circle (4pt) node[below] {$\mathit{qinit}()$}; \draw (enq1) circle (4pt) node[below] {$\mathit{enq}(1)$} node[above] {$t_1$}; \draw [black] (enq2) circle (4pt) node[below] {$\mathit{enq}(2)$}; \draw [black] (deq2) circle (4pt) node[below] {$\mathit{deq}(v)$} node[above] {$t_2$}; \end{tikzpicture} \end{center} where the viewfronts of threads $t_1$ and $t_2$ are depicted above the timeline and the abstract operations executed are depicted below it.
The execution may have been generated by executing the operations in the following temporal order: \begin{enumerate} \item Thread $t_2$ executes $\mathit{enq}(2)$, and hence $t_2$'s view of the data structure should be that of $\mathit{enq}(2)$. \item Thread $t_1$ executes $\mathit{enq}(1)$, but since it has not yet ``seen'' $\mathit{enq}(2)$, it may choose a timestamp for $\mathit{enq}(1)$ that is smaller than the timestamp of $\mathit{enq}(2)$. \item Thread $t_2$ executes a dequeue, $\mathit{deq}(v)$, for some yet-to-be-determined value $v$. Since $t_2$ also executed $\mathit{enq}(2)$, the timestamp of the $\mathit{deq}(v)$ must be larger than that of $\mathit{enq}(2)$. Since the timeline represents a linearisation of the queue, the only sensible value for $v$ is $1$. \end{enumerate} One option for the execution to continue in a sensible manner is to ensure that the effect of the dequeue is communicated to all other threads so that their views of the object are updated. Thus, if $t_1$ were to perform a new method call, this method call would appear after $\mathit{deq}(v)$ (with $v = 1$). However, imposing such a communication requirement in the abstract specification induces additional (and potentially unnecessary) synchronisation in an implementation. For instance, a sensible continuation of the timeline above would be for thread $t_1$ to introduce additional enqueue operations between $\mathit{enq}(1)$ and $\mathit{deq}(v)$ (with $v = 1$). \begin{example}[Queue data structure] For simplicity, we assume a new state component, ${\tt matched}$, that stores the pairs of timestamps corresponding to enqueues that have been dequeued. For any $(ts, ts') \in {\tt matched}$, the timestamp $ts$ corresponds to an enqueue that is removed from the queue by the dequeue with timestamp $ts'$. To ensure first-in-first-out behaviours, for any pair $(ts, ts') \in {\tt matched}$, it is illegal for a dequeue to be introduced between $ts$ and $ts'$.
Note that ${\tt matched}$ can be calculated from any queue state, so its introduction only plays an auxiliary role. Below, we present a specification of \emph{synchronising} enqueue and dequeue operations. It is straightforward to extend these to also specify unsynchronised accesses (see \reffig{fig:rfs}). \[ \inference[{\sc Enq}] { a \in q.enq_u \qquad {\tt tst}(\gamma.{\tt tview}_t(q)) < {\it ts}' \qquad \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, {\it ts}')\} \\ (\forall (\mathit{ww}, \mathit{tt}) \in \gamma.\mathtt{ops}.\ {\it ww} \in Enq \wedge {\it tt} > {\it ts}' \Rightarrow \mathit{tt} \notin \operatorname{\mathbf{dom}}(\gamma.{\tt matched})) \\ (\forall ({\it ww}, {\it tt}) \in \gamma.\mathtt{ops}.\ {\it tt} > {\it ts}' \Rightarrow {\it ww} \neq q.deq_{\it empty}) \\ {\tt mview}' = {\tt tview}' \cup \beta.{\tt tview}_t \qquad {\tt tview}' = \gamma.{\tt tview}_t[q := (a, {\it ts}')] } {\gamma, \beta\ \strans{a}_{t}\ \gamma[\mathtt{ops} \ensuremath{:=} \mathtt{ops}', {\tt tview}_t \ensuremath{:=} {\tt tview}', {\tt mview}_{(a,{\it ts}')} \ensuremath{:=} {\tt mview}'], \beta }\\ \]\smallskip \[ \inference[{\sc Deq-NE}] { a \in q.deq_u \qquad u \neq {\it empty} \qquad {\tt tst}(\gamma.{\tt tview}_t(q)) < {\it ts}' \qquad \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, {\it ts}')\}\\ w \in q.enq_u \qquad (w, {\it ts}) \in \gamma.\mathtt{ops} \qquad {\it ts} \notin \operatorname{\mathbf{dom}}(\gamma.{\tt matched}) \\ (\forall ({\it ww}, {\it tt}) \in \gamma.\mathtt{ops}.\ {\it ww} \in Enq \wedge {\it tt} < {\it ts} \Rightarrow {\it tt} \in \operatorname{\mathbf{dom}}(\gamma.{\tt matched})) \\ (\forall {\it tt}' \in \operatorname{\mathbf{ran}}(\gamma.{\tt matched}).\ ts' \geq {\it tt}') \qquad {\tt tview}' = \gamma.{\tt tview}_t[q := (a, {\it ts}')] \otimes \gamma.{\tt mview}_{(w,{\it ts})} \\ {\tt ctview}' = \beta.{\tt tview}_t \otimes \gamma.{\tt mview}_{(w,{\it ts})} } {\gamma, \beta\ \strans{a}_{t}\ \gamma[\mathtt{ops} \ensuremath{:=} \mathtt{ops}', {\tt tview}_t \ensuremath{:=} {\tt tview}', {\tt mview}_{(a,{\it
ts}')} \ensuremath{:=} {\tt mview}'], \beta[{\tt tview}_t := {\tt ctview}']}\\ \]\smallskip \[ \inference[{\sc Deq-Emp}] { a \in q.deq_{\it empty} \qquad {\tt tst}(\gamma.{\tt tview}_t(q)) < {\it ts}' \qquad \mathtt{ops}' = \gamma.\mathtt{ops} \cup \{(a, {\it ts}')\}\\ (\forall ({\it ww}, {\it tt}) \in \gamma.\mathtt{ops}.\ {\it tt} < {\it ts}' \Rightarrow {\it tt} \in \operatorname{\mathbf{dom}}(\gamma.{\tt matched}) \cup \operatorname{\mathbf{ran}}(\gamma.{\tt matched}) \lor {\it ww} = q.deq_{\it empty}) \\ {\tt tview}' = \gamma.{\tt tview}_t[q := (a, {\it ts}')] } {\gamma, \beta\ \strans{a}_{t}\ \gamma[\mathtt{ops} \ensuremath{:=} \mathtt{ops}', {\tt tview}_t \ensuremath{:=} {\tt tview}'], \beta } \] \smallskip Thus, enqueues and dequeues must not interfere with other dequeue operations that have already taken effect, including dequeue operations that return empty. The behaviour of enqueue operations is analogous to releasing writes, whereas the behaviour of non-empty dequeue operations is analogous to read-modify-write operations. \end{example} Given that a dequeue that returns empty only observes the queue and does not modify it, one could argue that such operations are analogous to reads, and hence, should not appear in the timeline at all. Using such a specification may allow non-atomic observations, leading to phenomena observed by Emmi and Enea \cite{DBLP:journals/pacmpl/EmmiE19}. We consider further exploration of such ideas to be future work.
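A minimal executable sketch of the abstract queue state may help clarify the example: operations are kept in timestamp order, ${\tt matched}$ records enqueue/dequeue pairs, and views are reduced to a single per-thread timestamp. The class and method names below are our own illustration and elide most of the premises of the rules (e.g., only some FIFO side conditions are checked).

```python
import bisect

class TimestampQueue:
    """Illustrative abstract queue state: a timestamp-ordered list of
    operations plus the matched pairs; views are reduced to one
    per-thread timestamp (the real specification tracks full views)."""
    def __init__(self):
        self.ops = []        # sorted list of (timestamp, kind, value)
        self.matched = {}    # enqueue timestamp -> dequeue timestamp
        self.tview = {}      # per-thread view of the queue

    def enq(self, t, ts, v):
        # premise: the chosen timestamp must exceed the thread's view
        assert ts > self.tview.get(t, -1)
        bisect.insort(self.ops, (ts, "enq", v))
        self.tview[t] = ts

    def deq(self, t, ts):
        assert ts > self.tview.get(t, -1)
        # a dequeue must come after all dequeues that took effect
        assert all(ts >= d for d in self.matched.values())
        for ets, kind, v in self.ops:   # oldest unmatched enqueue wins (FIFO)
            if kind == "enq" and ets not in self.matched:
                self.matched[ets] = ts
                bisect.insort(self.ops, (ts, "deq", v))
                self.tview[t] = ts
                return v
        return None                     # empty dequeue

# the timeline from the discussion above:
q = TimestampQueue()
q.enq("t2", 20, 2)            # t2 enqueues 2
q.enq("t1", 10, 1)            # t1 has not seen enq(2): a smaller timestamp is legal
assert q.deq("t2", 30) == 1   # t2's dequeue must return the FIFO value 1
```

The final assertion mirrors step 3 of the temporal order discussed earlier: even though $t_2$ never observed $\mathit{enq}(1)$ directly, linearising by timestamp forces its dequeue to return $1$.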
\section{Introduction and main result} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with $C^{2}$ boundary, and let $D_{1}^{*}$ and $D_{2}^{*}$ be two open sets whose closures are contained in $\Omega$, touching only at the origin with the inner normal vector of $\partial{D}_{1}^{*}$ pointing in the positive $x_{n}$-direction. Translating $D_{1}^{*}$ and $D_{2}^{*}$ by $\frac{\varepsilon}{2}$ in opposite directions along the $x_{n}$-axis, we obtain $$D_{1}^{\varepsilon}:=D_{1}^{*}+(0',\frac{\varepsilon}{2}),\quad\mbox{and}\quad\,D_{2}^{\varepsilon}:=D_{2}^{*}-(0',\frac{\varepsilon}{2}).$$ When there is no confusion, we drop the superscripts $\varepsilon$ and denote $D_{1}:=D_{1}^{\varepsilon}$ and $D_{2}:=D_{2}^{\varepsilon}$. Denoting $\widetilde{\Omega} := \Omega \setminus \overline{(D_1 \cup D_2)}$, we consider the following elliptic equation with Dirichlet boundary data: \begin{equation}\label{equk} \begin{cases} \mathrm{div}\Big(a_{k}(x)\nabla{u}_{k}\Big)=0&\mbox{in}~\Omega,\\ u_{k}=\varphi(x)&\mbox{on}~\partial\Omega, \end{cases} \end{equation} where $\varphi\in{C}^{2}(\partial\Omega)$ is given, and $$a_{k}(x)= \begin{cases} k\in(0,\infty)&\mbox{in}~D_{1}\cup{D}_{2},\\ 1&\mbox{in}~\widetilde{\Omega}. \end{cases} $$ The equation above can be considered as a simple model for electric conduction, where $a_k$ refers to conductivities, which can be assumed to be 1 in the matrix after normalization, and the solution $u_k$ gives the voltage potential. From an engineering point of view, it is very important to estimate $\nabla u_k$, which represents the electric field, in the narrow region between the inclusions. This problem is analogous to a linear system of elasticity studied by Babu\v{s}ka, Andersson, Smith and Levin \cite{BASL}, where they observed numerically that, when the ellipticity constants are bounded away from $0$ and infinity, gradients of solutions remain bounded independently of $\varepsilon$, the distance between the inclusions.
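The dichotomy between bounded gradients for fixed $k$ and blow-up in the degenerate limits can already be seen in a one-dimensional caricature (a series-conduction sketch of ours, not the problem above): with unit voltage drop across $[0,1]$, conductivity $k$ on the inclusions of total width $1-\varepsilon$ and $1$ on the gap of width $\varepsilon$, the field in the gap is $1/(\varepsilon + (1-\varepsilon)/k)$.

```python
def gap_gradient(eps, k):
    """1-D series-conduction toy model: unit voltage drop across [0,1],
    conductivity k on the inclusions (total width 1 - eps) and 1 in the
    gap of width eps.  The flux is 1 / integral(1/a), and the field in
    the gap equals the flux since a = 1 there."""
    return 1.0 / (eps + (1.0 - eps) / k)

# fixed finite k: the field stays bounded (it tends to k) as eps -> 0
assert abs(gap_gradient(1e-9, 10.0) - 10.0) < 1e-3
# perfect-conductor limit k -> infinity: the field blows up like 1/eps
assert gap_gradient(1e-3, 1e9) > 900.0
```

Of course, the genuinely multi-dimensional geometry of the narrow region is exactly what makes the problems below subtle; the toy model only motivates why the two limits behave so differently.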
Bonnetier and Vogelius \cite{BV} proved that for a fixed $k$, $|\nabla u_k|$ remains bounded as $\varepsilon$ goes to 0, for circular inclusions $D_1$ and $D_2$ in dimension $n = 2$. This result was extended by Li and Vogelius \cite{LV} to general second order elliptic equations of divergence form with piecewise H\"older coefficients and to inclusions $D_1$ and $D_2$ of general shape in any dimension. Furthermore, they established a stronger $C^{1,\alpha}$ control of $u_k$, which is independent of $\varepsilon$, in each region. Li and Nirenberg \cite{LN} further extended this $C^{1,\alpha}$ result to general second order elliptic systems of divergence form. When $k$ equals $\infty$ (perfect conductor) or $0$ (insulator), it was shown in \cite{Kel, BudCar, Mar} that the gradient of solutions generally becomes unbounded as $\varepsilon \to 0$. When $k$ goes to $\infty$, $u_k$ converges to the solution of the following perfect conductivity problem: \begin{equation}\label{equinfty} \begin{cases} \Delta{u}=0&\mbox{in}~\widetilde{\Omega},\\ u=C_i \mbox{ (Constants)}&\mbox{on}~\partial{D}_{i},~i=1,2,\\ \int_{\partial{D}_{i}}\frac{\partial{u}}{\partial\nu}=0&i=1,2,\\ u=\varphi(x)&\mbox{on}~\partial\Omega. \end{cases} \end{equation} When $k$ goes to $0$, $u_k$ converges to the solution of the following insulated conductivity problem: \begin{equation}\label{equzero} \left\{ \begin{aligned} -\Delta u &=0 \quad \mbox{in }\widetilde{\Omega},\\ \frac{\partial u}{\partial \nu} &= 0 \quad \mbox{on}~\partial{D}_{i},~i=1,2,\\ u &= \varphi \quad \mbox{on } \partial \Omega. \end{aligned} \right. \end{equation} See, e.g., the Appendix of \cite{BLY1} and \cite{BLY2} for derivations of the above equations. Here $\nu$ denotes the inward unit normal vector on $\partial D_i$, $i = 1,2$. Ammari et al. proved in \cite{AKLLL} and \cite{AKL}, among other things, the following. Let $D_1^*$ and $D_2^*$ be unit balls in $\mathbb{R}^2$, and let $H$ be a harmonic function in $\mathbb{R}^2$.
They considered the perfect and insulated conductivity problems in $\mathbb{R}^2$: \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^2\setminus\overline{(D_1 \cup D_2)},\\ u=C_i \mbox{ (Constants)}&\mbox{on}~\partial{D}_{i},~i=1,2,\\ \int_{\partial{D}_{i}}\frac{\partial{u}}{\partial\nu}=0&i=1,2,\\ u(x)-H(x) = O(|x|^{-1})&\mbox{as}~|x| \to \infty, \end{cases} \end{equation*} and \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^2\setminus\overline{(D_1 \cup D_2)},\\ \frac{\partial u}{\partial \nu} = 0 &\mbox{on}~\partial{D}_{i},~i=1,2,\\ u(x)-H(x) = O(|x|^{-1})&\mbox{as}~|x| \to \infty. \end{cases} \end{equation*} In both cases, they proved that for some $C$ independent of $\varepsilon$, $$\| \nabla u\|_{L^\infty(B_4)} \le C \varepsilon^{-1/2}.$$ They also showed that the upper bounds are optimal in the sense that for appropriate $H$, $$\| \nabla u\|_{L^\infty(B_4)} \ge \varepsilon^{-1/2}/C.$$ Yun extended these results in \cite{Y1} and \cite{Y2}, allowing $D_1^*$ and $D_2^*$ to be any bounded strictly convex smooth domains. The above gradient estimates were localized and extended to higher dimensions by Bao, Li and Yin in \cite{BLY1} and \cite{BLY2}. For the perfect conductor case, they considered problem \eqref{equinfty} and proved in \cite{BLY1} that \begin{equation*} \begin{cases} \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1/2} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n=2,\\ \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C|\varepsilon \ln \varepsilon|^{-1} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n=3,\\ \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n\ge 4. \end{cases} \end{equation*} The above bounds were also shown to be optimal in that paper. For further work on the perfect conductivity problem and closely related ones, see, e.g., \cite{ACKLY,BT1,BT2,DL,KLY1,KLY2,L,LLY,LWX,BLL,BLL2,DZ,KL,CY,ADY,Gor,LimYun} and the references therein.
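To get a feel for how the three dimension-dependent rates just listed compare, here is a quick numeric check (illustrative only):

```python
import math

eps = 1e-4   # a sample inclusion distance
rates = {
    "n = 2": eps ** -0.5,                       # eps^{-1/2}
    "n = 3": 1.0 / (eps * abs(math.log(eps))),  # |eps ln eps|^{-1}
    "n >= 4": 1.0 / eps,                        # eps^{-1}
}
# the blow-up is mildest in dimension 2 and strongest in dimensions >= 4
assert rates["n = 2"] < rates["n = 3"] < rates["n >= 4"]
```

The ordering holds for every small $\varepsilon$, reflecting that the perfect conductivity problem becomes more singular as the dimension increases.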
For the insulated problem \eqref{equzero}, it was proved in \cite{BLY2} that \begin{equation}\label{insulated_upper_bound} \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1/2} \|\varphi\|_{C^2(\partial \Omega)}\quad \mbox{when}~n\ge 2. \end{equation} The upper bound is optimal for $n = 2$ as mentioned earlier, while it was not known whether it is optimal in dimensions $n \ge 3$. Yun \cite{Y3} considered the following insulated problem in $\mathbb{R}^3$ minus two balls: Let $H$ be a harmonic function in $\mathbb{R}^3$, $D_1 = B_1\left(0,0,1+\frac{\varepsilon}{2} \right)$, and $D_2 = B_1\left(0,0,-1-\frac{\varepsilon}{2} \right)$, \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^3\setminus\overline{(D_1 \cup D_2)},\\ \frac{\partial u}{\partial \nu} = 0 &\mbox{on}~\partial{D}_{i},~i=1,2,\\ u(x)-H(x) = O(|x|^{-2})&\mbox{as}~|x| \to \infty. \end{cases} \end{equation*} He proved that for some positive constant $C$ independent of $\varepsilon$, $$\max_{|x_3|\le \varepsilon/2}|\nabla u(0,0,x_3)| \le C \varepsilon^{\frac{\sqrt{2}-2}{2}}.$$ He also showed that this upper bound of $|\nabla u|$ on the $\varepsilon$-segment connecting $D_1$ and $D_2$ is optimal for $H(x) \equiv x_1$. In this paper, we focus on the insulated conductivity problem \eqref{equzero} in dimension $n \ge 3$, and improve the upper bound \eqref{insulated_upper_bound} to the rate $\varepsilon^{-1/2 + \beta}$, for some $\beta > 0$. Analogous questions for elliptic systems are still open, and we give some discussion in Section 4. We point out that the insulator case for Lam\'{e} systems in dimension $n = 2$ was studied by Lim and Yu \cite{LimYu}. From now on, we assume that $\partial{D}_{1}^{*}$ and $\partial{D}_{2}^{*}$ are $C^2$, and they are relatively convex near the origin.
That is, for some positive constants $R_0, \kappa$, we assume that when $0<|x'|<R_{0}$, $\partial{D}_{1}^{*}$ and $\partial{D}_{2}^{*}$ are respectively the graphs of two $C^{2}$ functions $f$ and $g$ in terms of $x'$, and $$f(x')>g(x'),\quad\mbox{for}~~0<|x'|<R_{0},$$ \begin{equation}\label{fg_0} f(0')=g(0')=0,\quad\nabla_{x'}f(0')=\nabla_{x'}g(0')=0, \end{equation} \begin{equation}\label{fg_1} \nabla^{2}_{x'}(f-g)(x')\geq\kappa I_{n-1},\quad\mbox{for}~~0<|x'|<R_{0}, \end{equation} where $I_{n-1}$ denotes the $(n-1) \times (n-1)$ identity matrix. Let $a(x) \in C^\alpha(\overline{\widetilde{\Omega}})$, for some $\alpha \in (0,1)$, be a symmetric, positive definite matrix function satisfying $$\lambda \le a(x) \le \Lambda, \quad \mbox{for }x \in \widetilde{\Omega},$$ for some positive constants $\lambda, \Lambda$. Let $\nu = (\nu_1, \cdots, \nu_n)$ denote the unit normal vector on $\partial D_1$ and $\partial D_2$, pointing towards the interior of $D_1$ and $D_2$. We consider the following insulated conductivity problem: \begin{equation}\label{main_problem} \left\{ \begin{aligned} -\partial_i (a^{ij} \partial_j u) &=0 \quad \mbox{in }\widetilde{\Omega},\\ a^{ij} \partial_j u \nu_i &= 0 \quad \mbox{on } \partial (D_1 \cup D_2),\\ u &= \varphi \quad \mbox{on } \partial \Omega, \end{aligned} \right. \end{equation} where $\varphi \in C^{2}(\partial \Omega)$ is given. \\ For $0 < \,r\leq\,R_{0}$, we denote \begin{align}\label{domain_def_Omega} \Omega_{x_0,r}:=\left\{(x',x_{n})\in \widetilde{\Omega}~\big|~-\frac{\varepsilon}{2}\right.&\left.+g(x')<x_{n}<\frac{\varepsilon}{2}+f(x'),~|x' - x_0'|<r\right\},\nonumber\\ \Gamma_+ :=& \left\{ x_n = \frac{\varepsilon}{2}+f(x'),~|x'|<R_0\right\},\\ \Gamma_- :=& \left\{ x_n = -\frac{\varepsilon}{2}+g(x'),~|x'|<R_0\right\}. 
\nonumber \end{align} Since the blow-up of gradient can only occur in the narrow region between $D_1$ and $D_2$, we will focus on the following problem near the origin: \begin{equation}\label{main_problem_narrow} \left\{ \begin{aligned} -\partial_i (a^{ij} \partial_j u) &=0 \quad \mbox{in }\Omega_{0,R_0},\\ a^{ij} \partial_j u \nu_i &= 0 \quad \mbox{on } \Gamma_+ \cup \Gamma_-,\\ \end{aligned} \right. \end{equation} where $\nu = (\nu_1, \cdots, \nu_n)$ denotes the unit normal vector on $\Gamma_+$ and $\Gamma_-$, pointing upward and downward, respectively. Here is the main result of this paper.\\ \begin{theorem}\label{main_thm} Let $f,g,a,\alpha$ be as above, and let $u \in H^1(\Omega_{0,R_0})$ be a solution of \eqref{main_problem_narrow} in dimension $n \ge 3$. There exist positive constants $r_0, \beta$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\alpha$, $\|a\|_{C^\alpha(\Omega_{0,R_0})}$, $\|f\|_{C^{2}(\{|x'| \le R_0\})}$ and $\|g\|_{C^{2}(\{|x'| \le R_0\})},$ such that \begin{equation}\label{main_result} |\nabla u (x_0)| \le C \| u\|_{L^\infty(\Omega_{0,R_0})} \left(\varepsilon + |x_0'|^2 \right) ^{-1/2 + \beta}, \end{equation} for all $x_0 \in \Omega_{0 , r_0}$ and $\varepsilon \in (0,1)$.\\ \end{theorem} Let $u \in H^1(\widetilde{\Omega})$ be a weak solution of \eqref{main_problem}. By the maximum principle and gradient estimates for solutions of elliptic equations, \begin{equation}\label{boundedness_u} \|u\|_{L^\infty(\widetilde{\Omega})} \le \|\varphi\|_{L^\infty(\partial \Omega)}, \end{equation} and $$\| \nabla u\|_{L^\infty(\widetilde{\Omega} \setminus \Omega_{0, r_0} )} \le C\| \varphi\|_{C^{2}(\partial \Omega)}.$$ Therefore, a corollary of Theorem \ref{main_thm} is as follows.\\ \begin{corollary} Let $u \in H^1(\widetilde{\Omega})$ be a weak solution of \eqref{main_problem} in dimension $n \ge 3$.
There exist positive constants $\beta$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\|a\|_{C^\alpha}$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that \begin{equation}\label{main_result_2} \|\nabla u\|_{L^\infty(\widetilde{\Omega})} \le C \| \varphi\|_{C^{2}(\partial \Omega)} \varepsilon ^{-\frac{1}{2} + \beta}. \end{equation}\\ \end{corollary} \begin{remark} If there are more than two inclusions, estimate \eqref{main_result_2} still holds, with $\varepsilon$ being the minimal distance between inclusions. \end{remark} The rest of this paper will be organized as follows. In Section 2, we prove a lemma which is used in the proof of Theorem \ref{main_thm}. Theorem \ref{main_thm} is proved in Section 3. In Section 4, we give a gradient estimate for a problem for elliptic systems analogous to \eqref{main_problem_narrow}.\\ \section{A regularity lemma} In this section, we give a regularity lemma for elliptic systems (elliptic equations when $N = 1$). Let us first describe the domains and operators involved. We define $S$ to be the cylinder $$S = \{ (x',x_n) \in \mathbb{R}^n ~\big|~ |x'| < 1, |x_n| < 1 \},$$ and some constants $c_m$, with $0 \le m \le l$, such that $$-1 = c_0 < c_1 < \cdots < c_l = 1,$$ and let $m_0$ be the integer such that $$c_{m_0 - 1} \le 0 < c_{m_0}.$$ We divide the domain $S$ into $l$ parts by setting $$\Omega_m = \{x \in S ~\big|~ c_{m-1} < x_n < c_m \}, \quad \mbox{for }1 \le m \le l.$$ For $1 \le \alpha, \beta \le n, 1 \le i,j \le N$, let $A^{\alpha \beta}_{ij}(x)$ be functions such that $$\| A^{\alpha \beta}_{ij} \|_{L^\infty(S)} \le \Lambda,$$ $$\int_S A^{\alpha \beta}_{ij}(x) \partial_\alpha \varphi_i(x) \partial_\beta \varphi_j(x) \ge \lambda \int_S |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(S; \mathbb{R}^N),$$ for some $\lambda, \Lambda > 0$, and for each $1 \le m \le l$, $A^{\alpha \beta}_{ij}(x) \in C^\mu(\overline{\Omega}_m)$, for some $0 < \mu <1$.
We denote $(A^{\alpha \beta}_{ij}(x))$ by $A(x)$. For $1 \le \alpha \le n, 1 \le i \le N$, let \begin{align*} H(x) &= \{H_i\} \in L^\infty(S),\\ G(x) &= \{G_i^\alpha\} \in C^\mu(\overline{\Omega}_m), \end{align*} for all $m = 1 ,\cdots , l$. Then we have the following interior gradient estimate.\\ \begin{lemma}\label{gradient_lemma} Let $A(x)$, $H(x)$ and $G(x)$ be as above. There exists a positive constant $C$, depending only on $n, \mu, \lambda, \Lambda$ and an upper bound of $\{\|A\|_{C^\mu(\overline{\Omega}_m)}\}_{m = 1}^l$, such that if $u \in H^1(S; \mathbb{R}^N)$ is a weak solution to $$\partial_\alpha (A^{\alpha \beta}_{ij}(x) \partial_\beta u_j) = H_i + \partial_\alpha G^\alpha_i \quad \mbox{in }S,$$ then $$\| u\|_{L^\infty(\frac{1}{2}S)} + \| \nabla u\|_{L^\infty(\frac{1}{2}S)} \le C\left(\|u\|_{L^2(S)} + \|H\|_{L^\infty(S)} + \max_{1 \le m \le l} \|G\|_{C^\mu(\overline{\Omega}_m)} \right).$$\\ \end{lemma} \begin{remark} We point out that the constant $C$ in the lemma is independent of $l$. \end{remark} \begin{proof} The proof of Lemma \ref{gradient_lemma} is a modification of the proof of Proposition 4.1 in \cite{LN}. Although the constant $C$ in \cite[Proposition 4.1]{LN} depends on $l$, the number of subdomains into which the domain $S$ is divided, this dependence enters only through the quantities $\| A - \bar{A} \|_{Y^{1+\mu, 2}}$, $\| G - \bar{G} \|_{Y^{1+\mu, 2}}$, and $\| H - \bar{H} \|_{Y^{\mu, 2}}$, which will be defined below.
We will show that these quantities admit bounds independent of $l$, due to the nature of our domain $S$, and hence the constant $C$ in Lemma \ref{gradient_lemma} is independent of $l$.\\ \end{proof} For $s > 0, 1 < p < \infty$, we define the norm $$\|f\|_{Y^{s,p}}:= \sup_{0 < r \le 1} r^{1-s} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |f|^p \right)^{1/p}.$$ We define piecewise-constant coefficients $\bar{A}$ associated to $A$ by setting $$\bar{A}(x) := \left\{\begin{aligned} \lim_{x \in \Omega_m, x \to (0', c_{m-1})} &A(x), &&\mbox{if }x \in \Omega_m, m > m_0;\\ &A(0), &&\mbox{if }x \in \Omega_{m_0};\\ \lim_{x \in \Omega_m, x \to (0', c_m)} &A(x), &&\mbox{if }x \in \Omega_m, m < m_0. \end{aligned} \right.$$ Similarly, we define a piecewise-constant tensor $\bar{G}$ associated to $G$. We also define a constant vector $\bar{H}$ associated to $H$ by $$\bar{H} := {\int\hspace*{-4.3mm}\diagup}_S H.$$\\ \begin{lemma} Let $A, \bar{A}, H, \bar{H}, G, \bar{G}$ be as above. Then there exists a positive constant $C$, depending only on $n$, such that \begin{align*} \| A - \bar{A} \|_{Y^{1+\mu , 2}} & \le C\max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)},\\ \| G - \bar{G} \|_{Y^{1+\mu , 2}} & \le C\max_{1 \le m \le l} \|G\|_{C^\mu(\overline{\Omega}_m)},\\ \| H - \bar{H} \|_{Y^{\mu , 2}} & \le C\|H\|_{L^\infty(S)}.\\ \end{align*} \end{lemma} \begin{proof} The last inequality follows immediately from the definition of $Y^{\mu,2}$ and $\bar{H}$: \begin{align*} \| H - \bar{H} \|_{Y^{\mu , 2}} \le \sup_{0 < r \le 1} r^{1-\mu} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |H - \bar{H}|^2 \right)^{1/2} \le C\|H\|_{L^\infty(S)}.
\end{align*} By a direct computation, we have \begin{align*} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |A - \bar{A}|^2 \right)^{1/2} &\le \left( \frac{1}{|rS|} \sum_{m = 1}^l \int_{rS \cap \Omega_m} |A(x) - \bar{A}(x)|^2 \, dx \right)^{1/2}\\ &\le \left[ \frac{1}{|rS|} \left( \sum_{m = 1}^{m_0 - 1} \|A\|^2_{C^\mu(\overline{\Omega}_m)} \int_{rS \cap \Omega_m} |x - (0', c_m)|^{2\mu} \, dx \right.\right.\\ &+ \|A\|^2_{C^\mu(\overline{\Omega}_{m_0})} \int_{rS \cap \Omega_{m_0}} |x|^{2\mu} \, dx\\ + \sum_{m = m_0 + 1}^{l} &\left. \left.\|A\|^2_{C^\mu(\overline{\Omega}_{m})} \int_{rS \cap \Omega_{m}} |x - (0', c_{m-1})|^{2\mu} \,dx \right) \right]^{1/2} \\ &\le \max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |x|^{2\mu} \, dx \right)^{1/2}\\ &\le C\max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)}r^\mu. \end{align*} This proves the first inequality. The second inequality follows similarly. \end{proof} \section{Proof of Theorem \ref{main_thm}} In this section, we prove Theorem \ref{main_thm}. For a small $r_0$ independent of $\varepsilon$, and any $x_0 \in \Omega_{0, r_0}$, we estimate $|\nabla u(x_0)|$ as follows: First, we establish a Harnack inequality in $\Omega_{x_0, r} \setminus \Omega_{x_0, r/2}$, for $r > 0$ in a suitable range. Together with the maximum principle, this shows that the oscillation of $u$ in $\Omega_{x_0, \delta}$ decays like $\delta^{2\beta}$, for some positive $\varepsilon$-independent $\beta$, where $$\delta:= (\varepsilon + |x_0'|^2)^{1/2}.$$ Then we perform a suitable change of variables in $\Omega_{x_0, \delta/4}$, and apply Lemma \ref{gradient_lemma} to obtain the desired estimate on $|\nabla u(x_0)|$. We fix a $\gamma \in (0,1)$, and let $r_0 >0$ denote a constant depending only on $n$, $\kappa$, $\gamma$, $R_0$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, whose value will be fixed in the proof. We will always consider $0 < \varepsilon \le r_0^2$.
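The dyadic iteration behind this oscillation decay (carried out in Lemma \ref{osc_u_decay_lemma} below) can be sketched numerically. In the sketch, the Harnack constant $C_1$ and the parameter values are illustrative; the point is that the contraction factor $\theta = (C_1-1)/(C_1+1) = 2^{-\sigma}$, applied at every dyadic scale from $\delta^{1-\gamma}/2$ down to scale $\approx \delta$, produces a bound of order $\delta^{\gamma\sigma}$, uniformly in $\delta$.

```python
import math

def osc_decay(C1, delta, gamma):
    """Iterate the dyadic contraction osc_{rho} <= theta * osc_{2 rho},
    theta = (C1 - 1)/(C1 + 1), from rho_0 = delta^(1-gamma)/2 down to
    scale ~ delta; returns the final oscillation bound and sigma."""
    theta = (C1 - 1.0) / (C1 + 1.0)
    sigma = -math.log2(theta)             # so that theta = 2^{-sigma}
    rho, osc = delta ** (1 - gamma) / 2, 1.0   # normalize the top-scale oscillation to 1
    while rho >= 10 * delta:              # halve until 5*delta <= rho < 10*delta
        rho, osc = rho / 2, osc * theta
    return osc * theta, sigma             # one more application of the contraction

# the iterated bound is of order delta^(gamma*sigma), uniformly in delta
C1, gamma = 4.0, 0.5
for delta in (1e-2, 1e-4, 1e-6, 1e-8):
    osc, sigma = osc_decay(C1, delta, gamma)
    assert osc <= 20.0 * delta ** (gamma * sigma)
```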
First, we require $r_0$ small so that for $|x_0'| < r_0$, $$10\delta < \delta^{1- \gamma} < R_0/4.$$\\ \begin{lemma}\label{harnack_inequality} There exists a small $r_0$, depending only on $n, \kappa, \gamma, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, such that for any $x_0 \in \Omega_{0,r_0}$, $5|x_0'| < r < \delta^{1- \gamma}$, if $u \in H^1(\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4})$ is a positive solution to the equation $$ \left\{ \begin{aligned} -\partial_i (a^{ij}(x) \partial_j u(x)) &=0 \quad \mbox{in }\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4},\\ a^{ij}(x) \partial_j u(x) \nu_i(x) &= 0 \quad \mbox{on } (\Gamma_+ \cup \Gamma_-) \cap \overline{\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}},\\ \end{aligned} \right. $$ then, \begin{equation}\label{harnack} \sup_{\Omega_{x_0,r} \setminus \Omega_{x_0, r/2}} u \le C \inf_{\Omega_{x_0, r} \setminus \Omega_{x_0, r/2}} u, \end{equation} for some constant $C >0$ depending only on $n, \kappa, \lambda, \Lambda, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ but independent of $r$ and $u$. \end{lemma} \begin{proof} We only need to prove \eqref{harnack} for $|x_0'| > 0$, since the $|x_0'| = 0$ case follows from the result for $|x_0'|>0$ and then sending $|x_0'|$ to $0$. We denote $$h_r := \varepsilon + f\left(x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right) - g\left(x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right),$$ and perform a change of variables by setting \begin{equation}\label{x_to_y_1} \left\{ \begin{aligned} y' &= x' - x_0' ,\\ y_n &= 2 h_r \left( \frac{x_n - g(x') + \varepsilon/2}{\varepsilon + f(x') - g(x')} - \frac{1}{2} \right), \end{aligned}\right. \quad (x',x_n) \in \Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}. 
\end{equation} This change of variables maps the domain $\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}$ to an annular cylinder of height $h_r$, denoted by $Q_{2r, h_r} \setminus Q_{r/4, h_r}$, where \begin{equation}\label{Q_s_t} Q_{s,t}:= \{ y = (y',y_n) \in \mathbb{R}^n ~\big|~ |y'| < s, |y_n| < t\}, \end{equation} for $s,t > 0$. We will show that the Jacobian matrix of the change of variables \eqref{x_to_y_1}, denoted by $\partial_x y$, and its inverse matrix $\partial_y x$ satisfy \begin{equation}\label{transformation_lipschitz} |(\partial_x y)^{ij}| \le C, \quad |(\partial_y x)^{ij}| \le C, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}, \end{equation} where $C > 0$ depends only on $n, \kappa, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Let $v(y) = u(x)$, then $v$ satisfies \begin{equation}\label{equation_v} \left\{ \begin{aligned} -\partial_i(b^{ij}(y) \partial_j v(y)) &=0 \quad \mbox{in } Q_{2r, h_r} \setminus Q_{r/4, h_r},\\ b^{nj}(y) \partial_j v(y) &= 0 \quad \mbox{on } \{y_n = -h_r\} \cup \{y_n = h_r\}, \end{aligned} \right. \end{equation} where the matrix $(b^{ij}(y))$ is given by \begin{equation}\label{b_ij_formula} (b^{ij}(y)) = \frac{(\partial_x y)(a^{ij})(\partial_x y)^t}{\det (\partial_x y)}, \end{equation} $(\partial_x y)^t$ is the transpose of $\partial_x y$. It is easy to see that \eqref{transformation_lipschitz} implies, using $\lambda \le (a^{ij}) \le \Lambda$, \begin{equation}\label{b_ij_ellpticity} \frac{\lambda}{C} \le (b^{ij}(y)) \le C\Lambda, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}, \end{equation} for some constant $C > 0$ depending only on $n, R_0, \kappa, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. In the following and throughout this section, we will denote $A \sim B$, if there exists a positive universal constant $C$, which might depend on $n, \lambda, \Lambda, R_0, \kappa, \|f\|_{C^{2}}$, and $\|g\|_{C^{2}},$ but not depend on $\varepsilon$, such that $C^{-1} B \le A \le C B$. 
From \eqref{x_to_y_1}, one can compute that \begin{align*} (\partial_x y)^{ii} &= 1, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_x y)^{nn} &= \frac{2h_r}{\varepsilon + f(x_0'+ y') - g(x_0' + y')},\\ (\partial_x y)^{ni} &= - \frac{2h_r \partial_i g(x_0' + y') + (y_n + h_r) [\partial_i f(x_0' + y') - \partial_i g(x_0' + y')]}{\varepsilon + f(x_0' + y')- g(x_0' + y')}, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_x y)^{ij} &= 0, \quad \mbox{for } 1 \le i \le n-1, j \neq i. \end{align*} By \eqref{fg_0} and \eqref{fg_1}, one can see that $$h_r \sim \varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2.$$ Since $|y_n| \le h_r$, by using \eqref{fg_0} and \eqref{fg_1}, we have that, for $1 \le i \le n-1$, \begin{align*} \left|(\partial_x y)^{ni} \right| &\le C\frac{h_r |\partial_i g(x_0' + y')| + h_r [|\partial_i f(x_0' + y')| + |\partial_i g(x_0' + y')|]}{\varepsilon + f(x_0' + y')- g(x_0' + y')}\\ &\le C \frac{h_r}{\varepsilon + f(x_0' + y')- g(x_0' + y')} \left[ |\partial_i f(x_0' + y')| + |\partial_i g(x_0' + y')| \right]\\ &\le C \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} |x_0' + y'|. \end{align*} Since $r/4 < |y'| < 2r < 2\delta^{1- \gamma}$ and $|x_0'| < \delta$, we can estimate \begin{align*} \left|(\partial_x y)^{ni} \right| \le C|x_0' + y'| \le C(|x_0'| + |y'|) \le C \delta^{1 - \gamma}. \end{align*} Next, we will show that \begin{equation}\label{partial_x_y_nn} (\partial_x y)^{nn} \sim 1, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}.
\end{equation} Indeed, by \eqref{fg_0} and \eqref{fg_1}, we have $$(\partial_x y)^{nn} = \frac{2h_r}{\varepsilon + f(x_0'+ y') - g(x_0' + y')} \sim \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2}.$$ Since $|y'| > r/4$, it is easy to see $$(\partial_x y)^{nn} \le C \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} \le C.$$ On the other hand, since $|y'| < 2r$ and $|x_0'| < r/5$, we have \begin{align*} \varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2 &\ge \varepsilon + \left( \left| \frac{r}{4} \frac{x_0'}{|x_0'|} \right| - |x_0'| \right)^2\ge \varepsilon + \left( \frac{r}{4} - \frac{r}{5} \right)^2 = \varepsilon + \frac{1}{400}r^2, \end{align*} and \begin{align*} \varepsilon + |x_0' + y'|^2 &\le \varepsilon + 2|x_0'|^2 + 2|y'|^2 \le \varepsilon + \frac{2}{25}r^2 + 8r^2 < \varepsilon + 9r^2. \end{align*} Therefore, $$(\partial_x y)^{nn} \ge \frac{1}{C} \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} \ge \frac{1}{C} \frac{\varepsilon + r^2/400}{\varepsilon + 9 r^2} \ge \frac{1}{C},$$ and \eqref{partial_x_y_nn} is verified. We have shown $(\partial_x y)^{ii} \sim 1$, for all $i = 1, \cdots, n$, and $|(\partial_x y)^{ij}| \le C \delta^{1-\gamma}$, for $i \neq j$. We further require $r_0$ to be small enough so that the off-diagonal entries of $\partial_x y$ are bounded by a small constant; then $\partial_x y$ is invertible with a uniformly bounded inverse, and \eqref{transformation_lipschitz} follows. As mentioned earlier, \eqref{b_ij_ellpticity} follows from \eqref{transformation_lipschitz}.
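The $n$-th row of the Jacobian of \eqref{x_to_y_1} can be double-checked symbolically. The sketch below (one tangential variable, placeholder profiles $f,g$; SymPy assumed available) verifies the corner entry $(\partial_x y)^{nn} = 2h_r/(\varepsilon + f - g)$ and writes the off-diagonal entry in terms of $y_n$ itself; since $|y_n| \le h_r$, its numerator is $O(h_r(|\partial f| + |\partial g|))$, which is the bound used in the estimate above.

```python
import sympy as sp

x1, xn, eps, hr = sp.symbols("x1 x_n epsilon h_r", positive=True)
# placeholder one-variable profiles standing in for f(x'), g(x')
f = sp.Function("f")(x1)
g = sp.Function("g")(x1)
D = eps + f - g

# vertical coordinate of the change of variables (x_to_y_1)
yn = 2 * hr * ((xn - g + eps / 2) / D - sp.Rational(1, 2))

# corner entry: (d_x y)^{nn} = 2 h_r / (eps + f - g)
assert sp.simplify(sp.diff(yn, xn) - 2 * hr / D) == 0

# off-diagonal entry, rewritten through y_n; |y_n| <= h_r makes its
# numerator O(h_r (|f'| + |g'|)), the quantity estimated in the proof
target = -(2 * hr * sp.diff(g, x1) + (yn + hr) * (sp.diff(f, x1) - sp.diff(g, x1))) / D
assert sp.simplify(sp.diff(yn, x1) - target) == 0
```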
Now we define, for any integer $l$, $$A_l:= \left\{y \in \mathbb{R}^n ~\big|~ \frac{r}{4} < |y'| < 2r,~ (l-1) h_r < y_n < (l+1) h_r \right\}.$$ Note that $A_0 = Q_{2r, h_r} \setminus Q_{r/4, h_r}.$ For any $l \in \mathbb{Z}$, we define a new function $\tilde{v}$ by $$\tilde{v}(y) := v\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l.$$ We also define the corresponding coefficients, for $k = 1,2, \cdots, n-1$, $$\tilde{b}^{nk}(y)=\tilde{b}^{kn}(y) := (-1)^lb^{nk}\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l,$$ and for other indices, $$\tilde{b}^{ij}(y) := b^{ij}\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l.$$ Therefore, $\tilde{v}(y)$ and $\tilde{b}^{ij}(y)$ are defined in the infinite cylinder shell $Q_{2r, \infty} \setminus Q_{r/4, \infty}$. By \eqref{equation_v}, $\tilde{v} \in H^1(Q_{2r, \infty} \setminus Q_{r/4, \infty})$ satisfies $$-\partial_i (\tilde{b}^{ij}(y) \partial_j \tilde{v}(y)) = 0 \quad \mbox{in }Q_{2r, \infty} \setminus Q_{r/4, \infty}.$$ Note that for any $l \in \mathbb{Z}$ and $y \in A_l$, $\tilde{b}(y) = (\tilde{b}^{ij}(y))$ is orthogonally conjugated to $b\left(y', (-1)^l\left(y_n - 2l h_r\right)\right)$. Hence, by \eqref{b_ij_ellpticity}, we have $$\frac{\lambda}{C} \le \tilde{b}(y) \le C\Lambda, \quad \mbox{for }y \in Q_{2r, \infty} \setminus Q_{r/4, \infty}.$$ We restrict to the domain $Q_{2r, r} \setminus Q_{r/4, r}$, and make the change of variables $z = y/r$. Setting $\bar{v}(z) = \tilde{v}(y)$ and $\bar{b}^{ij}(z) = \tilde{b}^{ij}(y)$, we have $$-\partial_i (\bar{b}^{ij}(z) \partial_j \bar{v}(z)) = 0 \quad \mbox{in }Q_{2, 1} \setminus Q_{1/4, 1},$$ and $$\frac{\lambda}{C} \le \bar{b}(z) \le C\Lambda, \quad \mbox{for }z \in Q_{2, 1} \setminus Q_{1/4, 1}.$$ Then by the Harnack inequality for uniformly elliptic equations of divergence form, see e.g.
\cite[Theorem 8.20]{GT}, there exists a constant $C$ depending only on $n, \kappa, \lambda, \Lambda, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that $$\sup_{Q_{1,1/2} \setminus Q_{1/2,1/2}} \bar{v} \le C \inf_{Q_{1,1/2} \setminus Q_{1/2,1/2}} \bar{v}.$$ In particular, we have $$\sup_{Q_{1,h_r/r} \setminus Q_{1/2,h_r/r}} \bar{v} \le C \inf_{Q_{1,h_r/r} \setminus Q_{1/2,h_r/r}} \bar{v},$$ which is \eqref{harnack} after reversing the change of variables.\\ \end{proof} \begin{remark} In dimension $n = 2$, Lemma \ref{harnack_inequality} fails, since $Q_{2, 1} \setminus Q_{1/4, 1} \subset \mathbb{R}^{2}$ is the union of two disjoint rectangular domains, and the Harnack inequality cannot be applied to it. In fact, in our proof of Theorem \ref{main_thm}, Lemma \ref{harnack_inequality} is the only ingredient where dimension $n \ge 3$ is used. As mentioned above, the conclusion of Theorem \ref{main_thm} does not hold in dimension $n = 2$.\\ \end{remark} For any domain $A \subset \widetilde{\Omega}$, we denote the oscillation of $u$ in $A$ by $\mbox{osc}_A u := \sup_{A} u - \inf_{A} u$. Using Lemma \ref{harnack_inequality}, we obtain the following decay of $\mbox{osc}_{\Omega_{x_0, \delta}}u$ in $\delta$.\\ \begin{lemma}\label{osc_u_decay_lemma} Let $u$ be a solution of \eqref{main_problem_narrow}. For any $x_0 \in \Omega_{0, r_0}$, where $r_0$ is as in Lemma \ref{harnack_inequality}, there exist positive constants $\sigma$ and $C$, depending only on $ n, \kappa, \lambda, \Lambda, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that \begin{equation}\label{osc_u} \mbox{osc}_{\Omega_{x_0, \delta}} u \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}. \end{equation} \end{lemma} \begin{proof} For simplicity, we drop the $x_0$ subscript and denote $\Omega_r = \Omega_{x_0,r}$ in this proof.
Let $5|x_0'| < r < \delta^{1- \gamma}$ and $u_1 = \sup_{\Omega_{2r}}u - u, u_2 = u - \inf_{\Omega_{2r}} u.$ By Lemma \ref{harnack_inequality}, we have \begin{align*} \sup_{\Omega_r \setminus \Omega_{r/2}} u_1 &\le C_1 \inf_{\Omega_r \setminus \Omega_{r/2}} u_1, \\ \sup_{\Omega_r \setminus \Omega_{r/2}} u_2 &\le C_1 \inf_{\Omega_r \setminus \Omega_{r/2}} u_2, \end{align*} where $C_1 > 1$ is a constant independent of $r$. Since both $u_1$ and $u_2$ satisfy equation \eqref{main_problem_narrow}, by the maximum principle, \begin{align*} \sup_{\Omega_r \setminus \Omega_{r/2}} u_i = \sup_{\Omega_r} u_i, \quad \inf_{\Omega_r \setminus \Omega_{r/2}} u_i = \inf_{\Omega_r} u_i, \end{align*} for $i = 1,2$. Therefore, \begin{align*} \sup_{\Omega_r} u_1 &\le C_1 \inf_{\Omega_r} u_1, \\ \sup_{\Omega_r} u_2 &\le C_1 \inf_{\Omega_r} u_2. \end{align*} Adding up the above two inequalities, we have $$\mbox{osc}_{\Omega_r} u \le \left( \frac{C_1 - 1}{C_1 + 1} \right)\mbox{osc}_{\Omega_{2r}} u.$$ Now we take $\sigma > 0$ such that $2^{-\sigma} = \frac{C_1 - 1}{C_1 + 1}$, then \begin{equation}\label{recurrence} \mbox{osc}_{\Omega_r} u \le 2^{-\sigma} \mbox{osc}_{\Omega_{2r}} u. \end{equation} We start with $\rho_0 = \delta^{1- \gamma}/2$ (not to be confused with the constant $r_0$ fixed earlier), and set $\rho_{i+1} = \rho_i/2$. Iterating \eqref{recurrence} $k+1$ times, where $k$ satisfies $5\delta \le \rho_k < 10 \delta$, we obtain $$\mbox{osc}_{\Omega_{\delta}} u \le \mbox{osc}_{\Omega_{\rho_k}} u \le 2^{-(k+1)\sigma} \mbox{osc}_{\Omega_{2\rho_0}} u \le 2^{1-(k+1)\sigma} \|u\|_{L^\infty (\Omega_{\delta^{1-\gamma}})}.$$ Since $10\delta > \rho_k = 2^{-k}\rho_0 = 2^{-(k+1)}\delta^{1 - \gamma},$ we have $ 2^{-(k+1)} < 10 \delta^\gamma$, and hence \eqref{osc_u} follows immediately.\\ \end{proof} \begin{proof}[Proof of Theorem \ref{main_thm}] Let $u \in H^1(\Omega_{0,R_0})$ be a solution of \eqref{main_problem_narrow}.
For $x_0 \in \Omega_{0,r_0}$, we have, using Lemma \ref{osc_u_decay_lemma}, \begin{equation}\label{u-u_0} \| u - u_0\|_{L^\infty(\Omega_{x_0, \delta})} \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}, \end{equation} for some constant $u_0$ (one may take $u_0 = \inf_{\Omega_{x_0, \delta}} u$). Denote $v := u - u_0$; then $v$ satisfies the same equation \eqref{main_problem_narrow}. We work on the domain $\Omega_{x_0, \delta/4}$, and perform a change of variables by setting \begin{equation}\label{x_to_y} \begin{cases} y' = \delta^{-1} (x'- x_0'),\\ y_n = \delta^{-1} x_n. \end{cases} \end{equation} The domain $\Omega_{x_0, \delta/4}$ becomes \begin{align*} \left\{y\in \mathbb{R}^n ~\big|~ |y'| \le \frac{1}{4}, \delta^{-1} \left( -\frac{1}{2}\varepsilon+ g(x_0' + \delta y')\right)< y_n < \delta^{-1} \left( \frac{1}{2}\varepsilon+ f(x_0'+ \delta y')\right) \right\}. \end{align*} We make a change of variables again by \begin{equation}\label{y_to_z} \begin{cases} z' = 4y' ,\\ z_n = 2\delta \left( \frac{\delta y_n - g(x_0' + \delta y') + \varepsilon/2}{\varepsilon + f(x_0' + \delta y') - g(x_0' + \delta y')} - \frac{1}{2} \right). \end{cases} \end{equation} Now the domain in $z$-variables becomes a thin plate $Q_{1, \delta}$, where $Q_{s,t}$ is defined as in \eqref{Q_s_t}. Let $w(z) = v(x)$; then $w$ satisfies \begin{equation}\label{equation_w} \left\{ \begin{aligned} -\partial_i(b^{ij}(z) \partial_j w(z)) &=0 \quad \mbox{in } Q_{1, \delta},\\ b^{nj}(z) \partial_j w(z) &= 0 \quad \mbox{on } \{z_n = -\delta\} \cup \{z_n = \delta\}, \end{aligned} \right. \end{equation} where the matrix $b(z) = (b^{ij}(z))$ is given by \begin{equation}\label{b_ij_formula_2} (b^{ij}(z)) = \frac{(\partial_y z)(a^{ij})(\partial_y z)^t}{\det (\partial_y z)}.
\end{equation} Similar to the proof of Lemma \ref{harnack_inequality}, we will show that the Jacobian matrix of the change of variables \eqref{y_to_z}, denoted by $\partial_y z$, and its inverse matrix $\partial_z y$ satisfy \begin{equation}\label{transformation_lipschitz_2} |(\partial_y z)^{ij}| \le C, \quad |(\partial_z y)^{ij}| \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{equation} where $C > 0$ depends only on $n, \kappa, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. This leads to \begin{equation}\label{b_ij_ellpticity_2} \frac{\lambda}{C} \le b(z) \le C\Lambda, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} From \eqref{y_to_z}, one can compute that \begin{align*} (\partial_y z)^{ii} &= 4, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_y z)^{nn} &= \frac{2\delta^2}{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)},\\ (\partial_y z)^{ni} &= - \frac{2 \delta^2 \partial_i g(x_0' + \delta z'/4) + \delta (z_n + \delta) [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)}\\ &~\quad \mbox{for } 1 \le i \le n-1,\\ (\partial_y z)^{ij} &= 0, \quad \mbox{for } 1 \le i \le n-1, j \neq i. \end{align*} First, we will show that \begin{equation}\label{partial_y_z_nn} (\partial_y z)^{nn} \sim 1, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} Since $|z'| < 1$ and $|x_0'| < \delta$, it is easy to see that $$(\partial_y z)^{nn} \sim \frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2} \ge \frac{\delta^2}{\varepsilon + C \delta^2} \ge \frac{1}{C}, \quad \mbox{for }z \in Q_{1, \delta},$$ due to \eqref{fg_0} and \eqref{fg_1}.
On the other hand, \begin{align*} (\partial_y z)^{nn} &\sim \frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2}\\ &= \frac{\delta^2}{\varepsilon + |x_0'|^2 + (1/4)^2 \delta^2 |z'|^2 + \delta x_0' \cdot z'/2}\\ &\le \frac{\delta^2}{\delta^2 + (1/4)^2|z'|^2\delta^2 - |z'||x_0'|\delta/2}\\ &\le \frac{\delta^2}{(1 + (1/4)^2|z'|^2 - 1/2) \delta^2} \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{align*} Therefore, \eqref{partial_y_z_nn} is verified. Since $|z_n| < \delta$, $|z'| < 1$ and $|x_0'| < \delta$, by \eqref{fg_0} and \eqref{fg_1}, for $1 \le i \le n-1$, \begin{align*} |(\partial_y z)^{ni}| &\le \frac{2 \delta^2 |\partial_i g(x_0' + \delta z'/4)| + 2 \delta^2 [|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)}\\ &\le \frac{C\delta^2}{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)}[|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]\\ &\le C\frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2} |x_0' + \delta z'/4|\\ &\le C (|x_0'| + \delta|z'|) \le C\delta, \end{align*} where in the last line, we have used the same arguments in showing $(\partial_y z)^{nn} \le C$ earlier. We have shown $(\partial_y z)^{ii} \sim 1$, for all $i = 1, \cdots, n$, and $|(\partial_y z)^{ij}| \le C \delta$, for $i \neq j$. We further require $r_0$ to be small enough so that off-diagonal entries are small. Therefore \eqref{transformation_lipschitz_2} follows. As mentioned earlier, \eqref{b_ij_ellpticity_2} follows from \eqref{transformation_lipschitz_2}. 
Next, we will show \begin{equation}\label{b_ij_holder} \|b \|_{C^\alpha(\overline{Q}_{1,\delta})} \le C, \end{equation} for some $C > 0$ depending only on $n, \kappa, R_0, \|a\|_{C^\alpha}, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, by showing \begin{equation}\label{partial_y_z_lipschitz} |\nabla_z (\partial_y z)^{ij}(z)| \le C, \quad \left| \nabla_z \frac{1}{\det(\partial_y z)} \right| \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} Then \eqref{b_ij_holder} follows from \eqref{partial_y_z_lipschitz}, \eqref{b_ij_formula_2}, and $\|a\|_{C^\alpha} \le C$. By a straightforward computation, we have, for any $i = 1, \cdots, n-1$, \begin{align*} \left| \partial_{z_i} \frac{1}{\det(\partial_y z)} \right| &= \left| \partial_{z_i} \left( \frac{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)}{2 \cdot 4^{n-1}\delta^2} \right) \right|\\ &= \left| \frac{\delta[\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{2 \cdot 4^{n-1}\delta^2} \right|\\ &\le \frac{C}{\delta}[|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]\\ &\le \frac{C}{\delta}|x_0' + \delta z'/4| \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{align*} where in the last inequality, \eqref{fg_0} and \eqref{fg_1} have been used. For any $i = 1, \cdots, n-1$, \begin{align*} |\partial_{z_i} (\partial_y z)^{nn}| &= \left| \frac{2\delta^3 [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{(\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4))^2} \right|\\ &\le \frac{C\delta^3}{(\varepsilon + |x_0' + \delta z'/4|^2)^2}|x_0' + \delta z'/4|\\ &\le \frac{C\delta^3}{\delta^4}(|x_0'| + |\delta z'|) \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{align*} where in the last line, we have used the same arguments in showing $(\partial_y z)^{nn} \le C$ earlier. 
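The identity behind this computation is that $\partial_y z$ is lower triangular, so $\det(\partial_y z) = 4^{n-1} \cdot 2\delta^2/(\varepsilon + f - g)$, and hence $1/\det(\partial_y z)$ is, up to a constant multiple of $\delta^{-2}$, just $\varepsilon + f - g$. A small symbolic sketch (for $n = 3$, with placeholder profiles and the chain-rule factor coming from the argument $x_0' + \delta z'/4$; SymPy assumed available):

```python
import sympy as sp

z1, delta, eps, x0 = sp.symbols("z1 delta epsilon x0", positive=True)
n = 3
# placeholder profiles evaluated at x_0' + delta z'/4, as in the text
f = sp.Function("f")(x0 + delta * z1 / 4)
g = sp.Function("g")(x0 + delta * z1 / 4)
D = eps + f - g

# the Jacobian d_y z: 4's on the first n-1 diagonal entries, 2*delta^2/D in
# the corner, and further nonzero entries only in the last row
J = sp.zeros(n, n)
for i in range(n - 1):
    J[i, i] = 4
J[n - 1, n - 1] = 2 * delta**2 / D
J[n - 1, 0] = sp.Symbol("J_n1")   # lower triangular: value irrelevant for det

assert sp.simplify(J.det() - 4 ** (n - 1) * 2 * delta**2 / D) == 0
assert sp.simplify(1 / J.det() - D / (2 * 4 ** (n - 1) * delta**2)) == 0

# each z'-derivative of 1/det carries one factor of delta from the chain rule,
# which is what produces the bound |grad_z (1/det)| <= C above
d = sp.diff(1 / J.det(), z1)
assert sp.simplify(d - sp.diff(D, z1) / (2 * 4 ** (n - 1) * delta**2)) == 0
```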
Similar computations apply to $\partial_{z_i} (\partial_y z)^{ni}$, for $i = 1, \cdots, n-1$, and we have $$|\partial_{z_i} (\partial_y z)^{ni}| \le C, \quad \mbox{for }z \in Q_{1, \delta}.$$ Finally, we compute, for $i = 1, \cdots, n-1$, \begin{align*} |\partial_{z_n} (\partial_y z)^{ni}| &= \left| \frac{ \delta [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)} \right|\\ &\le \frac{C\delta|x_0' + \delta z'/4|}{\varepsilon + |x_0' + \delta z'/4|^2} \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{align*} Therefore, \eqref{partial_y_z_lipschitz} is verified, and hence \eqref{b_ij_holder} follows as mentioned above. Now we define $$S_l:= \left\{z \in \mathbb{R}^n ~\big|~ |z'| < 1,~ (l-1) \delta < z_n < (l+1) \delta \right\}$$ for any integer $l$, and $$S: = \left\{z \in \mathbb{R}^n ~\big|~ |z'| < 1,~ |z_n| < 1\right\}.$$ Note that $Q_{1, \delta} = S_0$. As in the proof of Lemma \ref{harnack_inequality}, we define, for any $l \in \mathbb{Z}$, a new function $\tilde{w}$ by setting $$\tilde{w}(z) := w\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l.$$ We also define the corresponding coefficients, for $k = 1,2, \cdots, n-1$, $$\tilde{b}^{nk}(z)=\tilde{b}^{kn}(z) := (-1)^lb^{nk}\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l,$$ and for other indices, $$\tilde{b}^{ij}(z) := b^{ij}\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l.$$ Then $\tilde{w}$ and $\tilde{b}^{ij}$ are defined in the infinite cylinder $Q_{1, \infty}$. By \eqref{equation_w}, $\tilde{w}$ satisfies the equation $$-\partial_i (\tilde{b}^{ij} \partial_j \tilde{w}) = 0, \quad \mbox{in }Q_{1, \infty}.$$ Note that for any $l \in \mathbb{Z}$, $\tilde{b}(z)$ is orthogonally conjugated to $b\left(z', (-1)^l\left(z_n - 2l \delta\right)\right),$ for $z \in S_l$.
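The point of these sign flips is that $\tilde{b}$ is the conjugate $EbE$ of $b$ by the orthogonal reflection $E = \operatorname{diag}(1, \dots, 1, -1)$, so both the ellipticity bounds and the Hölder bounds survive the reflection. A quick numerical sanity check (random symmetric elliptic matrix; size illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# a random symmetric, uniformly elliptic matrix b
M = rng.standard_normal((n, n))
b = M @ M.T + n * np.eye(n)

# conjugation by the orthogonal reflection in the x_n direction
E = np.diag([1.0] * (n - 1) + [-1.0])
b_tilde = E @ b @ E

# this is exactly the sign flip of the (n,k) and (k,n) entries, k < n ...
expected = b.copy()
expected[-1, :-1] *= -1
expected[:-1, -1] *= -1
assert np.allclose(b_tilde, expected)

# ... and it leaves the spectrum, hence the ellipticity constants, unchanged
assert np.allclose(np.linalg.eigvalsh(b_tilde), np.linalg.eigvalsh(b))
```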
Hence, by \eqref{b_ij_ellpticity_2}, we have \begin{equation*} \frac{\lambda}{C} \le \tilde{b}(z) \le C\Lambda, \quad \mbox{for }z \in Q_{1,\infty}, \end{equation*} and, by \eqref{b_ij_holder}, \begin{equation*} \|\tilde{b} \|_{C^\alpha(\overline{S}_{l})} \le C, \quad \forall l \in \mathbb{Z}. \end{equation*} Applying Lemma \ref{gradient_lemma} on $S$ with $N = 1$, we have $$\| \nabla \tilde{w} \|_{L^\infty(\frac{1}{2}S)} \le C \| \tilde{w} \|_{L^2(S)}.$$ It follows that $$\| \nabla w \|_{L^\infty(Q_{1/2, \delta})} \le \frac{C}{\delta^{1/2}} \| w \|_{L^2(Q_{1, \delta})} \le C\|w\|_{L^\infty(Q_{1, \delta})},$$ for some positive constant $C$, depending only on $n, \alpha, R_0, \kappa, \lambda, \Lambda, \|a\|_{C^\alpha}, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Note that $\|(\partial_z y)\|_{L^\infty(Q_{1,\delta})} \le C$ by \eqref{transformation_lipschitz_2}, where $C$ depends only on $R_0, \kappa, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$; in particular, $C$ is independent of $\varepsilon$ and $\delta$. Reversing the change of variables \eqref{y_to_z} and \eqref{x_to_y}, we have $$\delta \| \nabla v\|_{L^\infty(\Omega_{x_0, \delta/8})} \le C \|v\|_{L^\infty(\Omega_{x_0, \delta/4})} \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}$$ by \eqref{u-u_0}. In particular, this implies $$|\nabla u(x_0)| \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{-1 + \gamma \sigma},$$ which concludes the proof of Theorem \ref{main_thm} after taking $\beta = \gamma \sigma/2$.\\ \end{proof} \section{Gradient estimates of elliptic systems} A natural question is whether the estimate in Theorem \ref{main_thm} can be extended to elliptic systems of divergence form. We believe that the answer is affirmative, and plan to pursue this in a subsequent paper. Following closely the proof of \eqref{insulated_upper_bound} in \cite{BLY2}, we give a preliminary gradient estimate for elliptic systems in this section.
We consider the vector-valued function $u = (u_1, \cdots, u_N)$, and for $1 \le \alpha, \beta \le n, 1 \le i,j \le N$, let $A^{\alpha \beta}_{ij}(x)$ be a function such that $$\| A^{\alpha \beta}_{ij} \|_{L^\infty(\Omega_{0,R_0})} \le \Lambda,$$ $$\int_{\Omega_{0,R_0}} A^{\alpha \beta}_{ij}(x) \partial_\alpha \varphi_i(x) \partial_\beta \varphi_j(x) \ge \lambda \int_{\Omega_{0,R_0}} |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(\Omega_{0,R_0}; \mathbb{R}^N),$$ for some $\lambda, \Lambda > 0$, where $\Omega_{0,R_0}$ is defined as in \eqref{domain_def_Omega}. We assume $A^{\alpha \beta}_{ij}(x) \in C^\mu(\Omega_{0,R_0})$ for some $\mu \in(0,1)$, and consider the system \begin{equation}\label{system} \left\{ \begin{aligned} -\partial_\alpha \left(A^{\alpha \beta}_{ij}(x) \partial_\beta u_j(x)\right) &=0 \quad \mbox{in }\Omega_{0,R_0},\\ A^{\alpha \beta}_{ij}(x) \partial_\beta u_j(x) \nu_\alpha(x) &= 0 \quad \mbox{on } \Gamma_+ \cup \Gamma_-,\\ \end{aligned} \right. \end{equation} for $i = 1,\cdots, N$, where $\Gamma_+, \Gamma_-$ are defined as in \eqref{domain_def_Omega}, $\nu = (\nu_1, \cdots, \nu_n)$ denotes the unit normal vector on $\Gamma_+$ and $\Gamma_-$, pointing upward and downward respectively. We have the following gradient estimate by essentially following the proof of Theorem 1.2 in \cite{BLY2}.\\ \begin{theorem}\label{system_thm} Let $u\in H^1(\Omega_{0,R_0}; \mathbb{R}^N)$ be a solution to \eqref{system} in dimension $n \ge 2$, with the coefficient $A^{\alpha \beta}_{ij}$ defined as above.
There exist positive constants $r_0$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\mu$, $\|A\|_{C^\mu(\Omega_{0,R_0})}$, $\|f\|_{C^{2}(\{|x'| \le R_0\})}$ and $\|g\|_{C^{2}(\{|x'| \le R_0\})},$ such that \begin{equation}\label{main_result_system} |\nabla u (x_0)| \le C \| u\|_{L^\infty(\Omega_{0,R_0})} \left(\varepsilon + |x_0'|^2 \right) ^{-1/2}, \end{equation} for all $\varepsilon \in (0,1)$, $x_0 \in \Omega_{0 , r_0}$.\\ \end{theorem} \begin{remark} The elliptic systems we have considered include the linear systems of elasticity: $n = N$, and the coefficients $A^{\alpha \beta}_{ij}$ satisfy $$A^{\alpha \beta}_{ij} = A^{\beta \alpha}_{ji} = A^{i \beta}_{\alpha j},$$ and for all $n \times n$ symmetric matrices $\{\xi_{\alpha}^i\}$, $$\lambda |\xi|^2 \le A^{\alpha \beta}_{ij}\xi_{\alpha}^i \xi_{\beta}^j \le \Lambda|\xi|^2.$$\\ \end{remark} \begin{proof}[Proof of Theorem \ref{system_thm}] Let $u\in H^1(\Omega_{0,R_0}; \mathbb{R}^N)$ be a solution to \eqref{system}. We perform the changes of variables \eqref{x_to_y} and \eqref{y_to_z}. For any $1 \le i,j \le N$, we define $$B_{ij}^{\alpha \beta}(z) = \frac{(\partial_y z)(A^{\alpha \beta}_{ij})(\partial_y z)^t}{\det (\partial_y z)},$$ and let $v(z) = u(x)$. Then $v$ satisfies \begin{equation*} \left\{ \begin{aligned} -\partial_\alpha \left(B^{\alpha \beta}_{ij}(z) \partial_\beta v_j(z)\right) &=0 \quad \mbox{in } Q_{1, \delta},\\ B^{n \beta}_{ij}(z) \partial_\beta v_j(z) &= 0 \quad \mbox{on } \{z_n = -\delta\} \cup \{z_n = \delta\}, \end{aligned} \right. \end{equation*} for $i = 1,\cdots, N$, where $Q_{s,t}$ is defined as in \eqref{Q_s_t}. 
As in the proof of Theorem \ref{main_thm}, we can show that $$\| B^{\alpha \beta}_{ij} \|_{L^\infty(Q_{1,\delta})} \le C\Lambda, \quad \| B^{\alpha \beta}_{ij} \|_{C^\mu(\bar{Q}_{1,\delta})} \le C,$$ $$\int_{Q_{1,\delta}} B^{\alpha \beta}_{ij}(z) \partial_\alpha \varphi_i(z) \partial_\beta \varphi_j(z) \ge \frac{\lambda}{C} \int_{Q_{1,\delta}} |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(Q_{1,\delta}; \mathbb{R}^N),$$ where $C$ is a positive constant that depends only on $n$, $N$, $\mu$, $R_0$, $\kappa$, $\lambda$, $\Lambda$, $\|A\|_{C^\mu}$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Then we argue as in the proof of Theorem \ref{main_thm} to obtain $$|\nabla v(0)| \le C\|v\|_{L^\infty(Q_{1, \delta})},$$ which is \eqref{main_result_system} after reversing the changes of variables \eqref{x_to_y} and \eqref{y_to_z}.\\ \end{proof}
\section{Introduction} Motivated by Masuda's equivariant cohomological rigidity result for toric symplectic manifolds \cite[Theorem 1.1]{M}, Franz--Yamanaka \cite{FY} recently showed that the isomorphism type of a GKM graph is encoded in its graph cohomology. Hence, for a GKM manifold, the equivariant cohomology contains complete information on the combinatorics of the one-skeleton, which translates to a complete understanding of the combinatorial aspects of the entire orbit type stratification. Furthermore, in \cite{All} and \cite{Q}, combinatorial aspects of actions of compact Lie groups are related to the spectrum of the equivariant cohomology ring, leading among other things to an algebraic criterion for uniformity of an action. These results naturally raise the question to what extent the combinatorics of a general torus action is determined by its equivariant cohomology. More specifically, we consider the connected orbit type stratification of an action of a compact torus $T$ on a space $X$, i.e.\ the collection of all connected components of fixed point sets $X^U$ where $U\subset T$ is some subtorus. It naturally carries the combinatorial structure of a poset as well as a function which remembers the kernel of the action on each element of the stratification. We ask whether this combinatorial data is encoded in the rational equivariant cohomology algebra. In full generality, such a statement cannot hold true, as the equivariant cohomology algebra of any torus action on a sphere with nonempty, connected fixed point set is the same as that of the trivial action, see Example \ref{ex:spheres}. In this paper we argue that the reason for such behavior is to be found in the existence of unramified elements in the orbit type stratification. We define an element in the orbit type stratification to be \emph{ramified} if it is either minimal, or, recursively, minimal with the property that it contains two distinct ramified elements -- see Definition \ref{defn:ramified}. 
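The recursive notion of ramification just described is purely combinatorial and can be evaluated on any finite poset; a minimal sketch (the poset encoding and function names are ours, not from the paper):

```python
from itertools import combinations

def ramified(elements, leq):
    """Ramified elements of a finite poset: an element is ramified if it is
    minimal, or if it is minimal among the elements lying above some pair of
    distinct ramified elements."""
    # start with the minimal elements
    ram = {c for c in elements
           if not any(leq(d, c) and d != c for d in elements)}
    changed = True
    while changed:  # iterate the recursive definition to a fixpoint
        changed = False
        for c in elements:
            if c in ram:
                continue
            for d1, d2 in combinations(sorted(ram), 2):
                if not (leq(d1, c) and leq(d2, c)):
                    continue
                # is c minimal among the elements containing both d1 and d2?
                if not any(b != c and leq(b, c) and leq(d1, b) and leq(d2, b)
                           for b in elements):
                    ram.add(c)
                    changed = True
                    break
    return ram

# A "GKM-like" poset: fixed points p, q, r, one-dimensional strata e1 (above
# p, q) and e2 (above q, r), and the whole space M on top.
order = {("p", "e1"), ("q", "e1"), ("q", "e2"), ("r", "e2"),
         ("p", "M"), ("q", "M"), ("r", "M"), ("e1", "M"), ("e2", "M")}
leq = lambda a, b: a == b or (a, b) in order
print(sorted(ramified(["p", "q", "r", "e1", "e2", "M"], leq)))
# -> ['M', 'e1', 'e2', 'p', 'q', 'r']: every stratum is ramified
```

By contrast, for a chain $F\subset A\subset X$, the situation of a sphere with connected fixed point set, only the bottom element $F$ is ramified: no element above $F$ contains two distinct ramified elements.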
Our first result, Theorem \ref{thm:encodesstratification}, which holds under mild topological assumptions on the space but without any further conditions on the action, states: \begin{thm*} The rational equivariant cohomology of a compact $T$-space $X$ encodes the subposet $\overline{\chi}$ of ramified elements in the connected orbit type stratification, together with the function $\overline{\lambda}:\overline{\chi}\to \{\textrm{connected subgroups of }T\}$ that associates to an element $C\in\overline{\chi}$ the identity component of the kernel of the $T$-action on $C$. \end{thm*} Our technique of choice to prove this theorem is the new notion of Thom system, which we introduce in Definition \ref{defn:thomsystem}. It may be formulated in the general context of (graded) commutative rings and encodes algebraic properties of the equivariant Thom classes of the path-components of the fixed point set in case of a smooth action. While this connection to Thom classes plays a major role in the second half of the paper, the theorem above could also be proved using the machinery developed in \cite{All} and \cite{Q}, see Remark \ref{rem:somewhatdually}. In more specialized settings, the subset of ramified elements may already contain further information on the stratification. We note that the ramification condition in the statement below, which is Corollary \ref{cor:stratifequivformal}, is satisfied in particular if the fixed points of the action are isolated. \begin{cor*} Let $X$ be an equivariantly formal, compact $T$-space such that every isotropy codimension $1$ element of the orbit type stratification poset $\chi$ is ramified. Then $H_T^*(X)$ encodes $\chi$ up to rational equivalence. If $X$ is additionally a manifold and the $T$-action is smooth, then all of $\chi$ is encoded in $H_T^*(X)$. 
\end{cor*} In Section \ref{sec:cohom} we investigate to what extent the equivariant cohomology algebra of a $T$-space encodes all other (equivariant) cohomology algebras of the orbit type strata. A starting point is the following result (cf.\ Proposition \ref{prop:fpdimension}). \begin{prop*} The sum of all Betti numbers of each individual path-component of $X^T$ is encoded in $H_T^*(X)$. If $X$ is equivariantly formal then the individual sums of all Betti numbers of the components of $X^U$ are encoded in $H_T^*(X)$ for every subtorus $U\subset T$. \end{prop*} In particular, this implies the first half of the following corollary. The second part is a consequence of the theorem above and was proved first in \cite{FY}. \begin{cor*} For an equivariantly formal compact orientable $T$-manifold $M$ the equivariant cohomology algebra $H_T^*(M)$ encodes whether the action is of GKM type or not. In case the action is GKM, $H_T^*(M)$ also encodes the GKM graph of the action. \end{cor*} In general, one cannot expect $H_T^*(X)$ to contain more specific information like individual Betti numbers of the strata or multiplicative structure, even for equivariantly formal actions on compact manifolds whose orbit type stratification consists only of ramified elements (see Example \ref{ex:cohomnotencoded}). However, under stronger conditions, we show in Theorem \ref{thm:encodescohomology}: \begin{thm*} Let $M$ be an equivariantly formal, compact orientable $T$-manifold such that the map $H^*(M)\rightarrow H^*(X)$ is surjective for all components $X$ of $M^T$. Then the equivariant cohomology $H_T^*(M)$ encodes the connected orbit type stratification $\chi$ of $M$ as well as for any $C, D\in \chi$ with $C\subset D$ the respective equivariant cohomology algebras and the map $H_T^*(D)\rightarrow H_T^*(C)$ induced by the inclusion. \end{thm*} The surjectivity condition is trivially satisfied in case the fixed point set of the action consists of isolated points. 
In Section \ref{sec:integral} we conclude the paper with some remarks on the question of which additional information on the orbit type stratification can be obtained from the cohomology by considering integral instead of rational coefficients.\\ \noindent {\bf Acknowledgements.} This work is part of a project funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 452427095. The second author is grateful to the Max Planck Institute for Mathematics in Bonn for its hospitality and financial support. \section{Preliminaries} In this paper we consider continuous actions of a compact torus $T$. All spaces are assumed to be locally contractible, Hausdorff, and have finite-dimensional rational (singular) cohomology. In parts of Section \ref{sec:cohom} we will restrict to smooth manifolds. Cohomology is always singular with coefficients in $\mathbb{Q}$ if not specified otherwise, and we usually suppress coefficients from the notation. When considering elements of a graded object, these will always be assumed to be homogeneous in the absence of further specification. Given a $T$-space $X$, as well as a subgroup $U\subset T$, we denote by $X^U$ the set of points in $X$ fixed by $U$. Note that if $X$ is a smooth manifold and the action is smooth, then every component of $X^U$ is a smooth submanifold. The components of all $X^U$ form a poset: \begin{defn}\label{defn:orbitstratification} Let $X$ be a $T$-space. The connected orbit type stratification of $X$ is the poset $\chi=\{\textrm{connected components of }X^U\mid U\subset T \textrm{ connected, closed subgroup}\}$, together with the function $\lambda\colon \chi\rightarrow\{\textrm{connected, closed subgroups of }T\}$ such that $\lambda(C)$ is the identity component of the kernel of the restricted $T$-action on $C$. 
\end{defn} To any $T$-space $X$ one associates its equivariant cohomology $H^*_T(X)$, which is the cohomology of its Borel construction $X_T:=ET\times_T X$, equipped with the structure of $H^*(BT)$-algebra induced by the natural projection $ET\times_T X\to BT$. We will abbreviate $R:=H^*(BT)$. \begin{lem}\label{lem:equivformal} For a $T$-space $X$ the following conditions are equivalent: \begin{enumerate} \item $H^*_T(X)$ is a free $R$-module. \item $H^*_T(X) = R\otimes H^*(X)$ as an $R$-module. \item The map $H^*_T(X)\to H^*(X)$ induced by a fiber inclusion $X \to X\times_T ET$ is surjective. \item $\dim_{\mathbb{Q}} H^*(X^T) = \dim_{\mathbb{Q}} H^*(X)$. \end{enumerate} \end{lem} These equivalences are standard, see e.g.\ \cite[Corollary 4.2.3]{AP} and \cite[Corollary IV.2]{Hs}. \begin{defn} If a $T$-space satisfies the equivalent conditions of Lemma \ref{lem:equivformal} then we say that it is equivariantly formal. \end{defn} Let us recall the Borel localization theorem, see e.g.\ \cite[Theorem 3.2.6]{AP}: for any multiplicatively closed subset $S\subset R$, we set \[ X^S = \{x\in X\mid S^{-1}H_T^*(Tx)\neq 0\}, \] where $S^{-1}$ denotes localization at $S$. \begin{thm} Assume that either $X$ is compact or that it is paracompact and that the set of identity components of isotropy subgroups is finite. Then the inclusion $X^S\to X$ induces an isomorphism of $S^{-1}R$-modules $S^{-1}H^*_T(X) \longrightarrow S^{-1}H^*_T(X^S)$. \end{thm} We will be particularly interested in the following situation: for a subtorus $U\subset T$, let $\mathfrak{p}_U = \ker (H^*(BT)\to H^*(BU))$, and $S = R\setminus {\mathfrak{p}}_U$. Then $X^S = X^U$, the fixed point set of the restricted $U$-action, and we obtain an isomorphism \begin{equation}\label{eq:borelU} S^{-1}H^*_T(X) \longrightarrow S^{-1}H^*_T(X^U). \end{equation} For $U=T$ we obtain in particular that the kernel of the canonical map $H^*_T(X)\to H^*_T(X^T)$ is the torsion submodule of $H^*_T(X)$. 
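As a concrete instance of this setup (our illustration, not from the text): take $T=(S^1)^2$ and let $U$ be the first circle factor. Then $R=\mathbb{Q}[t_1,t_2]$ with $\deg t_i=2$, the restriction $R\to H^*(BU)=\mathbb{Q}[t_1]$ sends $t_1\mapsto t_1$ and $t_2\mapsto 0$, and

```latex
\mathfrak{p}_U=\ker\bigl(\mathbb{Q}[t_1,t_2]\to\mathbb{Q}[t_1]\bigr)=(t_2),
\qquad
S=R\setminus\mathfrak{p}_U=\{f\in\mathbb{Q}[t_1,t_2]\mid t_2\nmid f\}.
```

Localizing at $S$ inverts every polynomial not divisible by $t_2$, so every module annihilated by a power of $t_2$ is killed; this is the mechanism by which the isomorphism \eqref{eq:borelU} isolates the fixed point set $X^U$.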
Denoting by $X_1 = \{x\in X\mid \dim Tx \leq 1\}$ the one-skeleton of the action, we have a sequence \begin{equation}\label{eq:shortexact} 0 \longrightarrow H^*_T(X) \longrightarrow H^*_T(X^T)\longrightarrow H^{*+1}_T(X_1,X^T), \end{equation} which, for an equivariantly formal action, is exact at $H^*_T(X)$. \begin{lem}[Chang--Skjelbred Lemma {\cite[Lemma 2.3]{CS}}]\label{lem:CSL} Assume that either $X$ is compact or that it is paracompact and that the set of identity components of isotropy subgroups is finite. Then for an equivariantly formal action of a torus $T$ on $X$ the sequence \eqref{eq:shortexact} is also exact at $H^*_T(X^T)$. \end{lem} In the following proposition we collect some well-known properties of equivariantly formal actions. \begin{prop}\label{prop:eqforminherited} Consider an equivariantly formal $T$-action on a space $X$. Then the following hold true: \begin{enumerate} \item For any subtorus $U\subset T$, the induced $U$-action on $X$ is equivariantly formal. \item For any subtorus $U\subset T$, the induced $T$-action on every component of $X^U$ is equivariantly formal. \end{enumerate} \end{prop} \begin{proof} We observe that for any subtorus $U\subset T$ the map $H^*_T(X)\to H^*(X)$ induced by fiber inclusion factors as $H^*_T(X)\to H^*_U(X)\to H^*(X)$. Thus, if $H^*_T(X)\to H^*(X)$ is surjective, so is $H^*_U(X)\to H^*(X)$. The first statement then follows from Lemma \ref{lem:equivformal}. Lemma \ref{lem:equivformal} then implies that $\dim H^*(X^T) = \dim H^*(X) = \dim H^*(X^U)$. As $(X^U)^T= X^T$, this implies, again via Lemma \ref{lem:equivformal}, that the $T$-action on $X^U$ is equivariantly formal. In general, a $T$-action is equivariantly formal if and only if the action on every connected component is equivariantly formal. 
\end{proof} \section{Orbit type stratification} The purpose of this section is to recover information on the combinatorics of the orbit type stratification of a $T$-action from the algebraic data of its equivariant cohomology algebra $H^*_T(X)$. To this end we introduce the notion of Thom system, see Definition \ref{defn:thomsystem}. It is motivated by the fact that in certain smooth settings, a preferred choice of Thom system in the equivariant cohomology will be given by the equivariant Thom classes of the path-components of the fixed point set (cf.\ Lemma \ref{lem:minimalthomsystem}). \begin{rem}\label{rem:somewhatdually} Somewhat dually to this approach, Quillen \cite{Q} and Allday \cite{All}, see also \cite[Section 3.6]{AP}, related the combinatorics of the action to the spectrum of the even degree part of the equivariant cohomology ring, by considering ideals of the form ${\mathfrak{p}}(K,c) = \ker(H^*_T(X)\to H^*(BK))$ where $K\subset T$ is a subtorus, $c$ a component of $X^K$, and the map is induced by the inclusion of a point in $c$. The set of all such pairs $(K,c)$ forms a poset $\mathcal{T}(X)$, where $(K,c)\leq (L,d)$ if and only if $K\subset L$ and $d\subset c$. The results in \cite{All} can be used to reconstruct $\mathcal{T}(X)$ from $H^*_T(X)$, in some sense substituting for the roles of Theorems \ref{thm:detectthomsys} and \ref{thm:inclusiondetector} in this section. The poset $\mathcal{T}(X)$ is not the same as the connected orbit type stratification (see Definition \ref{defn:orbitstratification}) as it does not detect whether, for $(K,c)\leq (L,d)$, the inclusion $d\subset c$ is strict. Rather, this poset corresponds, in the proof of Theorem \ref{thm:encodesstratification} below, to the poset $\chi'$. As we are interested in the more geometric orbit type stratification, we are led to the concept of ramification, see Definition \ref{defn:ramified} below. 
\end{rem} \begin{defn}\label{defn:thomsystem} For a (graded) commutative ring $A$, we call a (homogeneous) collection of elements $\tau_1,\ldots,\tau_k\in A$ a Thom system if \begin{itemize} \item $\tau_i\cdot \tau_j$ is nilpotent whenever $i\neq j$ \item $\tau_i$ is not nilpotent \item for any system $\alpha_1,\ldots,\alpha_l\in A$ satisfying the two preceding properties we have $l\leq k$. \end{itemize} \end{defn} \begin{lem}\label{lem:trivialaction'} Let $X$ be a path-connected space with trivial $T$-action and $\alpha\in H_T^*(X)$. The following are equivalent: \begin{enumerate}[(i)] \item $\alpha$ is not nilpotent. \item $\alpha$ restricts to a nontrivial element in $H_T^*(*)$ for any point in $X$. \item multiplication with $\alpha$ is injective on $H_T^*(X)$ \end{enumerate} \end{lem} \begin{proof} We have $H_T^*(X)=R\otimes H^*(X)$ as $R$-algebras. Thus an element is nilpotent if and only if it is contained in $R\otimes H^+(X)$. This proves the equivalence of $(i)$ and $(ii)$. Clearly also $(iii)$ implies $(i)$. Finally, assume that $(ii)$ holds and write the nontrivial $R\otimes H^0(X)$-component of $\alpha$ as $f\otimes 1$. It follows that multiplying an element of $R\otimes H^{\geq k}(X)$ with $\alpha$ multiplies its $R\otimes H^k(X)$ component with $f$. Thus multiplication with $\alpha$ is injective. \end{proof} \begin{prop}\label{prop:TSchar'} Let $X$ be a space with trivial $T$-action. Then $H_T^*(X)$ admits a Thom system. A collection $\tau_1,\ldots,\tau_k\in H_T^*(X)$ is a Thom system if and only if $X$ has $k$ connected components $X_1,\ldots,X_k$ which can be numbered in a way such that for any choice of points $p_i\in X_i$ \begin{itemize} \item $\tau_i$ restricts to $0$ in $H_T^*(p_j)$ for $i\neq j$ \item the restriction of $\tau_i$ to $H_T^*(p_i)$ is not $0$. 
\end{itemize} \end{prop} \begin{proof} Let $\tau_1,\ldots,\tau_l\in H_T^*(X)$ be elements satisfying the first two conditions in the definition of a Thom system and let $X_1,\ldots,X_k$ be the components of $X$. Choose $p_i\in X_i$. As the $\tau_i$ are not nilpotent, Lemma \ref{lem:trivialaction'} shows that they restrict nontrivially to at least one of the $H_T^*(p_i)$. Also, since the $H_T^*(p_i)$ are integral domains it follows that no two of the $\tau_i$ restrict nontrivially to the same point. Thus it follows that $l\leq k$ and that, if $l=k$, then the $\tau_i$ correspond bijectively to the $X_i$ in the manner described in the proposition (after possibly adjusting the order). Thus it remains to argue that a Thom system has $k$ elements. This follows from the fact that the first two conditions in the definition of a Thom system are satisfied by the elements $e_1,\ldots,e_k$ defined by the condition that $e_i$ restricts to $1$ in $H_T^*(X_i)$ and to $0$ in $H_T^*(X_j)$ for $j\neq i$. \end{proof} For a subtorus $U\subset T$ of a torus $T$ we denote \[ \mathfrak{p}_U:=\ker (H^*(BT)\rightarrow H^*(BU)) ,\] as well as $S_U:=R\setminus\mathfrak{p}_U$. For a space $X$ with $T$-action, we will also consider $H^*_U(X)$ as an algebra over $R$, via the map $H^*(BT)\rightarrow H^*(BU)$. \begin{lem}\label{lem:kernel} Let $X$ be a compact $T$-space, and $U\subset T$ a subtorus which acts trivially on $X$. If $x\in \ker ( H_T^*(X)\rightarrow H_U^*(X))$ then it is nilpotent in $H_T^*(X)/\mathfrak{p}_UH_T^*(X)$. \end{lem} \begin{proof} Let $S$ be a subtorus of $T$ which is complementary to $U$, i.e.\ $T=U\times S$. Then $R=H^*(BU)\otimes H^*(BS)$ and $H_T^*(X)\cong H^*(BU)\otimes H_S^*(X)$ as $R$-algebras with the obvious $R$-algebra structure. 
Furthermore, $H_U^*(X)\cong H^*(BU)\otimes H^*(X)$ and the restriction $r\colon H_T^*(X)\rightarrow H_U^*(X)$ corresponds to \[\mathrm{id}_{H^*(BU)}\otimes r'\colon H^*(BU) \otimes H_S^*(X)\rightarrow H^*(BU)\otimes H^*(X)\] where $r'$ is the restriction $H_S^*(X)\rightarrow H^*(X)$. Both algebras above inherit a bigrading with respect to the tensor product. The bigrading is respected by $r$. If $x$ lies in the kernel of $r$ then its $H^*(BU)\otimes H^0_S(X)$ component is zero. Hence $x\in H^*(BU)\otimes H^+_S(X)$. Note that for $N$ large enough, any product of $N$ elements of $H^+_S(X)$ lies in $H^+(BS)\cdot H_S^*(X)$. This follows from the fact that $H_S^*(X)$ is finitely generated as an $H^*(BS)$-module \cite[Prop.\ 3.10.1]{AP}, where we put $N$ larger than the highest degree in an $H^*(BS)$-generating set of $H_S^*(X)$. Consequently, $x^N$ lies in $H^+(BS)\cdot H_T^*(X)$ and in particular in $\mathfrak{p}_U\cdot H_T^*(X)$. \end{proof} \begin{lem}\label{lem:nilpotencelem} Let $X$ be a compact $T$-space and $U\subset T$ a subtorus. Then for any $x\in H_T^*(X)$ the image of $x$ in $H_U^*(X^U)$ is nilpotent if and only if the image of $x$ in $S_U^{-1}(H_T^*(X)/\mathfrak{p}_UH_T^*(X))$ is nilpotent. \end{lem} \begin{proof} As $\mathfrak{p}_UH^*_T(X)$ is contained in the kernel of $H^*_T(X)\to H^*_U(X)$, the restriction map $H_T^*(X)\to H_U^*(X^U)$ factors as \[ H^*_T(X)\to H^*_T(X)/\mathfrak{p}_UH^*_T(X) \to H_U^*(X^U). \] Applying localization at $S_U$, it follows that if the image of $x$ in $S_U^{-1}H_T^*(X)/S_U^{-1}\mathfrak{p}_UH_T^*(X)$ is nilpotent, then the same holds for the image in $S_U^{-1}H_U^*(X^U)$. An element of $H_U^*(X^U)$ is nilpotent if and only if it is nilpotent in $S_U^{-1}H_U^*(X^U)$, which proves one direction. Assume conversely that the image of $x$ in $H_U^*(X^U)$ is nilpotent. Then some power $x^k$ maps to the kernel of $H_T^*(X^U)\rightarrow H_U^*(X^U)$. 
By Lemma \ref{lem:kernel}, some higher power $x^N$ satisfies $f(x^N)\in \mathfrak{p}_U H_T^*(X^U)$, where $f$ is the map $H_T^*(X)\rightarrow H_T^*(X^U)$. By Borel localization the map \[S_U^{-1}f\colon S_U^{-1} H_T^*(X)\rightarrow S_U^{-1}H_T^*(X^U)\] is an isomorphism of $S_U^{-1}R$-modules, see \eqref{eq:borelU}. Since $S_U^{-1}f(x^N)$ is in $S_U^{-1}\mathfrak{p}_U\cdot S_U^{-1}H_T^*(X^U)$, it follows that the image of $x^N$ in $S_U^{-1}H_T^*(X)$ lies in $S_U^{-1}\mathfrak{p}_U\cdot S_U^{-1}H_T^*(X)$. The lemma follows from the fact that localization commutes with taking quotients, i.e.\ $S_U^{-1}H_T^*(X)/S_U^{-1}\mathfrak{p}_U\cdot S_U^{-1}H_T^*(X)\cong S_U^{-1}(H_T^*(X)/\mathfrak{p}_UH_T^*(X))$. \end{proof} \begin{rem}\label{rem:sphericalstuff} With regard to the above lemma and the theorem below, we remark that in general $S_U^{-1}(H_T^*(X)/\mathfrak{p}_UH_T^*(X))$ and $S_U^{-1}H_U^*(X^U)$ are not isomorphic. Also, we only obtain criteria for elements to restrict to nilpotent elements in $H_U^*(X^U)$ without a precise description of the kernel. The reason for this is the fact that in general the kernel of the restriction $H_T^*(X)\rightarrow H_U^*(X)$ is larger than $\mathfrak{p}_U H_T^*(X)$ and may additionally contain certain Massey products involving elements of $\mathfrak{p}_U$. This phenomenon is discussed in \cite{AZ} under the name of spherical actions. \end{rem} \begin{thm}\label{thm:detectthomsys} Let $X$ be a compact $T$-space and $U\subset T$ a subtorus. Then there are elements $\tau_1,\ldots,\tau_k\in H_T^*(X)$ which restrict to a Thom system of $H_U^*(X^U)$. A set of elements $\tau_1,\ldots,\tau_k\in H_T^*(X)$ has this property if and only if it restricts to a Thom system in $S^{-1}_U(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$. 
\end{thm} \begin{proof} By Lemma \ref{lem:nilpotencelem}, $\tau_1,\ldots,\tau_r\in H_T^*(X)$ satisfy the first two conditions of a Thom system when restricted to $H_U^*(X^U)$ if and only if they do so in $S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$. Let $\eta_1,\ldots,\eta_l\in S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$ be a set satisfying the nilpotence conditions of a Thom system. Then multiplying the $\eta_i$ with elements of $S_U$ preserves these conditions. Consequently, we may assume that the $\eta_i$ are restrictions from $H_T^*(X)$. Since $H_U^*(X^U)$ admits a Thom system by Proposition \ref{prop:TSchar'}, it follows that $l\leq k$, where $k$ is the number of elements in a Thom system of $H_U^*(X^U)$, i.e., the number of path components of $X^U$. In particular, $S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$ admits a Thom system which lies in the image of $H_T^*(X)\rightarrow S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$. The theorem follows if we show that the image of $H_T^*(X)\rightarrow H_U^*(X^U)$ contains a Thom system of $H_U^*(X^U)$. Let $X_1,\ldots,X_k$ be the components of $X^U$ and let $e_i\in H_T^*(X^U)$ be the element which restricts to $1\in H_T^*(X_i)$ and to $0\in H_T^*(X_j)$ for $i\neq j$. By Borel localization there are polynomials $f_i\in S_U$ such that $f_i\cdot e_i$ is the restriction of some $\tau_i\in H_T^*(X)$. It remains to check that the $\tau_i$ restrict to a Thom system of $H_U^*(X^U)$. Choose points $p_i\in X_i$. The composition \[H_T^*(X)\rightarrow H_T^*(X^U)\rightarrow H_U^*(X^U)\rightarrow H_U^*(p_i)\] is $R$-linear. Thus $\tau_i$ maps to $f_i\cdot 1\in H_U^*(p_i)$, which is nonzero because $f_i\in S_U$, and to $0\in H_U^*(p_j)$ for $i\neq j$. Thus the $\tau_i$ restrict to a Thom system in $H_U^*(X^U)$ by Lemma \ref{lem:trivialaction'}. 
\end{proof} \begin{defn} For a subtorus $U\subset T$, we call a set of elements $\tau_1,\ldots,\tau_k\in H_T^*(X)$ with the property as in Theorem \ref{thm:detectthomsys} a \emph{$U$-local Thom system} (of $H_T^*(X)$). Given such a system, we denote by $F_U(\tau_i)$ the unique component of $X^U$ such that $\tau_i$ restricts to a nonzero element in $H_U^*(p_i)$ for any point $p_i\in F_U(\tau_i)$. \end{defn} \begin{thm} \label{thm:inclusiondetector} Let $X$ be a compact $T$-space and $H\subset U\subset T$ subtori. Let $\tau_1,\ldots,\tau_k\in H_T^*(X)$ be a $U$-local Thom system and $\eta_1,\ldots,\eta_l$ be an $H$-local Thom system. Then $F_U(\tau_i)\subset F_H(\eta_j)$ if and only if there is some $f\in S_H$ such that the image of $f\tau_i-\eta_j\tau_i$ in $S^{-1}_U(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$ is nilpotent. \end{thm} \begin{proof} For $i=1,\ldots,k$, let $p_i\in F_U(\tau_i)$ and let $r_i\colon H_T^*(X)\rightarrow H_U^*(p_i)$ be the natural restriction map. We claim that for any $x\in H_T^*(X)$ and $f\in R$ we have $r_i(x)=f+\mathfrak{p}_U\in R/\mathfrak{p}_U\cong H_U^*(p_i)$ if and only if the image of $f\tau_i-x\tau_i$ in $S^{-1}_U(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$ is nilpotent. To prove the claim, recall that by Lemma \ref{lem:nilpotencelem} the latter condition is equivalent to $f\tau_i-x\tau_i$ being nilpotent in $H_U^*(X^U)$. By Lemma \ref{lem:trivialaction'} this is again equivalent to $r_j(f\tau_i-x\tau_i)$ being $0$ for $j=1,\ldots,k$. Since $r_j(\tau_i)=0$ for $j\neq i$ this depends only on $r_i(f\tau_i-x\tau_i)$ being zero. But this is the case if and only if $r_i(f\cdot 1)=r_i(x)$, which proves the claim. The inclusion $F_U(\tau_i)\subset F_H(\eta_j)$ holds if and only if $p_i\in F_H(\eta_j)$. This is the case if and only if the image of $\eta_j$ in $H_H^*(p_i)$ is not $0$. 
Since this restriction map factors through $H_U^*(p_i)$, the condition is equivalent to $r_i(\eta_j)\notin \ker(H^*(BU)\rightarrow H^*(BH))=\overline{\mathfrak{p}_H}$, where the latter denotes the image of $\mathfrak{p}_H$ in $H^*(BU)$. This is equivalent to $r_i(\eta_j)=r_i(f\cdot 1)$ for some $f\in S_H$, proving the theorem. \end{proof} Let $\chi$ be the connected orbit type stratification, see Definition \ref{defn:orbitstratification}. Let us introduce the following recursive definition: \begin{defn}\label{defn:ramified} We call an element $C\in \chi$ \emph{ramified} if it is either minimal in the poset $\chi$, or there exist two ramified elements $D_1\neq D_2\in\chi$ with the property that $D_1,D_2\subset C$ and $C$ is minimal with respect to this property. \end{defn} \begin{defn} We define $\overline{\chi}$ to be the subposet of $\chi$ given by all ramified elements in $\chi$. \end{defn} \begin{ex}\label{ex:spheres} Consider a $T$-action on a sphere $S^n$ with nonempty, connected fixed point set $F$. Then the action is automatically equivariantly formal -- for even $n$ any action on $S^n$ is; for odd $n$ this is because then the total Betti number of the fixed point set is necessarily $2$. Note that all elements of the connected orbit type stratification except for the minimal one are unramified. We choose an $R$-module basis of $H_T^*(S^n)$ of the form $\{1,a\}$, with $a\in H^n_T(S^n)$. By replacing $a$ by an element of the form $a+f$, with $f\in H^n(BT)$, we may assume that $a$ restricts to an element in $R\otimes H^+(F)$. As the restriction map $H_T^*(S^n) \to H_T^*(F) = R\otimes H^*(F)$ is injective, this implies that $a^2=0$ (as $H^+(F)$ is concentrated in only one degree). We have shown that the equivariant cohomology $H_T^*(S^n)$ is, as an $R$-algebra, isomorphic to that of the trivial $T$-action on $S^n$. In particular, equivariant cohomology cannot distinguish these actions. 
However, among those indistinguishable actions, many different orbit type stratifications are possible. \end{ex} \begin{thm}\label{thm:encodesstratification} The equivariant cohomology of a compact $T$-space $X$ encodes the subposet $\overline{\chi}$ of ramified elements in the poset $\chi$ of orbit type strata, together with the restriction $\overline{\lambda}:\overline{\chi}\to \{\textrm{connected subgroups of }T\}$ of $\lambda$. \end{thm} \begin{proof} We construct a poset $\overline{\chi}'$ and a map $\lambda'\colon \overline{\chi}'\to \{\textrm{connected subgroups of }T\}$ together with an isomorphism $\varphi\colon \overline{\chi}'\rightarrow \overline{\chi}$ of posets satisfying $\lambda'=\lambda\circ\varphi$. We fix a $U$-local Thom system $\tau^U_1,\ldots,\tau^U_{k_U}\in H_T^*(X)$ for every subtorus $U\subset T$ and define $\chi'$ to be the set of tuples $(\tau_i^U,U)$. We write $(\tau_j^U,U)\leq (\tau_i^H,H)$ whenever $H\subset U$ and $f\tau_j^U-\tau_i^H\tau_j^U$ is nilpotent in $S^{-1}_U(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$ for some $f\in S_H$. Then by Theorem \ref{thm:inclusiondetector} this turns $\chi'$ into a partially ordered set which corresponds bijectively to the poset of pairs $(C,U)$, where $C$ is a component of $X^U$. The map $(\tau_i^U,U)\mapsto F_U(\tau_i^U)$ corresponds to the forgetful map $(C,U)\mapsto C$ and gives a surjection $\varphi\colon\chi'\rightarrow \chi$ compatible with the poset structure. In analogy with $\overline{\chi}$, we call an element $C\in \chi'$ ramified if it is either minimal in $\chi'$ or there exist two ramified elements $D_1\neq D_2$ in $\chi'$ with the property that $C$ is minimal among the elements containing $D_1$ and $D_2$. Furthermore, we set $\lambda'(\tau_i^U,U)=U$. We claim that $\varphi$ restricts to a bijection $\overline{\chi}'\rightarrow\overline{\chi}$ between ramified subsets and that $\lambda'=\lambda\circ \varphi$ on $\overline{\chi}'$. 
Note first that for any $C\in \chi'$ we have $\mathrm{codim}\, \lambda'(C)\geq \mathrm{codim}\, \lambda(\varphi(C))$ and that any element $C\in\overline{\chi}'$ has to satisfy $\lambda'(C)=\lambda (\varphi(C))$ due to the minimality condition. Also if $\varphi(C)=\varphi(D)$ for $C,D\in \overline{\chi}'$ then in particular $\lambda'(C)=\lambda'(D)$. But then the properties of Thom systems yield $C=D$ proving injectivity of $\varphi|_{\overline{\chi}'}$. It remains to prove that $\varphi(\overline{\chi}')=\overline{\chi}$. To see this we use induction over the isotropy codimension, where the statement is obvious for the fixed points in isotropy codimension $0$. Let $\overline{\chi}'^k$ (resp.\ $\overline{\chi}^k$) be the subset consisting of those elements $x$ for which the codimension of $\lambda'(x)$ (resp.\ $\lambda(x)$) is $k$ or less. Suppose we have shown that $\varphi(\overline{\chi}'^k)=\overline{\chi}^k$. Let $C\in \overline{\chi}^{k+1}$ with $\lambda(C)$ of codimension $k+1$. Then there is some $C'\in \chi'$ with $\varphi( C')=C$ and $\mathrm{codim}\, \lambda'(C')=k+1$. We show that $C'$ is ramified and hence $\varphi(\overline{\chi}')\supset \overline{\chi}$. If $C$ is minimal and $D'\leq C'$ then $\varphi(D')=C$. Thus $\mathrm{codim}\, \lambda'(D')\geq\mathrm{codim}\,\lambda(C)=\mathrm{codim}\, \lambda'(C')$ which implies $D'=C'$. If $C$ is not minimal then there are $D_1,D_2\in \overline{\chi}^k$ such that $C$ is minimal among the elements containing $D_1$ and $D_2$. Let $D_1',D_2'$ denote preimages in $\overline{\chi}'^k$. Then we have $D_1',D_2'\leq C'$. For any $D_1',D_2'\leq B'\leq C'$ we have that $\varphi(B')=C$ so $\mathrm{codim}\,\, \lambda' (B')\geq \mathrm{codim}\,\, \lambda (C)=\mathrm{codim}\,\, \lambda'(C')$ and thus $B'=C'$. This concludes the proof of $\varphi(\overline{\chi}')\supset \overline{\chi}$. In a similar fashion, the induction proves $\varphi(\overline{\chi}')\subset \overline{\chi}$. 
\end{proof} \begin{defn} A map between two spaces is called a rational equivalence if it induces an isomorphism on rational cohomology. \end{defn} \begin{rem} The existence of a rational equivalence $X\rightarrow Y$ between two spaces is much stronger than the condition $H^*(X)\cong H^*(Y)$. Under appropriate conditions on the fundamental groups it implies that $X$ and $Y$ have the same rational homotopy type and in particular isomorphic homotopy groups up to torsion. \end{rem} \begin{cor}\label{cor:stratifequivformal} Let $X$ be an equivariantly formal, compact $T$-space such that every isotropy codimension $1$ element of $\chi$ is ramified. Then $H_T^*(X)$ encodes $\chi$ up to rational equivalence in the sense that for any $D\in \chi$ the inclusion $C\subset D$ of the unique maximal ramified element in $D$ is a rational equivalence. If $X$ is additionally a manifold and the $T$-action is smooth, then all of $\chi$ is encoded in $H_T^*(X)$. \end{cor} \begin{proof} Every element $D\in \chi$ contains a minimal element. In particular it contains a ramified element $C\subset D$ which we assume to be maximal with this property. If there are two distinct ramified $C,C'\subset D$ which are maximal with respect to these properties then $D$ would by definition be ramified itself, resulting in a contradiction. Thus $C$ is unique. Then since every isotropy codimension $1$ element is ramified, it follows that $C$ and $D$ have the same $1$-skeleton. By Proposition \ref{prop:eqforminherited}, $C$ and $D$ are both equivariantly formal, so the Chang--Skjelbred Lemma \ref{lem:CSL} implies that the inclusion is a rational equivalence. If $X$ is a manifold, we observe that for any $N\in \chi$ we have $N^T\neq\emptyset$ and that for any $p\in N^T$ the isotropy $T$-representation of $N$ at $p$ decomposes into $2$-dimensional subrepresentations of orbit dimension $1$ and the tangent space of $N^T$. We deduce that the dimension of $N$ is determined by the one-skeleton of $N$. 
In particular in the above setting $C$ and $D$ are submanifolds of the same dimension so $C=D$. \end{proof} \section{Cohomology}\label{sec:cohom} In this section we discuss under which conditions equivariant cohomology contains cohomological information about the elements in the connected orbit type stratification. We continue to use the notation from the last section, i.e., for a subtorus $U\subset T$, $\mathfrak{p}_U=\ker(H^*(BT)\rightarrow H^*(BU))$ and $S_U = R\backslash \mathfrak{p}_U$. \begin{defn} We call a Thom system $\tau_1,\ldots,\tau_k$ \emph{strict}, if $\tau_i\tau_j=0$ for $i\neq j$. For some subtorus $U\subset T$, a collection $\tau_1,\ldots,\tau_k\in H_T^*(X)$ is called a \emph{strict} $U$-local Thom system of $H_T^*(X)$ if the $\tau_i$ restrict to a strict Thom system of $H_U^*(X^U)$. \end{defn} Clearly any graded commutative ring that admits a Thom system also admits a strict Thom system. However, not all statements on Thom systems transfer directly to their strict counterparts without imposing any additional conditions. The following lemma records some analogous properties. \begin{lem}\label{lem:strictthomsys} Let $X$ be a compact $T$-space, $U\subset T$ a subtorus, and $\tau_1,\ldots,\tau_k\in H_T^*(X)$. \begin{enumerate}[(i)] \item The collection $\tau_1,\ldots,\tau_k$ forms a strict $U$-local Thom system if and only if every $\tau_i$ restricts to $0\in H_U^*(X_j)$ for all components $X_j\subset X^U$ except for a single $X_i$ where it is not nilpotent. \item The $\tau_i$ form a strict $T$-local Thom system if and only if they induce a strict Thom system of $S_T^{-1} H_T^*(X)$. If the action is equivariantly formal then this is the case if and only if the $\tau_i$ are a strict Thom system of $H_T^*(X)$. \item Assume that the action is equivariantly formal. Then the $\tau_i$ are a strict $U$-local Thom system if and only if they induce a strict Thom system of $S^{-1}_U(H_T^*(X)/\mathfrak{p}_U H_T^*(X))$. 
\end{enumerate} \end{lem} \begin{proof} For the first statement, recall from Lemma \ref{lem:trivialaction'} that multiplication with $\tau_j$ is injective on $H_U^*(X_j)$. Thus $\tau_i\tau_j=0$ implies that $\tau_i$ restricts to $0$ on $H^*_U(X_j)$. The first half of $(ii)$ follows from the localization theorem and the fact that $H_T^*(X^T)\rightarrow S_T^{-1}H_T^*(X^T)$ is injective. The second half is due to the fact that $H_T^*(X)\rightarrow S_T^{-1}H_T^*(X)$ is injective in the equivariantly formal case. Regarding statement $(iii)$, if $X$ is equivariantly formal, then we claim that $H_U^*(X)\cong H_T^*(X)/\mathfrak{p}_U H_T^*(X)$ as $H^*(BU)\cong R/\mathfrak{p}_U$-algebras. To see this, recall that any set of elements $x_i\in H_T^*(X)$ which restricts to a $\mathbb{Q}$-basis of $H^*(X)$ gives an $R$-basis of $H_T^*(X)$. Since the restricted $U$-action is again equivariantly formal by Proposition \ref{prop:eqforminherited}, the restriction of the $x_i$ to $H_U^*(X)$ is an $H^*(BU)$-basis. Consequently the restriction $H_T^*(X)\rightarrow H_U^*(X)$ is surjective with kernel $\mathfrak{p}_U H_T^*(X)$, which proves the claim. The Borel localization theorem applied to the restricted $U$-action yields (note that localizing $H_U^*(X)$ at $S_U$ is the same as localizing at the image of $S_U$ in $H^*(BU)$) \[S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X)) \cong S_U^{-1}H^*_U(X)\cong S_U^{-1}H_U^*(X^U).\] The $\tau_i$ are a $U$-local Thom system if and only if they restrict to a Thom system of $S_U^{-1}H_U^*(X^U)$; thus $(iii)$ follows. \end{proof} It follows from the above lemma that we can algebraically detect strict $U$-local Thom systems if the action is equivariantly formal or $U=T$.
In this case the equivariant cohomology algebra encodes the total Betti numbers in the following way: \begin{prop}\label{prop:fpdimension} Let $X$ be a compact $T$-space, $U\subset T$ a subtorus, and $\tau_1,\ldots,\tau_k\in H_T^*(X)$ a strict $U$-local Thom system corresponding to the components $F_U(\tau_1),\ldots,F_U(\tau_k)$ of $X^U$. Assume further that either the action is equivariantly formal or that $U=T$. Then \[\dim H^*(F_U(\tau_i))=\mathrm{rk}_{S_U^{-1}R/\mathfrak{p}_U} I_i\] where $I_i=\{ x\in S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X))\mid x\tau_j=0, \text{ for all } j\neq i\}$. \end{prop} \begin{proof} As argued in the proof of Lemma \ref{lem:strictthomsys}, we have \[S_U^{-1}(H_T^*(X)/\mathfrak{p}_U H_T^*(X)) \cong S_U^{-1}H_U^*(X^U)\] if $U=T$ or if the action is equivariantly formal. The ideal $I_i$ corresponds to the kernel of the restriction $S_U^{-1}H_U^*(X^U)\rightarrow S_U^{-1}H_U^*(X^U-F_U(\tau_i))$. Since $S_U^{-1}H_U^*(X^U)=S_U^{-1}H_U^*(X^U-F_U(\tau_i))\oplus S_U^{-1}H_U^*(F_U(\tau_i))$, it follows that $I_i$ is isomorphic to $S_U^{-1}H_U^*(F_U(\tau_i))$ which, as an $S_U^{-1}R/\mathfrak{p}_U$-module, is isomorphic to $S_U^{-1}R/\mathfrak{p}_U\otimes H^*(F_U(\tau_i))$. \end{proof} The second statement of the following corollary was first proven in \cite[Theorem 5.1]{FY}. Recall that an equivariantly formal compact orientable $T$-manifold is of GKM type \cite{GKM} if it has only finitely many fixed points, and the one-skeleton $M_1$ of the action is a finite union of $T$-invariant $2$-spheres. \begin{cor} For an equivariantly formal compact orientable $T$-manifold $M$ the equivariant cohomology algebra $H_T^*(M)$ encodes whether or not the action is of GKM type. In case the action is GKM, $H_T^*(M)$ also encodes the GKM graph of the action. \end{cor} \begin{proof} By Proposition \ref{prop:fpdimension}, in the situation at hand the equivariant cohomology algebra encodes whether the fixed point set consists of isolated points.
If all fixed point components are isolated points, then every isotropy codimension $1$ element of the orbit type stratification $\chi$ is ramified, and Theorem \ref{thm:encodesstratification} implies that the equivariant structure of the one-skeleton $M_1$ of the action is encoded in $H_T^*(M)$. \end{proof} \begin{ex}\label{ex:cohomnotencoded} In general, we cannot expect $H_T^*(X)$ to encode more specific information about the cohomology even if the action is equivariantly formal: consider the $S^1$-action on $S^4\subset \mathbb{C}\oplus \mathbb{C}\oplus\mathbb{R}$ given by $s\cdot (v,w,h)=(sv,w,h)$. As argued in Example \ref{ex:spheres}, the equivariant cohomology of this action agrees with the one of the trivial action on $S^4$. However, the fixed point sets of the two actions are $S^2$ and $S^4$, which have different cohomologies. With regard to our previous results it also seems reasonable to ask for an example where additionally the entirety of $\chi$ is ramified. Note that this is fulfilled for any $S^1$-action with at least $2$ fixed point components. We obtain such examples from the previous ones by considering the diagonal action on $S^4\times S^2$ with standard rotation on the right hand side. For these actions we obtain the cohomologically distinct fixed point sets $S^2\coprod S^2$ and $S^4\coprod S^4$. \end{ex} The previous examples show that additional requirements are needed to reconstruct the cohomologies in the orbit type stratification from the global equivariant cohomology. Our main result with regard to cohomology is \begin{thm}\label{thm:encodescohomology} Let $M$ be an equivariantly formal, compact orientable $T$-manifold such that the map $H^*(M)\rightarrow H^*(X)$ is surjective for all components $X$ of $M^T$.
Then the equivariant cohomology $H_T^*(M)$ encodes the connected orbit type stratification $\chi$ of $M$ as well as for any $C, D\in \chi$ with $C\subset D$ the respective equivariant cohomology algebras and the map $H_T^*(D)\rightarrow H_T^*(C)$ induced by the inclusion. \end{thm} \begin{ex}We remark that the surjectivity condition is automatically satisfied in case the fixed point set is finite. Let us describe two classes of examples with nondiscrete fixed point set for which it is fulfilled. Recall first that whenever a $T$-action on a compact manifold $M$ admits a $T$-invariant Morse-Bott function $f:M\to {\mathbb{R}}$ with critical set $M^T$, then the action is equivariantly formal, and for the global minimum $c$ of $f$, the restriction map $H^*_T(M)\to H^*_T(f^{-1}(c))$ is surjective -- this follows from the arguments in \cite[Section 1]{AB}, see also \cite[Theorem 7.1]{GT}. Now, consider a Hamiltonian $T$-action on a compact symplectic manifold for which every component of the fixed point set is mapped, via the momentum map, to the boundary of the momentum polytope. In this setting, for any such component $X$, one can choose a component of the momentum map which attains its global minimum exactly at $X$. Thus, such $T$-actions satisfy the assumptions of Theorem \ref{thm:encodescohomology}. Similarly, given a toric symplectic manifold $M$, with acting torus $T$, the restriction of the $T$-action to any subtorus $U$ fulfills the same assumptions. Indeed, every component of $M^U$ corresponds to a face in the momentum polytope of the $T$-action, so that it occurs as the minimum of an appropriate component of the $T$-momentum map. \end{ex} Before we come to the proof of Theorem \ref{thm:encodescohomology}, we need several lemmas. See e.g.\ \cite{GT} for the notion of Cohen-Macaulay module in the context of equivariant cohomology. 
Also recall that a $T$-equivariant vector bundle $V\rightarrow X$, i.e.\ a vector bundle with a $T$-action such that the transformations between fibers are linear, induces a vector bundle $V_T\rightarrow X_T$ over the Borel construction. The equivariant characteristic classes of $V$ are defined as the regular characteristic classes of $V_T$ in $H_T^*(X)$. \begin{lem}\label{lem:spherebundle} Let $M$ be an equivariantly formal, compact $T$-manifold and $N\subset M$ a connected component of $M^S$ for some subtorus $S\subset T$. Let $SN$ be the sphere bundle of the normal bundle of $N\subset M$ and $e\in H_T^*(N)$ its equivariant Euler class. Then the bundle induces an isomorphism $H_T^*(SN)\cong H_T^*(N)/(e)$. Furthermore, $H_T^*(SN)$ is Cohen-Macaulay. \end{lem} \begin{proof} There is a fiber bundle $S^n\rightarrow (SN)_T\rightarrow N_T$ of Borel constructions. In the associated Serre spectral sequence, the generator of $H^n(S^n)$ transgresses onto the equivariant Euler class $e\in H_T^*(N)$ of the normal bundle. Now multiplication with $e$ is injective on $H_T^*(N)$: to see this it suffices to check that multiplication is injective on $H_T^*(N^T)$, and this is the case because $e$ restricts to a nonzero element in $H_T^*(*)$ for any point in $N^T$ (there it restricts to the product of the weights of the normal representation at $*$, see e.g.\ \cite[Lemma 6.10]{Kaw}). As a consequence, in the spectral sequence we are ultimately left with a single row and $H_T^*(SN)\cong H_T^*(N)/(e)$ as $R$-algebras. By \cite[Lemma 5.2]{GT} we have \[\mathrm{depth}(H_T^*(SN))\geq \dim T-1.\] But also $\dim_R H_T^*(SN)\leq \dim T-1$ as $(0)\subsetneq \mathrm{Ann}_R(H_T^*(SN))$ and $(0)$ is prime. \end{proof} \begin{lem}\label{lem:divisiblebye} Let $M$ be an equivariantly formal, compact $T$-manifold and $N\subset M$ a connected component of $M^S$ for some subtorus $S\subset T$. Let $e$ be the equivariant Euler class of the normal bundle of $N\subset M$.
Then an element $x\in H_T^*(N)$ is divisible by $e$ if and only if for any component $X\subset N^T$ the restriction $x|_{H_T^*(X)}$ is divisible by $e|_{H_T^*(X)}$. \end{lem} \begin{proof} As argued previously, $x$ is divisible by $e$ if and only if it restricts to $0$ in $H_T^*(SN)$. As the latter is Cohen-Macaulay of dimension $\dim T-1$, this is the case if and only if the restriction to $H_T^*((SN)_1)$ is $0$, see \cite[Theorem 6.1]{GT}. Now $(SN)_1$ is contained in the restriction $\left.(SN)\right|_{N^T}$ of $SN$ to $N^T$. If $x|_{H_T^*(X)}$ is divisible by $e|_{H_T^*(X)}$ for every component $X\subset N^T$ then it follows that $x|_{H_T^*(\left.(SN)\right|_{N^T})}=0$ and thus $x|_{H_T^*((SN)_1)}=0$. \end{proof} \begin{lem}\label{lem:complementef} Let $M$ be an equivariantly formal, compact orientable $T$-manifold such that the map $H^*(M)\rightarrow H^*(N)$ is surjective for some component $N$ of $M^T$. Then the $T$-action on $M\backslash N$ is equivariantly formal. \end{lem} \begin{proof} By Lemma \ref{lem:equivformal}, any $T$-space $X$ with finite dimensional cohomology is equivariantly formal if and only if the sums over all Betti numbers satisfy $\dim_\mathbb{Q} H^*(X)=\dim_\mathbb{Q} H^*(X^T)$. Since $(M\backslash N)^T= M^T\backslash N$ it suffices to prove that $\dim H^*(M\backslash N)=\dim H^*(M)-\dim H^*(N)$. By assumption, the inclusion of $N$ is injective on homology and thus the long exact homology sequence of the pair $(M,N)$ splits into short exact sequences \[0\rightarrow H_*(N)\rightarrow H_*(M)\rightarrow H_*(M,N)\rightarrow 0.\] But by Lefschetz duality, applied to $M$ with a tubular neighborhood of $N$ removed, we have $H_*(M,N)=H^{n-*}(M\backslash N)$, where $n$ is the dimension of $M$. \end{proof} \begin{lem}\label{lem:minimalthomsystem} Let $M$ be an equivariantly formal, compact orientable $T$-manifold such that the map $H^*(M)\rightarrow H^*(X_i)$ is surjective for all components $X_1,\ldots,X_k$ of $M^T$.
Then there is a strict Thom system $\tau_1,\ldots,\tau_k$ of $H_T^*(M)$ which is minimal in the sense that for any other strict Thom system $\tau_1',\ldots,\tau_k'$, after possibly changing the order, the cohomological degrees satisfy $|\tau_i|\leq |\tau_i'|$. It is, up to scalars from $\mathbb{Q}^\times$, uniquely given by the equivariant Thom classes of the components of $M^T$ in $M$. \end{lem} \begin{proof} There is a Mayer-Vietoris sequence \[0\rightarrow H_T^*(M)\rightarrow H_T^*(M\backslash X_i)\oplus H_T^*(X_i)\rightarrow H_T^*(SX_i)\rightarrow 0\] where $SX_i$ is the sphere bundle of the normal bundle of $X_i$ in $M$. Let $e_i\in H_T^*(X_i)$ denote the equivariant Euler class of the normal bundle of $X_i\subset M$ and let $\tau_i\in H_T^*(M)$ be the equivariant Thom class of $X_i\subset M$, i.e.\ the unique element that restricts to $(0,e_i)$ in the above sequence. Then by Lemma \ref{lem:strictthomsys}, $\tau_1,\ldots,\tau_k$ form a strict Thom system. Assume $\tau_1',\ldots,\tau_k'$ is another strict Thom system of $H^*_T(M)$. Then by Lemma \ref{lem:strictthomsys} the element $\tau_i'$ restricts to $0$ on all $X_j$ except for $X_i$. The action on $M\backslash X_i$ is again equivariantly formal by Lemma \ref{lem:complementef}, so the restriction $H_T^*(M\backslash X_i)\rightarrow H_T^*((M\backslash X_i)^T)$ is injective. Thus $\tau_i'$ restricts to $0$ in $H_T^*(M\backslash X_i)$ and therefore its restriction to $X_i$ lies in the kernel of $H_T^*(X_i)\rightarrow H_T^*(SX_i)$. The latter is the ideal generated by the equivariant Euler class $e_i$, thus the claim follows. \end{proof} \begin{lem}\label{lem:ramified} Let $M$ be an equivariantly formal, compact orientable $T$-manifold such that the map $H^*(M)\rightarrow H^*(X_i)$ is surjective for all components $X_1,\ldots,X_k$ of $M^T$. Then every isotropy codimension $1$ element of the connected orbit type stratification $\chi$ of $M$ is ramified. \end{lem} \begin{proof} Suppose this is not the case. Then there is an isotropy codimension $1$ element of $\chi$ which contains only a single $X_i$. Consequently the one-skeleton of $M\backslash X_i$ is disconnected.
However, the one-skeleton of an equivariantly formal action on a connected manifold is connected by Lemma \ref{lem:CSL}, while the action on $M\backslash X_i$ is equivariantly formal by Lemma \ref{lem:complementef}, a contradiction. \end{proof} For the following lemma, recall our convention that elements in a graded space are assumed to be homogeneous. \begin{lem}\label{lem:productdecomposition} Let $X$ be a path-connected, trivial $T$-space. \begin{enumerate}[(i)] \item Fix $x\in H_T^*(X)$ as well as coprime elements $f_1,\ldots,f_k\in R$. If there are $x_1,\ldots,x_k\in H_T^*(X)$ with the properties that $x=\prod_{i=1}^k x_i$ and the $x_i-f_i$ are nilpotent, then $x_1,\ldots,x_k$ are unique. \item Assume $X$ is a smooth manifold and $V\rightarrow X$ is an effective $T$-equivariant vector bundle. Then $V$ splits as $V=V_1\oplus \ldots\oplus V_l$ for $T$-equivariant vector bundles $V_i\rightarrow X$ with the property that the identity component of every isotropy in $V_i$ is the same codimension $1$ subtorus $S_i\subset T$ and $S_i\neq S_j$ for $i\neq j$. Let $e$, resp.\ $e_i$, denote the equivariant Euler classes of $V$, resp.\ $V_i$ and let $\alpha_i\in R^2$ be a nontrivial weight associated to $S_i$ (i.e.\ which vanishes when restricted to $H^2(BS_i)$). Then $e=\prod_{i=1}^l e_i$ and $e_i-a_i\alpha_i^{k_i}$ is nilpotent for some $k_i\in \mathbb{N}$, $a_i\in \mathbb{Q}^\times$. \item In the setting of $(ii)$, suppose we have a decomposition $e=\prod_{i=1}^l x_i$ such that $x_i-b_i\beta_i^{l_i}$ is nilpotent for some $b_i\in \mathbb{Q}^\times$, $l_i\in \mathbb{N}$ and pairwise linearly independent $\beta_i\in R^2$. Then there are $c_i,d_i\in\mathbb{Q}^\times$, such that after possibly changing the order we have $\alpha_i=c_i\beta_i$ and $e_i=d_ix_i$, $i=1,\ldots,l$. \end{enumerate} \end{lem} \begin{proof} Suppose first that we have already shown statement $(i)$ for products of $2$ elements. As the action is trivial, $H_T^*(X)\cong R\otimes H^*(X)$ inherits a multiplicative bigrading.
By Lemma \ref{lem:trivialaction'}, $x_i-f_i$ being nilpotent is equivalent to the fact that $x_i$ restricts to $f_i$ in $H_T^*(*)=R$ for any point, i.e.\ the $R\otimes H^0(X)$ component of $x_i$ is $f_i$. In particular $\prod_{i\geq l} x_i-\prod_{i\geq l} f_i$ is nilpotent for any $l$ and $(i)$ follows by inductively applying the result for two factors of the form $x_l$ and $\prod_{i\geq l+1} x_i$. For the proof with $2$ factors, suppose we have coprime elements $f,g\in R$ and $a,b,a',b'\in H_T^*(X)$ such that $ab=a'b'$ and $a-f$, $b-g$, $a'-f$, $b'-g$ are nilpotent. We show inductively that $a\equiv a'$ and $b\equiv b'\mod R\otimes H^{\geq k}(X)$ for all $k$. Starting at $k=1$ we note that an element is nilpotent if and only if it is contained in $R\otimes H^+(X)$. Thus the assumptions imply $a\equiv f\cdot 1 \equiv a'$ and $b\equiv g\cdot 1\equiv b'\mod R\otimes H^+(X)$. Now suppose $a\equiv a'$, $b\equiv b'\mod R\otimes H^{\geq k}(X)$ for some $k\geq 1$. For any element $y\in H_T^*(X)$ write $y_i$ for its component in $R\otimes H^i(X)$ and $y_{\leq i}:=y_0+\ldots+y_i$. Then $(a_{\leq k}\cdot b_{\leq k})_k=(ab)_k=(a'b')_k=(a_{\leq k}'\cdot b_{\leq k}')_k$ and thus by induction \[a_k\cdot g+f\cdot b_k=a_k'\cdot g+f\cdot b_k'.\] Now choose a basis $h_\alpha$ of $H^k(X)$. We write $a_k=\sum u_\alpha h_\alpha$, $b_k=\sum v_\alpha h_\alpha$, $a_k'=\sum u_\alpha' h_\alpha$, and $b_k'=\sum v_\alpha' h_\alpha$ for unique $u_\alpha,v_\alpha,u_\alpha',v_\alpha'\in R$. In particular we obtain the equations $u_\alpha g+fv_\alpha=u_\alpha'g+fv_\alpha'$ and thus $(u_\alpha-u_\alpha')g=f(v_\alpha-v_\alpha')$ in $R$. As $f$ and $g$ are coprime it follows that $f|(u_\alpha-u_\alpha')$. However note that, as $f\otimes 1$ is the $R\otimes H^0(X)$-part of the homogeneous element $a$, the total degree satisfies $\deg(u_\alpha-u_\alpha')=\deg a - k = \deg f-k<\deg f$. Thus $u_\alpha-u_\alpha'=0$ and also $v_\alpha-v_{\alpha}'=0$ which finishes the proof of $(i)$.
Regarding $(ii)$: for any $U\subset T$, the fixed point set $V^U$ is a subbundle of $V$. This yields the decomposition $V=V_1\oplus\ldots\oplus V_l$ as described in the lemma (see also \cite[Lemma 1.6.7]{A}). The product decomposition of $e$ follows from the sum decomposition of $V$. Restricting $V_i$ to a point gives a representation which decomposes into $2$-dimensional representations associated to a weight which is some nonzero rational multiple of $\alpha_i$. Hence the restriction of $e_i$ to $H_T^*(*)\cong R$ is equal to $a_i\alpha_i^{k_i}$ for some $a_i\in\mathbb{Q}^\times$, $k_i\in \mathbb{N}$. Then the rest of $(ii)$ follows by Lemma \ref{lem:trivialaction'}. To prove $(iii)$ note that for a product decomposition of $e$ as in the lemma, the $R\otimes H^0(X)$ component of $e$ is equal to $\prod b_i\beta_i^{l_i}$. Thus $\alpha_i^{k_i}=\beta_i^{l_i}$ up to scalars and order. Since the $\alpha_i^{k_i}$ are coprime, the claim now follows from $(i)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:encodescohomology}] By Corollary \ref{cor:stratifequivformal} and Lemma \ref{lem:ramified}, $\chi$ is encoded in $H_T^*(M)$, so it remains to prove that equivariant cohomology is encoded as well. As in Lemma \ref{lem:minimalthomsystem}, we algebraically detect the actual equivariant Thom classes $\tau_1,\ldots, \tau_k$ (up to $\mathbb{Q}^\times$) of the fixed point components $X_1,\ldots,X_k$ as a minimal strict Thom system. Since the restriction $H_T^*(M)\rightarrow H^*(M)$ is surjective by Lemma \ref{lem:equivformal}, the surjectivity assumption implies that $H_T^*(M)\rightarrow H_T^*(X_i)$ is surjective for all $i$. The kernel of this map is $K_i=\{x\in H_T^*(M)\mid x\tau_i=0\}$. Thus the cohomology of all $X_i$ is encoded as $H_T^*(X_i)\cong H_T^*(M)/K_i$. Now let $N\in \chi$ be a submanifold which, without loss of generality, contains exactly the components $X_1,\ldots,X_l$ and whose principal isotropy has a subtorus $S\subset T$ as identity component.
We will compute $H_T^*(N)$ from this data. Consider a small equivariant tube $DN\subset M$ around $N$ with equivariant sphere bundle $SN\subset DN$. We have a Mayer-Vietoris sequence \[0\rightarrow H_T^*(M)\rightarrow H_T^*(M-N)\oplus H_T^*(N)\rightarrow H_T^*(SN)\rightarrow 0.\] Let $J_N\subset H_T^*(N)$ denote the ideal generated by the equivariant Euler class $e_N$ of the normal bundle of $N\subset M$. By Lemma \ref{lem:spherebundle}, $J_N$ is the kernel of $H_T^*(N)\rightarrow H_T^*(SN)$. Consequently, for $y\in J_N$, elements of the form $(0,y)$ in the middle term of the Mayer-Vietoris sequence lie in the image of $H_T^*(M)$. This proves that $J_N$ is contained in the image of the restriction $r_N\colon H_T^*(M)\rightarrow H_T^*(N)$. We now try to characterize $r_N^{-1}(J_N)$ algebraically. Let $r_i\colon H_T^*(M)\rightarrow H_T^*(M)/K_i\cong H_T^*(X_i)$, $i=1,\ldots,l$, denote the restriction map. Recall that the class $\tau_i$ restricts in $H_T^*(X_i)$ to the equivariant Euler class of the normal bundle of $X_i$ in $M$, up to scalar. Then it follows from part $(ii)$ of Lemma \ref{lem:productdecomposition} that we find a product decomposition \[r_i(\tau_i)=\prod_{j=1}^{n_i}x_{ij}\] in which $x_{ij}$ has the property that for some $c_{ij}\in\mathbb{Q}^\times$ and $l_{ij}\in \mathbb{N}$ the element $x_{ij}-c_{ij}\alpha_{ij}^{l_{ij}}$ is nilpotent, where $\alpha_{ij}\in R^2$ are the distinct weights associated to those codimension $1$ isotropy groups which occur around $X_i$. Then by part $(iii)$ of Lemma \ref{lem:productdecomposition}, the $x_{ij}$ are up to scalars the equivariant Euler classes of certain subbundles $V_{ij}$ of the normal bundle $V_i$ of $X_i\subset M$ as in Lemma \ref{lem:productdecomposition} part $(ii)$. The restriction of the normal bundle of $N\subset M$ to $X_i$ is equal to the sum over those $V_{ij}$ whose unique connected maximal isotropy group does not contain the principal isotropy $S$ of $N$.
Let $x_i$ be the product over those $x_{ij}$ for which $\alpha_{ij}$ does not vanish on $S$. Then $x_i\in H_T^*(X_i)$ is -- up to scalar -- the restriction of $e_N$. After possibly renormalizing the individual $x_i$ we find an element $\tau_N\in H_T^*(M)$ for which $r_i(\tau_N)=x_i$, for all $i=1,\ldots,l$. By Lemma \ref{lem:divisiblebye}, $r_N(\tau_N)$ agrees with $e_N$ up to scalar and we have \[ r_N^{-1}(J_N)=\{x\in H_T^*(M)\mid r_i(\tau_N)|r_i(x)\text{ for }i=1,\ldots,l\}=:I_N. \] This description encodes $r_N^{-1}(J_N)$ algebraically, as it is independent of the particular choice of $\tau_N$. As the ideal $I_N$ restricts onto $J_N=e_N\cdot H_T^*(N)$, the map \[I_N\rightarrow \bigoplus_{i=1}^l H_T^*(X_i)\cong H_T^*(N^T),\quad x\mapsto \left(\frac{r_1(x)}{r_1(\tau_N)},\ldots,\frac{r_l(x)}{r_l(\tau_N)}\right)\] is well defined and its image is that of the injective map $H_T^*(N)\rightarrow H_T^*(N^T)$. Thus we have constructed $H_T^*(N)$ out of $H_T^*(M)$. If furthermore $N'\subset N$ is another isotropy manifold containing, without loss of generality, the fixed point components $X_1,\ldots, X_{l'}$, $l'\leq l$, then there is a commutative diagram \[\xymatrix{H_T^*(N)\ar[d]\ar[r] & \prod_{i=1}^l H_T^*(X_i)\ar[d]\\ H_T^*(N')\ar[r]& \prod_{i=1}^{l'} H_T^*(X_i)}\] where the horizontal maps are injective and the right hand map is projection onto the first $l'$ factors. We have just argued that $H_T^*(M)$ encodes the image of the horizontal maps, hence it also encodes the left hand map. \end{proof} \section{Remarks on the integral case}\label{sec:integral} This section has the purpose of commenting on the question of what additional information on the orbit type stratification can be deduced from the integral equivariant cohomology. Generalizing from our results in the rational case it seems natural to ask: \begin{enumerate}[(i)] \item What does $H_T^*(X;\mathbb{Z})$ know about the full orbit type stratification also considering disconnected isotropies?
\item Does $H_T^*(X;\mathbb{Z})$ encode the equivariant integral cohomologies of the strata in an equivariantly formal setting analogous to Theorem \ref{thm:encodescohomology}? \end{enumerate} There are some subtleties regarding the right requirements and the notion of equivariant formality in the integral setting. E.g.\ unlike in the rational case, a module of the form $H^*(BT;\mathbb{Z})\otimes H^*(X;\mathbb{Z})$ is not necessarily free over $H^*(BT;\mathbb{Z})$ and in particular freeness is not necessarily implied by the degeneracy of the Serre spectral sequence of the Borel fibration. This problem however does not arise in case $H^*(X;\mathbb{Z})$ is free over $\mathbb{Z}$. Let us begin by pointing out the limitations of possible generalizations even under the assumption of torsion freeness. \begin{ex} Consider $T^2$-actions on $S^4\subset \mathbb{C}^2\oplus \mathbb{R}$ given by $(s,t)\cdot (v,w,h)=(s^at^bv,s^c t^d w,h)$, for $a,b,c,d\in\mathbb{Z}$. Then $R=\mathbb{Z}[X,Y]$ and $H_T^*(S^4)$ is generated over $R$ by $1\in H^0_T(S^4)$, $\alpha\in H^4_T(S^4)$ with a single relation $\alpha^2=(aX+bY)(cX+dY)\alpha$. To see this, note that the action above is a pullback of the standard $T^2$-action on $S^4$, i.e.\ with $(a,b,c,d)=(1,0,0,1)$, which has the relation $\alpha^2=XY\alpha$ (this follows e.g.\ from the integral GKM description). There is a map induced by the pullback between the two equivariant cohomologies, which maps $H^*(BT;\mathbb{Z})$-bases to one another and transforms $H^*(BT;\mathbb{Z})$ via $X\mapsto aX+bY$ and $Y\mapsto cX+dY$. Hence the relation for $\alpha^2$ transforms accordingly as claimed above. \begin{enumerate}[(i)] \item Setting $(a,b,c,d)=(2,0,3,0)$ we obtain 5 different elements in the orbit type stratification: two fixed points, $2$ two-spheres with isotropies $\mathbb{Z}_2\times S^1$ and $\mathbb{Z}_3\times S^1$ as well as the principal orbit type $\{1\}\times S^1$.
For $(6,0,1,0)$ there are only $4$, as only one two-sphere occurs with isotropy $\mathbb{Z}_6\times S^1$. Thus we see that the combinatorics of the orbit type stratification are not encoded even though all extensions are ramified. The connected orbit type stratification is of course encoded by the previous rational results. \item Consider the actions with $(a,b,c,d)$ equal to $(2,0,0,3)$ and $(6,0,0,1)$. Then, while the connected orbit type stratification is encoded in $H_T^*(S^4)$, the corresponding equivariant cohomology algebras are not: in both cases $(S^4)^{S^1\times \{1\}}$ is $S^2$, however the rotation speeds of the respective $T$-actions are different, which yields nonisomorphic equivariant cohomology algebras. Thus without further restrictions on the combinatorics of the stratification no integral result analogous to Theorem \ref{thm:encodescohomology} can be expected to hold. \end{enumerate} \end{ex} The reason for the failure of these methods lies in the fact that for general subgroups $S\subset T$ the Borel localization theorem no longer establishes a bijection between Thom systems of $H_S^*(X)$ and components of $X^S$. This is due to the fact that the ring $H^*(BS;\mathbb{Z})$ might contain elements of positive degree which multiply to $0$, and thus multiplicatively closed subsets available for localization are somewhat limited. \begin{ex} Consider the above example for $(a,b,c,d)=(2,0,0,3)$ and the subgroup $S=S^1\times \mathbb{Z}_2$. One has $H^*(BS)=\mathbb{Z}[X,Y]/(2Y)$ and $H_S^*(S^4)=H^*(BS)\otimes_R H_T^*(S^4)\cong \mathbb{Z}[X,Y,\alpha]/(2Y,\alpha^2)$. The fixed point set $(S^4)^S$ consists of two discrete points. However there is no element in $H_S^*(S^4)$ which restricts to a nontrivial element on a single fixed point while vanishing on the other: $\alpha$ has to restrict to $0$ on the fixed points due to being nilpotent, while elements in the image of $H^*(BS)\rightarrow H_S^*(S^4)$ restrict to the same element on both fixed points.
Thus the technique of using Thom systems to detect fixed point components does not apply for the subgroup $S$. \end{ex} Despite these counterexamples, the integral cohomology $H_T^*(X;\mathbb{Z})$ does of course know more about the orbit type stratification than $H_T^*(X;\mathbb{Q})$. The correspondence between certain Thom systems and fixed point components of $S$ does carry over in case $H^+(BS;\mathbb{Z})-\{0\}$ is multiplicatively closed, enabling Borel localization. The groups $S$ for which this is the case are precisely tori and $p$-tori, i.e.\ subgroups of the form $(\mathbb{Z}_p)^r$, where $p$ is prime. One way to go would therefore be to develop results analogous to those in this paper for $p$-tori (in fact, the references \cite{Q} and \cite[Section 3.6]{AP} mentioned in the introduction and in Remark \ref{rem:somewhatdually} also deal with $p$-torus actions), and deduce refined results on the orbit type stratification from $H_T^*(X;\mathbb{Z})$ via this route.
\section{Introduction} It is well known that perturbative QCD suffers from a fundamental problem: the scattering amplitude decreases at large impact parameters ($b$) only as a power of $b$. Such behaviour contradicts the Froissart theorem\cite{FROI} and, hence, perturbative QCD cannot lead to an effective theory at high energy. In particular, the CGC/saturation approach (see Ref.\cite{KOLEB} for a review), which is based on perturbative QCD, is confronted by this problem \cite{KW,FIIM}. At large $b$ the scattering amplitude is small and, therefore, only the linear BFKL (Balitsky, Fadin, Kuraev and Lipatov) equation\cite{BFKL} describes the scattering amplitude in perturbative QCD. It is known that the eigenfunction of this equation (the scattering amplitude of two dipoles with sizes $r$ and $R$) has the following form\cite{LIP} \begin{equation}\label{EIGENF} \phi_\gamma\left( \vec{r}, \vec{R}, \vec{b}\right)\,\,=\,\, \left( \frac{r^2\,R^2}{\left( \vec{b} + \frac{1}{2}(\vec{r} - \vec{R})\right)^2\,\left( \vec{b} - \frac{1}{2}(\vec{r} - \vec{R})\right)^2} \right)^\gamma\,\,\xrightarrow{b\,\gg\,r,R}\,\,\left( \frac{r^2\,R^2}{b^4}\right)^\gamma \end{equation} One can see that $\phi_\gamma\left( \vec{r}, \vec{R}, \vec{b}\right)$ at large impact parameter $b$ decreases as a power of $b$. In particular, such a decrease leads to the growth of the radius of interaction as a power of the energy\cite{KW,FIIM}, resulting in the violation of the Froissart theorem. Since it was proven in Ref.\cite{LIP} that the eigenfunction of any kernel with conformal symmetry has the form of \eq{EIGENF}, we can only change the large $b$ behaviour by introducing a new dimensional scale in the kernel of the equation. A variety of ideas to overcome this problem have been suggested in Refs.\cite{LERYB1,LERYB2,LETAN,QCD2,KHLEP,KKL,FIIM,GBS1,BLT,GKLMN,HAMU,MUMU,BEST1,BEST2,KOLE,LETA,LLS,LEPION,KHLE,KAN,GOLEB}.
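To make the power-like falloff in \eq{EIGENF} concrete, the following short Python sketch (an illustration of ours, not part of the original analysis; the two-dimensional vectors are reduced to a collinear configuration, so that $b$, $r$ and $R$ denote scalar moduli and the denominator becomes $(b+\frac{1}{2}(r-R))^2 (b-\frac{1}{2}(r-R))^2$) compares the full eigenfunction with its large-$b$ asymptotics:

```python
def phi_full(gamma, r, R, b):
    # Eigenfunction of Eq. (EIGENF) for collinear dipole vectors:
    # the two squared factors reduce to (b + d)^2 (b - d)^2 with d = (r - R)/2.
    d = 0.5 * (r - R)
    return (r**2 * R**2 / ((b + d)**2 * (b - d)**2)) ** gamma

def phi_asym(gamma, r, R, b):
    # Large-b limit quoted in Eq. (EIGENF): (r^2 R^2 / b^4)^gamma
    return (r**2 * R**2 / b**4) ** gamma

gamma, r, R = 0.5, 1.0, 0.5
for b in (2.0, 10.0, 100.0):
    # The ratio approaches 1 as b grows, confirming the asymptotic form.
    print(b, phi_full(gamma, r, R, b) / phi_asym(gamma, r, R, b))
```

The ratio tends to $1$ with growing $b$, while the eigenfunction itself still decreases only as $b^{-4\gamma}$ -- the power-like tail responsible for the problem discussed above.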
In our previous paper \cite{GOLEM} we used the Gribov-Zwanziger approach\cite{GRI0,GRI1,GRI2,GRI3,GRI4,PVB,Z1,Z2,Z3,GRREV,DOKH} for the confinement of quarks and gluons to fix this non-perturbative scale. We derived the generalized BFKL evolution equation, which incorporates this new dimensionful scale, and demonstrated that this equation leads to an exponential decrease of the scattering amplitude at large $b$. We will discuss both the equation and the large $b$ behaviour of its solution in the next section, which is of a review character. The goal of this paper is to find the solution to this new equation. In Section III we consider the general properties of the spectrum and eigenfunctions, which follow from an analytical approach. In particular, we prove that the eigenfunctions of \eq{EIGENF} describe the eigenfunctions of the new equation at short distances $r\,\,\ll\,\,R$. In Section IV we concentrate our efforts on the numerical solution of the equation. We show that all eigenvalues of the new equation, which generate the power-like energy increase of the scattering amplitude, coincide with the massless BFKL eigenvalues. However, the eigenfunctions have quite a different behaviour in comparison with the eigenfunctions of the massless BFKL equation, and they crucially depend on the input from the Gribov-Zwanziger confinement approach. Finally, in Section V we discuss our results and future prospects. \section{BFKL evolution equation for Gribov-Zwanziger confinement - a recap} \subsection{Gribov - Zwanziger confinement: gluon propagator} As we have alluded to above, it was proven in Ref.\cite{LIP} that the eigenfunctions have the form of \eq{EIGENF} for all kernels with conformal symmetry. Hence we have to modify the kernel of the BFKL equation by introducing a new dimensionful scale of non-perturbative origin. In other words, we need an approach which models the confinement of quarks and gluons.
Among numerous approaches to confinement, the one proposed by Gribov \cite{GRI0,GRI1,GRI2,GRI3,GRI4,PVB,Z1,Z2,Z3,GRREV,DOKH} has special advantages, which make it most suitable for a discussion of the BFKL equation in the framework of this hypothesis. First, it is based on the existence of Gribov copies \cite{GRI0} - multiple solutions of the gauge-fixing conditions - which are a principal feature of non-perturbative QCD. Second, its main ingredient is the modified gluon propagator, which can easily be included in BFKL-type equations. Third, in Ref.\cite{KHLE} (see also Ref.\cite{FDGS}) it is demonstrated that the Gribov gluon propagator originates naturally from the topological structure of non-perturbative QCD in the form: \begin{equation} \label{GLPR} G\left( q\right)\,\,=\,\,\frac{1}{q^2\,+\,\,\frac{\chi_{\rm top}}{q^2}}\,\,=\,\,\frac{q^2}{q^4\,+\,\mu^4} \end{equation} where $\chi_{\rm top}\,=\,\mu^4 $ is the topological susceptibility of QCD, which is related to the $\eta'$ mass by the Witten-Veneziano relation\cite{VEN,WIT}. This allows us to obtain the principal non-perturbative dimensionful scale directly from the experimental data. However, it is shown in Ref.\cite{GOLEM} that the propagator of \eq{GLPR}, which vanishes at $q=0$, does not lead to the exponential suppression of the scattering amplitude at large impact parameters ($b$). Fortunately, lattice calculations generate a gluon propagator with $G\left( q=0\right)\,\neq\,0$ (see Refs.\cite{DOS,DOV,CDMV} and references therein), in explicit contradiction with \eq{GLPR}. In Refs.\cite{HU1,CFPS,CFMPS,CDMV,DSV,HU2,HU3,AHS,DOV,GRA,FMP,DSVV,DGSVV,CLSST,Z4,Z5,LVS}% \footnote{This list of references is not complete. More details can be found in the reviews \cite{GRREV,HU1}.} it is shown that $G\left( q=0\right)\,\neq\,0$ is a general feature of non-perturbative approaches and that Gribov's copies lead to a gluon propagator which is finite at $q \to 0$.
In this paper we parameterize the gluon propagator in the following form: \begin{equation} \label{GGLPR1} G\left( q\right) \,\,=\,\,\frac{q^2\,+\,M^2_0}{\left( q^2\,+\,M^2\right)^2\,+\,\mu^4} \end{equation} We view this form as a parameterization of the sum of Gribov's propagators of \eq{GLPR} with different values of $\mu$, as has been discussed in Ref.\cite{GOLEM}. We are aware that \eq{GGLPR1}, which describes the lattice QCD data, is a simplified version of the refined Gribov-Zwanziger (RGZ) theoretical approaches that have been discussed in Refs.\cite{HU1,CFPS,CFMPS,CDMV,DSV,HU2,HU3,AHS,DOV,GRA,FMP,DSVV,DGSVV,CLSST,Z4,Z5,LVS}. However, we believe that it is a good first approximation, which allows us to introduce two dimensionful parameters from confinement physics. In this paper we refer to the gluon propagator of \eq{GGLPR1} as the lattice QCD propagator or the RGZ propagator. As we have mentioned, at high energies $q$ is a two-dimensional vector, which corresponds to the transverse momentum carried by the gluon. Introducing \begin{equation} \label{GGLPR2} G^{\pm}\left( q\right)\,\,=\,\,\frac{1}{\left( q^2\,+\,M^2\right)\,\pm\,i\,\mu^2} \end{equation} we can re-write \eq{GGLPR1} in the form: \begin{eqnarray} \label{GGLPR3} G(q)&=&\frac{1}{2}\Big(G^{+}(q)+G^{-}(q)\Big)\,+\,\frac{i\left( M^2_0-M^2\right)}{2\,\mu^2} \Big(G^{+}(q)-G^{-}(q)\Big) \,=\,\frac{1}{\mu^2}\Big({\rm Re}\,G^{+}(\kappa)\,-\,m_0\,{\rm Im}\,G^{+}(\kappa)\Big)\\ &=&\frac{1}{2}\Bigg\{\mkern-8mu \left( 1 + i\,\frac{M^2_0\!-\!M^2}{\mu^2}\right) G^{+}(q) + \left( 1 - i\,\frac{M^2_0\!-\!M^2}{\mu^2}\right) G^{-}(q) \Bigg\} =\frac{1}{2\,\mu^2} \Big\{(1+i\,m_0) G^{+}(\kappa) +(1-i\,m_0) G^{-}(\kappa) \Big\}\nonumber \end{eqnarray} where we use the notations: \begin{equation} \label{VAR} \kappa \,=\,\frac{q^2}{\mu^2},\quad \kappa'\,=\,\frac{q'^2}{\mu^2},\quad E\,=\,- \frac{\omega}{\bar{\alpha}_S},\quad \bar{\alpha}_S\,=\,\frac{\alpha_S N_c}{\pi},\quad m\,=\,\frac{M^2}{\mu^2},\quad m_0\,=\,\frac{M^2_0-M^2}{\mu^2}\,.
\end{equation} \subsection{The BFKL equation in momentum representation} The BFKL equation for the Gribov-Zwanziger gluon propagator was derived in our previous paper\cite{GOLEM}, using the procedure described in Ref.\cite{LLS}. It has two parts: the gluon reggeization and the emission of gluons. The first has the general form\cite{BFKL}: \begin{equation} \label{GGLTR} \omega_G(q)\,=\,G^{-1}(q)\,\Sigma(q)\quad\mbox{with}\quad \Sigma(q)\,=\,\int\frac{d^2 q'}{4\,\pi} G\left(\vec{q}'\right)\,G\left(\vec{q} - \vec{q}'\right) \end{equation} where $G(q)$ is given by \eq{GGLPR1}. The analytical expression for \eq{GGLTR} will be discussed below (see also appendix A of Ref.\cite{GOLEM}). The emission kernel was calculated in Ref.\cite{GOLEM} using the decomposition of \eq{GLPR}. Indeed, using this decomposition we can treat the production of the gluon as the sum of two sets of diagrams (see \fig{3}) with $\tilde{M}^2 =\,\,i\mu^2$ and with $\tilde{M}^2 =-\,i\mu^2$. We sum the first diagrams of the gluon emission shown in \fig{3} to find the vertex $\Gamma_\mu(q,q')$ for the kernel of the BFKL equation. It is easy to see that the sum shown in \fig{3} leads to the Lipatov vertex, which has the following form\cite{LLS}: \begin{equation} \label{V} \Gamma_\mu\left( q, q'\right)\, = \,- q^\perp_{\mu}\,-\,q'^\perp_{\mu} \,+\,p_{1,\mu} \left( - G^{-1}(q) \frac{1}{p_1 \cdot k}\,+\,\frac{p_2\cdot k}{p_1\cdot p_2}\right) \,-\,p_{2,\mu} \left( - G^{-1}(q')\frac{1}{p_2 \cdot k}\,+\,\frac{p_1\cdot k}{p_1\cdot p_2}\right) \end{equation} where $p_{1,\mu}$ and $p_{2, \mu}$ are the momenta of the incoming particles (see \fig{3} for all notations).
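As a side remark (an illustrative check, not part of the original derivation), the algebra of the decomposition \eq{GGLPR3} is easy to verify numerically: the combination of the complex-mass propagators $G^{\pm}$ of \eq{GGLPR2} must reproduce the lattice/RGZ propagator of \eq{GGLPR1}. A minimal sketch in units with $\mu^2=1$, for the lattice-motivated values $m=1.27$, $m_0=3.76$:

```python
import numpy as np

mu2 = 1.0               # work in units with mu^2 = 1
m, m0 = 1.27, 3.76      # dimensionless M^2/mu^2 and (M0^2 - M^2)/mu^2
M2 = m * mu2
M02 = M2 + m0 * mu2

q2 = np.linspace(0.0, 50.0, 1001)

# lattice / RGZ propagator, Eq. (GGLPR1)
G = (q2 + M02) / ((q2 + M2) ** 2 + mu2 ** 2)

# complex-mass propagators G^{+-}, Eq. (GGLPR2)
Gp = 1.0 / ((q2 + M2) + 1j * mu2)
Gm = 1.0 / ((q2 + M2) - 1j * mu2)

# decomposition, Eq. (GGLPR3): (1/2)[(1 + i m0) G^+ + (1 - i m0) G^-]
G_dec = 0.5 * ((1 + 1j * m0) * Gp + (1 - 1j * m0) * Gm)

# the imaginary parts cancel and the real part reproduces Eq. (GGLPR1)
assert np.allclose(G_dec.imag, 0.0, atol=1e-12)
assert np.allclose(G_dec.real, G, atol=1e-12)
```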
\begin{figure}[ht] \centering \leavevmode \includegraphics[width=16cm]{GRCBFKL.pdf} \caption{The first Feynman diagrams with gluon emission, whose sum leads to $\Gamma_\mu(q,q')$ (the Lipatov vertex is denoted by the gray blob).} \label{3} \end{figure} Using \eq{GGLTR} and \eq{V}, the BFKL equation for Gribov-Zwanziger confinement takes the form (for $\vec{Q}_T = 0$\footnote{% $\vec{Q}_T$ is the momentum transferred by the BFKL Pomeron, a conjugate variable to the impact parameter.}): \begin{equation} \label{BFKLMR} \omega \,\phi(\omega, q)\,= \,-\,2\,\omega_G(q)\,\phi(\omega,q) \,+\,\bar{\alpha}_S \int \frac{d^2 q'}{\pi} G\left(\vec{q} - \vec{q}^{\,'}\right)\,\,\phi(\omega,q') \end{equation} This equation looks similar to the BFKL equation for a massive gluon \cite{LLS} in a non-abelian Yang-Mills theory with a Higgs particle, which is responsible for the mass generation. However, we do not have a contact term in \eq{BFKLMR}. As we have discussed in Ref.\cite{GOLEM}, the absence of a contact term in our equation is a direct indication that Gribov-Zwanziger confinement does not lead to a massive gluon.
Assuming that $\phi(q)$ depends only on $|\vec{q}|$, we can integrate the emission kernel over the angle; in terms of the variables of \eq{VAR}, \eq{BFKLMR} takes the form: \begin{subequations} \begin{eqnarray} E \,\phi(\kappa)\,\,&=&\,\,\, \underbrace{T(\kappa)\,\phi(\kappa)}_{\mbox{kinetic energy}} \,-\, \underbrace{\int d \kappa' \,K(\kappa,\kappa')\,\phi(\kappa')}_{\mbox{emission kernel}} \label{BFKLF1}\\ &=& -\, T(\kappa)\,\phi(\kappa)\,\,-\,\,\int d\kappa' \,K(\kappa,\kappa')\, \Bigg\{\phi\left( \kappa'\right) \,-\,\frac{G(\kappa')}{G(\kappa)}\,\phi(\kappa)\Bigg\} \label{BFKLF2} \end{eqnarray} \end{subequations} where \begin{subequations} \begin{eqnarray} T(\kappa) \,\,&=&\,\,\frac{1}{4}G^{-1}(\kappa)\Big\{ \,{\rm Re} \left( \mzi^2 \,I_1(\mpi,\kappa)\right) \,+\,(1\,+\,m_0^2)\,I_2(m,\kappa) \Big\}\,, \label{GZ1}\\ I_1(\mpi,\kappa)\,\,&=&\,\, \frac{2}{\sqrt{\kappa (\kappa + 4\,\mpi)}} \ln\left(\frac{\sqrt{\kappa}+\sqrt{\kappa+4\,\mpi}}{-\sqrt{\kappa}+\sqrt{\kappa+4\,\mpi}}\right)\,, \label{I1}\\ I_2(m,\kappa)\,\,&=&\,\, -\frac{1}{\sqrt{4 m \kappa+\kappa^2-4}} \ln\left(\frac{\kappa+2\,m\,-\,\sqrt{4 m \kappa+\kappa^2-4}}{\kappa+2\,m\,+\,\sqrt{4 m \kappa+\kappa^2-4}}\right)\,, \label{I2}\\ K(\kappa,\kappa')\,\,&=&\,\, {\rm Re}\left\{\frac{\mzi}{\sqrt{2\,\mpi\,(\kappa\,+\,\kappa')+\mpi^2+(\kappa\,-\,\kappa')^2}}\right\}\,, \label{GZ2}\\ G(\kappa)\,\,&=&\,\,\frac{\kappa\,+\,m\,+\,m_0}{\left(\kappa\,+\,m\right)^2\,+\,1}\,, \label{GZ3}\\ \mpi\,\,&=&\,\,m+i\,,\quad \mzi\,\, = \,\,1+i\,m_0\,. \label{MPI} \end{eqnarray} \end{subequations} In \eq{BFKLF1} - \eq{GZ3} we use the variables defined in \eq{VAR}.
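To make these formulae concrete, the kinetic term can be evaluated numerically from \eq{GZ1}-\eq{GZ3} (an illustrative sketch, not from the original text; complex square roots take care of all branches of \eq{I1} and \eq{I2}). For the lattice-motivated values $m=1.27$, $m_0=3.76$ this gives $T(\kappa\to 0)\approx 0.86$, the value that reappears below as the multiply degenerate eigenvalue $E_0=T(\kappa=0)$:

```python
import numpy as np

m, m0 = 1.27, 3.76           # lattice QCD estimates (dimensionless, Eq. VAR)
mpi = m + 1j                 # \mpi = m + i
mzi = 1.0 + 1j * m0          # \mzi = 1 + i m_0

def I1(kappa):
    # Eq. (I1)
    s1, s2 = np.sqrt(kappa + 0j), np.sqrt(kappa + 4 * mpi)
    return 2.0 / (s1 * s2) * np.log((s1 + s2) / (-s1 + s2))

def I2(kappa):
    # Eq. (I2)
    s = np.sqrt(4 * m * kappa + kappa**2 - 4 + 0j)
    return -np.log((kappa + 2 * m - s) / (kappa + 2 * m + s)) / s

def G(kappa):
    # Eq. (GZ3), dimensionless propagator
    return (kappa + m + m0) / ((kappa + m) ** 2 + 1.0)

def T(kappa):
    # Eq. (GZ1), kinetic term; I2 is real up to rounding
    return 0.25 / G(kappa) * ((mzi**2 * I1(kappa)).real
                              + (1 + m0**2) * I2(kappa).real)

T0 = T(1e-10)                # kappa -> 0 limit
assert abs(T0 - 0.866) < 0.02
```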
\section{The basics of the spectrum for the master equation } \subsection{The equation for the eigenfunctions of the massless BFKL equation} As has been mentioned, the eigenfunctions of the massless BFKL equation \begin{equation} \label{LKAP} \phi_{\mbox{\tiny BFKL}}(\kappa;\gamma)\,=\,\kappa^{\gamma - 1}\quad\mbox{with}\quad\gamma\,=\,\frac{1}{2}\,+\,i\,\nu \end{equation} form a complete and orthogonal set of functions. Hence, we can expect that the solution to the master equation can be written as a sum over these functions. For this reason we find it instructive to consider how the emission kernel of our master equation (see \eq{BFKLF1}) acts on the eigenfunctions of \eq{LKAP}: \begin{subequations} \begin{eqnarray} \int \!\!d \kappa' K(\kappa,\kappa')\,\kappa'^{\,\gamma-1}\!-\!\chi(\gamma)\,\kappa^{\gamma-1}\!\!&=&\!\! \int^\infty_0 \!\!\!d\kappa'\, {\rm Re}\,\left\{\frac{\mzi}{\sqrt{2\,\mpi\,(\kappa+\kappa')+\mpi^2+(\kappa-\kappa')^2}}\right\} \,\kappa'^{\,\gamma-1} -\chi(\gamma)\,\kappa^{\gamma-1} \label{SC1}\\ &\to&\kappa^{\gamma-1} \Bigg\{\int_0^1\!\!d t\, \left(t^{\gamma-1}\!-1\right) {\rm Re}\!\left(\!-\frac{1}{\sqrt{(1-t)^2}}+\frac{\mzi}{\sqrt{(\mpi/\kappa)^2 + 2(t+1)\mpi/\kappa+(1-t)^2}}\right) \nonumber\\ &+&\int_0^1\!\!d t \left(t^{-\gamma}\!-1\right) {\rm Re}\!\left(\!-\frac{1}{\sqrt{(1-t)^2}}+\frac{\mzi}{\sqrt{(t\mpi/\kappa)^2 + 2(t+1)t\mpi/\kappa+(1-t)^2}}\right) \!\!\Bigg\} \label{SC11}\\ &\equiv&\kappa^{\gamma - 1} \,P(\kappa,\gamma) \label{SC12} \end{eqnarray} \end{subequations} where the kernel of the massless BFKL equation $\chi\left( \gamma\right)$ has the form \cite{BFKL}: \begin{equation} \label{CHI} \chi\left( \gamma\right) \,\,=\,\,\psi(1-\gamma)\,+\,\psi(\gamma)-2\,\psi(1) \,\,=\,\,\psi\left( \frac{1}{2} + i\,\nu\right) \,+\,\psi\left( \frac{1}{2} - i\,\nu\right) \,-\,2\,\psi(1) \end{equation} where $\psi(z)$ is the Euler $\psi$-function (formula {\bf 8.36} of Ref.\cite{RY}).
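As a quick numerical cross-check (an illustration, assuming \texttt{scipy} for the digamma function), \eq{CHI} along the line $\gamma=\frac{1}{2}+i\nu$ has its minimum $\chi\left(\frac{1}{2}\right)=-4\ln 2$ at $\nu=0$ and grows monotonically with $|\nu|$:

```python
import numpy as np
from scipy.special import digamma

def chi(nu):
    """Eq. (CHI) evaluated along the line gamma = 1/2 + i nu."""
    g = 0.5 + 1j * nu
    return (digamma(g) + digamma(1 - g) - 2 * digamma(1.0)).real

# chi(0) = 2 psi(1/2) - 2 psi(1) = -4 ln 2
assert abs(chi(0.0) + 4 * np.log(2.0)) < 1e-10

# chi increases away from nu = 0, so -4 ln 2 is the bottom of the spectrum
nus = np.linspace(0.0, 5.0, 51)
vals = np.array([chi(t) for t in nus])
assert np.all(np.diff(vals) > 0)
```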
In \eq{SC11} the region of integration over $\kappa'$ is divided into two: $\kappa' \,\leq \,\kappa$ and $\kappa'\,\geq \,\kappa$. In the first region we introduce the new variable $t = \kappa'/\kappa$, while in the second the new variable is $t =\kappa/\kappa'$. In this way both $t$'s lie in the region $(0,1)$. In addition, in \eq{SC11} we subtracted (the terms with $1$ in the numerators) the contribution from the Regge trajectory (see \eq{GGLTR}). \begin{figure}[ht] \begin{tabular}{c c } \includegraphics[width=0.45\textwidth]{PvsL.pdf} & \includegraphics[width=0.44\textwidth]{TTILDEvsL.pdf} \\ \fig{p}-a & \fig{p}-b\\ \end{tabular} \caption{Functions $P\left(\kappa=e^l,\gamma=0.5\right)$ (\protect\fig{p}-a) and $\widetilde{T}\left( e^l\right)$ (\protect\fig{p}-b) versus $l$. In these figures the solid lines describe the case $m = m_0 = 0$, while the dotted ones correspond to $m =1.27, m_0 = 3.76$, which follow from the lattice QCD estimates \cite{DOS}.} \label{p} \end{figure} Using formula {\bf 3.211} of Ref.\cite{RY} we can express these integrals over $t$ through the Appell $F_1$ function (see Ref.\cite{RY} formulae {\bf 9.180-9.184}): \begin{eqnarray} \label{APEK} &&P(\kappa,\gamma)\,+\,\chi(\gamma)\,=\, {\rm Re}\Bigg\{\mzi\Bigg[ \frac{\kappa}{\gamma\left( \kappa+\mpi\right)} \,\,F_1\!\left(\gamma;\frac{1}{2},\frac{1}{2};\gamma+1; \frac{\kappa}{\kappa - \mpi - 2\sqrt{-\kappa\mpi}}, \frac{\kappa}{\kappa - \mpi + 2\sqrt{-\kappa\mpi}}\right) \\ &&-\frac{\sqrt{4\kappa^3+\mpi(3\kappa+\mpi)^2}}{(\gamma-1) (\kappa+\mpi) \sqrt{4\kappa+\mpi}} \,\,F_1\!\left(1-\gamma;\frac{1}{2},\frac{1}{2};2-\gamma; \frac{(\kappa+\mpi)^2}{\kappa-\mpi - 2\sqrt{-\kappa\mpi}}, \frac{(\kappa+\mpi)^2}{\kappa-\mpi + 2\sqrt{-\kappa\mpi}}\right) \nonumber\\ &&+\,\ln\left(\frac{2}{1+\sqrt{1+4\kappa/\mpi}}\right) -\,\frac{\kappa}{\kappa+\mpi}\ln\left( \frac{1}{2\kappa}\left[ \frac{\kappa+\mpi}{\mpi}\,\sqrt{\mpi(4\kappa+\mpi)} + 3\kappa+\mpi \right] \right) \Bigg]\Bigg\} \nonumber
\end{eqnarray} From \eq{SC11} and \eq{APEK} (see also \fig{p}-a) we can see that $P(\kappa,\gamma)$ is rather small and decreases at large positive $l = \ln\kappa$. Since in \eq{SC11} we subtract the reggeization term, we have re-defined the kinetic term in \eq{BFKLF1}, subtracting from $T\left( \kappa\right)$ of \eq{GZ1} the function $L\left( \kappa\right)$, which is equal to \begin{eqnarray} L(\kappa)&=& \int_0^1\! dt\,t\,\frac{\mzi}{\sqrt{\left(t\,\mpi/\kappa\right)^2 + 2 (t+1)\,t\,\mpi/\kappa + (1-t)^2}} \,+\,\int_0^1\! dt \frac{\mzi}{\sqrt{\left( \mpi/\kappa\right)^2 + 2 (t+1) \,\mpi/\kappa + (1-t)^2}} \nonumber\\ &=&\,\mzi\ln\left(\frac{1+\sqrt{1+4\kappa/\mpi}}{2}\right) +\,\frac{\kappa\,\mzi}{\kappa+\mpi}\ln\left( \frac{1}{2\kappa}\left[ \frac{\kappa+\mpi}{\mpi}\sqrt{\mpi(4\kappa+\mpi)}+3\kappa+\mpi \right] \right) \label{LL} \end{eqnarray} We denote \begin{equation} \label{TT} \widetilde{T}(l)\,=\,T\left( e^l\right)\,-\,{\rm Re}\,L\left( e^l\right) \end{equation} $\widetilde{T}(l)$ is plotted in \fig{p}-b. Using the functions $P\left( l,\gamma\right)$ and $\widetilde{T}(l)$, we see that our equation for the function $e^{(\gamma - 1) l}$ has the form \begin{equation} \label{SC5} \big(E + \chi(\gamma)\big)\, e^{(\gamma - 1) l}\,=\, e^{(\gamma - 1) l} \big(P(l,\gamma) \,- \,\widetilde{T}(l)\big)\,=\, e^{(\gamma - 1) l}\,\widetilde{P}(l,\gamma) \end{equation} In \fig{omdifnu} we plot \begin{equation} \label{OMDIFNU} E(\nu)\,=\,- \chi(\nu) \,+\,\widetilde{P}(l,\nu) \end{equation} fixing $\gamma\,=\,\frac{1}{2}\,+\,i\,\nu$. \fig{omdifnu}-a gives \eq{OMDIFNU} for $m = m_0 = 0$, which corresponds to the Gribov gluon propagator, while in \fig{omdifnu}-b the energy is plotted for the lattice QCD gluon propagator with $m = 1.27$ and $m_0 = 3.76$ \cite{DOS}. One can see that the wave functions of the massless BFKL equation show up as the eigenfunctions of the master equation in the kinematic region of large $l =\ln\kappa \,\gg\,1$.
Generally speaking, this means that the eigenvalues of \eq{BFKLF1} could (1) be the same as the massless ones, or (2) be selected out due to the behaviour at small $\kappa$, leading to a set of eigenvalues which is more restricted than the massless BFKL one. In addition, of course, there could be some discrete states, whose wave functions decrease more steeply than $\kappa^{\gamma-1}$ at large values of $\kappa$. From the numerical solution (see below) we see that there is no selection and all energies of the massless BFKL equation occur as eigenvalues of the master equation. We can understand this, since the BFKL eigenvalues of \eq{LKAP} are doubly degenerate. One can see this since the eigenvalues of the massless BFKL equation do not depend on the sign of $\nu$. At $l < 0$ we expect that the eigenfunction of the massive BFKL equation should be constant. Replacing this behaviour by the boundary condition $\phi(l = 0; \nu) = \mbox{Const}$, we see that we can satisfy this boundary condition by choosing $\phi(l)\,=\,C_1\,\phi_{\mbox{\tiny BFKL}}(l,\nu) + C_2\,\phi_{\mbox{\tiny BFKL}}(l,-\nu) \,=\,\sin\left( l\,\nu + \varphi\right)$ where $\varphi$ is the phase. One can see that this procedure does not bring any selection. However, the eigenfunctions of the massless BFKL equation appear as the eigenfunctions of \eq{SC5} also for $\kappa \,\ll\,1$ ($l \,=\,\ln \kappa\,\ll\,-1$) (see \fig{omdifnu}-b), but only for the case of $m\neq 0$ and $m_0 \neq 0$, with the eigenvalue $E_0 \,=\,T(\kappa=0)\,=\,0.866$ for any value of $\nu$ (brown line in \fig{omdifnu}-b). The independence of $\nu$ means that the eigenvalue $E_0$ is infinitely degenerate. Indeed, writing \eq{OMDIFNU} as $E(\nu)\,=\,\big\{-\chi(\nu)\,+\,P(l,\gamma)\big\}\,-\,\widetilde{T}(l)$, the term in $\big\{\dots\big\}$ vanishes at $l<0$ (see \fig{p}), while $\widetilde{T}(l)$ approaches a constant (see \fig{p}-b).
In principle, such solutions could be rejected for the master equation if the behaviour at small $\kappa$ could not be matched with the behaviour at large $\kappa$. However, this looks very unlikely. Indeed, any function of the following type: $\phi(\kappa)\,=\,P_n(2\,\kappa - 1)\,\Theta(1-\kappa)$, where $P_n(z)$ is the Legendre polynomial (see Ref.\cite{RY} formulae {\bf 8.91}), is orthogonal to $\phi(\kappa) = \mbox{Const}$ at $l<0$ (for $n\geq 1$) and satisfies \eq{SC5}. The numerical calculations, which we will discuss below, confirm that $E = E_0$ appears as an eigenvalue of the generalization of the BFKL equation (see \eq{BFKLF1}). \begin{figure}[ht] \centering \begin{tabular}{c c} \includegraphics[width=0.5\textwidth]{EGR.pdf} & \includegraphics[width=0.5\textwidth]{EFIT.pdf} \\ \fig{omdifnu}-a &\fig{omdifnu}-b\\ \end{tabular} \caption{The dependence of the energies of the master equation (see \eq{BFKLF1}) versus $l=\ln\kappa$ for the eigenfunctions of \eq{LKAP} with different values of $\gamma \,=\, \frac{1}{2}\,+\,i \nu$. The dotted lines describe the eigenvalues of the massless BFKL equation. \fig{omdifnu}-a describes the energies for the Gribov gluon propagator (see \eq{GLPR}), while \fig{omdifnu}-b corresponds to the gluon propagator of \eq{GGLPR1} with $m=1.27$ and $m_0=3.76$, which follow from the lattice QCD estimates \cite{DOS}.
The brown line in \fig{omdifnu}-b shows $E= T(\kappa=0)$.} \label{omdifnu} \end{figure} \subsection{General features of the spectrum} Following the general pattern of Ref.\cite{LLS}, we can re-write \eq{BFKLF1} in the coordinate space, introducing \begin{equation} \label{GF1} \Psi(r)\,\,=\,\,\int \frac{d^2 q_T}{(2\,\pi)^2} e^{i \vec{r}\cdot\vec{q}_T } \,\phi\left( q_T\right) \end{equation} The equation takes the following form \begin{equation} \label{HGR} E\,\Psi(r)\,\,=\,\,{\cal H}\,\Psi(r) \end{equation} with \begin{equation} \label{HGR1} {\cal H}\,\,=\,\,\underbrace{T\left(\hat{\kappa}\right)}_{\mbox{\small kinetic energy}} \,\,-\,\,\underbrace{U\left( r \right)}_{\mbox{\small potential energy}} \,\,=\,\,T\left(\hat{\kappa}\right)\,-\,G(r) \end{equation} where $\hat\kappa=-\nabla^2_r$ is the momentum operator and $G(r)$ is equal to \begin{equation} \label{GF2} G(r)\,=\,\int \frac{d^2 q_T}{(2\,\pi)^2} e^{i \vec{r}\cdot\vec{q}_T }\,G\left( q_T\right) =\,\mzi K_0\left(\sqrt{\mpi}\,r\right)+\mzi^* K_0\left(\sqrt{\mmi}\,r\right) \,\,\xrightarrow{r\,\gg\,m} \,\,{\rm Re}\left\{\frac{\mzi}{\mpi^{1/4}}\sqrt{\frac{\pi}{2\,r}} e^{-\sqrt{\mpi}r}\right\} \end{equation} At large $r$, $G(r)$ decreases exponentially, as one can see from \eq{GF2}. Hence, at large $r$ \eq{HGR} takes the following form: \begin{equation} \label{HGR2} E\Psi(r)\,\,=\,\,T\left(\hat{\kappa}\right)\,\Psi(r) \end{equation} with eigenfunctions of the following form \begin{equation} \label{GF3} \Psi\left( \vec{r}\right) \,\,\sim\,\, e^{ i\sqrt{\kappa^2} r}, ~~~\kappa^2\,>\,0; \quad\Psi\left(\vec{r}\right)\,\,\sim\,\, e^{-\sqrt{-\kappa^2} r}, ~~~\kappa^2\,<\,0.
\end{equation} Denoting the large-$r$ asymptotic behaviour of the eigenfunction as $\Psi(r) \xrightarrow{r\,\gg\,\,1/\mu} \,\,\exp\left(-\sqrt{a}\,r\right)$, we see that the energy is equal to \begin{equation} \label{HGR3} E \,\,=\,\, T(-a) \end{equation} On the other hand, in the region of small $r$ \eq{HGR1} reduces to the massless QCD BFKL equation (see the previous section and Refs.\cite{BFKL,LIP}): \begin{equation} \label{HGR4} E\,\Psi\left( r \right) \,\,=\,\,{\cal H}_0 \,\Psi\left( r \right) \end{equation} where \cite{LIP} \begin{equation} \label{H0} {\cal H}_0\,=\,\ln p^2 \,+\,\ln |r|^2\,-\,2\psi(1) \end{equation} The eigenfunctions of \eq{HGR4} are $\Psi(r)\,=\,r^{2(1-\gamma)}$, and the eigenvalues of \eq{HGR4} can be parametrized as a function of $\gamma$ (see \eq{CHI}). Therefore, for $r\,\to\,0$ we have the eigenvalue \begin{equation} \label{HGR5} E\,\,=\,\,\chi(\gamma) \end{equation} From \eq{HGR3} and \eq{HGR5} we can conclude that the values of $a$ and $\gamma$ are correlated, since \begin{equation} \label{HGR6} E\,\,=\,\,\chi(\gamma)\,\,=\,\,T(-\,a) \end{equation} Based on \eq{HGR5} (see also the previous section), we expect that the minimum eigenvalue is equal to $\chi(\frac{1}{2}) \,=\,-\,4\,\ln 2$. From \fig{t} we can see that for Gribov's propagator of \eq{GLPR} \eq{HGR6} is violated: $T(\kappa)$ is positive for all values of $-\infty\,<\,\kappa\,<\,+\infty$. For the gluon propagator that describes the lattice QCD estimates ($m = 1.27$ and $m_0 =3.76$) we can see from \fig{t}-b that $T(\kappa)$ is negative at $\kappa\,<\,0$, and therefore \eq{HGR6} can be satisfied. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{TGR.pdf} & \includegraphics[width=0.5\textwidth]{TFIT.pdf} \end{tabular} \caption{ $T(\kappa)$ versus $\kappa$ for Gribov's propagator of \eq{GLPR} ($m=m_0=0$) and for the gluon propagator that describes the lattice QCD estimates ($m = 1.27$ and $m_0 =3.76$).
The red line shows $E=-4\ln 2$, which is the ground state for the massless BFKL equation.} \label{t} \end{figure} On the other hand, the estimates of \eq{HGR3} contradict the result of Ref.\cite{GOLEM} that the eigenfunctions of the master equation with Gribov's gluon propagator exhibit a power-like decrease at long distances. We believe that the resolution of this inconsistency is intimately related to the definition of $\Psi(r)$. In particular, instead of \eq{GF1} we suggest introducing the following transform to the coordinate space. First we introduce a new function $\tilde{\phi}(\kappa)\,\,=\,\,G^{-1}(\kappa)\,\phi(\kappa)$. For this function \eq{BFKLMR} takes the form: \begin{equation} \label{BFKLN} \omega\,\tilde{\phi}(\omega,q)\,\,= \,\,-\,2\omega_G(q)\,\tilde{\phi}(\omega,q) \,\,+\,\,\bar{\alpha}_S \int \frac{d^2 q'}{\pi} \left\{G^{-1}(q)\,G\left(\vec{q}-\vec{q}^{\,'}\right)\,\,G\left( q'\right)\right\} \,\tilde{\phi}\left(\omega,q'\right) \end{equation} The eigenfunction in the coordinate space has the form: \begin{equation} \label{GF4} \Psi(r)\,\,=\,\,\int \frac{d^2 q_T}{(2\,\pi)^2} e^{i\vec{r}\cdot\vec{q}_T}\,\tilde{\phi}\left( q_T\right) \end{equation} and the master equation has the form of \eq{HGR} with a potential energy of a different form: \begin{equation} \label{GF05} {\cal H}\Psi(r)\,\,=\,\,\underbrace{T\left(\hat{\kappa}\right)}_{\mbox{\small kinetic energy}}\,\Psi(r) \,\,-\,\,\int d^2\, r'\underbrace{U\left(\vec{r},\vec{r}'\right)}_{\mbox{\small potential energy}}\,\Psi\left( r'\right) \end{equation} with \begin{eqnarray} \label{GF5} &&U\left( r,r'\right)\,\,=\,\, \int \frac{d^2 q_T }{(2\,\pi)^2} e^{i\vec{r}\cdot\vec{q}_T } \int \frac{d^2 q'_T}{(2\,\pi)^2} e^{i\vec{r}'\cdot\vec{q}'_T } \left\{G^{-1}(q)\,G\left(\vec{q}-\vec{q}'\right)\,\,G\left( q'\right)\right\}\\ &&=\,\,\int d^2\,r'' \,K_0\big((m+m_0)\,|\,\vec{r}\,-\,\vec{r}''\,|\,\big) \left(\left(-\nabla^2_{r''}+m\right)^2\,+\,1\right)\, G\left(
r''\right)\,G\left(\vec{r}''\,-\,\vec{r}'\right)\nonumber \end{eqnarray} One can see that for $m = m_0 = 0$ the potential energy $U(r,r')\propto\,\ln\left( r\right)$, and \eq{HGR2} turns out to be incorrect. For $m\,\neq\,0$ and $m_0\,\neq\,0$ the potential energy decreases exponentially at long distances. Hence, in this case \eq{HGR2} holds. \subsection{Eigenfunctions in the vicinity of $E_0 = T(\kappa=0)$} As we have discussed above, there is a possibility that the master equation has eigenvalues in addition to the eigenvalues of the QCD BFKL equation. These states should have wave functions that decrease much more steeply than the eigenfunctions of \eq{LKAP}. From \fig{omdifnu} one can expect that the vicinity of $E = E_0 = T(\kappa=0)$ can provide such states. Indeed, in the vicinity of $\kappa \to 0$, $T(\kappa)$ takes the form \begin{eqnarray} T(\kappa)\,&=&{\rm Re}\Bigg\{ \frac{\mzi\mpi\mmi}{4(m+m_0)} \left[\frac{\mzi}{\mpi} + \frac{i}{2} \mzi^* \ln\left(\frac{\mmi}{\mpi}\right)\right] +\,\frac{\kappa\,\mzi}{4} \Bigg[ \left(1-\frac{\mzi\mzi^*}{(m+m_0)^2}\right) \left(\frac{\mzi}{\mpi}+\frac{i}{2} \mzi^* \ln\left(\frac{\mmi}{\mpi}\right)\right) \nonumber\\ &+&\frac{\mpi\mmi}{m+m_0} \left\{ -\frac{\mzi}{6\,\mpi^2} + \mzi^* \left(-\frac{1}{2}+\frac{i}{4} m \ln\left(\frac{\mmi}{\mpi}\right)\right) \right\} \Bigg] \Bigg\} \label{E01} \\ &\equiv& E_0 \,\,+ \,\,E'_0\,\kappa \label{E02} \end{eqnarray} \eq{BFKLF1} takes the following form in the vicinity of $\kappa \to 0$: \begin{equation} \label{E03} (E - E_0 - E'_0\,\kappa) \phi(\kappa)\,\,= \,\,-\int d\kappa'\,K\left(\kappa=0,\,\kappa'\right)\,\phi(\kappa') \,\,-\,\,\kappa\,\int d\kappa'\,\frac{\partial K\left( \kappa,\kappa'\right)}{\partial\kappa}\Bigg{|}_{\kappa=0}\phi\left(\kappa'\right) \,\,+\,\,{\cal O }\left(\kappa^2\right) \end{equation} Introducing $$\epsilon = \frac{E-E_0}{E'_0\,-\,\int d\kappa'\,\frac{\partial K\left(\kappa,\kappa'\right)}{\partial\kappa}\Big{|}_{\kappa=0}}$$ one can see from \eq{E03} that
$\phi\left( \kappa\right)$ has a singularity: \begin{equation} \label{E04} \phi(\kappa)\Big{|}_{\kappa\,\to\,\epsilon}\,\,=\,\,\frac{\mbox{Const}}{\epsilon\,-\,\kappa} \end{equation} or, in other words, the wave function has the form: \begin{equation} \label{E05} \phi(\kappa)\,\,=\,\,\frac{\mbox{Const}}{\epsilon\,-\,\kappa}\,+\,\phi_{\,\rm bg}(\kappa) \end{equation} where $\phi_{\,\rm bg}$ is a function which has no singularities. Since $E=E_0$ is a multiply degenerate eigenvalue, the sum of functions of \eq{E05} is also an eigenfunction. It is instructive to note that the eigenfunctions of \eq{E05} do not appear for the QCD BFKL equation. As we have seen, the origin of such eigenfunctions lies in the fact that the typical $\kappa'$ in \eq{E03} are of the order of the mass and not equal to zero. \subsection{Resume} Concluding this section, we wish to emphasize two results that we have proved. First, the eigenvalues of the massless BFKL equation are, generally speaking, expected to be the eigenvalues of the master equation. In principle, it is possible that the behaviour of the wave functions at small values of $\kappa$ could select out some of the eigenvalues of the BFKL equation in QCD. However, due to the double degeneracy of each of the massless BFKL eigenvalues (note that \eq{CHI} has the symmetry $\nu \to -\nu$), the boundary conditions at $\kappa \to 0$ do not lead to a loss of eigenvalues of the master equation in comparison with the massless BFKL equation. Second, it is possible that the eigenvalues of the master equation have a richer structure than the eigenvalues of the BFKL equation in QCD. Indeed, there could be states with wave functions that are suppressed at large $\kappa$: $\phi(\kappa) \,\,\ll\,\, \kappa^{-\frac{1}{2} + i\,\nu}$. An example of such a function could be \eq{E04}. As we see from \fig{omdifnu}, the eigenfunctions with $E\,=\,E_0$ have infinite degeneracy, and all of them are eigenfunctions that were not present in the massless BFKL equation.
A separate problem is a state with a wave function that decreases more steeply than the eigenfunction of the massless BFKL equation, but with an eigenvalue which is smaller than $E_{\rm min} = -4\ln 2$. At the moment we cannot answer this question without finding the numerical solution to the master equation. \section{Numerical solution} \subsection{General approach} Generally speaking, we need to solve the equation which has the following structure: \begin{equation} \label{NS1} E\,\phi\left( \kappa\right)\,\,=\,\int d \kappa' {\cal K}\left( \kappa, \kappa'\right) \,\phi\left( \kappa'\right) \end{equation} where ${\cal K}$ is defined in \eq{BFKLF2}. The advantages of using \eq{BFKLF2} in comparison with \eq{BFKLF1} have been discussed in Appendix B of Ref.\cite{GOLEM}. For the numerical solution we discretize the continuous variables $\kappa$ and $\kappa'$ using the logarithmic grid $\{\kappa_n\}$ with $N+1$ nodes \begin{eqnarray}\label{NS2} \kappa_{n} &=& \kappa_{\min} \exp \left(n \Delta_\kappa\right), \quad \Delta_\kappa=\frac{1}{N}\, \ln\left(\kappa_{\max}/\kappa_{\min}\right), \quad n=0,...,N, \end{eqnarray} where the values of $\kappa_{\min}$ and $\kappa_{\max}$ are fixed. In most detail we consider the case with $\kappa_{\min}=10^{-10}$, $\kappa_{\max}=10^{65}$ and $N=2000$, but we have also investigated the dependence of the solution on the values of $\kappa_{\min}$, $\kappa_{\max}$ and $N$.
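For concreteness (an illustrative sketch), the grid of \eq{NS2} with these parameters is uniform in $\ln\kappa$ and can be built in a few lines:

```python
import numpy as np

# grid of Eq. (NS2) with the parameters used in the main calculation
kmin, kmax, N = 1e-10, 1e65, 2000
Delta = np.log(kmax / kmin) / N          # Delta_kappa
n = np.arange(N + 1)
kappa = kmin * np.exp(n * Delta)

assert kappa.size == N + 1
assert np.isclose(kappa[0], kmin) and np.isclose(kappa[-1], kmax)
# uniform spacing in ln(kappa)
assert np.allclose(np.diff(np.log(kappa)), Delta)
```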
In the discrete variables \eq{NS1} can be approximated in the form \begin{equation} \label{NS3} E \phi\left( \kappa_n\right)\, \,=\,\, \sum^{N}_{m=0} \kappa_m\,\Delta_\kappa\, {\cal K}\left( \kappa_n, \kappa_m\right)\, \phi\left(\,\kappa_m\right) \end{equation} Introducing the notations $\phi\left( \kappa_n\right) \equiv \phi_n$ and $\kappa_m\,\Delta_\kappa\,{\cal K}\left( \kappa_n, \kappa_m\right)\equiv\, {\cal K}_{n m}$, we can re-write \eq{NS3} in the matrix form \begin{equation} \label{NS4} E\,\phi_n\,\,=\,\, \sum^{N}_{m=0} {\cal K}_{n m}\,\phi_m\qquad\mbox{or}\qquad E\,\vec{\phi}\,\,=\,\,\,{\cal \mathbf K}\,\vec{\phi} \end{equation} where the vector $\vec{\phi}$ has $N+1$ components $\phi_n$ and ${\cal\mathbf K}$ is an $(N+1)\times(N+1)$ matrix. We need to find the roots of the characteristic polynomial $p\left( E \right)$ of the matrix ${\cal \mathbf K}$, i.e. we need to solve the secular equation \begin{equation} \label{NS5} p\left( E\right)\,\,=\,\,\mbox{det}\left({\cal \mathbf K}\,-\,E\, {\mathbf I}\right)\,\,=\,\,0 \end{equation} where ${\mathbf I}$ is the identity matrix. We solve \eq{NS5} for several equations. First, we find the solution to our new \eq{BFKLF2} in two cases: for the Gribov propagator of \eq{GLPR} and for the propagator of \eq{GZ3}. In the first case we need to put $m=0$ and $m_0=0$ in Eqs.~(\ref{GZ1})-(\ref{GZ3}), while in the second we need to choose $m=1.27$ and $m_0=3.76$ in these equations. Such values follow from the lattice estimates for the gluon propagator.
Second, we solve the original BFKL equation for QCD, which has the form: \begin{equation} \label{BFKLQCD} E \,\phi_{\mbox{\tiny BFKL}}\left( \kappa\right)\,\,= \,\,-\,\int\frac{ d \kappa'}{|\kappa - \kappa'|} \Bigg( \phi_{\mbox{\tiny BFKL}}\left( \kappa'\right) \,\,-\,\,\frac{\kappa}{\kappa'}\phi_{\mbox{\tiny BFKL}}\left( \kappa\right) \Bigg) \,\,-\,\, \int \frac{ \kappa\,\,d \kappa' }{\kappa'\,\,\sqrt{\kappa^2 + 4 \kappa'^2}}\,\,\phi_{\mbox{\tiny BFKL}}\left( \kappa\right) \end{equation} We believe that we need to compare our numerical procedure with an equation which has an analytical solution, both to check the accuracy of our numerical estimates and to evaluate our transition to the continuous limit. Recall that the numerical solution gives a discrete spectrum of eigenvalues instead of the continuous one. In addition, we solve the equation which was derived in Ref.\cite{LLS} for a non-abelian gauge theory with the Higgs mechanism for mass generation. This theory is not QCD, since it has no confinement of quarks and gluons. However, it has the same colour structure as QCD and introduces a dimensionful scale: the mass of the Higgs boson. Solving the BFKL equation for this theory, we can find out what is more essential: the new dimensionful scale or the specifics related to confinement.
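The accuracy check against \eq{BFKLQCD} can be sketched as follows (a simplified illustration of the procedure of \eq{NS2}-\eq{NS5}, not the production setup: a much smaller grid, and direct diagonalization with \texttt{numpy} after a similarity transform that makes the matrix symmetric, instead of solving the secular equation). On a finite grid the lowest eigenvalue should approach $E_{\min}=-4\ln 2\simeq -2.77$ from above:

```python
import numpy as np

# logarithmic grid, Eq. (NS2) (much smaller than in the main calculation)
N = 400
kmin, kmax = 1e-6, 1e6
l = np.linspace(np.log(kmin), np.log(kmax), N)
k = np.exp(l)
w = k * (l[1] - l[0])                  # d kappa' = kappa' d(ln kappa')

dk = np.abs(k[:, None] - k[None, :])
np.fill_diagonal(dk, 1.0)              # dummy value; diagonal handled below

# emission term acting on phi(kappa'), first integral of Eq. (BFKLQCD)
K = -w[None, :] / dk
np.fill_diagonal(K, 0.0)

# diagonal: the subtraction term minus the second (virtual) integral;
# their small-kappa' logarithms cancel between the two sums
sub = w[None, :] * (k[:, None] / k[None, :]) / dk
np.fill_diagonal(sub, 0.0)
virt = (w[None, :] * k[:, None]
        / (k[None, :] * np.sqrt(k[:, None] ** 2 + 4 * k[None, :] ** 2)))
K += np.diag(sub.sum(axis=1) - virt.sum(axis=1))

# similarity transform W^{1/2} K W^{-1/2} gives a symmetric matrix
s = np.sqrt(w)
E = np.linalg.eigvalsh(s[:, None] * K / s[None, :])

# no eigenvalues below the massless bound (up to discretization error),
# and the ground state sits close to -4 ln 2
assert E[0] > -4 * np.log(2.0) - 0.3
assert E[0] < -2.2
```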
We will call this approach ``the model'' below, and the main BFKL equation for this model takes the form: \begin{eqnarray} \label{BFKLHIGGS} \hspace{-0.5cm}&& E \phi\left( \kappa\right)\,=\, \underbrace{\frac{\kappa+1}{\sqrt{\kappa}\sqrt{\kappa+4}} \ln\frac{\sqrt{\kappa+4}+\sqrt{\kappa}}{\sqrt{\kappa+4}-\sqrt{\kappa}}\phi\left( \kappa\right)}_{\mbox{\small kinetic energy term}} \,-\, \underbrace{\int^{\infty}_{0}\,\frac{d \kappa' \phi\left( \kappa'\right)}{\sqrt{(\kappa-\kappa')^2\,+\,2 (\kappa + \kappa') + 1}}}_{\mbox{\small potential energy term}} \,+\, \underbrace{\frac{N^2_c + 1}{2 N^2_c}\frac{1}{\kappa + 1}\int^{\infty}_0 \frac{\phi\left( \kappa'\right) \,d \kappa'}{\kappa' + 1}}_{\mbox{\small contact term}} \end{eqnarray} This equation has been investigated in detail. In Ref.\cite{LLS} it was proven that the solution of this equation coincides with the solution to the massless BFKL equation, which is known analytically and can be used as a check of the accuracy of our numerical calculations. \subsection{Eigenvalues: the general characteristics} The eigenvalues of these four equations are shown in \fig{rootgen}. One can see that (1) the numerical estimates do not show discrete eigenvalues with energy $E \,\,<\,\,E_{\min} = -4 \ln 2$, where $E_{\min}$ is the minimal energy of the massless BFKL equation; and (2) none of the eigenvalues of the massless BFKL equation is singled out, in accord with our expectations in section III-A. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{plot-10_65-2000-En-all.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-En-100.pdf} \\ \fig{rootgen}-a &\fig{rootgen}-b\\ \end{tabular} \caption{ The eigenvalues $E_n$ of four equations: the BFKL equation in QCD (BFKL, see \eq{BFKLQCD}), the BFKL equation for the model (Higgs, see \eq{BFKLHIGGS}), for \eq{BFKLF2} in the case of the Gribov propagator (G-Z, $m=0$) and in the case of the lattice QCD propagator of \eq{GGLPR1} (G-Z, $m\neq0$).
The solid lines in \fig{rootgen}-b show the eigenvalues calculated using \eq{EVN0}, with $\beta_n$ taken from the pattern of zeros of the eigenfunctions given by \eq{EF1}, \eq{EF2} and \eq{EF4}. All results correspond to solutions with $\kappa_{\min}=10^{-10}$, $\kappa_{\max}=10^{65}$ and grid size $N=2000$ (see \eq{NS2}). } \label{rootgen} \end{figure} From \fig{rootgen} we see that all eigenvalues can be divided into three regions: the eigenvalues $E_n \,\leq\,E_0= T\left( \kappa = 0\right)$, the multiply degenerate eigenvalue $E_0$, and $E_n\,\geq\,E_0$. For $E_{\rm min} \,\leq E_n \,\leq E_0$ there are no eigenvalues other than the massless BFKL ones. Indeed, we can describe these eigenvalues using the following formulae: \begin{eqnarray} \label{EVN0} E(n) &=& -2\,\psi\left( 1 \right) \,+\, \psi\left( \frac{1}{2} \,+\,i \beta(n) \right) \,+\, \psi\left( \frac{1}{2}\,- \,i \beta(n) \right)\\ \beta(n) &=& \,a_\beta\,(n \,+\,1), ~~~ a_\beta = c_\beta / \ln \left( \kappa_{\max}/m^2_\beta \right) \nonumber \end{eqnarray} For the QCD BFKL equation $c_\beta=3.015$ and $m^2_\beta=\kappa_{\min}$, while for the other three equations we can put $c_\beta=3.140$ and $m^2_\beta=0.0042$ (see \fig{bvk}-a). Hence, \fig{rootgen}-b and Table~I demonstrate a new phenomenon: the eigenvalues of all three equations which introduce a dimensional scale in the BFKL approach turn out to be the same. Actually, they are the eigenvalues of the QCD BFKL equation, as \eq{EVN0} and \fig{rootgen}-b show. In \fig{rootgen} the eigenvalues from \eq{EVN0} are shown by the solid lines, and one can see that all these values for $E_n \leq E_0$ are perfectly described by this equation. \eq{EVN0} can be interpreted as an indication that the transition to the continuous limit reduces to the replacement $\frac{1}{2} + i \beta(n) \,\,\to\,\,\frac{1}{2} \,+\,i\,\nu\,\equiv \,\gamma$.
In these new variables the eigenvalues of the QCD BFKL equation take the familiar form \cite{KOLEB}: \begin{equation} \label{BFKLQCDE} E\left( \gamma \right)\,\,=\, -2\, \psi\left( 1 \right) \,+\,\psi\left(\gamma\right) \,+\,\psi\left( 1 - \gamma\right) \end{equation} Table~I, in which we list the first roots ($n = 0,\dots,20$) of the secular equation, illustrates these points. First, we see that the solution to the QCD BFKL equation gives eigenvalues which are quite close to the analytical estimates (see \eq{EVN0}). This indicates that our method of numerical solution provides good accuracy. As we can see, in both cases the lowest eigenvalue is quite close to $E_{\min} = - 4 \ln2$, and the difference is negligibly small (of the order of $5\times10^{-3}$ for $\kappa_{\min}=10^{-10}$ and $\kappa_{\max}=10^{65}$). \fig{root} shows the dependence of the first 7 roots on the value of $\kappa_{\max}$. One can see that when $\kappa_{\max}$ grows ($\kappa_{\max}\to\infty$) the distance between neighboring roots decreases rapidly, indicating a smooth transition to the continuous limit. As we can see from \fig{rootgen}, at $E_n = E_0 = T\left( \kappa = 0\right)$ for the three equations that introduce a new dimensional scale we have a multiply degenerate eigenvalue. For the Gribov propagator this degeneracy is not very large, but in the other cases it is so large that we can expect something like Bose-Einstein condensation at this energy. The general structure of the eigenfunctions at this value of the energy has been discussed in section III-C and will be considered below. For the scattering amplitude all these eigenfunctions correspond to cross sections that decrease as a power of energy and, because of this, do not show up in high energy scattering processes.
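The entries of the last column of Table~I can be reproduced directly from the digamma representation. A short Python check, assuming only the formula $E(\nu) = -2\psi(1)+\psi(\tfrac12+i\nu)+\psi(\tfrac12-i\nu)$ and the values $c_\beta=3.140$, $m^2_\beta=0.0042$, $\kappa_{\max}=10^{65}$ quoted in the text:

```python
import numpy as np
from scipy.special import digamma

# Massless BFKL eigenvalue E(nu) = -2 psi(1) + psi(1/2 + i nu) + psi(1/2 - i nu);
# the sum of digammas at complex-conjugate points is real.
def E_bfkl(nu):
    return (-2.0 * digamma(1.0 + 0j)
            + digamma(0.5 + 1j * nu) + digamma(0.5 - 1j * nu)).real

E_min = E_bfkl(0.0)            # minimum at nu = 0: E_min = -4 ln 2 ~ -2.7726

# Discrete spectrum of Eq. (EVN0): beta(n) = a_beta * (n + 1) with
# a_beta = c_beta / ln(kappa_max / m_beta^2).
a_beta = 3.140 / np.log(1e65 / 0.0042)
E0 = E_bfkl(a_beta * 1.0)      # n = 0: the first entry of the last column of Table I
```

Evaluating `E0` gives $\approx -2.766$, in agreement with the $n=0$ entry $-2.7657$ of the last column of Table~I.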
\begin{center} \begin{table} \begin{tabular}{||r|c|c|c|c|c||} \hline\hline n & ~~~$E_n$ (QCD)~~~ & ~~~$E_n$ (Higgs)~~~ & ~$E_n$ (G-Z, $m=0$)~ & ~$E_n$ (G-Z, $m\neq0$)~ & ~~$E_n$ (\eq{EVN0})~~ \\ \hline\hline 0 & -2.7675 & -2.7657 & -2.7660 & -2.7666 & -2.7657 \\ \hline 1 & -2.7519 & -2.7448 & -2.7457 & -2.7483 & -2.7452 \\ \hline 2 & -2.7261 & -2.7103 & -2.7123 & -2.7178 & -2.7114 \\ \hline 3 & -2.6905 & -2.6630 & -2.6665 & -2.6753 & -2.6650 \\ \hline 4 & -2.6456 & -2.6036 & -2.6088 & -2.6211 & -2.6067 \\ \hline 5 & -2.5919 & -2.5332 & -2.5403 & -2.5561 & -2.5377 \\ \hline 6 & -2.5301 & -2.4529 & -2.4620 & -2.4810 & -2.4588 \\ \hline 7 & -2.4610 & -2.3640 & -2.3751 & -2.3968 & -2.3715 \\ \hline 8 & -2.3854 & -2.2677 & -2.2808 & -2.3048 & -2.2768 \\ \hline 9 & -2.3040 & -2.1653 & -2.1802 & -2.2060 & -2.1760 \\ \hline 10 & -2.2177 & -2.0581 & -2.0746 & -2.1018 & -2.0705 \\ \hline 11 & -2.1273 & -1.9472 & -1.9651 & -1.9932 & -1.9612 \\ \hline 12 & -2.0336 & -1.8337 & -1.8528 & -1.8815 & -1.8492 \\ \hline 13 & -1.9373 & -1.7186 & -1.7387 & -1.7675 & -1.7356 \\ \hline 14 & -1.8391 & -1.6029 & -1.6236 & -1.6523 & -1.6212 \\ \hline 15 & -1.7397 & -1.4872 & -1.5083 & -1.5368 & -1.5068 \\ \hline 16 & -1.6396 & -1.3724 & -1.3936 & -1.4215 & -1.3930 \\ \hline 17 & -1.5394 & -1.2588 & -1.2800 & -1.3072 & -1.2804 \\ \hline 18 & -1.4395 & -1.1470 & -1.1680 & -1.1944 & -1.1695 \\ \hline 19 & -1.3403 & -1.0374 & -1.0579 & -1.0835 & -1.0605 \\ \hline 20 & -1.2421 & -0.9303 & -0.9501 & -0.9748 & -0.9539 \\ \hline \hline \end{tabular} \caption{ The first 20 eigenvalues $E_n$ for BFKL equation in QCD (QCD, see \eq{BFKLQCD}), for the model, developed in Ref.\cite{LLS} (Higgs, see \eq{BFKLHIGGS}), for \eq{BFKLF2} in the case of Gribov propagator (G-Z $m=0$) and in the case of lattice QCD propagator of \eq{GGLPR1} (G-Z, $m \neq 0$). Last column contains values defined by \eq{EVN0}. 
} \label{t1} \end{table} \end{center} \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{En-kmax-3-2000.pdf} & \includegraphics[width=0.5\textwidth]{En-kmax-3-m-2000.pdf} \\ \fig{root}-a & \fig{root}-b\\ \end{tabular} \caption{ The first several eigenvalues of \eq{BFKLF1} versus $\ln\left(\kappa_{\max}\right)$. \fig{root}-a for the Gribov propagator of \eq{GLPR} and \fig{root}-b for the lattice propagator of \eq{GGLPR1}. } \label{root} \end{figure} \subsection{Eigenfunctions for $E\,\,\leq\,\,E_0\,\,=\,T\left( \kappa = 0\right)$} For the QCD BFKL equation (see \eq{BFKLQCD}) the eigenfunctions are given by \eq{LKAP}, and for the numerical solutions they take the form: \begin{equation} \label{EF1} \phi^{\mbox{\tiny BFKL}}_n\left( \kappa\right)\,\,=\,\,\frac{\alpha_n}{\sqrt{\kappa}}\sin\left( \beta_n \,\ln \kappa\,\,+\,\,\varphi_n\right) \end{equation} The eigenfunctions for \eq{BFKLHIGGS} have been discussed in Ref.\cite{LLS} and can be described as follows: \begin{equation} \label{EF2} \phi_n \left( \kappa\right)\,\,=\,\,\frac{1}{\sqrt{\kappa\,+\,4}}\sin\left( \beta_n \, \mbox{Ln}(\kappa)\,\,+\,\,\varphi_n\right) \end{equation} where \begin{equation} \label{EF3} \mbox{Ln}(\kappa)\,\,=\,\, \ln\frac{\sqrt{\kappa\,+\,4} + \sqrt{\kappa}}{\sqrt{\kappa\,+\,4} - \sqrt{\kappa}}\,\,\xrightarrow{\kappa\,\gg\,1}\,\,\ln (\kappa) \end{equation} \begin{figure}[hb] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-0_1_2_10.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-0_1_2_10-mass.pdf} \\ \fig{ef}-a &\fig{ef}-b\\ \end{tabular} \caption{ Examples of the eigenfunctions of \eq{BFKLF2}. \fig{ef}-a for the Gribov propagator of \eq{GLPR}, i.e. $m=0,m_0=0$ in \eq{GZ3}. \fig{ef}-b for the lattice QCD propagator of \eq{GZ3} with $m=1.27,m_0=3.76$\,. The functions $\phi_n(\kappa)$ are shown for $n=0,1,2$ and $10$. One can see that every $\phi_n(\kappa)$ has $n$ zeros.
} \label{ef} \end{figure} Several examples of the eigenfunctions for \eq{BFKLF2} are shown in \fig{ef}. One can see that the number of zeros follows the usual pattern of a quantum mechanical problem: the minimum energy state has no zeros, the next has one, and so on. At large $\kappa$, $\phi_n\left( \kappa\right)\,\propto\,\sin\left( \beta_n \,\ln \kappa\right)$, or in other words $\phi_n\left( \kappa\right) = C_1 \phi_{\mbox{\tiny BFKL}}\left( \kappa; \frac{1}{2} + i \beta_n\right) \, +\,C_2 \phi_{\mbox{\tiny BFKL}}\left( \kappa; \frac{1}{2} - i \beta_n\right),$ where $\phi_{\mbox{\tiny BFKL}}\left( \kappa; \gamma\right)$ are given by \eq{LKAP}. For $\kappa \,\geq 1$ all eigenfunctions can be parameterized in the following way: \begin{equation} \label{EF4} \phi_n\left( \kappa\right)\,\,=\,\,\frac{\alpha_n\,(\kappa+m)}{\sqrt{(\kappa\,+\,a_n)^3}}\,\sin\left( \beta_n Ln(\kappa) \,+\, \varphi_n\right) \end{equation} where \begin{equation} \label{EF5} Ln(\kappa) \,\,=\,\, \frac{\kappa}{4} \Big\{ \mbox{Re}\left( \mzi^2 \,I_1 \right) \,+\, \mzi \mzi^* \,\,I_2 \Big\} \,\,\xrightarrow{\kappa \,\gg\,1} \,\, \ln \kappa \end{equation} The parameter $\beta_n$ has the simple form defined by \eq{EVN0}: \begin{equation} \label{EF6} \beta_n \,\,=\,\,a_\beta\,(n+1), \quad a_\beta = \frac{3.140}{\ln(\kappa_{\rm max}/m^2_\beta)} \end{equation} It was found that for Gribov's propagator ($m = 0$, $m_0 = 0$) and for the propagator with $m \neq 0$, $m_0 \neq 0$ the same $m^2_\beta=0.0042$ can be used (see \fig{bvk}-a).
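The parameters $(\alpha_n,\beta_n,\varphi_n)$ of \eq{EF1} can be extracted from a numerical eigenfunction by a least-squares fit of its large-$\kappa$ tail. A sketch with synthetic data (the ``true'' parameter values here are assumptions for illustration; in practice $\sqrt{\kappa}\,\phi_n(\kappa)$ would come from a column of the eigenvector matrix):

```python
import numpy as np
from scipy.optimize import curve_fit

# Rescaled large-kappa form of Eq. (EF1):
#   sqrt(kappa) * phi_n(kappa) = alpha * sin(beta * ln(kappa) + varphi)
def tail(lnk, alpha, beta, varphi):
    return alpha * np.sin(beta * lnk + varphi)

rng = np.random.default_rng(1)
lnk = np.linspace(0.0, 100.0, 1000)   # ln(kappa), kappa from 1 to e^100
true = (1.3, 0.062, 0.49)             # assumed alpha_n, beta_n, varphi_n
data = tail(lnk, *true) + 1e-4 * rng.standard_normal(lnk.size)

popt, _ = curve_fit(tail, lnk, data, p0=(1.0, 0.06, 0.4))
alpha_fit, beta_fit, varphi_fit = popt
```

Repeating such a fit for each eigenvector yields the $n$ dependence of $\beta_n$ and $\varphi_n$ discussed next.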
The phase $\varphi_n$ requires a somewhat more complicated parameterization: \begin{equation} \label{EF7} \varphi_n\,\,=\,\,a_{\varphi,0} \,+\, a_{\varphi,1}\,n \,+\, a_{\varphi,2} \left( n - n_\varphi\right)^3 \end{equation} For $\kappa_{\max} = 10^{65}$ we obtain the following values of the parameters $a_{\varphi,i}$: \begin{equation} \label{PAM} a_{\varphi,0} = 0.486\;(1.520);~~ a_{\varphi,1} = 0.0350\;(-0.0223);~~ a_{\varphi,2} = 0.425\times10^{-4}\;(1.211\times10^{-4});~~ n_\varphi = 21.71\;(21.76) \end{equation} In \eq{PAM} the values of $a_{\varphi,i}$ for the case $m \neq 0, m_0\neq0$ are given in parentheses. \fig{efpar} shows the $n$ dependence of $\beta_n$ and $\varphi_n$ for $n \leq 40$. One can see that the linear dependence of \eq{EF6} holds for $\beta_n$, but $\varphi_n$ shows a more complicated pattern: $\varphi_n \,\propto\,n$ with variations described by \eq{EF7}. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{plot-10_65-2000-beta-60.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-phi-40.pdf} \\ \fig{efpar}-a &\fig{efpar}-b\\ \end{tabular} \caption{ Parameters $\beta_n$ (\fig{efpar}-a) and $\varphi_n$ (\fig{efpar}-b) of \eq{EF4} for the eigenfunctions $\phi_n(\kappa)$ versus $n$. The presented data correspond to eigenfunctions obtained with $\kappa_{\min}=10^{-10}$ and $\kappa_{\max}=10^{65}$. One can see that $\beta_n$ has a simple linear dependence, while $\varphi_n$ has a more complicated form given by \eq{EF7}. Solid lines correspond to the proper parameterizations (i.e. \eq{EF6} for $\beta_n$ and \eq{EF7} for $\varphi_n$). } \label{efpar} \end{figure} \begin{figure}[ht] \centering \begin{tabular}{c c} \includegraphics[width=0.5\textwidth]{abeta-kmax-10_65.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-beta-500.pdf} \\ \fig{bvk}-a & \fig{bvk}-b\\ \end{tabular} \caption{ \fig{bvk}-a: $a_\beta$ versus $\kappa_{\max}$ for Gribov's and lattice QCD gluon propagators.
The solid line corresponds to \eq{EF6} with $m^2_\beta = 0.0042$ for both cases. \fig{bvk}-b: $\beta_n$ of \eq{EF4} for the eigenfunctions $\phi_n(\kappa)$ obtained with $\kappa_{\min}=10^{-10}$ and $\kappa_{\max}=10^{65}$ at $\kappa \,>\, 1$, versus $n$. } \label{bvk} \end{figure} \fig{bvk}-a shows the dependence of $a_\beta$ on the value of $\kappa_{\rm max}$ for Gribov's ($m=0$) and lattice QCD ($m\neq 0$) gluon propagators. One can see that \eq{EF6} describes this dependence quite well. As far as the dependence of $\beta_n $, $\varphi_n$ and $a_n$ on $\kappa_{\max}$ is concerned, we found that they have a quite simple scaling property, which allows one to relate their values obtained with different $\kappa_{\max}$: the values of a parameter $P_n=P(n;{\kappa_{\max}}_1)$ corresponding to ${\kappa_{\max}}_1$ fit well the results $P(n;{\kappa_{\max}}_2)$ after a simple change of the scale of $n$: \begin{equation} \label{Skmax} P(n;{\kappa_{\max}}_1) \approx P(n S;{\kappa_{\max}}_2), ~~~~ S = \frac{\ln({\kappa_{\max}}_2)}{\ln({\kappa_{\max}}_1)} \end{equation} Such scaling behaviour for $\beta_n$ follows from \eq{EF6}, while \fig{Sphi} illustrates this $\kappa_{\max}$ scaling for the parameter $\varphi_n$. The figure shows the parameters obtained at different $\kappa_{\max}$ in the range from $10^{20}$ to $10^{65}$; the sets of $\varphi_n$ were rescaled in $n$ with the proper coefficient (see \eq{Skmax}) to $\kappa_{\max}=10^{65}$. One can see that the ``scaled'' points are in good agreement with the original values for $\kappa_{\max}=10^{65}$. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{PhiFit-phi-3.pdf} & \includegraphics[width=0.5\textwidth]{PhiFit-phi-3-m.pdf} \\ \fig{Sphi}-a &\fig{Sphi}-b\\ \end{tabular} \caption{ Scaling in $\kappa_{\max}$ of the parameter $\varphi_n$. Values in Fig.~\ref{Sphi}-a correspond to Gribov's gluon propagator and in Fig.~\ref{Sphi}-b to the lattice QCD propagator.
Open symbols denote the parameters obtained at different $\kappa_{\max}=10^{20}, 10^{30}, 10^{45}, 10^{65}$. The sets of $\varphi_n$ rescaled in $n$ to $\kappa_{\max}=10^{65}$ are denoted by the corresponding solid symbols. } \label{Sphi} \end{figure} For Gribov's propagator the eigenfunction $\phi_n\left( \kappa\right) \,\,\propto\,\,\kappa$ in the region of small $\kappa$. In other words, the eigenfunctions in coordinate space exhibit a power-like decrease at long distances. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{PhiFit-a_n-a_mod.pdf} & \includegraphics[width=0.5\textwidth]{PhiFit-a_n.pdf} \\ \fig{an}-a &\fig{an}-b\\ \end{tabular} \caption{ Values of the parameter $a_n$ from \eq{EF4} versus $n$. The left plot (\fig{an}-a) demonstrates the scaling property of this parameter for the case of Gribov's propagator. The right plot (\fig{an}-b) shows $a_n$ for both variants (i.e. Gribov's and lattice QCD propagators), corresponding to eigenfunctions with $\kappa_{\min}=10^{-10}$ and $\kappa_{\max}=10^{65}$. } \label{an} \end{figure} It turns out that for small $\kappa$ we can use \eq{EF4}, which introduces the dependence of the parameter $a_n$ on $n$. \fig{an}-a shows the scaling behaviour of \eq{Skmax} for $a_n$ in the case of Gribov's propagator. It should be stressed that the same scaling behaviour holds for the case $m \neq 0$. \fig{an}-b shows the dependence of $a_n$ on $n$. From this figure we see that $a_n \,\propto\,n^2$ for Gribov's propagator, with $a_n = 0.3$ at $n = 0$. In other words, the typical $\kappa$ turns out to be in the region $\kappa = 0.27\div\,0.75$. It should be stressed that \eq{EF4} describes quite well the behaviour of the eigenfunctions both at large $\kappa \,\geq\,1$ and at small $\kappa \,\leq\,a_n$, but at $\kappa \,\sim\,a_n$ \eq{EF4} does not lead to a good fit of the eigenfunctions.
For the case of the lattice QCD propagator (\eq{GZ3} with $m = 1.27$ and $m_0 = 3.76$), \fig{an}-b shows that $a_n$ decreases with $n$, approaching $a_n \approx 0.4 $ as $n \to 40$. One can see that for small $n$ the typical values are $\kappa \sim\,1$, and the range of typical $\kappa$ is $0.4 \div 1$. We would like to stress that the value of $a_n$ cannot be viewed as a typical scale for the $\kappa$ dependence of the eigenfunctions. Indeed, one can see directly from \fig{ef} that the typical $\kappa $ is about $\kappa \sim 1$ in both cases. \subsection{Eigenfunctions for $E_n\,\,=\,E_0\,\,=\,T\left( \kappa = 0\right)$} As we have discussed above, the solution leads to a multiply degenerate state at $E_n = E_0 = T\left( \kappa = 0\right)$. Using \eq{EVN0} we can estimate the value of $\beta^0$: viz. $E_0 = - 2 \,\psi(1) \,+\,\psi\left( \frac{1}{2} \,+\,i\,\beta^0\right) \,+\,\psi\left( \frac{1}{2} \,-\,i\,\beta^0\right)$. The corresponding index $n$ at which the first degenerate eigenvalue appears can be estimated using \eq{EF6}: the degenerate sequence starts when $\beta_n$ reaches the value $\beta^0$. For Gribov's propagator $\beta^0\approx0.85$ (see \fig{efpar}-a) leads to the value $n=41$ for $\kappa_{\max}=10^{65}$, while for the RGZ gluon propagator $\beta^0\approx0.92$ gives $n=45$ (see \fig{efpar}-a). At such values of $n$ the $\kappa$ behaviour of the eigenfunctions shows a discontinuity: of course, the numerical values of the eigenfunctions at $\ln\left(\kappa/\kappa_{\rm min}\right) = n \Delta$ (see \eq{NS2}) are still finite, but the values at neighbouring nodes have a different sign and derivative, indicating that the eigenfunctions have a pole in $\kappa$ located somewhere between the nodes.
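The index $n$ at which the degenerate sequence begins follows directly from \eq{EF6}: it is the first $n$ with $\beta_n = a_\beta(n+1) \geq \beta^0$. A short check with the parameters quoted above ($c_\beta = 3.140$, $m^2_\beta = 0.0042$, $\kappa_{\max}=10^{65}$):

```python
import math

# First index n with beta_n = a_beta*(n+1) >= beta^0, using Eq. (EF6).
a_beta = 3.140 / math.log(1e65 / 0.0042)

def first_degenerate_n(beta0):
    return math.ceil(beta0 / a_beta) - 1

n_gribov = first_degenerate_n(0.85)   # Gribov propagator, beta^0 ~ 0.85
n_rgz = first_degenerate_n(0.92)      # lattice (RGZ) propagator, beta^0 ~ 0.92
```

This reproduces $n=41$ for Gribov's propagator and $n=45$ for the RGZ propagator, as stated in the text.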
The structure of the eigenfunctions with this eigenvalue is rather simple for the lattice (RGZ) QCD gluon propagator (see \eq{GZ3} with $m =1.27$ and $m_0 = 3.76$), and it is close to the one discussed in Ref.\cite{LLS} for \eq{BFKLHIGGS}. The eigenfunctions for $E_n=E_0$ have poles at $\kappa=\kappa_{p,n}$, as has been shown in section III-C. Actually, the minimal value of $\kappa_{p,n}$ is equal to $\kappa_{\min}$ (strictly speaking, the first pole is located somewhere between the first node $\kappa_0=\kappa_{\min}$ and the next node $\kappa_1 = \kappa_{\min}\exp(\Delta_\kappa) \approx \kappa_{\min} (1+\Delta_\kappa)$). With each increase of $n$ the pole moves exactly to the next interval in $\kappa$ (i.e. the second pole is located between $\kappa_1$ and $\kappa_2$, and so on). This sequence terminates when the pole reaches the maximal value $\kappa=\kappa^0\,\approx\,3$, where $\kappa^0$ is the location of the first zero of the eigenfunction of \eq{EF4}. All eigenfunctions with $E_n = E_0$ have the same number of zeros. All these features can be seen in \fig{efe0mass}. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-50_150_250_400-mass1.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-50_150_250_400-mass2.pdf} \\ \fig{efe0mass}-a &\fig{efe0mass}-b\\ \end{tabular} \caption{ Examples of the eigenfunctions with the eigenvalue $E_n = E_0 = T\left( \kappa=0\right)$ for the lattice QCD gluon propagator (\eq{GZ3} with $m=1.27$ and $m_0=3.76$). \fig{efe0mass}-a shows $\phi_n\left( \kappa\right)$ versus $\kappa$ for $n=50,150,250$, for which $E_n = E_0$, and $\phi_{400}\left( \kappa\right)$ with $E_{400} > E_0$. One can see that all eigenfunctions with $E_n = E_0$ have the same number of zeros. The same eigenfunctions are shown in \fig{efe0mass}-b but in the region of small $\kappa \leq 100$.
It is clearly seen that each $\phi_n\left( \kappa\right)$ has a pole whose position moves from $\kappa_{\rm min}$ to $\kappa \approx 1$. The function $\phi_{400}\left( \kappa\right)$ has a different number of zeros, which corresponds to an increased value of $\beta$ in \eq{EFE01}. } \label{efe0mass} \end{figure} Generally, for $m \,\neq\,0$ and $m_0\,\neq\,0$ the eigenfunctions can be approximated by the following expression: \begin{equation} \label{EFE01} \phi^{\mbox{\tiny (approx)}}_n\left(\kappa\right) \,=\,\frac{a_{p,n}}{\kappa_{p,n}\,-\,\kappa} \,+\,\phi_n\left(\kappa;\,\eq{EF4}\right) \,=\,\frac{a_{p,n}}{\kappa_{p,n}\,-\,\kappa} \,+\,\frac{\alpha_n\,(\kappa+m)}{\sqrt{(\kappa\,+\,a_n)^3}} \,\sin\left(\beta^0 Ln(\kappa)\,+\,\varphi_n\right) \end{equation} In \eq{EFE01} the parameter $\beta^0$ (see \fig{bvk}-b) does not depend on $n$, reflecting the fact that for all these states the behaviour of the eigenfunctions at large $\kappa \,\geq\,1$ can be described by one function with the same number of zeros. \fig{parwfmass} shows the $n$-dependence of the other parameters of $\phi^{\mbox{\tiny (approx)}}_n\left( \kappa\right)$ (see \eq{EFE01}). One can see that both the position of the pole $\kappa_{p,n}$ and its residue $a_{p,n}$ grow exponentially with $n$ ($\ln\left(\kappa_{p,n}\right) \,\propto\,n$, $\ln\left( a_{p,n}\right)\, \propto\, n$), while the parameters $a_n$ and $\varphi_n$ of $\phi_n\left( \kappa; \eq{EF4}\right) $ do not depend on $n$ in the range $n = 50 \div 250$, which corresponds to $E_n = E_0$. The value $a_n=1$ gives us the typical transverse momentum $q\,=\,\mu$ (see \eq{VAR}).
\begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{PhiFit-kpn-mass-500.pdf} & \includegraphics[width=0.5\textwidth]{PhiFit-apn-mass-250.pdf} \\ \fig{parwfmass}-a &\fig{parwfmass}-b\\ \includegraphics[width=0.5\textwidth]{PhiFit-a_n-mass-250.pdf} & \includegraphics[width=0.5\textwidth]{PhiFit-phi-mass-250.pdf} \\ \fig{parwfmass}-c &\fig{parwfmass}-d\\ \end{tabular} \caption{ Parameters of the wave functions with $E_n = E_0$ versus $n$. \fig{parwfmass}-a shows the position of the pole $\kappa_{p,n}$ in \eq{EFE01} as a function of $n$, while \fig{parwfmass}-b presents the $n$ dependence of the residue $a_{p,n}$. \fig{parwfmass}-c and \fig{parwfmass}-d describe the dependence of the parameters $a_n$ and $\varphi_n$ on $n$. } \label{parwfmass} \end{figure} For Gribov's gluon propagator the structure of the eigenfunctions with $E_n=E_0$ is much more complex. First, one can notice from \fig{efe0} that the number of zeros is not the same for these eigenfunctions, but $\beta^0$ in \eq{EFE02} changes with $n$ rather slowly (see \fig{efpar}-a for $n$ in the region $n = 41 \div 70$). Second, we see that $\phi_n\left( \kappa\right)$ with $E_n = E_0$ have two poles. The first pair of poles $\kappa_{p,1}<\kappa_{p,2}$ appears near $\kappa\approx1$. With increasing $n$ the smaller one ($\kappa_{p,1}$) decreases, while $\kappa_{p,2}$ increases. At each increment of $n$ only one of the $\kappa_{p,i}$ moves to the neighbouring $\kappa$ interval between nodes, so the distance between these poles (in terms of the index of the $\kappa$-nodes) increases by exactly $1$ each time. The contribution of each of these poles vanishes at $\kappa \to 0$, and the residues of these poles can have the same or opposite signs. Third, the positions of the poles are in the region $\kappa = 0.1 \div 10$, and they exist also in the eigenfunctions with $E_n > E_0$. Fourth, the two poles in the eigenfunctions have close positions.
\eq{EFE02} reflects the main features that we have discussed, but the actual structure of the eigenfunctions turns out to be much more complex. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-50_63_73_103-1.pdf} & \includegraphics[width=0.5\textwidth]{plot-10_65-2000-Phi-50_63_73_103-2.pdf} \\ \fig{efe0}-a &\fig{efe0}-b\\ \end{tabular} \caption{ Examples of the eigenfunctions with the eigenvalue $E_n = E_0 = T\left( \kappa=0\right)$ for Gribov's gluon propagator (\eq{GZ3} with $m=0$ and $m_0=0$). \fig{efe0}-a shows $\phi_n\left( \kappa\right)$ versus $\kappa$ for $n=50,63,73$, for which $E_n \approx E_0$, and $\phi_{103}\left(\kappa\right)$ with $E_{103} > E_0$. (For better clarity, some functions were scaled: values for $n=63$ are multiplied by $10^3$, $n=73$ by $10^6$ and $n=103$ by $10^9$.) One can see that all eigenfunctions with $E_n = E_0$ have approximately the same number of zeros, but $\beta$ in \eq{EFE02} is not a constant and slowly grows as $n$ increases. In \fig{efe0}-b the same functions are shown in the region of small $\kappa \leq 100$. It is clearly seen that $\phi_n\left( \kappa\right)$ have two poles whose positions move in opposite directions from $\kappa=1$. The open circles in \fig{efe0}-b show the region where the function is negative. } \label{efe0} \end{figure} However, for Gribov's propagator ($m = 0, m_0 =0$) the eigenfunction vanishes at $\kappa=0$ and can be approximated as follows: \begin{equation} \label{EFE02} \phi^{\mbox{\tiny (approx)}}_n\left( \kappa\right)\,\,= \,\,\frac{\kappa\,a_{p,1}(n)}{\kappa^2 - \kappa^2_{p,1}(n)}\,\, \pm\,\,\frac{\kappa\,a_{p,2}(n)}{\kappa^2 - \kappa^2_{p,2}(n)}\,\, + \,\,\frac{\alpha_n\,\kappa}{\sqrt{(\kappa+a_n)^3}}\sin\left(\beta^0 \, Ln\kappa\,+\,\varphi_n\right) \end{equation} The appearance of two poles in \eq{EFE02} looks natural (see section III-C) due to the multiple degeneracy of this eigenvalue.
Indeed, due to this degeneracy the sum of two eigenfunctions, each with one pole, is also an eigenfunction. \fig{efgre0} demonstrates that the region of $n$ for the degenerate states with $E_n = E_0$ is very narrow, but the structure of the eigenfunctions with two poles persists for $E_n \,>\,E_0$. \begin{figure}[ht] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{En-2000-100.pdf} & \includegraphics[width=0.5\textwidth]{PhiFit-kpn-100.pdf} \\ \fig{efgre0}-a &\fig{efgre0}-b\\ \end{tabular} \caption{ The eigenvalues for Gribov's gluon propagator (see \eq{GZ3} with $m=0$ and $m_0=0$) versus $n$ (\fig{efgre0}-a). The value of $n$ at which the energy $E_n$ reaches $E_0$ can be found from the equation $\beta_n = \beta^0 = 0.85$. \fig{efgre0}-b shows the dependence of the two poles in \eq{EFE02} on $n$ at different values of $\kappa_{\max}$. Note that the minimal value of $n$ at which the two poles $\,\kappa_{p,i}$ appear is determined by the same condition $\beta_n = \beta^0 = 0.85$.} \label{efgre0} \end{figure} \subsection{Eigenfunctions for $E_n\,\,>\,\,E_0\,\,=\,T\left( \kappa = 0\right)$} For $E_n \,\,>\,\,E_0$ the eigenfunctions take the following form for $\kappa\,>\,1$: \begin{equation} \label{EFE03} \phi_n\left( \kappa\right)\,\,=\,\,\frac{\alpha_n}{\sqrt{\kappa}}\,\sin\left( \beta_n Ln(\kappa) \,+\, \varphi_n\right) \end{equation} with $\beta_n \,\propto\,n$ for both Gribov's and lattice QCD gluon propagators, as seen from \fig{bvk}-b. One can see from \fig{rootgen}-a that for $n\,\gg\,1$ the eigenvalues $E_n$ tend to the same asymptotic values for all four equations that we have studied in this paper. For $\kappa\,<\,1$ the eigenfunctions have a structure similar to that for $E_n = E_0$, i.e. they have one pole (see \fig{parwfmass}-a) for the lattice gluon propagator and two poles for Gribov's one (see \fig{efgre0}-b).
Since these eigenvalues correspond to the Pomeron intercept $\omega_{I\!\!P} \,=\, -\bar{\alpha}_S E_n\,\,<\,\,\omega_0=- \bar{\alpha}_S \,E_0 \,<\,0$, they do not contribute to the high energy behaviour of the scattering amplitude. \section{The scattering amplitude} \subsection{Green's function of the BFKL Pomeron for the Gribov-Zwanziger confinement} The Green's function of the BFKL equation in the $Y=\,\ln(1/x)$ representation takes the general form \begin{equation} \label{GF1} G\left( Y, \kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,=\,\, \sum_{n=0}^{\infty}\int^{\epsilon\,+\,i\,\infty}_{\epsilon \,-\,i\,\infty} \frac{d \omega}{ 2\,\pi\,i}\,\, \frac{1}{\omega\,-\,\omega_n}\,\, e^{\omega\,Y}\, \phi_n\left( \kappa_{\mbox{\tiny in}}\right)\, \phi_n\left( \kappa_{\mbox{\tiny fin}}\right) \,\,=\,\, \sum_{n=0}^{\infty}\,e^{-\bar{\alpha}_S E_n\,Y}\, \phi_n\left( \kappa_{\mbox{\tiny in}}\right)\, \phi_n\left( \kappa_{\mbox{\tiny fin}}\right) \end{equation} At high energies the main contribution stems from the minimal energy. However, we cannot restrict ourselves to calculating only one term in \eq{GF1}.
To demonstrate this, we use \eq{EF4} for the approximate eigenfunctions, which can be written in the following form: \begin{equation} \label{GF2} \phi_n^{\mbox{\tiny approx}} \,\,=\,\,\Phi_n\left( \kappa\right) \,\sin\left( a_\beta\,(n\,+1)\, Ln(\kappa)\,+\,\varphi_n\right) \end{equation} where \begin{equation} \label{GF3} \Phi_n\left( \kappa\right)\,\,\,=\,\,\,\frac{\alpha_n\,(\kappa+m)}{\sqrt{(\kappa\,+\,a_n)^3}} \end{equation} We calculate the eigenvalues of \eq{EVN0} in the diffusion approximation, in which: \begin{equation} \label{EVND} \omega\left( n\right) \,\,=\,\,-\bar{\alpha}_S\,E_n \,\,=\,\,\omega_{\mbox{\tiny BFKL}}\,\,-\,\,D\,a_\beta^2\,n^2\,\,+\,\,{\cal O}\left( n^3\right) \,\,=\,\,\omega_{\mbox{\tiny BFKL}}\,\,-\,\,D\,\beta^2 \end{equation} where $\omega_{\mbox{\tiny BFKL}}\,=\,4\,\ln2\, \bar{\alpha}_S$; $ D\,=\,14\, \zeta(3) \,\bar{\alpha}_S$. Therefore, in this approximation the Green's function takes the form \begin{equation} \label{GF4} G\left( Y, \kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,=\,\,e^{\omega_{\mbox{\tiny BFKL}}\,Y} \,\sum_{n=0}^{\infty} \phi_n\left(\kappa_{\mbox{\tiny fin}}\right)\, \phi_n\left(\kappa_{\mbox{\tiny in}}\right) \,e^{-D\,Y\,a_\beta^2\,n^2} \to\,\,e^{\omega_{\mbox{\tiny BFKL}}\,Y}\,\int^{\infty}_0 \,d\beta \,\phi\left(\kappa_{\mbox{\tiny fin}},\beta\right)\, \phi\left(\kappa_{\mbox{\tiny in}},\beta\right) \,e^{ - D\,Y\,\beta^2} \end{equation} Taking the integral over $\beta$ in \eq{GF4}, we obtain the following Green's function at large values of $Y$: \begin{equation} \label{GF5} G\left( Y, \kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,=\Phi_0(\kappa_{\mbox{\tiny fin}}) \Phi_0(\kappa_{\mbox{\tiny in}}) \,\frac{1}{2}\,e^{\omega_{\mbox{\tiny BFKL}}Y}\,\sqrt{\frac{\pi}{D\,Y}} \Bigg\{ e^{-\frac{\left( Ln\left( \kappa_{\mbox{\tiny fin}}\right) \,-\,Ln\left( \kappa_{\mbox{\tiny in}}\right)\Rb^2}{4\,D\,a_\beta^2\,Y}} \,-\,e^{-\frac{\left( Ln\left( \kappa_{\mbox{\tiny fin}}\right)
\,+\,Ln\left( \kappa_{\mbox{\tiny in}}\right)\,+\,2\,a_\phi\right)^2}{4\,D\,a_\beta^2\,Y}} \Bigg\} \end{equation} One can see that at large $Y$, $G\left( Y,\kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,\propto\,\,\left( D \,Y\right)^{-3/2}\,e^{\omega_{\mbox{\tiny BFKL}}\,Y}$, which should be compared with the massless BFKL case, for which $G\left( Y, \kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,\propto\,\,\left( D \,Y\right)^{-1/2}\,e^{\omega_{\mbox{\tiny BFKL}}\,Y}$. These estimates show that we need to sum the contributions of the eigenvalues in the vicinity of $n=0$. The source of such a contribution can be seen from the first two terms of the sum over $n$ in \eq{GF1}, which can be re-written as follows: \begin{equation} G\left( Y, \kappa_{\mbox{\tiny fin}} | 0, \kappa_{\mbox{\tiny in}}\right) \,\,\propto\,\, \Phi_0(\kappa_{\mbox{\tiny fin}}) \Phi_0(\kappa_{\mbox{\tiny in}}) e^{\omega_{\mbox{\tiny BFKL}}\,Y}\Bigg( 1\,+\,\frac{\Phi_1(\kappa_{\mbox{\tiny fin}})\,\Phi_1(\kappa_{\mbox{\tiny in}})}{\Phi_0(\kappa_{\mbox{\tiny fin}})\,\Phi_0(\kappa_{\mbox{\tiny in}})} \,e^{\Delta \omega_1 \,Y} \Bigg) \end{equation} where $\Delta\omega_1\,=\,-\bar{\alpha}_S\left( E_0 - E_1\right)$. For $\kappa_{\rm max}\to\infty$, $\Delta\omega_1\to0$; however, the product $\Delta\omega_1\,Y$ at large $\kappa_{\max}$ and $Y$ remains indeterminate. Hence, we have to perform numerical estimates of the sum in \eq{GF1} to determine the answer. We will discuss such estimates below. At the moment we wish to emphasize that, since in the vicinity of $n=0$ the spectrum of the master equation coincides with that of the QCD BFKL equation, the influence of the Gribov-Zwanziger confinement on the asymptotic behaviour of the scattering amplitude is rather small. Indeed, as we have seen from the above estimates, we obtain an extra suppression of the scattering amplitude of the order of $1/(D\, Y)$ in our case.
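The step from \eq{GF4} to \eq{GF5} is a half-line Gaussian integral. Under the simplifying assumption that the phase is linear in $\beta$, $\varphi = a_\phi\,\beta$, the product of sines reduces to the two Gaussians, which can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad

# Identity behind the GF4 -> GF5 step (phase assumed linear in beta):
#   int_0^inf dbeta sin(beta*(u+a_phi)) sin(beta*(v+a_phi)) exp(-D*Y*beta^2)
#     = (1/4) sqrt(pi/(D*Y)) * [exp(-(u-v)^2/(4 D Y))
#                               - exp(-(u+v+2 a_phi)^2/(4 D Y))]
# with u = Ln(kappa_fin), v = Ln(kappa_in).
def lhs(u, v, a_phi, DY):
    f = lambda b: (np.sin(b * (u + a_phi)) * np.sin(b * (v + a_phi))
                   * np.exp(-DY * b * b))
    val, _ = quad(f, 0.0, np.inf)
    return val

def rhs(u, v, a_phi, DY):
    pref = 0.25 * np.sqrt(np.pi / DY)
    return pref * (np.exp(-(u - v) ** 2 / (4 * DY))
                   - np.exp(-(u + v + 2 * a_phi) ** 2 / (4 * DY)))
```

The second (``image'') Gaussian is the origin of the extra $1/(D\,Y)$ suppression relative to the massless BFKL case discussed above.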
\subsection{Transverse momentum distribution in the BFKL Pomeron for the Gribov-Zwanziger confinement} Using \eq{GF1} we can find the scattering amplitude, which is equal to \begin{equation} \label{TMD1} N\left( Y; \kappa \right)\,\,=\,\,\,\sum_{n=0}^{\infty}\,c_n\,e^{-\bar{\alpha}_S E_n\,Y} \, \phi_n(\kappa), \quad\mbox{with}\quad c_n \,=\, \int d \kappa_{\mbox{\tiny in}} \phi_n\left( \kappa_{\mbox{\tiny in}}\right)\, N\left( Y=0,\kappa_{\mbox{\tiny in}}\right) \end{equation} where $N\left( Y=0,\kappa_{\mbox{\tiny in}}\right)$ is the initial condition for the scattering amplitude at $Y=0$. In \fig{contour} $N\left( Y=0,\kappa_{\mbox{\tiny in}}\right)$ is taken to be equal to $1/(\kappa_{\mbox{\tiny in}}+1)^2$. We plot in \fig{contour} the contours on which the function $\kappa\,N\left( Y,\kappa\right)$ (see \eq{TMD1}) is constant. In QCD the transverse momentum distribution depends on $|\ln\kappa\,|$, and the QCD evolution results in the increase of $|\ln\kappa\,|$ with $Y$. Such an increase leads to two possible branches (depending on the sign of $\ln \kappa$) with increasing and decreasing average transverse momentum. Such a behaviour is seen in \fig{contour}. For our master equation one can see that the confinement cuts off the small transverse momenta: the average $\kappa_T$ is larger than its value in the initial conditions (for which we take $\kappa_{\mbox{\tiny in}} = 1$) and grows with $Y$. Therefore, introducing the Gribov-Zwanziger confinement in the framework of the BFKL equation, we obtain a transverse momentum distribution which is determined by the behaviour of the scattering amplitude at large transverse momenta (at short distances), where we can trust the perturbative QCD approach. 
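The bookkeeping behind \eq{TMD1} -- project the initial condition onto the eigenfunctions, evolve each mode with its own exponential factor, and resum -- can be mimicked on a toy discretized kernel. The symmetric matrix below is an arbitrary stand-in (not the BFKL or master-equation kernel), and its eigenvalues $\omega_n$ play the role of $-\bar{\alpha}_S E_n$; only the structure of the spectral expansion is being illustrated:

```python
import numpy as np

# Toy symmetric "kernel" on 60 grid points -- an arbitrary stand-in, not the real kernel.
rng = np.random.default_rng(0)
M = rng.normal(size=(60, 60))
T = (M + M.T) / 2 - 3.0 * np.eye(60)

# Spectral data: T phi_n = omega_n phi_n (columns of phi are orthonormal).
omega, phi = np.linalg.eigh(T)

kappa = np.linspace(0.0, 5.0, 60)
N0 = 1.0 / (kappa + 1.0) ** 2            # initial condition, as in the figure discussion
c = phi.T @ N0                           # projections c_n = <phi_n, N(Y=0)>

Y, steps = 0.5, 500
N_modes = phi @ (np.exp(omega * Y) * c)  # mode sum: sum_n c_n e^{omega_n Y} phi_n(kappa)

# Cross-check: integrate dN/dY = T N directly with classical RK4.
N = N0.copy()
h = Y / steps
for _ in range(steps):
    k1 = T @ N
    k2 = T @ (N + h / 2 * k1)
    k3 = T @ (N + h / 2 * k2)
    k4 = T @ (N + h * k3)
    N += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

assert np.allclose(N_modes, N, rtol=1e-6)
```

In the paper the analogous resummation is of course carried out with the true eigenfunctions $\phi_n(\kappa)$ of the master equation.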
\begin{figure} \begin{center} \includegraphics[scale=1]{Ktrans.pdf} \end{center} \caption{\label{contour} Contours of constant $\kappa N\left( Y, \kappa\right)$ (dashed lines) for the QCD BFKL equation and for the BFKL equation with Gribov-Zwanziger confinement: the blue curve is for $m=0$ and the red one is for $m>0$.} \end{figure} \section{Conclusions} In this paper we solved the new evolution equation for the high energy scattering amplitude that stems from the Gribov-Zwanziger approach to the confinement of quarks and gluons (see \eq{BFKLF1}). We find the results of this solution quite surprising and instructive for the future development of high energy physics. First, the energy dependence of the scattering amplitude turns out to be the same as for the QCD BFKL evolution. In particular, the eigenvalues of the new equation, which exceed $\omega_0 = -\bar{\alpha}_S\,E_0 = -\bar{\alpha}_S\,T\left(\kappa=0\right)$, coincide with those of the QCD BFKL equation. Second, the spectrum of the new equation does not depend on the details of the Gribov-Zwanziger approach and coincides with the set of eigenvalues of the model developed in Ref.\cite{LLS}: a non-abelian gauge theory with the Higgs mechanism for mass generation. This model bears no relation to QCD except for having the same colour structure. These features support the ideas that emerge from the analytical analysis of the equation: the main influence of the confinement is to lift the double degeneracy of the QCD BFKL equation, which shows up in the independence of the QCD BFKL spectrum of the sign of $\nu$ (see \eq{CHI}). Third, all eigenfunctions coincide with the eigenfunctions of the QCD BFKL equation at large transverse momenta $\kappa\,\geq\,1$. The numerical estimates show that there exist no new eigenvalues with eigenfunctions that decrease faster than the eigenfunctions of the QCD BFKL equation at large transverse momenta. 
The eigenfunctions of the master equation with Gribov's gluon propagator tend to zero at small transverse momenta. In the coordinate representation this means that the eigenfunctions exhibit a power-like decrease at long distances, leading to a power-like decrease in the impact parameter and, therefore, to severe problems with the Froissart theorem and $s$-channel unitarity (see Refs.\cite{KW,FIIM,GOLEM}). In other words, a gluon propagator which tends to zero, as Gribov's propagator does, cannot solve the problem of the large-$b$ dependence of the scattering amplitude in the CGC approach. However, the gluon propagator in the Gribov-Zwanziger approach that stems from lattice QCD estimates and from theoretical evaluations (see Refs.\cite{HU1,CFPS,CFMPS,CDMV,DSV,HU2,HU3,AHS,DOV,GRA,FMP,DSVV,DGSVV,CLSST,Z4,Z5,LVS}) tends to a finite value at zero transverse momentum ($G\left( q \to 0 \right)\, \neq\, 0$). This results in the exponential suppression of the eigenfunctions at long distances and in the resolution of the difficulties that the CGC approach, as well as other approaches based on perturbative QCD, face at large impact parameters. For the intercept $\omega = -\bar{\alpha}_S\,T\left( \kappa=0\right)$ we have a multiple degeneracy of this eigenvalue, which is strongly correlated with the new dimensional parameter that confinement introduces into the theory. This degeneracy resembles Bose-Einstein condensation, but it does not contribute to the scattering amplitude at high energy. We calculated the momentum distributions of the scattering amplitude and found that the typical transverse momentum increases with energy and becomes independent of the confinement scales that we have introduced in our equation. 
Therefore, to our surprise, we have to conclude that the confinement of quarks and gluons, at least in the form of the Gribov-Zwanziger approach, does not influence the scattering amplitude, except for solving the long-standing theoretical problem of the large impact parameter behaviour of the scattering amplitude. This is a very optimistic message for the CGC approach, but before drawing strong conclusions we have to check the solution of the non-linear equation with the new kernel. This will be our next problem. \section*{Acknowledgements} We thank our colleagues at Tel Aviv University and UTFSM for encouraging discussions. This research was supported by ANID PIA/APOYO AFB180002 (Chile) and FONDECYT (Chile) grant 1180118.
\section{Introduction} The synchronization problem for multi-agent systems (MAS) has become a hot topic among researchers in recent years. Cooperative control of MAS is used in practical applications such as robot networks, autonomous vehicles, distributed sensor networks, and others. The objective of synchronization is to secure an asymptotic agreement on a common state or output trajectory by local interaction among agents (see \cite{bai-arcak-wen,mesbahi-egerstedt,ren-book,wu-book} and references therein). State synchronization inherently requires homogeneous networks. Most works have focused on state synchronization where each agent has access to a linear combination of its own state relative to that of the neighboring agents, which is called full-state coupling \cite{saber-murray3,saber-murray,saber-murray2,ren-atkins,ren-beard-atkins,tuna1}. A more realistic scenario, partial-state coupling (i.e.\ agents share only part of their information over the network), is studied in \cite{tuna2,li-duan-chen-huang,pogromsky-santoboni-nijmeijer,tuna3}. For heterogeneous networks it is more reasonable to consider output synchronization, since the dimensions of the states and their physical interpretation may differ. Introspective agents possess some knowledge about their own states. For heterogeneous MAS with non-introspective agents, it is well-known that one needs to regulate the outputs of the agents to an a priori given trajectory generated by a so-called exosystem (see \cite{wieland-sepulchre-allgower, grip-saberi-stoorvogel3}). Other works on synchronization of MAS with non-introspective agents can be found in \cite{grip-yang-saberi-stoorvogel-automatica,grip-saberi-stoorvogel}. On the other hand, for MAS with introspective agents, one can achieve output and regulated output synchronization. Most of the literature for heterogeneous MAS with introspective agents is based on modifying the agent dynamics via local feedback to achieve some form of homogeneity. 
There have been many results for synchronization of heterogeneous networks with introspective agents, see for instance \cite{kim-shim-seo,yang-saberi-stoorvogel-grip-journal,li-soh-xie-lewis-TAC2019,modares-lewis-kang-davoudi-TAC2018,qian-liu-feng-TAC2019,chen-auto2019}. In this paper, we propose a \textbf{scale-free} protocol design to solve the output and regulated output synchronization problems for heterogeneous MAS with introspective, right-invertible agents. Scale-free protocols are designed based on localized information exchange with neighbors and do not require any knowledge of the communication network except connectivity. The protocol design is scale-free, namely, \renewcommand\labelitemi{{\boldmath$\bullet$}} \begin{itemize} \item The design is independent of information about the communication network, such as the spectrum of the associated Laplacian matrix. That is to say, the universal dynamical protocols work for any communication network as long as it is connected. \item The dynamic protocols are designed solely based on the agent models and do not depend on the communication network or the number of agents. \item The proposed protocols achieve output and regulated output synchronization for heterogeneous MAS with any number of agents and any communication network. \end{itemize} \subsection*{Notations and definitions} Given a matrix $A\in \mathbb{R}^{m\times n}$, $A^{\mbox{\tiny T}}$ denotes its transpose. A square matrix $A$ is said to be Hurwitz stable if all its eigenvalues are in the open left half complex plane. We denote by $\diag\{A_1,\ldots, A_N \}$ a block-diagonal matrix with $A_1,\ldots,A_N$ as its diagonal elements. $A\otimes B$ denotes the Kronecker product of $A$ and $B$. $I_n$ denotes the $n$-dimensional identity matrix and $0_n$ denotes the $n\times n$ zero matrix; sometimes we drop the subscript if the dimension is clear from the context. 
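Since the proofs below repeatedly move Kronecker factors around (in terms such as $(\bar{L}\otimes BK)$), it may help to recall the mixed-product rule $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$ and its consequence that a change of basis acting only on the network factor leaves the agent factor untouched. A quick numerical sanity check on random matrices (an illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
P, Q = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))  # "network"-sized factors
R, S = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))  # "agent"-sized factors

# Mixed-product rule: (P ⊗ R)(Q ⊗ S) = (PQ) ⊗ (RS).
assert np.allclose(np.kron(P, R) @ np.kron(Q, S), np.kron(P @ Q, R @ S))

# Consequence used in the stability proofs:
# (T ⊗ I)(I ⊗ A - L ⊗ I)(T^{-1} ⊗ I) = I ⊗ A - (T L T^{-1}) ⊗ I,
# i.e. the network matrix L can be brought to Jordan form without touching A.
Aag = rng.normal(size=(2, 2))                            # stand-in agent matrix
L, T = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
Ti = np.linalg.inv(T)
I2, I4 = np.eye(2), np.eye(4)
lhs = np.kron(T, I2) @ (np.kron(I4, Aag) - np.kron(L, I2)) @ np.kron(Ti, I2)
rhs = np.kron(I4, Aag) - np.kron(T @ L @ Ti, I2)
assert np.allclose(lhs, rhs)
```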
To describe the information flow among the agents we associate a \emph{weighted graph} $\mathcal{G}$ to the communication network. The weighted graph $\mathcal{G}$ is defined by a triple $(\mathcal{V}, \mathcal{E}, \mathcal{A})$ where $\mathcal{V}=\{1,\ldots, N\}$ is a node set, $\mathcal{E}$ is a set of pairs of nodes indicating connections among nodes, and $\mathcal{A}=[a_{ij}]\in \mathbb{R}^{N\times N}$ is the weighted adjacency matrix with nonnegative elements $a_{ij}$. Each pair in $\mathcal{E}$ is called an \emph{edge}, where $a_{ij}>0$ denotes an edge $(j,i)\in \mathcal{E}$ from node $j$ to node $i$ with weight $a_{ij}$. Moreover, $a_{ij}=0$ if there is no edge from node $j$ to node $i$. We assume there are no self-loops, i.e.\ we have $a_{ii}=0$. A \emph{path} from node $i_1$ to node $i_k$ is a sequence of nodes $\{i_1,\ldots, i_k\}$ such that $(i_j, i_{j+1})\in \mathcal{E}$ for $j=1,\ldots, k-1$. A \emph{directed tree} is a subgraph (subset of nodes and edges) in which every node has exactly one parent node except for one node, called the \emph{root}, which has no parent node. A \emph{directed spanning tree} is a subgraph which is a directed tree containing all the nodes of the original graph. If a directed spanning tree exists, the root has a directed path to every other node in the tree \cite{royle-godsil}. For a weighted graph $\mathcal{G}$, the matrix $L=[\ell_{ij}]$ with \[ \ell_{ij}= \begin{system}{cl} \sum_{k=1}^{N} a_{ik}, & i=j,\\ -a_{ij}, & i\neq j, \end{system} \] is called the \emph{Laplacian matrix} associated with the graph $\mathcal{G}$. The Laplacian matrix $L$ has all its eigenvalues in the closed right half plane and at least one eigenvalue at zero associated with the right eigenvector $\textbf{1}$ \cite{royle-godsil}. 
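These Laplacian properties are easy to verify numerically. The sketch below builds the Laplacian of a hypothetical 4-node weighted digraph containing a directed spanning tree, checks that $L\textbf{1}=0$ and that all eigenvalues lie in the closed right half plane, and also checks the reduced matrix $\bar{\ell}_{ij}=\ell_{ij}-\ell_{Nj}$ that reappears in Lemma \ref{lemmaLbar} below (the graph and its weights are illustrative choices):

```python
import numpy as np

# Weighted adjacency of a hypothetical 4-node digraph with a directed spanning
# tree rooted at node 1 (a_ij > 0 encodes an edge j -> i; no self-loops).
Aw = np.array([[0.0, 0.0, 0.0, 0.0],
               [2.0, 0.0, 0.0, 0.0],    # edge 1 -> 2
               [0.0, 1.5, 0.0, 0.5],    # edges 2 -> 3 and 4 -> 3
               [1.0, 0.0, 0.0, 0.0]])   # edge 1 -> 4

L = np.diag(Aw.sum(axis=1)) - Aw        # ell_ii = sum_k a_ik,  ell_ij = -a_ij

assert np.allclose(L @ np.ones(4), 0.0)  # the all-ones vector is a right eigenvector for 0
ev = np.linalg.eigvals(L)
assert ev.real.min() > -1e-9             # all eigenvalues in the closed right half plane

# Reduced matrix used later: bar_ell_ij = ell_ij - ell_Nj, i, j = 1, ..., N-1.
# Its eigenvalues are exactly the nonzero eigenvalues of L.
Lbar = L[:-1, :-1] - L[-1, :-1]
ev_nonzero = np.sort_complex(ev[np.abs(ev) > 1e-9])
assert np.allclose(np.sort_complex(np.linalg.eigvals(Lbar)), ev_nonzero)
```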
\section{Problem Formulation} We will study a MAS consisting of $N$ non-identical linear agents: \begin{equation}\label{hete_sys} \begin{system*}{cl} \dot{x}_i&=A_ix_i+B_iu_i,\\ y_i&=C_ix_i, \end{system*} \end{equation} where $x_i\in\mathbb{R}^{n_i}$, $u_i\in\mathbb{R}^{m_i}$ and $y_i\in\mathbb{R}^p$ are the state, input, and output of agent $i$ for $i=1,\ldots, N$. The agents are introspective, meaning that each agent has access to its own local information. Specifically, each agent has access to part of its state \begin{equation}\label{local} z_i=C_i^mx_i, \end{equation} where $z_i\in \mathbb{R}^{q_i}$. The communication network provides agent $i$ with the following information, which is a linear combination of its own output relative to that of other agents: \begin{equation}\label{zeta1} \zeta_i=\sum_{j=1}^{N}a_{ij}(y_i-y_j) \end{equation} where $a_{ij}>0$ and $a_{ii}=0$. The communication topology of the network can be described by a weighted and directed graph $\mathcal{G}$ with nodes corresponding to the agents in the network and the weights of the edges given by the coefficients $a_{ij}$. In terms of the coefficients of the associated Laplacian matrix $L$, $\zeta_i$ can be rewritten as \begin{equation}\label{zeta} \zeta_i= \sum_{j=1}^{N}\ell_{ij}y_j. \end{equation} In this paper, we also introduce a localized information exchange among agents, namely, each agent $i\in\{1,\ldots,N\}$ has access to localized information, denoted by $\hat{\zeta}_i$, of the form \begin{equation}\label{etahat} \hat{\zeta}_i=\sum_{j=1}^{N}a_{ij}(\eta_i-\eta_j) \end{equation} where $\eta_i$ is a variable produced internally by agent $i$ and to be defined later. In order to explicitly state our problem formulation we need the following definition. \begin{definition}\label{def1} Let $\mathbb{G}^N$ denote the set of directed graphs of $N$ agents which contain a directed spanning tree. \end{definition} Now we formulate the problem of \textbf{scalable} output synchronization for a heterogeneous MAS. 
\begin{problem}\label{prob_sync} Consider a heterogeneous network of $N$ agents \eqref{hete_sys} with local information \eqref{local} satisfying Assumption \ref{ass2}. Let the associated network communication be given by \eqref{zeta}. Let $\mathbb{G}^N$ be the set of network graphs as defined in Definition \ref{def1}. The \textbf{scalable output synchronization problem based on localized information exchange} is to find, if possible, a linear dynamic controller for each agent $i \in\{1, \dots, N\}$, using only knowledge of the agent models, i.e. $(C_i,A_i,B_i)$, of the form: \begin{equation}\label{out_dyn} \begin{system}{cl} \dot{x}_{i,c}&=A_{i,c}x_{i,c}+B_{i,c}\zeta_i+C_{i,c}\hat{\zeta}_i+D_{i,c}z_i,\\ u_i&=E_{i,c}x_{i,c}+F_{i,c}\zeta_i+G_{i,c}\hat{\zeta}_i+H_{i,c}z_i, \end{system} \end{equation} where $\hat{\zeta}_i$ is defined in \eqref{etahat} with $\eta_i=M_{i,c}x_{i,c}$, and $x_{i,c}\in\mathbb{R}^{n_i}$, such that output synchronization \begin{equation}\label{synch_out} \lim\limits_{t\to\infty}(y_i(t)-y_j(t))=0 \end{equation} is achieved for any $N$ and any graph $\mathcal{G}\in\mathbb{G}^N$. \end{problem} Next, we consider regulated output synchronization, where the outputs of the agents converge to an a priori given trajectory $y_r$ generated by a so-called exosystem as \begin{equation}\label{exo} \begin{system*}{cl} \dot{x}_r&=A_rx_r, \quad x_r(0)=x_{r0},\\ y_r&=C_rx_r, \end{system*} \end{equation} where $x_r \in\mathbb{R}^r$ and $y_r\in\mathbb{R}^p$. We assume that a nonempty subset $\mathscr{C}$ of the agents has access to its own output relative to the output of the exosystem. In other words, each agent $i$ has access to the quantity \begin{equation} \Psi_i=\iota_i(y_i-y_r), \qquad \iota_i=\begin{system}{cl} 1, \quad i\in \mathscr{C},\\ 0, \quad i\notin \mathscr{C}. 
\end{system} \end{equation} By combining this with \eqref{zeta1}, the information exchange among agents is given by \begin{equation}\label{zetabar} \tilde{\zeta}_i=\sum_{j=1}^{N}a_{ij}(y_i-y_j)+\iota_i(y_i-y_r). \end{equation} $\tilde{\zeta}_i$, as defined above, can be rewritten in terms of the coefficients of a so-called expanded Laplacian matrix $\tilde{L}=L+\diag\{\iota_i\}=[\tilde{\ell}_{ij}]_{N \times N}$ as \begin{equation}\label{zetabar2} \tilde{\zeta}_i=\sum_{j=1}^{N}\tilde{\ell}_{ij}(y_j-y_r). \end{equation} Note that $\tilde{L}$ is not a regular Laplacian matrix associated to the graph, since the sums of its rows need not be zero. We know that all the eigenvalues of $\tilde{L}$ have positive real parts. In particular, the matrix $\tilde{L}$ is invertible. To guarantee that each agent gets the information from the exosystem, we need to make sure that there exists a path from the node set $\mathscr{C}$ to every node. Therefore, we define the following set of graphs. \begin{definition}\label{def_rootset} Given a node set $\mathscr{C}$, we denote by $\mathbb{G}_{\mathscr{C}}^N$ the set of all graphs with $N$ nodes containing the node set $\mathscr{C}$, such that every node of the network graph $\mathcal{G}\in\mathbb{G}_\mathscr{C}^N$ is a member of a directed tree which has its root contained in the node set $\mathscr{C}$. We will refer to the node set $\mathscr{C}$ as the root set. \end{definition} \begin{remark} Note that Definition \ref{def_rootset} does not necessarily require the existence of a directed spanning tree. \end{remark} Now we formulate the problem of \textbf{scalable} regulated output synchronization for a heterogeneous MAS. \begin{problem}\label{prob_reg_sync} Consider a heterogeneous network of $N$ agents \eqref{hete_sys} with local information \eqref{local} satisfying Assumption \ref{ass2} and the associated exosystem \eqref{exo} satisfying Assumption \ref{ass-exo}. 
Let a set of nodes $\mathscr{C}$ be given which defines the set $\mathbb{G}^N_\mathscr{C}$. Let the associated network communication be given by \eqref{zetabar2}. The \textbf{scalable regulated output synchronization problem based on localized information exchange} is to find, if possible, a linear dynamic controller for each agent $i \in\{1, \dots, N\}$, using only knowledge of the agent models, i.e. $(C_i, A_i, B_i)$, of the form: \begin{equation}\label{out_reg_dyn} \begin{system}{cl} \dot{x}_{i,c}&=A_{i,c}x_{i,c}+B_{i,c}\tilde{\zeta}_i+C_{i,c}\hat{\zeta}_i+D_{i,c}z_i,\\ u_i&=E_{i,c}x_{i,c}+F_{i,c}\tilde{\zeta}_i+G_{i,c}\hat{\zeta}_i+H_{i,c}z_i, \end{system} \end{equation} where $\hat{\zeta}_i$ is defined in \eqref{etahat} with $\eta_i=M_{i,c}x_{i,c}$, and $x_{i,c}\in\mathbb{R}^{n_i}$, such that regulated output synchronization \begin{equation}\label{reg_synch_out} \lim\limits_{t\to\infty}(y_i(t)-y_r(t))=0 \end{equation} is achieved for any $N$ and any graph $\mathcal{G}\in\mathbb{G}^N_\mathscr{C}$. \end{problem} In this paper, we make the following assumptions for the agents and the exosystem. \begin{assumption}\label{ass2} For agents $i \in \{1,\dots,N\}$, \begin{enumerate} \item $(C_i,A_i,B_i)$ is stabilizable, detectable and right-invertible. \item $(C_i^m,A_i)$ is detectable. \end{enumerate} \end{assumption} \begin{assumption}\label{ass-exo} For the exosystem, \begin{enumerate} \item $(C_r, A_r)$ is observable. \item All the eigenvalues of $A_r$ are on the imaginary axis. \end{enumerate} \end{assumption} \section{Scalable Output Synchronization}\label{OS} In this section, we design protocols to solve the scalable output synchronization problem as stated in Problem \ref{prob_sync}. After introducing the architecture of our protocol, we design the protocols through four steps. \subsection{{Architecture of the protocol}} Our protocol has the structure shown below in Figure \ref{Heterogeneous}. 
\begin{figure}[ht] \includegraphics[width=8.3cm, height=4.3cm]{Heterogeneous} \centering \caption{Architecture of the protocol for output synchronization }\label{Heterogeneous} \vspace*{-.3cm} \end{figure} As seen in the above figure, our design methodology consists of two major modules. \begin{enumerate} \item The first module is reshaping the dynamics of the agents to obtain the target model by designing pre-compensators, following our previous results in \cite{yang-saberi-stoorvogel-grip-journal}. \item The second module is designing collaborative protocols for the almost homogenized agents to achieve output and regulated output synchronization. \end{enumerate} \subsection{Protocol design} We design our protocols through the following four steps. \textbf{Step 1: choosing the target model} First, we choose a suitable target model $(C,A,B)$ such that the following conditions are satisfied. \begin{enumerate} \item $\rank(C)=p$ \item $(C, A, B)$ is invertible of uniform rank $n_q\ge\bar{n}_d$, and has no invariant zeros, where $\bar{n}_d$ denotes the maximal order of infinite zeros of $(C_i,A_i,B_i), i=1,\ldots,N$. \item The eigenvalues of $A$ are in the closed left half plane. \end{enumerate} \textbf{Step 2: designing pre-compensators} In this step, we design pre-compensators to reshape the agent models into almost identical agents. 
Given the chosen target model $(C,A,B)$, by utilizing the design methodology from \cite[Appendix B]{yang-saberi-stoorvogel-grip-journal}, we design a pre-compensator for each agent $i \in \{1, \dots, N\}$, of the form \begin{equation}\label{pre_con} \begin{system}{cl} \dot{\xi}_i&=A_{i,h}\xi_i+B_{i,h}z_i+E_{i,h}v_i,\\ u_i&=C_{i,h}\xi_i+D_{i,h}v_i, \end{system} \end{equation} which yields the compensated agents as \begin{equation}\label{sys_homo} \begin{system*}{cl} \dot{\bar{x}}_i&=A\bar{x}_i+B(v_i+\rho_i),\\ {y}_i&=C\bar{x}_i, \end{system*} \end{equation} where $\rho_i$ is given by \begin{equation}\label{sys-rho} \begin{system*}{cl} \dot{\omega}_i&=A_{i,s}\omega_i,\\ \rho_i&=C_{i,s}\omega_i, \end{system*} \end{equation} and $A_{i,s}$ is Hurwitz stable. Note that the compensated agents are homogenized and have the target model $(C, A, B)$. \textbf{Step 3: designing collaborative protocols for compensated agents} In this step, we design a dynamic protocol based on localized information exchange for the compensated agents \eqref{sys_homo} and \eqref{sys-rho} as \begin{equation}\label{pscp1} \begin{system}{cl} \dot{\hat{x}}_i&=A\hat{x}_i-BK\hat{\zeta}_i+H(\zeta_i-C\hat{x}_i)\\ \dot{\chi}_i&=A\chi_i+Bv_i+\hat{x}_i-\hat{\zeta}_i\\ v_i&=-K\chi_i \end{system} \end{equation} where $H$ and $K$ are matrices such that $A-HC$ and $A-BK$ are Hurwitz stable. The agents communicate $\eta_i$, which is chosen as $\eta_i=\chi_i$. Therefore, each agent has access to the following information: \begin{equation}\label{hatzeta} \hat{\zeta}_i=\sum_{j=1}^{N}a_{ij}(\chi_i-\chi_j), \end{equation} and $\zeta_i$ is defined as \eqref{zeta}. \textbf{Step 4: obtaining protocols for the agents} Finally, our protocol, which is the combination of modules $1$ and $2$, is as follows. 
\begin{equation}\label{pscp1final} \begin{system}{cl} \dot{\xi}_i&=A_{i,h}\xi_i+B_{i,h}z_i-E_{i,h}K\chi_i,\\ \dot{\hat{x}}_i&=A\hat{x}_i-BK\hat{\zeta}_i+H(\zeta_i-C\hat{x}_i)\\ \dot{\chi}_i&=A\chi_i-BK\chi_i+\hat{x}_i-\hat{\zeta}_i\\ u_i&=C_{i,h}\xi_i-D_{i,h}K\chi_i. \end{system} \end{equation} Then, we have the following theorem for output synchronization of heterogeneous MAS. \begin{theorem}\label{thm_out_syn} Consider a heterogeneous network of $N$ agents \eqref{hete_sys} with local information \eqref{local} satisfying Assumption \ref{ass2}. Let the associated network communication be given by \eqref{zeta}. Then, the scalable output synchronization problem as defined in Problem \ref{prob_sync} is solvable. In particular, the dynamic protocol \eqref{pscp1final} with localized information \eqref{hatzeta} solves the scalable output synchronization problem based on localized information exchange for any $N$ and any graph $\mathcal{G}\in\mathbb{G}^N$. \end{theorem} \begin{remark} It is interesting to note that in the case where the agents are \textbf{homogeneous} and \textbf{non-introspective}, one does not need to design pre-compensators and can achieve scalable output synchronization utilizing the collaborative protocols proposed in \textit{step $3$} for homogeneous networks, as long as the agents' eigenvalues are in the closed left half plane. \end{remark} To obtain this result, we recall the following lemma for the Laplacian matrix $L$. \begin{lemma}[\cite{zhang-saberi-stoorvogel-delay}]\label{LbarL}\label{lemmaLbar} Let a Laplacian matrix $L\in \mathbb{R}^{N\times N}$ be given associated with a graph that contains a directed spanning tree. We define $\bar{L}\in \mathbb{R}^{(N-1)\times (N-1)}$ as the matrix $\bar{L}=[\bar{\ell}_{ij}]$ with \[ \bar{\ell}_{ij} = \ell_{ij}-\ell_{Nj}. \] Then the eigenvalues of $\bar{L}$ are equal to the nonzero eigenvalues of $L$. 
\end{lemma} \begin{proof} We have: \[ \bar{L} = \begin{pmatrix} I & -\textbf{1} \end{pmatrix} L \begin{pmatrix} I \\ 0 \end{pmatrix} \] Assume that $\lambda$ is a nonzero eigenvalue of $L$ with eigenvector $x$. Then \[ \bar{x} = \begin{pmatrix} I & -\textbf{1} \end{pmatrix} x, \] where $\textbf{1}$ is the vector with all entries equal to $1$, satisfies \[ \begin{pmatrix} I & -\textbf{1} \end{pmatrix} Lx = \begin{pmatrix} I & -\textbf{1} \end{pmatrix} \lambda x =\lambda \bar{x} \] and since $L\textbf{1}=0$ we find that \[ \bar{L} \bar{x} = \begin{pmatrix} I & -\textbf{1} \end{pmatrix} Lx =\lambda \bar{x}. \] This shows that $\lambda$ is an eigenvalue of $\bar{L}$ with eigenvector $\bar{x}$, provided $\bar{x}\neq0$. It is easily seen that $\bar{x}=0$ if and only if $\lambda=0$. Conversely, if $\bar{x}$ is an eigenvector of $\bar{L}$ with eigenvalue $\lambda$, then it is easily verified that \[ x = L \begin{pmatrix} I \\ 0 \end{pmatrix} \bar{x} \] is an eigenvector of $L$ with eigenvalue $\lambda$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_out_syn}] Let $\bar{x}_i^o=\bar{x}_i-\bar{x}_N$, $y_i^o=y_i-y_N$, $\hat{x}_i^o=\hat{x}_i-\hat{x}_N$, and $\chi_i^o=\chi_i-\chi_N$. Then, we have \[ \begin{system*}{ll} \dot{\bar{x}}_i^o&=A\bar{x}_i^o+B(v_i-v_N+\rho_i-\rho_N),\\ {y}_i^o&=C\bar{x}_i^o,\\ \bar{\zeta}_i&=\zeta_i-\zeta_N=\sum_{j=1}^{N-1}\bar{\ell}_{ij}{y}_j^o,\\ \dot{\hat{x}}_i^o&=A\hat{x}_i^o-BK(\hat{\zeta}_i-\hat{\zeta}_N)+H(\bar{\zeta}_i-C\hat{x}_i^o)\\ \dot{\chi}_i^o&=A\chi_i^o+B(v_i-v_N)+\hat{x}_i^o-\sum_{j=1}^{N-1}\bar{\ell}_{ij}{\chi}_j^o\\ v_i-v_N&=-K\chi_i^o \end{system*} \] where $\bar{\ell}_{ij}=\ell_{ij}-\ell_{Nj}$ for $i,j=1,\cdots,N-1$. According to Lemma \ref{lemmaLbar}, the eigenvalues of $\bar{L}=[\bar{\ell}_{ij}]_{(N-1)\times(N-1)}$ are equal to the nonzero eigenvalues of $L$. 
We define \begin{equation*} \bar{x}=\begin{pmatrix} \bar{x}_1^o\\ \vdots\\ \bar{x}_{N-1}^o \end{pmatrix},\hat{x}=\begin{pmatrix} \hat{x}_1^o\\ \vdots\\ \hat{x}_{N-1}^o \end{pmatrix},\chi=\begin{pmatrix} \chi_1^o\\ \vdots\\ \chi_{N-1}^o \end{pmatrix},\rho=\begin{pmatrix} \rho_1\\ \vdots\\ \rho_N\end{pmatrix},\omega=\begin{pmatrix} \omega_1\\ \vdots\\ \omega_N\end{pmatrix} \end{equation*} then, we have the following closed-loop system: \begin{equation} \begin{system}{cl} \dot{\bar{x}}&=(I\otimes A)\bar{x}-(I\otimes BK )\chi+(\Pi\otimes B)\rho\\ \dot{\hat{x}}&=(I\otimes (A-HC))\hat{x}-(\bar{L}\otimes BK )\chi+(\bar{L}\otimes HC)\bar{x}\\ \dot{\chi}&=(I\otimes(A-BK )-\bar{L}\otimes I)\chi+\hat{x} \end{system} \end{equation} with $\Pi=\begin{pmatrix} I&-\mathbf{1} \end{pmatrix}$. By defining $e=\bar{x}-\chi$ and $\bar{e}=(\bar{L}\otimes I)\bar{x}-\hat{x}$, we can obtain \begin{equation}\label{x-e} \begin{system*}{cl} \dot{\bar{x}}&=(I\otimes (A-BK ))\bar{x}+(I\otimes BK )e+(\Pi\otimes B) C_s\omega\\ \dot{e}&=(I\otimes A-\bar{L}\otimes I)e+\bar{e}+(\Pi\otimes B) C_s\omega\\ \dot{\bar{e}}&=(I\otimes(A-HC))\bar{e}+(\bar{L}\Pi\otimes B) C_s\omega \end{system*} \end{equation} where $C_s=\diag\{C_{i,s}\}$ for $i=1,\hdots ,N$. We obtain the synchronization result if $\bar{x}\to 0$ as $t \to \infty$. By combining \eqref{sys-rho} and \eqref{x-e}, we have \begin{multline}\label{bar_x} \begin{pmatrix} \dot{\bar{x}}\\\dot{e}\\\dot{\bar{e}}\\\dot{\omega} \end{pmatrix}=\left(\begin{array}{cc} I\otimes (A-BK )&I\otimes BK \\ 0&I_{N-1}\otimes A-\bar{L}\otimes I\\ 0&0\\ 0&0 \end{array}\right. \\ \left.\begin{array}{cc} 0&(\Pi\otimes B) C_s\\ I&(\Pi\otimes B) C_s\\ I\otimes(A-HC)&(\bar{L}\Pi\otimes B) C_s\\ 0&A_s \end{array}\right) \begin{pmatrix} \bar{x}\\e\\\bar{e}\\\omega \end{pmatrix} \end{multline} where $A_s=\diag\{A_{i,s}\}$ for $i=1,\hdots ,N$. 
Since the eigenvalues $\lambda_1,\ldots, \lambda_{N-1}$ of $\bar{L}$ have positive real parts, we have \begin{equation}\label{boundapl} (T\otimes I)(I\otimes A-\bar{L}\otimes I)(T^{-1}\otimes I)=I\otimes A-\bar{\Lambda}\otimes I \end{equation} for a non-singular transformation matrix $T$, where the right-hand side of \eqref{boundapl} is in upper triangular Jordan form with $A-\lambda_i I$ for $i=1,\cdots,N-1$ on the diagonal. Since all eigenvalues of $A$ are in the closed left half plane, $A-\lambda_i I$ is Hurwitz stable. Therefore, all eigenvalues of $I\otimes A-\bar{L}\otimes I$ have negative real parts. Moreover, $A_s=\diag\{A_{i,s}\}$ and $A-HC$ are Hurwitz stable; hence, based on the upper triangular structure of \eqref{bar_x}, we only need to prove the stability of $\dot{\bar{x}}=(I\otimes (A-BK ))\bar{x}$. Since $A-BK$ is Hurwitz stable, $\bar{x}$ is asymptotically stable, i.e.\ $\bar{x}_i^o\to 0$ as $t \to \infty$. This implies $y_i^o=y_i-y_N=C\bar{x}_i^o\to 0$ as $t \to \infty$, which proves the result. \end{proof} \section{Scalable Regulated Output Synchronization} \subsection{Architecture of the protocol} The protocol for regulated output synchronization has two main modules, as shown in Figure \ref{Heterogeneous_reg}. \begin{figure}[h] \includegraphics[width=8.3cm, height=4.3cm]{Heterogeneous_reg} \centering \caption{Architecture of the protocol for regulated output synchronization}\label{Heterogeneous_reg} \vspace*{-.4cm} \end{figure} \subsection{Protocol design} As in the case of scalable output synchronization, our design procedure for solving scalable regulated output synchronization consists of four steps. \textbf{Step 1: remodeling the exosystem} First, we remodel the exosystem to arrive at a suitable choice for the target model $(\check{C}_r,\check{A}_r,\check{B}_r)$, following the design procedure in \cite{yang-saberi-stoorvogel-grip-journal} summarized in the following lemma. 
\begin{lemma}[\cite{yang-saberi-stoorvogel-grip-journal}]\label{lem-exo} There exists another exosystem given by: \begin{equation}\label{exo-2} \begin{system*}{cl} \dot{\check{x}}_r&=\check{A}_r\check{x}_r, \quad \check{x}_r(0)=\check{x}_{r0}\\ y_r&=\check{C}_r\check{x}_r, \end{system*} \end{equation} such that for all $x_{r0} \in \mathbb{R}^r$, there exists $\check{x}_{r0}\in \mathbb{R}^{\check{r}}$ for which \eqref{exo-2} generates exactly the same output $y_r$ as the original exosystem \eqref{exo}. Furthermore, we can find a matrix $\check{B}_r$ such that the triple $(\check{C}_r,\check{A}_r,\check{B}_r)$ is invertible, of uniform rank $n_q$, and has no invariant zeros, where $n_q$ is an integer greater than or equal to the maximal order of infinite zeros of $(C_i,A_i,B_i), i\in \{1,\ldots,N\}$, and all the observability indices of $(C_r, A_r)$. Note that the eigenvalues of $\check{A}_r$ consist of all eigenvalues of $A_r$ and additional zero eigenvalues. \end{lemma} \textbf{Step 2: designing pre-compensators} Next, given the target model $(\check{C}_r,\check{A}_r,\check{B}_r)$, we design pre-compensators to achieve almost identical models as in \textit{step $2$} of Section \ref{OS}. \textbf{Step 3: designing collaborative protocols for the compensated agents} Collaborative protocols based on localized information exchange are designed for the compensated agents $i=1,\hdots, N$ as \begin{equation}\label{pscp2} \begin{system}{cl} \dot{\hat{x}}_i&=\check{A}_r\hat{x}_i-\check{B}_rK\hat{\zeta}_i+H(\tilde{\zeta}_i-\check{C}_r\hat{x}_i)+\iota_i \check{B}_r v_i,\\ \dot{\chi}_i&=\check{A}_r\chi_i+\check{B}_rv_i+\hat{x}_i-\hat{\zeta}_i-\iota_i\chi_i,\\ v_i&=-K\chi_i, \end{system} \end{equation} where $H$ and $K$ are design matrices such that $\check{A}_r-H\check{C}_r$ and $\check{A}_r-\check{B}_rK$ are Hurwitz stable. The exchanged information $\hat{\zeta}_i$ is defined in \eqref{hatzeta} and $\tilde{\zeta}_i$ is defined in \eqref{zetabar2}. 
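The behaviour of the collaborative layer \eqref{pscp2} can be checked on a toy homogenized network. The sketch below assumes the homogenization step has already been carried out exactly (the perturbation $\rho_i$ of \eqref{sys_homo} is set to zero), takes a harmonic-oscillator target model $(\check{C}_r,\check{A}_r,\check{B}_r)$, three agents on a directed path with only agent $1$ in the root set $\mathscr{C}$, gains $K$, $H$ placing the relevant eigenvalues at $-1$, and forward-Euler integration; all numerical choices are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative target model: harmonic oscillator (eigenvalues ±i), which is
# invertible, of uniform rank 2, and has no invariant zeros.
Ar = np.array([[0.0, 1.0], [-1.0, 0.0]])
Br = np.array([[0.0], [1.0]])
Cr = np.array([[1.0, 0.0]])
K = np.array([[0.0, 2.0]])     # Ar - Br K has both eigenvalues at -1
H = np.array([[2.0], [0.0]])   # Ar - H Cr has both eigenvalues at -1

# Directed path 1 -> 2 -> 3; only agent 1 (the root set) measures y_i - y_r.
Aw = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
iota = np.array([1.0, 0.0, 0.0])
L = np.diag(Aw.sum(axis=1)) - Aw
Ltil = L + np.diag(iota)       # expanded Laplacian (here all eigenvalues equal 1)

dt = 0.002
xr = np.array([1.0, 0.0])                                # exosystem: y_r = cos t
xbar = np.array([[1.5, 0.5], [-1.0, 1.0], [3.0, -2.0]])  # homogenized agents, rho_i = 0
xhat = np.zeros((3, 2))
chi = np.zeros((3, 2))

err0 = np.abs(xbar @ Cr.T - Cr @ xr).max()               # initial tracking error

for _ in range(20000):                                   # T = 40 time units, forward Euler
    v = -chi @ K.T                                       # v_i = -K chi_i
    ztil = Ltil @ (xbar @ Cr.T - Cr @ xr)                # \tilde\zeta_i
    zhat = L @ chi                                       # \hat\zeta_i
    dxr = Ar @ xr
    dxbar = xbar @ Ar.T + v @ Br.T
    dxhat = (xhat @ Ar.T - zhat @ (Br @ K).T
             + (ztil - xhat @ Cr.T) @ H.T + iota[:, None] * (v @ Br.T))
    dchi = chi @ Ar.T + v @ Br.T + xhat - zhat - iota[:, None] * chi
    xr, xbar = xr + dt * dxr, xbar + dt * dxbar
    xhat, chi = xhat + dt * dxhat, chi + dt * dchi

err = np.abs(xbar @ Cr.T - Cr @ xr).max()                # final tracking error
assert err0 > 1.0 and err < 1e-6                         # all outputs converged to y_r
```

Note that the protocol uses only $\tilde{\zeta}_i$, $\hat{\zeta}_i$ and the agent-level gains; nothing in the controller depends on the number of agents or on the spectrum of $\tilde{L}$, which is the scale-free property claimed above.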
\textbf{Step 4: obtaining the protocols} The final protocol, which is the combination of modules $1$ and $2$, is \begin{equation}\label{pscp2final} \begin{system}{cl} \dot{\xi}_i&=A_{i,h}\xi_i+B_{i,h}z_i-E_{i,h}K\chi_i,\\ \dot{\hat{x}}_i&=\check{A}_r\hat{x}_i-\check{B}_rK\hat{\zeta}_i+H(\tilde{\zeta}_i-\check{C}_r\hat{x}_i)-\iota_i \check{B}_r K\chi_i,\\ \dot{\chi}_i&=\check{A}_r\chi_i-\check{B}_rK\chi_i+\hat{x}_i-\hat{\zeta}_i-\iota_i\chi_i,\\ u_i&=C_{i,h}\xi_i-D_{i,h}K\chi_i. \end{system} \end{equation} Then, we have the following theorem for scalable regulated output synchronization of heterogeneous MAS. \begin{theorem}\label{thm_reg_out_syn} Consider a heterogeneous network of $N$ agents \eqref{hete_sys} satisfying Assumption \ref{ass2} with local information \eqref{local} and the associated exosystem \eqref{exo} satisfying Assumption \ref{ass-exo}. Then, the scalable regulated output synchronization problem as defined in Problem \ref{prob_reg_sync} is solvable. In particular, the dynamic protocol \eqref{pscp2final} solves the scalable regulated output synchronization problem based on localized information exchange for any $N$ and any graph $\mathscr{G}\in\mathbb{G}^N_\mathscr{C}$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm_reg_out_syn}] Similar to Theorem \ref{thm_out_syn}, we design a pre-compensator \eqref{pre_con} for each agent $i \in \{1,..., N\}$ to obtain our target model as \begin{equation}\label{sys_reg_homo} \begin{system*}{cl} \dot{\bar{x}}_i&=\check{A}_r\bar{x}_i+\check{B}_r(v_i+\rho_i),\\ {y}_i&=\check{C}_r\bar{x}_i, \end{system*} \end{equation} where $\rho_i$ is given by \eqref{sys-rho}. 
Let $\tilde{x}_i=\bar{x}_i-\check{x}_r$ and define \begin{equation*} \tilde{x}=\begin{pmatrix} \tilde{x}_1\\ \vdots\\ \tilde{x}_N \end{pmatrix},\hat{x}=\begin{pmatrix} \hat{x}_1\\ \vdots\\ \hat{x}_N \end{pmatrix},\chi=\begin{pmatrix} \chi_1\\ \vdots\\ \chi_N \end{pmatrix},\rho=\begin{pmatrix} \rho_1\\ \vdots\\ \rho_N\end{pmatrix},\omega=\begin{pmatrix} \omega_1\\ \vdots\\ \omega_N\end{pmatrix} \end{equation*} Then, we obtain the following closed-loop system \begin{equation} \begin{system}{cl} \dot{\tilde{x}}&=(I\otimes \check{A}_r)\tilde{x}-(I\otimes \check{B}_rK)\chi+(I\otimes \check{B}_r)\rho\\ \dot{\hat{x}}&=(I\otimes (\check{A}_r-H\check{C}_r))\hat{x}-(\tilde{L}\otimes \check{B}_rK)\chi+(\tilde{L}\otimes H\check{C}_r)\tilde{x}\\ \dot{\chi}&=(I\otimes(\check{A}_r-\check{B}_rK)-\tilde{L}\otimes I)\chi+\hat{x} \end{system} \end{equation} By defining $e=\tilde{x}-\chi$ and $\bar{e}=(\tilde{L}\otimes I)\tilde{x}-\hat{x}$, we obtain \begin{equation}\label{newsystem3} \begin{system*}{cl} \dot{\tilde{x}}&=(I\otimes (\check{A}_r-\check{B}_rK ))\tilde{x}+(I\otimes \check{B}_rK )e+(I\otimes \check{B}_r)C_s\omega\\ \dot{e}&=(I\otimes \check{A}_r-\tilde{L}\otimes I)e+\bar{e}+(I\otimes \check{B}_r)C_s\omega\\ \dot{\bar{e}}&=(I\otimes(\check{A}_r-H\check{C}_r))\bar{e}+(\tilde{L}\otimes \check{B}_r)C_s\omega \end{system*} \end{equation} Similar to Theorem \ref{thm_out_syn}, since all eigenvalues of $\tilde{L}$ have positive real part, we obtain that $e$ and $\bar{e}$ have asymptotically stable dynamics. Therefore, we just need to prove the stability of \begin{equation}\label{statefeedback2} \dot{\tilde{x}}=(I\otimes (\check{A}_r-\check{B}_rK ))\tilde{x} \end{equation} Thus, according to the result of Theorem \ref{thm_out_syn}, we obtain the asymptotic stability of \eqref{statefeedback2}, i.e., $\lim_{t\to \infty}\tilde{x}_i = 0$. This implies that $\bar{x}_i-\check{x}_r\to0$, which proves the result. 
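For completeness, the $e$-dynamics in \eqref{newsystem3} used above follow by direct substitution (writing $\rho=C_s\omega$ as in \eqref{sys-rho} and $\chi=\tilde{x}-e$):
\begin{align*}
\dot{e}&=\dot{\tilde{x}}-\dot{\chi}\\
&=(I\otimes \check{A}_r)\tilde{x}-(I\otimes \check{B}_rK)\chi+(I\otimes \check{B}_r)C_s\omega-(I\otimes(\check{A}_r-\check{B}_rK))\chi+(\tilde{L}\otimes I)\chi-\hat{x}\\
&=(I\otimes \check{A}_r)e+(\tilde{L}\otimes I)(\tilde{x}-e)-\hat{x}+(I\otimes \check{B}_r)C_s\omega\\
&=(I\otimes \check{A}_r-\tilde{L}\otimes I)e+\bar{e}+(I\otimes \check{B}_r)C_s\omega.
\end{align*}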
\end{proof} \section{Simulation Results} In this section, we will illustrate the effectiveness of our protocols with a numerical example for output synchronization of heterogeneous MAS with partial-state coupling. We show that our protocol design \eqref{pscp1final} is scale-free and works for any graph with any number of agents. \begin{figure}[t] \includegraphics[width=4cm, height=3.5cm]{Graph_1} \centering \caption{Communication network topology for case $1$}\label{Graph_1} \end{figure} \begin{figure}[t] \includegraphics[width=4cm, height=3.7cm]{Graph_2} \centering \caption{Communication network topology for case $2$}\label{Graph_2} \end{figure} \begin{figure}[t!] \includegraphics[width=6cm, height=3.5cm]{Graph_3} \centering \caption{Communication network topology for case $3$}\label{Graph_3} \end{figure} Consider the agent models \eqref{hete_sys} given by: \begin{equation*} \begin{system*}{cl} A_1&=\begin{pmatrix} 0&1&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0 \end{pmatrix},\quad B_1=\begin{pmatrix} 0&1\\0&0\\1&0\\0&1 \end{pmatrix},\\ C_1&=\begin{pmatrix} 1&0&0&0 \end{pmatrix},\quad C^m_1=I\\ A_2&=\begin{pmatrix} 0&1&0\\0&0&1\\0&0&0 \end{pmatrix},\quad B_2=\begin{pmatrix} 0\\0\\1 \end{pmatrix},\\ C_2&=\begin{pmatrix} 1&0&0 \end{pmatrix},\quad C^m_2=I, \end{system*} \end{equation*} and for $i=3,4$ \begin{equation*} \begin{system*}{cl} A_i&=\begin{pmatrix} -1&0&0&-1&0\\0&0&1&1&0\\0&1&-1&1&0\\0&0&0&1&1\\-1&1&0&1&1 \end{pmatrix},\quad B_i=\begin{pmatrix} 0&0\\0&0\\0&1\\0&0\\1&0 \end{pmatrix},\\ C_i&=\begin{pmatrix} 0&0&0&1&0 \end{pmatrix},\quad C^m_i=I, \end{system*} \end{equation*} and \begin{equation*} \begin{system*}{cl} A_5&=\begin{pmatrix} 0&1&0\\0&0&1\\1&1&0 \end{pmatrix},\quad B_5=\begin{pmatrix} 0\\0\\1 \end{pmatrix},\\ C_5&=\begin{pmatrix} 1&0&0 \end{pmatrix},\quad C^m_5=I. \end{system*} \end{equation*} Note that $\bar{n}_d=3$, which is the degree of infinite zeros of $(C_2,A_2,B_2)$. We choose $n_q=3$ and matrices $A,B,C$ as follows. 
\begin{equation*} \begin{system*}{cl} A&=\begin{pmatrix} 0&1&0\\0&0&1\\0&-1&0 \end{pmatrix},\quad B=\begin{pmatrix} 0\\0\\1 \end{pmatrix}, \quad C=\begin{pmatrix} 1&0&0 \end{pmatrix}\\ \end{system*} \end{equation*} and $K$ and $H$ as: \[ K=\begin{pmatrix} 30&30&10 \end{pmatrix}, \quad H=\begin{pmatrix} 6\\10\\0 \end{pmatrix} \] We consider three different heterogeneous MAS with different numbers of agents and different communication topologies to show that the designed protocols are independent of the communication networks and the number of agents $N$.\\ \begin{itemize} \item \emph{Case $1$:} Consider a MAS with $4$ agents with agent models $(C_i, A_i, B_i)$ for $i \in \{1,\hdots,4\}$, and directed communication topology shown in Figure \ref{Graph_1}.\\ \item \emph{Case $2$:} In this case, we consider a MAS with $3$ agents with agent models $(C_i, A_i, B_i)$ for $i \in \{1,\hdots,3\}$ and directed communication topology shown in Figure \ref{Graph_2}. \\ \item \emph{Case $3$:} Finally, we consider a MAS with $5$ agents with agent models $(C_i, A_i, B_i)$ for $i \in \{1,\hdots,5\}$ and directed communication topology shown in Figure \ref{Graph_3}.\\ \end{itemize} \begin{figure}[t] \includegraphics[width=9cm, height=5cm]{Results_case1} \centering \caption{Output synchronization for communication networks of case $1$}\label{Results_case1} \end{figure} \begin{figure}[t] \includegraphics[width=9cm, height=5cm]{Results_case2} \centering \caption{Output synchronization for communication networks of case $2$}\label{Results_case2} \end{figure} \begin{figure}[t] \includegraphics[width=9cm, height=5cm]{Results_case3} \centering \caption{Output synchronization for communication networks of case $3$}\label{Results_case3} \end{figure} Figures \ref{Results_case1}-\ref{Results_case3} show that output synchronization is achieved for all three cases. 
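As a sanity check on these gain choices (taking $K$ as a row vector and $H$ as a column vector, as the dimensions of $A-BK$ and $A-HC$ require), the Routh--Hurwitz criterion confirms that both closed-loop matrices are Hurwitz stable. A small self-contained sketch:

```python
def charpoly3(M):
    """Coefficients [1, a2, a1, a0] of det(sI - M) for a 3x3 matrix M."""
    tr = M[0][0] + M[1][1] + M[2][2]
    # sum of the three principal 2x2 minors
    m = (M[0][0]*M[1][1] - M[0][1]*M[1][0]
         + M[0][0]*M[2][2] - M[0][2]*M[2][0]
         + M[1][1]*M[2][2] - M[1][2]*M[2][1])
    det = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
           - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
           + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    return [1.0, -tr, m, -det]

def hurwitz3(p):
    """Routh-Hurwitz test for s^3 + a2 s^2 + a1 s + a0."""
    _, a2, a1, a0 = p
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

A = [[0., 1., 0.], [0., 0., 1.], [0., -1., 0.]]
K = [30., 30., 10.]   # row vector
H = [6., 10., 0.]     # column vector
# B = (0,0,1)^T, C = (1,0,0): BK only affects row 3, HC only column 1.
A_BK = [row[:] for row in A]
A_BK[2] = [a - k for a, k in zip(A[2], K)]
A_HC = [[a - (H[i] if j == 0 else 0.) for j, a in enumerate(row)]
        for i, row in enumerate(A)]

print(charpoly3(A_BK))  # [1.0, 10.0, 31.0, 30.0] -> roots -2, -3, -5
print(charpoly3(A_HC))  # [1.0, 6.0, 11.0, 6.0]  -> roots -1, -2, -3
assert hurwitz3(charpoly3(A_BK)) and hurwitz3(charpoly3(A_HC))
```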
The simulation results also confirm that the protocol design is independent of the communication graph and is scale-free, so that we can achieve output synchronization with a one-shot protocol design for any graph with any number of agents.\\ \bibliographystyle{plain}
\section{Introduction} \subsection{Backgrounds} In the past decades, smart devices have been massively deployed in numerous fields such as the industrial IoT, vehicular networks and environmental monitoring networks. As an emerging paradigm for such distributed data systems, edge computing (EC), especially mobile edge computing (MEC), couples communication with computation and extends cloud computing (CC) by allowing computation to be executed at the edge nodes deployed along the path from data sources to the center cloud. In real applications, an MEC system is composed of three layers. The bottom level contains the sources of the data, such as user equipments (UEs), sensors or web cameras, that generate the data in order to provide certain services. At the middle layer, the edge devices can be smartphones, smart routers, intelligent base stations and vehicles with processors on board, which play a relay role. Edge devices can collect data from edge sensors, assist in executing some computation tasks and then relay them to the cloud center \cite{shi2016edge}. At the top level, the data and the service requirements, referred to as tasks, are assembled at the cloud center. The edge processing schemes need to be designed so as to improve the system efficiency as well as its flexibility \cite{sun2016edgeiot}. That is, they should efficiently utilize the edge computation capabilities to reduce the redundant communication consumption of the networks. Recently, benefiting from the development of 5G networks, more data can be transmitted with lower latency, which makes it possible for MEC systems to be applied in several promising real-time applications. For instance, MEC can support augmented reality (AR) and virtual reality (VR), enhance the user's experience and promote the quality of media streaming. 
In such cases, the images or videos are transmitted and processed in the network, and the demands for data rate and timeliness become stricter than in conventional distributed scenes \cite{hu2015mobile}. Besides, in some urban security scenes such as urgency monitoring, the delay of data transmission and processing directly impacts the quality of service \cite{mao2017survey}. To address such time-sensitive scenarios, the age of information (AoI) \cite{abd2019role} has been introduced to the related investigations as a metric of data freshness. Generally, the computing resources and capacities of the mobile edge devices are limited. Additionally, the arrivals of data and computation tasks are dynamic, and the communication rates between the entities are also constrained by restricted bandwidth. Thus, the scheduling of MEC systems should be well designed. To some degree, such a problem is usually cast into a joint stochastic optimization problem with certain targets. In particular, considering the strategies for entity operation in MEC networks, the MEC collaboration shall be fully studied \cite{abbas2017mobile}. The collaborations are mainly considered from the following three aspects: 1) resource management, including computation resources, power and bandwidth allocation, etc. \cite{liu2020resource,du2018energy}; 2) edge-level control, i.e., the data collection and trajectory planning for mobile edge devices \cite{8767017}; 3) data scheduling, including the task execution at the edge devices, the computation assignment for the edge network and the data offloading to the cloud center \cite{mach2017mobile,zhao2019novel}. The critical point of the joint collaboration is to achieve globally optimal strategies for MEC systems. 
Unfortunately, due to the dynamics of the MEC environment and the complex coupling of the entities from different layers, the related optimization problems are always non-convex and NP-hard \cite{zhao2019computation}. Therefore, the conventional vanilla optimization methods may not work well for the joint MEC collaboration problems, and iterative online approaches with better learning ability and intelligence shall be investigated. \subsection{Motivations} Further, for future 6G networks, edge-native artificial intelligence (AI) is regarded as a promising direction which leverages MEC systems together with distributed computing applications. The concepts of \textit{AI for EC} and \textit{EC for AI} are promising visions for future data systems \cite{loven2019edgeai}. In this article, we mainly focus on \textit{AI for EC} and consider how to exploit AI or deep learning (DL) techniques to improve the performance of edge computing systems. Reinforcement learning (RL), especially deep reinforcement learning (DRL) \cite{mnih2015human}, has been incorporated into several decision-making scenarios such as automatic robot control and game playing \cite{ye2020mastering}. A widely used class of reinforcement learning approaches is value-based RL, e.g., Q-learning and the deep Q-network (DQN), which predict the return of each action and then make decisions according to an action-value function \cite{sutton2018reinforcement}. For continuous control or cases where the action space is extremely large, policy-based methods such as DDPG (Deep Deterministic Policy Gradient) are introduced \cite{lillicrap2015continuous}. The basic notion is that an actor module and a critic module are built to fit the best strategies and the evaluation values, respectively. 
Moreover, multi-agent reinforcement learning (MARL) \cite{busoniu2008comprehensive} and federated learning (FL) based reinforcement learning \cite{zhuo2019federated} have also been developed for distributed scenes where agents learn to make decisions through their local observations and cooperate for the same system targets. Motivated by these, we consider a multi-agent reinforcement learning approach for joint MEC collaboration to maintain data freshness. In particular, we investigate a cooperative learning framework with mixed policies where the agents learn multiple strategies through local observations and states. Besides, the communication mechanism among learning agents is also a critical point to be studied in this article. \subsection{Related Works} In the literature, related research on joint collaboration, age optimization and the applications of deep learning in MEC systems has been broadly conducted. Ndikumana \textit{et al.} \cite{ndikumana2019joint} proposed a joint framework for communication, task computation, data caching and distributed control in big-data edge computing and evaluated several performance metrics for different procedures. To enhance the mobility and adaptability of edge computing, unmanned aerial vehicle (UAV) assisted MEC systems were investigated \cite{merwaday2015uav,sharma2016uav}. The model formulations, including task arrival, computation, data scheduling and communication, were respectively studied in \cite{emara2020spatiotemporal,cao2019intelligent,ning2020partial,matolak2015unmanned}. Zhou \textit{et al.} \cite{zhou2018uav} and Liu \textit{et al.} \cite{liu2019uav} studied the energy-efficient joint optimization of UAV-assisted systems, and latency-aware collaboration was also studied in \cite{liu2017latency}. In terms of age-sensitive MEC systems, AoI-based metrics have been introduced to measure the freshness of data \cite{costa2014age}. 
Wang \textit{et al.} \cite{wang2021minimizing} proposed the age of critical information in mobile computing and developed a partially observed scheduling approach. The AoI-aware radio resource allocation of multi-vehicular communications was studied in \cite{chen2020age}. Besides, Liu \textit{et al.} \cite{liu2018age} and Hu \textit{et al.} \cite{hu2020aoi} studied age-optimal joint collaboration in time-sensitive MEC systems. A number of studies exploited multi-stage optimization methods or iterative algorithms to solve the joint collaboration problems. However, these existing approaches suffer from the challenge that real-world scenes are dynamic and complicated, which makes it difficult for these algorithms to extract the latent connections between the environment variation and the entity operation. Hence, some reinforcement learning based online algorithms have been developed \cite{tong2020deep} for MEC data systems. Q-learning and DQN based RL methods were applied to resource allocation \cite{wang2019smart}, task offloading \cite{li2018deep}, as well as trajectory planning \cite{wan2019towards}. Particularly, Chen \textit{et al.} \cite{chen2018optimized} addressed the limitations of Q-learning and proposed a double-DQN based offloading algorithm. Additionally, taking multiple edge devices into consideration, some extended versions of multi-agent reinforcement learning methods were proposed. A multi-agent actor-critic based offloading approach was designed in \cite{wang2020multi1}. Peng \textit{et al.} \cite{peng2020multi} and Wang \textit{et al.} \cite{wang2020multi} adopted multi-agent DDPG (MADDPG) frameworks for resource management and trajectory planning in MEC networks. Besides, in \cite{zhang2020uav}, a vehicular-layer MADDPG with an attention mechanism was studied for multi-UAV assisted networks. 
Moreover, by incorporating federated learning into multi-agent control \cite{kumar2017federated}, Wang \textit{et al.} \cite{wang2020federated} combined FL and DQN as a decentralized cooperative framework to improve the performance of edge caching. \subsection{Contributions \& Paper Organization} The main contributions of this work can be summarized as follows: \begin{itemize} \item[$\bullet$] We put forward the system model and a multi-agent Markov decision process (MDP) formulation to characterize the problems in age-sensitive mobile edge computing for further investigation. \item[$\bullet$] We build a simulation environment as a \textit{gym} module\footnotemark[1] for these MEC systems, which can be easily employed to test the performance of different collaboration approaches. \item[$\bullet$] We present a multi-agent deep reinforcement learning framework, H-MAAC, for MEC joint collaboration. It is a multimodal framework that takes heterogeneous inputs to learn mixed policies for trajectory planning, data scheduling and resource allocation. \item[$\bullet$] We develop the corresponding multi-agent cooperation algorithm for online joint collaboration by introducing the edge federated learning mode into the MEC collaboration, abbreviated as EdgeFed H-MAAC\footnote[1]{The codes for the MEC simulation environment and the evaluation of several RL collaboration algorithms are publicly available at \url{https://github.com/Zhuzzq/EdgeFed-MARL-MEC}}, which outperforms DDPG and MADDPG on both system metrics and convergence. To the best of our knowledge, it is the first joint MEC collaboration algorithm that combines the edge federated mode with multi-agent actor-critic reinforcement learning. \item[$\bullet$] The convergence analysis of the proposed MEC collaboration algorithm is also provided in theory, which implies that the EdgeFed H-MAAC method achieves better convergence. Besides, the parameter design is also discussed. 
\end{itemize} The remainder of the article is organized as follows. In Section \ref{section system}, we propose an age-sensitive MEC system model and present the problems of interest in this work. In Section \ref{section alg}, we first formulate the Markov decision process for the age minimization problem, and then build up a multi-agent edge federated actor-critic learning framework as well as develop the corresponding cooperation algorithm. The learning convergence of the proposed algorithm is presented as a theorem in Section \ref{section convergence}. The simulation results and more discussions are presented in Section \ref{section sim}. Finally, in Section \ref{section conclusion}, we conclude this work and give several potential research directions. \section{System Model and Problem Formulation} \label{section system} In this section, a classic 3-tier multi-agent edge computing system will first be introduced, including the corresponding communication and operation models. Then, the problems concerned in age-sensitive scenarios will be defined. \begin{table*}[htbp] \caption{\upshape Main notations.} \label{notation} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \begin{tabular}{c|c|l} \hline \rowcolor{gray!20} & Notations & Description \\ \hline \multirow{22}{*}{\makecell[c]{Environment \\Notations}} & $t$ & The $t$-th time slot. \\ & $N\!_s$ & The number of the data sources. \\ & $N\!_e$ & The number of the edge devices. \\ & $\mathbb{S}=\left\{S_1,\cdots,S_{N\!_s}\right\}$ & The set of data sources. \\ & $\mathbb{E}=\left\{E_1,\cdots,E_{N\!_e}\right\}$ & The set of edge devices. \\ & $\mathbb{C}$ & Cloud data center. \\ & $d_{.}(t)$, $w_{.}(t)$, $idx_{.}(t)$ & \makecell[l]{Attributes of each packet considered in this research, i.e., the data size, \\the elapsed time and the index of its source.} \\ & $D_n(t)=\Big\{\left[d_{n,i}(t),w_{n,i}(t),n\right]\Big\}$ & $S_n$'s data buffer at $t$-th time slot. 
\\ & $\boldsymbol{\Delta}(t)=\big\{\Delta_1(t),\cdots,\Delta_{N\!_s}(t)\big\}$ & The list storing the age of all data sources at $t$-th time slot. \\ & $D^k_{col}(t)=\bigg\{\left[d_{col}^{k,i}(t),w_{col}^{k,i}(t),idx^{k,i}_{col}(t)\right]\bigg\}$ & $E_k$'s collected data buffer caching the data collected but not yet processed. \\ & $D^k_{exe}(t)=\bigg\{\left[d_{exe}^{k,i}(t),w_{exe}^{k,i}(t),idx^{k,i}_{exe}(t)\right]\bigg\}$ & $E_k$'s executed data buffer caching the processed data waiting to be offloaded. \\ & $B_{col}^k$ & The collected data buffer size of $E_k$. \\ & $B_{exe}^k$ & The executed data buffer size of $E_k$. \\ & $\boldsymbol{\rm pos}_k(t)=\big[x_k(t),y_k(t),h_k(t)\big]$ & The position of $E_k$ at $t$-th time slot. \\ & $r^k_{move}$, $r^k_{obs}$, $r^k_{collect}$ & \makecell[l]{The moving radius (per time slot), observing radius \\and the collecting coverage radius of $E_k$.} \\ & $\{\boldsymbol{a}_k(t)\}=\Big\{\left[\boldsymbol{\rm move}_k(t),\ \boldsymbol{\rm exe}_k(t),\ \boldsymbol{\rm off}_k(t)\right]\Big\}$ & The action of $E_k$ at $t$, including movement, execution and offloading. \\ & $\boldsymbol{b}(t)=\left[b_1(t), \cdots, b_{N\!_e\!}(t)\right]$ & Bandwidth allocation for edge agent offloading at $t$-th time slot. \\ \hline \multirow{14}{*}{\makecell[c]{Learning \\Framework\\Notations}}& $\mathcal{A}_k$, $\mathcal{A}'_k$ & The actor net and target actor net on $k$-th learning agent. \\ & $\mathcal{C}_k$, $\mathcal{C}'_k$ & The critic net and target critic net on $k$-th learning agent. \\ & $\boldsymbol{\mathcal{A}}=\big\{\mathcal{A}_1,\cdots,\mathcal{A}_{N\!_e},\mathcal{A}_\mathbb{C}\big\}$ & The set of actor net learning agents for edge devices and center controller. \\ & $\boldsymbol{\mathcal{C}}=\big\{\mathcal{C}_1,\cdots,\mathcal{C}_{N\!_e},\mathcal{C}_\mathbb{C}\big\}$ & The set of critic net learning agents for edge devices and center controller. 
\\ & $\boldsymbol{\mathcal{A}'}=\big\{\mathcal{A}'_1,\cdots,\mathcal{A}'_{N\!_e},\mathcal{A}'_\mathbb{C}\big\}$ & The set of target actor net learning agents corresponding to $\boldsymbol{\mathcal{A}}$. \\ & $\boldsymbol{\mathcal{C}'}=\big\{\mathcal{C}'_1,\cdots,\mathcal{C}'_{N\!_e},\mathcal{C}'_\mathbb{C}\big\}$ & The set of target critic net learning agents corresponding to $\boldsymbol{\mathcal{C}}$. \\ & $\big\{\boldsymbol {s}_k(t)\big\}$, $\boldsymbol s_\mathbb{C}(t)$ & The input states of edge device and center controller at $t$-th time slot. \\ & $\boldsymbol{\theta}_k$, $\boldsymbol{\theta}'_k$ & The parameters of $k$-th learning agent's actor net and target actor net. \\ & $\boldsymbol{\phi}_k$, $\boldsymbol{\phi}'_k$ & The parameters of $k$-th learning agent's critic net and target critic net. \\ & $\eta_{\mathcal{A}}$, $\eta_{\mathcal{C}}$ & Learning rates for actor/critic nets. \\ & $\boldsymbol{\mathcal{B}}$ & Experience buffer for replay. \\ & $\gamma\in[0,1]$ & Reward/Penalty decay. \\ & $\tau\in[0,1]$ & Coefficient for target net updates. \\ & $\omega\in[0,1]$ & Federated factor for edge updating. \\ \hline \end{tabular} \end{table*} \begin{figure}[htbp] \centering \begin{minipage}[b]{0.48\textwidth} \includegraphics[width=1\textwidth]{system.pdf} \end{minipage}% \caption{The 3-Tier Mobile Edge Computing System.} \label{system} \end{figure} As shown in Fig. \ref{system}, a classic 3-tier edge computing system with data collection is studied. The bottom level consists of data sources ($\mathbb{S}$) such as sensors, web cameras and users' equipments that continuously generate data packets. The middle level consists of mobile edge devices ($\mathbb{E}$) such as automobile base stations and unmanned aerial vehicle base stations (UAV-BSs), which manage to move in the area, communicate with the data sources and the cloud center, as well as carry out some processing of the data. 
At the top level, the cloud center ($\mathbb{C}$) is usually the data center with computing clusters that execute the computing tasks, store the data and implement centralized control for the whole system. The related models of data generation, edge mobility, edge operation, data scheduling, transmission and the center controller will be introduced in detail. Additionally, some necessary notations and their explanations are listed in TABLE~\ref{notation}. To explicitly investigate the operations in each time slot, we divide continuous time into small slots and use the integer $t$ to denote the $t$-th time slot. \subsection{Data Generation Model} Consider data sources that generate packets with the same formatted data structure, for instance, the sensor data from distributed sensors in IoT, the image or video data from web cameras in urban security monitoring or virtual/augmented reality (VR/AR) scenarios, and the data for a certain class of computing tasks from users with smart equipment. Assuming that the data sources generate independently, we describe each data packet using a tuple of data size, elapsed time and the index of its source, denoted by $d_{.}(t)$, $w_{.}(t)$, $idx_{.}(t)$ respectively. The arrival packets are temporarily stored in the source buffers and wait to be collected. \subsection{Edge Mobility Model} In the MEC system introduced above, edge devices are considered to be vehicular base stations with mobility and computing capacity. All edge devices move in the area, collect the packets from data sources, process the data locally and offload the data to the cloud center. In the following discussion, we assume that the edge devices $\mathbb E$ are a set of UAVs flying at certain heights over the data sources. 
Then, the mobility of edge devices can be modelled as: \begin{equation} \label{move} \boldsymbol{\rm pos}_k(t+1)=\boldsymbol{\rm pos}_k(t)+\boldsymbol{\rm move}_k(t) \end{equation} where $\boldsymbol{\rm pos}_k(t)$ denotes the position of $E_k$ at the beginning of the $t$-th time slot, and the movement $\boldsymbol{\rm move}_k(t)$ in each time slot is constrained by \begin{equation} \label{mobility} \Big\Vert \boldsymbol{\rm move}_k(t)\Big\Vert_2\leq r^k_{move} \end{equation} where $\Vert\cdot\Vert_2$ is the 2-norm of a vector. \subsection{Edge Processing \& Data Scheduling Model} While edge agent $E_k$ is hovering over data source $S_n$, all the packets in $S_n$'s data buffer will be collected and each takes up one piece of the collected data buffer, $D^k_{col}(t)$. The data pieces in $D^k_{col}(t)$ will be scheduled to be preprocessed using the local processor on $E_k$. Since the data packets are assumed to be of formatted structure and the edge preprocessing algorithms are determined, the computation on edge devices can be modelled by the data size and the edge computing rate \cite{liu2016delay}. Then, the accumulated execution duration of packets collected from $S_n$ with $E_k$ executing edge processing at time slot $t$ can be obtained by \begin{equation} \label{edge execution} \tau^k_n(t)=\frac{\sum\limits_i \mathbbm{1}_{\left\{idx^{k,i}_{col}(t)=n\right\}}\cdot d^{k,i}_{col}(t)}{f^k_c(t)} \end{equation} where $f^k_c(t)$, related to the CPU-cycle frequency, is $E_k$'s edge execution data rate for the preset tasks at time slot $t$. On each edge device $E_k$, the collected data buffer $D^k_{col}(t)$ caches the data that have been collected from data sources but not yet edge-processed. Assume that in every operation slot, $E_k$ allocates its edge computation resources to one data piece in the buffer. 
Thus, the edge execution decision at the $t$-th slot can be denoted by a one-hot vector, \begin{equation} \label{exe op} \boldsymbol{\rm exe}_k(t)=\big[{\rm exe}_1(t),\cdots,{\rm exe}_{B_{col}^k}(t)\big] \end{equation} where ${\rm exe}_i(t)\in\left\{0,1\right\}$ is the CPU allocation flag for each data piece in $E_k$'s collected data buffer and it obviously satisfies the condition, \begin{equation} \sum\limits_{i=1}^{B_{col}^k}{\rm exe}_i(t)=1\ \ \ \ \ for\ k=1,\cdots,N_e. \end{equation} After local execution on the edge, the data will be cached in the executed data buffers, $\big\{D^k_{exe}(t)\big\}$, waiting to be offloaded to the cloud center. Similarly, assume that in every operation slot, $E_k$ decides to offload one data piece in $D^k_{exe}(t)$. Thus, the offloading scheduling can be described by a one-hot vector, \begin{equation} \label{off op} \boldsymbol{\rm off}_k(t)=\big[{\rm off}_1(t),\cdots,{\rm off}_{B_{exe}^k}(t)\big] \end{equation} where ${\rm off}_i(t)\in\left\{0,1\right\}$ are the offloading decisions for each data piece, and they also satisfy the condition, \begin{equation} \sum\limits_{i=1}^{B_{exe}^k}{\rm off}_i(t)=1\ \ \ \ \ for\ k=1,\cdots,N_e. \end{equation} \subsection{Communication Model} There are three kinds of transmission links between the entities in the environment, namely source-edge, edge-edge and edge-cloud. In this work, we investigate the cases where edge devices only share states, observations and learning parameters, which means that neither data nor tasks are transmitted between edge devices. Therefore, the transmission costs of edge-edge communication are negligible. The transmission processes of the source-edge and edge-cloud links can be modelled as an air-to-ground (A2G) channel \cite{matolak2015unmanned}, where both line-of-sight (LoS) path loss and non-line-of-sight (NLoS) loss shall be considered \cite{al2014optimal}. 
The path loss in each case is given by \begin{equation} \label{PL} PL_\xi(t)=\Big(\frac{4\pi f}{c}\Big)^2\cdot d^2(t)\cdot\eta_\xi \end{equation} where $d(t)=\sqrt{x^2(t)+y^2(t)+h^2(t)}$ is the distance between the edge device and the ground entity (a chosen data source or the cloud center), $f$ is the carrier frequency and $c$ is the speed of light. Besides, $\eta_\xi$ with $\xi\in\{0,1\}$ represents the excessive path loss of the LoS and NLoS cases. Hence, the average A2G path loss of the communication channel for $E_k$-$S_n$ (or $E_k$-$\mathbb{C}$) at the $t$-th slot can be obtained by \begin{equation} \label{path loss} \overline{L}_{k,n}(t)=p_0(t)\cdot PL^{k,n}_0(t)+p_1(t)\cdot PL^{k,n}_1(t) \end{equation} where $p_0(t)$ and $p_1(t)=1-p_0(t)$ are the probabilities of LoS and NLoS, which can be closely approximated by the following form: \begin{equation} \label{p_los} p_0(t)=\frac{1}{1+a\exp\big(-b(\psi-a)\big)} \end{equation} where $\psi=\tan^{-1}\Big(\frac{h(t)}{\sqrt{x^2(t)+y^2(t)}}\Big)$ is the angle between the edge-ground link and the horizontal plane. Moreover, $a$ and $b$ are parameters related to the environment. Then, considering the frequency division mode with total bandwidth $W$, for the channel with allocated bandwidth proportion $b_{k,n}(t)$, the transmission rate between $E_k$ and $S_n$ (or $\mathbb{C}$) can be expressed by \begin{equation} \label{trans rate} R_{k,n}(t)=b_{k,n}(t)W\log_2\Big(1+\frac{P^k_{tr}(t)}{\overline{L}_{k,n}(t)N_0b_{k,n}(t)W}\Big) \end{equation} where $N_0$ is the noise power spectral density and $P^k_{tr}(t)$ represents the transmission power satisfying \begin{equation} \label{power} 0\leq P^k_{tr}(t)\leq P^k_{tr,max}. \end{equation} \subsection{Problem Formulation} In age-sensitive scenarios, the freshness of the data is of particular importance. 
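Before turning to the age metric, the communication model above can be exercised end to end. In the sketch below, every numeric parameter (carrier frequency, excessive losses, environment constants $a$, $b$, power, bandwidth, noise density) is an illustrative assumption rather than a value used in this paper, and the elevation angle is taken in degrees as is common in A2G models:

```python
import math

def a2g_rate(x, y, h, f_c=2e9, eta=(1.0, 20.0), a=9.61, b=0.16,
             P_tr=0.1, W=1e6, b_alloc=0.5, N0=4e-21):
    """Average A2G path loss and transmission rate following the
    LoS/NLoS model sketched in the text (illustrative parameters)."""
    c = 3e8
    d2 = x * x + y * y + h * h                          # squared distance
    fs = (4 * math.pi * f_c / c) ** 2 * d2              # free-space term * d^2
    pl_los, pl_nlos = fs * eta[0], fs * eta[1]          # PL_0, PL_1
    psi = math.degrees(math.atan2(h, math.hypot(x, y))) # elevation angle (deg)
    p0 = 1.0 / (1.0 + a * math.exp(-b * (psi - a)))     # P(LoS)
    L_bar = p0 * pl_los + (1.0 - p0) * pl_nlos          # average path loss
    bw = b_alloc * W                                    # allocated bandwidth
    snr = P_tr / (L_bar * N0 * bw)
    return L_bar, bw * math.log2(1.0 + snr)

L_bar, R = a2g_rate(x=100.0, y=100.0, h=120.0)
assert L_bar > 0.0 and R > 0.0
```

As expected from the model, moving the edge device farther from the ground entity both increases the average path loss and lowers the LoS probability, so the achievable rate decreases.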
Recapping the concept of age of information (AoI) \cite{kosta2017age}, in the system introduced above, the age of data source $S_n$ at the $t$-th time slot is defined as the difference between the current time and the generation time of the latest data at the receiver \cite{costa2016age}, which can be expressed as \begin{equation} \label{age} \Delta_n(t)=t-T_g^n(t) \end{equation} where $T_g^n(t)$ denotes the generation time of $S_n$'s latest data packet received by the cloud center. Different from the delay of each data packet, $\Delta_n(t)$ is a duration measure for each data source which indicates how frequently the data of $S_n$ are collected, edge executed and offloaded; this is especially important in such a time-sensitive MEC system. Thus, an $N_s$-dimensional vector $\boldsymbol{\Delta}(t)$ is used to record the age of each data source. To maintain timeliness, the target is to minimize the average age of all data sources by controlling the edge devices, scheduling the data and allocating the resources effectively. Therefore, taking the above system models into account as the constraints, we obtain the following optimization problem: \begin{align} \label{P1} \mathcal P_1:\quad & \min\limits_{\{\boldsymbol{a}_k(t)\},\boldsymbol{b}(t)}\ \ \Bigg[\overline{\boldsymbol{\Delta}}(t):=\frac{1}{N_s}\sum\limits_{n=1}^{N_s}\Delta_n(t)\Bigg] \\ & \begin{array}{l@{\ }l@{}l@{\ }l} \notag \mbox{s.t.}\quad & \quad\ \mbox{Eq.}(\ref{move})\ \mbox{to}\ \mbox{Eq.}(\ref{power}). \\ \end{array} \end{align} Before solving the optimization problem above, let us rethink the introduced system models and the corresponding problems from the following three perspectives. Firstly, the constraints of $\mathcal P_1$ are heterogeneous and the optimization objective involves two stages (an edge stage and a cloud stage). Thus, the optimal solutions cannot be expressed explicitly, and an iterative algorithm shall be adopted.
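The age recursion of Eq. (\ref{age}) requires only light bookkeeping per source. The following sketch (class and method names are ours; it assumes one initial packet per source generated at $t=0$) shows how $\boldsymbol{\Delta}(t)$ and the objective $\overline{\boldsymbol{\Delta}}(t)$ can be maintained:

```python
# AoI bookkeeping sketch for Eq. (age): Delta_n(t) = t - T_g^n(t).

class AoITracker:
    def __init__(self, n_sources):
        # T_g[n]: generation time of the latest packet of S_n received by
        # the cloud center; initialized to 0 (one initial packet at t = 0).
        self.T_g = [0] * n_sources

    def on_delivery(self, n, gen_time):
        """Cloud receives a packet of source S_n generated at `gen_time`;
        stale packets must not make the source look fresher."""
        self.T_g[n] = max(self.T_g[n], gen_time)

    def ages(self, t):
        """Delta_n(t) = t - T_g^n(t) for every source."""
        return [t - tg for tg in self.T_g]

    def avg_age(self, t):
        """Objective of P1: average AoI over all sources."""
        a = self.ages(t)
        return sum(a) / len(a)
```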
Secondly, note that $\mathcal P_1$ is an instantaneous optimization problem and the environment states are stochastic, which means that the strategies may vary from one time slot to another. Thirdly, because $\Delta_n(t)$ depends not only on the current states but also on the states of previous slots, the optimal solutions for each individual time slot may not lead to full-time optimization. For the above reasons, we formulate the optimization problem as an MDP game and discuss reinforcement learning based solutions as in \cite{wiering2012reinforcement}. \section{An Edge Federated Actor-Critic Learning Framework for Multi-Agent Cooperation} \label{section alg} In this section, a Markov decision process (MDP) will be modelled for optimization problem $\mathcal{P}_1$. Then, with the MDP formulation for MEC collaboration, we shall adopt multi-agent reinforcement learning approaches to solve the MDP problem \cite{van2012reinforcement}. Q-Learning and DQN are popular value-based RL methods which learn the action-value function $Q(\boldsymbol{s},\boldsymbol{a})$ related to the system reward/penalty. However, when the action space grows too large, the search for optimal actions becomes extremely hard. To overcome the complexity of the action space, policy-based methods such as A2C and DDPG are introduced, where dual neural networks are employed to estimate the action $\boldsymbol{a}$ and the $Q$ value respectively. Moreover, in multi-agent scenes, the single-learning-agent mode or centralized RL requires a large neural network with a complex structure and massive model parameters, which may suffer from difficulties in training convergence and model generalization \cite{littman1994markov,panait2005cooperative}. For the above reasons, we present a heterogeneous multi-agent actor-critic learning framework (H-MAAC) for MEC collaboration and the corresponding algorithm to solve the optimization problem. Besides, the convergence analysis of the proposed algorithm will be given.
\subsection{MDP Formulation for MEC Collaboration} A Markov decision process is a common model to formulate such environment-interactive systems \cite{puterman2014markov}. An MDP game can be expressed by a tuple with four elements, $\mathcal{M}\big\{S,A,R,\mathcal{T}\big\}$, standing for the states, actions, rewards and transition policies, respectively. Note that in optimization problem $\mathcal{P}_1$, the target is to minimize the average AoI objective. Therefore, we rewrite the element $R$ as $P$ to represent the penalty of the game and obtain the substitute tuple of the MDP formulation, $\mathcal{M}\big\{S,A,P,\mathcal{T}\big\}$. Furthermore, a multi-agent extended version of the MDP contains a number of agents to match scenarios with multiple controllable entities. In the above MEC collaboration system, all mobile edge devices and the center controller can be regarded as agents. Overall, we have $N_e$ edge agents and one center agent. All the agents observe their states and act with certain strategies. \subsubsection{States} For edge agents, their states $\{\boldsymbol{s}_k(t)\}$ contain the local observations of the environment and the status of the edge devices, including buffer states, allocated offloading bandwidth, etc. The state of the center agent, $\boldsymbol{s}_\mathbb{C}(t)$, consists of the statuses of all edge devices. \subsubsection{Actions} In the cases of interest, edge devices move, collect data, locally execute and offload tasks to the cloud center. Hence, the edge agents' actions are composed of movement, execution decision and offloading scheduling, denoted by \begin{equation} \big\{\boldsymbol{a}_k(t)\big\}=\Big\{\left[\boldsymbol{\rm move}_k(t),\ \boldsymbol{\rm exe}_k(t),\ \boldsymbol{\rm off}_k(t)\right]\Big\}. \end{equation} Meanwhile, the cloud center controller allocates the offloading bandwidth for each edge device. Thus, the action of the center agent, $\boldsymbol{a}_\mathbb{C}(t)$, is the one-sum bandwidth proportion vector $\boldsymbol{b}(t)$.
\subsubsection{Penalties} Since we focus on edge collaboration in an age-sensitive MEC system where all agents collaborate to minimize the average age of the data sources, the global penalty is shared by all agents. Then, the current penalty at the $t$-th slot for each agent can be described as \begin{equation} \label{penalty} p_k(t)=\overline{\Delta}(t)\quad \text{for}\ k=1,\cdots,N_e,\mathbb{C}. \end{equation} To investigate the global optimization of the system, the following long-term penalty with decay is considered: \begin{equation} \label{long-time penalty} P_k(t)=\sum\limits_{i=0}^{T}\gamma^i\cdot p_k(t+i) \end{equation} where $\gamma$ is the decay coefficient and $T$ is the length of the time window. \subsubsection{Transition Policies} For the MEC system, it is hard to find an explicit model covering all the state transitions of the data sources, edge devices and cloud center as well as the resource allocation. As a result, we use \begin{equation} \label{transition} \mathcal{T}\Big(\{\boldsymbol{s}_k(t+1)\},\boldsymbol{s}_\mathbb{C}(t+1)\Big|\{\boldsymbol{s}_k(t)\},\boldsymbol{s}_\mathbb{C}(t),\{\boldsymbol{a}_k(t)\},\boldsymbol{a}_\mathbb{C}(t)\Big) \end{equation} to represent the entities' interactions in the system.
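For illustration, the discounted accumulation in Eq. (\ref{long-time penalty}) reduces to a one-line sum; this sketch (the function name is ours) takes the per-slot shared penalties $p_k(t),\dots,p_k(t+T)$ as a list:

```python
# Sketch of Eq. (long-time penalty): P_k(t) = sum_{i=0}^{T} gamma^i * p_k(t+i),
# where the shared per-slot penalty p_k(t) is the average AoI of Eq. (penalty).

def long_term_penalty(per_slot_penalties, gamma):
    """per_slot_penalties[i] holds p_k(t + i) for i = 0, ..., T."""
    return sum((gamma ** i) * p for i, p in enumerate(per_slot_penalties))
```

For example, `long_term_penalty([1.0, 2.0, 4.0], 0.5)` evaluates to $1 + 1 + 1 = 3.0$.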
\subsection{An Edge-Federated Heterogeneous Multi-Agent Actor-Critic Framework} \begin{figure}[htbp] \centering \begin{minipage}[b]{0.48\textwidth} \includegraphics[width=1\textwidth]{learning_framework.pdf} \end{minipage}% \caption{Architecture of the proposed H-MAAC based reinforcement learning collaboration framework} \label{learning framework} \end{figure} \begin{figure*}[htbp] \centering \subfigure[Edge Actor Net.]{ \label{aanet} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{aanet} \end{minipage}}% \subfigure[Center Actor Net.]{ \label{canet} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{canet} \end{minipage}} \subfigure[Edge Critic Net.]{ \label{acnet} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{acnet} \end{minipage}} \subfigure[Center Critic Net.]{ \label{ccnet} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{ccnet} \end{minipage}} \caption{Neural network models of each actor-critic agent.} \label{Net Model} \end{figure*} To control the edge devices and optimize the center bandwidth allocation, we present a heterogeneous multi-agent actor-critic (H-MAAC) framework. The construction of the framework and the learning procedures are shown in Fig. \ref{learning framework}, where learning agents interact with the environment, store their interactions for experience replay and learn the optimal actions to minimize the system penalty \cite{lowe2017multi}. The heterogeneity comes from three aspects: 1) the multi-modality of the input states; 2) the multiple output actions; 3) the ensemble neural network models used to learn the mixed policies. Overall, dual lightweight neural networks are built for each learning agent, comprising original actor/critic nets and target actor/critic nets. Since the structures of the networks are relatively simple and consume few computational resources, the framework can be deployed on each edge device. The details of the framework are as follows.
\subsubsection{Neural network design} The actor nets, $\mathcal{A}_k\big(\boldsymbol{s}_k(t);\boldsymbol{\theta}_k\big)$, take the states of each agent as input and output the current actions $\boldsymbol{a}_k(t)$. Since the input states and the output actions are multimodal, the networks should be designed according to the specific data structures. For the edge agents, we build a multiple-input multiple-output neural network for each device to learn the diversified actions from the ensemble states, including the local observation of the data sources, the edge buffer states and the offloading channel states. The structure of the edge agent actor net is shown in Fig. \ref{aanet}, where a convolutional neural network (CNN) with average pooling is employed to obtain the movement action, while multilayer perceptrons (MLPs) are employed for edge execution as well as offloading scheduling. Specifically, we format the local observation of the data sources into an $r^k_{obs}\times r^k_{obs}\times 2$ map. The observation maps can be regarded as 2-channel images where the third dimension refers to the aggregated size and delay of the data packets sensed by edge device $E_k$. Inspired by the successful applications of CNNs in computer vision, we adopt a CNN-based neural network to extract the areas with larger data packets as well as higher AoI. Then, through average pooling, we project the $r^k_{obs}\times r^k_{obs}\times 2$ observation map onto the $r^k_{move}\times r^k_{move}\times 1$ movement map to decide the trajectory actions. The other inputs, namely the buffer states and the allocated bandwidth, are formatted as lists and scalars, and the data-scheduling vectors are output by an MLP. For the center actor net, the inputs consist of the state lists of the edge devices and the output is a one-sum vector for bandwidth allocation; that is, we use an MLP to combine the multiple edge state lists and allocate the bandwidth proportions for edge-center communication, as shown in Fig.
\ref{canet}. Additionally, on each learning agent, a critic net, $\mathcal{C}_k\big(\boldsymbol{s}_k(t),\boldsymbol{a}_k(t);\boldsymbol{\phi}_k\big)$, is also deployed to approximate the action-value function $Q\big(\boldsymbol{s}_k(t),\boldsymbol{a}_k(t)\big)$ with the current states and actions as inputs. Note that the objectives of all $\mathcal{C}_k$ are consistent in order to minimize the average age of the whole system. The critic nets are designed by the main structures of the actor nets plus the layers for action evaluation, as shown in Fig. \ref{acnet} and Fig. \ref{ccnet}. \subsubsection{Target net update} In addition to the original actor-critic nets, target actor-critic nets are also built. The target nets have the same structure and initialization as the original nets. While training the network parameters, the target nets estimate the future actions $\boldsymbol{a}'_k(t+1)$ as well as the $Q'\big(\boldsymbol{s}'_k(t+1),\boldsymbol{a}'_k(t+1)\big)$ values based on the states of the next slot. The use of target nets improves the stability and convergence of replay training \cite{arulkumaran2017brief}. The parameters of the target nets are slowly updated from the original nets every $T_u$ epochs with the mixing weight $\tau$, i.e., \begin{equation} \label{target update} \begin{aligned} \boldsymbol{\theta}'_k=\tau\boldsymbol{\theta}'_k+(1-\tau)\boldsymbol{\theta}_k \\ \boldsymbol{\phi}'_k=\tau\boldsymbol{\phi}'_k+(1-\tau)\boldsymbol{\phi}_k. \end{aligned} \end{equation} \subsubsection{Experience replay} To improve the online learning efficiency, an experience replay approach is exploited here. The interactions between the entities and the environment, denoted as tuples $\left\{\boldsymbol{s}(t),\boldsymbol{a}(t),\overline{\Delta}(t),\boldsymbol{s}'(t+1)\right\}$, are stored in an experience buffer $\boldsymbol{\mathcal{B}}$. The experience buffer is of finite capacity; when the buffer is full, new records replace the oldest ones.
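The finite-capacity, oldest-first replacement semantics of $\boldsymbol{\mathcal{B}}$ can be sketched with a bounded deque; the class name and interface below are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

# Experience buffer sketch: finite capacity; when full, appending a new
# record silently drops the oldest one, which deque(maxlen=...) gives for free.

class ExperienceBuffer:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def add(self, record):
        """record: one interaction tuple {s(t), a(t), avg_age(t), s'(t+1)}."""
        self.buf.append(record)

    def __len__(self):
        return len(self.buf)
```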
As for the sampling rules, to keep the learning synchronized with the environment interactions, the latest $B/2$ records are always included in the training batch. In each learning epoch, each agent samples interaction data with batch size $B$ from the experience buffer and updates the network parameters with the learning rates $\eta_{\mathcal{A}}$ and $\eta_{\mathcal{C}}$. Specifically, the critic nets are updated by minimizing the MSE loss function, \begin{equation} \label{C loss} \ell_{\mathcal{C}_k}(\boldsymbol{\phi}_k):=\mathrm{E}\left[\Vert\mathcal{C}_k(\boldsymbol{s}_k,\boldsymbol{a}_k;\boldsymbol{\phi}_k)-\hat{y}_k\Vert^2\right] \end{equation} where \begin{equation} \label{yhat} \hat{y}_k=\overline{\Delta}+\gamma\mathcal{C}'_k(\boldsymbol{s}'_k,\boldsymbol{a}'_k;\boldsymbol{\phi}'_k) \end{equation} is the estimated long-term $Q$ value. Since the goal is to minimize the penalty, the loss function of the actor nets can be defined as the predicted $Q$ value, \begin{equation} \label{A loss} \ell_{\mathcal{A}_k}({\boldsymbol\theta}_k):=\mathcal{C}_k\Big(\boldsymbol{s}_k,\mathcal{A}_k(\boldsymbol{s}_k;{\boldsymbol{\theta}}_k);\boldsymbol{\phi}_k\Big). \end{equation} Then, we summarize the training procedure at the $t$-th epoch as Algorithm \ref{alg:replay}.
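The data flow of Eqs. (\ref{C loss}) to (\ref{A loss}) for a single replay sample can be sketched as follows; the stand-in callables replace the actual networks, so this only illustrates how the TD target, critic loss and actor loss fit together:

```python
# One replay step in miniature: the target nets produce the TD target y_hat
# (Eq. yhat), the critic is fit to it (Eq. C loss), and the actor loss is the
# predicted Q value itself, minimized because Q here estimates a penalty.

def td_target(avg_age, gamma, target_critic, target_actor, s_next):
    """y_hat = avg_age + gamma * C'(s', A'(s'))."""
    return avg_age + gamma * target_critic(s_next, target_actor(s_next))

def critic_loss(critic, s, a, y_hat):
    """Squared TD error for a single sample."""
    return (critic(s, a) - y_hat) ** 2

def actor_loss(critic, actor, s):
    """Predicted penalty Q value under the actor's own action."""
    return critic(s, actor(s))

# Stand-ins: 'networks' that are simple functions of scalar states/actions.
target_actor = lambda s: s + 1.0
target_critic = lambda s, a: s + a
y = td_target(2.0, 0.9, target_critic, target_actor, 3.0)  # 2 + 0.9*(3+4) = 8.3
```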
\begin{algorithm} \caption{Experience Replay Procedure} \label{alg:replay} \begin{algorithmic}[1] \FOR{each agent $k$ in $\left\{1,\cdots,N_e,\mathbb{C}\right\}$} \STATE Sample $\left\{\boldsymbol{s}_k,\boldsymbol{a}_k,\overline{\Delta},\boldsymbol{s}'_k\right\}$ from $\boldsymbol{\mathcal{B}}[k]$;\\ \STATE Predict new actions: $\boldsymbol{a}'_k=\mathcal{A}'_k(\boldsymbol{s}'_k;\boldsymbol{\theta}'_k)$;\\ \STATE Predict new $Q$-value: $Q'(\boldsymbol{s}'_k,\boldsymbol{a}'_k)=\mathcal{C}'_k(\boldsymbol{s}'_k,\boldsymbol{a}'_k;\boldsymbol{\phi}'_k)$;\\ \STATE Calculate $\hat{y}_k$ by Eq.(\ref{yhat});\\ \STATE Calculate $\ell_{\mathcal{C}_k}(\boldsymbol{\phi}_k)$, $\ell_{\mathcal{A}_k}(\boldsymbol{\theta}_k)$ by Eq.(\ref{C loss}), Eq.(\ref{A loss});\\ \STATE Update network parameters using SGD optimizer:\\ \quad\quad\quad\quad\quad $\boldsymbol{\phi}^{t+1}_k\leftarrow\boldsymbol{\phi}^t_k-\eta_{\mathcal{C}}\nabla_\phi\tilde{\ell}_{\mathcal{C}_k}(\boldsymbol{\phi}^t_k)$,\\ \quad\quad\quad\quad\quad $\boldsymbol{\theta}^{t+1}_k\leftarrow\boldsymbol{\theta}^t_k-\eta_{\mathcal{A}}\nabla_\theta\tilde{\ell}_{\mathcal{A}_k}(\boldsymbol{\theta}^t_k)$.\\ \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{Edge-federated mode} Note that this is a cooperative model in which the penalties of the edge agents are identical. Generally speaking, cross-communication is required in such multi-agent learning scenarios to share the knowledge of different agents and reach the global optimum. However, in the MEC system of interest, the encoding of the multimodal input states is hard to design. Moreover, the transmission and processing of the observations would cost excessive communication and computation resources.
Hence, to overcome these difficulties, inspired by the concept of federated learning \cite{yang2019federated}, we propose an edge-federated mode for the above framework, where every $E_f$ learning epochs, all edge agents share their actor net parameters and carry out the federated updating. Under the proposed updating rule, each edge agent preserves its own parameters with weight $\omega$ and mixes in the others' parameters, which can be formulated as \begin{equation} \label{ef updating} \boldsymbol{\theta}^{t+1}=\boldsymbol{\theta}^t\cdot\boldsymbol{\Omega} \end{equation} where $\boldsymbol{\theta}^t=\left[\boldsymbol{\theta}^t_1,\cdots,\boldsymbol{\theta}^t_{N_e}\right]$ denotes the vector of all edge actor nets at the $t$-th learning epoch and $\boldsymbol{\Omega}$ denotes the federated updating matrix, \begin{equation} \label{Omega} \boldsymbol{\Omega}= \left[ \begin{array}{cccc} \omega & \frac{1-\omega}{N_e-1} & \cdots & \frac{1-\omega}{N_e-1} \\ \frac{1-\omega}{N_e-1} & \omega & \cdots & \frac{1-\omega}{N_e-1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1-\omega}{N_e-1} & \frac{1-\omega}{N_e-1} & \cdots & \omega \end{array} \right ]. \end{equation} On the one hand, the edge-federated mode provides model-wise communication for each edge agent. Instead of sharing the input states, only the parameters of the lightweight actor nets are transmitted, which improves the communication efficiency of the system \cite{konevcny2016federated}; such communication costs can be neglected compared to the data volume in the considered MEC system. On the other hand, the edge-federated mode yields better learning convergence, as will be discussed in Section \ref{section convergence}. \subsubsection{Exploitation-exploration} Online learning suffers from the exploitation-exploration dilemma: the agents tend to repeat previous actions, which may trap them at some position and deprive the MEC system of exploration.
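The federated update of Eqs. (\ref{ef updating}) and (\ref{Omega}) can be sketched in pure Python; for clarity, each edge actor net is reduced to a single scalar parameter, which is an illustrative assumption only:

```python
# Federated mixing theta^{t+1} = theta^t * Omega: each agent keeps its own
# parameters with weight omega and averages in each other agent's parameters
# with weight (1 - omega)/(N_e - 1).

def federated_matrix(n_edges, omega):
    """Build the symmetric mixing matrix Omega of Eq. (Omega)."""
    off = (1.0 - omega) / (n_edges - 1)
    return [[omega if i == j else off for j in range(n_edges)]
            for i in range(n_edges)]

def federated_update(thetas, omega):
    """Row vector theta^t times Omega."""
    n = len(thetas)
    Om = federated_matrix(n, omega)
    return [sum(thetas[i] * Om[i][j] for i in range(n)) for j in range(n)]
```

With the uniform weight $\omega=1/N_e$ every agent ends at the plain average of all parameters, while $\omega=1$ makes $\boldsymbol{\Omega}$ the identity and disables sharing.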
To avoid such phenomena during the online interactions with the MEC environment, we adopt an $\epsilon$-exploration approach \cite{sutton2018reinforcement} which enforces random actions with probability $\epsilon$. \subsection{Online Learning Algorithms for Multi-Agent Cooperation} \begin{algorithm}[h] \caption{EdgeFed H-MAAC Online Collaboration} {\bf Initialization:} Initialize system parameters and hyperparameters for learning.\\ {\bf Initialization:} Initialize net parameters $\boldsymbol{\theta}_k$, $\boldsymbol{\phi}_k$ and set target nets: $\boldsymbol{\theta}'_k\leftarrow\boldsymbol{\theta}_k$, $\boldsymbol{\phi}'_k\leftarrow\boldsymbol{\phi}_k$.\\ \label{alg:all} \begin{algorithmic}[1] \FOR{epoch $t$ = 1 to MAX\_EPOCH} \STATE Randomly generate $q\in[0,1]$; \FOR{each agent $k$ in $\left\{1,\cdots,N_e,\mathbb{C}\right\}$} \IF{$q<\epsilon$ or $\vert\boldsymbol{\mathcal{B}}[k]\vert<B$} \STATE Randomly choose actions $\boldsymbol{a}_k(t)$; \ELSE \STATE Ensemble local observation and states: $\boldsymbol{s}_k(t)$; \STATE Set actions: $\boldsymbol{a}_k(t)=\mathcal{A}_k\left(\boldsymbol{s}_k(t);\boldsymbol{\theta}_k\right)$; \ENDIF \ENDFOR \STATE Interact with environment and obtain $\overline{\Delta}(t)$, $\boldsymbol{s}'(t+1)$; \STATE Add $\left\{\boldsymbol{s},\boldsymbol{a},\overline{\Delta},\boldsymbol{s}'\right\}$ into $\boldsymbol{\mathcal{B}}$; \FOR{each agent $k$ in $\left\{1,\cdots,N_e,\mathbb{C}\right\}$} \IF{$\vert\boldsymbol{\mathcal{B}}[k]\vert\geq B$} \STATE Run replay procedure, Algorithm \ref{alg:replay}; \ENDIF \ENDFOR \IF{$t \mod T_u==1$} \STATE Update target nets, as Eq.(\ref{target update}); \ENDIF \IF{$t \mod E_f==1$} \STATE Run edge-federated updating, as Eq.(\ref{ef updating}); \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} Based on the proposed learning framework, we develop the corresponding online multi-agent collaboration as Algorithm \ref{alg:all}, where the agents learn and update the optimal strategies while the MEC system
works continuously. More specifically, each epoch comprises four procedures: (1) lines 1 to 11 constitute the acting procedure with $\epsilon$-exploration, where agents choose whether to act randomly or to follow the actor net strategies; (2) lines 13 to 17 constitute the replay training procedure of the networks, which is skipped if the number of samples in $\boldsymbol{\mathcal{B}}$ is insufficient; (3) lines 18 to 20 constitute the periodical target net updating procedure; (4) lines 21 to 23 constitute the edge-federated updating procedure. \section{Convergence Analysis} \label{section convergence} In this section, we investigate the convergence of the collaboration algorithms and show that the edge-federated learning mode performs better than the original H-MAAC in terms of convergence rate and stability. First, to study the convergence of the actor nets, the objective function is defined as the average loss of all edge agents' actor nets, i.e., \begin{equation} \label{objective l} \mathcal L(\overline{\boldsymbol \theta})=\frac{1}{N_e}\sum\limits_{k=1}^{N_e}\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k) \end{equation} where $\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k):=\mathcal{C}_k\Big(\boldsymbol{s}_k,\mathcal{A}_k(\boldsymbol{s}_k;\overline{\boldsymbol{\theta}}_k);\boldsymbol{\phi}_k\Big)$ is the loss of $\mathcal{A}_k$ for edge agent $E_k$ and $\overline{\boldsymbol\theta}_k$ represents the updated model parameters of $\mathcal{A}_k$ under the federated rule, Eq.(\ref{ef updating}). Second, since the proposed algorithm is online and the environment is dynamic, the training may guarantee neither global optimality nor penalty stability.
Hence, we investigate the learning convergence from an alternative perspective, the time average of the squared 2-norms of all edge actor nets' gradients, \begin{equation} \label{gradient 2-norm} \frac{1}{N_eT}\sum\limits_{t=T_0+1}^{T_0+T}\sum\limits_{k=1}^{N_e}\Big\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}^t_k)\Big\Vert^2 \end{equation} where $T$ is the length of the time horizon after the $T_0$-th learning epoch. Additionally, we denote the training interaction sets of $E_k$ from epoch $T_0+1$ to $T_0+T$ as \begin{equation} \label{training set} \{\boldsymbol{\mathcal{T}}_k\}^{T_0}:=\Big\{\boldsymbol{\mathcal{T}}_k(T_0+1),\cdots,\boldsymbol{\mathcal{T}}_k(T_0+T)\Big\} \end{equation} where $\boldsymbol{\mathcal{T}}_k(t)=\left\{\boldsymbol{s}_k,\boldsymbol{a}_k,\overline{\Delta},\boldsymbol{s}'_k\right\}^t$ represents the interaction records used to train $E_k$'s neural networks at the $t$-th learning epoch. \subsection{Assumptions} Before the analysis, let us state some assumptions similar to \cite{wang2018cooperative}, under which the convergence theorem can be carried out. \subsubsection{\textbf{Existence of optimal loss}} Given the environment state sets, we assume that the optimal net parameters, $\boldsymbol{\theta}^*_k$, and the corresponding actor loss, $\ell^*_{\mathcal{A}_k}$, of each edge agent exist. This assumption is intuitive because the action space is finite and the loss is related to the system penalty. \subsubsection{\textbf{Independence of center agent learning}} Since we consider the federated learning mode of the edge agents, to eliminate the influence of the center agent, we assume that the training process of the center agent is independent of the edge agents'. Consequently, the center actor-critic parameters, $\boldsymbol{\theta}_\mathbb{C}$ and $\boldsymbol{\phi}_\mathbb{C}$, can always reach their optima and can be left out of account while discussing the edge learning convergence.
\subsubsection{\textbf{Fine-fitness of critic nets}} Note that each critic net is trained to predict the common penalty value $Q$ related to the system average age, $\overline{\boldsymbol{\Delta}}(t)$. Owing to the large learning capacity of neural networks, we assume that after the $T_0$-th epoch, for a given training set $\{\boldsymbol{\mathcal{T}}_k\}^{T_0}$, the critic nets converge and fit the $Q$ values well; that is, the critic parameters $\boldsymbol{\phi}_k$ stay fixed at $\boldsymbol{\phi}^{*}_k$, and the loss $\ell_{\mathcal{A}_k}(\boldsymbol\theta_k)=\mathcal{C}_k\Big(\boldsymbol{s}_k,\mathcal{A}_k(\boldsymbol{s}_k;\boldsymbol{\theta}_k);\boldsymbol{\phi}_k^*\Big)$ can be regarded as a deterministic function of $\boldsymbol{\theta}_k$ with $\boldsymbol{\phi}_k^*$ fixed. \subsubsection{\textbf{Conditional L-smoothness for given environment states}} In general, neural networks are neither smooth nor convex. However, for a given training set $\{\boldsymbol{\mathcal{T}}_k\}^{T_0}$, the Lipschitz constant of a model consisting of MLPs or CNNs can be estimated \cite{balan2017lipschitz}. Besides, the activation functions used in the proposed networks, such as ReLU and Softmax, are Lipschitz continuous and differentiable \cite{latorre2020lipschitz}. Therefore, we can denote the Lipschitz constant of each actor net as $L_{\boldsymbol{\mathcal{T}}_k}$, which is related to its inputs $\{\boldsymbol{\mathcal{T}}_k\}^{T_0}$ from the $(T_0+1)$-th to the $(T_0+T)$-th training epoch. Then, the conditional L-smoothness of each actor net's loss function can be expressed as \begin{equation} \label{l-smooth} \Vert\nabla\ell_{\mathcal{A}_k}(\boldsymbol\theta_k)-\nabla\ell_{\mathcal{A}_k}(\boldsymbol\theta'_k)\Vert\leq L_{\boldsymbol{\mathcal{T}}_k}\Vert\boldsymbol{\theta}_k-\boldsymbol{\theta}'_k\Vert. \end{equation} \subsubsection{\textbf{Unbiased bounded SGD}} In each epoch, each agent samples a mini batch of size $B$ from the experience memory.
Assuming an SGD (stochastic gradient descent) based optimizer is applied for back propagation, the variance of the unbiased stochastic gradient $\tilde{\boldsymbol{g}}_k$ is bounded by \begin{equation} \label{sgd bound} \mathrm{E}\big[\Vert\tilde{\boldsymbol g}_k-\boldsymbol g_k\Vert^2\big]\leq C\Vert \boldsymbol g_k\Vert^2+\frac{\sigma_{\boldsymbol{\mathcal{T}}_k}^2}{B} \end{equation} where $\boldsymbol g_k=\mathrm{E}\big[\tilde{\boldsymbol g}_k\big]=\nabla\ell_{\mathcal{A}_k}(\boldsymbol\theta_k)$ is the average stochastic gradient for the given training input states $\{\boldsymbol{\mathcal{T}}_k\}^{T_0}$ and $\sigma_{\boldsymbol{\mathcal{T}}_k}$ is the related variance constant. Moreover, $C$ is a non-negative constant common to all edge agents. Briefly, assumptions 1) to 3) concern the MEC environment, while 4) and 5) are assumptions on the learning process. \subsection{Convergence Theorem for EdgeFed H-MAAC} Under the assumptions above, the learning convergence can be established, and the results are summarized as the following theorem. \begin{theorem} \label{convergence thm} For the proposed EdgeFed H-MAAC algorithm with the update period $E_f$, and under assumptions 1) to 5), suppose the learning rates of all edge agents are set to a common value $\eta$ satisfying \begin{equation} \begin{aligned} \Bigg[\frac{2CE_fL_{max}^2}{1-\zeta^2} & +\frac{E_f^2L_{max}^2}{1-\zeta}\left(\frac{2\zeta}{1+\zeta}+\frac{2\zeta}{1-\zeta}+\frac{E_f-1}{E_f}\right)\Bigg]\eta^2 \\ & +L_{max}(C+1)\eta-1\leq 0 \end{aligned} \label{lr} \end{equation} where $\zeta=\frac{N_e\omega-1}{N_e-1}$ is the second largest eigenvalue of the updating matrix $\boldsymbol{\Omega}$ and $L_{max}=\max\limits_k\ L_{\boldsymbol{\mathcal{T}}_k}$.
Then, the time average of the squared gradient norm after the $T_0$-th epoch is bounded by \begin{equation} \begin{aligned} \mathrm{E} \Bigg[\frac{1}{N_eT} & \sum\limits_{t=T_0+1}^{T_0+T}\sum\limits_{k=1}^{N_e}\big\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol{\theta}}^t_k)\big\Vert^2\Bigg]\leq \frac{2\sum\limits_{k=1}^{N_e}\left[\ell_{\mathcal{A}_k}(\overline{\boldsymbol{\theta}}^{T_0}_k)-\ell_{\mathcal{A}_k}^*\right]}{\eta N_eT} \\ & +\frac{\eta}{N_eB}\sum\limits_{k=1}^{N_e}L_{\boldsymbol{\mathcal{T}}_k}\sigma_{\boldsymbol{\mathcal{T}}_k}^2+\frac{\eta^2\sigma_{max}^2L_{max}^2}{2B}\Big(\frac{1+\zeta^2}{1-\zeta^2}E_f-1\Big). \end{aligned} \label{thm} \end{equation} \end{theorem} \begin{proof} \label{proof} We present the proof of the theorem in a way similar to Appendix D of \cite{haddadpour2019convergence}. Consider the difference of the average loss, \begin{equation} \mathcal{L}(\overline{\boldsymbol{\theta}}^{t+1})-\mathcal{L}(\overline{\boldsymbol{\theta}}^{t})=\frac{1}{N_e}\sum\limits_{k=1}^{N_e}\Big[\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^{t+1})-\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t)\Big]. \label{def minus} \end{equation} As an application of the L-smoothness gradient assumption, we obtain \begin{equation} \begin{aligned} \ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^{t+1})-\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t) \leq \frac{\eta^2L_{\boldsymbol{\mathcal{T}}_k}}{2}\big\Vert\tilde{\boldsymbol{g}}^t_k\big\Vert^2-\eta\left<\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t),\tilde{\boldsymbol g}_k^t\right>.
\end{aligned} \label{part1} \end{equation} By assumption 5), the expected value of the first term can be bounded via \begin{equation} \begin{aligned} \mathrm{E}\Big[\Vert\tilde{\boldsymbol{g}}^t_k\Vert^2\Big] & =\mathrm{E}\Big[\Vert\tilde{\boldsymbol{g}}^t_k-\boldsymbol{g}^t_k\Vert^2\Big]+\Vert\boldsymbol{g}^t_k\Vert^2 \\ & \leq(C+1)\Vert\boldsymbol{g}^t_k\Vert^2+\frac{\sigma_{\boldsymbol{\mathcal{T}}_k}^2}{B}. \end{aligned} \label{g bound} \end{equation} For the mean of the second term, by Eq.(\ref{l-smooth}) in assumption 4), we have \begin{equation} \begin{aligned} \mathrm{E} & \Big[-\eta\left<\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t),\tilde{\boldsymbol g}_k^t\right>\Big] =-\eta\left<\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t),\mathrm{E}\left[\tilde{\boldsymbol g}_k^t\right]\right> \\ & =-\eta\left<\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t),\boldsymbol g_k^t\right> \\ & =\frac{-\eta}{2}\left[\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t)\Vert^2+\Vert\boldsymbol g_k^t\Vert^2-\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t)-\boldsymbol g_k^t\Vert^2\right] \\ & \leq\frac{\eta}{2}\left[-\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t)\Vert^2-\Vert\boldsymbol g_k^t\Vert^2+L_{\boldsymbol{\mathcal{T}}_k}^2\Vert\overline{\boldsymbol{\theta}}^t_k-\boldsymbol{\theta}^t_k\Vert^2\right]. \end{aligned} \label{product} \end{equation} According to Eq.(136) of Appendix D.2.4 in \cite{wang2018cooperative}, we obtain the average bound for the last term in Eq.(\ref{product}) as \begin{equation} \begin{aligned} \frac{1}{N_eT}\sum\limits_{t,k} & L^2_{\boldsymbol{\mathcal{T}}_k}\Vert\overline{\boldsymbol{\theta}}^t_k-\boldsymbol{\theta}^t_k\Vert^2 \leq\frac{\eta^2\sigma_{max}^2L^2_{max}}{B}\left(\frac{1+\zeta^2}{1-\zeta^2}E_f-1\right) \\ & +\Bigg[\frac{\eta^2E_f^2L^2_{max}}{1-\zeta}\left(\frac{2\zeta}{1+\zeta}+\frac{2\zeta}{1-\zeta}+\frac{E_f-1}{E_f}\right) \\ &
+\frac{2\eta^2CE_fL^2_{max}}{1-\zeta^2}\Bigg]\frac{1}{N_eT}\sum\limits_t\sum\limits_k \Vert\boldsymbol g_k^t\Vert^2. \end{aligned} \label{part2} \end{equation} Then, by taking the average on both sides of Eq.(\ref{def minus}) and combining Eq.(\ref{part1}) to Eq.(\ref{part2}), we obtain \begin{equation} \begin{aligned} \frac{1}{T} & \sum\limits_{t=T_0+1}^{T_0+T}\mathrm{E}\left[\mathcal{L}(\overline{\boldsymbol{\theta}}^{t+1})-\mathcal{L}(\overline{\boldsymbol{\theta}}^{t})\right]\leq\frac{1}{N_eT}\sum\limits_t\sum\limits_k\frac{\eta^2L_{\boldsymbol{\mathcal{T}}_k}}{2}\big\Vert\tilde{\boldsymbol{g}}^t_k\big\Vert^2 \\ & \quad\quad\quad +\frac{1}{N_eT}\sum\limits_t\sum\limits_k-\eta\left<\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t),\tilde{\boldsymbol g}_k^t\right> \\ & \leq\frac{-\eta}{2N_eT}\sum\limits_t\sum\limits_k\Vert\nabla\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k^t)\Vert^2+\Gamma\cdot\frac{\eta}{2N_eT}\sum\limits_t\sum\limits_k\Vert\boldsymbol{g}^t_k\Vert^2 \\ & +\frac{\eta^3\sigma_{max}^2L_{max}^2}{2B}\left(\frac{1+\zeta^2}{1-\zeta^2}E_f-1\right)+\frac{\eta^2}{2N_eB}\sum\limits_kL_{\boldsymbol{\mathcal{T}}_k}\sigma_{\boldsymbol{\mathcal{T}}_k}^2 \end{aligned} \end{equation} where $\Gamma$ is the left-hand side of Eq.(\ref{lr}), \begin{equation} \begin{aligned} \Gamma=\Bigg[\frac{2CE_fL_{max}^2}{1-\zeta^2} & +\frac{E_f^2L_{max}^2}{1-\zeta}\left(\frac{4\zeta}{1-\zeta^2}+\frac{E_f-1}{E_f}\right)\Bigg]\eta^2 \\ & +L_{max}(C+1)\eta-1. \end{aligned} \end{equation} In particular, if the learning rate $\eta$ is set properly such that $\Gamma\leq 0$, the terms related to $\Vert\boldsymbol{g}^t_k\Vert^2$ can be eliminated and the conclusion, Eq.(\ref{thm}), is obtained. \end{proof} Further, we summarize the following remarks to interpret some insights observed from \textbf{Theorem \ref{convergence thm}}.
\begin{remark} \label{gradient convergence} \textbf{Convergence of gradients.} \rm The theorem investigates the learning convergence from the perspective of gradients. When the average squared 2-norm of the gradients of $\ell_{\mathcal{A}_k}(\overline{\boldsymbol\theta}_k)$ is upper bounded, one can deem that the loss functions are stable and that the learning process converges. \end{remark} \begin{remark} \label{generalization} \textbf{Generalization to other metrics.} \rm Note that in the above theorem, we consider the actor loss without regard to the specific metric, and the goal is to bound the deviation of the gradients. Therefore, if one hopes to maintain any metric (or combination of metrics) that converges to a certain value, the theorem still holds. \end{remark} \begin{remark} \label{interpretation} \textbf{Interpretation of the bounds.} \rm In the theorem, the right-hand side upper bound $\Lambda\left(\omega,E_f,\boldsymbol{\mathcal{T}}_k^{T_0}\right)$ can be interpreted through the following three separate terms: \begin{equation} \label{interpret thm} \begin{aligned} \Lambda\left(\omega,E_f,\boldsymbol{\mathcal{T}}_k^{T_0}\right) & =\underbrace{\frac{2\sum\limits_{k=1}^{N_e}\left[\ell_{\mathcal{A}_k}(\overline{\boldsymbol{\theta}}^{T_0}_k)-\ell_{\mathcal{A}_k}^*\right]}{\eta N_eT}}_{\rm Initial\ Deviation}+\underbrace{\frac{\eta\sum\limits_{k=1}^{N_e}L_{\boldsymbol{\mathcal{T}}_k}\sigma_{\boldsymbol{\mathcal{T}}_k}^2}{N_eB}}_{\rm Sequel\ Deviation} \\ & +\underbrace{\frac{\eta^2\sigma_{max}^2L_{max}^2}{2B}\Big(\frac{1+\zeta^2}{1-\zeta^2}E_f-1\Big)}_{\rm Federated\ Updating\ Deviation}. \end{aligned} \end{equation} The first term is the initial deviation caused by the training before $T_0$. The second term, related to the interaction after $T_0$, is called the sequel deviation. The third term is the gradients' deviation introduced when the edge-federated updating mode is carried out.
\end{remark} \begin{remark} \label{minimum bound} \textbf{Minimum bounds.} \rm For the federated update matrix in Eq.(\ref{Omega}), $\boldsymbol{\Omega}$ is symmetric and positive-definite, with a simple eigenvalue $\Lambda_1=1$ and a repeated eigenvalue of multiplicity $N_e-1$, $\Lambda_2=\cdots=\Lambda_{N_e}=\frac{N_e\omega-1}{N_e-1}$. Then, $\zeta=\frac{N_e\omega-1}{N_e-1}\in\left[\frac{-1}{N_e-1},1\right]$. Besides, one can rewrite the last term of the right-hand side in Eq.(\ref{thm}) as: \begin{equation} \label{mini bound} \begin{aligned} & \frac{\eta^2\sigma_{max}^2L_{max}^2}{2B}\Big(\frac{1+\zeta^2}{1-\zeta^2}E_f-1\Big) \\ & =\frac{\eta^2\sigma_{max}^2L_{max}^2}{2B}\Big(\frac{2}{1-\zeta^2}E_f-E_f-1\Big). \end{aligned} \end{equation} Thus, with the other hyperparameters fixed, the upper bound on the gradients' 2-norm in Eq.(\ref{thm}) attains its minimum at $\zeta^*=0$, i.e., $\omega^*=\frac{1}{N_e}$. This implies that when the edge-federated updating is carried out with uniform weights, namely, all elements in $\Omega$ are equal to $\frac{1}{N_e}$, the gradients' mean-square time average is bounded most tightly, which leads to the best training convergence. \end{remark} \begin{remark} \label{omega=1} \textbf{Impact of $\omega$.} \rm Intuitively, if one sets $\omega<\frac{1}{N_e}$, the diagonal elements of $\Omega$ in Eq.(\ref{Omega}) are smaller than the others. We regard these cases as chaotic, because each agent retains less of the policy learned from its own observations. In particular, when $\omega=1$, $\Omega$ is an identity matrix, which corresponds to the original mixed H-MAAC, where each agent learns its policy individually and no parameter sharing occurs. In this case, $\zeta=1$ and the right-hand side of Eq.(\ref{thm}) becomes infinite, so the bound is vacuous and the convergence of the original H-MAAC is not guaranteed by the above deductions. Thus, the effective interval of $\omega$ lies in $[\frac{1}{N_e},1)$.
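The eigenvalue claim in \textbf{Remark \ref{minimum bound}} and the minimizer $\omega^*=\frac{1}{N_e}$ can be checked numerically. The concrete structure of $\boldsymbol{\Omega}$ below ($\omega$ on the diagonal and $\frac{1-\omega}{N_e-1}$ elsewhere, so that each row sums to one) is our assumption based on the description above:

```python
import numpy as np

def federated_matrix(N_e, omega):
    """Assumed structure of the federated update matrix Omega:
    omega on the diagonal, (1 - omega)/(N_e - 1) off the diagonal."""
    off = (1 - omega) / (N_e - 1)
    return off * np.ones((N_e, N_e)) + (omega - off) * np.eye(N_e)

def predicted_zeta(N_e, omega):
    """Repeated eigenvalue claimed in the remark."""
    return (N_e * omega - 1) / (N_e - 1)

def federated_bound_term(omega, N_e, E_f):
    """Omega-dependent factor 2*E_f/(1 - zeta^2) - E_f - 1 of the bound."""
    z = predicted_zeta(N_e, omega)
    return 2 * E_f / (1 - z**2) - E_f - 1
```

For $N_e=8$, $\omega=0.5$ this yields the eigenvalues $1$ and $\zeta=3/7$, and scanning $\omega$ over $[\frac{1}{N_e},1)$ confirms that the bound term is smallest at $\omega=\frac{1}{N_e}$.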
Overall, we illustrate the impact of $\omega$ in Fig. \ref{explain omega}. When $\omega\rightarrow\frac{1}{N_e}$, the federated parameter sharing becomes uniform and the edge agents tend to be homogeneous. When $\omega\rightarrow 1$, the diagonal elements of $\Omega$ dominate and each edge agent keeps more individuality by preserving most of its own policy. \begin{figure}[htbp] \centering \begin{minipage}{0.3\textwidth} \includegraphics[width=1\textwidth]{explain_omega.pdf} \end{minipage}% \caption{A sketch for the impact of $\omega$.} \label{explain omega} \end{figure} \end{remark} \begin{remark} \label{efficiency convergence} \textbf{Convergence vs. efficiency.} \rm Note that in assumptions 3) to 5), we require the preconditions that the interaction sets are given and fixed. Nevertheless, in the online MEC collaboration, the environment is stochastic, and the dynamics of data arrival and $\epsilon$-exploration are time-variant. In addition, the random sampling in experience replay may also lead to varying training sets, $\{\boldsymbol{\mathcal{T}}_k\}^{t}$. For the reasons above, the optimal critic parameters $\boldsymbol{\phi}_k^*$, the Lipschitz constant $L_{\boldsymbol{\mathcal{T}}_k}$ in Eq.(\ref{l-smooth}), the gradient variance $\sigma_{\boldsymbol{\mathcal{T}}_k}$ as well as the coefficient $C$ in Eq.(\ref{sgd bound}) are likely to be unstable and of large variance at different learning epochs. On the other hand, the online collaboration system might benefit from the fluctuation of the gradients, because the actor parameters $\boldsymbol{\theta}$ can be modified to adapt to changes in the environment. Thus, the best training convergence in the ideal scenarios may not always guarantee the best system performance. We regard this as a learning-convergence vs. system-efficiency trade-off in practice, and the details will be discussed in Section \ref{section sim}.
\end{remark} \section{Performance Evaluation} \label{section sim} In this section, we test the proposed MEC collaboration framework and evaluate its performance through simulation. In addition, comparisons with popular RL approaches and further insights are investigated. \subsection{Simulation Settings} Firstly, we implement the age-sensitive MEC system presented in Section \ref{section system} as a universal \textit{gym} \cite{brockman2016openai} module in Python. For simplicity of the following discussion and simulation, we suppose that packets independently arrive at each data source $S_n$ with probability $p^n_g$ at the beginning of each time slot and that the data size follows a Poisson distribution with rate $\lambda_n$ \cite{fan2018application}. Thus, the data generation process can be described as a switched Poisson process: \begin{equation} \label{data gen} P\left\{d_{n,i}(t)=m\right\}=\mathbbm{1}_{G_n}\cdot\frac{e^{-\lambda_n}\lambda_n^m}{m!} \end{equation} where the Boolean random variable $G_n\sim{\rm Bernoulli}(p^n_g)$ indicates the arrival of a packet. \begin{table*}[htbp] \caption{\upshape Main parameter settings for simulations.} \label{sim_settings} \centering \begin{tabular}[c]{ccc} \hline \rowcolor{gray!20}Parameter & Description & Value \\ \hline $(\lambda_n,p_g^n)$ & Data rate and arrival probability of data generation at $S_n$. & (1Kb/slot, 0.3) \\ $(r^k_{move}$, $r^k_{obs}$, $r^k_{collect})$ & Edge devices' radius of movement, observation, collection. & (6, 60, 40) \\ $B_{col}^k$, $B_{exe}^k$ & Maximum buffer size for collected data and executed data. & 5 \\ $f_c^k$ & Computation rate of $E_k$. & 20Kb/slot \\ $W$ & Total bandwidth for offloading communication. & 100MHz \\ $f$ & The carrier frequency in Eq.(\ref{PL}). & 2.5GHz \\ $(a,b,\eta_{LoS},\eta_{NLoS})$ & Coefficients of A2G path loss. & (9.61, 0.16, 1, 20) \\ $N_0$ & The noise power spectral density of A2G channel.
& -130dB \\ $P^k_{tr,max}$ & Maximum power for offloading communication. & 0.2W \\ $\gamma$ & Penalty decay. & 0.85 \\ $\tau$ & Target updating weight. & 0.8 \\ $\epsilon$ & Probability of random exploration. & 0.2 \\ $T_u,E_f$ & Period for target updating and federated updating. & 8 \\ $(\eta_{\mathcal{A}}$, $\eta_{\mathcal{C}})$ & Learning rates for actor/critic nets. & $(1\times 10^{-3},2\times 10^{-3})$ \\ $B$ & Batch size of experience replay. & 128 \\ \hline \end{tabular} \end{table*} Secondly, let us claim some basic environment settings of simulations. The main parameter settings are listed in Table \ref{sim_settings}. As for data sources, we set the arrival probability and the generating rate as $0.3$ and $1$Kb/slot. Then, for the attributes of edge devices, we set $r^k_{move}$, $r^k_{obs}$, $r^k_{collect}$ to be $6$, $60$, $40$ in measure of the grid map and the height is fixed. The computing rate of edge process is $20$Kb/slot and the maximum buffer lengths for caching collected data and executed data are both set to be $5$ packet pieces. For edge-cloud communication channel, we set the total offloading bandwidth as $100$MHz, the carrier frequency $f$ in Eq.(\ref{PL}) as $2.5$GHz, the noise power spectral density $N_0$ as $-130$dB, and the max transmission power $P^k_{tr,max}$ as $0.2$W. The coefficients $(a,b,\eta_{LoS},\eta_{NLoS})$ in Eq.(\ref{PL}) and Eq.(\ref{p_los}) are selected to be $(9.61, 0.16, 1, 20)$, which refers to the urban scenarios mentioned in \cite{al2014optimal}. Particularly, the transmission rate of edge-source collection is fixed to be $8$Kb/slot due to the limit of collection cover. With regard to learning configurations, the decay coefficient $\gamma$ of the system penalty is set to be 0.85. As for hyper parameters, the learning rates of actor and critic are 1e-3, 2e-3, respectively and the batch size of each epoch is 128. 
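The data-generation model in Eq.(\ref{data gen}), with the settings above ($p_g^n=0.3$, $\lambda_n=1$ Kb/slot), can be sketched as follows; the sampler uses Knuth's standard Poisson algorithm and the function names are ours:

```python
import math
import random

def sample_packet_size(p_g, lam, rng):
    """One slot of the switched Poisson arrivals: with probability p_g a
    packet is generated whose size (in Kb) is Poisson(lam), otherwise 0."""
    if rng.random() >= p_g:
        return 0
    # Knuth's multiplicative Poisson sampler (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
samples = [sample_packet_size(0.3, 1.0, rng) for _ in range(200_000)]
mean_size = sum(samples) / len(samples)  # should be close to p_g * lam = 0.3
```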
As proposed in Section \ref{section alg}, the target updating period $T_u$ and the reserving weight $\tau$ are 8 and $0.8$. In addition, we exploit the $\epsilon$-exploration with $\epsilon=0.2$ and edge-federated mode where parameters are shared every 8 learning epochs. As for the metrics, we also adopt PAoI (peak AoI) and worst AoI in comparisons. PAoI of data source $S_n$, $\Delta_{p,n}$, is defined as the average peak value of $S_n$'s AoIs, which represents the maximum age of information before a new update is received \cite{costa2014age}. We denote \begin{equation} \label{paoi} \overline{\Delta}_p=\frac{1}{N_s}\sum\limits_n\Delta_{p,n} \end{equation} as the average PAoI for all data sources. Besides, the worst AoI, defined as the maximum AoI of data sources at each time slot, is considered to evaluate the AoI performance in worst cases. \subsection{Evaluation Results} \begin{figure*} \centering \subfigure[The evolution of MEC system's average age, $\overline{\Delta}(t)$.]{ \label{age compare} \begin{minipage}{0.38\textwidth} \includegraphics[width=1\textwidth]{compare_age} \end{minipage}}% \subfigure[The worst AoI of all data sources, i.e., $\max \left\{\Delta_k(t)\right\}$.]{ \label{worst compare} \begin{minipage}{0.38\textwidth} \includegraphics[width=1\textwidth]{compare_worst_age} \end{minipage}} \subfigure[The data volume received at cloud center.]{ \label{data compare} \begin{minipage}{0.38\textwidth} \includegraphics[width=1\textwidth]{compare_data} \end{minipage}} \subfigure[The count of the data packets received by cloud center.]{ \label{packet compare} \begin{minipage}{0.38\textwidth} \includegraphics[width=1\textwidth]{compare_packet} \end{minipage}} \caption{The comparison of several RL based MEC collaboration methods. 
(4 edge servers, 30 data sources on a $200\times 200$ map.)} \label{compare} \end{figure*} \begin{table}[htbp] \caption{\upshape A numerical comparison on AoI metrics.} \label{compare_tab} \centering \begin{tabular}{c|c|c|c|c|c} \hline \multirow{3}*{Approach} & \multicolumn{2}{c|}{Average age $\overline{\Delta}(t)$} & \multicolumn{3}{c}{Peak AoI} \\ \cline{2-6} & \multirow{2}*{mean} & \multirow{2}*{std} & \multirowcell{2}{peak \\ count} & \multirowcell{2}{$\overline{\Delta}_p$}&\multirowcell{2}{$var(\Delta_p)$} \\ & & & & & \\ \hline \multirowcell{2}{Mixed DDPG} & \multirowcell{2}{473.3} & \multirowcell{2}{174.4} & \multirowcell{2}{10121} & \multirowcell{2}{26.6} & \multirowcell{2}{156.0} \\&&&&&\\ \multirowcell{2}{Mixed MADDPG} & \multirowcell{2}{39.3} & \multirowcell{2}{16.7} & \multirowcell{2}{12842} & \multirowcell{2}{24.9} & \multirowcell{2}{93.0} \\&&&&&\\ \multirowcell{2}{EdgeFed H-MAAC \\(proposed)} & \multirowcell{2}{\textbf{21.2}} & \multirowcell{2}{\textbf{3.6}} & \multirowcell{2}{\textbf{13782}} & \multirowcell{2}{\textbf{21.7}} & \multirowcell{2}{\textbf{23.1}} \\&&&&&\\ \hline \end{tabular} \end{table} We implement the EdgeFed H-MAAC collaboration algorithm in TensorFlow, building mixed CNN-MLP networks for the edge actor-critic nets and MLP-based networks for the center agent. To compare the performance of the proposed framework with other RL approaches, we select two actor-critic based algorithms, DDPG (centralized) and the popular MADDPG (multi-agent), as baselines, where dual neural networks are also exploited to learn the mixed policies. For fairness, we set the same random seeds of the MEC environment for all methods and, instead of spending much effort on network tuning, we also fix the random seeds of the training processes. Consequently, the results are general and easily reproduced. Fig.
\ref{compare} and Table \ref{compare_tab} show the comparison results of the proposed algorithm (with $\omega=0.5$) and the baselines under the environment with 4 edge devices and 30 data sources on a $200\times 200$ map, where the positions of edge devices and data sources are initialized randomly. The average age $\overline{\Delta}(t)$ during the online interaction is shown in Fig. \ref{age compare}, where one can find that the mixed DDPG results in the highest $\overline{\Delta}(t)$ and its curve is not stable until epoch 5K. In contrast, under mixed MADDPG and EdgeFed H-MAAC, the average age is maintained at a lower value after sufficient iterations. Specifically, the statistics of $\overline{\Delta}(t)$ are listed in Table \ref{compare_tab}. EdgeFed H-MAAC attains not only the lowest average age, but also the lowest variance, which means the edge-federated approach outperforms DDPG as well as MADDPG on both system penalty and learning stability. The right-hand part of Table \ref{compare_tab} presents the comparison on PAoI. Evidently, the proposed collaboration algorithm achieves the most peak updates and the lowest average PAoI, $\overline{\Delta}_p$. This implies that the efficiency of data processing in MEC can be improved by EdgeFed H-MAAC. The lowest variance of PAoI also demonstrates that all data sources are updated frequently and fairly. Fig. \ref{worst compare} displays the worst AoI of the three approaches. EdgeFed H-MAAC again performs the best. One can find that under the centralized DDPG, some sources are ignored for a long time. We attribute this to the fact that centralized collaboration algorithms require larger neural network models with complex structures to extract the relations between the excessive global input states and the local policies of each individual agent, which also makes training difficult. Besides, Fig.
\ref{data compare} and \ref{packet compare} present the volume and the count of the aggregated packets received at the cloud center, namely, the final hop of the MEC system. The curves illustrate that the EdgeFed H-MAAC collaboration algorithm also improves the data utility of the MEC system by finishing more data processing within the same time. Additionally, we present the training loss of mixed MADDPG and EdgeFed H-MAAC in Fig. \ref{loss fig}, where the actor loss and critic loss of the first edge agent and the center agent are displayed. Similarly, EdgeFed H-MAAC leads to lower and more stable loss. Moreover, the critic losses shown in Fig. \ref{ecloss} and Fig. \ref{ccloss} demonstrate that it is acceptable to assume that the center agent training can be left out of consideration and that the critic nets fit the $Q$ values well, which corresponds to assumptions 2) and 3) of the convergence discussion in Section \ref{section convergence}. Beyond our expectation, although the federated parameter sharing only operates on the edge agents, this scheme also promotes the center agent training significantly.
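As a side note, the PAoI and worst-AoI metrics reported in Table \ref{compare_tab} and Fig. \ref{worst compare} can be computed from per-source AoI traces; the following is a minimal sketch with hypothetical helper names, assuming one AoI sample per slot:

```python
def peak_aoi(trace):
    """Mean peak AoI of one source: a peak is the AoI value in the slot
    just before the AoI drops, i.e. just before a fresh update arrives."""
    peaks = [a for a, b in zip(trace, trace[1:]) if b < a]
    return sum(peaks) / len(peaks) if peaks else float("nan")

def worst_aoi(traces):
    """Worst AoI per slot: the maximum over all sources' AoI traces."""
    return [max(slot) for slot in zip(*traces)]
```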
\begin{figure}[htbp] \centering \subfigure[Edge Agent Actor Loss.]{ \label{ealoss} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{aal} \end{minipage}}% \subfigure[Edge Agent Critic Loss.]{ \label{ecloss} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{acl} \end{minipage}} \subfigure[Center Agent Actor Loss.]{ \label{caloss} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{cal} \end{minipage}}% \subfigure[Center Agent Critic Loss.]{ \label{ccloss} \begin{minipage}{0.48\linewidth} \includegraphics[width=1\textwidth]{ccl} \end{minipage}} \caption{Training loss of all agents' actor-critic nets.} \label{loss fig} \end{figure} \begin{figure}[htbp] \centering \begin{minipage}{0.4\textwidth} \includegraphics[width=1\textwidth]{param1} \end{minipage}% \caption{EdgeFed H-MAAC's $\overline{\Delta}(t)$ performance under different environment settings on a $300\times 300$ map.} \label{param figs} \end{figure} \begin{figure}[htbp] \centering \subfigure[$\overline{\Delta}(t)$ performance under different $\omega$. The vertical dot lines represent the convergence time of each $\omega$.]{ \begin{minipage}{0.47\textwidth} \includegraphics[width=1\textwidth]{omega} \end{minipage}% \label{omega curve}} \subfigure[Box plots of $\overline{\Delta}(t)$ under different $\omega$.]{ \begin{minipage}{0.47\textwidth} \includegraphics[width=1\textwidth]{omega_box} \end{minipage}% \label{omega box}} \caption{EdgeFed H-MAAC performance with different $\omega$. (8 edge devices, 60 data sources on a $300\times 300$ map.)} \label{omega figs} \end{figure} Then, we investigate the performance under different environment settings. We change the edge number $N_e$ as well as the source number $N_s$ and evaluate the EdgeFed H-MAAC in these cases. As presented in Fig. \ref{param figs}, low average ages are well maintained under different environment settings through EdgeFed H-MAAC collaboration. 
Consistent with intuition, more edge servers or fewer data sources both lead to better timeliness of the MEC system. Furthermore, to investigate the impact of the federated factor $\omega$, we set different $\omega$ in the scene with 8 edge devices and 60 data sources on a $300\times 300$ map. Likewise, the settings and the randomness of the MEC environment are identical for all the simulations. Note that since the federated factor $\omega$ denotes the weight with which each agent retains its own model during parameter sharing, agents will lose their own parameters if $\omega$ becomes too small. Hence, we only explore the cases where $\omega\geq\frac{1}{N_e}$. The results are presented as $\overline{\Delta}(t)$ evolution curves in Fig. \ref{omega curve} and box plots of $\overline{\Delta}(t)$ in Fig. \ref{omega box}. Intuitively, one can find that when $\omega=1$, i.e., the original H-MAAC, the system achieves the worst performance and the learning process does not seem to converge even after 5K iterations, which is consistent with our theoretical analysis in \textbf{Remark \ref{omega=1}}. Additionally, from \textbf{Theorem \ref{convergence thm}} and \textbf{Remark \ref{minimum bound}}, the deviation of the gradients attains its minimum at $\omega^*=0.125$. This is verified by the experiment, where the vertical dotted lines show that the average age converges most rapidly when $\omega=\omega^*$ and that larger $\omega$ leads to a longer convergence time. However, the simulation outcomes show that $\omega=0.25$ and $\omega=0.5$ also perform well, as they reach low average ages and even lower variances with similarly rapid convergence. As discussed in \textbf{Remark \ref{efficiency convergence}}, due to the dynamics of the environment and the randomness of the online learning, the constants in Eq.(\ref{l-smooth}) and Eq.(\ref{sgd bound}) are time-variant. Besides, a fine fit of the edge critic nets is not exactly guaranteed, as shown in Fig.
\ref{ecloss}, where the edge critic loss of EdgeFed H-MAAC does not strictly decrease to 0. For the above reasons, there is a gap between the convergence theorem and the simulation results. We explain this phenomenon as a trade-off between the ideal learning convergence and the system robustness to environment variation. In practice, the fluctuation of the gradients may contribute to learning the features of the stochastic environment. Meanwhile, smaller $\omega$ implies that the model parameters of each edge agent itself are preserved in a lower proportion. In particular, if $\omega=\omega^*$, all edge agents share the same parameters after the federated updating operation. Therefore, under small $\omega$, all edge agents tend to make the same responses to the input states and lose their individuality. A larger federated factor $\omega$, though it does not bound the gradients tightly, preserves the individuality of each edge agent to counter the stochastic environment in MEC collaboration systems. Thus, as elaborated in Fig. \ref{explain omega}, in such a multi-agent cooperative learning framework, moderate values of $\omega$ may balance the learning convergence and the system efficiency. In addition, the simulation results in Fig. \ref{omega figs} can be utilized to design the proper edge-federated learning mode for the proposed multi-agent collaboration MEC algorithm. Approximately, based on the above results and discussions, the recommended interval of the federated factor $\omega$ lies in $[\frac{1}{N_e},0.5]$. \section{Conclusion} \label{section conclusion} We investigated age-sensitive MEC systems and proposed a policy-based multi-agent reinforcement learning framework, H-MAAC, for intelligent agent control of trajectory planning, data scheduling, and bandwidth allocation. By adopting a federated learning mode, we developed the corresponding edge-federated online joint collaboration algorithm, whose convergence was theoretically proved.
We implemented the MEC simulation system and evaluated the proposed algorithms. The outcomes showed that our method achieves a lower average age and better learning stability compared with classical centralized actor-critic RL approaches. Moreover, some further advantages and insights for edge-federated design were also discussed based on the simulation results. For future work, based on the proposed H-MAAC framework, more agent operations can be incorporated, such as power allocation, multitask scheduling and multitask offloading. In addition, the penalty/reward of the system can be flexibly defined, which may bring more applications for cooperative MEC systems.
\section{Introduction} \label{intro} The Cauchy problem for the incompressible Navier-Stokes system in $\{(x,t)| x\in \mathbb{R}^3,t\geq 0 \} $ with given initial data and external force has the form \begin{equation}\label{NS} \begin{cases} u_{t}-\Delta u+(u \cdot \nabla) u+\nabla p =f, \\ \nabla\cdot u =0, \\ u(x, 0) =u_{0}(x), \end{cases} \end{equation} where $u=(u_1,u_2,u_3)$ and $p$ denote the velocity field and pressure respectively. Note that when we consider the construction of solutions to the Cauchy problem (\ref{NS}), there are essentially two methods: the energy method and the perturbation theory. The energy method is based on $a$-$priori$ energy estimate \begin{eqnarray*} \int_{\mathbb{R}^3}|u(x,t)|^2dx+\int_0^t\int_{\mathbb{R}^3}2|\nabla u(x,s)|^2dxds &\leq& \int_{\mathbb{R}^3}|u_0(x)|^2dx+\int_0^t\int_{\mathbb{R}^3}2(f,u)(x,s)dxds. \end{eqnarray*} The global existence of weak solutions was established by Leray \cite{Ler} for divergence free initial data $u_0\in L^2(\mathbb{R}^3)$ and $f=0$. The energy method gives the existence, but the uniqueness and regularity for solutions still remain open, see $e.g.$ \cite{Ba,Caf,Can1,Jia,Lem1,Lem,Te} and references therein. As for the perturbation theory, we treat the nonlinear term $(u \cdot \nabla) u$ as a perturbation and use the scaling property to choose function spaces. As we know, system (\ref{NS}) has the natural scaling \begin{equation*} \begin{aligned} u_{\lambda}(x,t)=\lambda u(\lambda x,\lambda^2 t),\quad p_{\lambda}(x,t)=\lambda ^2p(\lambda x,\lambda^2 t). \end{aligned} \end{equation*} Therefore, the space $L^3(\mathbb{R}^3)$ is a well-known simple example of scaling-invariant space. By the Duhamel principle, we can write these solutions into an integral formulation \begin{equation} u(x,t)=e^{t\Delta}u_0+\int_0^t e^{(t-s)\Delta}\mathbb{P}(f-u\cdot\nabla u)ds, \end{equation} where $\mathbb{P}$ denotes the Leray projector which projects on divergence-free vector fields. 
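The scaling invariance noted above can be verified term by term; with $u_{\lambda}(x,t)=\lambda u(\lambda x,\lambda^2 t)$ and $p_{\lambda}(x,t)=\lambda^2 p(\lambda x,\lambda^2 t)$, each term of the momentum equation acquires the same factor $\lambda^{3}$:

```latex
\begin{equation*}
\begin{aligned}
\partial_t u_{\lambda}(x,t) &= \lambda^{3}(\partial_t u)(\lambda x,\lambda^{2}t), &
\Delta u_{\lambda}(x,t) &= \lambda^{3}(\Delta u)(\lambda x,\lambda^{2}t),\\
\big((u_{\lambda}\cdot\nabla)u_{\lambda}\big)(x,t) &= \lambda^{3}\big((u\cdot\nabla)u\big)(\lambda x,\lambda^{2}t), &
\nabla p_{\lambda}(x,t) &= \lambda^{3}(\nabla p)(\lambda x,\lambda^{2}t),
\end{aligned}
\end{equation*}
```

so $(u_{\lambda},p_{\lambda})$ solves (\ref{NS}) with the rescaled force $f_{\lambda}(x,t)=\lambda^{3}f(\lambda x,\lambda^{2}t)$, and the change of variables $y=\lambda x$ gives $\|u_{\lambda}(\cdot,0)\|_{L^3}=\|u_0\|_{L^3}$, which is why $L^3(\mathbb{R}^3)$ is scaling-invariant.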
Solutions constructed in this way are called mild solutions. Usually, by means of the contraction mapping principle, we can obtain global well-posedness of mild solutions to system (\ref{NS}) with small enough initial data in appropriate scaling-invariant spaces. We refer readers to \cite{Can0,Can1,Kat,Ko,Lem1,Lem,Rob} for additional background and references. There are many results on the existence of weak solutions and $L^2$-decay of weak solutions of the Navier-Stokes system, see e.g. \cite{Kat,Ma,Sch,Wie} and references therein. When $f=0,$ the $L^2$-decay of weak solutions to system (\ref{NS}) can be viewed as the global asymptotic stability in $L^2$ of the trivial solution $(u,p)=(0,0).$ Later, Borchers and Miyakawa \cite{Bo} addressed similar questions on the global asymptotic stability to a family of stationary solutions. The stationary Navier-Stokes system in $ \mathbb{R}^3 $ has the form \begin{equation}\label{SNS} \begin{cases} -\Delta v+(v \cdot \nabla) v+\nabla p =f, \\ \nabla\cdot v =0. \\ \end{cases} \end{equation} When $f=(b(c)\delta_0,0,0)$ with $b(c)=\frac{8\pi c}{3(c^2-1)}\left(2+6c^2-3c(c^2-1)\ln\left(\frac{c+1}{c-1}\right)\right)$ and $\delta_0$ the Dirac measure, $(v_c, p_c)$ given by the following formulas \begin{equation}\label{v_c p_c} \begin{aligned}{v_{c}^{1}(x)=2 \frac{c|x|^{2}-2 x_{1}|x|+c x_{1}^{2}}{|x|\left(c|x|-x_{1}\right)^{2}},} & \ {v_{c}^{2}(x)=2 \frac{x_{2}\left(c x_{1}-|x|\right)}{|x|\left(c|x|-x_{1}\right)^{2}}}, \\ {v_{c}^{3}(x)=2 \frac{x_{3}\left(c x_{1}-|x|\right)}{|x|\left(c|x|-x_{1}\right)^{2}},} & \ {p_{c}(x)=4 \frac{c x_{1}-|x|}{|x|\left(c|x|-x_{1}\right)^{2}}},\end{aligned} \end{equation} with $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}$ and constant $|c|>1$ are distributional solutions to system (\ref{SNS}) in $\mathbb{R}^3$. 
We note that $b(c)$ is decreasing on $(-\infty,-1)$ and $(1, \infty)$, ${\lim}_{c\rightarrow 1^{+}}b(c)=+\infty,$ ${\lim}_{c\rightarrow -1^{-}}b(c)=-\infty$ and ${\lim}_{|c|\rightarrow \infty}b(c)=0.$ The explicit stationary solutions (\ref{v_c p_c}) were discovered by Landau \cite{La}. These solutions have been called Landau solutions. Tian and Xin \cite{Ti} proved that all $(-1)-$homogeneous, axisymmetric nonzero solutions of system (\ref{SNS}) in $C^2(\mathbb{R}^3\backslash \{0\})$ are Landau solutions. $\check {\mathrm S}$ver$\acute {\mathrm a}$k \cite{Sv} proved that Landau solutions are the only $(-1)-$homogeneous solutions in $C^2(\mathbb{R}^3\backslash \{0\})$. More details can be found in \cite{Can,La,Sl,Sv,Ti}. Karch and Pilarczyk \cite{Gr} showed that Landau solutions are asymptotically stable under any $L^2$-perturbations. A crucial role in their paper is played by an application of the Hardy-type inequality \begin{equation} \left|\int_{\mathbb{R}^3}w\cdot (w \cdot \nabla)v_c dx\right|\leqq K(c)\|\nabla \otimes w\|_2^2, \end{equation} where the positive function $K(c)=12\max _{j, k \in\{1,2,3\}} K_{j, k}(c)$ with functions $K_{j, k}:(-\infty,-1) \cup(1, \infty) \rightarrow$ $(0, \infty)$ for every $j, k \in\{1,2,3\}$ satisfying \begin{equation}\label{K_{j,i}} \left|\partial_{x_{j}} v_{c}^{k}(x)\right| \leqq \frac{K_{j, k}(c)}{|x|^{2}}. \end{equation} Moreover, $K_{j,k}(c)$ satisfies \begin{equation} \label{about c} \lim_{|c|\rightarrow 1}K_{j,k}(c)=+\infty \text{ and } \lim_{|c|\rightarrow \infty}K_{j,k}(c)=0. \end{equation} In 2017, Karch, Pilarczyk and Schonbek \cite{Ka} generalized the work of \cite{Gr}. They gave a new method to show the $L^2$-asymptotic stability of a large class of global-in-time solutions including the Landau solutions. Their work also generalizes results in a series of articles on $L^2$-asymptotic stability either of the zero solution \cite{Bor1,Kaj,Oga,Sch1,Sch,Wie} or nontrivial stationary solutions \cite{Bo} to system (\ref{NS}).
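The stated monotonicity, oddness and limits of $b(c)$ can be confirmed numerically; the sketch below simply transcribes the formula for $b(c)$ given above:

```python
import math

def b(c):
    """Force strength b(c) of the Landau solution, defined for |c| > 1."""
    return (8 * math.pi * c / (3 * (c**2 - 1))) * (
        2 + 6 * c**2 - 3 * c * (c**2 - 1) * math.log((c + 1) / (c - 1))
    )
```

Evaluating at a few points shows $b$ decreasing on $(1,\infty)$, blowing up as $c\to 1^{+}$, vanishing as $c\to\infty$, and satisfying $b(-c)=-b(c)$.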
The above results give the existence in $L^2$ space. The uniqueness in $L^2$ space is a major open problem. We will consider the stability of Landau solutions to the Navier-Stokes system in $L^p$ spaces with $3\leq p <\infty$. We denote by $(u,p)(x,t)$ the solution to the Navier-Stokes system (\ref{NS}) with the given external force $f=(b(c)\delta_0,0,0)$ and initial data $u_0=v_c+w_0$. By a direct calculation, the functions $w(x,t)=u(x,t)-v_c(x)$ and $\pi(x,t)=p(x,t)-p_c(x)$ satisfy the following system \begin{equation}\label{PNS} \begin{cases} w_{t}-\Delta w+(w \cdot \nabla) w+(w \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right) w+\nabla \pi=0,\\ \nabla\cdot w=0,\\ w(x, 0) =w_{0}(x). \end{cases} \end{equation} We consider the well-posedness of solutions to system (\ref{PNS}) in $L^p$ spaces with $3\leq p <\infty$. We can obtain global well-posedness of solutions to system (\ref{PNS}) with small initial data in $L_{\sigma}^3$ space and local well-posedness with general initial data in $L^3_{\sigma}$ space, see Theorem \ref{L^3 well-posedness}. For initial data $w_0\in L^p_{\sigma}$ with $3<p<\infty$, we have local well-posedness results, see Theorem \ref{p>3 result}. In addition, for general initial data in $L^3_{\sigma},$ we have the global existence of $L^2+L^3$ weak solutions, see Definition \ref{global 3} and Theorem \ref{w-global-weak-existence}. Karch and Pilarczyk \cite{Gr} define the linear operator $\mathcal{L}$ by \begin{equation}\label{L def} \mathcal{L}u=-\Delta u+\mathbb{P}\left((u \cdot \nabla) v_{c}\right)+\mathbb{P}\left(\left(v_{c} \cdot \nabla\right) u\right), \end{equation} where $\mathbb{P}$ is the Leray projector. For system (\ref{PNS}), we can write the solution in the following integral form \begin{equation}\label{eq} w(x,t)=e^{-t\mathcal{L}}w_0-\int_0^te^{-(t-s)\mathcal{L}}\mathbb{P}\nabla\cdot(w\otimes w)ds:= a+N(w,w).
\end{equation} Karch and Pilarczyk \cite{Gr} showed that $-\mathcal{L}$ is the infinitesimal generator of an analytic semigroup of bounded linear operators on $L_{\sigma}^2({\mathbb{R}^{3}})$. We show that for $1< q<\infty,$ $-\mathcal{L}$ is the infinitesimal generator of an analytic semigroup of bounded linear operators on $L_{\sigma}^q({\mathbb{R}^{3}})$, see Theorem \ref{linear ope property} in Section \ref{linear}. \subsection{$L^p$ mild solutions, $3\leq p<\infty$} Let us give the following standard definition of $L^p$ mild solutions, $3\leq p<\infty$. \begin{definition} Let $3\leq p<\infty$ and $T>0,$ a function $w$ is a $L^p$ mild solution of system (\ref{PNS}) with initial data $w_0\in L_\sigma^p(\mathbb{R}^3)$ on $[0,T]$ if \begin{equation}\label{w_space} w\in C([0,T];L_\sigma^p(\mathbb{R}^3))\cap L^{\frac{4p}{3}}([0,T];L_\sigma^{2p}(\mathbb{R}^3)), \end{equation} and \begin{equation}\label{w_formula} w(x,t)=e^{-t\mathcal{L}}w_0-\int_0^te^{-(t-s)\mathcal{L}}\mathbb{P}\nabla\cdot(w\otimes w)ds. \end{equation} \end{definition} This solution is global if (\ref{w_space}) and (\ref{w_formula}) hold for any $0<T<\infty.$\ In the above, $e^{-t\mathcal{L}}$ denotes the analytic semigroup of bounded linear operators on $L_{\sigma}^p(\mathbb{R}^3)$ generated by $-\mathcal{L}$. See Lemma \ref{alemm}, Lemma \ref{p_alemm} and Theorem \ref{linear ope property}. Properties of $\int_0^te^{-(t-s)\mathcal{L}}\mathbb{P}\nabla\cdot(w\otimes w)ds$ can be seen in Lemma \ref{lem2}. Now we give the following theorem which shows the well-posedness results in $L^3$ space and the decay rates of solutions to system (\ref{PNS}). \begin{theorem}\label{L^3 well-posedness} There exist positive universal constants $c_3,$ $\varepsilon_0$ and $C$ with the following properties\\ $(i)$ For every $|c|>c_3$ and $w_0\in L_{\sigma}^3(\mathbb{R}^3)$, there exists a positive constant $T$ depending only on $w_0$ such that system (\ref{PNS}) has a unique $L^3$ mild solution $w$ on $[0,T]$. 
Moreover, $\nabla (|w|^{\frac32})\in L^2([0,T];L^2(\mathbb{R}^3)).$\\ $(ii)$ If in addition $\|w_0\|_{L^3(\mathbb{R}^3)}<\varepsilon_0$, then system (\ref{PNS}) has a unique global $L^3$ mild solution $w$. Moreover, $\nabla (|w|^{\frac32})\in L^2([0,\infty);L^2(\mathbb{R}^3))$, \begin{equation}\label{L^3 stability-1} \|w\|_{C_t(L_x^3)\cap L^4_t(L^6_x)} + \|\nabla(|w|^{\frac32})\|^{\frac23}_{L^2_tL^2_x}\leq C \|w_0\|_{L^3(\mathbb{R}^3)}, \end{equation} and \begin{equation}\label{L^3 stability-2} \lim_{t\rightarrow \infty}\|w(t)\|_{L^3(\mathbb{R}^3)}=0. \end{equation} $(iii)$ For any $q>3$, there exists a positive constant $\tilde{c}_q$ depending only on $q$ such that when $|c|>\tilde{c}_q$, the solution in $(ii)$ satisfies $$ w\in L^{\infty}([\tau,\infty), L_\sigma^q(\mathbb{R}^3)), \quad \text{for all} \quad \tau >0, $$ and \begin{equation}\label{nonlinear decay} \|w(t)\|_{L^q(\mathbb{R}^3)} \leq (\frac13-\frac1q)^{\frac32 (\frac13-\frac1q)} t^{\frac{3}{2q}-\frac{1}{2}}\|w_0\|_{L^3(\mathbb{R}^3)},\quad \text{for all} \quad t>0. \end{equation} \end{theorem} \begin{remark} From (\ref{|c|2}), (\ref{|c|3}) and (\ref{mu}) in this paper, one can see the detailed dependence of $\tilde{c}_q$ on $q$. On the other hand, we tend to believe that $c$ can be chosen as a constant independent of $q$, and we plan to investigate the $L^\infty$ decay in our future work. \end{remark} \begin{remark} It follows from Theorem \ref{L^3 well-posedness} that the flow described by the Landau solution is asymptotically stable under $L^3$-perturbations. \end{remark} \begin{remark} For the two-dimensional Navier-Stokes system, Carlen and Loss \cite{Ca} gave the decay rate of solutions to the vorticity equation. We adapt the method in \cite{Ca} to give the decay rate of solutions to system (\ref{PNS}), and we treat the pressure term $\pi$ by using the $A_p$ weight inequalities for the Riesz transforms \cite{Gra,St}.
\end{remark} For $3<p<\infty,$ we have the following result. \begin{theorem}\label{p>3 result} For $p\in (3,\infty)$ and $w_0\in L_{\sigma}^p(\mathbb{R}^3)$, there exist two constants ${c}_p$ and $T$, where ${c}_p$ depends only on $p$ while $T$ depends only on $p$ and an upper bound of $\|w_0\|_{L^p}$, such that for all $|c|>{c}_p,$ system (\ref{PNS}) has a unique $L^p$ mild solution $w$ on $[0,T]$. Moreover, $\nabla(|w|^{\frac p2}) \in L_t^2([0,T];L_x^2(\mathbb{R}^3)).$ If in addition $w_0\in L_{\sigma}^p\cap L_{\sigma}^3(\mathbb{R}^3)$ and $\|w_0\|_{L^3}<\varepsilon_0$, where $\varepsilon_0$ is as in Theorem \ref{L^3 well-posedness}, then there exists a unique global $L^p$ mild solution $w$ to system (\ref{PNS}). Moreover, for some universal constant $C,$ \begin{equation}\label{p>3-1} \|w \|_{C_t(L_x^p)\cap L_t^\frac{4p}{3}(L_x^{2p})}+\|\nabla(|w|^{\frac p2})\|_{L_t^2L_x^2}^{\frac2p} \leq C \|w_0\|_{L^p}. \end{equation} \end{theorem} \begin{remark} Under the conditions of Theorem \ref{p>3 result}, Theorem \ref{L^3 well-posedness} applies, so (\ref{L^3 stability-1})-(\ref{L^3 stability-2}) hold. If in addition $|c|>\tilde{c}_q,$ then according to Theorem \ref{L^3 well-posedness}, (\ref{nonlinear decay}) holds. \end{remark} \begin{remark} Note that $w_0\in L^p $ with $p>3$ implies $w_0\in L^2_{uloc}$. For the Navier-Stokes system with $u_0\in L^2_{uloc}$, several authors \cite{Bas,Kik,Lem1,Lem} gave the local existence of a weak solution $u$. Moreover, a global weak solution exists for decaying initial data $u_0\in E_2$ with $$E_2=\left\{f \in L_{\text {uloc }}^{2}:\|f\|_{L^{2}\left(B\left(x_{0}, 1\right)\right)} \rightarrow 0, \text { as }\left|x_{0}\right| \rightarrow \infty\right\}.$$ Kwon and Tsai \cite{Kw} generalized the global existence to non-decaying initial data whose local oscillations decay. Very recently, J.J. Zhang and T.
Zhang \cite{Zhang} have given the local existence of solutions to system (\ref{PNS}) with initial data $w_0\in L^p_{uloc}$, $p\geq 2.$ In view of these results, we plan to study the global existence of weak solutions to system (\ref{PNS}) with initial data $w_0\in L_{\sigma}^p$ for $p>3$ in our future work. \end{remark} \begin{remark} L. Li, Y.Y. Li and X. Yan investigated homogeneous solutions of the stationary Navier-Stokes system with isolated singularities on the unit sphere \cite{Li1,Li2,Li4,Li3}. For a subclass of (-1)-homogeneous axisymmetric no-swirl solutions on the unit sphere minus the north and south poles classified in \cite{Li2}, Y.Y. Li and X. Yan have proved in \cite{Yan Li} the asymptotic stability under $L^2$-perturbations. We will focus on these homogeneous solutions in our future work. \end{remark} The results in Theorems \ref{L^3 well-posedness} and \ref{p>3 result} show the existence and uniqueness of the solution $w$ to system (\ref{PNS}) in the corresponding spaces. In fact, the solution depends continuously on the initial data. \begin{theorem}\label{continuty} For every $|c|>c_p$ and $u_0\in L_{\sigma}^p(\mathbb{R}^3)$ with $3\leq p<\infty,$ assume that $u$ is the unique mild solution to system (\ref{PNS}) on $[0, T_{max})$. Then, for any $T\in (0, T_{max}),$ there exists $\varepsilon>0$ such that for any $v_0\in L_{\sigma}^p(\mathbb{R}^3)$ with $\|u_0-v_0\|_{L^p}<\varepsilon,$ there exists a unique $L^p$ mild solution $v$ on $[0,T]$ with initial data $v\vert_{t=0}=v_0$. Moreover, \begin{equation}\label{sol} \lim_{v_0\rightarrow u_0 \text{\ in\ }L^p}\left(\|u-v\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|u-v|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2}\right)= 0. \end{equation} \end{theorem} The constant $c_p$ in the above theorem is the one given in Theorems \ref{L^3 well-posedness} and \ref{p>3 result}.
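The continuous-dependence statement (\ref{sol}) is the parabolic analogue of the Gr\"onwall-type stability of ODE flows on bounded sets: nearby data yield nearby trajectories on any fixed time interval of existence. As a purely illustrative sketch, unrelated to system (\ref{PNS}) itself, the same phenomenon can be checked numerically for a scalar ODE with a quadratic nonlinearity (the equation, the datum $0.3$, and the perturbation size are arbitrary choices for this toy example):

```python
# Toy illustration (NOT system (PNS)): continuous dependence on the
# initial datum for x' = -x + x^2, whose small-data solutions decay.
def solve(x0, t_end=2.0, n=20000):
    """Explicit Euler for x' = -x + x**2 on [0, t_end]."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        x += dt * (-x + x * x)
    return x

u = solve(0.3)
v = solve(0.3 + 1e-6)
# The flow map is Lipschitz on bounded sets, so the deviation at time
# t_end stays comparable to the initial perturbation.
assert abs(u - v) < 1e-4
# For x0 < 1 the solution decays toward the zero equilibrium.
assert 0.0 < u < 0.3
```

The analogue in Theorem \ref{continuty} is that the solution map $v_0\mapsto v$ is continuous in the full mild-solution norm, not merely pointwise in time.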
\begin{remark} Karch, Pilarczyk and Schonbek \cite{Ka} showed the $L^2$-asymptotic stability of a large class of global-in-time solutions including the Landau solutions. By an argument similar to the proof of Theorem \ref{continuty}, we can obtain the $L^3$-asymptotic stability of a class of solutions $v_c+w$, where $w$ is as in Theorem \ref{L^3 well-posedness}. More precisely, letting $V$ be a perturbation of $v_c+w$, when $\|V_0\|_{L^3}\leq (4C^2e^{2C \int_0^{\infty} \|w\|_{L^6}^4 dt})^{-1}$, arguing as in the proof of (\ref{Z-crucial}), we obtain \begin{eqnarray} \|V\|_{C_{t}L_x^3\cap L^4_{t}L_x^6 }+\left\|\nabla\left(|V|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_{t}^2L_x^2}\leq 2C\|V_0\|_{L^3}e^{C\int_0^{\infty} \|w\|_{L^6}^4 dt}. \end{eqnarray} \end{remark} \subsection{Weak solution} The following results were proved in \cite{Gr,Ka}: for every $w_{0} \in L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)$, there exists a global weak solution $$w\in C_{w}\left([0, T], L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left([0, T], \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right)$$ for every $T>0$ which satisfies the strong energy inequality \begin{equation}\label{energy-inequality} \|w(t)\|_{2}^{2}+2(1-K(c)) \int_{s}^{t}\|\nabla \otimes w(\tau)\|_{2}^{2} \mathrm{d} \tau \leq\|w(s)\|_{2}^{2} \end{equation} for almost all $s \geq 0,$ including $s=0$, and all $t \geq s$.
The definition of a weak solution is as follows. \begin{definition} ($L^2$-weak solution) For $w_0\in L_{\sigma}^2(\mathbb{R}^3)$, a function $w$ is an $L^2$-weak solution of system (\ref{PNS}) on $[0,T]$ if\\ i) $w\in C_w([0,T];L_{\sigma}^2(\mathbb{R}^3))\cap L^2 ([0,T];\dot{H}_{\sigma}^1(\mathbb{R}^3)).$\\ ii) For all $t \geq s \geq 0$ and all $\varphi \in C\left([0, \infty), H_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right) \cap C^{1}\left([0, \infty), L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right),$ \begin{eqnarray*} &&(w(t), \varphi(t))+\int_{s}^{t}\left[(\nabla w, \nabla \varphi)+(w \cdot \nabla w, \varphi)+\left(w \cdot \nabla v_{c}, \varphi\right)+\left(v_{c} \cdot \nabla w, \varphi\right)\right] \mathrm{d} \tau\\ &=&(w(s), \varphi(s))+\int_{s}^{t}\left(w, \varphi_{\tau}\right) \mathrm{d} \tau. \end{eqnarray*} iii) For all $\phi\in C_{c}^{\infty}(\mathbb{R}^{3})$, $\lim_{t\rightarrow0}\int_{\mathbb{R}^3}w\cdot \phi dx=\int_{\mathbb{R}^3}w_0\cdot \phi dx$.\\ iv) $w$ satisfies the energy inequality \begin{eqnarray*} &&\int_{\mathbb{R}^{3}}|w|^{2} \xi(x, t) d x+2 \int_{0}^{t} \int_{\mathbb{R}^{3}}|\nabla w|^{2} \xi dx ds\\ &\leq &\int_{\mathbb{R}^{3}}|w_0|^{2} \xi(x, 0) d x+\int_{0}^{t} \int_{\mathbb{R}^{3}}(2v_c\otimes w:\nabla w \xi \\ &&+ \left(\partial_{s} \xi+\Delta \xi\right)|w|^{2}+(|w|^{2}+2 \pi+2v_c\cdot w)(w \cdot \nabla) \xi + |w|^2v_c\cdot \nabla \xi )dx ds, \end{eqnarray*} for any $t \in[0, T]$ and for all non-negative smooth functions $\xi \in C_c^{\infty}([0,T]\times \mathbb{R}^{3}).$\\ \end{definition} The following is a weak-strong uniqueness theorem, analogous to the one for the Navier-Stokes system (Theorem 4.4 in \cite{Tsa}). \begin{theorem}\label{s-w-uniqueness} Let $|c|>8\sqrt2 +1$ and $w_0\in L_{\sigma}^2(\mathbb{R}^3)$.
Assume that $u, v$ are $L^2$-weak solutions of system (\ref{PNS}) on $[0,T]$ with initial data $u\vert_{t=0}=v\vert_{t=0}=w_0.$ Suppose $u \in L^s([0,T];L^q(\mathbb{R}^3))$ with $\frac3q +\frac2s =1$ and $q, s \in [2,\infty]$; if $(q,s)=(3,\infty)$, assume in addition that $\|u\|_{ L^{\infty}([0,T]; L_{\sigma}^3(\mathbb{R}^3))}$ is sufficiently small. Then $u\equiv v$. \end{theorem} We give the following proposition, whose detailed proof is given in Section \ref{w-s}. \begin{proposition}\label{prop-s-w-uniqueness} For $p\geq 3$ and $T>0,$ let ${c}_p$ be as in Theorem \ref{p>3 result} and $|c|>{c}_p$. For $w_0\in L_{\sigma}^p(\mathbb{R}^3)\cap L_{\sigma}^2(\mathbb{R}^3)$, let $w$ be the $L^p$ mild solution of system (\ref{PNS}) on $[0, T]$. Then $w$ is an $L^2$-weak solution of system (\ref{PNS}) on $[0,T].$ \end{proposition} According to (\ref{energy-inequality}), there exists $t_0>0$ such that $w(t_0)\in L_{\sigma}^p\cap L_{\sigma}^3(\mathbb{R}^3)$ for $3< p\leq 6$ and $\|w(t_0)\|_{L^3}< \varepsilon_0$. According to Theorem \ref{p>3 result}, when $|c|>c_p$, there exists a unique $L^p$ mild solution on $[t_0,\infty)$ to system (\ref{PNS}) with initial data $w(t_0)$. \begin{corollary}\label{corollary} For ${w}_0\in L_{\sigma}^2(\mathbb{R}^3)$, let $w$ be an $L^2$-weak solution of system (\ref{PNS}). Then for every $3\leq p\leq 6$ and $|c|>{c}_p$, there exists $T>0$ such that $w(\cdot+T)$ is an $L^p$ mild solution to system (\ref{PNS}) with initial data $w(T)\in L^p_{\sigma}\cap L_{\sigma}^2(\mathbb{R}^3)$. \end{corollary} \begin{remark}\label{remark-Kar} Under the conditions of Corollary \ref{corollary}, we have $\nabla (|{w}|^{\frac p2})\in L^2 ( [T,\infty);L^2(\mathbb{R}^3))$, and \begin{equation} \lim_{t\rightarrow \infty}\|{w}(t)\|_{L^2(\mathbb{R}^3)}=0.
\end{equation} Furthermore, for $q\geq 3$ and $|c|>\tilde{c}_q$, where $\tilde{c}_q$ is as in Theorem \ref{L^3 well-posedness}, \begin{equation} \|{w}(t)\|_{L^q(\mathbb{R}^3)} \leq (\frac12-\frac1q)^{\frac32 (\frac12-\frac1q)} (t-T)^{\frac{3}{2}({\frac1q-\frac{1}{2}})}\|{w}(T)\|_{L^2(\mathbb{R}^3)},\quad \text{for all} \quad t>T. \end{equation} \end{remark} For general initial data $w_0\in L_{\sigma}^3(\mathbb{R}^3)$, we will give the global existence of $L^2+L^3$ weak solutions to system (\ref{PNS}). Inspired by the methods in \cite{Cal,Ka,Se}, for any $w_0\in L^3_{\sigma}(\mathbb{R}^3)$, we make a decomposition \begin{equation} w_0=v_{10}+v_{20}, \end{equation} with $\|v_{10}\|_{L^3}<\varepsilon_0$, where $\varepsilon_0$ is as in Theorem \ref{L^3 well-posedness}, and $v_{20}\in L^2\cap L^3(\mathbb{R}^3).$ According to Theorem \ref{L^3 well-posedness}, there exists a unique global $L^3$ mild solution $v_1$ to the system \begin{equation}\label{v1} \begin{cases} \partial_t v_{1}-\Delta v_{1}+(v_{1} \cdot \nabla) v_{1}+(v_{1} \cdot \nabla) v_c+\left(v_c \cdot \nabla\right) v_{1}+\nabla \pi_1=0,\\ \nabla\cdot v_{1}=0,\\ v_{1}(x, 0) =v_{10}. \end{cases} \end{equation} Set \begin{equation} v_2=w-v_1. \end{equation} Then $v_2$ satisfies \begin{equation}\label{v2} \begin{cases} \partial_t v_{2}-\Delta v_{2}+(v_{2} \cdot \nabla) v_{2}+(v_{2} \cdot \nabla) (v_c+v_1)+\left((v_c+v_1) \cdot \nabla\right) v_{2}+\nabla \pi_2=0,\\ \nabla\cdot v_{2}=0,\\ \end{cases} \end{equation}with $v_{2}(x, 0) =v_{20}$.
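The decomposition $w_0=v_{10}+v_{20}$ can be realized, for instance, by truncation: for $f\in L^3$, the part where $|f|$ is large lies in $L^2\cap L^3$ (since $\int |f|^2\mathbf{1}_{\{|f|>\lambda\}}\,dx\le \lambda^{-1}\int|f|^3\,dx$), while $\|f\,\mathbf{1}_{\{|f|\le\lambda\}}\|_{L^3}\to 0$ as $\lambda\to 0$ by dominated convergence. A hedged numerical sketch of this splitting (the radial profile $f(r)=(1+r)^{-1.2}$, which lies in $L^3(\mathbb{R}^3)$ but not in $L^2(\mathbb{R}^3)$, and the cutoff radius are arbitrary illustration choices; for a decreasing radial profile, truncation in height is the same as truncation in radius):

```python
import math

def radial_integral(g, r0, r1, n=200000):
    """Midpoint rule for the radial integral of g over {r0 < |x| < r1} in R^3,
    i.e. the integral of g(r) * 4*pi*r^2 dr."""
    h = (r1 - r0) / n
    total = 0.0
    for i in range(n):
        r = r0 + (i + 0.5) * h
        total += g(r) * 4.0 * math.pi * r * r
    return total * h

# In L^3(R^3) since 3*1.2 > 3, not in L^2(R^3) since 2*1.2 < 3.
f = lambda r: (1.0 + r) ** -1.2

R = 1e3  # illustration cutoff; 1e4 stands in for "infinity"
total_L3_cubed = radial_integral(lambda r: f(r) ** 3, 0.0, 1e4)
tail_L3_cubed = radial_integral(lambda r: f(r) ** 3, R, 1e4)   # analogue of v_10
head_L2_sq = radial_integral(lambda r: f(r) ** 2, 0.0, R)      # analogue of v_20

# The tail carries a small fraction of the L^3 mass, while the truncated
# head has finite L^2 norm for every fixed R.
assert tail_L3_cubed < 0.05 * total_L3_cubed
assert 9.0 < total_L3_cubed < 11.0
```

Taking the cutoff larger makes the small piece as small as desired, which is how $\|v_{10}\|_{L^3}<\varepsilon_0$ is arranged.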
We can get the global existence of $w$ by investigating the global existence of $v_2.$ From Theorem 2.7 in \cite{Ka}, system (\ref{v2}) has a weak solution \begin{equation}\label{v_2-1} v_2\in C_{w}\left([0, T]; L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left([0, T]; \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right) \end{equation} for each $T>0$ satisfying the strong energy inequality \begin{equation} \|v_2(t)\|_{2}^{2}+2\left(1-K \sup _{t>0}\|v_c+v_1\|_{L^{3}_{w}}\right) \int_{s}^{t}\|\nabla v_2(\tau)\|_{2}^{2} \mathrm{d} \tau \leqslant\|v_2(s)\|_{2}^{2} \end{equation} for a constant $K>0$, almost all $s\geq 0$ and all $t\geq s$, and \begin{equation}\label{v_2-3} \lim_{t\rightarrow \infty}\|v_2(t)\|_{2}=0. \end{equation} In the spirit of the notion of weak $L^3$-solution introduced in Seregin and $\check {\mathrm S}$ver$\acute {\mathrm a}$k \cite{Se}, we give the following definition of $L^2+L^3$ weak solution of system (\ref{PNS}). \begin{definition}\label{global 3} Let $T>0$ and $w_0\in L_{\sigma}^3(\mathbb{R}^3)$. 
A function $w$ is called an $L^2+L^3$ weak solution to system (\ref{PNS}) in $\mathbb{R}^3\times (0,T)$ if $w=v_1+v_2$ for some $v_1\in C((0,T);L_\sigma^3(\mathbb{R}^3))\cap L^4((0,T);L_\sigma^6(\mathbb{R}^3))$ and $v_2\in C_{w}\left((0, T); L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left((0, T); \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right)$ such that $v_1$ is an $L^3$ mild solution of (\ref{v1}) and $v_2$ satisfies the following conditions:\\ $i)$ $v_2$ satisfies (\ref{v2}) in the sense of distributions; \\ $ii)$ \begin{equation} w_0=v_1(\cdot, 0)+v_2(\cdot, 0), \end{equation} and \begin{equation} \lim_{t\rightarrow 0}\|v_2(\cdot, t)-v_2(\cdot, 0)\|_{L^2}=0; \end{equation} $iii)$ For all $t\in (0,T)$ \begin{eqnarray} &&\frac12 \int_{\mathbb{R}^3}|v_2(x,t)|^2dx+\int_0^t \int_{\mathbb{R}^3}|\nabla v_2|^2(x,s)dxds\\ &\leq& \frac 12 \int_{\mathbb{R}^3}|v_{20}(x)|^2dx+\int_0^t \int_{\mathbb{R}^3} v_2\otimes (v_c+v_1): \nabla v_2dxds;\nonumber \end{eqnarray} $iv)$ For a.a.
$t\in (0,T)$ and any non-negative function $\varphi \in C_{c}^{\infty}\left(\mathbb{R}^3 \times (0,T)\right)$ \begin{eqnarray} &&\int_{\mathbb{R}^{3}} \left|v_{2}(x, t)\right|^{2} \varphi(x, t)d x+2 \int_{0}^{t} \int_{\mathbb{R}^{3}} \left|\nabla v_{2}\right|^{2} \varphi d x d s \\ &\leq& \int_{0}^{t} \int_{\mathbb{R}^{3}} (2(v_c+v_1)\otimes v_2:\nabla v_2 \varphi + \left(\partial_{s} \varphi + \Delta \varphi\right)|v_2|^{2}\nonumber\\ && +(|v_2|^{2}+2 \pi_2+2(v_c+v_1)\cdot v_2)(v_2 \cdot \nabla) \varphi + |v_2|^2 (v_c+v_1)\cdot \nabla \varphi) d x d s.\nonumber \end{eqnarray} \end{definition} We say $w$ is a global $L^2+L^3$ weak solution to system (\ref{PNS}) if it is an $L^2+L^3$ weak solution to system (\ref{PNS}) in $\mathbb{R}^3\times (0,T)$ for all $0<T<\infty.$ We now state the existence of global $L^2+L^3$ weak solutions to system (\ref{PNS}) as follows. \begin{theorem} \label{w-global-weak-existence} Assume that $w_0\in L_{\sigma}^3(\mathbb{R}^3)$ has a decomposition $w_0=v_{10}+v_{20}$ with $v_{10}\in L_{\sigma}^3(\mathbb{R}^3)$, $\|v_{10}\|_{L^3(\mathbb{R}^3)}<\varepsilon_0$ and $v_{20}\in L_{\sigma}^2\cap L_{\sigma}^3(\mathbb{R}^3)$, where $\varepsilon_0$ is as in Theorem \ref{L^3 well-posedness}. Then, there exists a global $L^2+L^3$ weak solution $w$ to system (\ref{PNS}) with $w=v_1+v_2$, $v_{1}(\cdot, 0) =v_{10}$ and $v_{2}(\cdot, 0) =v_{20}$. \end{theorem} The proof of Theorem \ref{w-global-weak-existence} is based on that of Theorem 2.1 in \cite{Ka}. For the convenience of the reader, we will give the details in Section \ref{Sec3}. \begin{remark} When the initial data satisfies $w_0\in L_{\sigma}^p(\mathbb{R}^3)$ with $2<p\leq 3,$ by interpolation theory $w_0$ has a decomposition $w_0=v_{10}+v_{20}$ with $v_{10}\in L_{\sigma}^3(\mathbb{R}^3)$, $\|v_{10}\|_{L^3(\mathbb{R}^3)}<\varepsilon_0$ and $v_{20}\in L_{\sigma}^2$, where $\varepsilon_0$ is as in Theorem \ref{L^3 well-posedness}.
Then, we can easily obtain the global existence of an $L^2+L^3$ weak solution to system (\ref{PNS}). \end{remark} \textbf{Scheme of the proof and organization of the paper.} In Section \ref{Sec2}, we give the proof of Theorem \ref{L^3 well-posedness}. In other words, we prove the local well-posedness of solutions to system (\ref{PNS}) with general initial data and the global well-posedness of solutions to system (\ref{PNS}) with small initial data in the $L_{\sigma}^3$ space. We also investigate the decay rate of solutions to system (\ref{PNS}). In Section \ref{linear}, properties of the linear operator $\mathcal{L}$ on the $L^p$ spaces, $1<p<\infty$, are studied. In Section \ref{w-s}, we prove Theorem \ref{s-w-uniqueness} and Proposition \ref{prop-s-w-uniqueness}, and briefly illustrate Corollary \ref{corollary}. In Section \ref{Sec3}, we prove Theorem \ref{w-global-weak-existence}, i.e. the global existence of $L^2+L^3$ weak solutions to system (\ref{PNS}). In Section \ref{Sec4}, we give the proof of Theorem \ref{p>3 result}. In Section \ref{proof of {continuty} }, we give the detailed proof of Theorem \ref{continuty}. We conclude this section with the notation that we shall use in this article.
\textbf{Notations.}\\ $\bullet$ We denote by $\|\cdot\|_{p}$ or $\|\cdot\|_{L^p}$ the norm of the Lebesgue space $L_x^p(\mathbb{R}^3)$ with $p\in [1,\infty].$ $\bullet$ We denote by $\|\cdot\|_{L_t^pL_x^q}$ the norm of the space $L_t^p([0,\infty); L_x^q(\mathbb{R}^3))$ with $p, q\in [1,\infty].$ $\bullet$ We denote by $\|\cdot\|_{C_tL_x^q}$ the norm of the space $C([0,\infty); L_x^q(\mathbb{R}^3))$ with $q\in [1,\infty].$ $\bullet$ We denote by $\|\cdot\|_{L_T^pL_x^q}$ the norm of the space $L_t^p([0,T]; L_x^q(\mathbb{R}^3))$ with $p, q\in [1,\infty].$ $\bullet$ We denote by $\|\cdot\|_{C_TL_x^q}$ the norm of the space $C_t([0,T]; L_x^q(\mathbb{R}^3))$ with $q\in [1,\infty].$ $\bullet$ $C_0^{\infty}(\mathbb{R}^3)$ denotes the set of smooth and compactly supported functions. $\bullet$ $C_w([0,T]; L_x^q(\mathbb{R}^3))$ with $q\in [1,\infty)$ denotes the set of weakly continuous $L^{q}(\mathbb{R}^3)$-valued functions in $t$, i.e. for any $t_{0} \in [0,T]$ and $w \in L^{q^{\prime}}(\mathbb{R}^3)$, \begin{equation*} \int_{\mathbb{R}^3} v(x, t) \cdot w(x) d x \rightarrow \int_{\mathbb{R}^3} v\left(x, t_{0}\right) \cdot w(x) d x \quad \text { as } t \rightarrow t_{0}. \end{equation*} $\bullet$ For each space $Y,$ we set $Y_{\sigma}=\left\{u \in Y: \text { div } u=0\right\}.$ $\bullet$ We denote by $u_i$ the $i$th coordinate ($i=1,2,3$) of a vector $u$. $\bullet$ Constants independent of solutions may change from line to line and will be denoted by $C$. \section{Proof of Theorem \ref{L^3 well-posedness}}\label{Sec2} In this section, we give the proof of Theorem \ref{L^3 well-posedness}. Our method is based on the following contraction mapping theorem (cf. \cite{Ba}, Theorem 1.72): \begin{lemma} \label{lem1} Let $E$ be a Banach space, $N$ be a continuous bilinear map from $E\times E$ to $E$, and $\alpha$ be a positive real number such that \begin{equation} \alpha < \frac{1}{4\|N\|} \text{ with } \|N\|:= \sup_{\|u\|,\|v\|\leq 1}\|N(u,v)\|.
\end{equation} Then for any $a$ in the ball $B(0,\alpha)$ (i.e., with center 0 and radius $\alpha$) in $E$, there exists a unique $x$ in the ball $B(0,2\alpha)$ such that \begin{equation} x=a+N(x,x). \end{equation} \end{lemma} We will also use a property of the Landau solutions $v_c$ which can be obtained by a direct calculation. \begin{lemma}\label{|x|v_c} Let $v_c$ be the Landau solution given by (\ref{v_c p_c}), then we have \begin{equation} \||x|v_c\|_{L^{\infty}}\leq \frac{2\sqrt{2}}{|c|-1}:=K_c. \end{equation} \end{lemma} The next lemma is a fundamental inequality with a singular weight in Sobolev spaces: the so-called Hardy inequality, which goes back to the pioneering work of G.H. Hardy \cite{Har1,Har2}. \begin{lemma}\label{hardy} For any $f$ in $\dot{H}^1({\mathbb{R}^{3}}),$ there holds \begin{equation} \left(\int_{\mathbb{R}^{3}}\frac{|f(x)|^2}{|x|^2}dx\right)^{\frac{1}{2}}\leq 2\|\nabla f\|_{L^2({\mathbb{R}^{3}})}. \end{equation} \end{lemma} To complete the proof of Theorem \ref{L^3 well-posedness}, we need Lemmas \ref{alemm} and \ref{lem2}, which give the results for the linear part $a$ and the nonlinear part $N$ in (\ref{eq}), respectively. The linear part $a$ satisfies the following Cauchy problem \begin{equation}\label{a} \begin{cases} a_{t}-\Delta a+(a \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right) a+\nabla \pi_1=0,\\ \nabla\cdot a=0,\\ a(x, 0) =w_{0}(x).
\end{cases} \end{equation} Namely, $a(x,t)$ satisfies \begin{equation}\label{initial} \int_{\mathbb{R}^{3}} w_0 \phi(x,0) d x+ \int_{0}^{\infty} \int_{\mathbb{R}^{3}}\big\{ a\left(-\partial_{t} \phi - \Delta \phi\right) -(a\otimes v_c+v_c\otimes a):\nabla \phi \big\}dxdt=0, \end{equation} for all $\phi\in C_c^{\infty}([0,\infty)\times \mathbb{R}^{3})$ satisfying $\nabla \cdot \phi=0.$ \begin{lemma} \label{alemm} For every $c$ satisfying (\ref{|c|2}), there exists a unique global-in-time solution $a(x,t)\in C([0,\infty),$ $L_{\sigma}^3(\mathbb{R}^{3}))\cap L^4([0,\infty),L^6_{\sigma}(\mathbb{R}^{3}))$ to system (\ref{a}) with initial data $w_0\in L_{\sigma}^3({\mathbb{R}^{3}})$. Moreover, \begin{equation}\label{2.7} \|a(\cdot, t)\|_{L^3}\leq \|a(\cdot, s)\|_{L^3}, \end{equation} for any $0\leq s\leq t<\infty,$ and \begin{equation}\label{a-estimate} \|a\|_{C_tL_x^3\cap L^4_tL^6_x}+\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_t^2L_x^2}\leq C_1\|w_0\|_{L^3}, \end{equation} for a universal constant $C_1.$ \end{lemma} \noindent\textbf{Proof.} By a classical approximation method, it is easy to obtain the global existence of the solution $a$. For simplicity, we omit the detailed proof and give the \textit{a priori} estimate for $a.$ Suppose $a$ is sufficiently smooth. Multiplying equation $(\ref{a})_1$ by $|a|a$ and integrating over $\mathbb{R}^3$, we have \begin{eqnarray}\label{prop L^3} &&\frac{1}{3}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^3_{L^3}+\frac{8}{9}\|\nabla(|a|^{\frac{3}{2}})\|^2_{L^2}\\\nonumber &=&-\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|a) dx-\int_{\mathbb{R}^3} \nabla \pi_1 \cdot (|a|a) dx.
\end{eqnarray} For the first term on the right hand side of (\ref{prop L^3}), by using integration by parts, H\"{o}lder's inequality, Lemma \ref{|x|v_c} and the Hardy inequality in Lemma \ref{hardy}, we have \begin{eqnarray}\label{prop- rh1} -\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|a) dx &=&\int_{\mathbb{R}^3}(a\otimes v_c+v_c\otimes a) \cdot \nabla(|a|a )dx\nonumber\\ &\leq& 4\int_{\mathbb{R}^3}|\nabla a| |a|^{2} |v_c|dx\nonumber\\ &\leq& \frac{8}{3}\int_{\mathbb{R}^3}|\nabla(|a|^{\frac{3}{2}} )||a|^{\frac{3}{2}} |v_c|dx\nonumber\\ &\leq& \frac{8}{3}\left\||x| v_c\right\|_{L^{\infty}}\left\|\nabla(|a|^{\frac{3}{2}} )\right\|_{L^2}\left\|\frac{|a|^{\frac{3}{2}}}{|x|}\right\|_{L^2}\nonumber\\ &\leq& \frac{16}{3}K_c\left\|\nabla(|a|^{\frac{3}{2}} )\right\|_{L^2}^2. \end{eqnarray} Next we estimate the second term on the right hand side of (\ref{prop L^3}). Thanks to system (\ref{a}), we have \begin{equation} \pi_1=-\Delta^{-1} \partial_i\partial_j \left(a\otimes v_c+v_c\otimes a\right). \end{equation} The operator $\Delta^{-1} \partial_i\partial_j $ is a Calder{\'o}n-Zygmund operator. According to Example 9.1.7 in \cite{Gra}, there holds $|x|^{p-2}\in A_p$ with $1<p<\infty$.
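As a quick sanity check of the claim $|x|^{p-2}\in A_p$, consider $p=3$, i.e. the weight $w(x)=|x|$ in $\mathbb{R}^3$. On balls $B(0,R)$ centered at the origin, the $A_3$ quantity $\big(\fint_B w\big)\big(\fint_B w^{-1/2}\big)^{2}$ is independent of the radius (a direct radial computation gives the exact value $27/25$), consistent with a finite $A_3$ constant. The restriction to centered balls is only an illustration of the scale invariance; the true $A_p$ condition takes a supremum over all balls:

```python
import math

def ball_average(g, R, n=100000):
    """Average of a radial function g over the ball B(0,R) in R^3,
    computed with the midpoint rule on the radial integral."""
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += g(r) * 4.0 * math.pi * r * r
    vol = 4.0 / 3.0 * math.pi * R ** 3
    return total * h / vol

# A_3 quantity for w(x) = |x| on centered balls: radius-independent.
for R in (0.1, 1.0, 10.0):
    q = ball_average(lambda r: r, R) * ball_average(lambda r: r ** -0.5, R) ** 2
    assert abs(q - 27.0 / 25.0) < 1e-3
```

This scale invariance is exactly what one expects for a homogeneous power weight.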
By H\"{o}lder's inequality, the Hardy inequality in Lemma \ref{hardy}, Sobolev embedding and the boundedness of the Riesz transforms on weighted $L^p$ spaces (see Theorem 9.4.6 in \cite{Gra}), we have \begin{eqnarray}\label{prop- rh2} \int_{\mathbb{R}^3} \nabla \pi_1 \cdot (|a|a) dx &\leq& \frac23\int_{\mathbb{R}^3} |x|^{\frac13}\left|\Delta^{-1} \partial_i\partial_j \left(a\otimes v_c+v_c\otimes a\right) \right| \left|\nabla (|a|^{\frac32})\right|\frac{|a|^{\frac12}}{|x|^{\frac13}}dx\nonumber\\ &\leq& \frac23 C_3\||x|^{\frac13}\left(a\otimes v_c+v_c\otimes a\right)\|_{L^{3}} \|\nabla (|a|^{\frac32})\|_{L^{2}} \left\|\frac{|a|^{\frac12}}{|x|^{\frac13}}\right\|_{L^{6}}\nonumber\\ &\leq& \frac43 C_3\||x|v_c\|_{L^{\infty}} \left\|\frac{a}{|x|^{\frac23}}\right\|_{L^3}\|\nabla (|a|^{\frac32})\|_{L^{2}} \left\|\nabla(|a|^{\frac32})\right\|_{L^{2}}^{\frac13}\nonumber\\ &\leq& \frac83 C_3 K_c\|\nabla (|a|^{\frac{3}{2}})\|_{L^2}^2. \end{eqnarray} Combining (\ref{prop L^3}), (\ref{prop- rh1}) and (\ref{prop- rh2}), we deduce \begin{equation}\label{2.13} \frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^3_{L^3}+\left(\frac{8}{3}-16 K_c-8C_3K_c\right)\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^2_{L^2}\leq 0. \end{equation} Choosing $|c|$ large enough such that \begin{equation}\label{|c|2} \frac83-16 K_c-8C_3K_c>0, \end{equation} we have \begin{equation} \sup_t\|a(t)\|^3_{L^3}+\left(\frac{8}{3}-16 K_c-8C_3K_c\right)\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^2_{L_t^2L_x^2}\leq \|w_0\|^3_{L^3}. \end{equation} Hence, there exists a constant $C$ such that \begin{equation} \sup_t\|a(t)\|_{L^3}+\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_t^2L_x^2}\leq C\|w_0\|_{L^3}. \end{equation} In particular, (\ref{2.7}) follows from (\ref{2.13}).
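For orientation, condition (\ref{|c|2}) can be made explicit through Lemma \ref{|x|v_c}: with $K_c=2\sqrt{2}/(|c|-1)$, the requirement $\frac83-16K_c-8C_3K_c>0$ is equivalent to $|c|>1+6\sqrt{2}\,(2+C_3)$. A short numerical check of this algebra (the value of the Calder\'on-Zygmund constant $C_3$ is not fixed at this point, so it is treated as a free parameter):

```python
import math

# Condition (|c|2): 8/3 - 16*K_c - 8*C3*K_c > 0 with K_c = 2*sqrt(2)/(|c|-1).
# Solving for |c| gives the explicit threshold |c| > 1 + 6*sqrt(2)*(2 + C3).
def K(c):
    return 2.0 * math.sqrt(2.0) / (abs(c) - 1.0)

def condition(c, C3):
    return 8.0 / 3.0 - 16.0 * K(c) - 8.0 * C3 * K(c) > 0.0

def threshold(C3):
    return 1.0 + 6.0 * math.sqrt(2.0) * (2.0 + C3)

# Just above the threshold the condition holds; just below, it fails.
for C3 in (0.5, 1.0, 5.0):
    c_star = threshold(C3)
    assert condition(c_star * 1.01, C3)
    assert not condition(c_star * 0.99, C3)
```

This makes visible how the admissible swirl parameter grows linearly with the operator-norm constant $C_3$.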
By interpolation, we have \begin{equation} \|a\|_{L_t^4L_x^6}\leq \|a\|^{\frac14}_{L_t^{\infty}L_x^3}\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^{\frac12}_{L_t^2L_x^2} \leq C\left(\sup_t\|a(t)\|_{L^3}+\left\|\nabla(|a|^{\frac{3}{2}})\right\|_{L_t^2L_x^2}^{\frac{2}{3}}\right) \leq C\|w_0\|_{L^3}. \end{equation} Therefore, there holds \begin{equation}\label{a-prioi} \|a\|_{L_t^\infty L_x^3\cap L^4_tL^6_x} +\left\|\nabla\left(|a|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_t^2L_x^2}\leq C_1\|w_0\|_{L^3}, \end{equation} for a constant $C_1$. Next, we consider the continuity of the solution $a$ in time $t$. Because of the translation invariance in time, we only need to consider times near $0$. According to (\ref{a-prioi}), there holds \begin{equation}\label{weak con} a\in L_t^\infty L_x^3. \end{equation} Therefore, for any sequence $t_{n}\rightarrow 0,$ there exists a subsequence $t_{n_j}\rightarrow 0$ such that \begin{equation*} a(\cdot,t_{n_j})\rightharpoonup w_0(\cdot) \text{\ weakly in } L^3. \end{equation*} Therefore, we have \begin{equation}\label{1} \|w_0\|_{L^3}\leq {\underline{\lim}}_{j \rightarrow \infty}\|a(\cdot,t_{n_j} )\|_{L^3}. \end{equation} By the energy inequality (\ref{2.13}), there holds \begin{equation}\label{20} {\overline{\lim}}_{j \rightarrow\infty}\|a(\cdot,t_{n_j} )\|_{L^3}\leq \|w_0\|_{L^3} . \end{equation} Combining (\ref{1}) and (\ref{20}), there holds \begin{equation} \label{con_1} \lim_{j \rightarrow\infty}\|a(\cdot,t_{n_j} )\|_{L^3}=\|w_0\|_{L^3}. \end{equation} Hence, by the uniform convexity of $L^3$, weak convergence together with (\ref{con_1}) gives \begin{equation} a(\cdot, t_{n_j})\rightarrow w_0 \text{\ in } L^3. \end{equation} Since the sequence $t_n$ was arbitrary, we deduce \begin{equation} a(\cdot,t)\rightarrow w_0(\cdot) \text{\ in } L^3 \text{\ as }t\rightarrow 0. \end{equation} Hence, $a\in C([0,\infty);L_x^3)$. {\hfill $\square$\medskip} \begin{remark} Indeed, more rigorously, we can prove the existence of $a$ by an approximation argument.
Setting $a_0=0$, we construct the iterative sequence $\{a_k\} $ as follows \begin{equation*}\label{a-k} \begin{cases} &\partial_t a_{k}-\Delta a_{k}=-(a_{k-1} \cdot \nabla) v_{c}-\left(v_{c} \cdot \nabla\right) a_{k-1}-\nabla \pi_{k-1},\ \ \text{for\ \ }k=1,2,\cdots,\\ &\nabla \cdot a_k=0, \\ &\pi_{k-1}= (-\Delta)^{-1}\partial_i \partial_j(v_c\otimes a_{k-1}+a_{k-1}\otimes v_c), \\ &a_{k}|_{t=0}=w_0. \end{cases} \end{equation*} We claim that $a_{k}\in C([0,\infty);L_x^3)\cap L_t^4([0,\infty);L_x^6)$ and $\nabla(|a_k|^{\frac32})\in L_t^2([0,\infty);L_x^2).$ By the Duhamel principle, $a_k$ also satisfies the integral formulation $a_k(t)=e^{t\Delta}w_0-\int_0^t e^{(t-s)\Delta}\mathbb{P}\text{div}(a_{k-1} \otimes v_c+v_c\otimes a_{k-1} )ds.$ Since the heat semigroup $e^{t\Delta}$ is bounded on $L^3,$ we have $a_k\in C([0,\infty);L_x^3).$ By energy estimates, $\{a_k\}$ is a Cauchy sequence in $L_t^4([0,\infty);L_x^6)$. The limit of $\{a_k\}$ is the solution $a$, which satisfies Lemma \ref{alemm}. We omit the details. \end{remark} For $w_0\in L_{\sigma}^3(\mathbb{R}^3)$ and $0\leq t<\infty,$ let \begin{equation} T(t)w_0:=a(x,t), \end{equation} where $a(x,t)$ is the unique solution of (\ref{a}) given by Lemma \ref{alemm}. Then $T(t),$ $0\leq t<\infty,$ is a one-parameter family of bounded linear operators from $L_{\sigma}^3(\mathbb{R}^3)$ into $L_{\sigma}^3(\mathbb{R}^3)$ satisfying $T(0)=I,$ the identity operator of $L_{\sigma}^3(\mathbb{R}^3)$, $T(t+s)=T(t)T(s)$ for every $t,s \geq 0,$ $\|T(t)\|\leq 1$ for every $t\geq 0,$ and $\lim_{t\rightarrow 0+}T(t)w_0=w_0$ in $L_{\sigma}^3(\mathbb{R}^3)$. Therefore, $T(t)$ is a strongly continuous semigroup of contractions, see Definition 1.2.1 and Section 1.3 in \cite{Paz}.
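The semigroup identities used above ($T(0)=I$, $T(t+s)=T(t)T(s)$, contractivity) can be illustrated on the model case $\mathcal{L}=-\Delta$ in one dimension, where $e^{t\Delta}$ is convolution with the Gaussian kernel $G_t(x)=(4\pi t)^{-1/2}e^{-x^2/4t}$: the kernel identity $G_t*G_s=G_{t+s}$ encodes the semigroup law, and $\int G_t=1$ gives contractivity on $L^1$. A numerical sketch of this model case (not the operator $\mathcal{L}$ of (\ref{L def}); the evaluation point and times are arbitrary):

```python
import math

def G(t, x):
    """1-D heat kernel, i.e. the convolution kernel of e^{t*Laplacian}."""
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def convolve_at(t, s, x, lo=-30.0, hi=30.0, n=60000):
    """(G_t * G_s)(x) by the midpoint rule; the tails beyond |y|=30 are negligible."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * h
        total += G(t, y) * G(s, x - y)
    return total * h

# Semigroup law e^{t\Delta} e^{s\Delta} = e^{(t+s)\Delta}:
assert abs(convolve_at(0.4, 0.7, 0.9) - G(1.1, 0.9)) < 1e-6

# Contractivity on L^1: the kernel has unit mass.
h = 60.0 / 60000
mass = sum(G(0.4, -30.0 + (i + 0.5) * h) for i in range(60000)) * h
assert abs(mass - 1.0) < 1e-6
```

For the drift-perturbed operator $\mathcal{L}$ itself no such explicit kernel is available, which is why the paper proceeds through generator theory.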
The linearized operator $-\mathcal{L}$, given in (\ref{L def}), with the domain of definition \begin{equation} D(-\mathcal{L}):= \Big\{w_0\in L_{\sigma}^3(\mathbb{R}^3): \lim_{t\rightarrow 0+}\frac{T(t)w_0-w_0}{t} \text{ exists in }L_{\sigma}^3(\mathbb{R}^3)\Big\}, \end{equation} is the infinitesimal generator of the semigroup $T(t),$ see Section 1.1 of \cite{Paz}. By Corollary 1.2.5 in \cite{Paz}, $D(-\mathcal{L})$ is dense in $L_{\sigma}^3(\mathbb{R}^3)$ and $-\mathcal{L}$ is a closed linear operator in $L_{\sigma}^3(\mathbb{R}^3)$. We also denote $T(t)$ by $e^{-t\mathcal{L}}$. Next, we estimate the nonlinear part $N(w_1,w_2)$. Denote $z=N(w_1,w_2)$; it is clear that $z$ satisfies the following system \begin{eqnarray}\label{N} \begin{cases}{} z_{t}-\Delta z+(z \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right) z+\nabla \pi_2=-\text{div}(w_1\otimes w_2),\\ \nabla\cdot z=0,\\ z(x, 0) =0. \end{cases} \end{eqnarray} \begin{lemma}\label{lem2} For every $c$ satisfying (\ref{|c|3}), there exists a unique solution $z(x,t)\in C([0,T],L_{\sigma}^3(\mathbb{R}^{3}))\cap L^4([0,T],L^6_{\sigma}(\mathbb{R}^{3}))$ to system (\ref{N}) with $w_1, w_2\in L^4([0,T];L^6(\mathbb{R}^3))$, satisfying \begin{equation}\label{z-estimate} \|z\|_{C([0,T];L_x^3)\cap L^4_t([0,T];L^6_x)}+\left\|\nabla\left(|z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_t^2([0,T];L_x^2)} \leq C_2 \|w_1\|_{L_t^4([0,T];L_x^6)}\|w_2\|_{L_t^4([0,T];L_x^6)}, \end{equation} for a universal constant $C_2$ which is independent of $T$. \end{lemma} \noindent\textbf{Proof.} We omit the detailed proof of the existence of the solution $z$ since it can be obtained by a classical approximation method.
We give the \textit{a priori} estimate for $z.$ Suppose $z$ is sufficiently smooth. Multiplying equation $(\ref{N})_1$ by $|z|z$ and integrating over $\mathbb{R}^3$, we have \begin{eqnarray*} &&\frac{1}{3}\frac{\mathrm{d}}{\mathrm{d} t}\|z(t)\|^3_{L^3}+\frac{8}{9}\|\nabla(|z|^{\frac{3}{2}})\|^2_{L^2}\\ &=&-\int_{\mathbb{R}^3}\text{div}(z\otimes v_c+v_c\otimes z)\cdot (|z|z)dx-\int_{\mathbb{R}^3}\text{div}(w_1\otimes w_2)\cdot (|z|z)dx -\int_{\mathbb{R}^3} \nabla \pi_2 \cdot (|z|z) dx.\nonumber \end{eqnarray*} The estimate for the first term on the right hand side is the same as (\ref{prop- rh1}). Hence, there holds \begin{equation}\label{aaaaa} \begin{aligned} -\int_{\mathbb{R}^3}\text{div}(z\otimes v_c+v_c\otimes z)\cdot (|z|z) dx\leq \frac{16}{3}K_c\|\nabla(|z|^{\frac{3}{2}} )\|_{L^2}^2.\\ \end{aligned} \end{equation} For the second term, by integration by parts, H\"{o}lder's inequality and Young's inequality, we have \begin{eqnarray} -\int_{\mathbb{R}^3}\text{div}(w_1\otimes w_2) \cdot (|z|z) dx &=&\int_{\mathbb{R}^3}(w_1\otimes w_2)\cdot \nabla(|z|z )dx\nonumber\\ &\leq& \frac43\int_{\mathbb{R}^3}\left|w_1\otimes w_2\right| \left|\nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx\nonumber\\ &\leq& \frac43\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|_{L^2} \left\|(w_1\otimes w_2)|z|^{\frac{1}{2}}\right\|_{L^2}\nonumber\\ &\leq& \frac43\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|_{L^2}\left\||z|^{\frac{1}{2}}\right\|_{L^6}\left\|w_1\otimes w_2\right\|_{L^3}\nonumber\\ &\leq& \frac43\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|_{L^2}\left\|z\right\|_{L^3}^{\frac{1}{2}}\left\|w_1\otimes w_2\right\|_{L^3}\\ &\leq& \frac{2}{15}\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|^2_{L^2}+\frac{10}{3}\left\|z\right\|_{L^3}\left\|w_1\otimes w_2\right\|^2_{L^3}.\nonumber \end{eqnarray} Since $w_1, w_2\in L_T^{4} L_x^6$, we have \begin{eqnarray}\label{prop rh2_3} &&-\int_0^T\int_{\mathbb{R}^3}\text{div}(w_1\otimes w_2) \cdot (|z|z)
dxdt \nonumber\\ &\leq& \frac{2}{15}\int_0^T\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|^2_{L^2}dt+\frac{10}{3} \left\|z\right\|_{L_T^{\infty}L_x^3}\left\|w_1\right\|^2_{L_T^4L_x^6}\left\|w_2\right\|^2_{L_T^4L_x^6}. \end{eqnarray} For the third term on the right-hand side, according to system (\ref{N}), we have \begin{equation} \pi_2=-\Delta^{-1} \partial_i\partial_j \left(z\otimes v_c+v_c\otimes z+w_1 \otimes w_2\right). \end{equation} Using integration by parts and H\"{o}lder's inequality, we have the following estimate \begin{eqnarray}\label{pi} &&\int_{\mathbb{R}^3} \nabla \pi_2 \cdot (|z|z) dx =-\int_{\mathbb{R}^3} \pi_2 \nabla(|z|)\cdot z dx\nonumber\\ &=&\int_{\mathbb{R}^3} \Delta^{-1} \partial_i\partial_j \left(z\otimes v_c+v_c\otimes z+w_1 \otimes w_2\right)\nabla(|z|)\cdot z dx\nonumber\\ &\leq& \frac23 \int_{\mathbb{R}^3} \left|\Delta^{-1} \partial_i\partial_j \left(z\otimes v_c+v_c\otimes z+w_1 \otimes w_2\right)\right|\left| \nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx\nonumber\\ &\leq& \frac23 \int_{\mathbb{R}^3} \left|\Delta^{-1} \partial_i\partial_j \left(z\otimes v_c+v_c\otimes z\right)\right|\left| \nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx\nonumber\\ &&+\frac23 \int_{\mathbb{R}^3} \left|\Delta^{-1} \partial_i\partial_j \left(w_1 \otimes w_2\right)\right|\left| \nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx. \end{eqnarray} Thanks to (\ref{prop- rh2}), we have \begin{equation}\label{pi1} \frac23 \int_{\mathbb{R}^3} \left|\Delta^{-1} \partial_i\partial_j \left(z\otimes v_c+v_c\otimes z\right)\right|\left| \nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx \leq \frac{8}{3} C_3 K_c \|\nabla(|z|^{\frac{3}{2}} )\|_{L^2}^{2}.
\end{equation} For the second part, by H\"{o}lder's inequality, the boundedness of the Riesz operator and Young's inequality, we have \begin{eqnarray}\label{pi2} &&\frac23 \int_{\mathbb{R}^3} \left|\Delta^{-1} \partial_i\partial_j \left(w_1\otimes w_2\right)\right|\left|\nabla\left(|z|^{\frac{3}{2}} \right)\right||z|^{\frac{1}{2}} dx\nonumber\\ &\leq& \frac23 \| \Delta^{-1} \partial_i\partial_j \left(w_1\otimes w_2\right)\|_{L^3}\|\nabla(|z|^{\frac{3}{2}} )\|_{L^2}\| |z|^{\frac{1}{2}}\|_{L^6}\nonumber\\ &\leq& \frac23 H_3\| w_1\otimes w_2\|_{L^3}\|\nabla(|z|^{\frac{3}{2}} )\|_{L^2}\| z\|^{\frac12}_{L^3}\nonumber\\ &\leq& \frac{2}{15}H_3\left\|\nabla\left(|z|^{\frac{3}{2}} \right)\right\|^2_{L^2}+\frac{10}{3}H_3\left\|z\right\|_{L^3}\left\|w_1\otimes w_2\right\|^2_{L^3}, \end{eqnarray} where $H_3$ is a constant originating from the following inequality \begin{equation}\label{Hr} \|\Delta^{-1} \partial_i\partial_j f \|_{L^r}\leq H_r \|f\|_{L^r}, \end{equation} for $1<r<\infty.$ For scalar Riesz transforms, Iwaniec and Martin \cite{Iwa} showed that the norm $\|R_l\|_{L^r}$ of the Riesz operator $R_l:$ $L^r(\mathbb{R}^n)\rightarrow L^r(\mathbb{R}^n)$ is equal to \begin{equation}\label{crvalue} \begin{cases} \tan (\frac{\pi}{2r}), \text{if } 1<r\leq2,\\ \cot (\frac{\pi}{2r}), \text{if } 2\leq r < \infty. \end{cases} \end{equation} Combining (\ref{pi}), (\ref{pi1}) and (\ref{pi2}), we have the following estimate \begin{eqnarray*} \int_{\mathbb{R}^3} \nabla \pi_2 \cdot (|z|z) dx &\leq& \left(\frac{8}{3} C_3 K_c +\frac{2}{15}H_3\right)\left\|\nabla (|z|^{\frac{3}{2}})\right\|_{L^2}^2+\frac{10}{3}H_3\|z\|_{L^3}\|w_1\otimes w_2\|^2_{L^3}. \end{eqnarray*} Therefore, we deduce \begin{equation} \begin{aligned} & \left\|z\right\|^3_{L_T^{\infty}L_x^3}+\left(\frac83-16K_c-\frac{2}{5}-8C_3 K_c -\frac{2}{5}H_3\right)\left\|\nabla(|z|^{\frac{3}{2}})\right\|^2_{L_T^2L_x^2}\\ &\leq (10+10H_3) \|z\|_{L_T^{\infty}L_x^3}\|w_1\|_{L_T^{4}L_x^6}^2\|w_2\|_{L_T^{4}L_x^6}^2.
\end{aligned} \end{equation} Choosing $|c|$ large enough that \begin{equation}\label{|c|3} \frac83-16K_c-\frac{2}{5}-8C_3 K_c -\frac{2}{5}H_3>0, \end{equation} we have \begin{equation}\label{2} \begin{aligned} \left\|z\right\|_{L_T^{\infty}L_x^3}+\left\|\nabla(|z|^{\frac{3}{2}})\right\|^{\frac{2}{3}}_{L_T^2L_x^2}\leq C \|w_1\|_{L_T^{4}L_x^6}\|w_2\|_{L_T^{4}L_x^6}, \end{aligned} \end{equation} for a universal constant $C.$ By interpolation, there holds \begin{equation}\label{zzzz} \|z(t)\|_{L_T^\infty L_x^3\cap L^4_T L^6_x}+\left\|\nabla\left(|z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2}\leq C_2 \|w_1\|_{L_T^4 L_x^6}\|w_2\|_{L_T^4 L_x^6}, \end{equation} for a universal constant $C_2.$ By an argument similar to (\ref{con_1}), we obtain the continuity of $z$ in time $t.$ Combined with (\ref{zzzz}), this yields (\ref{z-estimate}). {\hfill $\square$\medskip} \noindent\textbf{Proof of Theorem \ref{L^3 well-posedness}.} For every $c$ satisfying (\ref{|c|2}) and (\ref{|c|3}), according to Lemma \ref{alemm}, we have \begin{equation} \|a(t)\|_{L^4_t L_x^6}\leq C_1 \|w_0\|_{L^3}. \end{equation} Applying Lemma \ref{lem2} with $(w_1,w_2)=(w,w)$, we have \begin{equation} \|N\|\leq C_2. \end{equation} Using Lemma \ref{lem1} with $E={L^4_tL_x^6}$, when $\|w_0\|_{L^3}< \frac{1}{4C_1C_2}:=\varepsilon_0$, there exists a unique global solution $w\in {L^4_tL_x^6}$ with $\|w\|_{{L^4_t L_x^6}}\leq C \|w_0\|_{L^3}.$ According to Lemmas \ref{alemm} and \ref{lem2}, the solution satisfies $w\in C([0,\infty);L^3(\mathbb{R}^3))$ and $\nabla (|w|^{\frac32})\in L^2([0,\infty);L^2(\mathbb{R}^3))$. When $w_0\in{L^3(\mathbb{R}^3)}$, thanks to Lemma \ref{alemm}, we have \begin{equation} \|a\|_{L^4_tL^6_x} \leq C_1 \|w_0\|_{L^3(\mathbb{R}^3)}.
\end{equation} There exists $T>0$ such that $\|a\|_{L^4_TL^6_x}< \frac{1}{4C_2}.$ Using Lemmas \ref{lem1} and \ref{lem2} with $E={L^4_T L_x^6}$, a unique local solution $w\in L^4_T L_x^6$ exists on $[0,T].$ According to Lemmas \ref{alemm} and \ref{lem2}, the solution satisfies $w\in C([0,T];L^3(\mathbb{R}^3))$ and $\nabla (|w|^{\frac32})\in L^2([0,T];L^2(\mathbb{R}^3))$. Next, we investigate the decay rate of the solution $w$, i.e., (\ref{nonlinear decay}). Our method is inspired by \cite{Ca}. Let $T>0$ and $3 \leq q < \infty$. Denote $r(t)=\frac{1}{\frac{1}{T}(\frac{1}{q}-\frac{1}{3})t+\frac{1}{3}}$. First, we give an \textit{a priori} estimate of $\|w(\cdot,t)\|_{r(t)}$. By direct computation, we have \begin{eqnarray}\label{remk_1} &&r(t)^{2}\|w(\cdot,t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w(\cdot,t)\|_{r(t)}\nonumber\\ &=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)} \ln \left(|w(\cdot,t)|^{r(t)} /\|w(\cdot,t)\|_{r(t)}^{r(t)}\right) \mathrm{d} x\nonumber\\ &&+r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t} \left|w(\cdot,t)\right| \mathrm{d}x\nonumber\\ &=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)} \ln \left(|w(\cdot,t)|^{r(t)} /\|w(\cdot,t)\|_{r(t)}^{r(t)}\right) \mathrm{d} x\nonumber\\ &&+r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\frac{\mathrm{d}}{\mathrm{d} t} w_i \mathrm{d}x\nonumber\\ &=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)} \ln \left(|w(\cdot,t)|^{r(t)} /\|w(\cdot,t)\|_{r(t)}^{r(t)}\right) \mathrm{d} x\nonumber\\ && +r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\left( \partial_j\partial_j w_i-\partial_j (w_iw_j+w_iv_j+v_iw_j)-\partial_i\pi\right) \mathrm{d}x,\nonumber \end{eqnarray} where the last equality holds on account of $(\ref{PNS})_1$.
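For later use, we record the elementary properties of the exponent $r(t)$: differentiating $r(t)^{-1}=\frac{1}{T}(\frac{1}{q}-\frac{1}{3})t+\frac{1}{3}$ yields
\begin{equation*}
\dot{r}(t)=\frac{1}{T}\Big(\frac{1}{3}-\frac{1}{q}\Big)r(t)^{2}\geq 0,\qquad r(0)=3,\qquad r(T)=q,
\end{equation*}
so $r(t)$ increases from $3$ to $q$ on $[0,T]$ and, in particular, $3\leq r(t)\leq q$.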
Set \begin{equation} \begin{aligned} \uppercase\expandafter{\romannumeral1}&=r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2}w_i\partial_j\partial_j w_i \mathrm{d}x,\\ \uppercase\expandafter{\romannumeral2}&= -r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\partial_j (w_iw_j)\mathrm{d}x,\\ \uppercase\expandafter{\romannumeral3}&= -r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\partial_j (w_iv_j)\mathrm{d}x,\\ \uppercase\expandafter{\romannumeral4}&= -r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\partial_j (v_iw_j)\mathrm{d}x,\\ \uppercase\expandafter{\romannumeral5}&= -r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2}w_i\partial_i\pi \mathrm{d}x. \end{aligned} \end{equation} Using integration by parts, we have \begin{eqnarray*}\label{I_1} \uppercase\expandafter{\romannumeral1}&=& -r(t)^{2} \int_{\mathbb{R}^{3}} \partial_j(|w(t)|^{r(t)-2} w_i) \partial_j w_idx \nonumber\\ &=& -r(t)^{2} \int_{\mathbb{R}^{3}} \frac12\partial_j|w(t)|^{r(t)-2} \partial_j |w|^2dx-r(t)^{2} \int_{\mathbb{R}^{3}} |w(t)|^{r(t)-2}|\nabla w|^2dx \nonumber\\ &=& -r(t)^{2} \int_{\mathbb{R}^{3}} \frac{4(r(t)-2) }{r(t)^{2} }\left|\nabla (|w(\cdot,t)|^{r(t) / 2})\right|^{2}dx -r(t)^{2} \int_{\mathbb{R}^{3}} |w(t)|^{r(t)-2}|\nabla w|^2dx\nonumber\\ &=& -4r(t)(r(t)-2)\|\nabla (|w(\cdot,t)|^{\frac{r(t)}{2}})\|_{L^2}^2-r(t)^{2} \int_{\mathbb{R}^{3}} |w(t)|^{r(t)-2}|\nabla w|^2dx,\nonumber \end{eqnarray*} where the third equality follows from the identity \begin{equation*} \nabla\left(|w(\cdot, t)|^{r(t)-2}\right) \cdot \nabla (|w(\cdot,t)|^2)=\frac{8(r(t)-2) }{r(t)^{2} }\left|\nabla (|w(\cdot,t)|^{r(t) / 2})\right|^{2}.
\end{equation*} We have \begin{eqnarray}\label{important_1} &&r(t)^{2}\|w(\cdot, t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w(\cdot, t)\|_{r(t)}\nonumber\\ &=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w(\cdot, t)|^{r(t)} \ln \left(|w(\cdot, t)|^{r(t)} /\|w(\cdot, t)\|_{r(t)}^{r(t)}\right) \mathrm{d}x-4r(t)(r(t)-2)\|\nabla (|w(\cdot,t)|^{\frac{r(t)}{2}})\|_{L^2}^2\nonumber\\ &&-r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot, t)|^{r(t)-2}|\nabla w(\cdot, t)|^2dx +\uppercase\expandafter{\romannumeral2}+\uppercase\expandafter{\romannumeral3}+\uppercase\expandafter{\romannumeral4}+\uppercase\expandafter{\romannumeral5}. \end{eqnarray} Next, we estimate $\uppercase\expandafter{\romannumeral2}$--$\uppercase\expandafter{\romannumeral5}$ in turn. Thanks to integration by parts, H\"{o}lder's inequality and the Sobolev embedding $\dot{H}^{1}(\mathbb{R}^{3})\hookrightarrow L^6(\mathbb{R}^{3})$ (for the best constant, see \cite{Swa}), we have \begin{eqnarray} \uppercase\expandafter{\romannumeral2}&=& -\frac{r(t)^{2} }{2}\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_j\partial_j|w|^2\mathrm{d}x\nonumber\\ &=& \frac{r(t)^{2} }{2}\int_{\mathbb{R}^{3}} \partial_j(|w(\cdot,t)|^{r(t)-2}) w_j|w|^2\mathrm{d}x\nonumber\\ &\leq& r(t)(r(t)-2)\int_{\mathbb{R}^{3}} \left| \nabla(|w(\cdot,t)|^{\frac{r(t)}{2}})\right| |w|^{\frac{r(t)}{2}}|w|\mathrm{d}x\nonumber\\ &\leq& r(t)(r(t)-2)\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}\left\||w(t)|^{\frac{r(t)}{2}}\right\|_{L^{6}}\|w(t)\|_{L^{3}}\nonumber\\ &\leq& r(t)(r(t)-2)\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}\|w(t)\|_{L^{3}}. \end{eqnarray} Combining (\ref{L^3 stability-1}) with $\|w_0\|_{L^3}\leq \varepsilon_0,$ there holds \begin{equation}\label{I_2} \begin{aligned} \uppercase\expandafter{\romannumeral2}\leq r(t)(r(t)-2)C\varepsilon_0 \left\|\nabla (|w(t)|^{\frac{r(t)}{2}})\right\|^2_{L^2}.
\end{aligned} \end{equation} According to H\"{o}lder's inequality, the Hardy inequality in Lemma \ref{hardy} and Lemma \ref{|x|v_c}, we deduce \begin{eqnarray}\label{I_3} \uppercase\expandafter{\romannumeral3}&=& \frac{r(t)^{2}}{2} \int_{\mathbb{R}^{3}} \partial_j(|w(\cdot,t)|^{r(t)-2} )|w|^2v_j\mathrm{d}x\nonumber\\ &=& r(t)(r(t)-2)\int_{\mathbb{R}^{3}} v_c\cdot \nabla (|w(\cdot,t)|^{\frac{r(t)}{2}})|w(\cdot,t)|^{\frac{r(t)}{2}}\mathrm{d}x\nonumber\\ &\leq& r(t)(r(t)-2)\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}\left\|\frac{|w(t)|^{\frac{r(t)}{2}}}{|x|}\right\|_{L^{2}}\left\||x| v_{c}\right\|_{L^{\infty}}\nonumber\\ &\leq& 2 r(t)(r(t)-2) K_{c}\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}. \end{eqnarray} For the fourth term, we integrate by parts to obtain \begin{eqnarray*} \uppercase\expandafter{\romannumeral4}&=& -r(t)^{2} \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} w_i\partial_j (v_iw_j)\mathrm{d}x\\ &=&r(t)^{2}\int_{\mathbb{R}^{3}} \partial_j (|w(\cdot,t)|^{r(t)-2}) w_iv_iw_j\mathrm{d}x + r(t)^{2}\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} \partial_jw_i v_iw_j\mathrm{d}x. \end{eqnarray*} The estimate of the first part is similar to (\ref{I_3}). We have \begin{eqnarray*} r(t)^{2}\int_{\mathbb{R}^{3}} \partial_j (|w(\cdot,t)|^{r(t)-2}) w_iv_iw_j\mathrm{d}x \leq 4 r(t)(r(t)-2) K_{c}\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}.
\end{eqnarray*} By Lemma \ref{|x|v_c}, the Cauchy inequality and the Hardy inequality, we estimate the second part as follows \begin{eqnarray*} &&r(t)^{2}\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} \partial_jw_i v_iw_j\mathrm{d}x\nonumber\\ &\leq& r(t)^{2}\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} |\partial_jw_i | ||x|v_i|\frac{|w_j|}{|x|}\mathrm{d}x\nonumber\\ &\leq& r(t)^{2}K_c\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} |\partial_jw_i | \frac{|w_j|}{|x|}\mathrm{d}x\nonumber\\ &\leq& \frac{r(t)^{2}}{2} K_c\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} |\nabla w(\cdot,t)|^{2} dx+ \frac{r(t)^{2}}{2} K_c \int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} \frac{|w(\cdot,t)|^{2}}{|x|^2} dx\nonumber\\ &\leq& \frac{r(t)^{2}}{2} K_c\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} |\nabla w(\cdot,t)|^{2} dx+ 2r(t)^{2} K_c\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}. \end{eqnarray*} Therefore, we have \begin{equation}\label{I_4} \begin{aligned} \uppercase\expandafter{\romannumeral4}\leq & \frac{r(t)^{2}}{2} K_c\int_{\mathbb{R}^{3}} |w(\cdot,t)|^{r(t)-2} |\nabla w(\cdot,t)|^{2} dx\\ &+ (4 r(t)(r(t)-2)+2r(t)^{2}) K_c\left\|\nabla\left(|w(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}. \end{aligned} \end{equation} Noting that the pressure is given by $\pi=-\frac{\partial_i\partial_j}{\Delta}(w_iw_j+v_iw_j+w_iv_j )$, we use integration by parts to obtain \begin{eqnarray}\label{I_5} \uppercase\expandafter{\romannumeral5} &=& r(t)^{2} \int_{\mathbb{R}^{3}} \partial_i(|w(\cdot,t)|^{r(t)-2})w_i\pi \mathrm{d}x\nonumber\\ &=& r(t)^{2} \int_{\mathbb{R}^{3}} \partial_i(|w(\cdot,t)|^{r(t)-2})w_i\left(-\frac{\partial_i\partial_j}{\Delta}(v_iw_j+w_iv_j+w_iw_j)\right) \mathrm{d}x. \end{eqnarray} This term is more delicate, and we will estimate it more carefully.
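In the estimates of the two parts below, we will repeatedly use the pointwise identity (valid wherever $w\neq 0$)
\begin{equation*}
\nabla\left(|w|^{r-2}\right)=\frac{2(r-2)}{r}\,|w|^{\frac{r}{2}-2}\,\nabla\left(|w|^{\frac{r}{2}}\right),
\end{equation*}
so that $r^{2}\left|\nabla(|w|^{r-2})\right||w|\leq 2r(r-2)\,|w|^{\frac{r}{2}-1}\left|\nabla(|w|^{\frac{r}{2}})\right|$, which explains the coefficient $2r(t)(r(t)-2)$ in the first lines of the estimates that follow.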
Set $$ \uppercase\expandafter{\romannumeral5}_1= r(t)^{2} \int_{\mathbb{R}^{3}} \partial_i(|w(\cdot,t)|^{r(t)-2})w_i\left(-\frac{\partial_i\partial_j}{\Delta}(v_iw_j+w_iv_j)\right) \mathrm{d}x, $$ and $$ \uppercase\expandafter{\romannumeral5}_2= r(t)^{2} \int_{\mathbb{R}^{3}} \partial_i(|w(\cdot,t)|^{r(t)-2})w_i\left(-\frac{\partial_i\partial_j}{\Delta}(w_iw_j)\right) \mathrm{d}x. $$ According to \cite{Gra}, there holds $|x|^{r-2}\in A_r$ with $1<r<\infty$. By H\"{o}lder's inequality, the boundedness of the Riesz transforms on weighted $L^p$ spaces (Theorem 9.4.6 in \cite{Gra}), Lemma \ref{|x|v_c} and the Hardy inequality, there holds \begin{eqnarray}\label{51} \uppercase\expandafter{\romannumeral5}_1&\leq&2r(t)(r(t)-2)\int_{\mathbb{R}^{3}} \left| \nabla\left(|w(\cdot, t)|^{\frac{r(t)}{2}}\right)\right||w(\cdot, t)|^{\frac{r(t)}{2}-1} \left|\frac{\partial_i\partial_j}{\Delta}(v_iw_j+w_iv_j)\right|\mathrm{d} x\nonumber\\ &\leq& 4 r(t)(r(t)-2) C_{r}\||x|^{\frac{r-2}{r}}\left(v_{c} \otimes w\right)\|_{L^{r}}\left\| \frac{|w|^{\frac{r}{2}-1}}{|x|^{\frac{r-2}{r}}} \right\|_{L^{\frac{2 r}{r-2}}}\| \nabla(|w(\cdot, t)|^{\frac{r}{2}})\|_{L^{2}}\nonumber\\ &\leq& 4 r(t)(r(t)-2) C_{r}\||x|v_{c} \|_{L^{\infty}}\||x|^{-\frac{2}{r}} w\|_{L^{r}}\left\|\frac{w}{|x|^{\frac{2}{r}}}\right\|_{L^{r}}^{\frac{r}{2}-1}\| \nabla(|w(\cdot, t)|^{\frac{r}{2}})\|_{L^{2}}\nonumber\\ &\leq& 4 r(t)(r(t)-2) C_{r}K_c\left\|\frac{|w|^{\frac{r}{2}}}{|x|}\right\|_{L^{2}}\| \nabla(|w(\cdot, t)|^{\frac{r}{2}})\|_{L^{2}}\nonumber\\ &\leq& 8 r(t)(r(t)-2) C_{r} K_c\| \nabla(|w(\cdot, t)|^{\frac{r}{2}})\|_{L^{2}}^2, \end{eqnarray} where $C_r$ is as in Theorem 9.4.6 in \cite{Gra}. Thanks to \cite{Iwa}, we deduce that $\|\frac{\partial_i\partial_j}{\Delta} f\|_{L^r} \leq H_r\|f\|_{L^r} $.
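The passage from the weighted $L^r$ norms to the $L^2$ norm in (\ref{51}) rests on the elementary identity
\begin{equation*}
\left\||x|^{-\frac{2}{r}}w\right\|_{L^{r}}^{\frac{r}{2}}=\left(\int_{\mathbb{R}^{3}}\frac{|w|^{r}}{|x|^{2}}\,dx\right)^{\frac{1}{2}}=\left\|\frac{|w|^{\frac{r}{2}}}{|x|}\right\|_{L^{2}},
\end{equation*}
after which the Hardy inequality $\big\|\frac{|w|^{r/2}}{|x|}\big\|_{L^{2}}\leq 2\big\|\nabla(|w|^{r/2})\big\|_{L^{2}}$ gives the last step.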
Combining the bound $\|\frac{\partial_i\partial_j}{\Delta} f\|_{L^r} \leq H_r\|f\|_{L^r}$ with H\"{o}lder's inequality and the Sobolev embedding $\dot{H}^{1}(\mathbb{R}^{3})\hookrightarrow L^6(\mathbb{R}^{3})$, we have \begin{eqnarray} \uppercase\expandafter{\romannumeral5}_2&\leq & 2r(t)(r(t)-2) \|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|_{L^2}\||w(\cdot, t)|^{\frac{r(t)}{2}-1} \|_{L^{\frac{6r}{r-2}}}\left\|\frac{\partial_i\partial_j}{\Delta}\left(w_iw_j\right)\right\|_{L^{\frac{3r}{r+1}}}\nonumber\\ &\leq& 2r(t)(r(t)-2) H_{\frac{3r}{r+1}}\|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|_{L^2}\||w(\cdot, t)|^{\frac{r(t)}{2}-1} \|_{L^{\frac{6r}{r-2}}}\left\|w \otimes w\right\|_{L^{\frac{3r}{r+1}}}\nonumber\\ &\leq& 2r(t)(r(t)-2) H_{\frac{3r}{r+1}}\|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|_{L^2}\|w(\cdot, t) \|^{\frac{r(t)}{2}-1}_{L^{3r}}\left\|w(\cdot, t)\right\|_{L^{3r}}\left\|w(\cdot, t)\right\|_{L^{3}}\nonumber\\ &\leq& 4r(t)(r(t)-2) H_{\frac{3r}{r+1}}\|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|_{L^2}\||w(\cdot, t)|^{\frac{r(t)}{2}}\|_{L^6}\|w(\cdot, t) \|_{L^{3}}\nonumber\\ &\leq& 4r(t)(r(t)-2) H_{\frac{3r}{r+1}}\|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|^2_{L^2}\|w(\cdot, t) \|_{L^{3}}.\nonumber \end{eqnarray} According to (\ref{L^3 stability-1}), when $\|w_0\|_{L^3}\leq \varepsilon_0,$ there holds \begin{equation}\label{52} \begin{aligned} \uppercase\expandafter{\romannumeral5}_2 \leq 4r(t)(r(t)-2) H_{\frac{3r}{r+1}}C \varepsilon_0\|\nabla(|w(\cdot, t)|^{\frac{r(t)}{2}})\|^2_{L^2}.
\end{aligned} \end{equation} Combining (\ref{important_1})--(\ref{52}), there holds \begin{eqnarray}\label{remk_2} &&r(t)^{2}\|w(t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w(t)\|_{r(t)}\nonumber\\ &\leq&\dot{r}(t) \int_{\mathbb{R}^{3}} |w(t)|^{r(t)} \ln \left(|w(t)|^{r(t)} /\|w(t)\|_{r(t)}^{r(t)}\right) \mathrm{d}x\nonumber\\ &&-\left(4r(t)(r(t)-2)\mu-2r(t)^2 K_c\right)\|\nabla (|w(t)|^{\frac{r(t)}{2}})\|_{L^2}^2,\nonumber \end{eqnarray} with \begin{equation}\label{mu} \mu=\inf_{3\leq r\leq q}\left\{1-\frac14C\varepsilon_0-\frac12 K_c-K_c-2C_rK_c-H_{\frac{3r}{r+1}}C \varepsilon_0\right\}> \frac 12. \end{equation} Applying the sharp logarithmic Sobolev inequality in \cite{Ca,Ngu}, we have \begin{equation} 2\int|u|^2\ln\left(\frac{|u|}{\|u\|_{L^2}}\right)dx+3(1+\ln a)\|u\|_{L^2}^2\leq\frac{a^2}{\pi}\int |\nabla u|^2dx. \end{equation} Using $a=\left(\pi\frac{4r(t)(r(t)-2)\mu-2K_c r(t)^2}{ \dot{r}(t)}\right)^{\frac12}$ and $u=|w|^{\frac{r(t)}{2}}$, we obtain \begin{eqnarray*} &&r(t)^{2}\|w(t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w(t)\|_{r(t)}\\ &\leq& -\dot{r}(t)\left(3+\frac{3}{2} \ln \frac{(4\pi\mu -2\pi K_c) r(t)^2-8\pi \mu r(t) }{\dot{r}(t)}\right)\|w\|_{L^r}^r.
\end{eqnarray*} If we define $G(t) :=\ln \left(\|w(t)\|_{L^{r(t)}}\right)$ with $r(t)=\frac{1}{\frac{1}{T}(\frac{1}{q}-\frac{1}{3})t+\frac{1}{3}}$, there holds \begin{eqnarray*} \frac{\mathrm{d} G(t)}{\mathrm{d} t} &\leq& -\frac{\dot{r}(t)}{r^{2}(t)}\left(3+\frac{3}{2} \ln\frac{(4 \pi\mu-2K_c \pi)r(t)^2-8\pi\mu r(t)}{\dot{r}(t)} \right)\nonumber\\ &=& \frac{1}{T}\left(\frac{1}{q}-\frac{1}{3}\right)\left(3+\frac{3}{2} \ln \left( \frac{-8\pi\mu}{r(t)}+4 \pi\mu-2K_c \pi \right) \right) +\frac32\frac{1}{T}\left(\frac{1}{3}-\frac{1}{q}\right)\ln \frac{1}{T}\left(\frac{1}{3}-\frac{1}{q}\right)\nonumber\\ &\leq& \frac32\frac{1}{T}\left(\frac{1}{3}-\frac{1}{q}\right)\ln \frac{1}{T}\left(\frac{1}{3}-\frac{1}{q}\right).\nonumber \end{eqnarray*} Integrating in time from $0$ to $T$ yields $$ G(T) \leq G(0)+ \frac32\left(\frac1q-\frac13\right)\ln \left(\frac{T}{1 / 3-1 / q}\right). $$ That is, $$ \ln \|w(\cdot, T)\|_{r(T)}\leq \ln \|w(\cdot, 0)\|_{L^3}+\frac32\left(\frac1q-\frac13\right)\ln \left(\frac{T}{1 / 3-1 / q}\right). $$ Hence, we obtain \begin{equation}\label{w} \|w(\cdot, t)\|_{L^q}\leq C_q t^{\frac32\left(\frac1q-\frac13\right)}\|w(\cdot, 0)\|_{L^3}, \end{equation} with $C_q=(\frac13-\frac1q)^{\frac32 (\frac13-\frac1q)}.$ To give a rigorous proof of (\ref{nonlinear decay}), we consider an approximation scheme.
Following the method in \cite{Kw,Lem1,Lem}, we consider the mollified system in $\mathbb{R}^{3} \times(0, \infty)$: \begin{equation}\label{PNS^e} \begin{cases} w^\epsilon_{t}-\Delta w^\epsilon+\left(\mathcal{J}_{\epsilon}\left(w^{\epsilon}\right) \cdot \nabla\right)w^{\epsilon}+(\mathcal{J}_{\epsilon}(w^\epsilon) \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right)\mathcal{J}_{\epsilon}(w^\epsilon) +\nabla \pi^\epsilon=0,\\ \nabla\cdot w^\epsilon=0,\\ w^\epsilon(x, 0) =w_{0}(x), \end{cases} \end{equation} where $\mathcal{J}_{\epsilon}(v)=v * \eta_{\epsilon}$, $\epsilon>0$, and the mollifier $\eta_{\epsilon}(x)=\epsilon^{-3} \eta\left(\frac{x}{\epsilon}\right)$ with $\eta\in C_c^{\infty}(B(0,1))$ positive and $\int \eta\, dx=1.$ By a classical approximation method, there exists a sufficiently smooth solution $w^{\epsilon}$, for which the following computations are justified. Similarly to (\ref{remk_1}), we have \begin{eqnarray*} &&r(t)^{2}\|w^\epsilon(\cdot,t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w^\epsilon(\cdot,t)\|_{r(t)}\\ &=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)} \ln \left(|w^\epsilon(\cdot,t)|^{r(t)} /\|w^\epsilon(\cdot,t)\|_{r(t)}^{r(t)}\right) \mathrm{d} x\\ &&+r(t)^{2} \int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)-2} w^\epsilon_i\Big( \partial_j\partial_j w^\epsilon_i-\partial_j (\mathcal{J}_{\epsilon}(w^\epsilon)_iw^\epsilon_j +\mathcal{J}_{\epsilon}(w^\epsilon)_iv_j+v_i\mathcal{J}_{\epsilon}(w^\epsilon)_j)-\partial_i\pi^\epsilon\Big) \mathrm{d}x\\ &:=&\dot{r}(t) \int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)} \ln \left(|w^\epsilon(\cdot,t)|^{r(t)} /\|w^\epsilon(\cdot,t)\|_{r(t)}^{r(t)}\right) \mathrm{d} x\\ && +\uppercase\expandafter{\romannumeral1}^\epsilon+\uppercase\expandafter{\romannumeral2}^\epsilon +\uppercase\expandafter{\romannumeral3}^\epsilon+\uppercase\expandafter{\romannumeral4}^\epsilon+\uppercase\expandafter{\romannumeral5}^\epsilon.
\end{eqnarray*} Integration by parts shows that \begin{equation} \begin{aligned} \uppercase\expandafter{\romannumeral1}^\epsilon&= -r(t)^{2} \int_{\mathbb{R}^{3}} \partial_j(|w^\epsilon(\cdot, t)|^{r(t)-2} w^\epsilon_i) \partial_j w^\epsilon_idx \\ &= -4r(t)(r(t)-2)\|\nabla (|w^\epsilon(\cdot,t)|^{\frac{r(t)}{2}})\|_{L^2}^2-r(t)^{2} \int_{\mathbb{R}^{3}} |w^\epsilon(t)|^{r(t)-2}|\nabla w^\epsilon|^2dx. \end{aligned} \end{equation} Similarly to (\ref{important_1})--(\ref{52}), we have \begin{eqnarray} \uppercase\expandafter{\romannumeral2}^\epsilon&=& -{r(t)^{2} }\int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)-2} w^\epsilon_i\partial_j (\mathcal{J}_{\epsilon}(w^\epsilon)_iw^\epsilon_j)\mathrm{d}x\nonumber\\ &=& {r(t)^{2} }\int_{\mathbb{R}^{3}}\partial_j(|w^\epsilon(\cdot,t)|^{r(t)-2}) w^\epsilon_i\mathcal{J}_{\epsilon}(w^\epsilon)_iw^\epsilon_j\mathrm{d}x +{r(t)^{2} }\int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)-2}\partial_j( w^\epsilon_i)\mathcal{J}_{\epsilon}(w^\epsilon)_iw^\epsilon_j\mathrm{d}x\nonumber\\ &\leq& r(t)(r(t)-2)C\varepsilon_0 \left\|\nabla\left(|w^\epsilon(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}, \end{eqnarray} \begin{equation} \uppercase\expandafter{\romannumeral3}^\epsilon\leq 2 r(t)(r(t)-2) K_{c}\left\|\nabla\left(|w^\epsilon(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2}, \end{equation} \begin{eqnarray} \uppercase\expandafter{\romannumeral4}^\epsilon&\leq& \frac{r(t)^{2}}{2} K_c\int_{\mathbb{R}^{3}} |w^\epsilon(\cdot,t)|^{r(t)-2} |\nabla w^\epsilon(\cdot,t)|^{2} dx\\ &&+ (4 r(t)(r(t)-2)+2r(t)^{2}) K_c\left\|\nabla\left(|w^\epsilon(t)|^{\frac{r(t)}{2}}\right)\right\|_{L^{2}}^{2},\nonumber \end{eqnarray} \begin{equation} \uppercase\expandafter{\romannumeral5}^\epsilon\leq 4 r(t)(r(t)-2)(2C_{r} K_c+H_{\frac{3r}{r+1}}C \varepsilon_0) \| \nabla(|w^\epsilon(\cdot, t)|^{\frac{r}{2}})\|_{L^{2}}^2, \end{equation} and \begin{eqnarray} &&r(t)^{2}\|w^\epsilon(t)\|_{r(t)}^{r(t)-1} \frac{\mathrm{d}}{\mathrm{d} t}\|w^\epsilon(t)\|_{r(t)}\\ &\leq&
\dot{r}(t) \int_{\mathbb{R}^{3}} |w^\epsilon(t)|^{r(t)} \ln \left(|w^\epsilon(t)|^{r(t)} /\|w^\epsilon(t)\|_{r(t)}^{r(t)}\right) \mathrm{d}x\nonumber\\ &&-\left(4r(t)(r(t)-2)\mu-2r(t)^2 K_c\right)\|\nabla (|w^\epsilon(t)|^{\frac{r(t)}{2}})\|_{L^2}^2,\nonumber \end{eqnarray} with \begin{equation} \mu=\inf_{3\leq r\leq q}\left\{1-\frac14C\varepsilon_0-\frac12 K_c-K_c-2C_rK_c-H_{\frac{3r}{r+1}}C \varepsilon_0\right\}> \frac 12. \end{equation} By the same procedure as in the proof of (\ref{w}), we obtain \begin{equation} \|w^\epsilon(\cdot, t)\|_{L^q}\leq (\frac13-\frac1q)^{\frac32 (\frac13-\frac1q)}t^{\frac32\left(\frac1q-\frac13\right)}\|w^\epsilon(\cdot, 0)\|_{L^3}. \end{equation} By compactness and convergence arguments, the solution $w$ satisfies (\ref{nonlinear decay}). Finally, we will prove (\ref{L^3 stability-2}). Since $w_0\in L^3$ and $\|w_0\|_{L^3}<\varepsilon_0$, there exists a sequence $\{w_{0,n}\}\subset L^2\cap L^3$ such that $$ w_{0,n}\rightarrow w_0 \text{\ in\ }L^3 \text{\ as } n\rightarrow \infty. $$ According to Theorem \ref{s-w-uniqueness}, Corollary \ref{corollary} and Remark \ref{remark-Kar}, we have $$ \lim_{t\rightarrow \infty}\|w_n(\cdot, t)\|_{L^3}=0, $$ where $w_n$ denotes the solution with initial data $w_{0,n}$. By a proof similar to that of (\ref{Z-crucial}), when $\|w_{0,n}-w_0\|_{L^3}\leq (4C^2e^{2C \int_0^T \|w\|_{L^6}^4 dt})^{-1},$ we have $$ \|w_n-w\|_{L_t^{\infty}([0,\infty);L_x^3)}\leq 2C\|w_{0,n}-w_0\|_{L^3}e^{C\int_0^{\infty} \|w\|_{L^6}^4 dt}, $$ for a positive constant $C.$ According to Theorem \ref{L^3 well-posedness}, we have $$ \int_0^{\infty} \|w\|_{L^6}^4 dt\leq C \|w_0\|^4_{L^3(\mathbb{R}^3)}. $$ Hence, $$ \lim_{n\rightarrow \infty}\|w_n-w\|_{L_t^{\infty}([0,\infty);L_x^3)}=0. $$ Therefore, (\ref{L^3 stability-2}) holds.
{\hfill $\square$\medskip} \section{The linear operator $\mathcal{L}$ }\label{linear} By Lemmas \ref{alemm} and \ref{p_alemm}, as stated in Sections \ref{Sec2} and \ref{Sec4}, $-\mathcal{L}$ is the infinitesimal generator of the strongly continuous semigroup of contractions $e^{-t\mathcal{L}}$ of bounded linear operators on $L_{\sigma}^q(\mathbb{R}^3)$, $1< q<\infty$. In this section, we prove that $e^{-t\mathcal{L}}$ is an analytic semigroup. We consider the following system \begin{equation}\label{lambda-PNS} \begin{cases} \lambda u -\Delta u+(u \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right) u+\nabla p=f,\\ \nabla\cdot u=0. \end{cases} \end{equation} For $\delta>0$ small, set $\Sigma_{\delta}=\{ \lambda\in \mathbb{C}\backslash \{ 0 \}: |\arg \lambda|<\frac{\pi}{2}+ \delta\}$. It is easy to see that for $\lambda = \sigma +\sqrt{-1}\tau \in \Sigma_{\delta},$ with $\sigma$, $\tau$ real, if $\sigma <0,$ then \begin{equation}\label{2-11} |\sigma |<\delta|\tau|. \end{equation} \begin{theorem}\label{linear ope property} For $1<q<\infty,$ there exist some positive constants $\delta$ and $\bar{c}_q$ which depend only on $q$ such that for any $|c|>\bar{c}_q,$ $\lambda \in \Sigma_{\delta},$ and $u\in C_{c,\sigma}^{\infty}(\mathbb{R}^3)$ satisfying system (\ref{lambda-PNS}), we have \begin{equation}\label{2-1} \|u\|_{L^q}\leq \frac{C}{|\lambda|}\|f\|_{L^q}, \end{equation} where $C$ is a constant depending only on $q$ and $\delta.$ Consequently, $e^{-t\mathcal{L}}$ is an analytic semigroup of bounded linear operators on $L^q_{\sigma}(\mathbb{R}^3)$ in the sector $\{\lambda: |\arg \lambda|<\delta\}$. \end{theorem} The last statement in the above theorem follows from estimate (\ref{2-1}), together with the fact that $e^{-t\mathcal{L}}$ is a strongly continuous semigroup of contractions on $L_{\sigma}^q(\mathbb{R}^3)$ for $1< q<\infty$, which was established in Sections \ref{Sec2} and \ref{Sec4}; see Theorem 1.5.2 in \cite{Paz}.
The following theorem, for the operator $\mathcal{L}$ acting on scalar functions $u$, can be proved using the arguments in the proof of Theorem \ref{linear ope property}; the proof is much simpler since no pressure term is present. \begin{theorem} For $n\geq 3$ and $1<q<\infty,$ there exist some positive constants $\varepsilon$ and $\delta$ which depend only on $n$ and $q$ such that the operator $$\mathcal{L}u:=-\Delta u+a(x)u+b(x)\cdot \nabla u,$$ with $|a(x)|\leq \varepsilon|x|^{-2}$ and $|b(x)|\leq \varepsilon|x|^{-1}$ for all $x\in \mathbb{R}^n,$ has the property that $e^{-t\mathcal{L}}$ is an analytic semigroup of bounded linear operators on $L^q(\mathbb{R}^n)$ in the sector $\{\lambda: |\arg \lambda|<\delta\}$. \end{theorem} To prove Theorem \ref{linear ope property}, we need the following lemma. \begin{lemma}\label{sec3-lem1} Every function $u$ satisfies \begin{equation} |\nabla(|u|^2)|^2\leq 4|\nabla u|^2|u|^2. \end{equation} Consequently, for $1\leq q\leq2,$ we have \begin{equation} \frac{q-2}{4}\int_{\mathbb{R}^3}|u|^{q-4} |\nabla (|u|^2)|^2dx+ \int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2}dx \geq (q-1)\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2}dx. \end{equation} \end{lemma} \noindent\textbf{Proof.} We have \begin{equation} |\nabla(|u|^2)|^2=\sum_j|\partial_j(|u|^2)|^2=\sum_j|\partial_j\langle u,u\rangle|^2\leq4\sum_j|\langle\partial_ju,u\rangle|^2. \end{equation} Using the Cauchy--Schwarz inequality for each $j$, we have \begin{eqnarray} |\nabla(|u|^2)|^2\leq 4\sum_j(|\partial_j u|^2|u|^2)=4|\nabla u|^2|u|^2. \end{eqnarray} For $1\leq q\leq 2$ we have $q-2\leq 0$, so this pointwise bound yields $\frac{q-2}{4}|u|^{q-4}|\nabla(|u|^2)|^2\geq (q-2)|\nabla u|^2|u|^{q-2}$; integrating and adding $\int_{\mathbb{R}^3}|\nabla u|^2|u|^{q-2}dx$ to both sides gives the second inequality. {\hfill $\square$\medskip} \noindent\textbf{Proof of Theorem \ref{linear ope property}.} The value of $\delta$ will be chosen in the proof below.
Multiplying equation $(\ref{lambda-PNS})_1$ by $|u|^{q-2}\overline{u}$ and integrating over $\mathbb{R}^3$, we obtain \begin{equation}\label{Sec3_0} \begin{aligned} \int_{\mathbb{R}^3} \nabla u \cdot \nabla(|u|^{q-2}\overline{u}) dx+ \lambda \int_{\mathbb{R}^3} |u|^q dx+ \int_{\mathbb{R}^3} (v_c \cdot \nabla u )\cdot (|u|^{q-2}\overline{u}) dx&\\ + \int_{\mathbb{R}^3} (u \cdot \nabla v_c )\cdot (|u|^{q-2}\overline{u}) dx+\int_{\mathbb{R}^3} \nabla p \cdot (|u|^{q-2}\overline{u}) dx &= \int_{\mathbb{R}^3} f \cdot (|u|^{q-2}\overline{u}) dx. \end{aligned} \end{equation} Denote the five terms on the left-hand side of (\ref{Sec3_0}) by $\uppercase\expandafter{\romannumeral 1}$--$\uppercase\expandafter{\romannumeral 5}$, so that $\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2}+\uppercase\expandafter{\romannumeral 3}+\uppercase\expandafter{\romannumeral 4}+\uppercase\expandafter{\romannumeral 5} =\int_{\mathbb{R}^3} f \cdot (|u|^{q-2}\overline{u} )dx. $ For the first part, we have \begin{eqnarray*} \uppercase\expandafter{\romannumeral 1} &=&\int_{\mathbb{R}^3} \partial_j u_i \partial_j\left((u_m \overline{u}_m)^{\frac{q-2}{2}} \overline{u}_i\right) dx \\ &=&\int_{\mathbb{R}^3} (\partial_j u_i) \overline{u}_i \frac{q-2}{2}|u|^{q-4}\partial_j(|u|^2) dx + \int_{\mathbb{R}^3} (\partial_j u_i)(\overline{\partial_j u_i})|u|^{q-2} dx\\ &=&\int_{\mathbb{R}^3} (\partial_j u_i) \overline{u}_i \frac{q-2}{2}|u|^{q-4}\left[(\partial_ju_m) \overline{u}_m+ \overline {(\partial_ju_m) \overline{u}_m}\right] dx\\ && + \int_{\mathbb{R}^3} (\partial_j u_i)(\overline{\partial_j u_i})|u|^{q-2} dx\\ &=&\int_{\mathbb{R}^3} (\partial_j u_i) \overline{u}_i \frac{q-2}{2}|u|^{q-4}\left[(\partial_ju_m) \overline{u}_m+ \overline {(\partial_ju_m) \overline{u}_m}\right] dx + \int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx. \end{eqnarray*} Denote $\xi_j =(\partial_j u_m) \overline{u}_m=a_j+\sqrt{-1}b_j$; then \begin{eqnarray} \partial_j(|u|^2)= (\partial_j u_m) \overline{u}_m+ \overline {(\partial_j u_m) \overline{u}_m}=2 \text{ Re }\xi_j= 2 a_j.
\end{eqnarray} Therefore, we have \begin{eqnarray*} \uppercase\expandafter{\romannumeral 1} &=&\int_{\mathbb{R}^3} \frac{q-2}{2}|u|^{q-4}\xi_j\left[\xi_j+ \overline{\xi}_j\right] + |\nabla u|^2|u|^{q-2} dx\\ &=&\int_{\mathbb{R}^3} \frac{q-2}{2}|u|^{q-4}(a_j+\sqrt{-1}b_j)\left(2a_j\right) + |\nabla u|^2|u|^{q-2} dx\\ &=&\int_{\mathbb{R}^3}(q-2)|u|^{q-4} \sum_j(a_j)^2 dx+ \sqrt{-1} \int_{\mathbb{R}^3}\frac{q-2}{2}|u|^{q-4} 2a_j b_j dx + \int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx\\ &=&\int_{\mathbb{R}^3}\frac{q-2}{4}|u|^{q-4} |\nabla (|u|^2)|^2 dx+ \sqrt{-1} \int_{\mathbb{R}^3}\frac{q-2}{2}|u|^{q-4} 2a_j b_j dx + \int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx. \end{eqnarray*} For the second part $\uppercase\expandafter{\romannumeral 2}$, we have \begin{eqnarray} \uppercase\expandafter{\romannumeral 2} = (\sigma +\sqrt{-1}\tau) \int_{\mathbb{R}^3} |u|^q dx. \end{eqnarray} It is easy to see that \begin{eqnarray}\label{ineqaulity} 2 \sum_j|a_j||b_j|\leq |\nabla u|^2|u|^2. \end{eqnarray} Then, using Lemma \ref{sec3-lem1} and (\ref{ineqaulity}), we have \begin{eqnarray}\label{ineq3_1} |\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2} |&\geq & \text{Re }(\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2}) +|\text{Im }(\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2}) |\nonumber\\ &\geq& \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2}dx+\sigma\int_{\mathbb{R}^3}|u|^{q} dx\nonumber +\left|\tau\int_{\mathbb{R}^3}|u|^{q} dx-\int_{\mathbb{R}^3} \frac{q-2}{2}|u|^{q-4}2a_jb_jdx\right|\nonumber\\ &\geq& \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2}dx+\sigma\int_{\mathbb{R}^3}|u|^{q} dx\nonumber\\ &&+\left||\tau|\int_{\mathbb{R}^3}|u|^{q} dx-\left|\frac{q-2}{2}\right|\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2}dx\right|.
\end{eqnarray} We distinguish between two cases: \noindent\textbf{Case 1.} $ \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx\geq 8\delta|\tau| \int_{\mathbb{R}^3}|u|^{q} dx.$ \noindent\textbf{Case 2.} $ \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx< 8\delta|\tau| \int_{\mathbb{R}^3}|u|^{q} dx.$ In Case 1, we deduce from (\ref{ineq3_1}), using (\ref{2-11}) and requiring $0<\delta\leq 1,$ that \begin{eqnarray} |\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2} |&\geq\frac12\text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+(4\delta|\tau|+\sigma) \int_{\mathbb{R}^3}|u|^{q} dx\nonumber\\ &\geq \frac12\text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+\delta(|\tau|+|\sigma|) \int_{\mathbb{R}^3}|u|^{q} dx. \end{eqnarray} In Case 2, we derive from (\ref{ineq3_1}) that \begin{equation} |\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2} |\geq \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+(|\tau|+\sigma-\frac{4|q-2|\delta |\tau|}{\text{min}\{q-1,1\}}) \int_{\mathbb{R}^3}|u|^{q} dx. \end{equation} We now further require that $\delta$ satisfies $8|q-2|\delta<\text{min}\{q-1,1\}$ and $\delta\leq \frac 15.$ Then we have \begin{eqnarray*} |\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2} |&\geq& \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+(\frac 12|\tau|+\sigma) \int_{\mathbb{R}^3}|u|^{q} dx\\ &\geq& \text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+\frac 14(|\tau|+\sigma) \int_{\mathbb{R}^3}|u|^{q} dx. \end{eqnarray*} So in both cases, we have proved that \begin{equation}\label{Sec3_12} |\uppercase\expandafter{\romannumeral 1}+\uppercase\expandafter{\romannumeral 2} |\geq \frac12\text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+\delta(|\tau|+|\sigma|) \int_{\mathbb{R}^3}|u|^{q} dx. 
\end{equation} By H\"{o}lder's inequality, the Hardy inequality, the Cauchy inequality and Lemma \ref{sec3-lem1}, we have \begin{eqnarray}\label{Sec3_3} |\uppercase\expandafter{\romannumeral 3}|&\leq& K_c \int_{\mathbb{R}^3} \frac{|\nabla u|}{|x|} |u|^{q-1}dx\nonumber\\ &\leq& K_c \left\| |\nabla u||u|^{\frac{q-2}{2}}\right\|_{L^2}\left\|\frac{|u|^{\frac q2}}{|x|}\right\|_{L^2}\nonumber\\ &\leq& 2K_c \left\| |\nabla u||u|^{\frac{q-2}{2}}\right\|_{L^2}\left\|\nabla(|u|^{\frac q2})\right\|_{L^2}\\ &\leq& 2K_c \int_{\mathbb{R}^3} |\nabla u|^2 |u|^{q-2} dx+ CK_c \int_{\mathbb{R}^3} |u|^{q-4}|\nabla(| u|^2)|^2 dx\nonumber\\ &\leq& CK_c \int_{\mathbb{R}^3} |\nabla u|^2 |u|^{q-2} dx.\nonumber \end{eqnarray} Similarly, by the Hardy inequality and Lemma \ref{sec3-lem1}, we deduce \begin{eqnarray}\label{Sec3_4} |\uppercase\expandafter{\romannumeral 4}|&\leq& K_c \int_{\mathbb{R}^3} \frac{|u|^{q}}{|x|^2}dx\nonumber\\ &\leq& 4 K_c \int_{\mathbb{R}^3} |\nabla (|u|^{\frac q2})|^2dx\\ &\leq& CK_c \int_{\mathbb{R}^3} |\nabla u|^2 |u|^{q-2} dx.\nonumber \end{eqnarray} According to (\ref{lambda-PNS}), we have \begin{eqnarray*} p= \frac{\text{div}}{\Delta}f-\frac{\partial_i\partial_j}{\Delta}(v_c \otimes u+ u \otimes v_c)_{ij}. \end{eqnarray*} By integration by parts, we have \begin{equation}\label{p} \uppercase\expandafter{\romannumeral 5}=\int_{\mathbb{R}^3} \frac{\partial_i\partial_j}{\Delta}f \cdot(|u|^{q-2} \overline{u}) dx +\int_{\mathbb{R}^3} \frac{\partial_i\partial_j}{\Delta}(v_c \otimes u+ u \otimes v_c)_{ij} \nabla\cdot(|u|^{q-2} \overline{u}) dx. \end{equation} By the boundedness of the Riesz transforms on weighted $L^p$ spaces (Theorem 9.4.6 in \cite{Gra}) and H\"{o}lder's inequality, we have the following estimate \begin{eqnarray} \int_{\mathbb{R}^3} \frac{\partial_i\partial_j}{\Delta}f \cdot(|u|^{q-2} \overline{u}) dx\leq C\| f\|_{L^q}\| u\|_{L^q}^{q-1}. 
\end{eqnarray} By H\"{o}lder's inequality, the Hardy inequality, the Sobolev embedding, the boundedness of the Riesz transforms on weighted $L^p$ spaces (Theorem 9.4.6 in \cite{Gra}) and Lemma \ref{sec3-lem1}, we have \begin{eqnarray}\label{p_2} &&\int_{\mathbb{R}^3} \frac{\partial_i\partial_j}{\Delta}(v_c \otimes u+ u \otimes v_c)_{ij} \nabla\cdot(|u|^{q-2} \overline{u}) dx \nonumber\\ &\leq& C\||x|^{\frac{q-2}{q}}\left(u\otimes v_c+v_c\otimes u\right)\|_{L^{q}} \|\nabla (|u|^{\frac q2})\|_{L^{2}} \left\|\frac{|u|^{\frac q2-1}}{|x|^{\frac{q-2}{q}}}\right\|_{L^{\frac{2q}{q-2}}}\nonumber\\ &\leq& C\||x|v_c\|_{L^{\infty}} \left\|\frac{u}{|x|^{\frac2q}}\right\|_{L^q}\|\nabla (|u|^{\frac q2})\|_{L^{2}} \left\|\nabla(|u|^{\frac q2})\right\|_{L^{2}}^{\frac {q-2} {q} }\\ &\leq& C K_c\left\|\frac{u}{|x|^{\frac2q}}\right\|_{L^q}\|\nabla (|u|^{\frac{q}{2}})\|_{L^2}^{\frac{2q-2}{q}}\nonumber\\ &\leq& C K_c \|\nabla (|u|^{\frac{q}{2}})\|^2_{L^2}\nonumber\\ &\leq& CK_c \int_{\mathbb{R}^3} |\nabla u|^2 |u|^{q-2} dx.\nonumber \end{eqnarray} Combining (\ref{p})-(\ref{p_2}), we deduce \begin{eqnarray}\label{Sec3_5} |\uppercase\expandafter{\romannumeral 5}|\leq C\| f\|_{L^q}\| u\|_{L^q}^{q-1}+CK_c \int_{\mathbb{R}^3} |\nabla u|^2 |u|^{q-2} dx. \end{eqnarray} Combining (\ref{Sec3_0}), (\ref{Sec3_12})-(\ref{Sec3_4}) and (\ref{Sec3_5}) with the smallness condition on $K_c$, by H\"{o}lder's inequality we have \begin{equation} \begin{aligned} \frac12\text{min}\{q-1,1\}\int_{\mathbb{R}^3} |\nabla u|^2|u|^{q-2} dx+\delta(|\tau|+|\sigma|) \int_{\mathbb{R}^3}|u|^{q} dx\leq C\| f\|_{L^q}\| u\|_{L^q}^{q-1}. \end{aligned} \end{equation} Since $\lambda=\sigma+\sqrt{-1}\tau$, we deduce \begin{equation} \begin{aligned} C(\delta)|\lambda|\| u\|_{L^q}\leq C\| f\|_{L^q}, \end{aligned} \end{equation} which implies (\ref{2-1}). 
{\hfill $\square$\medskip} \section{Weak-Strong Uniqueness}\label{w-s} In this section, we will prove Theorem \ref{s-w-uniqueness} and Proposition \ref{prop-s-w-uniqueness}, and briefly discuss Corollary \ref{corollary}. First, we give the detailed proof of Theorem \ref{s-w-uniqueness}. \noindent\textbf{Proof of Theorem \ref{s-w-uniqueness}.} Following the proof of Theorem 4.4 in \cite{Tsa}, setting $g=v-u$, we have \begin{equation}\label{f} \begin{cases} \partial_t g-\Delta g+\nabla \pi=-((u+g) \cdot \nabla) g-(g\cdot \nabla) u-g\cdot \nabla v_c-v_c\cdot \nabla g,\\ \nabla\cdot g=0,\\ g(x, 0) =0. \end{cases} \end{equation} Using $g$ itself as a test function and integrating in time from $0$ to $t$, we have \begin{equation} \int \frac{|g|^{2}}{2} d x+\int_0^t\int|\nabla g|^{2} d xdt\leq\int_0^t\int (u+v_c) \cdot(g \cdot \nabla) g d xdt. \end{equation} Denote $E(t)=\operatorname{ess} \sup _{s<t}\|g(s)\|_{2}^{2}+\int_{0}^{t}\|\nabla g\|_{2}^{2}d \tau$ and $t_{0}=\sup \{t \in[0,T] : g(s)=0 \text { if } 0<s<t\}$. 
We claim that $t_0=T.$ Arguing by contradiction, we assume that $t_0<T.$ Since \begin{equation} \left|\iint u v \nabla w d x d t \right| \leq C\|u\|_{L_{t}^{s} L_{x}^{q}}\|v\|_{L_{t}^{\infty} L_{x}^{2}}^{2 / s}\|v\|_{L_{t}^{2} L_{x}^{6}}^{3 / q}\|\nabla w\|_{L_{t, x}^{2}}, \end{equation} for $\frac 3q+ \frac 2s=1$ with $1\leq q,s \leq \infty,$ we have \begin{equation} \left|\int_{t_{0}}^{t} \int u \cdot(g \cdot \nabla) g d x d \tau\right|\leq C\|u\|_{L_{t}^{s} L_{x}^{q}}E(t), \end{equation} for $t\in [t_0,T]$. By H\"{o}lder's inequality, the Hardy inequality and Lemma \ref{|x|v_c}, we have \begin{eqnarray} \left|\int_{t_{0}}^{t} \int v_c \cdot(g \cdot \nabla) g d x d \tau\right| &\leq& \int_{t_{0}}^{t} \||x|v_c\|_{L^\infty}\left\|\frac{g}{|x|}\right\|_{L^2}\|\nabla g\|_{L^2}d \tau\nonumber\\ &\leq& 2\int_{t_{0}}^{t} \||x|v_c\|_{L^\infty}\|\nabla g\|^2_{L^2}d \tau\nonumber\\ &\leq& 2K_c\|\nabla g\|_{L_{t,x}^2}^2\nonumber\\ &\leq& 2K_cE(t), \end{eqnarray} for $t\in [t_0,T].$ Hence, there holds \begin{equation} \begin{aligned} E(t)\leq C\|u\|_{L_{t}^{s}([t_0,t]; L_{x}^{q})}E(t)+2 K_cE(t). \end{aligned} \end{equation} If $s<\infty,$ we have $C\|u\|_{L_{t}^{s}([t_0,t]; L_{x}^{q})}<\frac 14$ for $t$ sufficiently close to $t_0$. If $s=\infty,$ we need $C\|u\|_{L_{t}^{\infty}([t_0,t]; L_{x}^{3})}<\frac 14$. Moreover, $K_c<\frac14$ by assumption. Therefore, $E(s)=0$ for all $s\in[t_0,t],$ which contradicts the definition of $t_0$. Hence $t_0=T$, that is, $g(t)=0$ for all $t\in [0,T]$. {\hfill $\square$\medskip} Next, we give the proof of Proposition \ref{prop-s-w-uniqueness}. \noindent\textbf{Proof of Proposition \ref{prop-s-w-uniqueness}.} For $p\geq 3,$ our goal is to show that the $L^p$ mild solution $w$ is an $L^2$-weak solution. The crucial part is to prove that $w\in C_w([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$. Set $w=a+z$ as in Section \ref{Sec2}. 
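For the reader's convenience, we sketch the proof of the trilinear estimate used in the proof of Theorem \ref{s-w-uniqueness} above (a standard argument, recorded here under the assumption $\frac 3q+\frac 2s=1$). By H\"{o}lder's inequality in space with $\frac1r=\frac12-\frac1q$ and the interpolation $\|v\|_{L^{r}_x}\leq \|v\|_{L^{2}_x}^{2/s}\|v\|_{L^{6}_x}^{3/q}$, we have
\begin{equation*}
\left|\iint u v \nabla w \,d x\, d t\right| \leq \int_0^T \|u\|_{L^{q}_x}\,\|v\|_{L^{2}_x}^{2/s}\,\|v\|_{L^{6}_x}^{3/q}\,\|\nabla w\|_{L^{2}_x}\, dt,
\end{equation*}
and H\"{o}lder's inequality in time with exponents $s$, $\infty$, $\frac{2q}{3}$ and $2$ for the four factors gives the stated bound.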
We will prove $a\in C_w([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$ and $z\in C_w([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$ as follows. Multiplying $(\ref{a})_1$ by $a,$ then integrating it on $\mathbb{R}^3$, we have \begin{equation}\label{4.7} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^2_{L^2}+\|\nabla a\|^2_{L^2} =-\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot a dx-\int_{\mathbb{R}^3} \nabla \pi_1 \cdot a dx. \end{equation} By similar estimate as (\ref{prop- rh1}), using integration by parts and div$a=0$, we obtain \begin{equation} -\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot a dx \leq 2K_c\left\|\nabla a\right\|_{L^2}^2, \end{equation} and \begin{equation}\label{4.9} -\int_{\mathbb{R}^3} \nabla \pi_1 \cdot a dx=0. \end{equation} From (\ref{4.7})-(\ref{4.9}), we have \begin{equation} \frac 12\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^2_{L^2}+\|\nabla a\|^2_{L^2}\leq 2K_c \|\nabla a\|^2_{L^2}. \end{equation} Since $|c|>{c}_p$ where ${c}_p$ is as in Theorem \ref{p>3 result}, we can guarantee $1-2 K_c>0.$ Combining with similar argument as (\ref{con_1}), we have $a\in C([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$. When $p\in[3,4]$, using $(w_1,w_2)=(w,w),$ multiplying $(\ref{N})_1$ by $z$ and integrating it on $\mathbb{R}^3$, we obtain \begin{eqnarray} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\|z(t)\|^2_{L^2}+\|\nabla z\|^2_{L^2}\nonumber\\ &=&-\int_{\mathbb{R}^3}\text{div}(z\otimes v_c+v_c\otimes z)\cdot z dx-\int_{\mathbb{R}^3}\text{div}(w\otimes w)\cdot z dx-\int_{\mathbb{R}^3} \nabla \pi_2 \cdot z dx.\label{4.7-0} \end{eqnarray} By similar argument as (\ref{aaaaa}), using integration by parts, we have \begin{equation} \begin{aligned} -\int_{\mathbb{R}^3}\text{div}(z\otimes v_c+v_c\otimes z)\cdot z dx\leq 2K_c\|\nabla z\|_{L^2}^2. 
\end{aligned} \end{equation} By integration by parts, H\"{o}lder's inequality and the Cauchy inequality, we have \begin{equation} -\int_{\mathbb{R}^3}\text{div}(w\otimes w)\cdot z dx=\int_{\mathbb{R}^3}(w\otimes w)\cdot\nabla z dx \leq \|w\otimes w\|_{L^2}\|\nabla z \|_{L^2} \leq C\|w\|_{L^4}^4+\frac {1}{10}\|\nabla z \|^2_{L^2}. \end{equation} For the pressure term, using integration by parts and $\text{div}\, z=0$, we get \begin{equation} -\int_{\mathbb{R}^3} \nabla \pi_2 \cdot z dx=0.\label{4.9-0} \end{equation} Then, from (\ref{4.7-0})-(\ref{4.9-0}), we have \begin{equation} \label{4.15} \frac12\left\|z\right\|_{L_T^{\infty}L_x^2}^2+(\frac9{10}-2K_c)\left\|\nabla z\right\|_{L_T^2L_x^2}^2\leq C \|w\|_{L_T^{4}L_x^4}^4. \end{equation} Since the $L^p$ mild solution $w\in L_T^{\infty}L_x^p\cap L_T^{\frac{4p}{3}}L_x^{2p}$, $p\in[3,4]$, by interpolation theory, we have $ w\in L_T^{\frac{8p}{3(4-p)}}L_x^4 $, and $z\in L^\infty([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$. By a similar argument as for (\ref{con_1}), we have $z\in C([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$. Then $w\in C_w([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$, and one can easily check that $w$ is an $L^2$-weak solution of system (\ref{PNS}) on $[0,T]$; we omit the details. When $p>4$, since the $L^p$ mild solution satisfies $w\in L_T^{\infty}L_x^p\cap L_T^{\frac{4p}{3}}L_x^{2p}$, from Lemma \ref{p_lem3} we obtain \begin{equation}\label{4.10} \|z\|_{C_tL^\frac{p}{2}\cap L^{\frac{2p}{3}}_tL^{p}_x}\leq C \|w\|_{L^\frac{2p}{p-3}_tL^{p}_x}\|w\|_{L^{\frac{2p}{3}}_tL^{p}_x}, \end{equation} and $z\in C([0,T];L_{x}^{\frac{p}{2}})\cap L^{\frac{2p}{3}}_tL^{p}_x$. Combining with $a\in L_T^{\infty}L_x^p\cap L_T^{\frac{4p}{3}}L_x^{2p}\cap C([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$, we have that $w\in C([0,T];L_{x}^{\frac{p}{2}})\cap L^{\frac{2p}{3}}_tL^{p}_x$. By induction, we get $w\in C([0,T];L_{x}^{\frac{p}{2^K}})\cap L^{\frac{p2^{2-K}}{3}}_tL^{ p2^{1-K}}_x$, for some $K\in \mathbb{Z}^+$ such that $2<\frac{p}{2^K}\leq 4$. 
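For concreteness, with $p=16$ the above induction reads (a worked instance of the displayed bootstrap):
\begin{equation*}
w\in C([0,T];L_{x}^{8})\cap L^{\frac{32}{3}}_tL^{16}_x \quad (K=1), \qquad w\in C([0,T];L_{x}^{4})\cap L^{\frac{16}{3}}_tL^{8}_x \quad (K=2),
\end{equation*}
so that $\frac{p}{2^K}=4\in(2,4]$ and the energy argument for $p\in[3,4]$ applies.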
From the argument in (\ref{4.15}), we have that $z\in C([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$. Therefore, the $L^p$ mild solution satisfies $w\in C_w([0,T];L_{x}^2)\cap L_T^2 (\dot{H}_{x}^1)$, and one can easily check that $w$ is an $L^2$-weak solution of system (\ref{PNS}) on $[0,T]$; we omit the details. {\hfill $\square$\medskip} Based on the proof of Proposition \ref{prop-s-w-uniqueness}, we have the following global-in-time result. For simplicity, we omit the detailed proof. \begin{corollary}\label{cor2} For $p\geq 3,$ $T>0,$ let ${c}_p$ and $\varepsilon_{0}$ be as in Theorem \ref{p>3 result}, and assume $|c|>{c}_p$. For $w_0\in L_{\sigma}^p(\mathbb{R}^3)\cap L_{\sigma}^2(\mathbb{R}^3)$ and $\left\|{w}_{0}\right\|_{L^{3}\left(\mathbb{R}^{3}\right)}<\varepsilon_{0} ,$ let $w$ be a global $L^p$ mild solution of system (\ref{PNS}). Then $w$ is a global $L^2$-weak solution of system (\ref{PNS}). \end{corollary} Combining Theorem \ref{p>3 result} and Corollary \ref{cor2}, we deduce Corollary \ref{corollary}. \section{ Global $L^2+L^3$ weak solution}\label{Sec3} In this section, we will prove Theorem \ref{w-global-weak-existence}, i.e., we will establish the global existence of an $L^2+L^3$ weak solution to system (\ref{PNS}). Note that when we consider the existence of weak solutions to the Navier-Stokes system, there are essentially two methods: the energy method and the perturbation theory. The energy method gives the global existence for any initial data $v_0\in L_{\sigma}^2(\mathbb{R}^3)$. We cannot use this method directly, since $L^{3}(\mathbb{R}^3)$ is not contained in $L^{2}(\mathbb{R}^3)$ on the whole space $\mathbb{R}^3$. In the perturbation theory, by means of the contraction mapping theorem, there exists a unique global solution to the Navier-Stokes system for small initial data $v_0\in L_{\sigma}^3(\mathbb{R}^3)$. Neither method gives direct results on the global existence for arbitrary $v_0\in L_{\sigma}^3(\mathbb{R}^3)$. 
Hence, many authors have developed various approaches to adapt the theory of the weak solutions so that it could allow $v_0\in L_{\sigma}^3(\mathbb{R}^3)$. Calder{\'o}n \cite{Cal} introduced a method in which an initial datum $v_0\in L_{\sigma}^3(\mathbb{R}^3)$ is decomposed as \begin{equation} v_0=v_0^1+v_0^2, \end{equation} where $v_0^1$ is small in $L_{\sigma}^3(\mathbb{R}^3)$ and $v_0^2$ belongs to $L_{\sigma}^2\cap L_{\sigma}^3(\mathbb{R}^3)$. Because of this smallness, the initial data $v_0^1$ generates a global smooth solution $v_1$ by perturbation theory. Then the equation for $v_2=v-v_1$ can be solved by the energy method. Seregin and \v{S}ver\'{a}k \cite{Se} used another method to obtain global weak solutions for $v_0\in L_{\sigma}^3(\mathbb{R}^3)$. The main idea of \cite{Se} is as follows. Let $v_1$ be the solution of the linear version of the Navier-Stokes system, seek the solution $v$ of the Navier-Stokes system as $v=v_1+v_2$, write down the equation that $v_2$ satisfies, and then obtain the properties of $v$ by investigating $v_2$. The general idea is that the correction term $v_2$ may be easier to deal with than the full solution $v$. Related work can be found in \cite{Lem,Ler,Se}. Inspired by the above methods, we will decompose the initial data $w_0=v_{10}+v_{20}$ and investigate the global existence of solutions $w=v_1+v_2$ to system (\ref{PNS}). For $w_0\in L_{\sigma}^3(\mathbb{R}^3)$, we have the following decomposition \begin{equation} w_0=v_{10}+v_{20}, \end{equation} with $\|v_{10}\|_{L^3}<\varepsilon_0$ and $v_{20}\in L_{\sigma}^2\cap L_{\sigma}^3(\mathbb{R}^3).$ Since $\|v_{10}\|_{L^3}\ll1,$ there exists a unique global $L^3$ mild solution $v_1$ to system (\ref{v1}) according to Theorem \ref{L^3 well-posedness}. The crucial part is the global existence of $v_2$. Since $v_{20}\in L_{\sigma}^2\cap L_{\sigma}^3$, this is the standard reasoning based on the Galerkin method (cf. \cite{Ka} Proof of Theorem 2.7). 
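For completeness, one standard way to realize such a decomposition (a sketch following Calder{\'o}n's truncation idea; the cutoff level $M>0$ and the Leray projection $\mathbb{P}$ are our notation) is
\begin{equation*}
v_{10}=\mathbb{P}\big(w_0\,\mathbf{1}_{\{|w_0|\leq M\}}\big),\qquad v_{20}=\mathbb{P}\big(w_0\,\mathbf{1}_{\{|w_0|> M\}}\big).
\end{equation*}
Indeed, $\|w_0\,\mathbf{1}_{\{|w_0|\leq M\}}\|_{L^3}\to 0$ as $M\to 0$ by dominated convergence, while $\int_{\{|w_0|>M\}}|w_0|^2\,dx\leq M^{-1}\|w_0\|_{L^3}^3$, so the second piece belongs to $L^2\cap L^3$; since $\mathbb{P}$ is bounded on $L^2$ and $L^3$ and $\mathbb{P}w_0=w_0$, the two pieces are divergence free and sum to $w_0$.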
We claim that there exists a global weak solution $v_2\in C_{w}\left([0, T]; L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left([0, T]; \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right)$ for each $T>0.$ According to Definition \ref{global 3}, there exists a global $L^2+L^3$ weak solution to system (\ref{PNS}). The detailed proof of the global existence of $v_2$ is given below. First, we will construct weak solutions $v_2$ to the system (\ref{v2}). This is the standard reasoning based on the Galerkin method (cf. \cite{Ka} Proof of Theorem 2.1). Since $H_{\sigma}^{1}\left(\mathbb{R}^{3}\right)$ is separable, there exists a sequence $\left\{g_{m}\right\}_{m=1}^{\infty}$ which is free and total in $H_{\sigma}^{1}\left(\mathbb{R}^{3}\right)$. For each $m=1,2,\ldots,$ define an approximate solution $w_{m}=\sum_{i=1}^{m} d_{i m}(t) g_{i},$ which satisfies the following system of ordinary differential equations \begin{equation}\label{l} \begin{aligned} &\left<w_{m}^{\prime}(t), g_{j}\right>+\left<\nabla w_{m}(t), \nabla g_{j}\right>+\left<\left(w_{m}(t) \cdot \nabla\right) w_{m}(t), g_{j}\right>\\ &+\left<\left(w_{m}(t) \cdot \nabla\right) (v_{c}+v_1), g_{j}\right> +\left<\left((v_{c}+v_1) \cdot \nabla\right) w_{m}(t), g_{j}\right>=0 \text { for } j=1, \ldots, m, \end{aligned} \end{equation} where the term corresponding to the pressure in (\ref{v2}) vanishes in (\ref{l}) because of $\text{div} g_{j}=0$. We will prove that the terms $\left<\left(w_{m}(t) \cdot \nabla\right) (v_{c}+v_1), g_{j}\right> $ and $\left<\left((v_{c}+v_1) \cdot \nabla\right) w_{m}(t), g_{j}\right>$ in (\ref{l}) are convergent. 
By the H\"{o}lder and Sobolev inequalities in the Lorentz spaces $L^{p, q}$ (see \cite{Ka}), we have \begin{eqnarray}\label{e} \left|\int_{\mathbb{R}^{3}} g_j (w_m\cdot \nabla) (v_c+v_1) \mathrm{d} x\right| &\leq& C\|(v_c+v_1)w_m\|_{L^2}\|\nabla g_j\|_{L^2} \nonumber\\ &\leq& C\|(v_c+v_1)w_m\|_{L^{2,2}}\|\nabla g_j\|_{L^2} \nonumber\\ &\leq& C\|v_c+v_1\|_{L^{3, \infty}}\|w_m\|_{L^{6,2}}\|\nabla g_j\|_{L^2} \nonumber\\ &\leq& C\|v_c+v_1\|_{L^{3, \infty}}\|\nabla w_m\|_{L^2}\|\nabla g_j\|_{L^2}. \end{eqnarray} A similar estimate holds for $\left<\left((v_{c}+v_1) \cdot \nabla\right) w_{m}(t), g_{j}\right>$: \begin{equation}\label{e_2} \begin{aligned} \left|\int_{\mathbb{R}^{3}} ((v_c+v_1)\cdot \nabla) w_m g_j \mathrm{d} x\right| \leq C\|v_c+v_1\|_{L^{3, \infty}}\|\nabla w_m\|_{L^2}\|\nabla g_j\|_{L^2}. \end{aligned} \end{equation} The system (\ref{l}) has a unique local solution $\left\{d_{i m}(t)\right\}_{i=1}^{m} .$ By the \emph{a priori} estimate of the sequence $\left\{w_{m}\right\}_{m=1}^{\infty}$ obtained below in (\ref{s}), the solution $d_{i m}(t)$ is global. Multiplying equation (\ref{l}) by $d_{j m}$ and summing over $j=1,2, \ldots, m ,$ we have \begin{equation} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t}\left\|w_{m}(t)\right\|_{2}^{2}+\left\|\nabla w_{m}(t)\right\|_{2}^{2}+\left<\left(w_{m}(t) \cdot \nabla\right)(v_c+v_1) , w_{m}(t)\right>=0. \end{equation} Using inequality (\ref{e}) and integrating it from $0$ to $t$, we obtain \begin{equation}\label{s} \left\|w_{m}(t)\right\|_{2}^{2}+2\left(1-K \sup _{t>0}\|v_c+v_1\|_{L^{3,\infty}}\right) \int_{0}^{t}\left\|\nabla w_{m}(\tau)\right\|_{2}^{2} \mathrm{d} \tau \leq \left\|w_{0}\right\|_{2}^{2}. \end{equation} Since $|c|$ is big enough, we have $K \sup _{t>0}\|v_c+v_1\|_{L^{3,\infty}}<1$. 
Thus we obtain a subsequence, also denoted by $\left\{w_{m}\right\}_{m=1}^{\infty}$, converging to $v_2\in C_{w}\left([0, T]; L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left([0, T]; \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right).$ Now, repeating the classical reasoning from \cite{Ka}, we obtain the existence of a weak solution in the energy space $C_{w}\left([0, T]; L_{\sigma}^{2}\left(\mathbb{R}^{3}\right)\right) \cap L^{2}\left([0, T]; \dot{H}_{\sigma}^{1}\left(\mathbb{R}^{3}\right)\right)$ for all $T>0$ which satisfies the strong energy inequality (\ref{s}). Hence we get a global $L^2+L^3$ weak solution $w$ of the form $w=v_1+v_2$. Moreover, the asymptotic behavior of $v_2$ follows as in \cite{Ka}, and we omit the proof. {\hfill $\square$\medskip} \section{Proof of Theorem \ref{p>3 result}}\label{Sec4} In this section, we will give the proof of Theorem \ref{p>3 result}. Our method is based on the contraction mapping argument in Lemma \ref{lem1} and the following \emph{a priori} estimates in Lemmas \ref{p_alemm}, \ref{p_lem2} and \ref{p_lem3}. \begin{lemma} \label{p_alemm} Let $p\in (1,\infty)$. For every $c$ satisfying (\ref{condition c_p1}) and (\ref{condition c_p2}), there exists a unique global-in-time solution $a(x,t)\in C_tL_x^p\cap L^{\frac{4p}{3}}_tL^{2p}_x$ to system (\ref{a}) with initial data $w_0\in L_{\sigma}^p({\mathbb{R}^{3}}).$ Moreover, \begin{equation}\label{6.1} \|a(\cdot,t)\|_{L^p}\leq \|a(\cdot,s)\|_{L^p}, \end{equation} for any $0\leq s\leq t<\infty,$ and for $2\leq p<\infty$, \begin{equation}\label{p_a-estimate} \|a\|_{C_tL_x^p\cap L^{\frac{4p}{3}}_tL^{2p}_x}+\left\|\nabla\left(|a|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_t^2L_x^2}\leq C\|w_0\|_{L^p}, \end{equation} for a universal constant $C$. \end{lemma} \noindent\textbf{Proof of Lemma \ref{p_alemm}.} The approach is similar to the proof of Lemma \ref{alemm}. By a classical approximation method, it is easy to obtain the global existence of the solution $a$. 
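The computation below repeatedly uses the pointwise identity (a standard fact, recorded here for convenience; summation over repeated indices)
\begin{equation*}
\partial_i\big(|a|^{\frac{p}{2}}\big)=\frac{p}{2}\,|a|^{\frac{p}{2}-2}(\partial_i a_l)a_l, \qquad\text{so that}\qquad \big|\nabla\big(|a|^{\frac{p}{2}}\big)\big|^2=\frac{p^2}{4}\,|a|^{p-4}\sum_i\big[(\partial_i a_l)a_l\big]^2,
\end{equation*}
which explains the coefficient $\frac{4(p-2)}{p^2}$ appearing below.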
For simplicity, we omit the detailed proof and give the \emph{a priori} estimate for $a.$ Supposing $a$ is sufficiently smooth, we multiply the equation $(\ref{a})_1$ by $|a|^{p-2}a$ and integrate over $\mathbb{R}^3$; we have \begin{equation} \int_{\mathbb{R}^3}\partial_t a \cdot (|a|^{p-2}a)dx=\frac{1}{p}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}, \end{equation} and \begin{eqnarray}\label{000} \int_{\mathbb{R}^3}-\Delta a \cdot (|a|^{p-2}a)dx &=&(p-2)\int_{\mathbb{R}^3}|a|^{p-4} \sum_i[(\partial_i a_l)a_l]^2 + \int_{\mathbb{R}^3} |\nabla a|^2|a|^{p-2}\nonumber\\ &=&\frac{4(p-2)}{p^2}\|\nabla(|a|^{\frac{p}{2}})\|^2_{L^2}+\||\nabla a||a|^{\frac{p-2}{2}}\|^2_{L^2}. \end{eqnarray} For $p\geq 2,$ we have \begin{eqnarray}\label{11} &&\frac{1}{p}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+\frac{4(p-2)}{p^2}\|\nabla(|a|^{\frac{p}{2}})\|^2_{L^2}+\||\nabla a||a|^{\frac{p-2}{2}}\|^2_{L^2}\nonumber\\ &=&-\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|^{p-2}a) dx-\int_{\mathbb{R}^3} \nabla \pi\cdot (|a|^{p-2}a) dx. \end{eqnarray} Using integration by parts, H\"{o}lder's inequality, Lemma \ref{|x|v_c} and the classical Hardy inequality in Lemma \ref{hardy}, we have \begin{eqnarray}\label{22} -\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|^{p-2}a) dx &=&\int_{\mathbb{R}^3}(a\otimes v_c+v_c\otimes a) \cdot \nabla(|a|^{p-2}a )dx\nonumber\\ &=& \int_{\mathbb{R}^3} a_i(v_c)_j\partial_i(|a|^{p-2}a_j) dx\nonumber\\ &\leq& C\int_{\mathbb{R}^3}|\nabla(|a|^{\frac{p}{2}} )||a|^{\frac{p}{2}} |v_c|dx\nonumber\\ &\leq& C\left\||x| v_c\right\|_{L^{\infty}}\left\|\nabla(|a|^{\frac{p}{2}} )\right\|_{L^2}\left\|\frac{|a|^{\frac{p}{2}}}{|x|}\right\|_{L^2}\nonumber\\ &\leq& CK_c\left\|\nabla(|a|^{\frac{p}{2}} )\right\|_{L^2}^2. 
\end{eqnarray} Similarly to the proof of (\ref{prop- rh2}), by H\"{o}lder's inequality, the Hardy inequality, the Sobolev embedding and the boundedness of the Riesz transforms on weighted $L^p$ spaces (Theorem 9.4.6 in \cite{Gra}), we have \begin{eqnarray}\label{33} \int_{\mathbb{R}^3} \nabla \pi_1 \cdot (|a|^{p-2}a) dx &\leq& C\||x|^{\frac{p-2}{p}}\left(a\otimes v_c+v_c\otimes a\right)\|_{L^{p}} \|\nabla (|a|^{\frac p2})\|_{L^{2}} \left\|\frac{|a|^{\frac p2-1}}{|x|^{\frac{p-2}{p}}}\right\|_{L^{\frac{2p}{p-2}}}\nonumber\\ &\leq& C\||x|v_c\|_{L^{\infty}} \left\|\frac{a}{|x|^{\frac2p}}\right\|_{L^p}\|\nabla (|a|^{\frac p2})\|_{L^{2}} \left\|\nabla(|a|^{\frac p2})\right\|_{L^{2}}^{\frac {p-2} {p} }\nonumber\\ &\leq& C K_c\left\|\frac{a}{|x|^{\frac2p}}\right\|_{L^p}\|\nabla (|a|^{\frac{p}{2}})\|_{L^2}^{\frac{2p-2}{p}}\nonumber\\ &\leq& C K_c \|\nabla (|a|^{\frac{p}{2}})\|^2_{L^2}. \end{eqnarray} Combining (\ref{11})-(\ref{33}), we deduce \begin{equation}\label{a_p} \frac 1p\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+\frac{4(p-2)}{p^2}\|\nabla(|a|^{\frac{p}{2}})\|^2_{L^2}\leq C K_c \|\nabla (|a|^{\frac{p}{2}})\|^2_{L^2}. \end{equation} By assumption, we can guarantee \begin{equation}\label{condition c_p1} \frac{4(p-2)}{p^2}-CK_c>0, \end{equation} for a constant $C.$ Hence \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+C\|\nabla(|a|^{\frac{p}{2}})\|^2_{L^2}\leq 0, \end{equation} for a positive constant $C.$ Therefore, (\ref{6.1}) holds and we have \begin{equation} \sup_t\|a(t)\|^p_{L^p}+C\|\nabla(|a|^{\frac{p}{2}})\|^2_{L_t^2L^2_x}\leq \|w_0\|^p_{L^p}. \end{equation} By interpolation theory, we deduce (\ref{p_a-estimate}). By Lemma \ref{sec3-lem1}, we obtain \begin{equation} \sum_i[(\partial_i a_l)a_l]^2 \leq |a|^2|\nabla a|^2. 
\end{equation} For $1< p<2,$ we have \begin{eqnarray*} &&\frac{1}{p}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+(p-2)\int_{\mathbb{R}^3}|a|^{p-4} \sum_i[(\partial_i a_l)a_l]^2 + \int_{\mathbb{R}^3} |\nabla a|^2|a|^{p-2}\\ &=&-\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|^{p-2}a) dx-\int_{\mathbb{R}^3} \nabla \pi\cdot (|a|^{p-2}a) dx. \end{eqnarray*} Thanks to Lemma \ref{sec3-lem1}, there holds \begin{eqnarray*} &&\frac{1}{p}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+(p-1)\int_{\mathbb{R}^3} |\nabla a|^2|a|^{p-2}\\ &\leq&\Big|\int_{\mathbb{R}^3}\text{div}(a\otimes v_c+v_c\otimes a)\cdot (|a|^{p-2}a) dx\Big|+\Big|\int_{\mathbb{R}^3} \nabla \pi\cdot (|a|^{p-2}a) dx\Big|. \end{eqnarray*} Thanks to (\ref{22}) and (\ref{33}), there holds \begin{equation} \begin{aligned} &\frac{1}{p}\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+(p-1)\int_{\mathbb{R}^3} |\nabla a|^2|a|^{p-2}dx\leq C K_c \|\nabla (|a|^{\frac{p}{2}})\|^2_{L^2}. \end{aligned} \end{equation} Moreover, \begin{equation} \begin{aligned} C K_c \|\nabla (|a|^{\frac{p}{2}})\|^2_{L^2}&= C K_c \frac{p^2}{4}\int_{\mathbb{R}^3}|a|^{p-4} \sum_i[(\partial_i a_l)a_l]^2 dx\\ &\leq C(p)K_c \int_{\mathbb{R}^3}|a|^{p-2} |\nabla a|^2dx, \end{aligned} \end{equation} where $C(p)$ is a constant depending on $p.$ By assumption, we can guarantee \begin{equation}\label{condition c_p2} p-1-C(p)K_c>0. \end{equation} Therefore \begin{equation} \begin{aligned} &\frac{\mathrm{d}}{\mathrm{d} t}\|a(t)\|^p_{L^p}+C \int_{\mathbb{R}^3}|a|^{p-2} |\nabla a|^2dx\leq 0, \end{aligned} \end{equation} for a positive constant $C.$ Hence we deduce (\ref{6.1}). 
{\hfill $\square$\medskip} To get the \emph{a priori} estimate of $z$, the crucial estimate is the following: \begin{eqnarray*} -\int_{\mathbb{R}^3}\text{div}(w_1\otimes w_2) \cdot (|z|^{p-2}z) dx &=&\int_{\mathbb{R}^3}(w_1\otimes w_2)\cdot \nabla(|z|^{p-2}z )dx\\ &\leq& C\int_{\mathbb{R}^3}(w_1\otimes w_2)\cdot \nabla\left(|z|^{\frac p2} \right)|z|^{\frac p2-1} dx\\ &\leq& C\left\|\nabla\left(|z|^{\frac{p}{2}} \right)\right\|_{L^2}\left\||z|^{\frac{p}{2}-1}\right\|_{L^{\frac{2p}{p-2}}}\left\|w_1\otimes w_2\right\|_{L^p}\\ &\leq& C\left\|\nabla\left(|z|^{\frac{p}{2}} \right)\right\|_{L^2}\left\|z\right\|_{L^p}^{\frac{p}{2}-1}\left\|w_1\otimes w_2\right\|_{L^p}\\ &\leq& \varepsilon \left\|\nabla\left(|z|^{\frac{p}{2}} \right)\right\|^2_{L^2}+C(\varepsilon)\left\|z\right\|_{L^p}^{p-2}\left\|w_1\otimes w_2\right\|^2_{L^p}.\\ \end{eqnarray*} We have \begin{eqnarray*} \sup_t \|z(t)\|_{L_x^p}+\left\|\nabla(|z|^{\frac{p}{2}})\right\|^{\frac{2}{p}}_{L_t^2L_x^2} \leq C \|w_1\|_{L_t^{4}L_x^{2p}}\|w_2\|_{L_t^{4}L_x^{2p}} &\leq&C T^{\frac{p-3}{2p}} \|w_1\|_{ L^{\frac{4p}{3}}_t L^{2p}_x}\|w_2\|_{L^{\frac{4p}{3}}_t L^{2p}_x}. \end{eqnarray*} Hence, we have the following \emph{a priori} estimate; a more detailed proof can be found in the proof of Lemma \ref{lem2}. \begin{lemma}\label{p_lem2} Let $p\in (3,\infty)$. Assume that ${c}_p$ is as in Theorem \ref{p>3 result}. For every $|c|>{c}_p$, there exists an $L^p$ mild solution $z(x,t)$ on $[0,T]$ to system (\ref{N}) with $w_1, w_2\in L_t^{\frac{4p}{3}}([0,T];L^{2p}(\mathbb{R}^3))$, satisfying \begin{eqnarray}\label{p_z-estimate} \|z\|_{C_TL^p\cap L^{\frac{4p}{3}}_TL^{2p}_x}+\|\nabla(|z|^{\frac p2})\|_{L_t^2L_x^2}^{\frac2p}\leq C T^{\frac{p-3}{2p}} \|w_1\|_{L^{\frac{4p}{3}}_TL^{2p}_x}\|w_2\|_{L^{\frac{4p}{3}}_TL^{2p}_x}, \end{eqnarray} for a constant $C$. 
\end{lemma} When the initial data $w_0\in L_{\sigma}^p\cap L_{\sigma}^3$ and $\|w_0\|_{L^3}<\varepsilon_0$, we have $w\in C_tL^3\cap L^{4}_tL^6_x$ and $\nabla\left(|w|^{\frac{3}{2}}\right)\in L_t^2L_x^2$ according to Theorem \ref{L^3 well-posedness}. By interpolation, we have $w\in L^{\frac{4p}{2p-3}}_tL^{2p}_x.$ The proof is very similar to that of Lemma \ref{lem2}; the crucial estimate is the following: \begin{equation} \|w_1\otimes w_2\|_{L_t^{2}L_x^p}\leq \|w_1\|_{L^{\frac{4p}{2p-3}}_tL^{2p}_x}\|w_2\|_{L^{\frac{4p}{3}}_tL^{2p}_x}. \end{equation} Hence we have the following \emph{a priori} estimate; a more detailed proof can be found in the proof of Lemma \ref{lem2}. \begin{lemma}\label{p_lem3} Let $p\in (3,\infty)$. Assume that ${c}_p$ is as in Theorem \ref{p>3 result}. For every $|c|>{c}_p$, there exists a global-in-time $L^p$ mild solution $z(x,t)$ to system (\ref{N}) with $w_1\in L_t^{\frac{4p}{2p-3}}([0,\infty);L_x^{2p}(\mathbb{R}^3))$ and $w_2\in L_t^{\frac{4p}{3}}([0,\infty);L_x^{2p}(\mathbb{R}^3))$, satisfying \begin{equation}\label{p_z-estimate2} \|z\|_{C_tL^p\cap L^{\frac{4p}{3}}_tL^{2p}_x}+ \|\nabla(|z|^{\frac p2})\|_{L_t^2L_x^2}^{\frac2p}\leq C \|w_1\|_{L^\frac{4p}{2p-3}_tL^{2p}_x}\|w_2\|_{L^{\frac{4p}{3}}_tL^{2p}_x}, \end{equation} for a constant $C$. \end{lemma} \begin{remark} \label{p_existence} By classical methods, it is easy to obtain the existence of solutions $a\in C([0,\infty);L^p(\mathbb{R}^3))$, $\nabla (|a|^{\frac p2})\in L^2([0,\infty);L^2(\mathbb{R}^3))$ to system (\ref{a}) satisfying (\ref{p_a-estimate}) and solutions $z\in C([0,\infty);L^p(\mathbb{R}^3))$, $\nabla (|z|^{\frac p2})\in L^2([0,\infty);L^2(\mathbb{R}^3))$ to system (\ref{N}). For simplicity, we omit the detailed proof. \end{remark} \noindent\textbf{Proof of Theorem \ref{p>3 result}.} For a constant $|c|>{c}_p$, where ${c}_p$ depends only on $p$, according to Lemma \ref{p_alemm}, we have \begin{equation} \|a(t)\|_{ L^{\frac{4p}{3}}_tL^{2p}_x}\leq C \|w_0\|_{L^p}. 
\end{equation} Applying Lemma \ref{p_lem2} with $(w_1,w_2)=(w,w)$, we have \begin{equation} \|N\|\leq CT^{\frac{p-3}{2p}}. \end{equation} Using Lemma \ref{lem1} with $E= L^{\frac{4p}{3}}_TL^{2p}_x$, there exists $T>0$ and a unique solution $w\in L^{\frac{4p}{3}}_TL^{2p}_x$ on $[0,T].$ Next, we prove the global existence of $w$ with initial data $w_0\in L^p_{\sigma}\cap L_{\sigma}^3$ and $\|w_0\|_{L^3}<\varepsilon_0$. Since $\|w_0\|_{L^3}<\varepsilon_0$, according to Theorem \ref{L^3 well-posedness}, there exists a global unique solution $w\in C_tL^{3}_x\cap L^{4}_tL^6_x,$ $\nabla (|w|^{\frac 32})\in L_t^2 L_x^2,$ and $\|w\|_{C_tL^{3}_x\cap L^{4}_tL^6_x}+\|\nabla (|w|^{\frac 32})\|^{\frac 23}_{L_t^2L_x^2}\leq C\|w_0\|_{L^3}.$ By interpolation, $w\in C_t{L_x^3}$ and $\nabla (|w|^{\frac 32})\in L_t^2 L_x^2$ imply $w\in L^{\frac{4p}{2p-3}}_tL^{2p}_x$. Hence, \begin{equation}\label{3-p} \|w\|_{L^{\frac{4p}{2p-3}}_tL^{2p}_x}\leq C\|w_0\|_{L^3}< C\varepsilon_0 . \end{equation} Thanks to (\ref{p_a-estimate}) and (\ref{p_z-estimate2}), we have \begin{equation}\label{w_p} \|w\|_{C_tL^{p}_x\cap L^{\frac{4p}{3}}_tL^{2p}_x}\leq C\|w_0\|_{L^p}+C \|w\|_{L^{\frac{4p}{2p-3}}_tL^{2p}_x}\|w\|_{C_tL^{p}_x\cap L^{\frac{4p}{3}}_tL^{2p}_x}. \end{equation} Combining with (\ref{3-p}) and interpolation theory, we deduce (\ref{p>3-1}). {\hfill $\square$\medskip} \section{Proof of Theorem \ref{continuty} }\label{proof of {continuty} } In this section, we give the detailed proof of Theorem \ref{continuty}. \noindent\textbf{Proof of Theorem \ref{continuty}.}\\ Setting $Z=u-v$, we have \begin{equation}\label{u-v} \begin{cases} Z_{t}-\Delta Z + \text{div}(-Z \otimes Z+Z \otimes u+u\otimes Z )+ (Z \cdot \nabla) v_{c}+\left(v_{c} \cdot \nabla\right) Z+\nabla \pi_{Z}=0,\\ \nabla\cdot Z=0,\\ Z(x, 0) =Z_0. 
\end{cases} \end{equation} By the Duhamel principle, we can write the solution $Z$ in the integral form \begin{equation} Z(x,t)=e^{-t\mathcal{L}}Z_0-\int_0^t e^{-(t-s)\mathcal{L}}\mathbb{P}\text{div}(-Z \otimes Z+Z \otimes u+u\otimes Z )ds. \end{equation} By the contraction mapping theorem, the existence of a solution follows easily. Next, we only give the \textit{a priori} estimate. When $p=3,$ by Lemma \ref{alemm} and the method of Lemma \ref{lem2}, we have \begin{equation}\label{eq1} \|Z\|_{C_TL_x^3\cap L^4_TL_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2} \leq C_1\|Z_0\|_{L^3}+C_2\|Z\|^2_{L^4_TL_x^6}+C_2\left(\int_0^T (\|Z\|_{L^6}\|u\|_{L^6})^2 dt\right)^{\frac12}. \end{equation} By the interpolation inequality, H\"{o}lder's inequality and Young's inequality, we have \begin{eqnarray*} \left(\int_0^T (\|Z\|_{L^6}\|u\|_{L^6})^2 dt\right)^{\frac12} &\leq& \left(\int_0^T \left(\|Z\|^{\frac14}_{L^3}\|Z\|^{\frac34}_{L^9}\|u\|_{L^6}\right)^2 dt\right)^{\frac12}\\ &\leq& \left(\int_0^T \|Z\|^{\frac12}_{L^3}\|Z\|^{\frac 32}_{L^9}\|u\|_{L^6}^2 dt\right)^{\frac12}\\ &\leq& \|Z\|^{\frac 34}_{L_T^3L_x^9} \left(\int_0^T \|Z\|_{L^3}\|u\|_{L^6}^4 dt\right)^{\frac14}\\ &\leq& \varepsilon \|Z\|_{L_T^3L_x^9} +\frac{27}{256 \varepsilon^3} \int_0^T \|Z\|_{L^3}\|u\|_{L^6}^4 dt. \end{eqnarray*} Combining with (\ref{eq1}), we obtain \begin{eqnarray*} \|Z\|_{C_TL_x^3\cap L^4_TL_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2} &\leq& C\|Z_0\|_{L^3}+C\|Z\|^2_{L^4_TL_x^6}+C \varepsilon \|Z\|_{L_T^3L_x^9} +\frac{C}{ \varepsilon^3} \int_0^T \|Z\|_{L^3}\|u\|_{L^6}^4 dt.
\end{eqnarray*} By Sobolev embedding $\dot{H}^{1}(\mathbb{R}^{3})\hookrightarrow L^6(\mathbb{R}^{3})$, we have \begin{eqnarray*} &&\|Z\|_{C_TL_x^3\cap L^4_TL_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2}\\ &\leq& C\|Z_0\|_{L^3}+C\|Z\|^2_{L^4_TL_x^6}+C \varepsilon\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2} +\frac{C}{ \varepsilon^3} \int_0^T \|Z\|_{L^3}\|u\|_{L^6}^4 dt. \end{eqnarray*} Taking $C \varepsilon=\frac12$, there holds \begin{eqnarray}\label{*} \|Z\|_{C_TL_x^3\cap L^4_TL_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2} &\leq& C\|Z_0\|_{L^3}+C\|Z\|^2_{L^4_TL_x^6} +C \int_0^T \|Z\|_{L^3}\|u\|_{L^6}^4 dt, \end{eqnarray} for a positive constant $C.$ According to Gronwall's inequality, there holds \begin{eqnarray} \|Z\|_{C_TL_x^3\cap L^4_TL_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_T^2L_x^2} &\leq& C(\|Z_0\|_{L^3}+\|Z\|^2_{L^4_TL_x^6})e^{C \int_0^T \|u\|_{L^6}^4 dt}. \end{eqnarray} When $\|Z_0\|_{L^3}\leq (4C^2e^{2C \int_0^T \|u\|_{L^6}^4 dt})^{-1},$ by continuity method, we have \begin{eqnarray}\label{Z-crucial} \|Z\|_{C_{T}L_x^3\cap L^4_{T}L_x^6 }+\left\|\nabla\left(|Z|^{\frac{3}{2}}\right)\right\|^{\frac23}_{L_{T}^2L_x^2}\leq 2C\|Z_0\|_{L^3}e^{C\int_0^{T} \|u\|_{L^6}^4 dt}. \end{eqnarray} Therefore, (\ref{sol}) holds with $p=3$. 
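The constant $27/(256\varepsilon^3)$ appearing in the Young's inequality step above is the sharp constant in the splitting $a^{3/4}b\le \varepsilon a+C(\varepsilon)b^4$ for $a,b\ge 0$. This can be checked numerically; the script below (an illustration, not part of the proof) samples random values and also verifies equality at the maximizer $a_*=(3b/(4\varepsilon))^4$:

```python
import random

def young_gap(trials=10000, seed=0):
    """Largest observed value of a^(3/4)*b - (eps*a + 27/(256*eps^3)*b^4);
    Young's inequality says this should never be positive."""
    rng = random.Random(seed)
    worst = float("-inf")
    for _ in range(trials):
        a = rng.uniform(1e-3, 10.0)
        b = rng.uniform(1e-3, 10.0)
        eps = rng.uniform(0.05, 2.0)
        worst = max(worst, a**0.75 * b - (eps * a + 27.0 / (256.0 * eps**3) * b**4))
        # sharpness: at a_* = (3b/(4*eps))^4 the two sides coincide
        a_star = (3.0 * b / (4.0 * eps))**4
        lhs = a_star**0.75 * b
        rhs = eps * a_star + 27.0 / (256.0 * eps**3) * b**4
        assert abs(lhs - rhs) <= 1e-9 * (1.0 + abs(rhs))
    return worst

print(young_gap())   # never positive (up to rounding)
```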
When $p>3,$ by Lemma \ref{p_alemm} and method in Lemma \ref{p_lem2}, we have \begin{eqnarray}\label{eq2} &&\|Z\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2}\\ &\leq& C\|Z_0\|_{L^p}+C \left(\int_0^T (\|Z\|_{L_x^{2p}}\|Z\|_{L_x^{2p}})^2 dt\right)^{\frac12}+C \left(\int_0^T (\|Z\|_{L_x^{2p}}\|u\|_{L_x^{2p}})^2 dt\right)^{\frac12}.\nonumber \end{eqnarray} By interpolation inequality, H\"{o}lder's inequality and Young's inequality, we have \begin{eqnarray*} \left(\int_0^T \left(\|Z\|_{L_x^{2p}}\|u\|_{L_x^{2p}}\right)^2 dt\right)^{\frac12} &\leq& \left(\int_0^T \left(\|Z\|^{\frac 34}_{L_x^{3p}}\|Z\|^{\frac 14}_{L_x^p}\|u\|_{L_x^{2p}}\right)^2 dt\right)^{\frac{1}{2}}\\ &\leq& \left\|\|Z\|^{\frac 34}_{L_x^{3p}}\right\|_{L_T^{\frac{4p}{3}}} \left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|_{L_T^{\frac{4p}{2p-3}}}\\ &\leq& \|Z\|^{\frac 34}_{L_T^{p}L_x^{3p}} \left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|_{L_T^{\frac{4p}{2p-3}}}\\ &\leq& \varepsilon\|Z\|_{L_T^pL_x^{3p}} +\frac{27}{256 \varepsilon^3}\left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|^4_{L_T^{\frac{4p}{2p-3}}}. \end{eqnarray*} Combining with (\ref{eq2}), we obtain \begin{eqnarray*} &&\|Z\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2}\\ &\leq& C\|Z_0\|_{L^p}+C \left(\int_0^T (\|Z\|_{L_x^{2p}}\|Z\|_{L_x^{2p}})^2 dt\right)^{\frac12} + C\varepsilon\|Z\|_{L_T^pL_x^{3p}} +\frac{C}{ \varepsilon^3}\left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|^4_{L_T^{\frac{4p}{2p-3}}}. 
\end{eqnarray*} By Sobolev embedding $\dot{H}^{1}(\mathbb{R}^{3})\hookrightarrow L^6(\mathbb{R}^{3})$, we have \begin{eqnarray*} &&\|Z\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2}\\ &\leq& C\|Z_0\|_{L^p}+C \left(\int_0^T (\|Z\|_{L_x^{2p}}\|Z\|_{L_x^{2p}})^2 dt\right)^{\frac12} + C\varepsilon\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2} +\frac{C}{ \varepsilon^3}\left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|^4_{L_T^{\frac{4p}{2p-3}}}. \end{eqnarray*} Taking $C\varepsilon=\frac 12$, there holds \begin{eqnarray}\label{**} &&\|Z\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2}\nonumber\\ &\leq& C\|Z_0\|_{L^p}+C \left(\int_0^T (\|Z\|_{L_x^{2p}}\|Z\|_{L_x^{2p}})^2 dt\right)^{\frac12}+C\left\|\|Z\|^{\frac 14}_{L_x^{p}}\|u\|_{L_x^{2p}}\right\|^4_{L_T^{\frac{4p}{2p-3}}}\nonumber\\ &\leq& C\|Z_0\|_{L^p}+CT^{\frac{p-3}{2p}}\|Z\|^2_{L^{\frac{4p}{3}}_TL_x^{2p} }+C\|Z\|_{L_T^{\infty}L_x^{p}}\|u\|^4_{L_T^{\frac{4p}{2p-3}}L_x^{2p}}, \end{eqnarray} for a positive constant $C.$ Thanks to Gronwall's inequality, there holds $$ \|Z\|_{C_TL_x^p\cap L^{\frac{4p}{3}}_TL_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_T^2L_x^2} \leq C\left(\|Z_0\|_{L^p}+T^{\frac{p-3}{2p}}\|Z\|^2_{L^{\frac{4p}{3}}_TL_x^{2p} }\right)\exp\left\{{C\|u\|^4_{L_T^{\frac{4p}{2p-3}}L_x^{2p}}}\right\}. $$ By the continuity method, when $4C^2T^{\frac{p-3}{2p}}\|Z_0\|_{L^p}\exp \{{2C\|u\|^4_{L_T^{\frac{4p}{2p-3}}L_x^{2p}}} \}<1$, we deduce \begin{equation}\label{qq} \begin{aligned} \|Z\|_{C_{T}L_x^p\cap L^{\frac{4p}{3}}_{T}L_x^{2p} }+\left\|\nabla\left(|Z|^{\frac{p}{2}}\right)\right\|^{\frac2p}_{L_{T}^2L_x^2}\leq 2C\|Z_0\|_{L^p}\exp\left\{{C\|u\|^4_{L_T^{\frac{4p}{2p-3}}L_x^{2p}}}\right\}. \end{aligned} \end{equation} When $\|Z_0\|_{L^p}\rightarrow0$, (\ref{qq}) implies (\ref{sol}) with $p>3$. 
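Throughout this proof the time–space exponents are chosen so that each H\"older step closes: a norm $L^q_tL^r_x$ lies at the $L^\ell$ scaling level when $2/q+3/r=3/\ell$. These identities, for the exponent pairs used in both cases above, can be verified with exact rational arithmetic; the following standalone script (a sanity check, not part of the paper) does so:

```python
from fractions import Fraction as F

def level(q, r):
    # L^q_t L^r_x sits at the L^l scaling level when 2/q + 3/r = 3/l
    return 2 / q + 3 / r

assert level(F(4), F(6)) == 1                  # the L^3-level space L^4_t L^6_x
for p_ in (F(4), F(5), F(7), F(100)):
    q1, q2, r = 4 * p_ / (2 * p_ - 3), 4 * p_ / 3, 2 * p_
    assert level(q1, r) == 1                   # L^3-level space for w
    assert level(q2, r) == 3 / p_              # L^p-level space for z and Z
    # Hoelder in time and in space for ||w1 (x) w2||_{L^2_t L^p_x}:
    assert 1 / q1 + 1 / q2 == F(1, 2)
    assert 1 / r + 1 / r == 1 / p_
print("exponent identities verified")
```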
{\hfill $\square$\medskip} \section*{Acknowledgment} The work of the first named author is partially supported by NSF grants DMS-1501004 and DMS-2000261. The work of the third named author is partially supported by NSFC grants 11771389, 11931010 and 11621101.
\section{Introduction} Two pure $n$-qubit states $\ket{\phi_1}$ and $\ket{\phi_2}$ define the same entanglement if there is a local gate in the unitary group, a matrix $U=U_1\otimes\dots\otimes U_n$ with each $U_i$ a 2 by 2 unitary matrix, such that $\ket{\phi_2}=U\ket{\phi_1}$. This relation between $n$-qubits is an equivalence relation; the quotient space is denoted by $\mathcal{E}_n$ and is called the space of entanglement types, \cite{W}. In this space, we think of all the $n$-qubit states that are connected with local gates as a single element. We have another interpretation for this partition of the set of $n$-qubit states: Two $n$-qubit states $\ket{\phi_1}$ and $\ket{\phi_2}$ define the same entanglement if and only if they are LOCC equivalent, that is, if not only can $\ket{\phi_2}$ be transformed into $\ket{\phi_1}$ using local operations and classical communication (LOCC) but also $\ket{\phi_1}$ can be transformed into $\ket{\phi_2}$ using LOCC, \cite{B}. Even though entanglement is an important aspect of Quantum Information and Computation \cite{N}, we could say that it is only completely understood for 2-qubit states. For example, it is not known if any pair of 3-qubit states can be connected with a circuit made of local gates and at most three $CZ$ gates. That is, we do not know whether we can move from one entanglement type to another using three $CZ$ gates or fewer. It is known that we can do this with four $CZ$ gates. On the other hand, it is known that every 3-qubit state can be reached from the state $\ket{000}$ by means of local gates and at most 3 $CZ$ gates, \cite{Z}. There is no doubt that dealing with real numbers and orthogonal matrices is easier (at least, computationally less expensive) than dealing with complex numbers and unitary matrices. We need only one real parameter to describe an orthogonal 2 by 2 matrix, while we need 4 real parameters (three if we take unitary matrices with determinant one) to describe a 2 by 2 unitary matrix.
Let us say that $$\ket{\phi}=a_0 \ket{0\dots 00}+a_1\ket{0\dots 01}+\dots+a_{2^n-1} \ket{1\dots 1} $$ is a {\it real} $n$-qubit state if all the $a_i$ are real numbers. These states have been considered earlier as part of Real-Vector-Space Quantum Theory \cite{HW}. We say that two real $n$-qubit states $\ket{\phi_1}$ and $\ket{\phi_2}$ define the same real entanglement if there is a local gate in the orthogonal group, a matrix $U=U_1\otimes\dots\otimes U_n$ with each $U_i$ a 2 by 2 orthogonal matrix, such that $\ket{\phi_2}=U\ket{\phi_1}$. This relation between real $n$-qubits is an equivalence relation and the quotient space is denoted by $\mathcal{E}^{\mathbb{R}}_n$. There is a natural map $\chi_n:\mathcal{E}^{\mathbb{R}}_n\longrightarrow \mathcal{E}_n$ that sends the real entanglement of the real $n$-qubit $\ket{\phi}$ to the entanglement of $\ket{\phi}$ viewed as a regular pure state with complex amplitudes. For the case of 2-qubits, the map $\chi_2$ is one to one and onto, that is, we do not miss any entanglement by considering only real 2-qubit states and by considering only orthogonal gates, \cite{P}. This can be useful, for example, when training a quantum circuit (\cite{B1}, \cite{L}, \cite{PG}) to achieve a particular entanglement: there is no need to consider unitary matrices, as it is enough to consider real 2-qubit states and $R_y$ gates. For 3-qubits, the map $\chi_3:\mathcal{E}^{\mathbb{R}}_3\longrightarrow \mathcal{E}_3$ is not onto because we need five real parameters to describe $\mathcal{E}_3$ (\cite{SM}, \cite{AA}, \cite{W}), while we only need four to describe $ \mathcal{E}^{\mathbb{R}}_3$.
Moreover, $\chi_3$ is not one to one, since we can show that the GHZ state $\frac{1}{\sqrt{2}}(\ket{000}+\ket{111})$ and the state $\frac{1}{2}(\ket{001}-\ket{010}+\ket{100}+\ket{111})$ have different real entanglement (there are no local orthogonal gates connecting them) yet the same entanglement, because they can be connected with a unitary local gate, \cite{P1}. One of the goals of this paper is to show that for any real state $\ket{\phi_1}$ there is a real state $\ket{\phi_2}=R_y(\theta_2)\otimes R_y(\theta_1)\otimes R_y(\theta_0)\ket{\phi_1}$ of the form $\lambda_1 \ket{000} +\lambda_2 \ket{011}+\lambda_3\ket{101}+\lambda_4\ket{110}+\lambda_5\ket{111}$ with $\sum\lambda_i^2=1$. Recall that $$R_y(\theta)=\left( \begin{array}{cc} \cos \left(\frac{\theta}{2}\right) & -\sin \left(\frac{\theta}{2}\right) \\ \sin \left(\frac{\theta}{2}\right) & \cos \left(\frac{\theta}{2}\right) \\ \end{array} \right)\, .$$ We would like to point out that if we try to follow the lines presented in \cite{AA} to eliminate the coefficients of $\ket{001},\ket{010},\ket{011}$, we will fail due to the fact that the real numbers are not algebraically closed: not every quadratic equation has a solution in the real numbers. This is not only a proof technicality, since we can show that it is impossible to use local orthogonal gates to transform the state $\xi=\frac{1}{\sqrt{5}}\left(\ket{000}+\ket{001}+\ket{011}+\ket{101}-\ket{110}\right)$ into a state of the form $\lambda_1 \ket{000}+\lambda_2\ket{100}+\lambda_3\ket{101}+\lambda_4\ket{110}+\lambda_5\ket{111}$ with $\lambda_i$ real numbers.
We have that the Schmidt representation (the one presented in \cite{AA}) of the state $\xi$ is $$\frac{1}{\sqrt{10}} \left(2\ket{000}-i\ket{100}+\ket{101}+\ket{110}+\sqrt{3} \ket{111} \right)$$ Therefore, in order to find a Schmidt representation for real 3-qubits, the first task in eliminating three coefficients among real 3-qubit states is to decide which coefficients can be transformed into zero. It turns out that we can easily make the coefficients of $\ket{001}$ and $\ket{100}$ vanish, see Theorem \ref{thm1}. Therefore we have a representation of every real 3-qubit state in the 5-dimensional sphere $S_0^5 $ of qubits $\sum u_{rst} \ket{rst}$ with $u_{001}=u_{100}=0$. In order to prove that we can make an additional coefficient equal to zero we need to learn how to navigate this 5-dimensional sphere. We define a vector field $X$ in $S_0^5 $ that shows us the right direction to move in $S_0^5 $ in such a way that we only touch states that are connected with local orthogonal gates. More precisely, we can visualize the integral curves of the vector field $X$ as bridges formed by states that define the same real entanglement, see Theorem \ref{X}. We manage to show that we can make the coefficients $u_{001}=u_{010}=u_{100}=0$ by studying these bridges, that is, by studying the integral curves of the vector field $X$. The author would like to thank Jos\'e Ignacio Latorre for sharing his knowledge. \section{Representation in a 5-dimensional space} In this section we show that, up to local gates in the orthogonal group, we can write every real 3-qubit state as \begin{eqnarray}\label{sr1} \ket{\phi}=x_1 \ket{000}+x_2 \ket{010} +x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111} \end{eqnarray} More precisely, we have, \begin{thm}\label{thm1} Let $\ket{\phi}=u_0 \ket{000}+u_1 \ket{001}+u_2 \ket{010}+u_3 \ket{011}+u_4 \ket{100}+u_5 \ket{101}+u_6 \ket{110}+u_7 \ket{111}$, with $u_i$ real numbers.
There exist $\theta_0$ and $\theta_2$ such that $$R_y(\theta_2)\otimes R_y(0)\otimes R_y(\theta_0) \ket{\phi}=x_1 \ket{000}+x_2 \ket{010} +x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111}$$ \end{thm} \begin{proof} A direct computation shows that if $\theta_0$ satisfies the equation $a_1 \sin (\theta_0)+a_2 \cos (\theta_0)=0$ with $a_1=\left(-u_0^2+u_1^2-u_4^2+u_5^2\right)$ and $a_2=-2 (u_0 u_1+u_4 u_5)$, and $\theta_2$ satisfies the equation $b_1 \sin (\frac{\theta_2}{2})+b_2 \cos (\frac{\theta_2}{2})=0$ with $$b_1= -u_4 \sin \left(\frac{\theta_0}{2}\right)-u_5 \cos \left(\frac{\theta_0}{2}\right)\quad\hbox{and}\quad b_2= u_1 \cos \left(\frac{\theta_0}{2}\right) +u_0 \sin \left(\frac{\theta_0}{2}\right) $$ then $R_y(\theta_2)\otimes R_y(0)\otimes R_y(\theta_0) \ket{\phi}$ can be written as $x_1 \ket{000}+x_2 \ket{010} +x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111}$ for some real numbers $x_1\dots x_6$. \end{proof} The theorem above tells us that every possible entanglement created by a real 3-qubit state is realized by a state in the 5-dimensional sphere $S_0^5$ defined as follows. \begin{mydef} We define $$S_0^5= \{x_1 \ket{000}+x_2 \ket{010} +x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111}: \, \, x_1^2+\dots + x_6^2=1\,\}$$ \end{mydef} Counting dimensions, it is expected that, in general, for any 3-qubit state $\ket{\phi_0}\in S_0^5$, there is a curve of states in $S_0^5$ that can be reached from $\ket{\phi_0}$ using orthogonal local gates. To see this, notice that all the real states are described by points in the $7$-dimensional sphere of unit vectors in $\mathbb{R}^8$. Since the Lie group $O(2)$ of orthogonal 2 by 2 matrices is 1-dimensional, the space of local orthogonal gates $O(2)\otimes O(2)\otimes O(2)$ is a three-dimensional manifold. With this in mind, it is expected that, up to local orthogonal gates, the space of 3-qubit states with real amplitudes is described with $7-3=4$ parameters.
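Theorem \ref{thm1} is constructive: the angles in its proof can be computed explicitly and tested. The following script (an illustrative check, not part of the paper; it applies the proof's formulas to a randomly chosen real unit vector) verifies that the coefficients of $\ket{001}$ and $\ket{100}$ vanish after the rotation:

```python
import math, random

def ry(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def rotate(u):
    """Apply R_y(theta2) (x) R_y(0) (x) R_y(theta0) with the angles of the proof."""
    a1 = -u[0]**2 + u[1]**2 - u[4]**2 + u[5]**2
    a2 = -2 * (u[0] * u[1] + u[4] * u[5])
    theta0 = math.atan2(-a2, a1)          # solves a1*sin(t) + a2*cos(t) = 0
    c0, s0 = math.cos(theta0 / 2), math.sin(theta0 / 2)
    b1 = -u[4] * s0 - u[5] * c0
    b2 = u[1] * c0 + u[0] * s0
    theta2 = 2 * math.atan2(-b2, b1)      # solves b1*sin(t/2) + b2*cos(t/2) = 0
    I2 = [[1.0, 0.0], [0.0, 1.0]]
    U = kron(kron(ry(theta2), I2), ry(theta0))
    return matvec(U, u)

rng = random.Random(1)
u = [rng.uniform(-1, 1) for _ in range(8)]
norm = math.sqrt(sum(t * t for t in u))
u = [t / norm for t in u]                 # a random real 3-qubit state
w = rotate(u)
print(abs(w[1]), abs(w[4]))               # coefficients of |001> and |100>, both ~ 0
```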
\begin{rem}\label{mm} If we identify $\ket{\phi}=u_0 \ket{000}+u_1 \ket{001}+u_2 \ket{010}+u_3 \ket{011}+u_4 \ket{100}+u_5 \ket{101}+u_6 \ket{110}+u_7 \ket{111}$ with the unit vector $(u_0,\dots, u_7)\in \mathbb{R}^8$, then $S_0^5$ is identified with the manifold \begin{eqnarray} M=\{ (x_1,0,x_2,x_3,0,x_4,x_5,x_6)\in \mathbb{R}^8\,:\, x_1^2+\dots +x_6^2=1\,\} \end{eqnarray} \end{rem} The following tangent vector field on $S_0^5$ has the property that each one of its integral curves consists of states connected by local orthogonal gates. \begin{thm} \label{X} In this theorem we are using the identification given in Remark \ref{mm}. The vector field $X=(X_1,0,X_2,X_3,0,X_4,X_5,X_6)$ with \begin{eqnarray*} X_1&=& x_2 \left(x_1^2-x_4^2\right),\quad X_2=-x_1^3+\left(x_3^2+x_4^2+x_5^2\right) x_1+2 x_3 x_4 x_5\\ X_3 &=& (x_3 x_4+x_1 x_5) x_6-x_2 (x_1 x_3+x_4 x_5),\quad X_4=\left(x_1^2-x_4^2\right) x_6,\\ X_5&=& (x_1 x_3+x_4 x_5) x_6-x_2 (x_3 x_4+x_1 x_5), \, X_6=2 x_4^3+\left(x_2^2+x_6^2-1\right) x_4-2 x_1 x_3 x_5 \end{eqnarray*} defines a tangent vector field in $M$. Moreover, every solution $$\ket{\phi(t)}=x_1(t) \ket{000}+x_2(t) \ket{010} +x_3(t) \ket{011}+x_4(t)\ket{101}+x_5(t)\ket{110}+x_6(t)\ket{111}$$ of the initial value problem $$\ket{\phi(t)}^\prime=X(\ket{\phi(t)}) \hbox{, with } \ket{\phi}(0)=\ket{\phi_0}$$ satisfies that any pair of 3-qubit states $\ket{\phi(t_1)} $ and $\ket{\phi(t_2)} $ can be transformed into each other using a local orthogonal gate.
Additionally, we have that $$\ket{\phi(t)}=R_y(\theta_2(t))\otimes R_y(\theta_1(t))\otimes R_y(\theta_0(t)) \ket{\phi}$$ with \begin{eqnarray*} \theta_0(t)&=& \int_0^tL_0\left(\ket{\phi}(\tau)\right)d\tau \hbox{ where } L_0=-2 (x_1 x_3+x_4 x_5)\\ \theta_1(t)&=& \int_0^tL_1\left(\ket{\phi}(\tau)\right)d\tau \hbox{ where } L_1=2 (x_4^2-x_1^2)\\ \theta_2(t)&=& \int_0^tL_2\left(\ket{\phi}(\tau)\right)d\tau \hbox{ where } L_2=-2 (x_3 x_4+x_1 x_5) \end{eqnarray*} \end{thm} \begin{proof} The fact that $X$ is tangent to $M$ follows by verifying that the vector $X$ is perpendicular to the vector $(x_1,0,x_2,x_3,0,x_4,x_5,x_6)$. Recall that a vector $v\in\mathbb{R}^8$ is tangent to the sphere $M$ at the point $p$ if and only if the vector $v$ has its second and fifth entries equal to zero and it is perpendicular to the vector $p$. To prove the rest of the theorem, for any $\ket{\phi}\in M$, we define the manifold $$\Sigma_{\ket{\phi}}=\{\psi(\theta_2,\theta_1,\theta_0)=R_y(\theta_2)\otimes R_y(\theta_1)\otimes R_y(\theta_0) \ket{\phi}: \theta_2,\theta_1,\theta_0\in\mathbb{R}\}\subset \mathbb{R}^8$$ A direct verification shows that for any $\ket{\xi}\in \Sigma_{\ket{\phi}}$, the vector field $X$ is a linear combination of the vectors $\frac{\partial \psi}{\partial \theta_2}(\ket{\xi})$, $\frac{\partial \psi}{\partial \theta_1}(\ket{\xi})$ and $\frac{\partial \psi}{\partial \theta_0}(\ket{\xi})$. Therefore the integral curve of the vector field $X$ containing the state $\ket{\phi}$ is part of the intersection of the manifolds $\Sigma_{\ket{\phi} }$ and $M$. Hence, any pair of 3-qubit states $\ket{\phi(t_1)} $ and $\ket{\phi(t_2)} $ in an integral curve of $X$ are connected by a local orthogonal gate. The formula for the angles follows from the fact that $X=L_0\frac{\partial \psi}{\partial \theta_0}+L_1\frac{\partial \psi}{\partial \theta_1}+L_2\frac{\partial \psi}{\partial \theta_2}$. \end{proof} The following functions are first integrals for the vector field $X$.
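Both the tangency property just proved and the invariance of the function $I_0$ listed below can be verified numerically. The following script (a numerical sanity check, not part of the paper; it works with the six nonzero coordinates $(x_1,\dots,x_6)$ of the identification in Remark \ref{mm}) evaluates $X\cdot x$ and $\nabla I_0\cdot X$ at random points of the sphere:

```python
import random

def X_field(x):
    # the vector field X in the coordinates (x1,...,x6)
    x1, x2, x3, x4, x5, x6 = x
    return [x2 * (x1**2 - x4**2),
            -x1**3 + (x3**2 + x4**2 + x5**2) * x1 + 2 * x3 * x4 * x5,
            (x3 * x4 + x1 * x5) * x6 - x2 * (x1 * x3 + x4 * x5),
            (x1**2 - x4**2) * x6,
            (x1 * x3 + x4 * x5) * x6 - x2 * (x3 * x4 + x1 * x5),
            2 * x4**3 + (x2**2 + x6**2 - 1) * x4 - 2 * x1 * x3 * x5]

def grad_I0(x):
    # gradient of I0 = (x2*x4 - x1*x6)^2 + 4*x1*x3*x4*x5
    x1, x2, x3, x4, x5, x6 = x
    D = x2 * x4 - x1 * x6
    return [-2 * D * x6 + 4 * x3 * x4 * x5, 2 * D * x4, 4 * x1 * x4 * x5,
            2 * D * x2 + 4 * x1 * x3 * x5, 4 * x1 * x3 * x4, -2 * D * x1]

rng = random.Random(0)
max_tan = max_inv = 0.0
for _ in range(1000):
    x = [rng.uniform(-1, 1) for _ in range(6)]
    n = sum(t * t for t in x) ** 0.5
    x = [t / n for t in x]                     # a random point of the sphere M
    Xx = X_field(x)
    max_tan = max(max_tan, abs(sum(a * b for a, b in zip(Xx, x))))
    max_inv = max(max_inv, abs(sum(a * b for a, b in zip(Xx, grad_I0(x)))))
print(max_tan, max_inv)   # both ~ 0: X is tangent to M, and I0 is constant along X
```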
\begin{thm} The vector field $X$ defined on $M$ has the following first integrals \begin{eqnarray*} I_2&=&2 x_3^4+2 \left(x_2^2+2 x_4^2+2 x_6^2-1\right) x_3^2+4 x_2 x_5 x_6 x_3+2 x_4^4+2 x_6^4+2 x_5^2 x_6^2-\\ & &2 x_6^2+x_4^2 \left(4 x_6^2-2\right)+1\\ I_3&=&x_1^4+2 \left(x_2^2+x_4^2\right) x_1^2+4 x_2 x_4 x_6 x_1+x_2^4+x_3^4+x_4^4+x_5^4+x_6^4+2 x_3^2 x_5^2+\\ & &2 x_3^2 x_6^2+2 x_4^2 x_6^2+2 x_5^2 x_6^2+2 x_2^2 \left(x_3^2+x_5^2+x_6^2\right)\\ I_4&=&2 x_4^4+\left(4 x_5^2+4 x_6^2-2\right) x_4^2+2 x_5^4+2 x_6^4+2 x_3^2 x_6^2-2 x_6^2+4 x_2 x_3 x_5 x_6+\\ & &2 x_5^2 \left(x_2^2+2 x_6^2-1\right)+1\\ I_0&=&(x_2 x_4-x_1 x_6)^2+4 x_1 x_3 x_4 x_5 \end{eqnarray*} \end{thm} We decided to keep the notation used in \cite{S} for the invariants $I_2$, $I_3$ and $I_4$, where it is explained how these first integrals correspond to $\operatorname{tr}(\rho^2_C)$, $\operatorname{tr}(\rho^2_B)$ and $\operatorname{tr}(\rho^2_A)$ respectively. We finish this section by describing the equilibrium points of the vector field $X$. \begin{thm} \label{equili} The set of equilibrium points of the vector field $X$ is the union of three 3-dimensional spheres and a 1-dimensional manifold $P$. More precisely, it is $S_1^3\cup S_2^3\cup S_3^3\cup P$ where \begin{eqnarray*} S_1^3&=&\{x\in M: x_1=x_4, x_3=-x_5\} \\ S_2^3&=&\{x\in M: x_1=-x_4, x_3=x_5\} \\ S_3^3&=&\{x\in M: x_1=x_4=0\}\\ P&=&\{x\in M: -2 x_1^3+x_1+2 x_3 x_4 x_5=2 x_4^3-x_4-2 x_1 x_3 x_5=x_2=x_6=0 \} \end{eqnarray*} \end{thm} \section{Representation in a 4-dimensional space} In this section we show that, up to a local orthogonal gate, every real 3-qubit state can be written in the form $\ket{\phi}=x_1 \ket{000}+x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111}$. To prove this result we will be using the vector field $X$ defined in the previous section. We will first show that the result is true for the equilibrium points and then we will treat the case of all other points in the space $M$ defined in Remark \ref{mm}.
\begin{thm} \label{thm2} Let $\ket{\phi}=u_0 \ket{000}+u_1 \ket{001}+u_2 \ket{010}+u_3 \ket{011}+u_4 \ket{100}+u_5 \ket{101}+u_6 \ket{110}+u_7 \ket{111}$, with $u_i$ real numbers. There exist $\theta_0$, $\theta_1$ and $\theta_2$ such that $$R_y(\theta_2)\otimes R_y(\theta_1 )\otimes R_y(\theta_0) \ket{\phi}=x_1 \ket{000} +x_3 \ket{011}+x_4\ket{101}+x_5\ket{110}+x_6\ket{111}$$ \end{thm} \begin{proof} By Theorem \ref{thm1} we can assume that $\ket{\phi}=w_1 \ket{000}+w_2 \ket{010} +w_3 \ket{011}+w_4\ket{101}+w_5\ket{110}+w_6\ket{111}$. Let us consider first the case that $\ket{\phi}$ is an equilibrium point of the vector field $X$. By theorem \ref{equili} we have that $\ket{\phi}$ is in either $S_1^3$, $S_2^3$, $S_3^3$, or $P$. If $\ket{\phi}\in P$ then $w_2=0$ and the theorem is trivially true. Let us now assume that $\ket{\phi}\in S_3^3$. Then we have that $w_1=w_4=0$ and a direct computation shows that if we pick $\theta_0=\theta_1=0$ and $\theta_2$ an angle that satisfies $w_2 \cos \left(\frac{\theta_2}{2}\right)-w_5 \sin \left(\frac{\theta_2}{2}\right)=0$, then we have that $R_y(\theta_2)\otimes R_y(\theta_1 )\otimes R_y(\theta_0) \ket{\phi}$ has the desired form. Let us assume now that $\ket{\phi}$ is in $S_1^3$. Then we have that $w_1=w_4$ and $w_5=-w_3$. 
A direct computation shows that \begin{eqnarray*} R_y(\theta_2)\otimes R_y(\theta_1 )\otimes R_y(-\theta_2) \ket{\phi}&=&x_1 \ket{000}+z_1 \ket{001}+x_2 \ket{010} +x_3 \ket{011}-z_1 \ket{100}\\ & &+x_4\ket{101}+x_5\ket{110}+x_6\ket{111} \end{eqnarray*} with \begin{eqnarray}\label{eq1} z_1=\frac{1}{2} (w_2 \sin \theta_2+w_6 \sin\theta_2-2 w_3 \cos \theta_2)\sin \frac{\theta_1}{2} -w_1 \sin \theta_2\cos \frac{\theta_1}{2} \end{eqnarray} and \begin{eqnarray}\label{eq2} x_2=w_1 \sin\frac{\theta_1}{2} \cos \theta_2+\frac{1}{2} \cos \frac{\theta_1}{2} (2 w_3 \sin \theta_2+w_2 \cos \theta_2+w_6 \cos \theta_2+w_2-w_6) \end{eqnarray} We need to find $\theta_1$ and $\theta_2$ such that $x_2$ and $z_1$ are equal to zero. Let us set \begin{eqnarray}\label{rem} \sin \frac{\theta_1}{2}=\lambda w_1 \sin \theta_2\hbox{ and } \cos \frac{\theta_1}{2}=\frac{\lambda}{2} (w_2 \sin \theta_2+w_6 \sin\theta_2-2 w_3 \cos \theta_2) \end{eqnarray} With this substitution $z_1=0$ and $x_2=\lambda f$ with $$f=w_1^2 \sin \theta_2 \cos \theta_2+\frac{1}{4} (2 w_3 \sin \theta_2+(w_2+w_6) \cos \theta_2+w_2-w_6) ((w_2+w_6) \sin \theta_2-2 w_3 \cos \theta_2)$$ A direct computation shows that $$f(0)+f(\frac{\pi }{2})+f(\pi)+f(\frac{3\pi }{2})=0$$ Therefore, either $f$ vanishes at one of these four angles or $f$ takes both signs, and by the intermediate value theorem $f(\theta_2)=0$ for some angle $\theta_2$ between $0$ and $\frac{3\pi }{2}$. Once we have $\theta_2$, it is easy to find $\theta_1$ and $\lambda$ that satisfy Equation (\ref{rem}). Therefore the result is also true for points in $S_1^3$. For points in $S_2^3$ the proof is similar; in this case, since $w_4=-w_1$ and $w_5=w_3$, a direct computation shows that \begin{eqnarray*} R_y(\theta_2)\otimes R_y(\theta_1 )\otimes R_y(\theta_2) \ket{\phi}&=&x_1 \ket{000}+z_1 \ket{001}+x_2 \ket{010} +x_3 \ket{011}+z_1 \ket{100}\\ & &+x_4\ket{101}+x_5\ket{110}+x_6\ket{111} \end{eqnarray*} Finally, let us consider the case where $\ket{\phi}$ is not an equilibrium point.
Assume $$\ket{\xi(t)}=x_1(t) \ket{000}+x_2(t) \ket{010} +x_3(t) \ket{011}+x_4(t)\ket{101}+x_5(t)\ket{110}+x_6(t)\ket{111}$$ is the integral curve of the vector field $X$ satisfying $\ket{\xi(0)}=\ket{\phi}$. We will be using the fact that every integral curve of the vector field $X$ is either topologically a circle or it contains a sequence of points that converges to an equilibrium point. Let us first consider the case $w_1=w_4$. Since \begin{eqnarray}\label{dex1x4} x_1^\prime(t)=x_2(t) (x_1^2(t)-x_4^2(t))\hbox{ and } x_4^\prime(t)=x_6(t) (x_1^2(t)-x_4^2(t)), \end{eqnarray} by the existence and uniqueness theorem of differential equations we have that $x_1(t)=w_1$ and $x_4(t)=w_4$ for all $t$. Therefore $$x_2^\prime(t)=w_1 (x_3(t)+x_5(t))^2$$ Since we are assuming that $\ket{\phi}$ is not an equilibrium point, we have $w_1=w_4\ne0$ and $w_3\ne -w_5$. Therefore $x_2(t)$ is a non-constant function that satisfies $x_2^\prime(t)\ge 0$ or $x_2^\prime(t)\le 0$. In particular $x_2(t)$ cannot be a periodic solution and, as a consequence, the solution $\ket{\xi(t)}$ is not periodic. Therefore there exists a sequence of points $t_1,t_2,\dots$ such that $\ket{\xi(t_i)}$ converges to an equilibrium point $\ket{\phi_0}$. By continuity we have that $\ket{\phi}$ differs from $\ket{\phi_0}$ by a local orthogonal gate. Since we have already shown that the theorem holds for the equilibrium points of $X$, the result follows in this case. The proof is similar if we assume that $w_4=-w_1$. In the case that neither $w_4=w_1$ nor $w_4=-w_1$, since $x_1(t)$ and $x_4(t)$ satisfy the system (\ref{dex1x4}), we have that $x^2_1(t)\ne x^2_4(t)$ for all $t$. Recall that if $x_1(t)$ is not a periodic function then the result follows, due to the fact that the integral curve comes arbitrarily close to an equilibrium point. Under the assumption that $x_1(t)$ is a periodic function, we have that $x_1^\prime(t_0)$ must be zero for some $t_0$.
Since $ x_1^\prime(t_0)=x_2(t_0) (x_1^2(t_0)-x_4^2(t_0))$ and $x_1^2(t_0)\ne x_4^2(t_0)$, we must have $x_2(t_0)=0$, and therefore the theorem also follows in this case. \end{proof}
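The cancellation $f(0)+f(\frac{\pi}{2})+f(\pi)+f(\frac{3\pi}{2})=0$ used in the $S_1^3$ case above can also be confirmed numerically. The script below (an illustration only; the $w_i$ are arbitrary random reals, not necessarily a normalized state) evaluates the sum for many random coefficient choices:

```python
import math, random

def f(theta2, w1, w2, w3, w6):
    # the function f from the S_1^3 case of the proof of the theorem
    s, c = math.sin(theta2), math.cos(theta2)
    return (w1**2 * s * c
            + 0.25 * (2 * w3 * s + (w2 + w6) * c + w2 - w6)
                   * ((w2 + w6) * s - 2 * w3 * c))

rng = random.Random(0)
worst = 0.0
for _ in range(1000):
    w = [rng.uniform(-1, 1) for _ in range(4)]
    total = sum(f(t, *w) for t in (0.0, math.pi / 2, math.pi, 1.5 * math.pi))
    worst = max(worst, abs(total))
print(worst)   # ~ 0, so f cannot be of one strict sign and must vanish somewhere
```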
\section{Introduction} \subsection{The setting} Let $(X,g)$ be a smooth Riemannian manifold of dimension $d$ without boundary, $(L,h^L)$ a Hermitian line bundle on $X$ with a Hermitian connection $\nabla^L$ and $(E,h^E)$ a Hermitian vector bundle of rank $r$ on $X$ with a Hermitian connection $\nabla^E$. We will suppose that $(X, g)$ is a manifold of bounded geometry and $L$ and $E$ have bounded geometry. This means that the curvatures $R^{TX}$, $R^L$ and $R^E$ of the Levi-Civita connection $\nabla^{TX}$, connections $\nabla^L$ and $\nabla^E$, respectively, and their derivatives of any order are uniformly bounded on $X$ in the norm induced by $g$, $h^L$ and $h^E$, and the injectivity radius $r_X$ of $(X, g)$ is positive. For any $p\in \field{N}$, let $L^p:=L^{\otimes p}$ be the $p$th tensor power of $L$ and let \[ \nabla^{L^p\otimes E}: {C}^\infty(X,L^p\otimes E)\to {C}^\infty(X, T^*X \otimes L^p\otimes E) \] be the Hermitian connection on $L^p\otimes E$ induced by $\nabla^{L}$ and $\nabla^E$. Consider the induced Bochner Laplacian $\Delta^{L^p\otimes E}$ acting on $C^\infty(X,L^p\otimes E)$ by \begin{equation}\label{e:def-Bochner} \Delta^{L^p\otimes E}=\big(\nabla^{L^p\otimes E}\big)^{\!*}\, \nabla^{L^p\otimes E}, \end{equation} where $\big(\nabla^{L^p\otimes E}\big)^{\!*}: {C}^\infty(X,T^*X\otimes L^p\otimes E)\to {C}^\infty(X,L^p\otimes E)$ is the formal adjoint of $\nabla^{L^p\otimes E}$. Let $V\in C^\infty(X,\operatorname{End}(E))$ be a self-adjoint endomorphism of $E$. We will assume that $V$ and its derivatives of any order are uniformly bounded on $X$ in the norm induced by $g$ and $h^E$. We will study the Bochner-Schr\"odinger operator $H_p$ acting on $C^\infty(X,L^p\otimes E)$ by \[ H_{p}=\frac 1p\Delta^{L^p\otimes E}+V. \] Since $(X,g)$ is complete, the operator $H_p$ is essentially self-adjoint in the Hilbert space $L^2(X,L^p\otimes E)$ with initial domain $C^\infty_c(X,L^p\otimes E)$, see \cite[Theorem 2.4]{ko-ma-ma}. 
We still denote by $H_p$ its unique self-adjoint extension, and by $\sigma(H_p)$ its spectrum in $L^2(X,L^p\otimes E)$. Consider the real-valued closed 2-form $\mathbf B$ (the magnetic field) given by \begin{equation}\label{e:def-omega} \mathbf B=iR^L. \end{equation} We assume that $\mathbf B$ is non-degenerate. Thus, $X$ is a symplectic manifold. In particular, its dimension is even, $d=2n$, $n\in \field{N}$. For $x\in X$, let $B_x : T_xX\to T_xX$ be the skew-adjoint operator such that \[ \mathbf B_x(u,v)=g(B_xu,v), \quad u,v\in T_xX. \] The operator $|B_x|:=(B_x^*B_x)^{1/2} : T_xX\to T_xX$ is a positive self-adjoint operator. We assume that it is uniformly positive on $X$: \begin{equation}\label{e:uniform-positive} |B_x|\geq b_0>0, \quad x\in X. \end{equation} \begin{rem} The operator $H_p$ was introduced and studied by Demailly in \cite{Demailly85,Demailly91}. The study of its spectrum plays an important role in the proof of holomorphic Morse inequalities. \end{rem} \begin{rem} Assume that the Hermitian line bundle $(L,h^L)$ is trivial and $(E,h^E)$ is a trivial Hermitian line bundle with a trivial connection $\nabla^E$. Then we can write $\nabla^L=d-i \mathbf A$ with a real-valued 1-form $\mathbf A$ (the magnetic potential), and we have \[ R^L=-id\mathbf A,\quad \mathbf B=d\mathbf A. \] The operator $H_p$ is related with the semiclassical magnetic Schr\"odinger operator \[ H_p=\hbar^{-1}[(i\hbar d+\mathbf A)^*(i\hbar d+\mathbf A)+\hbar V], \quad \hbar=\frac{1}{p},\quad p\in \field{N}. \] It can be also considered as the magnetic Schr\"odinger operator with strong electric and magnetic fields, growing at the same rate: \[ H_p=\frac{1}{p}[(d-ip\mathbf A)^*(d-ip\mathbf A)+pV], \quad p\in \field{N}. 
\] \end{rem} \begin{rem} If $X$ is the Euclidean space $\field{R}^{2n}$ with coordinates $Z=(Z_1,\ldots,Z_{2n})$, we can write the 1-form $\bf A$ as \[ {\bf A}= \sum_{j=1}^{2n}A_j(Z)\,dZ_j, \] the matrix of the Riemannian metric $g$ as $g(Z)=(g_{j\ell}(Z))_{1\leq j,\ell\leq 2n}$ and its inverse as $g(Z)^{-1}=(g^{j\ell}(Z))_{1\leq j,\ell\leq 2n}$. Denote $|g(Z)|=\det(g(Z))$. Then $\bf B$ is given by \[ {\bf B}=\sum_{j<k}B_{jk}\,dZ_j\wedge dZ_k, \quad B_{jk}=\frac{\partial A_k}{\partial Z_j}-\frac{\partial A_j}{\partial Z_k}. \] Moreover, the operator $H_p$ has the form \[ H_p=\frac{1}{p}\frac{1}{\sqrt{|g|}}\sum_{1\leq j,\ell\leq 2n}\left(i \frac{\partial}{\partial Z_j}+pA_j\right) \left[\sqrt{|g|} g^{j\ell} \left(i \frac{\partial}{\partial Z_\ell}+pA_\ell\right)\right]+V. \] Our assumptions hold, if the matrix $(B_{j\ell}(Z))$ has full rank $2n$ and its eigenvalues are separated from zero uniformly on $Z\in \field{R}^{2n}$, for any $\alpha \in \field{Z}^{2n}_+$ and $1\leq j,\ell\leq 2n$, we have \[ \sup_{Z\in \field{R}^{2n}}|\partial^\alpha g_{j\ell}(Z)|<\infty, \quad \sup_{Z\in \field{R}^{2n}}|\partial^\alpha B_{j\ell}(Z)|<\infty, \] and the matrix $(g_{j\ell}(Z))$ is positive definite uniformly on $Z\in \field{R}^{2n}$. \end{rem} \subsection{Description of the spectrum} Our first result gives an asymptotic description of the spectrum of $H_{p}$ as $p\to \infty$ in terms of the spectra of the model operators. For an arbitrary $x_0\in X$, the model operator at $x_0$ is a second order differential operator $\mathcal H^{(x_0)}_{p}$, acting on $C^\infty(T_{x_0}X, E_{x_0})$, which is obtained from the operator $H_p$ by freezing coefficients at $x_0$. This operator was introduced by Demailly \cite{Demailly85,Demailly91}. Consider the trivial Hermitian line bundle $L_0$ over $T_{x_0}X$ and the trivial Hermitian vector bundle $E_0$ over $T_{x_0}X$ with the fiber $E_{x_0}$. 
We introduce the connection \begin{equation}\label{e:nablaL0} \nabla^{L^p_0}_{v}=\nabla_{v}-ip\alpha_v, \end{equation} acting on $C^\infty(T_{x_0}X, L^p_0\otimes E_0)\cong C^\infty(T_{x_0}X, E_{x_0})$, with the connection one-form $\alpha\in C^\infty(T(T_{x_0}X),\field{R})$ given by \begin{equation}\label{e:Aflat} \alpha_v(w)=\frac{1}{2}\mathbf B_{x_0}(v,w),\quad v,w\in T_{x_0}X. \end{equation} Its curvature is constant: $d\alpha=\mathbf B_{x_0}$. Denote by $\Delta^{L_0^p}$ the associated Bochner Laplacian. The model operator $\mathcal H^{(x_0)}_{p}$ on $C^\infty(T_{x_0}X, E_{x_0})$ is defined as \begin{equation}\label{e:DeltaL0p} \mathcal H^{(x_0)}_{p}=\frac 1p\Delta^{L_0^p}+V(x_0). \end{equation} Since $B_{x_0}$ is skew-adjoint, its eigenvalues have the form $\pm i a_j(x_0), j=1,\ldots,n,$ with $a_j(x_0)>0$. By \eqref{e:uniform-positive}, $a_j(x_0)\geq b_0>0$ for any $x_0\in X$ and $j=1,\ldots,n$. Denote by $V_\mu(x_0), \mu=1,\ldots,r$, the eigenvalues of $V(x_0)$. The spectrum of $\mathcal H^{(x_0)}_{p}$ is independent of $p$ and consists of eigenvalues of infinite multiplicity \begin{equation}\label{e:def-Sigmax} \sigma(\mathcal H^{(x_0)}_{p})=\Sigma_{x_0}:=\left\{\Lambda_{\mathbf k,\mu}({x_0})\,:\, \mathbf k\in\field{Z}_+^n, \mu=1,\ldots,r\right\}, \end{equation} where, for $\mathbf k=(k_1,\ldots,k_n)\in\field{Z}_+^n$, $\mu=1,\ldots,r$ and $x_0\in X$, \begin{equation}\label{e:def-Lambda} \Lambda_{\mathbf k,\mu}(x_0)=\sum_{j=1}^n(2k_j+1) a_j(x_0)+V_\mu(x_0). \end{equation} In particular, the lowest eigenvalue of $\mathcal H^{(x_0)}_{p}$ is \[ \Lambda_0(x_0):=\sum_{j=1}^n a_j(x_0)+\min _\mu V_\mu(x_0). \] Let $\Sigma$ be the union of the spectra of the model operators: \[ \Sigma=\bigcup_{x\in X}\Sigma_x=\left\{\Lambda_{\mathbf k,\mu}(x)\,:\, \mathbf k\in\field{Z}_+^n, \mu=1,\ldots,r, x\in X \right\}.
\] \begin{thm}\label{t:spectrum} For any $K>0$, there exists $c>0$ such that for any $p\in \field{N}$ the spectrum of $H_{p}$ in the interval $[0,K]$ is contained in the $cp^{-1/4}$-neighborhood of $\Sigma$. \end{thm} The set $\Sigma$ is a closed subset of $\field{R}$, which can be represented as the union of the closed intervals (bands): \[ \Sigma=\bigcup_{\mathbf k\in\field{Z}_+^n, \mu=1,\ldots,r}[\alpha_{\mathbf k,\mu}, \beta_{\mathbf k,\mu}] \] where, for any $\mathbf k\in\field{Z}_+^n$ and $\mu=1,\ldots,r$, the interval $[\alpha_{\mathbf k,\mu}, \beta_{\mathbf k,\mu}]$ is the range of the function $\Lambda_{\mathbf k,\mu}$ on $X$: $[\alpha_{\mathbf k,\mu}, \beta_{\mathbf k,\mu}]=\{\Lambda_{\mathbf k,\mu}({x_0}) : x_0\in X\}$. In general, the bands $[\alpha_{\mathbf k,\mu}, \beta_{\mathbf k,\mu}]$ may overlap, leaving no gaps, in which case $\Sigma$ is the whole semi-axis $[\Lambda_0,+\infty)$ with $\Lambda_0=\inf_{x\in X} \Lambda_0(x)$. In this case, Theorem \ref{t:spectrum} gives no information about the location of the spectrum of $H_{p}$, except for the lower bound for its bottom $\lambda_0(H_p)=\inf \sigma(H_p)$: \begin{equation}\label{e:lower-lambda0} \lambda_0(H_p)\geq \Lambda_0-cp^{-1/4}, \quad p\in \field{N}. \end{equation} This estimate agrees with a similar estimate for the magnetic Laplacian obtained in \cite[Theorem 3.1]{HM96} (see also \cite{Morin19}). One should also emphasize that we make no assumptions on the curvature $\mathbf B$ except for the full-rank condition. There are a number of papers devoted to the study of the asymptotic behavior of low-lying eigenvalues of the magnetic Schr\"odinger operator under some additional assumptions on the magnetic field, such as the existence of non-degenerate magnetic wells (see \cite{HK14,Morin19,raymond-book} and references therein for the case of non-degenerate magnetic field). In some cases, $\Sigma$ has gaps: $[\Lambda_0,+\infty)\setminus \Sigma\not=\emptyset$.
For instance, if $V(x)\equiv 0$ and the functions $a_j$ can be chosen to be constants: \begin{equation}\label{e:aj-constant} a_j(x)\equiv a_j, \quad x\in X, \quad j=1,\ldots,n, \end{equation} then $\Sigma$ is a countable discrete set. In particular, if $J=\frac{1}{2\pi}B$ is an almost-complex structure (the almost K\"ahler case) and $V(x)\equiv 0$, then $a_j=2\pi, j=1,\ldots,n$ and \begin{equation}\label{e:Kaehler} \Sigma=\left\{2\pi (2k+n)\,:\, k\in\field{Z}_+\right\}. \end{equation} The set $\Sigma$ may also have gaps if the functions $a_j$ are not constant but vary slowly enough. In these cases, Theorem \ref{t:spectrum} implies the existence of gaps in the spectrum of $H_{p}$. In particular, if $V(x)\equiv 0$ and condition \eqref{e:aj-constant} holds, then the spectrum of $H_{p}$ is contained in the union of neighborhoods of size $O(p^{-1/4})$ of the points $\sum_{j=1}^n(2k_j+1)a_j$, $\mathbf k\in\field{Z}_+^n$. In the almost-K\"ahler case, Theorem \ref{t:spectrum} was proved in \cite{FT}. Our proof uses some ideas and constructions of the proof given in \cite{FT}, but it contains some improvements and is shorter. The following example demonstrates what kind of information about the eigenvalues of $H_p$ can be obtained from Theorem \ref{t:spectrum} in the almost-K\"ahler case. \begin{ex} Suppose that $X$ is the unit two-sphere $S^2=\{(x,y,z)\in \field{R}^3:x^2+y^2+z^2=1\}$. In the spherical coordinates $x=\sin\theta \cos\varphi, y=\sin\theta \sin\varphi, z=\cos\theta,\theta\in (0,\pi), \varphi\in (0,2\pi)$, we take the Riemannian metric $g=R^2(d\theta^2+\sin^2\theta d\varphi^2)$ and the magnetic field $\mathbf B=\frac{1}{2}\sin\theta d\theta\wedge d\varphi$. Let $L$ be the corresponding quantum line bundle. The only eigenvalue $a_1(\theta, \varphi)$ can be found from the relation $\mathbf B=a_1dv_X$, which gives \[ a_1(\theta, \varphi)=\frac{1}{2R^2}, \quad \Sigma=\left\{ (2k+1)\frac{1}{2R^2}\,:\, k\in\field{Z}_+ \right\}.
\] By the classical formula for the eigenvalues of the magnetic Laplacian $\Delta^{L^p}$ [Tamm (1931), Wu-Yang (1976)], the eigenvalues of $H_p=\frac 1p\Delta^{L^p}$ are given by \[ \nu_{p,k}=(2k+1)\frac{1}{2R^2}+\frac{k(k+1)}{R^2p}, \quad k\in\field{Z}_+ \] with multiplicity $m_{p,k}=p+2k+1$. In particular, if the metric $g$ is K\"ahler, then $\frac{1}{2\pi}\mathbf B=dv_X$, which gives $R^2=\frac{1}{4\pi}$ and (cf. \eqref{e:Kaehler}) \[ \nu_{p,k}=2\pi (2k+1)+4\pi k(k+1)\frac 1p, \quad k\in\field{Z}_+. \] So we see that, in this example, Theorem \ref{t:spectrum} describes the leading term in the asymptotic expansion of each eigenvalue. Note that a description of the leading term in the asymptotic expansion of the multiplicities is given by Demailly's theorem (see \eqref{e:Demailly} below). \end{ex} Another way to obtain an operator $H_p$ with a gap in $\Sigma$ is to take $V(x)=-\tau(x)$ with $\tau(x):=\sum_{j=1}^n a_j(x), x\in X$. Then $H_p=\frac{1}{p}\Delta_p$, where $\Delta_p:=\Delta^{L^p\otimes E}-p\tau$ is the renormalized Bochner Laplacian introduced by Guillemin and Uribe in \cite{Gu-Uribe}. We get $\Lambda_0(x)\equiv 0$ and $\Sigma$ has a gap around zero: $\Sigma=\{0\}\cup [2b_0, \infty)$ with $b_0=\inf_{x\in X}|B_x|>0$. In this case, a better estimate for the spectrum of $H_p$ (with $p^{-1}$ instead of $p^{-1/4}$) holds: there exists $c>0$ such that for any $p\in \field{N}$ the spectrum of $H_{p}$ is contained in $(-cp^{-1}, cp^{-1})\cup [2b_0-cp^{-1}, \infty)$. This estimate (with an unspecified constant in place of $2b_0$) was proved in \cite{Gu-Uribe} when $X$ is compact and $E$ is a trivial line bundle. It was proved for a general vector bundle $E$ on a compact manifold in \cite[Corollary 1.2]{ma-ma02} and for manifolds of bounded geometry in \cite[Theorem 1.1]{ko-ma-ma} (see also the references therein for some related works). In particular, the estimate \eqref{e:lower-lambda0} holds with $p^{-1}$ instead of $p^{-1/4}$ (cf.
\cite[Remark 2.3]{HM96} in the case of the magnetic Laplacian). By constructing approximate eigenfunctions, one can show that each $\Lambda\in \Sigma$ is close to the spectrum of $H_p$ (cf. \cite[Theorem 2.2]{HM96} in the case of the magnetic Laplacian). Actually, when $X$ is compact, any neighborhood of $\Lambda$ contains infinitely many eigenvalues, as follows from the asymptotic formula for the eigenvalue distribution function of the operator $H_p$ proved by Demailly \cite{Demailly85,Demailly91}. Let us briefly recall this result. Suppose that $X$ is compact. The eigenvalue distribution function $N_p(\lambda)$ of $H_p$ is defined by \[ N_p(\lambda)=\#\{j\in \field{Z}_+: \nu_{p,j}\leq \lambda \},\quad \lambda\in \field{R}, \] where $\nu_{p,j}, j\in \field{Z}_+$ are the eigenvalues of $H_p$ taken with multiplicities. For any $x\in X$, let $N(x,\lambda)$ be the eigenvalue distribution function of the model operator $\mathcal H^{(x)}_{p}$ defined by \[ N(x,\lambda)=\#\{(\mathbf k, \mu) : \Lambda_{\mathbf k,\mu}(x)\leq \lambda \},\quad \lambda\in \field{R}. \] By \cite[Theorem 0.6]{Demailly85} (see also \cite[Corollary 3.3]{Demailly91}), there exists a countable set $\mathcal D\subset \field{R}$ such that for any $\lambda\in \field{R}\setminus \mathcal D$ \begin{equation}\label{e:Demailly1} \lim_{p\to +\infty}p^{-n}N_p(\lambda)=\frac{1}{(2\pi)^n} \int_X \left(\prod_{j=1}^n a_j(x)\right) N(x,\lambda) dv_X(x), \end{equation} where $dv_X$ denotes the Riemannian volume form. The formula \eqref{e:Demailly1} can be rewritten in terms of the Liouville volume form $\Omega_{\mathbf B}=\frac{1}{n!} \mathbf B^n$ as follows: \begin{equation}\label{e:Demailly2} \lim_{p\to +\infty}p^{-n}N_p(\lambda)=\frac{1}{(2\pi)^n} \int_X N(x,\lambda)\Omega_{\mathbf B}(x).
\end{equation} By \eqref{e:Demailly2}, for any interval $(\alpha,\beta)$, we have \begin{multline}\label{e:Demailly} \#\{j\in \field{Z}_+: \nu_{p,j}\in (\alpha,\beta)\}\\ =\frac{p^{n}}{(2\pi)^n} \sum_{\mathbf k, \mu}\mu_{\mathbf B}(\{x\in X : \Lambda_{\mathbf k,\mu}(x)\in (\alpha,\beta)\})+o(p^{n}), \quad p\to \infty, \end{multline} where $\mu_{\mathbf B}$ denotes the Liouville measure. In particular, if $(\alpha,\beta)\cap \Sigma=\emptyset$, \[ \#\{j\in \field{Z}_+: \nu_{p,j}\in (\alpha,\beta)\}=o(p^{n}), \quad p\to \infty. \] It seems plausible that Theorem \ref{t:spectrum} still holds when the magnetic field degenerates. If $\mathbf B_{x_0}$ is degenerate for some $x_0\in X$, the model operator $\mathcal H^{(x_0)}_{p}$ is still well defined, but its spectrum is a whole semi-axis (for instance, $[\min_\mu V_\mu(x_0),\infty)$ if $\mathbf B_{x_0}=0$). In this situation, Theorem \ref{t:spectrum} again provides no information about the location of the spectrum of $H_{p}$, except for the lower bound for its bottom. Therefore, we restrict ourselves to the case when $\mathbf B$ is non-degenerate. \subsection{Asymptotic behavior of the spectral projection} Consider an interval $I=(\alpha,\beta)$ such that $\alpha,\beta\not \in \Sigma$. By Theorem \ref{t:spectrum}, there exist $\mu_0>0$ and $p_0\in \field{N}$ such that for any $p>p_0$ \[ \sigma(H_{p})\subset (-\infty, \alpha-\mu_0) \cup I \cup (\beta+\mu_0, \infty). \] Let $P_{p,I}$ be the spectral projection of the operator $H_{p}$ associated with $I$ and $P_{p,I}(x,x^\prime)$, $x,x^\prime\in X$, be its smooth kernel with respect to the Riemannian volume form $dv_X$. We study the asymptotic behavior of the kernel $P_{p,I}(x,x^\prime)$ as $p\to \infty$. First, we establish the off-diagonal exponential estimate for $P_{p,I}(x,x^\prime)$.
\begin{thm}\label{t:exp-Pp} There exists $c>0$ such that for any $k\in \mathbb N$, there exists $C_k>0$ such that for any $p\in \mathbb N$, $x, x^\prime \in X$, we have \begin{equation}\label{e1.9} \big|P_{p,I}(x, x^\prime)\big|_{{C}^k}\leq C_k p^{n+\frac{k}{2}} e^{-c\sqrt{p} \,d(x, x^\prime)}. \end{equation} \end{thm} Here $d(x,x^\prime)$ is the geodesic distance and $|P_{p,I}(x, x^\prime)|_{{C}^k}$ denotes the pointwise ${C}^k$-seminorm of the section $P_{p,I}$ at a point $(x, x^\prime)\in X\times X$, which is the sum of the norms induced by $h^L, h^E$ and $g$ of the derivatives up to order $k$ of $P_{p,I}$ with respect to the connection $\nabla^{L^p\otimes E}$ and the Levi-Civita connection $\nabla^{TX}$ evaluated at $(x, x^\prime)$. Then we describe an asymptotic expansion of $P_{p,I}$ as $p\to \infty$ in a fixed neighborhood of the diagonal (independent of $p$). Such an expansion is called a full off-diagonal expansion. First, we introduce normal coordinates near an arbitrary point $x_0\in X$. We denote by $B^{X}(x_0,r)$ and $B^{T_{x_0}X}(0,r)$ the open balls in $X$ and $T_{x_0}X$ with center $x_0$ and radius $r$, respectively. We identify $B^{T_{x_0}X}(0,r_X)$ with $B^{X}(x_0,r_X)$ via the exponential map $\exp^X_{x_0}$. Furthermore, we choose trivializations of the bundles $L$ and $E$ over $B^{X}(x_0,r_X)$, identifying their fibers $L_Z$ and $E_Z$ at $Z\in B^{T_{x_0}X}(0,r_X)\cong B^{X}(x_0,r_X)$ with the spaces $L_{x_0}$ and $E_{x_0}$ by parallel transport with respect to the connections $\nabla^L$ and $\nabla^E$ along the curve $\gamma_Z : [0,1]\ni u \mapsto \exp^X_{x_0}(uZ)$. Denote by $\nabla^{L^p\otimes E}$ and $h^{L^p\otimes E}$ the connection and the Hermitian metric on the trivial bundle with fiber $(L^p\otimes E)_{x_0}$ induced by these trivializations.
We choose an orthonormal basis $\{e_j : j=1,\ldots,2n\}$ in $T_{x_0}X$ such that \begin{equation}\label{e:obase} B_{x_0}e_{2k-1}=a_k(x_0)e_{2k}, \quad B_{x_0}e_{2k}=-a_k(x_0)e_{2k-1},\quad k=1,\ldots,n. \end{equation} It gives rise to a coordinate chart $\gamma_{x_0} : B(0,c)\subset \field{R}^{2n}\to X$ defined on the open ball $B(0,c)$ in $\field{R}^{2n}$ with center at the origin and radius $c\in (0,r_X)$, which is given by the restriction of the exponential map $\exp_{x_0}^X : T_{x_0}X \to X$ composed with the linear isomorphism $\mathbb R^{2n}\to T_{x_0}X$ determined by the basis $\{e_j \}$. Let $dv_{TX}$ denote the Riemannian volume form of the Euclidean space $(T_{x_0}X, g_{x_0})$. We define a smooth function $\kappa$ on $B^{T_{x_0}X}(0,r_X)\cong B^{X}(x_0,r_X)$ by the equation \[ dv_{X}(Z)=\kappa(Z)dv_{TX}(Z), \quad Z\in B^{T_{x_0}X}(0,r_X). \] Consider the fiberwise product $TX\times_X TX=\{(Z,Z^\prime)\in T_{x_0}X\times T_{x_0}X : x_0\in X\}$. Let $\pi : TX\times_X TX\to X$ be the natural projection given by $\pi(Z,Z^\prime)=x_0$. The kernel $P_{p,I}(x,x^\prime)$ induces a smooth section $P_{p,I,x_0}(Z,Z^\prime)$ of the vector bundle $\pi^*(\operatorname{End}(E))$ on $TX\times_X TX$ defined for all $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$ with $|Z|, |Z^\prime|<r_X$.
\begin{thm}\label{t:main} There exists $\varepsilon\in (0,r_X)$ such that for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$, $|Z|, |Z^\prime|<\varepsilon$, the sequence $P_{p,I,x_0}(Z,Z^\prime)$ admits an asymptotic expansion \begin{equation}\label{e:main-expansion} \frac{1}{p^n}P_{p,I,x_0}(Z,Z^\prime)\cong \sum_{r=0}^\infty F_{r,x_0}(\sqrt{p} Z, \sqrt{p}Z^\prime)\kappa^{-\frac 12}(Z)\kappa^{-\frac 12}(Z^\prime)p^{-\frac{r}{2}}, \end{equation} where the leading coefficient $F_{0,x_0}$ is the kernel of the spectral projection $\mathcal P_{I,x_0}$ of the model operator $\mathcal H^{(x_0)}:=\mathcal H^{(x_0)}_1$ associated with $I$: \begin{equation}\label{e:F0} F_{0,x_0}(Z,Z^\prime)=\mathcal P_{I, x_0}(Z,Z^\prime), \end{equation} and, for any $r\geq 0$, the coefficient $F_{r,x_0}$ has the form \begin{equation}\label{e:Fr} F_{r,x_0}(Z,Z^\prime)=J_{r,x_0}(Z,Z^\prime)\mathcal P_{x_0}(Z,Z^\prime), \end{equation} where $\mathcal P_{x_0}\in C^\infty(\field{R}^{2n}\times \field{R}^{2n})$ is the Bergman kernel (see \eqref{e:Bergman} below) and $J_{r,x_0}(Z,Z^\prime)$ is a polynomial in $Z, Z^\prime$ with values in $\operatorname{End}(E_{x_0})$, depending smoothly on $x_0$, with the same parity as $r$ and $\operatorname{deg} J_{r,x_0}\leq \kappa(I)+3r$, where $\kappa(I)=\max \{|\mathbf k| : \Lambda_{\mathbf k,\mu}\in I\}$. For any $j\in \mathbb N$, the remainder \[ R_{j,p,x_0}(Z,Z^\prime):=\frac{1}{p^n}P_{p,I,x_0}(Z,Z^\prime) -\sum_{r=0}^jF_{r,x_0}(\sqrt{p} Z, \sqrt{p}Z^\prime)\kappa^{-\frac 12}(Z)\kappa^{-\frac 12}(Z^\prime)p^{-\frac{r}{2}} \] in the asymptotic expansion \eqref{e:main-expansion} satisfies the following condition.
For any $m,m^\prime\in \mathbb N$, there exist positive constants $C$, $c$, $c_0$ and $M$ such that for any $p\geq 1$, $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$, $|Z|, |Z^\prime|<\varepsilon$, \begin{multline}\label{e:main-exp} \sup_{|\alpha|+|\alpha^\prime|\leq m}\Bigg|\frac{\partial^{|\alpha|+|\alpha^\prime|}}{\partial Z^\alpha\partial Z^{\prime\alpha^\prime}}R_{j,p,x_0}(Z,Z^\prime)\Bigg|_{C^{m^\prime}(X)}\\ \leq Cp^{-\frac{j-m+1}{2}}(1+\sqrt{p}|Z|+\sqrt{p}|Z^\prime|)^M\exp(-c\sqrt{p}|Z-Z^\prime|)+ O(e^{-c_0\sqrt{p}}). \end{multline} \end{thm} Here $C^{m^\prime}(X)$ is the $C^{m^\prime}$-norm for the parameter $x_0\in X$. We say that $G_p=O(p^{-\infty})$ if for any $l, l_1\in \mathbb N$, there exists $C_{l,l_1}>0$ such that the $C^{l_1}$-norm of $G_p$ is estimated from above by $C_{l,l_1}p^{-l}$. The spectral projection $\mathcal P_{I,x_0}$ can be written as \begin{equation} \label{e:Lambda-Bergman} \mathcal P_{I,x_0}=\sum_{(\mathbf k,\mu) : \Lambda_{\mathbf k,\mu}\in I} \mathcal P_{\Lambda_{\mathbf k,\mu},x_0}, \end{equation} where $\mathcal P_{\Lambda_{\mathbf k,\mu},x_0}$ is the orthogonal projection to the eigenspace of the model operator $\mathcal H^{(x_0)}$ with the eigenvalue $\Lambda_{\mathbf k,\mu}$. One can give an explicit formula for its smooth Schwartz kernel in terms of the Laguerre polynomials. For the lowest eigenvalue $\Lambda_0(x_0)$, the kernel of the projection $\mathcal P_{\Lambda_0,x_0}$ has the form \[ \mathcal P_{\Lambda_0,x_0}(Z,Z^\prime)=\mathcal P_{x_0}(Z,Z^\prime)\pi_{0,x_0}, \] where $\mathcal P_{x_0}\in C^\infty(\field{R}^{2n}\times \field{R}^{2n})$ is the Bergman kernel given by \begin{equation} \label{e:Bergman} \mathcal P_{x_0}(Z,Z^\prime)=\frac{1}{(2\pi)^n}\prod_{j=1}^na_j \exp\left(-\frac 14\sum_{k=1}^na_k(|z_k|^2+|z_k^\prime|^2- 2z_k\bar z_k^\prime) \right) \end{equation} and $\pi_{0,x_0}$ is the orthogonal projection in $E_{x_0}$ to the eigenspace of $V(x_0)$ associated with its lowest eigenvalue.
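As a quick numerical sanity check of the normalization in \eqref{e:Bergman} (it plays no role in the proofs), one can verify the reproducing property $\int \mathcal P_{x_0}(Z,Z'')\mathcal P_{x_0}(Z'',Z')\,dZ''=\mathcal P_{x_0}(Z,Z')$ by direct quadrature for $n=1$. The value $a_1=2$ and the sample points below are arbitrary illustrative choices:

```python
import numpy as np

def bergman(z, w, a):
    # Bergman kernel (e:Bergman) for n = 1 with constant a_1 = a:
    #   P(z, w) = (a / 2pi) exp(-(a/4)(|z|^2 + |w|^2 - 2 z conj(w)))
    return (a / (2 * np.pi)) * np.exp(
        -(a / 4) * (np.abs(z)**2 + np.abs(w)**2 - 2 * z * np.conj(w)))

a = 2.0
z, zp = 0.3 + 0.1j, -0.2 + 0.4j   # arbitrary sample points
h = 0.05
u = np.arange(-6.0, 6.0, h)
U, V = np.meshgrid(u, u)
W = U + 1j * V                    # integration variable z'' on a grid

# Riemann sum for  int P(z, z'') P(z'', z') dA(z'')  over the complex plane;
# the Gaussian decay makes the truncation to |Re|, |Im| <= 6 harmless
integral = np.sum(bergman(z, W, a) * bergman(W, zp, a)) * h**2
assert np.abs(integral - bergman(z, zp, a)) < 1e-6   # P o P = P
```

The check succeeds to essentially machine precision, reflecting that $\mathcal P_{x_0}$ is indeed the kernel of an orthogonal projection (onto the lowest Landau level).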
In the case when $H_p=\frac{1}{p}\Delta_p$, where $\Delta_p$ is the renormalized Bochner Laplacian, and $I=(\alpha,\beta)$ is any interval containing $0$ with $\beta<2b_0$, the projection $P_{p,I}$ is called the generalized Bergman projection in \cite{ma-ma08}, since it generalizes the Bergman projection on complex manifolds. Its kernel is called the generalized Bergman kernel. Asymptotic expansions of the Bergman kernels on complex manifolds have been studied for a long time and have many applications (see \cite{ma-ma:book} and also the references therein for the previous results). For the Bergman kernel of the spin$^c$ Dirac operator on a symplectic manifold of bounded geometry, the same type of exponential estimate as in Theorem \ref{t:exp-Pp} is proved in \cite[Theorem 1]{ma-ma15}, and for the Bergman kernel of the renormalized Bochner Laplacian on a symplectic manifold of bounded geometry in \cite[Theorem 1.3]{ko-ma-ma}. The full off-diagonal expansion for the Bergman kernel of the spin$^c$ Dirac operator was proved in \cite[Theorem 4.18']{dai-liu-ma} (see also \cite[Theorem 4.2.1]{ma-ma:book}). For the generalized Bergman kernel associated with the renormalized Bochner Laplacian, it was shown in \cite[Theorem 1.19]{ma-ma08} (see also \cite[Theorem 4.1.24]{ma-ma:book}) that the off-diagonal expansion holds in a neighborhood of size $1/\sqrt{p}$ of the diagonal. This is called a near off-diagonal expansion. In \cite{lu-ma-ma}, a less precise estimate than in \cite[Theorem 1.19]{ma-ma08} was obtained in a neighborhood of size $p^{-\theta}$, $\theta\in(0,1/2)$. The proofs are based on the spectral gap property of the Bochner Laplacian, finite propagation speed arguments for the wave equation and a rescaling of the Bochner Laplacian near the diagonal, which is inspired by the analytic localization technique of Bismut-Lebeau \cite{BL}.
In \cite{Kor18}, we combined the methods of \cite{dai-liu-ma,ma-ma:book,ma-ma08} with weighted estimates involving appropriate exponential weights, as in \cite{Kor91,Meladze-Shubin1,Meladze-Shubin2}, to prove the full off-diagonal expansion for the generalized Bergman kernel. In \cite{ko-ma-ma}, these results were extended to the case of manifolds of bounded geometry. Theorem~\ref{t:main} is a rather straightforward extension of the results of \cite{Kor18,ko-ma-ma}. In a companion paper \cite{Kor20a}, we apply the results of the paper to construct a Berezin-Toeplitz quantization associated with higher Landau levels of the Bochner Laplacian on a symplectic manifold. We mention that, in two simultaneous papers \cite{charles20a,charles20b}, Charles studies the same subject, using different methods. The paper is organized as follows. In Section~\ref{s:description}, we prove Theorem \ref{t:spectrum}. In Section \ref{s:res-estimates}, we prove weighted estimates for the resolvent of the operator $H_p$. Section \ref{s:projection} is devoted to the study of the kernel of the spectral projections and contains the proofs of Theorems \ref{t:exp-Pp} and \ref{t:main}. This work was started as a joint project with L. Charles, but later we decided to work on our approaches separately. I would like to thank Laurent for his collaboration. \section{Description of the spectrum}\label{s:description} This section is devoted to the proof of Theorem \ref{t:spectrum}. \subsection{Approximate inverse} The proof of Theorem \ref{t:spectrum} is based on a construction of an approximate inverse for the operator $H_{p}-\lambda$ with $\lambda \not\in \Sigma$. The corresponding statement is given in the following proposition.
\begin{prop}\label{p:Klambda} There exists a family $\{Q_p(\lambda): \lambda\not\in \Sigma\}$ of operators in $C^\infty_c(X, L^p\otimes E)$, which extend to bounded operators in $L^2(X, L^p\otimes E)$, such that, for any $\lambda\not\in \Sigma$, we have \[ \left(H_{p}-\lambda\right)Q_p(\lambda)u=u+K_p(\lambda)u,\quad u\in C_c^\infty(X, L^p\otimes E), \] where $\{K_p(\lambda): \lambda\not\in \Sigma\}$ is a family of bounded operators in $L^2(X,L^p\otimes E)$, satisfying the following condition. For any $K>0$, there exists $C_K>0$ such that for any $\lambda\not\in \Sigma$, $|\lambda|<K$ and for any $p\in \field{N}$, we have \[ \|K_p(\lambda):L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)\| \leq C_K p^{-1/4}d(\lambda,\Sigma)^{-1}, \] where $d(\lambda,\Sigma)$ denotes the distance from $\lambda$ to $\Sigma$. \end{prop} Theorem \ref{t:spectrum} is an immediate consequence of Proposition \ref{p:Klambda}. To see this, let us fix $K>0$ and apply Proposition \ref{p:Klambda}. We get that, for any $\lambda\in\field{C}$ such that $d(\lambda,\Sigma)>cp^{-1/4}$ with $c=2C_K$ and $|\lambda|<K$, \[ \|K_p(\lambda):L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)\| \leq \frac 12. \] Therefore, the operator $I +K_p(\lambda)$ is invertible in $L^2(X,L^p\otimes E)$. This immediately implies that the operator $H_{p}-\lambda$ is invertible in $L^2(X,L^p\otimes E)$ with \[ \left(H_{p}-\lambda\right)^{-1}=Q_p(\lambda)(I +K_p(\lambda))^{-1}, \] hence $\lambda\not\in\sigma(H_{p})$. This completes the proof of Theorem \ref{t:spectrum}. The proof of Proposition \ref{p:Klambda} will be given in the rest of this section. \subsection{Approximation by the model operator} Our construction of an approximate inverse for the operator $H_{p}-\lambda$ with $\lambda \not\in \Sigma$ is based on the approximation of the operator $H_p$ by the model operator $\mathcal H^{(x_0)}_{p}$ in a sufficiently small neighborhood of an arbitrary point $x_0$.
Since the spectrum of $\mathcal H^{(x_0)}_{p}$ coincides with $\Sigma_{x_0}$, the operator $\mathcal H^{(x_0)}_{p}-\lambda$ is invertible, and we use its inverse to construct an approximate inverse for $H_{p}-\lambda$ in a small neighborhood of $x_0$. The global approximate inverse for $H_{p}-\lambda$ is constructed from these local approximate inverses by taking a cover of $X$ by neighborhoods of size $O(p^{-1/4})$. First, we construct some special coordinates near $x_0$ (see, for instance, \cite[Theorem 6.2.2]{FT}). We start with the coordinate chart $\gamma_{x_0} : B(0,c)\subset \field{R}^{2n}\to X$ defined in the Introduction. It satisfies \begin{equation}\label{e:x0gamma} \gamma_{x_0}(0)=x_0,\quad (D\gamma_{x_0})_0 \left(\frac{\partial}{\partial Z_j}\right)=e_j,\quad j=1,\ldots, 2n, \end{equation} and \begin{equation}\label{e:Bx0} (\gamma_{x_0}^*\mathbf B)_{0}=\sum_{k=1}^n a_k(x_0)dZ_{2k-1}\wedge dZ_{2k}. \end{equation} Then, using the Darboux Lemma, we deform $\gamma_{x_0}$ into a coordinate chart $\varkappa_{x_0} : B(0,c)\subset \field{R}^{2n}\to U_{x_0}=\varkappa_{x_0}(B(0,c))\subset X$ such that \begin{equation}\label{e:x0} \varkappa_{x_0}(0)=x_0,\quad (D\varkappa_{x_0})_0 \left(\frac{\partial}{\partial Z_j}\right)=e_j,\quad j=1,\ldots, 2n, \end{equation} and $\varkappa_{x_0}^*\mathbf B$ is a constant 2-form on $B(0,c)$ given by \begin{equation}\label{e:kappaBx0} (\varkappa_{x_0}^*\mathbf B)_Z=\sum_{k=1}^n a_k(x_0) dZ_{2k-1}\wedge dZ_{2k},\quad Z\in B(0,c). \end{equation} Moreover, as can be seen from the proof of the Darboux Lemma based on Moser's well-known argument (see, for instance, Lemma 3.14 in \cite[p.94]{McDuff-Salamon} and its proof), we can choose the coordinate charts $\varkappa_{x_0}$ so that, for every $k>0$, they are bounded in the $C^k$ norm uniformly with respect to $x_0$, in the sense that \[ \|\varkappa^{-1}_{x_0} \circ \gamma_{x_0} : B(0,c)\to \field{R}^{2n}\|_{C^k}\leq C_k \] with $C_k$ independent of $x_0$.
It is easy to see that there exists a trivialization of the Hermitian line bundle $L$ over $U_{x_0}$: \[ \tau^L_{x_0} : U_{x_0}\times \field{C} \stackrel{\cong}{\to}L\left|_{U_{x_0}}\right., \] such that the connection one-form of $\nabla^L$ in this trivialization coincides with the one-form $\alpha$ given by \eqref{e:Aflat}. We also assume that there exists a trivialization of the Hermitian bundle $E$ over $U_{x_0}$: \[ \tau^E_{x_0} : U_{x_0}\times E_{x_0} \stackrel{\cong}{\to}E\left|_{U_{x_0}}\right.. \] These trivializations induce a trivialization of $L^p\otimes E$ over $U_{x_0}$: \[ \tau_{{x_0},p}=(\tau^L_{x_0})^p\otimes \tau^E_{x_0} : U_{x_0}\times E_{x_0} \stackrel{\cong}{\to}L^p\otimes E\left|_{U_{x_0}}\right.. \] For any $x\in U_{x_0}$, we will write $\tau_{x_0,p}(x) : E_{x_0}\to L^p_x\otimes E_x$ for the associated linear map in the fibers. Let $g_{x_0}=\varkappa^*_{x_0} g$ be the Riemannian metric on $B(0,c)$ induced by the Riemannian metric $g$ on $X$. We introduce a map \[ T^*_{{x_0},p} : C^\infty(X, L^p\otimes E)\to C^\infty(B(0,c), E_{x_0}), \] defined for $u\in C^\infty(X, L^p\otimes E)$ by \begin{equation}\label{e:defT} T^*_{{x_0},p} u(Z)=|g_{x_0}(Z)|^{1/4}\tau_{{x_0},p}^{-1}(\varkappa_{x_0}(Z))[u(\varkappa_{x_0}(Z))],\quad Z\in B(0,c). \end{equation} Consider the differential operator $H_p^{(x_0)}=T^*_{{x_0},p} \circ H_{p}\circ (T^*_{{x_0},p})^{-1}$ on $C^\infty(B(0,c), E_{x_0})$. It can be written as \[ H_p^{(x_0)}=|g_{x_0}(Z)|^{1/4} \tau^*_{{x_0},p} \circ H_{p}\circ (\tau^*_{{x_0},p})^{-1} |g_{x_0}(Z)|^{-1/4}.
\] Using the standard formula for the Bochner Laplacian in local coordinates, one can write \begin{multline}\label{e:TDeltaT} \tau^*_{{x_0},p} \circ H_{p}\circ (\tau^*_{{x_0},p})^{-1}\\ =-\frac 1p \sum_{\ell,m=1}^{2n}g_{x_0}^{\ell m}\nabla^{L^p\otimes E}_{e_\ell}\nabla^{L^p\otimes E}_{e_m}+\frac 1p \sum_{\ell=1}^{2n} \Gamma^\ell \nabla^{L^p\otimes E}_{e_\ell}+V_{x_0}, \end{multline} where $\{e_j\}$ is the standard basis in $\field{R}^{2n}$, $g_{x_0}^{\ell m}$ is the inverse of the matrix of $g_{x_0}$, $V_{x_0}=\tau^{E*}_{x_0} \circ V\circ (\tau^{E*}_{x_0})^{-1}\in C^\infty(B(0,c), \operatorname{End}(E_{x_0}))$ and $\Gamma^\ell\in C^\infty(B(0,c))$, $\ell=1,\ldots,2n,$ are some functions. If we denote by $\Gamma^E\in C^\infty(T(B(0,c)), \operatorname{End}(E_{x_0}))$ the connection one-form for the connection $\nabla^E$, we can write \[ \nabla^{L^p\otimes E}_{v}=\nabla^{L^p_0}_{v}+\Gamma^E(v), \quad v\in T(B(0,c))=B(0,c)\times \field{R}^{2n}, \] where the connection $\nabla^{L^p_0}$ is given by \eqref{e:nablaL0}. Then we have \[ |g_{x_0}|^{1/4} \nabla^{L^p\otimes E}_{v}|g_{x_0}|^{-1/4}=\nabla^{L^p_0}_{v}+\Gamma^E(v)-\frac 14v(\ln |g_{x_0}|). \] It follows that \begin{equation}\label{e:TDeltaT-D} H_p^{(x_0)}=-\frac 1p \sum_{\ell,m=1}^{2n}g_{x_0}^{\ell m}\nabla^{L^p_0}_{e_\ell}\nabla^{L^p_0}_{e_m}+\frac 1p \sum_{\ell=1}^{2n} F_{\ell,{x_0}} \nabla^{L^p_0}_{e_\ell}+V_{x_0}+\frac 1pG_{x_0} \end{equation} with some $F_{\ell,{x_0}}, G_{x_0}\in C^\infty(B(0,c), \operatorname{End}(E_{x_0}))$, uniformly bounded in $x_0$. By \eqref{e:TDeltaT-D}, it follows that \begin{multline}\label{e:TDeltaT-D1} H_p^{(x_0)}-\mathcal H^{(x_0)}_p\\ =-\frac 1p \sum_{\ell,m=1}^{2n}(g_{x_0}^{\ell m}-\delta^{\ell m})\nabla^{L^p_0}_{e_\ell}\nabla^{L^p_0}_{e_m}+\frac 1p \sum_{\ell=1}^{2n} F_{\ell,{x_0}} \nabla^{L^p_0}_{e_\ell}+V_{x_0}-V_{x_0}(0)+\frac 1pG_{x_0}. \end{multline} By \eqref{e:x0}, we have $g_{x_0}^{\ell m}(0)=\delta^{\ell m}$, $\ell,m=1,\ldots,2n$, so the coefficients of the first sum in \eqref{e:TDeltaT-D1} vanish at $Z=0$.
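As a side illustration (not used in the proofs), the $p$-independence of the spectrum of the model operator \eqref{e:DeltaL0p}, noted after \eqref{e:def-Sigmax}, can be checked numerically. For a constant magnetic field of strength $a$ on $\field{R}^2$, with trivial $E$ and $V=0$, a Landau-gauge reduction of $\mathcal H^{(x_0)}_p$ leads to the one-dimensional harmonic oscillator $\frac 1p\big(-\frac{d^2}{dx^2}+(pax)^2\big)$, whose eigenvalues are the Landau levels $(2k+1)a$ for every $p$. A finite-difference sketch (grid size, domain and the values of $p$ and $a$ are arbitrary illustrative choices):

```python
import numpy as np

def model_eigs(p, a=1.0, L=6.0, N=1200, num=3):
    # Lowest eigenvalues of (1/p)(-d^2/dx^2 + (p a x)^2) on [-L, L],
    # discretized by second-order finite differences (Dirichlet boundary).
    x = np.linspace(-L, L, N)
    h = x[1] - x[0]
    lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / h**2
    H = (-lap + np.diag((p * a * x)**2)) / p
    return np.linalg.eigvalsh(H)[:num]       # ascending order

a = 1.0
for p in (1, 4):
    # the computed spectrum approximates the Landau levels (2k+1)a,
    # independently of p
    assert np.allclose(model_eigs(p, a), [a, 3*a, 5*a], atol=5e-2)
```

The same rescaling $Z\mapsto Z/\sqrt{p}$ that removes $p$ from the model spectrum is what underlies the $\sqrt{p}$-rescaled variables in the expansion \eqref{e:main-expansion}.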
\subsection{Some estimates for the model operator} In this section, we will prove some norm estimates for the resolvent of the model operator. First, we prove an elliptic estimate, taking care of its dependence on $p$. We will denote by $\|\cdot\|$ the $L^2$-norm in $C^\infty_c(T_{x_0}X, E_{x_0})$. Recall that $\{e_j : j=1,\ldots,2n\}$ is the orthonormal basis in $T_{x_0}X$ chosen in \eqref{e:obase}. \begin{lem} There exists $C>0$ such that for any $v\in C^\infty_c(T_{x_0}X, E_{x_0})$ and $p\in \field{N}$, \begin{equation}\label{e:ek-el} \sum_{k,\ell=1}^{2n} \left\|\nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v\right\|\leq C\left(\left\|\Delta^{L^p_0}v\right\|+\sqrt{p}\left\|\nabla^{L^p_0}v\right\|\right). \end{equation} \end{lem} \begin{proof} The operator $\nabla^{L^p_0}_{e_j}$ is formally skew-adjoint in ${L^2(T_{x_0}X, E_{x_0})}$: \begin{equation}\label{e:adjoint} \left(\nabla^{L^p_0}_{e_j}\right)^*=-\nabla^{L^p_0}_{e_j}, \end{equation} and, for the commutator $\left[\nabla^{L^p_0}_{e_j}, \nabla^{L^p_0}_{e_k}\right]$, we have \begin{equation}\label{e:commutator} \left[\nabla^{L^p_0}_{e_j}, \nabla^{L^p_0}_{e_k}\right]=pR_{jk}, \end{equation} where $R_{jk}$ is a constant function. Observe that \begin{equation} \label{e:Delta0} \Delta^{L^p_0}=-\sum_{k=1}^{2n} \left(\nabla^{L^p_0}_{e_k}\right)^2. \end{equation} By \eqref{e:adjoint}, we have \[ \|\nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v\|^2=\langle\nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v, \nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v\rangle = \langle \nabla^{L^p_0}_{e_\ell}(\nabla^{L^p_0}_{e_k})^2\nabla^{L^p_0}_{e_\ell}v,v\rangle. \] Now we move $\nabla^{L^p_0}_{e_\ell}$ to the right.
Using \eqref{e:adjoint} and \eqref{e:commutator}, we get \begin{align*} \|\nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v\|^2=& \langle (\nabla^{L^p_0}_{e_k})^2 (\nabla^{L^p_0}_{e_\ell})^2v,v\rangle + 2pR_{\ell k}\langle \nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}v,v\rangle \\ = & \langle (\nabla^{L^p_0}_{e_\ell})^2v,(\nabla^{L^p_0}_{e_k})^2v\rangle - 2pR_{\ell k}\langle \nabla^{L^p_0}_{e_\ell}v,\nabla^{L^p_0}_{e_k}v\rangle. \end{align*} Since by \eqref{e:Delta0} we have \[ \sum_{k,\ell=1}^{2n}\langle (\nabla^{L^p_0}_{e_\ell})^2v,(\nabla^{L^p_0}_{e_k})^2v\rangle=\langle \sum_{\ell=1}^{2n} (\nabla^{L^p_0}_{e_\ell})^2v,\sum_{k=1}^{2n} (\nabla^{L^p_0}_{e_k})^2v\rangle=\left\|\Delta^{L^p_0}v\right\|^2, \] while, by the Cauchy-Schwarz inequality, the sum of the remaining terms is bounded by $Cp\left\|\nabla^{L^p_0}v\right\|^2$, this completes the proof. \end{proof} Since $\mathcal H^{(x_0)}_{p}$ is self-adjoint and its spectrum coincides with $\Sigma_{x_0}$, its resolvent $R^{(x_0)}_{p}(\lambda):=\left(\mathcal H^{(x_0)}_{p}-\lambda\right)^{-1}$ satisfies \begin{equation}\label{e:res1-s1} \left\|R^{(x_0)}_{p}(\lambda)\right\|\leq d(\lambda,\Sigma)^{-1},\quad \lambda\not\in \Sigma, \end{equation} where $\|\cdot\|$ denotes the operator norm for the $L^2$-norms. Next, assuming $\lambda\not\in \Sigma$, $|\lambda|<K$, we get \begin{equation}\label{e:res2-s1} \left\|\tfrac{1}{p}\Delta^{L^p_0}R^{(x_0)}_{p}(\lambda)\right\|=\left\|I+(\lambda-V(x_0))R^{(x_0)}_{p}(\lambda) \right\| \leq K_1d(\lambda,\Sigma)^{-1}, \end{equation} where $K_1=2K+\sup_{x\in X}|V(x)|$. (Here we use the fact that $d(\lambda,\Sigma)\leq |\lambda|\leq K$ and, therefore, $1\leq Kd(\lambda,\Sigma)^{-1}$.)
Finally, for any $v\in C^\infty_c(T_{x_0}X, E_{x_0})$, we have \begin{multline*} \left\|\frac{1}{\sqrt{p}}\nabla^{L^p_0}R^{(x_0)}_{p}(\lambda)v\right\|^2=\left\langle\frac{1}{p}\Delta^{L^p_0}R^{(x_0)}_{p}(\lambda)v,R^{(x_0)}_{p}(\lambda)v \right\rangle \\ \leq \left\|\frac{1}{p}\Delta^{L^p_0}R^{(x_0)}_{p}(\lambda)v\right\| \left\| R^{(x_0)}_{p}(\lambda)v \right\|, \end{multline*} which, by \eqref{e:res1-s1} and \eqref{e:res2-s1}, gives the estimate \begin{equation}\label{e:res3-s1} \left\|\frac{1}{\sqrt{p}}\nabla^{L^p_0}R^{(x_0)}_{p}(\lambda)\right\|\leq K_1d(\lambda,\Sigma)^{-1}. \end{equation} By \eqref{e:ek-el}, \eqref{e:res2-s1} and \eqref{e:res3-s1}, we get \begin{multline}\label{e:ek-el-res} \sum_{k,\ell=1}^{2n} \left\|\frac 1p\nabla^{L^p_0}_{e_k}\nabla^{L^p_0}_{e_\ell}R^{(x_0)}_{p}(\lambda) \right\|\\ \leq C\left(\left\|\frac 1p \Delta^{L^p_0}R^{(x_0)}_{p}(\lambda)\right\|+\left\|\frac{1}{\sqrt{p}}\nabla^{L^p_0}R^{(x_0)}_{p}(\lambda)\right\|\right) \leq C_1d(\lambda,\Sigma)^{-1}. \end{multline} \subsection{Construction of an approximate inverse} For each $p\in \field{N}$, we consider the restrictions of the coordinate charts $\varkappa_{x_0}$ to the ball $B(0,p^{-1/4})$. We can choose an at most countable collection of coordinate charts \[ \varkappa_{\alpha,p}:=\varkappa_{x_{\alpha,p}}\left|_{B(0,p^{-1/4})}\right. : B(0,p^{-1/4}) \to U_{\alpha,p} := \varkappa_{\alpha,p}(B(0,p^{-1/4}))\subset X, \] with $1\leq \alpha\leq I_p$, $I_p\in \field{N}\cup \{\infty\}$, such that the open subsets $U_{\alpha,p}$ cover $X$. For simplicity of notation, we will often omit $p$, writing $\varkappa_{\alpha}$, $U_\alpha$ etc. By a classical covering argument similar to Vitali's lemma, one can show that the cardinality of the set $\mathcal I_{p,\alpha}= \{1\leq \beta \leq I_p : U_\alpha \cap U_\beta \neq \varnothing \}$ satisfies \begin{equation}\label{e:Leb} \# \mathcal I_{p,\alpha} \leq K_0, \quad 1\leq \alpha\leq I_p, \end{equation} where the constant $K_0$ depends only on the dimension $n$.
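For the reader's convenience, here is a sketch of this counting argument, under the (standard) assumption that the centers $x_{\alpha,p}$ are chosen as a maximal $\frac 12 p^{-1/4}$-separated subset of $X$; maximality guarantees that the balls of radius $p^{-1/4}$, and hence the sets $U_{\alpha,p}$, cover $X$. If $U_\alpha\cap U_\beta\neq \varnothing$, then $d(x_{\alpha,p},x_{\beta,p})\leq 2p^{-1/4}$, so the pairwise disjoint balls $B^X(x_{\beta,p},\frac 14 p^{-1/4})$, $\beta\in \mathcal I_{p,\alpha}$, are all contained in $B^X(x_{\alpha,p},3p^{-1/4})$. Since, at scales below the injectivity radius, the Riemannian volume of a ball of radius $r$ is comparable to $r^{2n}$ uniformly on $X$, comparing volumes gives
\[
\#\mathcal I_{p,\alpha}\leq \frac{\operatorname{vol}\left(B^X(x_{\alpha,p},3p^{-1/4})\right)}{\min_{\beta\in \mathcal I_{p,\alpha}}\operatorname{vol}\left(B^X(x_{\beta,p},\tfrac 14 p^{-1/4})\right)}\leq K_0,
\]
with $K_0$ controlled by the dimension and the constants in the volume comparison.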
Choose a family of smooth functions $\{\varphi_{\alpha}=\varphi_{\alpha,p} : \field{R}^{2n}\to [0,1], 1\leq \alpha\leq I_p\}$ supported on the ball $B(0,p^{-1/4})$, which gives a partition of unity on $X$ subordinate to $\{U_\alpha\}$: \[ \sum_{\alpha=1}^{I_p}\varphi_\alpha \circ \varkappa^{-1}_\alpha \equiv 1\ \text{on}\ X, \] and satisfies the following condition: for any $\gamma\in \field{Z}^{2n}_+$, there exists $C_\gamma>0$ such that \[ |\partial^\gamma\varphi_\alpha(Z)|<C_\gamma p^{|\gamma|/4}, \quad Z\in \field{R}^{2n}, \quad 1\leq \alpha\leq I_p. \] For every $1\leq \alpha\leq I_p$, we denote by $g_\alpha$ the induced Riemannian metric $g_{x_\alpha}$ on $B(0,p^{-1/4})$. We will use the notation \[ T^*_{\alpha}=T^*_{\alpha,p}: C^\infty(X, L^p\otimes E)\to C^\infty(B(0,p^{-1/4}), E_{x_\alpha}) \] for the composition of the map $T^*_{x_\alpha,p}$ defined by \eqref{e:defT} with the restriction map $C^\infty(B(0,c), E_{x_\alpha})\to C^\infty(B(0,p^{-1/4}), E_{x_\alpha})$. We have \begin{equation}\label{e:Tap-unitary} \|T^*_{\alpha}u\|^2_{L^2(B(0,p^{-1/4}), E_{x_\alpha})}=\|u\|^2_{L^2(U_\alpha,L^p\otimes E)}. \end{equation} If $U_\alpha \cap U_\beta \neq \varnothing$, we denote by $\varkappa_{\beta,\alpha} :=\varkappa^{-1}_\beta\circ \varkappa_\alpha : \varkappa^{-1}_\alpha(U_\alpha \cap U_\beta) \to \varkappa^{-1}_\beta(U_\alpha \cap U_\beta)$ and $\tau_{\alpha,\beta,p} :=\tau^{-1}_{\alpha,p}\circ \tau_{\beta,p} : (U_\alpha \cap U_\beta)\times E_{x_\beta}\to (U_\alpha \cap U_\beta)\times E_{x_\alpha}$ the associated coordinate change transformations. For any $\gamma\in \field{Z}^{2n}_+$, the transition maps $\varkappa_{\beta,\alpha}$, $U_\alpha \cap U_\beta \neq \varnothing$, satisfy \[ |\partial^\gamma \varkappa_{\beta,\alpha}(Z)| < C_\gamma, \quad Z\in \varkappa^{-1}_\alpha(U_\alpha \cap U_\beta), \quad 1\leq \alpha,\beta\leq I_p.
\] We can write each $\tau_{\alpha,\beta}$ as \[ \tau_{\alpha,\beta}(x,v)=(x, \tau_{\alpha,\beta}(x)v), \quad (x,v)\in (U_\alpha \cap U_\beta)\times E_{x_\beta}, \] where $\tau_{\alpha,\beta}(x): E_{x_\beta}\to E_{x_\alpha}$ is a linear unitary operator. Then for any $u\in C^\infty(X,L^p\otimes E)$ and for any $1\leq \alpha,\beta\leq I_p$ with $U_\alpha \cap U_\beta \neq \varnothing$, we have \begin{equation}\label{e:Tap-Tbp} T^*_{\alpha}u(Z) =\tau_{\alpha,\beta}(\varkappa_\alpha(Z)) J_{\alpha,\beta}(Z)T^*_{\beta}u(\varkappa_{\beta,\alpha}(Z)), \quad Z\in \varkappa^{-1}_\alpha(U_\alpha \cap U_\beta), \end{equation} where \[ J_{\alpha,\beta}(Z)= \frac{|g_\alpha(Z)|^{1/4}}{| g_\beta(\varkappa_{\beta,\alpha}(Z))|^{1/4}}. \] Let $\psi : \field{R}^{2n}\to [0,1]$ be a smooth function such that $\psi(Z)=1$ for $|Z|\leq 1$ and $\psi(Z)=0$ for $|Z|\geq 2$. Put $\psi_p(Z)=\psi(p^{1/4}Z)$. Observe that $\psi_p\varphi_\alpha=\varphi_\alpha$ for any $\alpha$. For any $\lambda\not\in \Sigma$ and $p\in \field{N}$, the operator $Q_p(\lambda)$ acting on $C^\infty(X, L^p\otimes E)$ is defined for any $u\in C^\infty(X, L^p\otimes E)$ by \begin{equation}\label{e:defQp0} Q_p(\lambda)u=\sum_{\beta=1}^{I_p}(T^*_{\beta})^{-1}\left(\psi_p\circ R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta\right)T^*_{\beta} u. \end{equation} It is easy to see that $Q_p(\lambda)$ is a bounded operator in $L^2(X, L^p\otimes E)$. \subsection{Proof of Proposition \ref{p:Klambda}} Let $u\in C^\infty(X, L^p\otimes E)$.
Using \eqref{e:Tap-Tbp} and \eqref{e:defQp0}, for any $\alpha =1, \ldots, I_p$, we have \begin{multline*} T^*_{\alpha} \left(H_{p}-\lambda\right)Q_p(\lambda)u(Z)\\ \begin{aligned} = & \sum_{\beta=1}^{I_p}T^*_{\alpha} \left(H_{p}-\lambda\right) (T^*_{\beta})^{-1}\left(\psi_p\circ R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta\right)T^*_{\beta} u(Z) \\ = & \sum_{\beta\in \mathcal I_{p,\alpha}}\tau_{\alpha,\beta}(\varkappa_\alpha(Z))J_{\alpha,\beta}(Z)\left(H_p^{(x_\beta)}-\lambda\right) \times \\ & \times \left(\psi_p\circ R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta \right)T^*_{\beta}u(\varkappa_{\beta,\alpha}(Z)). \end{aligned} \end{multline*} We can write \[ T^*_{\alpha} \left(H_{p}-\lambda\right)Q_p(\lambda)u=T^*_{\alpha}(u+K_p(\lambda)u), \] where \begin{equation}\label{e:Kp} T^*_{\alpha} (K_p(\lambda)u) = R_{1,\alpha}u+R_{2,\alpha}u, \end{equation} with \begin{multline*} R_{1,\alpha}u(Z) = \sum_{\beta\in \mathcal I_{p,\alpha}} \tau_{\alpha,\beta}(\varkappa_\alpha(Z))J_{\alpha,\beta}(Z)\times \\ \times \left(\psi_p\circ (H_p^{(x_\beta)}-\mathcal H^{(x_\beta)}_{p}) \circ R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta\right)T^*_{\beta}u(\varkappa_{\beta,\alpha}(Z)), \end{multline*} \begin{multline*} R_{2,\alpha}u(Z) = \sum_{\beta\in \mathcal I_{p,\alpha}} \tau_{\alpha,\beta}(\varkappa_\alpha(Z))J_{\alpha,\beta}(Z) \times\\ \times\left([H_p^{(x_\beta)},\psi_p]\circ R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta\right)T^*_{\beta}u(\varkappa_{\beta,\alpha}(Z)). 
\end{multline*} Since $\psi_p$ is supported on the ball $B(0,2p^{-1/4})$, we have \[ |g^{\ell m}_\beta(Z)-\delta^{\ell m}|\leq Cp^{-1/4}, \quad |V_\beta(Z)-V_\beta(0)|\leq Cp^{-1/4}, \] on the support of $\psi_p$. Therefore, using \eqref{e:ek-el}, \eqref{e:res1-s1}, \eqref{e:res2-s1}, \eqref{e:res3-s1}, \eqref{e:Tap-unitary} and \eqref{e:TDeltaT-D1}, we conclude that, for any $u\in C^\infty(X, L^p\otimes E)$, \begin{multline*} \left\|\psi_p (H_p^{(x_\beta)}-\mathcal H^{(x_\beta)}_{p})R^{(x_\beta)}_{p}(\lambda)\varphi_\beta T^*_{\beta}u\right\|_{L^2(\field{R}^{2n}, E_{x_0})}\\ \begin{aligned} \leq & Cp^{-1/4}\sum_{\ell,m=1}^{2n}\left\| \frac 1p\nabla^{L^p_0}_{e_\ell}\nabla^{L^p_0}_{e_m} R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta T^*_{\beta}u\right\|_{L^2(\field{R}^{2n}, E_{x_0})}\\ & +Cp^{-1/2}\left\|\frac{1}{\sqrt{p}}\nabla^{L^p_0}R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta T^*_{\beta}u\right\|_{L^2(\field{R}^{2n}, E_{x_0})}\\ &+Cp^{-1/4}\left\|R^{(x_\beta)}_{p}(\lambda) \circ \varphi_\beta T^*_{\beta}u\right\|_{L^2(\field{R}^{2n}, E_{x_0})}, \end{aligned} \end{multline*} and \begin{equation} \|R_{1,\alpha}u\|_{L^2(\field{R}^{2n})} \leq Cp^{-1/4}d(\lambda,\Sigma)^{-1} \sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|_{L^2(U_\beta,L^p\otimes E)}. \label{e:R2} \end{equation} By \eqref{e:TDeltaT-D}, for the commutator $[H_p^{(x_\beta)}, \psi_p]$, we get \[ [H_p^{(x_\beta)}, \psi_p] =-\frac 1p\sum_{\ell,m=1}^{2n}(2g^{\ell m}_\beta \nabla_{e_\ell}\psi_p\nabla^{L^p_0}_{e_m}+g^{\ell m}_\beta\nabla_{e_\ell,e_m}^2\psi_p) +\frac 1p \sum_{\ell=1}^{2n} F_{\ell,\beta} \nabla_{e_\ell}\psi_p.
\] Since $|\nabla \psi_p|<C p^{1/4}$ and $|\nabla^2\psi_p|<Cp^{1/2}$, using \eqref{e:res1-s1}, \eqref{e:res2-s1}, \eqref{e:res3-s1} and \eqref{e:Tap-unitary} as above, we conclude that \begin{align} \|R_{2,\alpha}u\|_{L^2(\field{R}^{2n})} \leq & Cp^{-1/2}\sum_{\beta\in \mathcal I_{p,\alpha}} \left\|R^{(x_\beta)}_{p}(\lambda)\varphi_\beta T^*_{\beta}u \right\|_{L^2(\field{R}^{2n}, E_{x_0})}\nonumber \\ & +C p^{-1/4}\sum_{\beta\in \mathcal I_{p,\alpha}} \left\|\frac{1}{\sqrt{p}}\nabla^{L^p_0}R^{(x_\beta)}_{p}(\lambda) \varphi_\beta T^*_{\beta}u\right\|_{L^2(\field{R}^{2n}, E_{x_0})}\nonumber \\ \leq & Cp^{-1/4} d(\lambda,\Sigma)^{-1} \sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|_{L^2(U_\beta,L^p\otimes E)}. \label{e:R1} \end{align} By \eqref{e:Kp}, \eqref{e:R2} and \eqref{e:R1}, we arrive at the following estimate: \[ \|T^*_{\alpha}(K_p(\lambda)u)\|_{L^2(\field{R}^{2n}, E_{x_0})} \leq Cp^{-1/4}d(\lambda,\Sigma)^{-1}\sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|_{L^2(U_\beta,L^p\otimes E)}. \] By \eqref{e:Tap-unitary}, it follows that \[ \|K_p(\lambda)u\|_{L^2(U_\alpha,L^p\otimes E)} \leq Cp^{-1/4}d(\lambda,\Sigma)^{-1}\sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|_{L^2(U_\beta,L^p\otimes E)}, \] and, since $(\sum_{\beta\in \mathcal I_{p,\alpha}}a_\beta)^2 \leq K_0 \sum_{\beta\in \mathcal I_{p,\alpha}}a^2_\beta$, \begin{equation}\label{e:Kp-U-alpha} \|K_p(\lambda)u\|^2_{L^2(U_\alpha,L^p\otimes E)} \leq C^2K_0p^{-1/2}d(\lambda,\Sigma)^{-2}\sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|^2_{L^2(U_\beta,L^p\otimes E)}.
\end{equation} Using the fact that $\{U_\alpha : \alpha =1, \ldots, I_p\}$ is a covering of $X$ and \eqref{e:Kp-U-alpha}, we infer \begin{multline*} \|K_p(\lambda) u\|^2_{L^2(X,L^p\otimes E)}\leq \sum_{\alpha=1}^{I_p} \|K_p(\lambda)u\|^2_{L^2(U_\alpha,L^p\otimes E)}\\ \leq C^2K_0p^{-1/2}d(\lambda,\Sigma)^{-2} \sum_{\alpha=1}^{I_p} \sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|^2_{L^2(U_\beta,L^p\otimes E)}. \end{multline*} Using \eqref{e:Leb}, it is easy to see that each term in the double sum in the right-hand side of the last estimate appears at most $K_0$ times. Therefore, we infer that \[ \sum_{\alpha=1}^{I_p} \sum_{\beta\in \mathcal I_{p,\alpha}} \left\|u\right\|^2_{L^2(U_\beta,L^p\otimes E)}\leq K_0 \sum_{\alpha=1}^{I_p} \left\|u\right\|^2_{L^2(U_\alpha,L^p\otimes E)} \] and \[ \|K_p(\lambda) u\|^2_{L^2(X,L^p\otimes E)} \leq C^2K^2_0p^{-1/2}d(\lambda,\Sigma)^{-2} \sum_{\alpha=1}^{I_p} \left\|u\right\|^2_{L^2(U_\alpha,L^p\otimes E)}. \] Finally, by \eqref{e:Leb}, we have \[ \sum_{\alpha=1}^{I_p} \left\|u\right\|^2_{L^2(U_\alpha,L^p\otimes E)}\leq K_0\|u\|^2_{L^2(X,L^p\otimes E)}, \] which gives \[ \|K_p(\lambda) u\|^2_{L^2(X,L^p\otimes E)} \leq C^2K^3_0p^{-1/2}d(\lambda,\Sigma)^{-2} \|u\|^2_{L^2(X,L^p\otimes E)}. \] This completes the proof of Proposition \ref{p:Klambda}. \section{Weighted resolvent estimates}\label{s:res-estimates} In this section, we prove weighted norm estimates for the resolvent of the operator $H_p$. These estimates are slightly modified versions of the estimates obtained in \cite[Theorems 3.3-3.5]{Kor18}, \cite[Theorems 3.2-3.4]{ko-ma-ma}, which are weighted analogs of \cite[Theorems 4.8-4.10]{dai-liu-ma}, \cite[Theorems 1.7-1.9]{ma-ma08}. The main difference is that we state explicitly the $\|\cdot\|^{m,m+2}$-norm estimate for the resolvent instead of the $\|\cdot\|^{-1,1}$-norm estimates for its iterated commutators (see Theorem \ref{Thm1.9a} below).
\subsection{Preliminaries on Sobolev spaces} We will need a specific choice of the Sobolev norms adapted to the sequence of vector bundles $L^p\otimes E$, $p\in \mathbb N$, as well as a slightly refined form of the Sobolev embedding theorem. In this section, we collect the necessary information, referring the reader to \cite{Kor91,ko-ma-ma,ma-ma15} for more details. We will keep the setting described in the Introduction. Recall that $dv_{X}$ denotes the Riemannian volume form of $(X,g)$. The $L^2$-norm is given by \begin{equation}\label{e2.5} \|u\|^2_{p,0}=\int_{X}|u(x)|^2dv_{X}(x), \quad u\in L^2(X,L^p\otimes E). \end{equation} For any integer $m>0$, we introduce the norm $\|\cdot\|_{p,m}$ on the Sobolev space $H^m(X,L^p\otimes E)$ of order $m$ by the formula \begin{equation}\label{e2.6} \|u\|_{p,m}=\left(\sum_{\ell=0}^m \int_{X} \left|\left(\tfrac{1}{\sqrt{p}} \nabla^{L^p\otimes E}\right)^\ell u(x)\right|^2 dv_{X}(x)\right)^{1/2}. \end{equation} Denote by $\langle\cdot,\cdot\rangle_{p,m}$ the corresponding inner product on $H^m(X,L^p\otimes E)$. For any integer $m<0$, we define the norm in the Sobolev space $H^m(X,L^p\otimes E)$ by duality. For any bounded linear operator $A : H^m(X,L^p\otimes E)\to H^{m^\prime}(X,L^p\otimes E)$, $m,m^\prime\in \mathbb Z$, we will denote its operator norm by $\|A\|^{m,m^\prime}_p$. Denote by $C^\infty_b(X, L^p\otimes E)$ the space of smooth sections of $L^p\otimes E$ whose covariant derivatives of any order are uniformly bounded on $X$. Thus, $u\in C^\infty_b(X, L^p\otimes E)$ if, for any $k\in \mathbb Z_+$, we have \[ \|u\|_{{C}^k_b}:=\sup_{x\in X}\left|\left(\nabla^{L^p\otimes E}\right)^{k}u(x)\right| <\infty. \] \begin{prop}[\cite{ma-ma15}, Lemma 2]\label{p:Sobolev} For any $k, m\in \mathbb N$ with $m>k+n$, we have an embedding \begin{equation}\label{e2.16} H^m(X,L^p\otimes E)\subset {C}^k_b(X,L^p\otimes E).
\end{equation} Moreover, there exists $C_{m,k}>0$ such that, for any $p\in \mathbb N$ and $u\in H^m(X,L^p\otimes E)$, \begin{equation}\label{e2.17} \|u\|_{{C}^k_b}\leq C_{m,k}p^{(n+k)/2}\|u\|_{p,m}. \end{equation} \end{prop} For any $y\in X$ and $v\in (L^p\otimes E)_y$, we define the delta-section $\delta_v\in \mathscr{C}^{-\infty}(X,L^p\otimes E)$ as a linear functional on ${C}^{\infty}_c(X,L^p\otimes E)$ given by \begin{equation}\label{e2.18} \langle \delta_v, \varphi\rangle =\langle v, \varphi(y)\rangle_{h^{L^p\otimes E}}, \quad \varphi \in {C}^{\infty}_c(X,L^p\otimes E). \end{equation} \begin{prop}[\cite{ko-ma-ma}, Proposition 2.3]\label{p:delta} For any $m>n$ and $v\in L^p\otimes E$, $\delta_v\in H^{-m}(X,L^p\otimes E)$ with the following norm estimate \begin{equation}\label{e2.19} \sup_{|v|=1}p^{-n/2}\|\delta_v\|_{p,-m}<\infty. \end{equation} \end{prop} \subsection{$L^2$-estimates}\label{s:L2estimates} For the rest of this section, we fix some $\delta>0$ and $K>0$. Denote \[ \Omega=\Omega_{\delta,K}=\{\lambda\in \field{C} : d(\lambda,\Sigma)>\delta, |\lambda|<K\}. \] By Theorem \ref{t:spectrum}, there exists $p_0\in \field{N}$ such that, for any $\lambda\in \Omega$ and $p>p_0$, the operator $\lambda-H_p$ is invertible in $L^2(X,L^p\otimes E)$, and the resolvent $R(\lambda,H_p):=(\lambda-H_p)^{-1}$ satisfies the estimate \begin{equation}\label{e:res1} \left\|R(\lambda,H_p)\right\|^{0,0}_p\leq \frac{1}{\delta}. \end{equation} By general elliptic theory, we know that the operator $R(\lambda,H_p)$ defines a bounded operator from $H^m(X,L^p\otimes E)$ to $H^{m+2}(X,L^p\otimes E)$ for any $m\in \field{Z}$. \begin{prop} There exists $C>0$ such that for all $\lambda\in \Omega$ and $p>p_0$ we have \begin{equation}\label{e:res3} \left\|R(\lambda,H_p)\right\|^{0,2}_p\leq C.
\end{equation} \end{prop} \begin{proof} First, we observe that by the definition of the Bochner Laplacian \eqref{e:def-Bochner}, \begin{equation}\label{e:subelliptic-1} \left\| \nabla^{L^p\otimes E}u\right\|^2=\langle \Delta^{L^p\otimes E}u, u\rangle. \end{equation} Using \eqref{e:res1}, we obtain the estimate \begin{equation}\label{e:Delta-estimate} \left\|\frac{1}{p}\Delta^{L^p\otimes E}R(\lambda,H_p)\right\| = \left\|(\lambda-V) R(\lambda,H_p)+1\right\|\leq C. \end{equation} By \eqref{e:res1}, \eqref{e:subelliptic-1} and \eqref{e:Delta-estimate}, we obtain an estimate for the $H^1$-norm of $R(\lambda,H_p) u$: \begin{multline}\label{e:1-estimate} \left\|R(\lambda,H_p) u\right\|^2_{p,1}=\left\|\frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}R(\lambda,H_p) u\right\|^2+\|R(\lambda,H_p) u\|^2\\ = \left\langle \frac{1}{p} \Delta^{L^p\otimes E}R(\lambda,H_p) u,R(\lambda,H_p) u\right\rangle +\|R(\lambda,H_p)u\|^2\leq C\|u\|^2. \end{multline} Next, we estimate the $H^2$-norm of $R(\lambda,H_p) u$. We will use an equivalent definition of the Sobolev norms, given in terms of local coordinates. For any $x_0\in X$, we will consider normal coordinates $\gamma_{x_0}$ and trivializations of the bundles $L$ and $E$ defined on $B^{X}(x_0,\varepsilon)$ as in the Introduction. We still denote by $e_{j}$ the constant vector field $e_j(Z)=e_j, j=1,\ldots,2n$ on $B^{T_{x_0}X}(0,\varepsilon)$. One can show that the restriction of the norm $\|\cdot\|_{p,m}$ to ${C}^\infty_c(B^{T_{x_0}X}(0,\varepsilon), L^p\otimes E) \cong {C}^\infty_c(B^{X}(x_0,\varepsilon), L^p\otimes E)$ is equivalent, uniformly in $x_0\in X$ and $p\in \mathbb N$, to the norm $\|\cdot \|^\prime_{p,m}$ given for $u\in {C}^\infty_c(B^{T_{x_0}X}(0,\varepsilon), L^p\otimes E)$ by \begin{equation}\label{localSobolev} \|u\|^\prime_{p,m}=\left(\sum_{\ell=0}^m\sum_{j_1,\ldots,j_\ell=1}^{2n} \int_{X_0} \Big(\tfrac{1}{p}\Big)^{\ell}| \nabla^{L_0^p\otimes E_0}_{e_{j_1}}\cdots \nabla^{L_0^p\otimes E_0}_{e_{j_\ell}}u|^2dZ\right)^{1/2}.
\end{equation} That is, there exists $C_m>0$ such that, for any $x_0\in X$, $p\in \mathbb N$ we have \begin{equation}\label{pm-prime} C_m^{-1}\|u\|^\prime_{p,m}\leq \|u\|_{p,m}\leq C_m\|u\|^\prime_{p,m}\,, \end{equation} for any $u\in {C}^\infty_c(B^{T_{x_0}X}(0,\varepsilon), L^p\otimes E) \cong {C}^\infty_c(B^{X}(x_0,\varepsilon), L^p\otimes E)$. By choosing an appropriate covering of $X$ by normal coordinate charts, we can reduce our considerations to the local setting. Without loss of generality, we can assume that $u\in {C}^\infty_c(B^{T_{x_0}X}(0,\varepsilon), L^p\otimes E)$ for some $x_0\in X$ and that the Sobolev norm of $u$ is given by the norm $\|u\|^\prime_{p,m}$ defined in \eqref{localSobolev}. (Later on, we omit the prime for simplicity.) One can write \begin{equation}\label{Delta-p} \Delta^{L^p\otimes E}=-\sum_{j,k=1}^{2n}g^{jk}(Z)\left[\nabla^{L^p\otimes E}_{e_j} \nabla^{L^p\otimes E}_{e_k}- \sum_{\ell =1}^{2n} \Gamma^{\ell}_{jk}(Z) \nabla^{L^p\otimes E}_{e_\ell}\right], \end{equation} where $(g^{jk}(Z))$ is the inverse of the metric tensor and the functions $\Gamma^{\ell}_{jk}$ are defined by $\nabla^{TX}_{e_j}e_k=\sum_{\ell}\Gamma^{\ell}_{jk}e_\ell$. We also observe that \begin{equation}\label{e:adj} (\nabla^{L^p\otimes E}_{e_k})^*=-\nabla^{L^p\otimes E}_{e_k}+f_k \end{equation} for any $k=1,\ldots,2n$ with some $f_k\in C^\infty(X)$. By \eqref{e:1-estimate} and \eqref{e:subelliptic-1}, we get \begin{multline*} \|R(\lambda,H_p) u\|^2_{p,2}\leq C_1\sum_{j,k=1}^{2n}\left\|\frac{1}{p} \nabla^{L^p\otimes E}_{e_j} \nabla^{L^p\otimes E}_{e_k}R(\lambda,H_p) u\right\|^2+\|R(\lambda,H_p) u\|^2_{p,1}\\ \leq C_1\sum_{k=1}^{2n} \left\langle \frac{1}{p} \Delta^{L^p\otimes E}\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u, \frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u\right\rangle +C_2\|u\|^2.
\end{multline*} The first term in the right-hand side of the last inequality can be written as follows: \begin{multline*} \sum_{k=1}^{2n} \left\langle \frac{1}{p} \Delta^{L^p\otimes E}\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u, \frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u\right\rangle \\ =\sum_{k=1}^{2n} \left\langle \frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k} \frac{1}{p} \Delta^{L^p\otimes E} R(\lambda,H_p) u, \frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u\right\rangle \\ +\sum_{k=1}^{2n} \left\langle \left[\frac{1}{p} \Delta^{L^p\otimes E}, \frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k}\right] R(\lambda,H_p) u, \frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u\right\rangle \\ = I_{1}+I_{2}. \end{multline*} For the $I_{1}$-term, using \eqref{e:Delta-estimate}, \eqref{e:1-estimate} and \eqref{e:adj}, we get \begin{align*} I_{1}= & \sum_{k=1}^{2n} \left\langle \frac{1}{p} \Delta^{L^p\otimes E} R(\lambda,H_p) u, \left(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k}\right)^*\frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_k} R(\lambda,H_p) u\right\rangle \\ \leq & C_3\left\|R(\lambda,H_p) u\right\|_{p,2} \|u\|. \end{align*} The commutator $\left[\frac{1}{p}\Delta^{L^p\otimes E}, \frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{k}}\right]$ is a second order differential operator whose coefficients are uniformly bounded in $x_0\in X$ (cf. \eqref{e:comm} below). By \eqref{e:1-estimate}, it follows that \[ I_{2}\leq C_4\left\|R(\lambda,H_p) u\right\|_{p,2} \|u\|. \] Combining the above estimates, we conclude that \[ \|R(\lambda,H_p) u\|^2_{p,2}\leq C_5\left\|R(\lambda,H_p) u\right\|_{p,2} \|u\|+C_6\|u\|^2. \] Applying the inequality $ab\leq \frac 12(\epsilon^2a^2+\epsilon^{-2}b^2)$ with a suitable $\epsilon>0$ to the first term in the right-hand side of this inequality, we complete the proof. 
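In detail, writing the last inequality as $a^2\leq C_5ab+C_6b^2$ with $a=\left\|R(\lambda,H_p) u\right\|_{p,2}$ and $b=\|u\|$, and applying $ab\leq \frac 12(\epsilon^2a^2+\epsilon^{-2}b^2)$ with $\epsilon^2=1/C_5$, we get
\[
\|R(\lambda,H_p) u\|^2_{p,2}\leq \frac 12 \|R(\lambda,H_p) u\|^2_{p,2}+\Big(\frac{C_5^2}{2}+C_6\Big)\|u\|^2,
\]
so that $\|R(\lambda,H_p) u\|_{p,2}\leq \big(C_5^2+2C_6\big)^{1/2}\|u\|$, which is the desired bound \eqref{e:res3}.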
\end{proof} \subsection{Weighted estimates} We will consider a sequence of weight functions $\Phi_p\in C^\infty(X)$, $p\in \field{N}$, satisfying the following condition: for any integer $k>0$ there exists $C_k>0$ such that \begin{equation}\label{e:chi-p} \left(\frac{1}{\sqrt{p}}\right)^{k-1}\left|\nabla^k\Phi_p(x)\right|<C_k, \quad x\in X. \end{equation} Define a family of differential operators on $C^\infty(X,L^p\otimes E)$ by \begin{align}\label{e:weight-operator} H_{p;\alpha}:= e^{\alpha \Phi_p} H_p e^{-\alpha \Phi_p}, \quad \alpha\in \field{R}. \end{align} An easy computation gives \begin{equation}\label{e:DpaW} H_{p;\alpha}=H_p+\frac 1p(\alpha A_{p}+\alpha^2B_{p}), \end{equation} where \begin{equation} \label{e:ApBp} A_{p}=-2\nabla \Phi_p\cdot \nabla^{L^p\otimes E} +\Delta \Phi_p, \quad B_{p}=-|\nabla \Phi_p|^2. \end{equation} From \eqref{e:ApBp}, we immediately infer that, for any $m\in \mathbb N$, there exists $C_m>0$ such that, for any $p\in \mathbb N$ and $u\in H^{m}(X,L^p\otimes E)$, \begin{equation}\label{e:Sobolev-mapping} \|A_{p}u\|_{p,m-1}\leq C_mp^{1/2}\|u\|_{p,m},\quad \|B_{p}u\|_{p,m} \leq C_m\|u\|_{p,m}. \end{equation} Moreover, $C_m$ depends only on the constants $C_k$, $k\leq m+2$, in \eqref{e:chi-p}. The following theorem is a refinement of \cite[Theorem 3.4]{ko-ma-ma}. Recall that $p_0\in \field{N}$ is defined in the beginning of Section \ref{s:L2estimates}. \begin{thm}\label{Thm1.9a} There exists $c_0>0$ such that, for any $p\in \mathbb N$, $p>p_0$, $\lambda\in \Omega$, and $|\alpha|<c_0\sqrt{p}$, the operator $\lambda-H_{p;\alpha}$ is invertible in $L^2(X, L^p\otimes E)$.
Moreover, for any $m\in \mathbb N$, the resolvent $\left(\lambda-H_{p;\alpha}\right)^{-1}$ maps $H^m(X, L^p\otimes E)$ to $H^{m+2}(X, L^p\otimes E)$ with the following norm estimates: \begin{equation}\label{e:mm+2} \left\|\Big(\lambda-H_{p;\alpha}\Big)^{-1} \right\|_{p}^{m,m+2}\leq C_m, \end{equation} where $C_m>0$ is independent of $p\in \mathbb N$, $p>p_0$, $\lambda\in \Omega$, and $|\alpha|<c_0\sqrt{p}$. \end{thm} \begin{proof} As above, we denote $R(\lambda, H_p)=(\lambda-H_p)^{-1}$. We can write \[ \Big(\lambda-H_{p;\alpha}\Big)R(\lambda,H_p)=1-(H_{p;\alpha}-H_p)R(\lambda,H_p). \] By \eqref{e:DpaW}, \eqref{e:Sobolev-mapping} and \eqref{e:res3}, it follows that, for all $\lambda\in \Omega$, $p>p_0$ and $\alpha\in \mathbb R$, we have \begin{multline} \left\|(H_{p;\alpha}-H_p)R(\lambda,H_p)\right\|^{0,0}_p\\ \leq C\left(\frac{|\alpha|}{\sqrt{p}} \left\|R(\lambda,H_p)\right\|^{0,1}_p +\frac{\alpha^2}{p}\left\|R(\lambda,H_p) \right\|^{0,0}_p\right) \leq C\left(\frac{|\alpha|}{\sqrt{p}}+\frac{\alpha^2}{p}\right), \end{multline} where $C>0$ is some constant. From now on, we will assume that $c_0>0$ satisfies $C(c_0+c_0^2)<\frac 12$. Then, if $|\alpha|<c_0\sqrt{p}$, we have \begin{equation}\label{e:d} \left\|(H_{p;\alpha}-H_p) R(\lambda,H_p)\right\|^{0,0}_p <\frac 12. \end{equation} Therefore, for all $\lambda\in \Omega$, $p>p_0$ and $\alpha\in \mathbb R$ with $|\alpha|<c_0\sqrt{p}$, the operator $\lambda-H_{p;\alpha}$ is invertible in $L^2(X,L^p\otimes E)$, and, for $R(\lambda,H_{p;\alpha}):=(\lambda-H_{p;\alpha})^{-1} $, we have \begin{equation}\label{e:res} R(\lambda,H_{p;\alpha})= R(\lambda,H_p)+R(\lambda, H_{p;\alpha})(H_{p;\alpha}-H_p)R(\lambda, H_p). \end{equation} By general elliptic theory, we know that the operator $R(\lambda,H_{p;\alpha})$ defines a bounded operator from $H^m(X,L^p\otimes E)$ to $H^{m+2}(X,L^p\otimes E)$ for any $m\in \field{Z}$. It remains to prove \eqref{e:mm+2}.
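Before doing so, we record that the invertibility statement can be made explicit via a Neumann series: by \eqref{e:d}, the operator $1-(H_{p;\alpha}-H_p)R(\lambda,H_p)$ is invertible in $L^2(X,L^p\otimes E)$, and
\[
R(\lambda,H_{p;\alpha})= R(\lambda,H_p)\sum_{k=0}^{\infty}\Big[(H_{p;\alpha}-H_p)R(\lambda,H_p)\Big]^{k},
\]
with the series converging in the operator norm. In particular, $\|R(\lambda,H_{p;\alpha})\|^{0,0}_p\leq 2\|R(\lambda,H_p)\|^{0,0}_p\leq 2/\delta$ by \eqref{e:res1}.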
By \eqref{e:d} and \eqref{e:res}, we get \begin{multline*} \left\|R(\lambda,H_{p;\alpha})\right\|^{0,2}_p\leq \left\|R(\lambda,H_p)\right\|^{0,2}_p +\left\| R(\lambda,H_{p;\alpha}) \right\|^{0,2}_p\left\|(H_{p;\alpha}-H_p)R(\lambda,H_p) \right\|^{0,0}_p \\ \leq \left\|R(\lambda,H_p)\right\|^{0,2}_p +\frac 12\left\| R(\lambda,H_{p;\alpha}) \right\|^{0,2}_p, \end{multline*} which gives \[ \left\|R(\lambda,H_{p;\alpha})\right\|^{0,2}_p\leq 2\left\|R(\lambda,H_p)\right\|^{0,2}_p \] and, by \eqref{e:res3}, proves \eqref{e:mm+2} for $m=0$. Now we will again work locally. Let $\{e_j\}$ be a local frame of vector fields on $X$. By \eqref{localSobolev}, we see that for any $k\geq 1$ there exists $C_k>0$ such that \begin{equation}\label{e:k-k-1} \|v\|_{p,k} \leq C_k\left(\sum_{j=1}^{2n}\left\|\left(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}\right)v\right\|_{p,k-1}+\|v\|_{p,k-1}\right) \end{equation} for any $v\in {C}^\infty_c(X, L^p\otimes E)$. For any $1\leq j \leq 2n$ and $u\in {C}^\infty_c(X, L^p\otimes E)$, we can write \begin{multline}\label{e:estj-k} \Big(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}\Big) R(\lambda,H_{p;\alpha})u=R(\lambda,H_{p;\alpha})\Big(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}\Big)u\\ +R(\lambda,H_{p;\alpha})\left[\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}, H_{p;\alpha}\right] R(\lambda,H_{p;\alpha})u. \end{multline} This gives the estimate \begin{multline}\label{e:estj-k-1} \left\|\left(\tfrac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}\right) R(\lambda,H_{p;\alpha})u\right\|_{p,m+1}\\ \begin{aligned} \leq & \left\|R(\lambda,H_{p;\alpha})\right\|^{m-1,m+1}_{p}\left\|\left(\tfrac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}\right)u\right\|_{p,m-1}\\ & +\left\|R(\lambda,H_{p;\alpha})\right\|_p^{m-1,m+1} \left\|\left[\tfrac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}, H_{p;\alpha}\right]\right\|_p^{m+1,m-1}\times \\ & \times \left\|R(\lambda,H_{p;\alpha})\right\|_p^{m-1,m+1} \|u\|_{p,m-1}.
\end{aligned} \end{multline} As in \cite[Proposition 3.5]{ko-ma-ma}, for any $1\leq j \leq 2n$, the commutator $\left[\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}, H_{p;\alpha}\right]$ is a second order differential operator of the form \begin{multline}\label{e:comm} \left[\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_{j}}, H_{p;\alpha}\right]=\sum_{i,k}\tilde a^{ik}_{p;\alpha}(Z) \Big(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_i}\Big) \Big(\frac{1}{\sqrt{p}}\nabla^{L^p\otimes E}_{e_k}\Big)\\ +\sum_{\ell}\tilde a^{\ell}_{p;\alpha}(Z)\frac{1}{\sqrt{p}} \nabla^{L^p\otimes E}_{e_\ell}+\tilde a_{p;\alpha}(Z), \end{multline} whose coefficients $\tilde a_{p;\alpha}^{ik}$, $\tilde a_{p;\alpha}^\ell$ and $\tilde a_{p;\alpha}$ are bounded uniformly in $p\in \mathbb N$ and $|\alpha|<c_0\sqrt{p}$. Using \eqref{e:k-k-1}, \eqref{e:estj-k-1} and \eqref{e:comm}, we infer that for any $m\geq 1$ \begin{multline} \|R(\lambda,H_{p;\alpha})\|^{m, m+2}_{p}\\ \leq C_m\left(\|R(\lambda,H_{p;\alpha})\|^{m-1,m+1}_{p}+(\|R(\lambda,H_{p;\alpha})\|_p^{m-1,m+1})^2\right), \end{multline} which, together with the case $m=0$, proves \eqref{e:mm+2} for all $m\geq 0$ by induction. \end{proof} \section{Spectral projection}\label{s:projection} Let us consider an interval $I=(\alpha,\beta)$ such that $\alpha,\beta\not \in \Sigma$. Let $P_{p,I}$ be the spectral projection of the operator $H_{p}$ associated with $I$ and $P_{p,I}(x,x^\prime)$, $x,x^\prime\in X$, be its smooth kernel with respect to the Riemannian volume form $dv_X$. In this section, we will study the asymptotic behavior of $P_{p,I}(x,x^\prime)$ as $p\to \infty$. By Theorem \ref{t:spectrum}, there exist $\mu_0>0$ and $p_0\in \field{N}$ such that for any $p>p_0$ \[ \sigma(H_{p})\subset (-\infty, \alpha-\mu_0) \cup I \cup (\beta+\mu_0, \infty). \] Let $\Gamma$ be the boundary of the rectangle $\Pi=(\alpha-\mu_0/2,\beta+\mu_0/2)+i(-\mu_0/2, \mu_0/2)$ in $\field{C}$, oriented counterclockwise.
Then for any integer $m>0$ and $p>p_0$, we can write \begin{equation}\label{e:Pqp-integral} P_{p,I}=\frac{1}{2\pi i} \int_\Gamma \lambda^{m-1}(\lambda-H_{p})^{-m}d\lambda. \end{equation} \subsection{Off-diagonal estimates}\label{s:far-off-diagonal} The proof of Theorem~\ref{t:exp-Pp} closely follows the proof of \cite[Theorem 1.2]{ko-ma-ma}, so we just give a short outline. As shown in \cite[Proposition 4.1]{Kor91} (see also \cite[Section 3.1]{ko-ma-ma}), for any $p\in \mathbb N$, there exists a function $\widetilde{d}_p$, satisfying the following conditions: (1) we have \begin{equation}\label{(1.1)} \vert \widetilde{d}_p(x,y) - d (x,y)\vert < \frac{1}{\sqrt{p}}\;, \quad x, y\in X; \end{equation} (2) for any $k>0$, there exists $c_k>0$ such that \begin{equation} \label{dist} \left(\frac{1}{\sqrt{p}}\right)^{k-1} \left| \nabla^k_{x} \widetilde{d}_p(x,y)\right| < c_{k}\:,\quad x, y\in X. \end{equation} We get a family $\{\widetilde{d}_{p,y} : y\in X\}$ of weight functions on $X$ given by \begin{equation}\label{e:3.10} \widetilde{d}_{p,y}(x) = \widetilde{d}_p(x,y), \quad x\in X, \end{equation} which satisfy \eqref{e:chi-p} uniformly on $y\in X$. As in \eqref{e:weight-operator}, consider the family of differential operators \[ H_{p;\alpha,y}:= e^{\alpha \widetilde{d}_{p,y}} H_{p} e^{-\alpha \widetilde{d}_{p,y}} \quad \alpha\in \field{R}, \quad y\in X. \] By Theorem~\ref{Thm1.9a}, we get Sobolev norm estimates, uniform in $y$, for the operator $(\lambda-H_{p;\alpha,y})^{-m}$ for any $m\in \field{N}$. Next, we derive pointwise exponential estimates for the Schwartz kernel of this operator and its derivatives of an arbitrary order, using a refined form of the Sobolev embedding theorem stated in Propositions~\ref{p:Sobolev} and \ref{p:delta}. Finally, we use the formula \eqref{e:Pqp-integral} to complete the proof of Theorem~\ref{t:exp-Pp}. 
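For completeness, we recall why \eqref{e:Pqp-integral} holds. For $\mu\in \field{R}$ with $\mu\not\in \partial\Pi$, the residue theorem gives
\[
\frac{1}{2\pi i} \int_\Gamma \lambda^{m-1}(\lambda-\mu)^{-m}\,d\lambda=
\begin{cases}
1, & \mu\in \Pi,\\
0, & \mu\not\in \overline{\Pi},
\end{cases}
\]
since the residue of $\lambda^{m-1}(\lambda-\mu)^{-m}$ at $\lambda=\mu$ equals $\frac{1}{(m-1)!}\frac{d^{m-1}}{d\lambda^{m-1}}\lambda^{m-1}\big|_{\lambda=\mu}=1$. Since $\sigma(H_{p})\cap \partial\Pi=\varnothing$ for $p>p_0$, the identity \eqref{e:Pqp-integral} then follows from the spectral theorem applied to the self-adjoint operator $H_{p}$.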
\subsection{Localization of the problem}\label{local} Next, we localize the problem, slightly modifying the constructions of \cite[Sections 1.1 and 1.2]{ma-ma08}. We fix $x_0\in X$. We will use normal coordinates and trivializations of the bundles $L$ and $E$ defined on $B^{X}(x_0,\varepsilon)$ as in the Introduction. The fixed orthonormal basis $\{e_j\}$ of $T_{x_0}X$ gives rise to an isomorphism $X_0:=\mathbb R^{2n}\cong T_{x_0}X$. Consider the trivial bundles $L_0$ and $E_0$ on $X_0$ with fibers $L_{x_0}$ and $E_{x_0}$, respectively. The above identifications induce the Riemannian metric $g$ on $B^{T_{x_0}X}(0,\varepsilon)$ as well as the connections $\nabla^L$ and $\nabla^E$ and the Hermitian metrics $h^L$ and $h^E$ on the restrictions of $L_0$ and $E_0$ to $B^{T_{x_0}X}(0,\varepsilon)$. In particular, $h^L$, $h^E$ are the constant metrics $h^{L_0}=h^{L_{x_0}}$, $h^{E_0}=h^{E_{x_0}}$. For some $\varepsilon \in (0,r_X/4)$, which will be fixed later, we extend these geometric objects from $B^{T_{x_0}X}(0,\varepsilon)$ to $X_0\cong T_{x_0}X$ in the following way. Let $\rho : \mathbb R\to [0,1]$ be a smooth even function such that $\rho(v)=1$ if $|v|<2$ and $\rho(v)=0$ if $|v|>4$. Let $\varphi_\varepsilon : \mathbb R^{2n}\to \mathbb R^{2n}$ be the map defined by $\varphi_\varepsilon(Z)=\rho(|Z|/\varepsilon)Z$. Set $\nabla^{E_0}=\varphi^*_\varepsilon\nabla^E$. Define a Hermitian connection $\nabla^{L_0}$ on $(L_0,h^{L_0})$ by \[ \nabla^{L_0}_u=\nabla^L_{d\varphi_\varepsilon(Z)(u)}+\frac 12(1-\rho^2(|Z|/\varepsilon))R^L_{x_0}(\mathcal R(Z),u), \quad Z\in X_0, \quad u\in T_ZX_0, \] where we use the canonical isomorphism $X_0\cong T_ZX_0$ and $\mathcal R(Z)=\sum_{j=1}^{2n} Z_je_j\in \mathbb R^{2n}\cong T_ZX_0$.
Its curvature is given by \cite[(1.22)]{ma-ma08} \begin{equation}\label{e:RL0} \begin{split} R^{L_0}_Z=& (1-\rho^2(|Z|/\varepsilon))R^L_{x_0}+\rho^2(|Z|/\varepsilon)R^L_{\varphi_\varepsilon(Z)}\\ & -(\rho\rho^\prime)(|Z|/\varepsilon)\frac{\sum_{j=1}^{2n}Z_je^j}{\varepsilon |Z|}\wedge [R^L_{x_0}(\mathcal R,\cdot)-R^L_{\varphi_\varepsilon(Z)}(\mathcal R,\cdot)], \end{split} \end{equation} where $e^j$ is the dual basis in $T^*_{x_0}X\cong T^*_ZX_0$. Now we proceed in a slightly different way than in \cite{ma-ma08}. Recall that, for any $Z\in B^{T_{x_0}X}(0,r_X)\cong B^{X}(x_0,r_X)$, we have a skew-adjoint operator $B_Z : T_ZX_0\to T_ZX_0$ such that \[ iR^L_Z(u,v)=g_Z(B_Zu,v), \quad u,v\in T_ZX_0. \] Its eigenvalues have the form $\pm i a_j(Z), j=1,\ldots,n,$ with $a_j(Z)>0$. We set \[ B^{X_0}_Z=B_{\varphi_\varepsilon(Z)},\quad V^{X_0}(Z)=V(\varphi_\varepsilon(Z)), \quad Z\in X_0, \] and define a symmetric bilinear form $g^{X_0}_Z$ on $T_ZX_0$ by \[ g^{X_0}_Z(u,v)=iR^{L_0}_Z((B^{X_0}_Z)^{-1}u,v), \quad u,v\in T_ZX_0. \] By \eqref{e:RL0}, it is easy to see that, for $\varepsilon$ small enough, $g^{X_0}$ is positive definite and defines a Riemannian metric on $X_0$. From now on, we fix such an $\varepsilon>0$. Let $\Delta^{L_0^p\otimes E_0}$ be the associated Bochner Laplacian acting on $C^\infty(X_0,L_0^p\otimes E_0)$. Introduce the operator $H^{X_0}_p$ acting on $C^\infty(X_0,L_0^p\otimes E_0)$ by \[ H^{X_0}_p=\frac 1p \Delta^{L_0^p\otimes E_0}+ V^{X_0}(Z). \] It is clear that, for any $u \in {C}^\infty_c(B^{X_0}(0,2\varepsilon))$, we have \begin{align}\label{e:4.10} H_pu(Z)=H^{X_0}_pu(Z). \end{align} Moreover, the eigenvalues of $B^{X_0}_Z$ are given by $\pm i a_j(\varphi_\varepsilon(Z)), j=1,\ldots,n$. Therefore, the set $\Sigma$ for $H^{X_0}_p$ is contained in that for the operator $H_p$. In particular, $\alpha,\beta\not \in \Sigma$.
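The pairing $\pm i a_j(Z)$ of the eigenvalues of $B_Z$ reflects the general spectral picture for an operator that is skew-adjoint with respect to a Riemannian metric; a minimal numerical illustration with a toy real skew-symmetric matrix (numpy, sample data ours):

```python
import numpy as np

# A real skew-symmetric matrix plays the role of B_Z: its spectrum is
# purely imaginary and comes in conjugate pairs ±i a_j with a_j > 0
# when the associated 2-form (here i R^L) is non-degenerate.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = A - A.T                      # skew-symmetric: B^T = -B

eigs = np.linalg.eigvals(B)
# Eigenvalues are purely imaginary...
assert np.allclose(eigs.real, 0.0, atol=1e-8)
# ...and come in pairs ±i a_j.
a = np.sort(eigs.imag)
assert np.allclose(a, -a[::-1], atol=1e-8)
```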
By Theorem \ref{t:spectrum}, there exists $p_0\in \field{N}$ such that for any $p>p_0$ \begin{equation}\label{e:DeltaX0-spectrum} \sigma(H^{X_0}_p)\subset (-\infty, \alpha-\mu_0) \cup I \cup (\beta+\mu_0, \infty), \end{equation} with the same $\mu_0>0$ as above. Let $P^0_{p,I}$ be the spectral projection of $H^{X_0}_p$ corresponding to the interval $I$ and $P^0_{p,I}(Z,Z^\prime)$, $Z,Z^\prime\in X_0$, be its smooth kernel with respect to the Riemannian volume form $dv_{X_0}$. As in \cite[Theorem 4.1]{ko-ma-ma} (extending \cite[Proposition 1.3]{ma-ma08}), one can show that the kernels $P_{p,I,x_0}(Z,Z^\prime)$ and $P^0_{p,I}(Z,Z^\prime)$ are asymptotically close on $B^{T_{x_0}X}(0,\varepsilon)$ in the $C^\infty$-topology, as $p\to \infty$. \begin{thm} \label{p:Pq-difference} There exists $c_0>0$ such that, for any $k\in \mathbb N$, there exists $C_{k}>0$ such that for any $p>p_0$, $x_0\in X$ and $Z,Z^\prime\in B^{X_0}(0,\varepsilon)$, \[ |P_{p,I,x_0}(Z,Z^\prime)-P^0_{p,I}(Z,Z^\prime)|_{C^k}\leq C_{k}e^{-c_0\sqrt{p}}. \] \end{thm} \subsection{Rescaling and formal expansions}\label{scale} Theorem~\ref{p:Pq-difference} reduces our considerations to the operator family $H^{X_0}_{p}$ acting on $C^\infty(X_0, L_0^p\otimes E_0)\cong C^\infty(\field{R}^{2n},E_{x_0})$ (parametrized by $x_0\in X$). We use the rescaling introduced in \cite[Section 1.2]{ma-ma08}. Let $dv_{X_0}$ be the Riemannian volume form of $(X_0, g^{X_0})$, and let $\kappa_{X_0}$ be the smooth positive function on $X_0$ defined by the equation \[ dv_{X_0}(Z)=\kappa_{X_0}(Z)dZ, \quad Z\in X_0. \] Denote $t=\frac{1}{\sqrt{p}}$. For $s\in C^\infty(\mathbb R^{2n}, E_{x_0})$, set \[ S_ts(Z)=s(Z/t), \quad Z\in \mathbb R^{2n}. \] Define the rescaling of the operator $H_p^{X_0}$ by \begin{equation}\label{scaling} \mathcal H_t=S^{-1}_t\kappa_{X_0}^{\frac 12}H_p^{X_0}\kappa_{X_0}^{-\frac 12}S_t. \end{equation} This is a second order differential operator. We expand its coefficients in Taylor series in $t$.
For any $m\in \field{N}$, we get \begin{equation}\label{e:Ht-formal} \mathcal H_t=\mathcal H^{(0)}+\sum_{j=1}^m \mathcal H^{(j)}t^j+\mathcal O(t^{m+1}), \end{equation} where there exists $m^\prime\in \field{N}$ so that for every $k\in\field{N}$ and $t\in [0,1]$ the derivatives up to order $k$ of the coefficients of the operator $\mathcal O(t^{m+1})$ are bounded by $Ct^{m+1}(1+|Z|)^{m^\prime}$. The leading term $\mathcal H^{(0)}$ is given by \eqref{e:DeltaL0p}. By \cite[Theorem 1.4]{ma-ma08}, the next terms $\mathcal H^{(j)}, j\geq 1,$ have the form \begin{equation}\label{e:Hj} \mathcal H^{(j)}=\sum_{k,\ell=1}^{2n} a_{k\ell,j}\frac{\partial^2}{\partial Z_k\partial Z_\ell}+\sum_{k=1}^{2n} b_{k,j}\frac{\partial}{\partial Z_k}+c_{j}, \end{equation} where $a_{k\ell,j}$ is a homogeneous polynomial in $Z$ of degree $j$, $b_{k,j}$ is a polynomial in $Z$ of degree $\leq j+1$ (of the same parity as $j-1$) and $c_{j}$ is a polynomial in $Z$ of degree $\leq j+2$ (of the same parity as $j$). More precisely, for the operator $H_p=\frac{1}{p}\Delta_p$, the operator $\mathcal H^{(j)}$ coincides with the operator $\mathcal O_j$ introduced in that theorem. In the general case, we have \[ \mathcal H^{(j)}=\mathcal O_j+\sum_{|\alpha|=j}(\partial^\alpha(V+\tau))_{x_0}\frac{Z^\alpha}{\alpha!},\quad j=1,2,\ldots. \] In \cite[Theorem 1.4]{ma-ma08}, explicit formulas are given for $\mathcal O_1$ and $\mathcal O_2$. We refer the reader to \cite{ma-ma08,ma-ma:book} for more details. \subsection{Asymptotic expansions of the spectral projection} \label{asymp} By construction, the operator $\mathcal H_{t}$ is a self-adjoint operator in $L^2(\field{R}^{2n}, E_{x_0})$, and its spectrum coincides with the spectrum of $H_p^{X_0}$. By \eqref{e:DeltaX0-spectrum}, there exists $t_0>0$ such that for any $t\in (0,t_0]$ \[ \sigma(\mathcal H_{t})\subset (-\infty, \alpha-\mu_0) \cup I \cup (\beta+\mu_0, \infty).
\] Let $\mathcal P_{t}$ be the spectral projection of $\mathcal H_t$ corresponding to the interval $I$ and $\mathcal P_{t}(Z,Z^\prime)=\mathcal P_{t,x_0}(Z,Z^\prime)$ be its smooth kernel with respect to $dZ$. For any integer $k>0$, we can write (with $\Gamma$ as above) \begin{equation}\label{e:Pqt-s7} \mathcal P_{t}=\frac{1}{2\pi i} \int_\Gamma \lambda^{k-1}(\lambda-\mathcal H_{t})^{-k}d\lambda. \end{equation} Now we can proceed as in \cite{Kor18,ma-ma08}. We only observe that all the constants in the estimates in \cite{Kor18,ma-ma08} depend on finitely many derivatives of $g$, $h^L$, $\nabla^L$, $h^E$, $\nabla^E$ and the lower bound of $g$. Therefore, by the bounded geometry assumptions, all the estimates are uniform in the parameter $x_0\in X$. We will omit the details and give only the final result. \begin{thm}\label{t:thm7.2} The function $\mathcal P_{t}(Z,Z^\prime)$ admits an asymptotic expansion \[ \mathcal P_{t}(Z,Z^\prime)\cong \sum_{r=0}^{\infty}F_{r}(Z,Z^\prime)t^r, \quad t\to 0. \] For any $j\in \mathbb N$, the remainder $\mathcal R_{j,t}(Z,Z^\prime):=\mathcal P_{t}(Z,Z^\prime) -\sum_{r=0}^jF_{r}(Z,Z^\prime)t^r$ satisfies the following condition: for any $m,m^\prime\in \mathbb N$, there exist $C>0$ and $M>0$ such that for any $0\leq t\leq 1$ and $Z,Z^\prime\in T_{x_0}X$ \begin{multline} \sup_{|\alpha|+|\alpha^\prime|\leq m}\Bigg|\frac{\partial^{|\alpha|+|\alpha^\prime|}}{\partial Z^\alpha\partial Z^{\prime\alpha^\prime}}\mathcal R_{j,t}(Z,Z^\prime)\Bigg|_{C^{m^\prime}(X)}\\ \leq Ct^{j+1}(1+|Z|+|Z^\prime|)^M\exp(-c|Z-Z^\prime|). \end{multline} \end{thm} By \eqref{scaling}, we have \[ P^0_{p,I}(Z,Z^\prime)=t^{-2n}\kappa^{-\frac 12}(Z)\mathcal P_{t}(Z/t,Z^\prime/t)\kappa^{-\frac 12}(Z^\prime), \quad Z,Z^\prime \in \mathbb R^{2n}, \] which completes the proof of the asymptotic expansion \eqref{e:main-exp} in Theorem~\ref{t:main}. \subsection{Computation of the coefficients} We will use the formal power series technique developed in \cite[Section 1.5]{ma-ma08}.
We take the formal asymptotic expansion for the operator $\mathcal H_{t}$ given by \eqref{e:Ht-formal} and find a formal asymptotic expansion for the resolvent $(\lambda-\mathcal H_t)^{-1}$, $\lambda\in \Pi$, solving the formal power series equation \begin{equation}\label{e:Ltres-formal} (\lambda-\mathcal H_t)f(t,\lambda)=I, \end{equation} where \[ f(t,\lambda)=\sum_{r=0}^{\infty} t^rf_r(\lambda), \quad f_r(\lambda)\in \operatorname{End}(L^2(\field{R}^{2n}, E_{x_0})). \] Identifying the coefficients in $t$, we get \[ f_0(\lambda)=(\lambda-\mathcal H^{(0)})^{-1}, \] \[ f_r(\lambda)=(\lambda-\mathcal H^{(0)})^{-1}\sum_{j=1}^r\mathcal H^{(j)}f_{r-j}(\lambda), \quad r\geq 1. \] We find that \[ f_r(\lambda)=\sum_{\substack{k\geq 1, j_\ell \geq 1\\ j_1+\ldots+j_k=r}}(\lambda-\mathcal H^{(0)})^{-1}\mathcal H^{(j_1)}(\lambda-\mathcal H^{(0)})^{-1}\mathcal H^{(j_2)}\ldots \mathcal H^{(j_k)} (\lambda-\mathcal H^{(0)})^{-1}. \] Recall that $\mathcal P_{I}$ denotes the spectral projection of $\mathcal H^{(0)}$, corresponding to $I$, and put $\mathcal P^\bot_{I}=I-\mathcal P_{I}$. Using \eqref{e:Lambda-Bergman}, we can write \[ (\lambda-\mathcal H^{(0)})^{-1}=\sum_{(\mathbf k,\mu) : \Lambda_{\mathbf k,\mu}\in I} \frac{1}{\lambda-\Lambda_{\mathbf k,\mu}}\mathcal P_{\Lambda_{\mathbf k,\mu}}+(\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I}. \] Observe that the second term on the right-hand side of this equality is an analytic function for $\lambda\in\Pi$. We infer that \begin{equation}\label{e:fr} f_r(\lambda)=\Phi_{r}(\lambda)+\Phi^\bot_{r}(\lambda),
\end{equation} where $\Phi^\bot_{r}$ is an analytic function in $\Pi$ given by \[ \Phi^\bot_{r}(\lambda)=\sum_{\substack{k\geq 1, j_\ell \geq 1\\ j_1+\ldots+j_k=r}}(\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I} \mathcal H^{(j_1)}(\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I}\mathcal H^{(j_2)}\ldots \mathcal H^{(j_k)} (\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I}, \] and \begin{equation}\label{e:Pr} \Phi_{r}(\lambda)=\sum_{\substack{k\geq 1, j_\ell \geq 1\\ j_1+\ldots+j_k=r}}\mathcal R_0\mathcal H^{(j_1)}\mathcal R_1\mathcal H^{(j_2)}\ldots \mathcal H^{(j_k)} \mathcal R_k, \end{equation} where at least one of $\mathcal R_0, \ldots, \mathcal R_k$ equals $\frac{1}{\lambda-\Lambda_{\mathbf k,\mu}}\mathcal P_{\Lambda_{\mathbf k,\mu}}$ with $\Lambda_{\mathbf k, \mu}\in I$. Using \eqref{e:Pqt-s7} and \eqref{e:Ltres-formal}, we get a formal asymptotic expansion for $\mathcal P_{t}$: \[ \mathcal P_{t}=\frac{1}{2\pi i}\int_\Gamma (\lambda-\mathcal H_{t})^{-1}d\lambda=\frac{1}{2\pi i} \sum_{r=0}^{\infty} t^r \int_\Gamma f_r(\lambda)d\lambda=\frac{1}{2\pi i} \sum_{r=0}^{\infty} t^r \int_\Gamma \Phi_{r}(\lambda) d\lambda, \] which gives \begin{equation}\label{e:FrPhi} F_{r}=\frac{1}{2\pi i}\int_\Gamma \Phi_{r}(\lambda)d\lambda. \end{equation} For $r=0$, we have \[ \Phi_0(\lambda)=\sum_{(\mathbf k,\mu) : \Lambda_{\mathbf k, \mu}\in I} \frac{1}{\lambda-\Lambda_{\mathbf k,\mu}}\mathcal P_{\Lambda_{\mathbf k,\mu}}. \] By \eqref{e:FrPhi}, this proves \eqref{e:F0}. Consider the set $\mathcal A$ of operators in $L^2(\field{R}^{2n},E_{x_0})$ with smooth kernel of the form $K(Z,Z^\prime)\mathcal P(Z,Z^\prime)$, where $K(Z,Z^\prime)$ is a polynomial in $Z, Z^\prime$ (here $\mathcal P(Z,Z^\prime)$ is the Bergman kernel given by \eqref{e:Bergman}). Let us show that, for any $\lambda\not\in \Sigma\cap I$, the operator $\Phi_{r}(\lambda)$ is in $\mathcal A$. 
By \eqref{e:FrPhi}, this will immediately imply that $F_r\in\mathcal A$ and prove \eqref{e:Fr}. It is easy to see that $\mathcal A$ is an involutive algebra with respect to composition and taking adjoints. Moreover, it is a filtered algebra, with the filtration given by the degree of the polynomial $K$. By these properties, it suffices to prove that, for any $j_1,j_2,\ldots,j_k$, \begin{equation}\label{e:inclusion} (\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I} \mathcal H^{(j_1)}\ldots (\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I} \mathcal H^{(j_k)}\mathcal P_{\Lambda_{\mathbf k,\mu}}\in \mathcal A_N \end{equation} with $N=\kappa(I)+j_1+\ldots+j_k+2k$. First, observe that $\mathcal P_{\Lambda_{\mathbf k,\mu}}\in \mathcal A_{\kappa(I)}$. Using the description of the coefficients of $\mathcal H^{(j)}$ given by \eqref{e:Hj}, one can easily see that, for any $j$ and $A\in \mathcal A_N$, the operator $\mathcal H^{(j)}A$ belongs to $\mathcal A_{j+N+2}$. Finally, one can show that, for any $A\in \mathcal A_N$, the operator $(\lambda-\mathcal H^{(0)})^{-1}\mathcal P^\bot_{I} A$ belongs to $\mathcal A_N$. This follows immediately if we diagonalize the operator $\mathcal H^{(0)}$ in the orthonormal basis of its eigenfunctions and use the explicit description of its eigenvalues given, for instance, in \cite[Section 1.4]{ma-ma08}. This completes the proof of \eqref{e:inclusion}.
\section{Introduction} Non-hermitian Hamiltonians with real eigenvalues in the context of ${\mathcal {PT}}$ symmetry have become an interesting area of investigation over the last couple of decades \cite{bender98, bender99, bender02, brody16, mosta02, brody14,moise11,baga15}. The present article stems from a recent study of partial $\mathcal{PT}$ symmetry by Beygi et al. \cite{beygi15}, where a variable-specific action of the symmetry operator is understood by considering a model of an $N$-coupled harmonic oscillator Hamiltonian with purely imaginary coupling terms. It has also been observed that the reality and partial reality of the spectrum have direct correspondences with the classical trajectories. The present formulation attempts to explore the possibility of partial $\mathcal{PT}$ symmetry in a Bose-Hubbard Hamiltonian operator \cite{graefe08} as well as in its eigenstates in a typical Fock space environment. The relevant Fock space \cite{zhu12} has been viewed as a Reproducing Kernel Hilbert Space \cite{paulsen16,hai18, hai16, hai18c, gar05, gar07} and the symmetry operators are understood as Weighted Composition Conjugations \cite{hai18, hai16, hai18c} acting on it. We begin with the following definition of the Fock space involving functions of $n$ complex variables. \begin{definition}{Definition} A Fock (or Segal-Bargmann) space $(\mathcal{F}^2({C}^n))$ is a separable complex Hilbert space of entire functions (of the complex variables $\{\zeta_j : j=1\dots n\}$) equipped with the inner product \begin{eqnarray} \langle \psi, \phi\rangle=\prod_{j=1}^n\int_{W(\zeta_j)} \psi(\zeta_j : j=1\dots n)\overline{\phi(\zeta_j : j=1\dots n)}\nonumber\\ \;\; {\rm{with}} \prod_{j=1}^n\int_{W(\zeta_j)}\equiv \int\int d{W(\zeta_1)}\dots d{W(\zeta_n)}. \end{eqnarray} Here, ${d}{W(u)}=\frac{1}{\pi}e^{-\vert u\vert^2}{d}({\rm{Re}}(u)) d({\rm{Im}}(u))$ represents the relevant Gaussian measure relative to the complex variable $u$.
\end{definition} In a Fock space of one complex variable, ${\mathcal {PT}}$ symmetry is often understood as a consequence of the more general notion of \textbf{weighted composition conjugation} \cite{hai18}, defined as follows. \begin{definition}{Definition} Let $\zeta$ be a complex variable and let $\{\vartheta, \eta, \upsilon\}$ be complex numbers satisfying the necessary and sufficient conditions : $\vert\vartheta\vert=1,\bar{\vartheta}\eta+\bar{\eta}=0$ and $\vert\upsilon\vert^2e^{\vert\eta\vert^2}=1$. The weighted composition conjugation is defined as \begin{eqnarray} \mathcal{C}_{(\vartheta, \eta, \upsilon)}\psi(\zeta)=\upsilon e^{\eta\zeta}\overline{\psi(\overline{\vartheta\zeta+\eta})}. \end{eqnarray} \end{definition} The anti-linear operator $\mathcal{C}_{(\vartheta, \eta, \upsilon)}$ is a \textbf{conjugation} since it is involutive and isometric. The action of the operator $\mathcal{PT}$ is equivalent to the choice : $\vartheta=-1=-\upsilon,\eta=0$, which results in the following equation \begin{eqnarray} \mathcal{C}_{(\vartheta, 0, 1)}\vert_{\vartheta=-1}\psi(\zeta)=\overline{\psi(\overline{-\zeta})}. \end{eqnarray} Similarly, the action of $\mathcal{T}$ is indicative of the choice : $\vartheta=1, \eta=0, \upsilon=1$, giving \begin{eqnarray} \mathcal{C}_{(\vartheta, 0, 1)}\vert_{\vartheta=1}\psi(\zeta)=\overline{\psi(\overline{\zeta})}. \end{eqnarray} If $\psi$ is a function of several complex variables $\{\zeta_j : j=1\dots n\}$, one can define an operator $\mathcal{C}_{(\vartheta_j,\eta_j, \upsilon_j : j=1\dots n )}$ with the action \begin{eqnarray} \mathcal{C}_{(\vartheta_j,\eta_j= 0,\upsilon_j= 1 : j=1\dots n)}\psi(\zeta_1,\dots,\zeta_j,\dots,\zeta_n)=\overline{\psi(\overline{\vartheta_1\zeta_1},\dots,\overline{\vartheta_j\zeta_j},\dots, \overline{\vartheta_n\zeta_n})}.
\end{eqnarray} Let us introduce an operator $\mathcal{C}^{(j)}_n=\mathcal{C}_{(\vartheta_j,\eta_j= 0,\upsilon_j= 1 ; j=1\dots n)}\vert_{\vartheta_1=1,\dots,\vartheta_j=-1,\dots,\vartheta_n=1}$ as the $j$-th partial $\mathcal{PT}$ symmetry ($\partial_{\mathcal{PT}}$) operator through the following action \begin{eqnarray} \mathcal{C}^{(j)}_n\psi(\zeta_1,\dots,\zeta_j,\dots,\zeta_n)=\overline{\psi(\bar{\zeta_1},\dots,\overline{-\zeta_j},\dots,\bar{\zeta_n})}, \end{eqnarray} and a global $\mathcal{PT}$ symmetry operator $\mathcal{C}_n$ through the action \begin{eqnarray} \mathcal{C}_n\psi(\zeta_1,\dots,\zeta_j,\dots,\zeta_n)=\overline{\psi(\overline{-\zeta_1},\dots,\overline{-\zeta_j},\dots,\overline{-\zeta_n})}. \end{eqnarray} For our present purpose we shall only consider the operators $\mathcal{C}_2$ and $\{\mathcal{C}_2^{(j)} : j=1,2\}$. Now, global and partial $\mathcal{PT}$ symmetries of any function $\psi(\zeta_1, \zeta_2)$ are understood through the following equations \begin{eqnarray} \mathcal{C}_2\psi(\zeta_1, \zeta_2)=\psi(\zeta_1, \zeta_2)\:\:{\rm and}\:\:\mathcal{C}_2^{(j)}\psi(\zeta_1, \zeta_2)=\psi(\zeta_1, \zeta_2)\:\:\forall\:\: j=1, 2 \end{eqnarray} respectively. \section{The model Hamiltonian and $\partial_{\mathcal{PT}}$ symmetry in Fock space} In the present discussion, following \cite{graefe08}, a Bose-Hubbard-type Hamiltonian has been considered. Such a Hamiltonian has been invoked as a two-mode version for a second quantized many particle system showing Bose-Einstein Condensation (BEC) in a double-well potential at low temperature. The said Hamiltonian becomes non-hermitian if one of the interaction terms present in it is taken as purely imaginary. The model Hamiltonian under consideration is given by \begin{eqnarray} H=\epsilon_0(a_1^{\dagger}a_1-a_2^{\dagger}a_2)+\epsilon(a_1^{\dagger}a_2+a_2^{\dagger}a_1)+\alpha(a_1^{\dagger}a_1-a_2^{\dagger}a_2)^2.
\end{eqnarray} Here $\epsilon_0$ represents the on-site energy difference, $\epsilon$ is the single particle tunneling and $\alpha$ stands for the interaction strength. $\{a_j, a_j^{\dagger} : j=1, 2\}$ are boson operators satisfying the commutation relations $[a_{j}, a^{\dagger}_{k}]=\delta_{jk}$ and $[a_{j}, a_{k}]=[a^{\dagger}_{j}, a^{\dagger}_{k}]=0$. The Hamiltonian commutes with the number operator $N=a_1^{\dagger}a_1+a_2^{\dagger}a_2$, indicating particle conservation. For the time being, we take $\epsilon_0=1$ and $\epsilon=i\gamma$, with $\gamma$ and $\alpha$ real. In order to understand $\partial_{\mathcal{PT}}$ symmetry in Fock space, we rewrite the Hamiltonian using the Bargmann-Fock correspondence : $a_j^{\dagger}=\zeta_j, a_j=\partial_{\zeta_j}$ as follows \begin{eqnarray} H=(\zeta_1\partial_{\zeta_1}-\zeta_2\partial_{\zeta_2})+i\gamma(\zeta_1\partial_{\zeta_2}+\zeta_2\partial_{\zeta_1})+\alpha(\zeta_1\partial_{\zeta_1}-\zeta_2\partial_{\zeta_2})^2. \end{eqnarray} Following \cite{hai18, hai16, hai18c}, we shall demonstrate the actions of the weighted composition conjugations $\mathcal{C}_2$ and $\mathcal{C}_2^{(j)}$ on $H$ via the notion of Reproducing Kernel Hilbert Space (RKHS). \begin{definition}{Definition} A function of the form $K^{[m_j]}_{\{\zeta_j\}}(u_j:j=1\dots n)=\prod_{j=1}^nu_j^{m_j}e^{u_j\overline{\zeta_j}}$ ($m_j\in{N}$ and $\zeta_j,u_j\in {C}\:\:\forall\:\:j=1\dots n$) is called a kernel function (or a \textbf{reproducing kernel}), which satisfies the condition \begin{eqnarray} \psi^{(m_1,m_2,\dots, m_n)}(\zeta_1, \zeta_2,\dots,\zeta_n)=\langle\psi, K^{[m_j]}_{\{\zeta_j\}}\rangle\nonumber\\ =\int_{W(u_1)}\int_{W(u_2)}\dots \int_{W(u_n)}\psi(u_1, u_2,\dots,u_n)\overline{K^{[m_j]}_{\{\zeta_j\}}(u_j:j=1\dots n)}. \end{eqnarray} \end{definition} Considering the case with two complex variables, the following proposition is immediate.
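Before turning to the adjoint computations, note that the conjugation property claimed after the definition of $\mathcal{C}_{(\vartheta, \eta, \upsilon)}$, namely $\mathcal{C}^2\psi=\psi$ for any admissible triple, is easy to confirm numerically; a sketch in one complex variable (the sample parameter values and test function are ours):

```python
import cmath

def C(theta, eta, upsilon, psi):
    """Weighted composition conjugation C_{(θ,η,υ)} on an entire function:
    (Cψ)(ζ) = υ e^{ηζ} conj(ψ(conj(θζ + η)))."""
    return lambda z: (upsilon * cmath.exp(eta * z)
                      * (psi((theta * z + eta).conjugate())).conjugate())

# A sample triple satisfying |θ| = 1, conj(θ)η + conj(η) = 0,
# |υ|² e^{|η|²} = 1 (any admissible triple works).
theta = cmath.exp(1j * cmath.pi / 3)
eta = 0.7 * cmath.exp(1j * 2 * cmath.pi / 3)   # solves conj(θ)η = -conj(η)
upsilon = cmath.exp(-abs(eta) ** 2 / 2)

# Check the admissibility conditions themselves.
assert abs(abs(theta) - 1) < 1e-12
assert abs(theta.conjugate() * eta + eta.conjugate()) < 1e-12
assert abs(abs(upsilon) ** 2 * cmath.exp(abs(eta) ** 2).real - 1) < 1e-12

psi = lambda z: z ** 2 + (2 + 1j) * z + 3       # an entire test function
Cpsi = C(theta, eta, upsilon, psi)
CCpsi = C(theta, eta, upsilon, Cpsi)

# C is an involution: C²ψ = ψ pointwise.
for z in [0.3 + 0.4j, -1.2 + 0.1j, 2j]:
    assert abs(CCpsi(z) - psi(z)) < 1e-9
```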
\begin{proposition}{Proposition} $H^{\star}\neq H$ where $H^{\star}$ is defined as $\langle H\psi(u_1,u_2), K^{[m_1, m_2]}_{\zeta_1, \zeta_2}\rangle=\langle\psi(u_1,u_2), H^{\star}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}\rangle$. \end{proposition} {\it Proof} : We shall first show that $\langle u_1\partial_{u_1}\psi(u_1,u_2), K^{[m_1, m_2]}_{\zeta_1, \zeta_2}\rangle=\langle\psi(u_1,u_2), u_1\partial_{u_1}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}\rangle $. \begin{eqnarray} \langle u_1\partial_{u_1}\psi(u_1,u_2), K^{[m_1, m_2]}_{\zeta_1, \zeta_2}\rangle=\langle u_1\partial_{u_1}\psi{(u_1, u_2)},u_1^{m_1}u_2^{m_2}e^{u_1\overline{\zeta_1}+u_2\overline{\zeta_2}}\rangle\nonumber\\ =\int_{W(u_1)}\int_{W(u_2)}u_1\partial_{u_1}\psi(u_1, u_2)\overline{u_1}^{m_1}\overline{u_2}^{m_2}e^{\overline{u_1}{\zeta_1}+\overline{u_2}{\zeta_2}}\nonumber\\ =\int_{W(u_1)}\int_{W(u_2)}u_1[\int_{W(v_1)}\int_{W(v_2)}\overline{v_1}\psi(v_1, v_2)e^{\overline{v_1}{u_1}+\overline{v_2}{u_2}}]\overline{u_1}^{m_1}\overline{u_2}^{m_2}e^{\overline{u_1}{\zeta_1}+\overline{u_2}{\zeta_2}}\nonumber\\ =\int_{W(v_1)}\int_{W(v_2)}\overline{v_1}\psi(v_1, v_2)[\int_{W(u_1)}\int_{W(u_2)}u_1\overline{u_1}^{m_1}\overline{u_2}^{m_2}e^{\overline{u_1}{\zeta_1}+\overline{u_2}{\zeta_2}}e^{\overline{v_1}{u_1}+\overline{v_2}{u_2}}]\nonumber\\ =\int_{W(v_1)}\int_{W(v_2)}\overline{v_1}\psi(v_1, v_2)\langle u_1e^{\overline{v_1}{u_1}+\overline{v_2}{u_2}}, {u_1}^{m_1}{u_2}^{m_2}e^{{u_1}\overline{\zeta_1}+{u_2}\overline{\zeta_2}}\rangle\nonumber\\ = \int_{W(v_1)}\int_{W(v_2)}\overline{v_1}\psi(v_1, v_2)\partial_{\zeta_1}^{m_1}\partial_{\zeta_2}^{m_2}(\zeta_1e^{\overline{v_1}{\zeta_1}+\overline{v_2}{\zeta_2}})\nonumber\\ = \int_{W(v_1)}\int_{W(v_2)}\overline{v_1}\psi(v_1, v_2)\partial_{\zeta_1}^{m_1}\partial_{\zeta_2}^{m_2}\partial_{\overline{v_1}}(e^{\overline{v_1}{\zeta_1}+\overline{v_2}{\zeta_2}})\nonumber\\ =\int_{W(v_1)}\int_{W(v_2)}\psi(v_1, 
v_2)\overline{v_1}\partial_{\overline{v_1}}\partial_{\zeta_1}^{m_1}\partial_{\zeta_2}^{m_2}(e^{\overline{v_1}{\zeta_1}+\overline{v_2}{\zeta_2}})\nonumber\\ =\langle\psi(v_1, v_2), v_1\partial_{v_1}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(v_1, v_2)\rangle. \end{eqnarray} An identical argument holds for $ u_2\partial_{u_2}$, and similar calculations justify the forms of the adjoints of the operators $i u_j\partial_{u_k}$ for $j\neq k$. For example, it can be shown that \begin{eqnarray} \langle iu_1\partial_{u_2}\psi(u_1, u_2), u_1^{m_1}u_2^{m_2}e^{u_1\overline{\zeta_1}+u_2\overline{\zeta_2}}\rangle\nonumber\\ =\langle\psi(v_1, v_2), -iv_2\partial_{v_1}v_1^{m_1}v_2^{m_2}e^{v_1\overline{\zeta_1}+v_2\overline{\zeta_2}}\rangle. \end{eqnarray} Using these results in the expression of the Hamiltonian, the proposition is verified.\rule{5pt}{5pt} \begin{proposition}{Proposition} $H$ is $\mathcal{C}_2$ self-adjoint, i.e., $\mathcal{C}_2H^{\star}\mathcal{C}_2=H$. \end{proposition} {\it Proof} : We shall first show the case with $u_1\partial_{u_1}$.
Considering the conjugation operator $\mathcal{C}_{2}$ we find \begin{eqnarray} \mathcal{C}_{2}(u_1\partial_{u_1})^{\star}\mathcal{C}_{2}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)\nonumber\\ =\mathcal{C}_{2}u_1\partial_{u_1}\mathcal{C}_{2}u_1^{m_1}u_2^{m_2}e^{u_1\overline{\zeta_1}+u_2\overline{\zeta_2}}\nonumber\\ =\mathcal{C}_{2}u_1\partial_{u_1}\overline{\overline{(-u_1)}^{m_1}}\overline{\overline{(-u_2)}^{m_2}}\overline{e^{\overline{-u_1}\overline{\zeta_1}+\overline{-u_2}\overline{\zeta_2}}}\nonumber\\ =\mathcal{C}_{2}u_1\partial_{u_1}(-1)^{m_1+m_2}u_1^{m_1}u_2^{m_2}e^{-u_1{\zeta_1}-u_2{\zeta_2}}\nonumber\\ =\mathcal{C}_{2}u_1(-1)^{m_1+m_2}[u_1^{m_1}u_2^{m_2}(-{\zeta_1})e^{-u_1{\zeta_1}-u_2{\zeta_2}}+m_1u_1^{m_1-1}u_2^{m_2}e^{-u_1{\zeta_1}-u_2{\zeta_2}}]\nonumber\\ =-u_1(-1)^{m_1+m_2}[(-1)^{m_1+m_2}u_1^{m_1}u_2^{m_2}(-\overline{\zeta_1})\nonumber\\ +m_1(-1)^{m_1+m_2-1}u_1^{m_1-1}u_2^{m_2}]e^{u_1\overline{\zeta_1}+u_2\overline{\zeta_2}}\nonumber\\ =u_1\partial_{u_1}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2). \end{eqnarray} Similarly one can prove $\mathcal{C}_{2}(iu_1\partial_{u_2})^{\star}\mathcal{C}_{2}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)=(iu_2\partial_{u_1})K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)$ and $\mathcal{C}_{2}(iu_2\partial_{u_1})^{\star}\mathcal{C}_{2}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)=(iu_1\partial_{u_2})K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)$. Using these results in the expression of the Hamiltonian, the above proposition is verified. \rule{5pt}{5pt} \begin{proposition}{Proposition} $H$ has $\partial_{\mathcal{PT}}$ symmetry, i.e., $\mathcal{C}_2^{(j)}H\mathcal{C}_2^{(j)}=H$ for $j=1, 2$, but it lacks global $\mathcal{PT}$ symmetry, i.e., $\mathcal{C}_2H\mathcal{C}_2\neq H$.
\end{proposition} {\it Proof} : We shall only verify that $\mathcal{C}_2^{(1)}u_1\partial_{u_1}\mathcal{C}_2^{(1)}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)=u_1\partial_{u_1}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)$ through the following steps \begin{eqnarray} \mathcal{C}_2^{(1)}u_1\partial_{u_1}\mathcal{C}_2^{(1)}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)\nonumber\\ =\mathcal{C}_2^{(1)}u_1\partial_{u_1}[(-u_1)^{m_1}(u_2)^{m_2}e^{-u_1\zeta_1+u_2\zeta_2}]\nonumber\\ =\mathcal{C}_2^{(1)}u_1(-1)^{m_1}[m_1u_1^{m_1-1}u_2^{m_2}+(-\zeta_1)u_1^{m_1}u_2^{m_2}]e^{-u_1\zeta_1+u_2\zeta_2}\nonumber\\ =-u_1(-1)^{m_1}[m_1(-u_1)^{m_1-1}u_2^{m_2}+\overline{(-\zeta_1)}(-u_1)^{m_1}u_2^{m_2}]e^{u_1\overline{\zeta_1}+u_2\overline{\zeta_2}}\nonumber\\ =u_1\partial_{u_1}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2). \end{eqnarray} Similarly, \begin{eqnarray} \mathcal{C}_2^{(1)}iu_1\partial_{u_2}\mathcal{C}_2^{(1)}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2)\nonumber\\ =\mathcal{C}_2^{(1)}iu_1\partial_{u_2}[(-u_1)^{m_1}(u_2)^{m_2}e^{-u_1\zeta_1+u_2\zeta_2}]\nonumber\\ =\mathcal{C}_2^{(1)}iu_1(-1)^{m_1}[(u_1)^{m_1}m_2(u_2)^{m_2-1}+(u_1)^{m_1}(u_2)^{m_2}(\zeta_2)]e^{-u_1\zeta_1+u_2\zeta_2}\nonumber\\ =iu_1\partial_{u_2}K^{[m_1, m_2]}_{\zeta_1, \zeta_2}(u_1, u_2). \end{eqnarray} Using these and similar results for the expression of the Hamiltonian, the proposition can be verified.\rule{5pt}{5pt} \section{$\partial_{\mathcal{PT}}$ symmetry of the eigenstates of the Hamiltonian $H$} First, we shall show that the reality of the eigenvalues is directly related to the $\partial_{\mathcal{PT}}$ symmetry of the eigenstates of the Hamiltonian. It is readily observed that the present Hamiltonian leaves the homogeneous polynomial space of two indeterminates $(\zeta_1, \zeta_2)$ invariant.
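This invariance, and the resulting matrix of $H$ on a degree-$m$ block, can be checked symbolically; a sketch for $m=2$ (sympy; variable names are ours) applies the differential-operator form of $H$ to each monomial $f_k=\zeta_1^{m-k}\zeta_2^k$ and reads off the matrix.

```python
import sympy as sp

z1, z2, g, al = sp.symbols('zeta1 zeta2 gamma alpha')

def H(f):
    """Bargmann-Fock form of the Hamiltonian with ε0 = 1, ε = iγ."""
    D = lambda h: z1 * sp.diff(h, z1) - z2 * sp.diff(h, z2)
    return (D(f)
            + sp.I * g * (z1 * sp.diff(f, z2) + z2 * sp.diff(f, z1))
            + al * D(D(f)))

m = 2
basis = [z1 ** (m - k) * z2 ** k for k in range(m + 1)]

# Matrix of H in the basis {f_k}: column k = coefficients of H f_k.
M = sp.zeros(m + 1, m + 1)
for k, f in enumerate(basis):
    h = sp.expand(H(f))
    for j, b in enumerate(basis):
        M[j, k] = h.coeff(b)

# Tridiagonal structure with β_μ = μ + α μ² on the diagonal.
beta = lambda mu: mu + al * mu ** 2
expected = sp.Matrix([[beta(2), sp.I * g, 0],
                      [2 * sp.I * g, beta(0), 2 * sp.I * g],
                      [0, sp.I * g, beta(-2)]])
assert M - expected == sp.zeros(3, 3)
```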
Considering such a space of degree of homogeneity $m$ and the polynomial basis $\{f_{k} = \zeta_1^{m-k} \zeta_2^{k} : k = 0 \dots m\}$, the operator $H$ has the following tridiagonal representation \begin{eqnarray}\label{tri1} H = \left( \begin{array}{cccccccc} \beta_m & i\gamma & 0 & 0 & 0& \dots & 0& 0 \\ im\gamma & \beta_{m-2} & 2i\gamma & 0 & 0 &\dots &0 & 0 \\ 0& i\gamma(m-1) & \beta_{m-4} & 3i\gamma & 0 & \dots &0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \dots & \beta_{-(m-2)} & im\gamma \\ 0 & 0& 0& 0& 0& \dots & i\gamma & \beta_{-m} \end{array} \right). \end{eqnarray} Here, $\beta_{\mu}=\mu+\alpha{\mu}^2$. The eigenvalues of such a matrix can be found with the help of the following theorem \cite{sandry13}. \begin{theorem}{Theorem} Given a tri-diagonal matrix \begin{equation}\label{tri2} {\mathcal M} = \left( \begin{array}{cccccccc} b_0 & d_0 & 0 & 0 & 0 & \dots & 0 & 0 \\ c_0 & b_1 & d_1 & 0 & 0 & \dots & 0 & 0 \\ 0 & c_1 & b_2 & d_2 & 0 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \dots & b_{l-2} & d_{l-2} \\ 0 & 0 & 0 & 0 & 0 & \dots & c_{l-2} & b_{l-1} \end{array}\right) \end{equation} with $d_i \neq 0 \:\:\forall\:\: i$, let us consider the polynomials $P_n (\lambda)$ defined by the well-known three term recursion relation \cite{dunkl14, ismail05} \begin{equation}\label{rec1} P_{n+1}(\lambda) = \frac{1}{d_n}[(\lambda - b_n )P_n (\lambda) - c_{n-1} P_{n-1} (\lambda)]. \end{equation} If $P_{-1}(\lambda) = 0$ and $P_0(\lambda) = 1$, the eigenvalues are given by the zeros of the polynomial $P_l(\lambda)$ and the eigenvector corresponding to the $j$-th eigenvalue $\lambda_j$ is given by the vector \begin{equation} \left( \begin{array}{c} P_0 (\lambda_j ) \\ P_1 (\lambda_j )\\ \vdots \\ P_{l-2} (\lambda_j)\\ P_{l-1} (\lambda_j ) \end{array}\right). \end{equation} \end{theorem} {\it Proof} : Let $ \left( \begin{array}{c} v_0 \\ v_1 \\ \vdots \\ v_{l-2} \\ v_{l-1} \end{array}\right)
$ be the eigenvector corresponding to the eigenvalue $\lambda$. Then the eigenvalue equation gives us \begin{eqnarray} b_0v_0+d_0v_1=\lambda v_0\nonumber\\ c_0v_0+b_1v_1+d_1v_2=\lambda v_1\nonumber\\ \vdots\nonumber\\ c_{l-2}v_{l-2}+b_{l-1}v_{l-1}=\lambda v_{l-1}. \end{eqnarray} Since $P_0(\lambda)=1$, one can write $v_0=P_0(\lambda)v_0$. Now, in view of the recurrence relation (equation-\ref{rec1}), \begin{eqnarray} v_1=\frac{1}{d_0}(\lambda-b_0)P_0(\lambda)v_0=P_1(\lambda)v_0. \end{eqnarray} Continuing the substitution recursively we get \begin{equation}\label{rec2} v_n=P_n(\lambda)v_0 : 0\leq n <l. \end{equation} Substituting $n=l-1$ in equation-\ref{rec2} and using the three term relation (equation-\ref{rec1}) we get \begin{equation} P_l(\lambda)v_0=0, \end{equation} giving the characteristic equation $P_l(\lambda)=0$.\rule{5pt}{5pt} Going back to the matrix in equation-\ref{tri1} and comparing it with the matrix in equation-\ref{tri2}, we get $\{b_0,\dots, b_{l-1}\}=\{\beta_m,\dots,\beta_{-m}\}$, $\{c_0,\dots, c_{l-2}\}=\{im\gamma,\dots,i\gamma\}$ and $\{d_0,\dots, d_{l-2}\}=\{i\gamma,\dots,im\gamma\}$, so one can determine the eigenvectors and eigenvalues of the matrix using the above algorithm. \subsection{\textbf{Reality of the eigenvalues and $\partial_{\mathcal{PT}}$ symmetry of the eigenstates}} Without calculating the eigenvalues explicitly, an immediate inference regarding the symmetry behaviour of the eigenfunctions is possible in view of the reality of the eigenvalues. We shall consider the following two cases. \vspace{0.5cm} \textbf{Case-I} : $\vert m\vert\in\{2s : s\in {Z^+}\}$ Let us begin with the value $\vert m\vert=2$. The matrix of $H$ is a $3\times 3$ matrix. Correspondingly, the matrix ${\mathcal M}$ has the diagonal elements $b_0=2+4\alpha, b_1=0, b_2=-2+4\alpha$. This implies $l=3$. When $\vert m\vert=2s$ for some fixed $s$, $l=2s+1$ and the matrix ${\mathcal M}$ becomes a $(2s+1)\times (2s+1)$ matrix.
As a consequence, the polynomial equation $P_l(\lambda)=0$ becomes $P_{2s+1}(\lambda)=0$, and the roots of this polynomial equation give us the eigenvalues. The eigenvector corresponding to an eigenvalue $\lambda_0$ would be given by the vector \begin{eqnarray} (P_0 (\lambda_0 )\:\:\:P_1 (\lambda_0 )\dots P_{2s} (\lambda_0 )). \end{eqnarray} The first term is equal to one for all values of $\lambda_0$. Now, for real $\lambda_0$ it is easy to verify that, starting from the purely imaginary second term, the subsequent terms are alternately real and purely imaginary, leading to the following equivalence \begin{eqnarray} (P_0 (\lambda_0 )\:\:\:P_1 (\lambda_0 )\dots P_{2s} (\lambda_0 ))=(A_0 (\lambda_0 )\:\:\:iA_1 (\lambda_0 )\dots A_{2s} (\lambda_0 )). \end{eqnarray} Here, $\{A_l : l=0\dots 2s\}$ are real functions of $\lambda_0$ with $A_0=1$. In the Fock space setting, the eigenfunction in $({\zeta_1, \zeta_2})$ can be given by \begin{eqnarray} \psi^{(I)}_{2s}(\zeta_1, \zeta_2)=A_0\zeta_1^{2s}+iA_1\zeta_1^{2s-1}\zeta_2+A_2\zeta_1^{2s-2}\zeta_2^{2}+\dots +A_{2s}\zeta_2^{2s}. \end{eqnarray} Now the action of the partial $\mathcal{PT}$ operators $\{\mathcal{C}_2^{(j)} : j=1, 2 \}$ on $\psi^{(I)}_{2s}(\zeta_1, \zeta_2)$ can be understood through the following equation \begin{eqnarray} \mathcal{C}_2^{(1)}\psi^{(I)}_{2s}(\zeta_1, \zeta_2)\nonumber\\ =\overline{A_0\overline{(-\zeta_1)}^{2s}+iA_1\overline{(-\zeta_1)}^{2s-1}\overline{\zeta_2}+A_2\overline{(-\zeta_1)}^{2s-2}\overline{\zeta_2}^{2}+\dots +A_{2s}\overline{(\zeta_2)}^{2s}}\nonumber\\ =\psi^{(I)}_{2s}(\zeta_1, \zeta_2). \end{eqnarray} Similarly, $\mathcal{C}_2^{(2)}\psi^{(I)}_{2s}(\zeta_1, \zeta_2)=\psi^{(I)}_{2s}(\zeta_1, \zeta_2)$.
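The claimed $\partial_{\mathcal{PT}}$ invariance of the even-degree eigenfunctions is straightforward to confirm numerically; a sketch for $s=1$ (the sample real coefficients $A_0, A_1, A_2$ are ours):

```python
# ∂-PT operators on functions of two complex variables:
# C2^(1)ψ(ζ1,ζ2) = conj(ψ(conj(-ζ1), conj(ζ2))), symmetrically for C2^(2).
C1 = lambda psi: lambda z1, z2: (psi((-z1).conjugate(), z2.conjugate())).conjugate()
C2 = lambda psi: lambda z1, z2: (psi(z1.conjugate(), (-z2).conjugate())).conjugate()

# Even-degree eigenfunction pattern with s = 1 and real A0, A1, A2.
A0, A1, A2 = 1.0, 0.8, -0.3
psi = lambda z1, z2: A0 * z1 ** 2 + 1j * A1 * z1 * z2 + A2 * z2 ** 2

# ψ is invariant under both partial PT operations at generic points.
pts = [(0.5 + 0.2j, -1.1 + 0.7j), (1j, 2 - 1j)]
for z1, z2 in pts:
    assert abs(C1(psi)(z1, z2) - psi(z1, z2)) < 1e-12
    assert abs(C2(psi)(z1, z2) - psi(z1, z2)) < 1e-12
```

Replacing any of the $A_l$ by a genuinely complex value breaks both assertions, which is the symmetry-breaking mechanism discussed below for complex eigenvalues.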
\vspace{0.5cm} \textbf{Case-II} : $\vert m\vert\in\{2s-1 : s\in {Z^+}\}$ A similar argument to the one given above leads to the following eigenfunction for odd $m$ \begin{eqnarray} \psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)=B_0\zeta_1^{2s-1}+iB_1\zeta_1^{2s-2}\zeta_2+B_2\zeta_1^{2s-3}\zeta_2^{2}+\dots +iB_{2s-1}\zeta_2^{2s-1}, \end{eqnarray} where $\{B_l : l=0\dots 2s-1\}$ are real functions of some real eigenvalue $\lambda_1$ with $B_0=1$. Now the actions of $\{\mathcal{C}_2^{(j)} : j=1, 2\}$ on $\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)$ are given by \begin{eqnarray} \mathcal{C}_2^{(1)}\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)=-\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)\:\:{\rm and}\:\:\mathcal{C}_2^{(2)}\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)=\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2). \end{eqnarray} \textbf{Remark-I} : It is observed that the eigenfunctions for even $m$ are $\partial_{\mathcal{PT}}$ symmetric in either of the variables, whereas for odd $m$ the eigenfunctions are symmetric in one variable $(\zeta_2)$ and anti-symmetric in the other $(\zeta_1)$. It can also be shown that $\mathcal{C}_2\psi^{(I)}_{2s}(\zeta_1, \zeta_2)\neq\psi^{(I)}_{2s}(\zeta_1, \zeta_2)$ and $\mathcal{C}_2\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)\neq\psi^{(II)}_{2s-1}(\zeta_1, \zeta_2)$, implying that neither of them has any global $\mathcal{PT}$ symmetry. \textbf{Remark-II} : If the eigenvalue has any nonzero complex part, the antilinear action of the operators representing $\partial_{\mathcal{PT}}$ symmetry replaces the coefficients $\{A_l\}$ or $\{B_l\}$ by their respective complex conjugates, thus destroying the symmetry of the eigenfunctions. This phenomenon may be termed \textbf{$\partial_{\mathcal{PT}}$ symmetry breaking}. It is obvious that the eigenvalues have a strong parametric dependence on the values of the parameters $\gamma$ and $\alpha$, an issue that may be discussed elsewhere.
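As an independent sanity check of the even case (a sketch; the continuant recurrence \texttt{tridiag\_char} is our own device, the matrix entries are read off Case-I, and the root value is the one quoted in the next paragraph):

```python
# det(M - lam*I) of a tridiagonal matrix via the continuant recurrence
#   D_k = (b[k-1] - lam) D_{k-1} - c[k-2] d[k-2] D_{k-2}.
def tridiag_char(b, c, d, lam):
    D_prev, D = 1.0, b[0] - lam
    for k in range(1, len(b)):
        D_prev, D = D, (b[k] - lam) * D - c[k - 1] * d[k - 1] * D_prev
    return D

gamma = alpha = 0.5
b = [2 + 4 * alpha, 0.0, -2 + 4 * alpha]  # diagonal for |m| = 2
c = [2j * gamma, 1j * gamma]              # sub-diagonal {i m gamma, ..., i gamma}
d = [1j * gamma, 2j * gamma]              # super-diagonal {i gamma, ..., i m gamma}

val = tridiag_char(b, c, d, 3.87513)      # the real eigenvalue quoted in the text
assert abs(val) < 1e-3                    # 3.87513 is (to 5 decimals) a root
```

Although the off-diagonal couplings are purely imaginary, the characteristic polynomial here has real coefficients, so `val.imag` vanishes identically.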
Considering $m=2$ and $\gamma=\alpha=\frac{1}{2}$, the eigenvalues are $\lambda_1=3.87513$, $\lambda_{2, 3}=0.06244\pm 0.71569 i$. This means, according to the above discussion, that the state corresponding to $\lambda_1$ is a $\partial_{\mathcal{PT}}$ symmetric state, but those corresponding to $\lambda_2$ and $\lambda_3$ are symmetry-breaking states. On the other hand, for $\alpha=4=8\gamma$ the eigenvalues are $(0.06375, 13.96387, 17.97237)$ and consequently, all the states are $\partial_{\mathcal{PT}}$ symmetric. In the above discussion only the cases with non-degenerate eigenvalues have been considered. The situation with degenerate eigenvalues may be an interesting area of investigation. \section{Conclusion} The present investigation deals with a new kind of symmetry operator that acts on operators or functions in a variable-specific way. In this article, the presence of such a symmetry, known as Partial $\mathcal{PT}$ symmetry, is understood in a typical Fock space setting for a two-boson Bose-Hubbard type Hamiltonian with one purely imaginary interaction term. The symmetry operators are represented as \textbf{Weighted Composition Conjugations} acting on a \textbf{Reproducing Kernel Hilbert Space}. The existence of such symmetry, as well as the possibility of its breaking, is found to have a direct correspondence with the reality of the eigenvalues of the Hamiltonian.
\section{Introduction} The traveling salesman problem (TSP) asks for a shortest \emph{Hamiltonian cycle} in a given complete graph with edge lengths, where a cycle is called \emph{Hamiltonian} (also called a \emph{tour}) if it visits every vertex exactly once. TSP is one of the most fundamental NP-hard optimization problems in operations research and computer science, and has been intensively studied from both practical and theoretical viewpoints \cite{cook2011pursuit,monnot2014traveling,punnen2007traveling,shmoys1985traveling}. It has a number of applications, such as planning, logistics, and the manufacture of microchips \cite{bland1989large,grotschel1991optimal}. Because of this importance, many heuristics and exact algorithms have been proposed \cite{bellman1961dynamic,held1962dynamic,lin1973effective,little1963algorithm}. From the viewpoint of computational complexity, TSP is NP-hard even in the Euclidean case, and hence also in the more general metric case. It is known that metric TSP is approximable within factor $1.5$ \cite{christofides1976worst}, and inapproximable within factor $117/116$ \cite{chlebik2019approximation}. Euclidean TSP admits a polynomial-time approximation scheme (PTAS) if the dimension of the Euclidean space is bounded by a constant \cite{arora1998polynomial}. We note that the approximation factors (i.e., ratios) above are widely used to analyze approximation algorithms. Let $\Pi$ be an optimization problem, and let $I$ be an instance of $\Pi$. We denote by $\mathop{\rm opt}(I)$ the value of an optimal solution to $I$. For an approximation algorithm $A$ for $\Pi$, we denote by ${\rm apx}_{A}(I)$ the value of the approximate solution computed by $A$ for the instance $I$. Let \[r_A(I)={\rm apx}_A(I)/\mathop{\rm opt}(I), \] and define the \emph{standard approximation ratio} of $A$ by $\sup_{I \in \Pi}r_A(I)$, where we assume that $\Pi$ is a minimization problem.
Although the standard approximation ratio is well-studied and an important concept in algorithm theory, it is not invariant under affine transformations of the objective function. Namely, if the objective function $f(x)$ is replaced by $a+bf(x)$ for some constants $a$ and $b$, which might depend on the instance $I$, the standard ratio is not preserved. For example, the vertex cover problem and the independent set problem have affinely dependent objective functions. However, they behave very differently with respect to the standard approximation ratio: the vertex cover problem is $2$-approximable \cite{papadimitriou1998combinatorial}, while the independent set problem is inapproximable within a factor of $n^{1-\epsilon}$ for any $\epsilon > 0$ \cite{engebretsen2000clique}, where $n$ denotes the number of vertices in a given graph. In order to remedy this phenomenon, Demange and Paschos \cite{demange1996approximation} proposed the \emph{differential approximation ratio} defined by $\inf_{I \in \Pi} \rho_A(I)$, where \[\rho_A(I)=\frac{\mathop{\rm wor}(I)-{\rm apx}_A(I)}{\mathop{\rm wor}(I)-\mathop{\rm opt}(I)}\] and $\mathop{\rm wor}(I)$ denotes the value of a worst solution to $I$. Note that for any instance $I$ of $\Pi$ \[{\rm apx}_A(I) = \rho_A(I)\mathop{\rm opt}(I) + (1-\rho_A(I))\mathop{\rm wor}(I).\] Thus we have $0 \leq \rho_A(I) \leq 1$, and a larger $\rho_A(I)$ implies a better approximation for the instance $I$. Moreover, by definition, the differential approximation ratio remains invariant under affine transformations of the objective function. For this reason, it has recently attracted much attention in approximation algorithms \cite{ausiello2005completeness}.
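The contrast between the two ratios, and the affine invariance of the differential one, can be seen on a toy minimization instance (a sketch with made-up values, not from the paper):

```python
def standard_ratio(apx, opt):
    """Standard approximation ratio r_A(I) for a minimization problem."""
    return apx / opt

def differential_ratio(apx, opt, wor):
    """Differential approximation ratio rho_A(I) of Demange and Paschos."""
    return (wor - apx) / (wor - opt)

# Toy instance: optimum 10, worst solution 30, algorithm returns 15.
opt, wor, apx = 10.0, 30.0, 15.0
assert standard_ratio(apx, opt) == 1.5
assert differential_ratio(apx, opt, wor) == 0.75

# Affine transformation f -> a + b*f of the objective (here a=100, b=2):
a, b = 100.0, 2.0
opt2, wor2, apx2 = a + b * opt, a + b * wor, a + b * apx
# The differential ratio is invariant, while the standard ratio is not.
assert differential_ratio(apx2, opt2, wor2) == 0.75
assert standard_ratio(apx2, opt2) != 1.5
```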
It is known \cite{monnot2003approximation} that TSP, metric TSP, max TSP, and max metric TSP are affinely equivalent, i.e., their objective functions are transformed into each other by affine transformations, where max TSP is the problem of finding a longest Hamiltonian cycle and max metric TSP is max TSP in which the input weighted graph satisfies the metric condition. Therefore, these problems have identical differential approximation ratios. Hassin and Khuller \cite{hassin2001z} first studied the differential approximability of TSP, and showed that it is $2/3$-differential approximable. Escoffier and Monnot \cite{escoffier2008better} improved this to $3/4-O(1/n)$, where $n$ denotes the number of vertices of a given graph. Monnot et al. \cite{monnot2002differential, monnot2003differential} showed that TSP is $3/4$-differential approximable if each edge length is restricted to one or two. In this paper, we show that TSP is $3/4$-differential approximable, which improves the currently best known results \cite{escoffier2008better,monnot2002differential, monnot2003differential}. Our algorithm is based on an idea in \cite{escoffier2008better} for the case in which a given graph $G$ has an even number of vertices and a triangle (i.e., a cycle of length three) is contained in a minimum weighted $2$-factor of $G$. Their algorithm first computes minimum weighted $1$- and $2$-factors of a given graph, modifies them into four path covers $P_i$ (for $i=1, \ldots , 4$), and then extends each path cover $P_i$ to a tour by adding an edge set $F_i$ to it in such a way that at least one of the tours guarantees a $3/4$-differential approximation ratio. The definitions of factors and path covers can be found in Section 2. We generalize their idea to the general even case. Note that $\bigcup_{i=1, \ldots ,4} F_i$ in their algorithm always forms a tour, whereas in general it does not.
We show that there exists a way to construct path covers such that the length of $\bigcup_{i=1, \dots ,4} F_i$ is at most the worst tour length. Our algorithm for the odd case is much more involved. For each path of length $3$, we first construct a $2$-factor and two path covers of a given graph which have minimum length among all those that completely and partially contain the path, modify them into eight path covers, and then extend each path cover to a tour in such a way that at least one of the eight tours guarantees a $3/4$-differential approximation ratio. The rest of the paper is organized as follows. In Section 2, we define basic concepts of graphs and discuss some properties of $2$-matchings, which will be used in the subsequent sections. In Sections 3 and 4, we provide approximation algorithms for TSP in which a given graph $G$ has an even and an odd number of vertices, respectively. \section{Preliminary} Let $G=(V,E)$ be an undirected graph, where $n$ and $m$ denote the number of vertices and edges in $G$, respectively. In this paper, we assume that a given graph $G$ of TSP is complete, i.e., $E={{V}\choose {2}}$, and it has an edge length function $\ell:E \to \mathbb{R}_+$, where $\mathbb{R}_+$ denotes the set of nonnegative reals. For a set $F \subseteq E$, let $V(F)$ denote the set of vertices with incident edges in $F$, i.e., $V(F) = \{v \in V \mid \exists (v,w) \in F\}$. A set $F \subseteq E$ is called \emph{spanning} if $V(F)=V$, and \emph{acyclic} if $F$ contains no cycle. For a positive integer $k$, a set $F \subseteq E$ is called a \emph{$k$-matching} (resp., \emph{$k$-factor}) if each vertex has at most (resp., exactly) $k$ incident edges in $F$. Here a $1$-matching is simply called a \emph{matching}. Note that an acyclic $2$-matching $F$ corresponds to a family of vertex-disjoint paths denoted by $\mathcal{P}(F) \subseteq 2^E$. A $2$-matching is called a \emph{path cover} if it is spanning and acyclic.
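The path-cover notion can be made concrete with a small checker (our own illustrative helper, not part of the paper's algorithms): a path cover is exactly a spanning, acyclic $2$-matching.

```python
def is_path_cover(n, F):
    """Check whether the edge set F is a path cover of the complete graph on
    vertices 0..n-1, i.e., a spanning, acyclic 2-matching (a family of
    vertex-disjoint paths covering every vertex)."""
    deg = [0] * n
    for u, v in F:
        deg[u] += 1
        deg[v] += 1
    if any(d > 2 for d in deg):   # must be a 2-matching
        return False
    if any(d == 0 for d in deg):  # must be spanning: V(F) = V
        return False
    # Acyclicity via union-find: an edge whose endpoints are already
    # connected would close a cycle.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for u, v in F:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

assert is_path_cover(4, [(0, 1), (2, 3)])            # two disjoint paths
assert not is_path_cover(3, [(0, 1), (1, 2), (2, 0)])  # a cycle
assert not is_path_cover(4, [(0, 1), (1, 2)])          # vertex 3 uncovered
```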
For a set $F \subseteq E$, $V_1(F)$ and $V_2(F)$ respectively denote the sets of vertices with one and two incident edges in $F$. For a set $F \subseteq E$ and a vertex $v \in V$, let $\delta_F(v) = \{e \in F\mid e \text{ is incident to } v\}$. \begin{dfn}\label{dfn:pair-property} A pair of spanning 2-matchings $(S,T)$ is called \emph{valid} if it satisfies the following three conditions: \begin{align} (\mathrm{i})&\ T \text{ is acyclic}. \notag \\ (\mathrm{ii})&\ \delta_S(v) = \delta_T(v) \text{ for any } v \in V_2(S) \cap V_2(T). \label{dfn:pair-property:edge}\\ (\mathrm{iii})&\ V(C) \not = V(P) \text{ for any cycle } C \subseteq S \text{ and any path } P \in \mathcal{P}(T). \label{dfn:pair-property:path} \end{align} \end{dfn} Figure \ref{fig:exeven} shows a valid pair of spanning $2$-matchings. \begin{lem} \label{lem:movable-edges} Let $(S,T)$ be a valid pair of spanning $2$-matchings. If $S$ contains a cycle $C$, then $C$ contains two edges $e_i$ for $i=1,2$ such that $S_i=S\setminus \{e_i\}$ and $T_i=T\cup \{e_i\}$ satisfy the following three conditions: \begin{align} &(S_i, T_i) \text{ is valid for } i=1,2. \label{lem:movable-edges:valid}\\ &V_1(S_i) \cup V_1(T_i) = V_1(S) \cup V_1(T) \text{ and } V_1(S_i) \cap V_1(T_i) = V_1(S) \cap V_1(T) \text{ for } i = 1,2. \label{lem:movable-edges:vertex}\\ &\mathcal{P}(T) \text{ contains a path } P \text{ such that } P \cup \{e_1\} \text{ and } P \cup \{e_2\} \text{ are both paths. }\label{lem:movable-edges:path} \end{align} \end{lem} \begin{proof} Let $C=\{(v_0,v_1),(v_1,v_2), \ldots , (v_{k-1},v_k)\}$ for $k\geq 3$, where $v_k=v_0$. If $\mathcal{P}(T)$ contains an $(s,t)$-path $P$ with $s \in V(C)$ and $t \not\in V(C)$, then it follows from (\ref{dfn:pair-property:edge}) that $V(P) \cap V(C) = \{s\}$. We assume that $s = v_1$ without loss of generality. Let $e_1=(v_0,v_1)$ and $e_2=(v_1,v_2)$. It is not difficult to see that $(S_i, T_i)$ is valid and $P \cup \{e_i\}$ is a path for every $i=1,2$.
On the other hand, if $\mathcal{P}(T)$ contains an $(s,t)$-path $P$ with $s,t \in V(C)$, we assume without loss of generality that $s=v_1$, $t=v_j$ for some $j$ with $2 \leq j \leq k-1$, and $P$ does not contain $(v_0,v_1)$. Note that such a path exists, since $T$ is spanning. Define $e_1=(v_0,v_1)$ and $e_2=(v_j,v_{j+1})$. By (\ref{dfn:pair-property:edge}) and (\ref{dfn:pair-property:path}), we can show that $(S_i, T_i)$ is valid and $P \cup \{e_i\}$ is a path for every $i=1,2$. This completes the proof. \end{proof} Note that $(S_1,T_1)$ and $(S_2,T_2)$ in Lemma \ref{lem:movable-edges} satisfy \begin{align} S_i \cup T_i = S \cup T \text{ and } S_i \cap T_i = S \cap T \text{ for } i = 1,2,\label{lem:movable-edges:edge} \end{align} which immediately implies \begin{align} \ell(S_i) + \ell(T_i) = \ell(S) + \ell(T) \text{ for } i = 1,2, \label{lem:movable-edges:weight} \end{align} where $\ell(F)=\sum_{e \in F}\ell(e)$ for a set $F \subseteq E$.\\ \textcolor{black}{Figure \ref{fig:exevene} shows two pairs $(S\setminus \{e_c\}, T \cup \{e_c\})$ and $(S\setminus \{e_d\}, T \cup \{e_d\})$ satisfying $(\ref{lem:movable-edges:valid})$, $(\ref{lem:movable-edges:vertex})$ and $(\ref{lem:movable-edges:path})$, which are obtained from $(S,T)$, $e_1=e_c$, and $e_2=e_d$ in Fig. \ref{fig:exeven}.} \begin{figure} \centering \figexeven \caption{A valid pair $(S,T)$ of spanning $2$-matchings.}\label{fig:exeven} \end{figure} \begin{figure} \centering \figexevene \caption{Two pairs $(S\setminus \{e_c\}, T \cup \{e_c\})$ and $(S\setminus \{e_d\}, T \cup \{e_d\})$ satisfying $(\ref{lem:movable-edges:valid})$, $(\ref{lem:movable-edges:vertex})$ and $(\ref{lem:movable-edges:path})$, which are obtained from $(S,T)$, $e_1=e_c$, and $e_2=e_d$ in Fig. \ref{fig:exeven}.}\label{fig:exevene} \end{figure} \section{Approximation for even instances} In this section, we construct an approximation algorithm for TSP in which a given graph has an even number of vertices.
Our algorithm first constructs four path covers from minimum weighted $1$- and $2$-factors of a given graph $G$, and then extends each path cover to a tour in such a way that at least one of the tours guarantees a 3/4-differential approximation ratio. Let us first describe the procedure \texttt{FourPathCovers}. Let $(S,T)$ be a valid pair of spanning 2-matchings of $(G,\ell)$ such that $S$ is a 2-factor. The procedure computes from $(S,T)$ four path covers $S_1$, $S_2$, $T_1$, and $T_2$ that satisfy (\ref{lem:movable-edges:vertex}) and (\ref{lem:movable-edges:edge}), such that $V_1(S_i)$ and $V_1(T_i)$ form a partition of $V_1(T)$ for $i=1,2$, i.e., \begin{align} \begin{split} &V_1(S_i) \cup V_1(T_i) = V_1(T) \text{ and } V_1(S_i) \cap V_1(T_i) = \emptyset \text{ for } i=1,2, \end{split}\label{lem:4as2m:vertex} \end{align} and \begin{align} \begin{split} &\text{ there exist } e_1, e_2 \in E \text{ and } P \in \mathcal{P}(T_1 \cap T_2) \text{ such that }\\ &\quad T_1 \setminus T_2=\{e_1\},\ T_2 \setminus T_1=\{e_2\},\ P \cup \{e_1\} \in \mathcal{P}(T_1), \text{ and } P \cup \{e_2\} \in \mathcal{P}(T_2). \end{split}\label{lem:4as2m:diff} \end{align} \begin{figure}[h!t] \begin{procedure}[H] \caption{\texttt{FourPathCovers}$(S,T)$\\ /*$(S, T)$ is a valid pair of spanning $2$-matchings such that $S$ has a cycle. The procedure returns $4$ path covers $S_1$, $S_2$, $T_1$, and $T_2$ that satisfy (\ref{lem:movable-edges:vertex}), (\ref{lem:movable-edges:edge}), and (\ref{lem:4as2m:diff}).*/} \label{alg:FourPathCovers} \begin{algorithmic} \If{$S$ has exactly one cycle} \State Take two edges $e_1$ and $e_2$ in Lemma \ref{lem:movable-edges}. \State \Return $S_1 = S \setminus \{e_1\}$, $T_1 = T \cup \{e_1\}$, $S_2 = S \setminus \{e_2\}$, and $T_2 = T \cup \{e_2\}$ \Else \Comment{$S$ has at least two cycles.} \State Take an edge $e_1$ in Lemma \ref{lem:movable-edges}.
\State \Return \texttt{FourPathCovers}$(S \setminus \{e_1\}, T \cup \{e_1\})$ \EndIf \end{algorithmic} \end{procedure} \end{figure} \textcolor{black}{In Fig. \ref{fig:exeven-st} we apply \textbf{Procedure} \texttt{FourPathCovers} to $(S,T)$ in Fig. \ref{fig:exeven}.} \begin{lem}\label{lem:4as2m} For a graph $G=(V,E)$, let $(S,T)$ be a valid pair of spanning $2$-matchings such that $S$ has a cycle. Then \textbf{Procedure} \texttt{FourPathCovers} returns four path covers $S_1$, $S_2$, $T_1$, and $T_2$ that satisfy $(\ref{lem:movable-edges:vertex})$, $(\ref{lem:movable-edges:edge})$, and $(\ref{lem:4as2m:diff})$. Furthermore, if $S$ is in addition a $2$-factor of $G$, then the four path covers satisfy $(\ref{lem:4as2m:vertex})$. \end{lem} \begin{proof} By repeatedly applying Lemma \ref{lem:movable-edges} to $(S,T)$, we can see that the four path covers $S_1$, $S_2$, $T_1$, and $T_2$ returned by \textbf{Procedure} \texttt{FourPathCovers} satisfy (\ref{lem:movable-edges:vertex}), (\ref{lem:movable-edges:edge}), and (\ref{lem:4as2m:diff}). Furthermore, if $S$ is a $2$-factor of $G$, we have (\ref{lem:4as2m:vertex}), since $V_1(S) = \emptyset$. \end{proof} Note that $(S,T)$ is valid and $V_1(S) \cup V_1(T) = V$ if $S$ and $T$ are $2$- and $1$-factors of $G$, respectively. \begin{figure} \centering \figexevenst \caption{Two pairs $(S_1,T_1)$ and $(S_2,T_2)$ computed by \textbf{Procedure} \texttt{FourPathCovers} for a valid pair $(S,T)$, $e^{(1)}_1 = e_a$, $e^{(2)}_1 = e_b$, $e^{(3)}_1 = e_c$ and $e^{(3)}_2 = e_d$ in Fig. \ref{fig:exeven}, where $e^{(j)}_i$ denotes the edge chosen as $e_i$ in the $j$-th round of the procedure.}\label{fig:exeven-st} \end{figure} Let $S$ and $T$ be $2$- and $1$-factors of $G$, respectively. Note that our algorithm, explained later, makes use of a minimum weighted $2$-factor $S$ and a minimum weighted $1$-factor $T$ of $(G,\ell)$, which can be computed from $(G, \ell)$ in polynomial time.
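The elementary step that \texttt{FourPathCovers} iterates, moving one cycle edge from $S$ to $T$, preserves $S \cup T$ and $S \cap T$, and hence the total length, as in $(\ref{lem:movable-edges:weight})$. A minimal sketch (our own encoding, with edges as ordered tuples, not the paper's implementation):

```python
# Moving an edge e from S to T: S_i = S \ {e}, T_i = T U {e}.
def move_edge(S, T, e):
    return S - {e}, T | {e}

length = {("a", "b"): 3.0, ("b", "c"): 1.0, ("c", "a"): 2.0, ("a", "c"): 5.0}
ell = lambda F: sum(length[e] for e in F)  # total length of an edge set

S = {("a", "b"), ("b", "c"), ("c", "a")}   # a cycle contained in S
T = {("a", "c")}
S1, T1 = move_edge(S, T, ("a", "b"))

# Union and intersection, and hence total length, are preserved:
assert S1 | T1 == S | T and S1 & T1 == S & T
assert ell(S1) + ell(T1) == ell(S) + ell(T)
```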
We assume that $S$ is not a tour of $G$, i.e., $S$ contains at least two cycles, since otherwise $S$ itself is an optimal tour. Let $S_1$, $S_2$, $T_1$, and $T_2$ be the path covers returned by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$. Let us then show how to construct edge sets $A_1$, $A_2$, $B_1$, and $B_2$, such that $S_i \cup A_i$ and $T_i \cup B_i$ (for $i = 1, 2$) are tours and $\ell(A_1)+\ell(A_2)+\ell(B_1)+\ell(B_2) \leq \mathop{\rm wor}(G,\ell)$, where $\mathop{\rm wor}(G,\ell)$ denotes the length of a longest tour of $(G,\ell)$. Let $e_1=(p_1, p_2)$ and $e_2=(p_3, p_4)$ be the edges in Lemma \ref{lem:4as2m}. Since $e_1$ and $e_2$ are chosen from a cycle $C$, we can assume that $p_1 \not=p_3,p_4$ and $p_4 \not=p_1,p_2$, where $p_2 = p_3$ might hold. We note that $\mathcal{P}(S_1) \setminus \mathcal{P}(S_2)$ consists of a $(p_1,p_2)$-path $P_1=C \setminus \{e_1\}$, and $\mathcal{P}(S_2) \setminus \mathcal{P}(S_1)$ consists of a $(p_3,p_4)$-path $P_2=C \setminus \{e_2\}$. Let $Q_i \, (i=1, \ldots, k)$ denote vertex-disjoint $(x_i,y_i)$-paths such that $\{Q_1, \dots , Q_k\} = \mathcal{P}(S_1) \cap \mathcal{P}(S_2)$ and $x_1$ and $y_1$ satisfy \begin{align} \ell(p_2, x_1) + \ell(p_3, y_1) \leq \ell(p_2, y_1) + \ell(p_3, x_1).
\label{xy-ineq} \end{align} \textcolor{black}{Figure \ref{fig:even-s} shows $S_1$ and $S_2$ computed by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$, where the two cases $p_2=p_3$ and $p_2\not=p_3$ are described separately.} \begin{figure} \centering \figevens \caption{Two cases $p_2=p_3$ and $p_2\not=p_3$ for path covers $S_1$ and $S_2$ returned by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$.}\label{fig:even-s} \end{figure} Define $A_1$ and $A_2$ by \begin{align} \begin{split} A_1 &= \{(p_2, x_1)\} \cup \{(y_i, x_{i+1}) \mid i = 1, \dots, k-1\} \cup \{(y_k, p_1)\}\\ A_2 &= \{(p_3, y_1)\} \cup \{(x_i, y_{i+1}) \mid i = 1, \ldots, k-1\} \cup \{(x_k, p_4)\}, \end{split}\label{addingA} \end{align} \textcolor{black}{where the illustration can be found in Fig. \ref{fig:even-a}}. Then we have the following lemma. \begin{lem}\label{lem:addingA} The two sets $A_1$ and $A_2$ defined in $(\ref{addingA})$ satisfy the following three conditions. \begin{enumerate} \item[$(\mathrm{i})$] $S_i \cup A_i$ is a tour of $G$ for $i=1,2$. \item[$(\mathrm{ii})$] $V(A_i)=V_1(S_i)$ for $i=1,2$. \item[$(\mathrm{iii})$] $A_1 \cap A_2 = \emptyset$ and $A_1 \cup A_2$ consists of \begin{enumerate} \item[$(\mathrm{iii}$-$1)$] a $(p_1, p_4)$-path if $p_2 = p_3$. \item[$(\mathrm{iii}$-$2)$] vertex-disjoint $(p_1, p_3)$- and $(p_2, p_4)$-paths if $p_2 \not= p_3$ and $k$ is odd. \item[$(\mathrm{iii}$-$3)$] vertex-disjoint $(p_1, p_2)$- and $(p_3, p_4)$-paths if $p_2 \not= p_3$ and $k$ is even. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Note that $\mathcal{P}(S_1) = \{Q_1, \dots , Q_k\} \cup \{P_1\}$ and $\mathcal{P}(S_2) = \{Q_1, \dots , Q_k\} \cup \{P_2\}$. Thus the lemma follows from the definitions of $A_1$ and $A_2$. \end{proof} \begin{figure} \centering \figevena \caption{Two edge sets $A_1$ and $A_2$ for path covers $S_1$ and $S_2$ (as illustrated in Fig.
\ref{fig:even-s}).}\label{fig:even-a} \end{figure} \begin{figure} \centering \figexevena \caption{Two edge sets $A_1$ and $A_2$ for path covers $S_1$ and $S_2$ in Fig. \ref{fig:exeven-st}.}\label{fig:exeven-a} \end{figure} \textcolor{black}{Figure \ref{fig:exeven-a} shows two edge sets $A_1$ and $A_2$ for $S_1$ and $S_2$ in Fig. \ref{fig:exeven-st}.} Let us next construct $B_1$ and $B_2$. Let $O_i \, (i=1,\ldots, d)$ denote vertex-disjoint $(z_i,w_i)$-paths such that $\{O_1, \ldots, O_d \} = \mathcal{P}(T_1) \cap \mathcal{P}(T_2)$. Note that $\mathcal{P}(T_1) \cap \mathcal{P}(T_2)=\emptyset$ (i.e., $d=0$) might hold. We separately consider the following four cases\textcolor{black}{, where the illustration can be found in Fig. \ref{fig:even-t}}. \begin{enumerate} \item $p_2=p_3$ and $\mathcal{P}(T_1 \cap T_2)$ contains a $(p_1,p_4)$-path. \item $p_2=p_3$ and $\mathcal{P}(T_1 \cap T_2)$ contains no $(p_1,p_4)$-path. \item $p_2\not=p_3$ and $\mathcal{P}(T_1 \cap T_2)$ contains $(p_1,p_4)$- and $(p_2,p_3)$-paths. \item $p_2\not=p_3$ and $\mathcal{P}(T_1 \cap T_2)$ contains a $(p_2,p_3)$-path and no $(p_1,p_4)$-path. \end{enumerate} Here we recall that $e_1=(p_1,p_2)$ and $e_2=(p_3,p_4)$ satisfy Lemma \ref{lem:4as2m}.\medskip \begin{figure} \centering \figevent \caption{Four cases for path covers $T_1$ and $T_2$ returned by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$.}\label{fig:even-t} \end{figure} \textbf{Case 1}: Let $R_1$ denote a $(p_1,p_4)$-path in $\mathcal{P}(T_1 \cap T_2)$, and for some vertex $q_2$, let $R_2$ denote a $(p_2, q_2)$-path in $\mathcal{P}(T_1 \cap T_2)$. Then, we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \ldots, O_d\} \cup \{R_1 \cup \{ e_1\} \cup R_2\}\\ \mathcal{P}(T_2)&= \{O_1, \ldots, O_d\} \cup \{R_1 \cup \{ e_2\} \cup R_2\}, \end{align*} where $R_1\cup \{e_1\} \cup R_2$ and $R_1\cup \{e_2\} \cup R_2$ are $(p_4,q_2)$- and $(p_1,q_2)$-paths, respectively.
Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \begin{cases} \{(q_2,p_4)\} & \text{if } d=0\\ \{(q_2,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, p_4)\} & \text{if } d \geq 1 \end{cases}\\ B_2 &= \begin{cases} \{(q_2,p_1)\} & \text{if } d=0\\ \{(q_2,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, p_1)\} & \text{if } d \geq 1, \end{cases} \end{split}\label{addingB-1} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:even-b-1}}. By definition, we have \begin{align} &T_i \cup B_i \text{ is a tour of } G \text{ for } i = 1,2, \label{addingB:Ham}\\ &B_1 \cap B_2 = \emptyset \text{ and } V(B_i) = V_1(T_i) \text{ for } i = 1,2, \text{and} \label{addingB:vertex} \\ &B_1 \cup B_2 \text{ is a $(p_1, p_4)$-path.} \label{addingB:path} \end{align} \medskip \begin{figure} \centering \figevenb[1] \caption{Two edge sets $B_1$ and $B_2$ for Case 1 (as illustrated in Fig. \ref{fig:even-t}).}\label{fig:even-b-1} \end{figure} \textbf{Case 2}: For some vertices $q_1$, $q_2$, and $q_4$, let $R_1$, $R_2$ and $R_4$ respectively denote $(p_1,q_1)$-, $(p_2, q_2)$-, and $(p_4,q_4)$-paths in $\mathcal{P}(T_1 \cap T_2)$. Then, we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_4, R_1\cup \{e_1\} \cup R_2\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_1, R_4\cup \{e_2\} \cup R_2\}, \end{align*} where $R_1\cup \{e_1\} \cup R_2$ and $R_4\cup \{e_2\} \cup R_2$ are $(q_1,q_2)$- and $(q_4,q_2)$-paths, respectively.
Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \begin{cases} \{(q_2,q_4),(p_4,q_1)\} & \text{if } d=0\\ \{(q_2,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, q_4), (p_4,q_1)\} & \text{if } d \geq 1 \end{cases}\\ B_2 &= \begin{cases} \{(q_2,q_1),(p_1,q_4)\} & \text{if } d=0\\ \{(q_2,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, q_1), (p_1,q_4)\} & \text{if } d \geq 1, \end{cases} \end{split}\label{addingB-2} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:even-b-2}}. Similarly to Case 1, we have (\ref{addingB:Ham}), (\ref{addingB:vertex}) and (\ref{addingB:path}).\medskip \begin{figure} \centering \figevenb[2] \caption{Two edge sets $B_1$ and $B_2$ for Case 2 (as illustrated in Fig. \ref{fig:even-t}).}\label{fig:even-b-2} \end{figure} \textbf{Case 3}: Let $R_1$ and $R_2$ respectively denote the $(p_1,p_4)$- and $(p_2,p_3)$-paths in $\mathcal{P}(T_1 \cap T_2)$. Then, we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_1\cup \{e_1\} \cup R_2\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_1\cup \{e_2\} \cup R_2\}, \end{align*} where $R_1\cup \{e_1\} \cup R_2$ and $R_1\cup \{e_2\} \cup R_2$ are $(p_3,p_4)$- and $(p_1,p_2)$-paths, respectively. Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \begin{cases} \{(p_3,p_4)\} & \text{if } d=0\\ \{(p_3,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, p_4)\} & \text{if } d \geq 1 \end{cases}\\ B_2 &= \begin{cases} \{(p_2,p_1)\} & \text{if } d=0\\ \{(p_2,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, p_1)\} & \text{if } d \geq 1, \end{cases} \end{split}\label{addingB-3} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:even-b-3}}. Similarly to the previous cases, we have (\ref{addingB:Ham}) and (\ref{addingB:vertex}).
Furthermore, $B_1 \cup B_2$ consists of vertex-disjoint $(p_1, p_2)$- and $(p_3, p_4)$-paths if $d$ is even, and vertex-disjoint $(p_1, p_3)$- and $(p_2, p_4)$-paths if $d$ is odd.\medskip \begin{figure} \centering \figevenb[3] \caption{Two edge sets $B_1$ and $B_2$ for Case 3 (as illustrated in Fig. \ref{fig:even-t}).}\label{fig:even-b-3} \end{figure} \textbf{Case 4}: Let $R_2$ denote the $(p_2,p_3)$-path in $\mathcal{P}(T_1 \cap T_2)$, and for some vertices $q_1$ and $q_4$, let $R_1$ and $R_4$ respectively denote $(p_1, q_1)$- and $(p_4,q_4)$-paths in $\mathcal{P}(T_1 \cap T_2)$. Then, we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_4, R_1\cup \{e_1\} \cup R_2\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_1, R_4\cup \{e_2\} \cup R_2\}, \end{align*} where $R_1\cup \{e_1\} \cup R_2$ and $R_4\cup \{e_2\} \cup R_2$ are $(q_1,p_3)$- and $(q_4,p_2)$-paths, respectively. Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \begin{cases} \{(p_3,q_4),(p_4,q_1)\} & \text{if } d=0\\ \{(p_3,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, q_4), (p_4,q_1)\} & \text{if } d \geq 1 \end{cases}\\ B_2 &= \begin{cases} \{(p_2,q_1),(p_1,q_4)\} & \text{if } d=0\\ \{(p_2,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, q_1), (p_1,q_4)\} & \text{if } d \geq 1, \end{cases} \end{split}\label{addingB-4} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:even-b-4}}. Similarly to the previous cases, we have (\ref{addingB:Ham}) and (\ref{addingB:vertex}). Furthermore, $B_1 \cup B_2$ consists of vertex-disjoint $(p_1, p_3)$- and $(p_2, p_4)$-paths if $d$ is even, and vertex-disjoint $(p_1, p_2)$- and $(p_3, p_4)$-paths if $d$ is odd. \begin{figure} \centering \figevenb[4] \caption{Two edge sets $B_1$ and $B_2$ for Case 4 (as illustrated in Fig. \ref{fig:even-t}).}\label{fig:even-b-4} \end{figure} In summary, we have the following lemma.
\begin{lem}\label{lem:addingB} Let $B_1$ and $B_2$ be the two edge sets defined above. Then they satisfy $(\ref{addingB:Ham})$ and $(\ref{addingB:vertex})$, and $B_1 \cup B_2$ consists of $(\mathrm{i})$ a $(p_1,p_4)$-path if $p_2=p_3$, and either $(\mathrm{ii})$ vertex-disjoint $(p_1,p_2)$- and $(p_3,p_4)$-paths or $(\mathrm{iii})$ vertex-disjoint $(p_1,p_3)$- and $(p_2,p_4)$-paths if $p_2\not=p_3$. \end{lem} \begin{figure} \centering \figexevenb \caption{Two edge sets $B_1$ and $B_2$ for path covers $T_1$ and $T_2$ in Fig. \ref{fig:exeven-st}.}\label{fig:exeven-b} \end{figure} \textcolor{black}{Figure \ref{fig:exeven-b} shows two edge sets $B_1$ and $B_2$ for path covers $T_1$ and $T_2$ in Fig. \ref{fig:exeven-st}.} Furthermore, $A_i$ and $B_i$ ($i=1,2$) satisfy the following properties. \begin{lem}\label{lem:addingAB} Let $A_1$, $A_2$, $B_1$, and $B_2$ be defined as above. Then they are all pairwise disjoint, and $C = A_1 \cup A_2 \cup B_1 \cup B_2$ is a 2-factor, consisting of either one or two cycles. Furthermore, there exists a tour $H$ of $G$ such that $\ell(H) \geq \ell(C)$. \end{lem} \begin{proof} It is not difficult to see that $A_1$, $A_2$, $B_1$, and $B_2$ are pairwise disjoint.
Lemmas \ref{lem:4as2m}, \ref{lem:addingA}, and \ref{lem:addingB} imply that $C=A_1 \cup A_2 \cup B_1 \cup B_2$ is a 2-factor consisting of either one or two cycles. If $C$ consists of one cycle, then $C$ itself is a tour, and the latter statement of the lemma holds with $H=C$. Assume then that $C$ consists of two cycles. In this case, we can see that the two edges $(p_2,x_1)$ and $(p_3,y_1)$ belong to different cycles by (\ref{addingA}). Let $H=(C \setminus \{(p_2,x_1), (p_3,y_1)\}) \cup \{(p_2,y_1), (p_3,x_1)\}$ \textcolor{black}{(see Fig. \ref{fig:cdh})}. Then $H$ is a tour of $G$. By assumption (\ref{xy-ineq}), we have $\ell(H)\geq\ell(C)$, which completes the proof. \end{proof} \begin{figure} \centering \CDH \caption{A tour $H$ in the proof of Lemma \ref{lem:addingAB}, when $C$ consists of two cycles.}\label{fig:cdh} \end{figure} We are now ready to describe our approximation algorithm. \begin{figure}[h!t] \begin{algorithm}[H] \caption{\texttt{TourEven}} \label{alg:TourEven} \begin{algorithmic} \Require A complete graph $G=(V,E)$ with even $|V|$, and an edge length function $\ell :E \to \mathbb{R}_+$. \Ensure A tour $T_{{\rm apx}}$ in $G$.\vspace{3pt} \State Compute a minimum weighted 2-factor $S$ and a minimum weighted 1-factor $T$ of $(G,\ell)$. \If{$S$ is a tour} \State $T_{{\rm apx}} := S$.
\Else \State $S_1, T_1, S_2, T_2 := \texttt{FourPathCovers}(S,T)$.\vspace{1pt} \State Compute edge sets $A_1$, $A_2$, $B_1$, $B_2$ defined in (\ref{addingA}), (\ref{addingB-1}), (\ref{addingB-2}), (\ref{addingB-3}), and (\ref{addingB-4}).\vspace{1pt} \State $\mathcal{T} := \{S_1 \cup A_1, S_2 \cup A_2, T_1 \cup B_1, T_2 \cup B_2 \}$.\vspace{1pt} \State $T_{{\rm apx}} := \mathop{\rm argmin}\limits_{T \in \mathcal{T}} \ell(T)$.\vspace{1pt} \EndIf \State Output $T_{{\rm apx}}$ and halt. \end{algorithmic} \end{algorithm} \end{figure} \begin{thm}\label{thm:even-delta} For a complete graph $G=(V,E)$ with an even number of vertices and an edge length function $\ell: E \to \mathbb{R}_+$, \textbf{Algorithm} \texttt{TourEven} computes a $3/4$-differential approximate tour of $(G,\ell)$ in polynomial time. \end{thm} \begin{proof} We show that \textbf{Algorithm} \texttt{TourEven} outputs a 3/4-differential approximate tour $T_{{\rm apx}}$ in polynomial time. If the minimum weighted 2-factor $S$ of $(G, \ell)$ computed in the algorithm is a tour, then clearly $T_{{\rm apx}}=S$ is an optimal tour. On the other hand, if $S$ is not a tour, then we have \begin{align*} 4\ell(T_{{\rm apx}}) &\leq \ell(S_1 \cup A_1) + \ell(S_2 \cup A_2) + \ell(T_1 \cup B_1) + \ell(T_2 \cup B_2)\\ &= 2(\ell(S)+\ell(T)) + \ell(A_1 \cup A_2 \cup B_1 \cup B_2)\\ &\leq 3\mathop{\rm opt}(G, \ell) + \mathop{\rm wor}(G, \ell), \end{align*} where the first equality follows from Lemmas \ref{lem:addingA}, \ref{lem:addingB}, and \ref{lem:addingAB}, and the last inequality follows from Lemma \ref{lem:addingAB}, $\ell(S) \leq \mathop{\rm opt}(G,\ell)$, and $2\ell(T) \leq \mathop{\rm opt}(G,\ell)$.
Thus $T_{{\rm apx}}$ is a 3/4-differential approximate tour. Note that minimum weighted $1$- and $2$-factors can be computed in polynomial time, and $A_i$ and $B_i$ ($i=1,2$) can also be computed in polynomial time. Thus \textbf{Algorithm} \texttt{TourEven} runs in polynomial time, which completes the proof. \end{proof} Before concluding the section, let us remark that $3/4$-differential approximability is already known for graphs with an even number of vertices \cite{escoffier2008better}. Unlike the algorithm in \cite{escoffier2008better}, however, ours is constructed in a uniform framework, which can further be extended to the odd case. \section{Approximation for odd instances} In this section, we construct an approximation algorithm for TSP with an odd number of vertices. Our algorithm is much more involved than in the even case. It first guesses a path $P$ of length $3$ in an optimal tour, constructs eight path covers based on $P$, and extends each path cover to a tour in such a way that at least one of the eight tours guarantees a 3/4-differential approximation ratio. More precisely, for each path $P$ of length $3$, say, $P=\{(v_1,v_2),(v_2,v_3),(v_3,v_4)\}$ with all $v_i$'s distinct, let $S$ be a minimum weighted $2$-factor among those containing $P$, let $T$ be a minimum weighted path cover among those satisfying $(v_1, v_2), (v_2, v_3) \in T$ and $V_1(T)=V \setminus \{v_2\}$, and let $T^\prime$ be a minimum weighted path cover among those satisfying $(v_2, v_3), (v_3, v_4) \in T^\prime$ and $V_1(T^\prime)=V \setminus \{v_3\}$. Assume that $S$ is not a tour, i.e., it contains at least two cycles, since otherwise $S$ is optimal, and hence ensures $3/4$-differential approximability if some optimal tour contains $P$. We note that $(S,T)$ and $(S,T^\prime)$ are both valid pairs of spanning 2-matchings. We apply \textbf{Procedure} \texttt{FourPathCovers} to them, but not arbitrarily.
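The guess-and-take-the-best strategy just described can be sketched as follows. This is only an illustrative sketch, not the algorithm itself: \texttt{build\_candidates} is a hypothetical stand-in for the constructions developed in the remainder of this section (the constrained $S$, $T$, $T^\prime$ and the eight tours derived from them); only the enumerate-and-minimize skeleton is taken from the text.

```python
from itertools import permutations

def tour_odd_skeleton(V, length, build_candidates):
    """Sketch of the guessing loop: try every ordered quadruple of distinct
    vertices as the length-3 path P = (v1,v2),(v2,v3),(v3,v4), collect the
    candidate tours derived from it, and keep the shortest one overall.

    `build_candidates` is a hypothetical oracle (an assumption of this
    sketch) returning the candidate tours for a guessed path P."""
    best = None
    for v1, v2, v3, v4 in permutations(V, 4):
        for tour in build_candidates(v1, v2, v3, v4):
            if best is None or length(tour) < length(best):
                best = tour
    return best
```

Since some optimal tour contains at least one of the guessed paths, the best candidate over all guesses inherits the guarantee established for that guess.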
Let us specify two cycles $C^*$ and $C^{**}$ in $S$ such that $P \subseteq C^*$ and $P \cap C^{**} = \emptyset$. We define two vertices $v_0$ and $v_5$ in $V(C^*)$ such that $v_0\not=v_2$, $v_5\not=v_3$, and $(v_0, v_1) ,(v_4, v_5) \in C^*$. By definition, $v_0=v_4$ and $v_5=v_1$ hold if $|C^*|=4$ (i.e., if $C^*$ has length $4$). Furthermore, we define two edges $f$ and $f^\prime$ in $C^{**}$ that satisfy the properties in the next lemma. \begin{lem} \label{lem:first-edge} Let $C^{**}$, $T$ and $T^\prime$ be defined as above. Then there exist two edges $f \in C^{**}\setminus T$ and $f^\prime \in C^{**}\setminus T^\prime$ such that \begin{enumerate} \item[$(\mathrm{i})$] they have a common endpoint $q$, and \item[$(\mathrm{ii})$] $T \cup \{f\}$ and $T^\prime \cup \{f^\prime\}$ are path covers. \end{enumerate} \end{lem} \begin{proof} If $C^{**} \setminus (T \cup T^\prime) \not= \emptyset$, then arbitrarily take an edge $f=f^\prime$ in $C^{**} \setminus (T \cup T^\prime)$. It is not difficult to see that (i) and (ii) in the lemma are satisfied. On the other hand, if $C^{**} \setminus (T \cup T^\prime) = \emptyset$, then $C^{**}$ is even and is covered by the two matchings $C^{**} \cap T$ and $C^{**} \cap T^\prime$. This again implies the existence of two such edges. \end{proof} \begin{figure} \centering \figexodd \caption{A 2-factor $S$ and two path covers $T$ and $T^\prime$ defined before Lemma \ref{lem:first-edge}, and an example of $f$ and $f^\prime$, which contain $q$.}\label{fig:exodd} \end{figure} We note that $f$ and $f^\prime$ in Lemma \ref{lem:first-edge} might be identical, and (ii) in Lemma \ref{lem:first-edge} implies that the two pairs $(S \setminus \{f\}, T \cup \{f\})$ and $(S \setminus \{f^\prime\}, T^\prime \cup \{f^\prime\})$ are valid.
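A minimal sketch of the selection in Lemma \ref{lem:first-edge}, assuming edges are represented as 2-element frozensets and the cycle $C^{**}$ is given as its edge list in cyclic order; the two branches mirror the two cases of the proof above.

```python
def first_edges(cycle_edges, T, Tp):
    """Pick f in C** \\ T and f' in C** \\ T' with a common endpoint q,
    following the two cases in the proof of the lemma.

    cycle_edges: edges of C** (2-element frozensets) in cyclic order.
    T, Tp: the edge sets of the two path covers."""
    # Case 1: some cycle edge avoids both path covers; take f = f'.
    leftover = [e for e in cycle_edges if e not in T and e not in Tp]
    if leftover:
        f = leftover[0]
        return f, f, next(iter(f))
    # Case 2: C** is covered by the matchings C** ∩ T and C** ∩ Tp, so
    # consecutive cycle edges alternate between them; around their common
    # endpoint q we obtain f ∈ Tp (hence f ∉ T) and f' ∈ T (hence f' ∉ Tp).
    for e, e2 in zip(cycle_edges, cycle_edges[1:] + cycle_edges[:1]):
        if e in Tp and e2 in T:
            (q,) = e & e2  # the unique shared endpoint
            return e, e2, q
    raise ValueError("C** is not covered as assumed")
```

For instance, on a 4-cycle $a,b,c,d$ with $\{ab, cd\} \subseteq T$ and $\{bc, da\} \subseteq T^\prime$, the alternating case fires and returns $f = bc$, $f^\prime = cd$ with $q = c$.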
\textcolor{black}{Figure \ref{fig:exodd} shows an example of $S$, $T$, $T^\prime$, $f$ and $f^\prime$.} Our algorithm uses \textbf{Procedure} \texttt{FourPathCovers} for $(S,T)$ defined as above in such a way that edge $e_1=f$ is chosen in the first round and two edges $e_1=(v_3,v_4)$ and $e_2=(v_0,v_1)$ are chosen in the last round. Similarly, our algorithm uses \textbf{Procedure} \texttt{FourPathCovers} for $(S,T^\prime)$ defined as above in such a way that edge $e_1=f^\prime$ is chosen in the first round and two edges $e_1=(v_1,v_2)$ and $e_2=(v_4,v_5)$ are chosen in the last round. Let $S_1$, $T_1$, $S_2$, and $T_2$ be four path covers obtained by \textbf{Procedure} \texttt{FourPathCovers}$(S, T)$, and let $S^\prime_1$, $T^\prime_1$, $S^\prime_2$, and $T^\prime_2$ be four path covers returned by \textbf{Procedure} \texttt{FourPathCovers}$(S, T^\prime)$. \begin{lem}\label{lem:4as2m-odd} Let $S$, $T$, $S_i$, and $T_i\,(i=1,2)$ be defined as above. Then $S_1$, $S_2$, $T_1$, and $T_2$ are path covers such that \begin{align} &S_i \cup T_i = S \cup T \text{ and } S_i \cap T_i = S \cap T \text{ for } i=1,2, \label{lem:4as2m-odd:edge}\\ &V_1(S_i) \text{ and } V_1(T_i) \text{ form a partition of } V \setminus \{v_2\} \text{ for } i=1,2, \label{lem:4as2m-odd:vertex}\\ &T_1 \setminus T_2 = \{(v_3,v_4)\}, T_2 \setminus T_1 = \{(v_0,v_1)\}, \text{and } \{(v_1,v_2), (v_2,v_3)\} \in \mathcal{P}(T_1 \cap T_2), \text{and}\label{lem:4as2-odd:path}\\ &q \in V_1(S_1) \cap V_1(S_2),\label{lem:4as2m-odd:commonendpoint} \end{align} where $v_i \in V(C^*)\, (i=0,\ldots,4)$ are defined as above and $q$ is a common endpoint of $f$ and $f^\prime$ in \mbox{Lemma \ref{lem:first-edge}}. \end{lem} \begin{proof} By definition, we have $V_1(S) = \emptyset$ and $V_1(T) = V \setminus \{v_2\}$.
Moreover, since an edge $f$ in Lemma \ref{lem:first-edge} is chosen in the first round of \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$, and $(v_3,v_4)$ and $(v_0,v_1)$ are chosen in the last round of \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$, Lemma \ref{lem:4as2m} implies the statement of the lemma. \end{proof} \textcolor{black}{Figure \ref{fig:exodd-st} shows $(S_1,T_1)$ and $(S_2, T_2)$ computed by \textbf{Procedure} \texttt{FourPathCovers} for $(S,T)$ in Fig. \ref{fig:exodd}.} \begin{figure} \centering \figexoddst \caption{Two pairs $(S_1,T_1)$ and $(S_2,T_2)$ computed by \textbf{Procedure} \texttt{FourPathCovers} for $(S,T)$, $e^{(1)}_1 = f$, $e^{(2)}_1 = e$, $e^{(3)}_1 = (v_3,v_4)$ and $e^{(3)}_2 = (v_0,v_1)$ in Fig. \ref{fig:exodd}, where $e^{(j)}_i$ denotes the edge chosen as $e_i$ in the $j$-th round of the procedure.}\label{fig:exodd-st} \end{figure} Similarly, we have the following lemma. \begin{lem}\label{lem:4as2m-odd'} Let $S$, $T^\prime$, $S^\prime_i$, and $T^\prime_i\,(i=1,2)$ be defined as above. Then $S^\prime_1$, $S^\prime_2$, $T^\prime_1$, and $T^\prime_2$ are path covers such that \begin{align} &S^\prime_i \cup T^\prime_i = S \cup T^\prime \text{ and } S^\prime_i \cap T^\prime_i = S \cap T^\prime \text{ for } i=1,2, \label{lem:4as2m-odd':edge}\\ &V_1(S^\prime_i) \text{ and } V_1(T^\prime_i) \text{ form a partition of } V \setminus \{v_3\} \text{ for } i=1,2, \label{lem:4as2m-odd':vertex}\\ &T^\prime_1 \setminus T^\prime_2 = \{(v_1,v_2)\}, T^\prime_2 \setminus T^\prime_1 = \{(v_4,v_5)\}, \text{and } \{(v_2,v_3), (v_3,v_4)\} \in \mathcal{P}(T^\prime_1 \cap T^\prime_2), \text{and}\label{lem:4as2m-odd':path}\\ &q \in V_1(S^\prime_1) \cap V_1(S^\prime_2),\label{lem:4as2m-odd':commonendpoint} \end{align} where $v_i \in V(C^*)\, (i=1,\ldots,5)$ are defined as above and $q$ is a common endpoint of $f$ and $f^\prime$ in \mbox{Lemma \ref{lem:first-edge}}.
\end{lem} \textcolor{black}{Figure \ref{fig:exodd-st'} shows $(S^\prime_1,T^\prime_1)$ and $(S^\prime_2, T^\prime_2)$ computed by \textbf{Procedure} \texttt{FourPathCovers} for $(S,T^\prime)$ in Fig. \ref{fig:exodd}.} \begin{figure} \centering \figexoddstp \caption{Two pairs $(S^\prime_1,T^\prime_1)$ and $(S^\prime_2,T^\prime_2)$ computed by \textbf{Procedure} \texttt{FourPathCovers} for $(S,T^\prime)$, $e^{(1)}_1 = f^\prime$, $e^{(2)}_1 = e^\prime$, $e^{(3)}_1 = (v_1,v_2)$ and $e^{(3)}_2 = (v_4,v_5)$ in Fig. \ref{fig:exodd}, where $e^{(j)}_i$ denotes the edge chosen as $e_i$ in the $j$-th round of the procedure.}\label{fig:exodd-st'} \end{figure} Let us then show how to construct edge sets $A^{(\prime)}_i$ and $B^{(\prime)}_i$ (for $i=1,2$), such that $S^{(\prime)}_i \cup A^{(\prime)}_i$ and $T^{(\prime)}_i \cup B^{(\prime)}_i$ (for $i = 1,2$) are tours and \[\ell(A_1)+\ell(A_2)+\ell(B_1)+\ell(B_2)+\ell(A^\prime_1)+\ell(A^\prime_2)+\ell(B^\prime_1)+\ell(B^\prime_2) \leq 2\mathop{\rm wor}(G,\ell) - 2\ell(v_2,v_3),\]where $\mathop{\rm wor}(G,\ell)$ denotes the length of a longest tour of $(G,\ell)$. \begin{figure} \centering \figodds \caption{Two cases $|C^*| > 4$ and $|C^*| = 4$ for path covers $S_1$ and $S_2$ returned by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$.}\label{fig:odd-s} \end{figure} Let us first show how to construct $A_1$ and $A_2$. By definition, $\mathcal{P}(S_1) \setminus \mathcal{P}(S_2)$ consists of a $(v_4,v_3)$-path $P_1 = C^* \setminus \{(v_4,v_3)\}$, and $\mathcal{P}(S_2) \setminus \mathcal{P}(S_1)$ consists of a $(v_1,v_0)$-path $P_2 = C^* \setminus \{(v_1,v_0)\}$. Let $Q_i \, (i=1,\ldots,k)$ denote $(x_i,y_i)$-paths such that $\{Q_1, \ldots, Q_k\} = \mathcal{P}(S_1) \cap \mathcal{P}(S_2)$, where $x_1=q$ in \mbox{Lemma \ref{lem:4as2m-odd}}. 
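The edge sets $A_1$ and $A_2$ defined next thread the common paths $Q_1, \ldots, Q_k$ alternately between the endpoints of $P_1$ and $P_2$. A minimal sketch of this stitching, assuming the endpoint lists $x_i$, $y_i$ are given explicitly (the function name is ours):

```python
def connector_sets(xs, ys, v0, v1, v3, v4):
    """Build the connector edge sets A1 and A2: A1 links v3 through the
    common paths Q_1..Q_k (entering at x_i, leaving at y_i) to v4, and A2
    links v1 through them in the opposite orientation to v0, so that each
    A_i closes the paths of S_i into a single cycle.

    xs, ys: endpoint lists of the common paths Q_1..Q_k (xs[0] = q)."""
    k = len(xs)
    A1 = [(v3, xs[0])] + [(ys[i], xs[i + 1]) for i in range(k - 1)] + [(ys[k - 1], v4)]
    A2 = [(v1, ys[0])] + [(xs[i], ys[i + 1]) for i in range(k - 1)] + [(xs[k - 1], v0)]
    return A1, A2
```

Each $A_i$ has exactly $k+1$ edges and touches every $x_i$ and $y_i$ exactly once, which is the disjointness used below.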
\textcolor{black}{Figure \ref{fig:odd-s} shows $S_1$ and $S_2$ computed by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$, where two cases $|C^*| > 4$ and $|C^*| = 4$ are separately described.} Define $A_1$ and $A_2$ by \begin{align} \begin{split} A_1 &= \{(v_3, x_1)\} \cup \{(y_i, x_{i+1}) \mid i = 1, \dots, k-1\} \cup \{(y_k, v_4)\}\\ A_2 &= \{(v_1, y_1)\} \cup \{(x_i, y_{i+1}) \mid i = 1, \ldots, k-1\} \cup \{(x_k, v_0)\}, \end{split}\label{addingA-odd} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:odd-a}}. Then we have the following lemma. \begin{figure} \centering \figodda \caption{Two edge sets $A_1$ and $A_2$ for path covers $S_1$ and $S_2$ (as illustrated in Fig. \ref{fig:odd-s}).}\label{fig:odd-a} \end{figure} \begin{lem}\label{lem:addingA-odd} Two sets $A_1$ and $A_2$ defined in $(\ref{addingA-odd})$ satisfy the following three conditions. \begin{enumerate} \item[$(\mathrm{i})$] $S_i \cup A_i$ is a tour of $G$ for $i=1,2$. \item[$(\mathrm{ii})$] $V(A_i)=V_1(S_i)$ for $i=1,2$. \item[$(\mathrm{iii})$] $A_1 \cap A_2 = \emptyset$ and $A_1 \cup A_2$ consists of \begin{enumerate} \item[$(\mathrm{iii}$-$1)$] a $(v_1, v_3)$-path if $|C^*|=4$. \item[$(\mathrm{iii}$-$2)$] vertex-disjoint $(v_0, v_3)$- and $(v_1, v_4)$-paths if $|C^*|>4$ and $k$ is odd. \item[$(\mathrm{iii}$-$3)$] vertex-disjoint $(v_0, v_1)$- and $(v_3, v_4)$-paths if $|C^*|>4$ and $k$ is even. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Note that $\mathcal{P}(S_1) = \{Q_1, \dots , Q_k\} \cup \{P_1\}$ and $\mathcal{P}(S_2) = \{Q_1, \dots , Q_k\} \cup \{P_2\}$. Thus the statement follows from the definitions of $A_1$ and $A_2$. \end{proof} \begin{figure} \centering \figexodda \caption{Two edge sets $A_1$ and $A_2$ for path covers $S_1$ and $S_2$ in Fig. \ref{fig:exodd-st}.}\label{fig:exodd-a} \end{figure} Similarly, let us define $A^\prime_1$ and $A^\prime_2$.
Recall that $\mathcal{P}(S^\prime_1) \setminus \mathcal{P}(S^\prime_2)$ consists of a $(v_1,v_2)$-path $P^\prime_1 = C^* \setminus \{(v_1,v_2)\}$, and $\mathcal{P}(S^\prime_2) \setminus \mathcal{P}(S^\prime_1)$ consists of a $(v_4,v_5)$-path $P^\prime_2 = C^* \setminus \{(v_4,v_5)\}$. Let $Q^\prime_i \, (i=1,\ldots,k)$ denote $(x^\prime_i,y^\prime_i)$-paths such that $\{Q^\prime_1, \ldots, Q^\prime_k\} = \mathcal{P}(S^\prime_1) \cap \mathcal{P}(S^\prime_2)$, where $x^\prime_1=q$ in \mbox{Lemma \ref{lem:4as2m-odd}}. Define $A^\prime_1$ and $A^\prime_2$ by \begin{align} \begin{split} A^\prime_1 &= \{(v_2, x^\prime_1)\} \cup \{(y^\prime_i, x^\prime_{i+1}) \mid i = 1, \dots, k-1\} \cup \{(y^\prime_k, v_1)\}\\ A^\prime_2 &= \{(v_4, y^\prime_1)\} \cup \{(x^\prime_i, y^\prime_{i+1}) \mid i = 1, \ldots, k-1\} \cup \{(x^\prime_k, v_5)\}. \end{split}\label{addingA-odd'} \end{align} Then we have the following lemma. \begin{lem}\label{lem:addingA-odd'} Two sets $A^\prime_1$ and $A^\prime_2$ defined in $(\ref{addingA-odd'})$ satisfy the following three conditions. \begin{enumerate} \item[$(\mathrm{i})$] $S^\prime_i \cup A^\prime_i$ is a tour of $G$ for $i=1,2$. \item[$(\mathrm{ii})$] $V(A^\prime_i)=V_1(S^\prime_i)$ for $i=1,2$. \item[$(\mathrm{iii})$] $A^\prime_1 \cap A^\prime_2 = \emptyset$ and $A^\prime_1 \cup A^\prime_2$ consists of \begin{enumerate} \item[$(\mathrm{iii}$-$1)$] a $(v_2, v_4)$-path if $|C^*|=4$. \item[$(\mathrm{iii}$-$2)$] vertex-disjoint $(v_1, v_4)$- and $(v_2, v_5)$-paths if $|C^*|>4$ and $k$ is odd. \item[$(\mathrm{iii}$-$3)$] vertex-disjoint $(v_1, v_2)$- and $(v_4, v_5)$-paths if $|C^*|>4$ and $k$ is even. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Note that $\mathcal{P}(S^\prime_1) = \{Q^\prime_1, \dots , Q^\prime_k\} \cup \{P^\prime_1\}$ and $\mathcal{P}(S^\prime_2) = \{Q^\prime_1, \dots , Q^\prime_k\} \cup \{P_2^\prime\}$. Thus the statement follows from the definitions of $A^\prime_1$ and $A^\prime_2$.
\end{proof} \textcolor{black}{Figures \ref{fig:exodd-a} and \ref{fig:exodd-a'} show an example of edge sets $A_1$, $A_2$, $A^\prime_1$, and $A^\prime_2$ for path covers $S_1$, $S_2$, $S^\prime_1$, and $S^\prime_2$ in Figs. \ref{fig:exodd-st} and \ref{fig:exodd-st'}.} \begin{figure} \centering \figexoddap \caption{Two edge sets $A^\prime_1$ and $A^\prime_2$ for path covers $S^\prime_1$ and $S^\prime_2$ in Fig. \ref{fig:exodd-st'}.}\label{fig:exodd-a'} \end{figure} \begin{figure} \centering \figoddt \caption{Three cases for path covers $T_1$ and $T_2$ returned by \textbf{Procedure} \texttt{FourPathCovers}$(S,T)$.}\label{fig:odd-t} \end{figure} Let us next construct $B_1$, $B_2$, $B^\prime_1$, and $B^\prime_2$. Let $O_i \, (i=1,\ldots,d)$ denote vertex-disjoint $(z_i,w_i)$-paths such that $\{O_1, \ldots, O_d\} = \mathcal{P}(T_1) \cap \mathcal{P}(T_2)$, where $z_1$ and $w_1$ satisfy \begin{align} \ell(v_1, z_1) + \ell(v_3, w_1) \leq \ell(v_1, w_1) + \ell(v_3, z_1). \label{zw-ineq} \end{align} We remark that $d \geq 1$ (i.e., $\mathcal{P}(T_1) \cap \mathcal{P}(T_2) \not = \emptyset$) holds if $n \geq 16$. To see this, we have $|\mathcal{P}(T_1)| = \lfloor n/2 \rfloor - (k+1)$, where $k+1$ is equal to the number of cycles in $S$. Since each cycle in $S$ has size at least $3$, $k+1 \leq \lfloor n/3 \rfloor$ holds, which implies that $|\mathcal{P}(T_1)| \geq \lfloor n/2 \rfloor - \lfloor n/3 \rfloor \geq 3$ if $n \geq 16$. Since $|\mathcal{P}(T_1) \setminus \mathcal{P}(T_2)| \leq 2$, we have $d=|\mathcal{P}(T_1) \cap \mathcal{P}(T_2)| \geq 1$ if $n\geq16$. In the subsequent discussion, we assume that $n \geq 16$, and construct $B_1$ and $B_2$ by considering the following three cases \textcolor{black}{(see Fig. \ref{fig:odd-t})}. \begin{enumerate} \item $|C^*|>4$ and $\mathcal{P}(T_1 \cap T_2)$ contains a $(v_0,v_4)$-path. \item $|C^*|>4$ and $\mathcal{P}(T_1 \cap T_2)$ contains no $(v_0,v_4)$-path. \item $|C^*|=4$.
\end{enumerate} \textbf{Case 1}: Let $R_0$ denote a $(v_0,v_4)$-path in $\mathcal{P}(T_1 \cap T_2)$, and let $R_1 = \{(v_1,v_2),(v_2,v_3)\}$. By definition $R_1$ is a $(v_1, v_3)$-path in $\mathcal{P}(T_1 \cap T_2)$. Then we note that \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_0\cup \{(v_3,v_4)\} \cup R_1\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_0\cup \{(v_0,v_1)\} \cup R_1\}, \end{align*} where $R_0\cup \{(v_3,v_4)\} \cup R_1$ and $R_0\cup \{(v_0,v_1)\} \cup R_1$ are $(v_0,v_1)$- and $(v_3,v_4)$-paths, respectively. Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \{(v_1,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, v_0)\}\\ B_2 &= \{(v_3,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, v_4)\}, \end{split}\label{addingB-odd-1} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:odd-b-1}}. Then we have \begin{align} &T_i \cup B_i \text{ is a tour of } G \text{ for } i = 1,2, \label{addingB-odd:Ham}\\ &B_1 \cap B_2 = \emptyset \text{ and } V(B_i) = V_1(T_i) \text{ for } i = 1,2, \text{and} \label{addingB-odd:vertex} \\ \begin{split} &B_1 \cup B_2 \text{ consist of vertex-disjoint } (v_0,v_1) \text{- and } (v_3,v_4)\text{-paths if } d \text{ is even,} \\ &\quad\text{and vertex-disjoint } (v_0,v_3) \text{- and } (v_1,v_4)\text{-paths if } d \text{ is odd.} \end{split}\label{addingB-odd:path} \end{align}\medskip \begin{figure} \centering \figoddb{3}{1} \caption{Two edge sets $B_1$ and $B_2$ for Case 1 (as illustrated in Fig. \ref{fig:odd-t}).}\label{fig:odd-b-1} \end{figure} \textbf{Case 2}: Let $R_1 = \{(v_1,v_2), (v_2,v_3)\}$ (i.e., let $R_1$ be a $(v_1,v_3)$-path in $\mathcal{P}(T_1 \cap T_2)$). Let $R_0$ and $R_4$ respectively denote $(v_0, r_0)$- and $(v_4,r_4)$-paths in $\mathcal{P}(T_1 \cap T_2)$. 
Then, we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_0, R_1\cup \{(v_3,v_4)\} \cup R_4\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_4, R_0\cup \{(v_0,v_1)\} \cup R_1\}, \end{align*} where $R_1\cup \{(v_3,v_4)\} \cup R_4$ and $R_0\cup \{(v_0,v_1)\} \cup R_1$ are $(v_1,r_4)$- and $(v_3,r_0)$-paths, respectively. Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \{(v_1,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, r_0), (v_0,r_4)\} \\ B_2 &= \{(v_3,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, r_4), (v_4,r_0)\}, \end{split}\label{addingB-odd-2} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:odd-b-2}}. Similarly to Case 1, we have (\ref{addingB-odd:Ham}) and (\ref{addingB-odd:vertex}). Furthermore, $B_1 \cup B_2$ consists of vertex-disjoint $(v_0, v_3)$- and $(v_1, v_4)$-paths if $d$ is even, and vertex-disjoint $(v_0, v_1)$- and $(v_3, v_4)$-paths if $d$ is odd. \medskip \begin{figure} \centering \figoddb{4}{2} \caption{Two edge sets $B_1$ and $B_2$ for Case 2 (as illustrated in Fig. \ref{fig:odd-t}).}\label{fig:odd-b-2} \end{figure} \textbf{Case 3}: In this case, we have $v_0=v_4$. Let $R_1 = \{(v_1,v_2), (v_2,v_3)\}$ (i.e., let $R_1$ be a $(v_1,v_3)$-path in $\mathcal{P}(T_1 \cap T_2)$), and let $R_4$ denote the $(v_4, r_4)$-path in $\mathcal{P}(T_1 \cap T_2)$. Then we have \begin{align*} \mathcal{P}(T_1)&= \{O_1, \dots, O_d\} \cup \{R_1\cup \{(v_3,v_4)\} \cup R_4\}\\ \mathcal{P}(T_2)&= \{O_1, \dots, O_d\} \cup \{R_1\cup \{(v_1,v_4)\} \cup R_4\}, \end{align*} where $R_1\cup \{(v_3,v_4)\} \cup R_4$ and $R_1\cup \{(v_1,v_4)\} \cup R_4$ are $(v_1,r_4)$- and $(v_3,r_4)$-paths, respectively.
Define $B_1$ and $B_2$ by \begin{align} \begin{split} B_1 &= \{(v_1,z_1)\} \cup \{(w_i,z_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w_d, r_4)\} \\ B_2 &= \{(v_3,w_1)\} \cup \{(z_i,w_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z_d, r_4)\}, \end{split}\label{addingB-odd-3} \end{align} \textcolor{black}{as illustrated in Fig. \ref{fig:odd-b-3}}. Similarly to the previous cases, we have (\ref{addingB-odd:Ham}) and (\ref{addingB-odd:vertex}). Furthermore, $B_1 \cup B_2$ is a $(v_1,v_3)$-path. \begin{figure} \centering \figoddb{5}{3} \caption{Two edge sets $B_1$ and $B_2$ for Case 3 (as illustrated in Fig. \ref{fig:odd-t}).}\label{fig:odd-b-3} \end{figure} In summary, we have the following lemma. \begin{lem}\label{lem:addingB-odd} Let $B_1$ and $B_2$ be edge sets defined as above. Then they satisfy $(\ref{addingB-odd:Ham})$ and $(\ref{addingB-odd:vertex})$, and $B_1 \cup B_2$ consists of either $(\mathrm{i})$ vertex-disjoint $(v_0,v_3)$- and $(v_1,v_4)$-paths or $(\mathrm{ii})$ vertex-disjoint $(v_0,v_1)$- and $(v_3,v_4)$-paths if $|C^*|>4$, and $(\mathrm{iii})$ a $(v_1,v_3)$-path if $|C^*|=4$. \end{lem} Similarly, $B^\prime_1$ and $B^\prime_2$ can be obtained from $T^\prime_1$ and $T^\prime_2$ as follows. Let $O^\prime_i \, (i=1,\ldots,d)$ denote vertex-disjoint $(z^\prime_i,w^\prime_i)$-paths such that $\{O^\prime_1, \ldots, O^\prime_d\} = \mathcal{P}(T^\prime_1) \cap \mathcal{P}(T^\prime_2)$, where $z^\prime_1$ and $w^\prime_1$ satisfy \begin{align} \ell(v_4, z^\prime_1) + \ell(v_2, w^\prime_1) \leq \ell(v_4, w^\prime_1) + \ell(v_2, z^\prime_1). \label{zw-ineq'} \end{align} By the same argument as before, $d \geq 1$ (i.e., $\mathcal{P}(T^\prime_1) \cap \mathcal{P}(T^\prime_2) \not = \emptyset$) holds if $n \geq 16$. We construct $B^\prime_1$ and $B^\prime_2$ by considering the following three cases. \begin{enumerate} \item $|C^*|>4$ and $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$ contains a $(v_1,v_5)$-path.
\item $|C^*|>4$ and $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$ contains no $(v_1,v_5)$-path. \item $|C^*|=4$. \end{enumerate} \textbf{Case 1}: Let $R^\prime_1$ denote a $(v_1,v_5)$-path in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$, and let $R^\prime_2 = \{(v_2,v_3),(v_3,v_4)\}$. By definition $R^\prime_2$ is a $(v_2, v_4)$-path in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$. Then we note that \begin{align*} \mathcal{P}(T^\prime_1)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2\}\\ \mathcal{P}(T^\prime_2)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_1\cup \{(v_4,v_5)\} \cup R^\prime_2\}, \end{align*} where $R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2$ and $R^\prime_1\cup \{(v_4,v_5)\} \cup R^\prime_2$ are $(v_4,v_5)$- and $(v_1,v_2)$-paths, respectively. Define $B^\prime_1$ and $B^\prime_2$ by \begin{align} \begin{split} B^\prime_1 &= \{(v_4,z^\prime_1)\} \cup \{(w^\prime_i,z^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w^\prime_d, v_5)\}\\ B^\prime_2 &= \{(v_2,w^\prime_1)\} \cup \{(z^\prime_i,w^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z^\prime_d, v_1)\}. \end{split}\label{addingB-odd'-1} \end{align} Then we have \begin{align} &T^\prime_i \cup B^\prime_i \text{ is a tour of } G \text{ for } i = 1,2, \label{addingB-odd':Ham}\\ &B^\prime_1 \cap B^\prime_2 = \emptyset \text{ and } V(B^\prime_i) = V_1(T^\prime_i) \text{ for } i = 1,2, \text{and} \label{addingB-odd':vertex} \\ \begin{split} &B^\prime_1 \cup B^\prime_2 \text{ consist of vertex-disjoint } (v_1,v_2) \text{- and } (v_4,v_5)\text{-paths if } d \text{ is even}, \\ &\quad\text{and vertex-disjoint } (v_1,v_4) \text{- and } (v_2,v_5)\text{-paths if } d \text{ is odd.} \end{split}\label{addingB-odd':path} \end{align} \textbf{Case 2}: Let $R^\prime_2 = \{(v_2,v_3), (v_3,v_4)\}$ (i.e., let $R^\prime_2$ be a $(v_2,v_4)$-path in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$).
Let $R^\prime_1$ and $R^\prime_5$ respectively denote $(v_1, r^\prime_1)$- and $(v_5,r^\prime_5)$-paths in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$. Then, we have \begin{align*} \mathcal{P}(T^\prime_1)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_5, R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2\}\\ \mathcal{P}(T^\prime_2)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_1, R^\prime_2\cup \{(v_4,v_5)\} \cup R^\prime_5\}, \end{align*} where $R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2$ and $R^\prime_2\cup \{(v_4,v_5)\} \cup R^\prime_5$ are $(v_4,r^\prime_1)$- and $(v_2,r^\prime_5)$-paths, respectively. Define $B^\prime_1$ and $B^\prime_2$ by \begin{align} \begin{split} B^\prime_1 &= \{(v_4,z^\prime_1)\} \cup \{(w^\prime_i,z^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w^\prime_d, r^\prime_5), (v_5,r^\prime_1)\} \\ B^\prime_2 &= \{(v_2,w^\prime_1)\} \cup \{(z^\prime_i,w^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z^\prime_d, r^\prime_1), (v_1,r^\prime_5)\}. \end{split}\label{addingB-odd'-2} \end{align} Similarly to Case 1, we have (\ref{addingB-odd':Ham}) and (\ref{addingB-odd':vertex}). Furthermore, $B^\prime_1 \cup B^\prime_2$ consists of vertex-disjoint $(v_1, v_4)$- and $(v_2, v_5)$-paths if $d$ is even, and vertex-disjoint $(v_1, v_2)$- and $(v_4, v_5)$-paths if $d$ is odd. \medskip \textbf{Case 3}: In this case, we have $v_5=v_1$. Let $R^\prime_2 = \{(v_2,v_3), (v_3,v_4)\}$ (i.e., let $R^\prime_2$ be a $(v_2,v_4)$-path in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$), and let $R^\prime_1$ denote the $(v_1, r^\prime_1)$-path in $\mathcal{P}(T^\prime_1 \cap T^\prime_2)$.
Then we have \begin{align*} \mathcal{P}(T^\prime_1)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2\}\\ \mathcal{P}(T^\prime_2)&= \{O^\prime_1, \dots, O^\prime_d\} \cup \{R^\prime_1\cup \{(v_1,v_4)\} \cup R^\prime_2\}, \end{align*} where $R^\prime_1\cup \{(v_1,v_2)\} \cup R^\prime_2$ and $R^\prime_1\cup \{(v_1,v_4)\} \cup R^\prime_2$ are $(v_4,r^\prime_1)$- and $(v_2,r^\prime_1)$-paths, respectively. Define $B^\prime_1$ and $B^\prime_2$ by \begin{align} \begin{split} B^\prime_1 &= \{(v_4,z^\prime_1)\} \cup \{(w^\prime_i,z^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(w^\prime_d, r^\prime_1)\} \\ B^\prime_2 &= \{(v_2,w^\prime_1)\} \cup \{(z^\prime_i,w^\prime_{i+1}) \mid i = 1, \ldots, d-1\} \cup \{(z^\prime_d, r^\prime_1)\}. \end{split}\label{addingB-odd'-3} \end{align} Similarly to the previous cases, we have (\ref{addingB-odd':Ham}) and (\ref{addingB-odd':vertex}). Furthermore, $B^\prime_1 \cup B^\prime_2$ is a $(v_2,v_4)$-path. In summary, we have the following lemma. \begin{lem}\label{lem:addingB-odd'} Let $B^\prime_1$ and $B^\prime_2$ be edge sets defined as above. Then they satisfy $(\ref{addingB-odd':Ham})$ and $(\ref{addingB-odd':vertex})$, and $B^\prime_1 \cup B^\prime_2$ consists of either $(\mathrm{i})$ vertex-disjoint $(v_1,v_4)$- and $(v_2,v_5)$-paths or $(\mathrm{ii})$ vertex-disjoint $(v_1,v_2)$- and $(v_4,v_5)$-paths if $|C^*|>4$, and $(\mathrm{iii})$ a $(v_2,v_4)$-path if $|C^*|=4$. \end{lem} \begin{figure} \centering \figexoddb \caption{Two edge sets $B_1$ and $B_2$ for path covers $T_1$ and $T_2$ in Fig. \ref{fig:exodd-st}.}\label{fig:exodd-b} \end{figure} \begin{figure} \centering \figexoddbp \caption{Two edge sets $B^\prime_1$ and $B^\prime_2$ for path covers $T^\prime_1$ and $T^\prime_2$ in Fig.
\ref{fig:exodd-st'}.}\label{fig:exodd-b'} \end{figure} \textcolor{black}{Figures \ref{fig:exodd-b} and \ref{fig:exodd-b'} show an example of edge sets $B_1$, $B_2$, $B^\prime_1$, and $B^\prime_2$ for path covers $T_1$, $T_2$, $T^\prime_1$, and $T^\prime_2$ in Figs. \ref{fig:exodd-st} and \ref{fig:exodd-st'}.} Furthermore, $A^{(\prime)}_i$ and $B^{(\prime)}_i$ ($i=1,2$) satisfy the following properties. \begin{lem}\label{lem:addingAB-odd} Let $A_1$, $A_2$, $B_1$, and $B_2$ be defined as above. Then they are all pairwise disjoint, and $C = A_1 \cup A_2 \cup B_1 \cup B_2$ consists of either one or two cycles such that $V(C) = V\setminus \{v_2\}$. Furthermore, there exists a cycle $D$ such that $V(D) = V\setminus \{v_2\}$, $\ell(D) \geq \ell(C)$ and $(q, v_3) \in D$. \end{lem} \begin{proof} It is not difficult to see that $A_1$, $A_2$, $B_1$, and $B_2$ are pairwise disjoint. Lemmas \ref{lem:4as2m-odd}, \ref{lem:addingA-odd}, and \ref{lem:addingB-odd} imply that $C=A_1 \cup A_2 \cup B_1 \cup B_2$ consists of either one or two cycles such that $V(C) = V\setminus \{v_2\}$. By $x_1=q$ and $(\ref{addingA-odd})$, we have $(q,v_3) \in C$. Thus if $C$ is a single cycle, the latter statement in the lemma holds. Assume that $C$ consists of two cycles. In this case, we can see that two edges $(v_1,z_1)$ and $(v_3,w_1)$ belong to different cycles by $(\ref{addingB-odd-1})$, $(\ref{addingB-odd-2})$, and $(\ref{addingB-odd-3})$. Let $D=(C \setminus \{(v_1,z_1), (v_3,w_1)\}) \cup \{(v_1,w_1), (v_3,z_1)\}$. Then $D$ is a cycle such that $V(D) = V\setminus \{v_2\}$. By assumption $(\ref{zw-ineq})$, we have $\ell(D)\geq\ell(C)$. Since $C$ contains $(q,v_3)$, so does $D$, which completes the proof. \end{proof} \begin{lem}\label{lem:addingAB-odd'} Let $A^\prime_1$, $A^\prime_2$, $B^\prime_1$, and $B^\prime_2$ be defined as above. 
Then they are all pairwise disjoint, and $C^\prime = A^\prime_1 \cup A^\prime_2 \cup B^\prime_1 \cup B^\prime_2$ consists of either one or two cycles such that $V(C^\prime) = V\setminus \{v_3\}$. Furthermore, there exists a cycle $D^\prime$ such that $V(D^\prime) = V\setminus \{v_3\}$, $\ell(D^\prime) \geq \ell(C^\prime)$ and $(q, v_2) \in D^\prime$. \end{lem} \begin{proof} It is not difficult to see that $A^\prime_1$, $A^\prime_2$, $B^\prime_1$, and $B^\prime_2$ are pairwise disjoint. Lemmas \ref{lem:4as2m-odd'}, \ref{lem:addingA-odd'}, and \ref{lem:addingB-odd'} imply that $C^\prime=A^\prime_1 \cup A^\prime_2 \cup B^\prime_1 \cup B^\prime_2$ consists of either one or two cycles such that $V(C^\prime) = V\setminus \{v_3\}$. By $x^\prime_1 = q$ and $(\ref{addingA-odd'})$, we have $(q, v_2) \in C^\prime$. Thus if $C^\prime$ is a single cycle, the latter statement in the lemma holds. Assume that $C^\prime$ consists of two cycles. In this case, we can see that two edges $(v_4,z^\prime_1)$ and $(v_2,w^\prime_1)$ belong to different cycles by $(\ref{addingB-odd'-1})$, $(\ref{addingB-odd'-2})$, and $(\ref{addingB-odd'-3})$. Let $D^\prime=(C^\prime \setminus \{(v_4,z^\prime_1), (v_2,w^\prime_1)\}) \cup \{(v_4,w^\prime_1), (v_2,z^\prime_1)\}$. Then $D^\prime$ is a cycle such that $V(D^\prime) = V\setminus \{v_3\}$. By assumption (\ref{zw-ineq'}), we have $\ell(D^\prime)\geq\ell(C^\prime)$. Since $C^\prime$ contains $(q,v_2)$, so does $D^\prime$, which completes the proof. \end{proof} \begin{lem}\label{lem:2tours} Let $A^{(\prime)}_i$ and $B^{(\prime)}_i$ for $i=1,2$ be defined as above. Then there exist two tours $H$ and $H^\prime$ in $G$ such that $\ell(H) + \ell(H^\prime) \geq \ell(A_1)+ \ell(A_2) + \ell(B_1)+ \ell(B_2) + \ell(A^\prime_1)+ \ell(A^\prime_2) + \ell(B^\prime_1)+ \ell(B^\prime_2) + 2\ell(v_2,v_3)$. \end{lem} \begin{proof} Let $D$ and $D^\prime$ be the cycles in Lemmas \ref{lem:addingAB-odd} and \ref{lem:addingAB-odd'}, respectively.
Then we have $V(D) = V \setminus \{v_2\}$, $V(D^\prime) = V \setminus \{v_3\}$, $(q, v_3) \in D$, and $(q, v_2) \in D^\prime$. Define $H$ and $H^\prime$ by \begin{align*} H &= (D \setminus \{(q,v_3)\}) \cup \{(q,v_2), (v_2,v_3)\}\\ H^\prime &= (D^\prime \setminus \{(q,v_2)\}) \cup \{(q,v_3), (v_2,v_3)\}. \end{align*} Then $H$ and $H^\prime$ are tours. Furthermore, we have \begin{align*} \ell(H) + \ell(H^\prime) &= \ell(D) + \ell(D^\prime) + 2\ell(v_2,v_3)\\ &\geq \ell(A_1 \cup A_2 \cup B_1 \cup B_2) + \ell(A^\prime_1 \cup A^\prime_2 \cup B^\prime_1 \cup B^\prime_2) + 2\ell(v_2,v_3), \end{align*} which completes the proof. \end{proof} We are now ready to describe our approximation algorithm, called \texttt{TourOdd}. \begin{figure}[h!t] \begin{algorithm}[H] \caption{\texttt{TourOdd}} \label{alg:TourOdd} \begin{algorithmic} \Require A complete graph $G=(V,E)$ with odd $|V|$, and an edge length function $\ell :E \to \mathbb{R}_+$. \Ensure A tour $T_{{\rm apx}}$ in $G$.\vspace{5pt} \If{$n < 17$} \State Compute an optimal tour $T_{\mathop{\rm opt}}$ of $(G,\ell)$ by exhaustive search. \State Output $T_{\mathop{\rm opt}}$ and halt. \Else \State $\mathcal{T} := \emptyset$. \For{each 4-permutation $(v_1, v_2, v_3, v_4)$ of $V$} \State Compute a minimum weighted 2-factor $S$ among those containing $\{(v_1,v_2), (v_2,v_3), (v_3,v_4)\}$. \State Compute a minimum weighted path cover $T$ among those satisfying $(v_1, v_2), (v_2, v_3) \in T$ \State \ \ \ \ and $V_1(T)=V \setminus \{v_2\}$. \State Compute a minimum weighted path cover $T^\prime$ among those satisfying $(v_2, v_3), (v_3, v_4) \in T^\prime$ \State \ \ \ \ and $V_1(T^\prime)=V \setminus \{v_3\}$. \If{$S$ is a tour} \State $\mathcal{T} := \mathcal{T} \cup \{S\}$.
\Else \State $S_1, T_1, S_2, T_2 := \texttt{FourPathCovers}(S,T)$.\vspace{1.5pt} \State Compute edge sets $A_1$, $A_2$, $B_1$, $B_2$ defined in (\ref{addingA-odd}), (\ref{addingB-odd-1}), (\ref{addingB-odd-2}), and (\ref{addingB-odd-3}).\vspace{1.5pt} \State $\mathcal{T} := \mathcal{T} \cup \{S_1 \cup A_1, S_2 \cup A_2, T_1 \cup B_1, T_2 \cup B_2 \}$.\vspace{1.5pt} \State $S^\prime_1, T^\prime_1, S^\prime_2, T^\prime_2 := \texttt{FourPathCovers}(S,T^\prime)$.\vspace{1.5pt} \State Compute edge sets $A^\prime_1$, $A^\prime_2$, $B^\prime_1$, $B^\prime_2$ defined in (\ref{addingA-odd'}), (\ref{addingB-odd'-1}), (\ref{addingB-odd'-2}), and (\ref{addingB-odd'-3}).\vspace{1.5pt} \State $\mathcal{T} := \mathcal{T} \cup \{S^\prime_1 \cup A^\prime_1, S^\prime_2 \cup A^\prime_2, T^\prime_1 \cup B^\prime_1, T^\prime_2 \cup B^\prime_2 \}$.\vspace{1.5pt} \EndIf \EndFor \State $T_{{\rm apx}} := \mathop{\rm argmin}\limits_{T \in \mathcal{T}} \ell(T)$.\vspace{1.5pt} \State Output $T_{{\rm apx}}$ and halt. \EndIf \end{algorithmic} \end{algorithm} \end{figure} Before analyzing $T_{{\rm apx}}$, let us evaluate $\ell(S)$, $\ell(T)$ and $\ell(T^\prime)$.
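The base case of \texttt{TourOdd} ($n < 17$) computes an optimal tour by exhaustive search. The following minimal Python sketch illustrates this step; the function name and the representation of $\ell$ as a symmetric distance matrix are our own choices, not part of the algorithm's specification.

```python
from itertools import permutations

def optimal_tour(ell):
    """Exhaustive search for a shortest tour, as in the n < 17 base case.

    ell is a symmetric matrix of nonnegative edge lengths.  Vertex 0 is
    fixed as the starting point, so only (n-1)! orderings are examined.
    """
    n = len(ell)
    best, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        # close the tour by returning to vertex 0
        length = sum(ell[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best:
            best, best_tour = length, tour
    return best, best_tour
```

For instance, for five points on a line with $\ell(i,j)=|i-j|$, the optimal tour has length $2\cdot(4-0)=8$. Exhaustive search is only viable here because the base case bounds $n$ by a constant.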
\end{align*} Then by the definition of $T$ and $T^\prime$, $\ell(T) \leq \ell(U)$ and $\ell(T^\prime) \leq \ell(U^\prime)$. Therefore, we have \begin{align*} \ell(T)+\ell(T^\prime) \leq \ell(U)+\ell(U^\prime) = \mathop{\rm opt}(G, \ell) + \ell(v_2,v_3). \end{align*} \end{proof} \begin{thm}\label{thm:odd-delta} For a complete graph $G=(V,E)$ with an odd number of vertices and an edge length function $\ell:E \to \mathbb{R}_+$, \textbf{Algorithm} \texttt{TourOdd} computes a $3/4$-differential approximate tour of $(G,\ell)$ in polynomial time. \end{thm} \begin{proof} If $n < 17$, \textbf{Algorithm} \texttt{TourOdd} clearly outputs an optimal tour in constant time. Otherwise (i.e., $n\geq17$), let $T_{\mathop{\rm opt}}$ be an optimal tour of $(G, \ell)$ and let $P = \{(v_1, v_2), (v_2, v_3), (v_3, v_4)\}$ be a path contained in $T_{\mathop{\rm opt}}$. For this $P$, let $S$, $T$, and $T^\prime$ be defined as above. If $S$ is a tour, then $S$ is an optimal tour of $(G,\ell)$ and is output by the algorithm, which guarantees the statement of the theorem. On the other hand, if $S$ is not a tour, then we have \begin{align*} 8\ell(T_{{\rm apx}}) &\leq \ell(S_1 \cup A_1)+\ell(S_2 \cup A_2)+\ell(T_1 \cup B_1)+\ell(T_2 \cup B_2)\\ &\quad+ \ell(S^\prime_1 \cup A^\prime_1)+\ell(S^\prime_2 \cup A^\prime_2)+\ell(T^\prime_1 \cup B^\prime_1)+\ell(T^\prime_2 \cup B^\prime_2)\\ &= 2(2\ell(S) + \ell(T) + \ell(T^\prime)) + \ell(A_1 \cup A_2 \cup B_1 \cup B_2) + \ell(A^\prime_1 \cup A^\prime_2 \cup B^\prime_1 \cup B^\prime_2)\\ &\leq 6\mathop{\rm opt}(G,\ell) + 2\mathop{\rm wor}(G,\ell), \end{align*} where the first equality follows from Lemmas \ref{lem:addingA-odd}, \ref{lem:addingA-odd'}, \ref{lem:addingB-odd}, and \ref{lem:addingB-odd'}, and the last inequality follows from Lemmas \ref{lem:2tours} and \ref{lem:odd-opt}. Thus $T_{{\rm apx}}$ is a 3/4-differential approximate tour of $(G,\ell)$. 
Note that $S$, $T$, and $T^\prime$ can be computed in polynomial time, since minimum weighted 1- and 2-factors can be computed in polynomial time. Furthermore, $A^{(\prime)}_i$ and $B^{(\prime)}_i$ for $i=1,2$ can be computed in polynomial time. Thus \textbf{Algorithm} \texttt{TourOdd} is polynomial, which completes the proof. \end{proof} \section*{Acknowledgement} This work was partially supported by the joint project of Kyoto University and Toyota Motor Corporation, titled ``Advanced Mathematical Science for Mobility Society'' and by KAKENHI. \bibliographystyle{plain}
\section{Introduction} \label{sec:1} Gas transport through long pipes is usually dominated by friction at the pipe walls. On the practically relevant time scales $t=O(1/\varepsilon)$, the governing balance equations for mass and momentum can be phrased in rescaled form as \begin{align} a \partial_{\tau} \rho + \partial_x m &= 0 \label{eq:sys1}\\ \varepsilon^2 \partial_{\tau} w + \partial_x h + \gamma |w| w &= 0. \label{eq:sys2} \end{align} Here $a$ is the cross section of the pipe, $\rho$ is the density of the gas, and $w$, $\tau$, $\gamma>0$ denote the rescaled velocity, time, and friction coefficient, respectively. Furthermore, \begin{align} m = a \rho w \qquad \text{and} \qquad h = \frac{\varepsilon^2 w^2}{2} + P'(\rho) + g z \label{eq:sys3} \end{align} are the rescaled mass flow rate and total specific enthalpy with $P(\rho)$ denoting the pressure potential, $g$ the gravity constant, and $z$ the elevation of the pipe. For convenience of the reader, a detailed derivation of the above equations, which are assumed to hold for $0 < x < \ell$ and for time $\tau \ge 0$, is given in the appendix. In rescaled variables, the physical energy of the system is given by \begin{align} \mathcal{H}(\rho,w) &= \int_0^\ell a \left( \varepsilon^2 \frac{\rho w^2}{2} + P(\rho) + g z \rho \right) \, dx. \label{eq:sys4} \end{align} Let us note that the state variables $(\rho,w)$ and the co-state variables $(h,m)$ are directly linked via the derivative of this energy functional; details will be given below. As a direct consequence of the particular problem structure, sufficiently smooth solutions of the system \eqref{eq:sys1}--\eqref{eq:sys3} can be shown to satisfy the following energy-dissipation identity \begin{align} \frac{d}{d\tau} \mathcal{H}(\rho,w) + \mathcal{D}(\rho,w) &= -m h \big|_0^\ell \label{eq:sys5} \end{align} with dissipation functional $\mathcal{D}(\rho,w) = \int_0^\ell \gamma a \rho |w|^3 \, dx \ge 0$.
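Let us briefly sketch the computation behind \eqref{eq:sys5}: by the chain rule and the definitions \eqref{eq:sys3}--\eqref{eq:sys4}, sufficiently smooth solutions satisfy
\begin{align*}
\frac{d}{d\tau} \mathcal{H}(\rho,w)
&= \int_0^\ell \big( h \, a \partial_{\tau} \rho + m \, \varepsilon^2 \partial_{\tau} w \big) \, dx
 = - \int_0^\ell \big( h \, \partial_x m + m \, \partial_x h + \gamma m |w| w \big) \, dx \\
&= - m h \big|_0^\ell - \int_0^\ell \gamma a \rho |w|^3 \, dx,
\end{align*}
where the balance equations \eqref{eq:sys1}--\eqref{eq:sys2} were used in the second step and $m = a\rho w$ in the last.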
The free energy of the system thus changes only due to friction at the pipe walls and energy transfer across the boundary. To fully describe the evolution, the problem has to be complemented by appropriate initial and boundary conditions. For instance, one may prescribe \begin{align} \label{eq:sys6} h = h_\partial \qquad \text{on } \{0,\ell\}; \end{align} alternative conditions will be discussed later. We will however mostly study properties of general solutions of \eqref{eq:sys1}--\eqref{eq:sys3} without restriction of the boundary values. \subsection*{Scope and summary of main results.} In this paper, we investigate the stability of solutions to the nonlinear system \eqref{eq:sys1}--\eqref{eq:sys3} with respect to perturbations of the initial and boundary values as well as the model parameters $\varepsilon$ and $\gamma$. Our analysis is done under the following structural assumptions: \begin{itemize} \item[(A1)] The pressure potential $P: \mathbb{R}_+ \to \mathbb{R}$ is smooth and strictly convex and there exist positive constants $\ubar \rho,\bar \rho, \bar w$ and $\bar \varepsilon$ such that \begin{align} \label{eq:ass1} \rho P''(\rho) \ge 4 \bar \varepsilon^2 |\bar w|^2 \qquad \forall \ubar \rho \le \rho \le \bar \rho. \end{align} Moreover, $0<\ubar a \le a \le \bar a$ and $|gz| \le \bar g\bar z$ for appropriate constants $\ubar a,\bar a$, and $\bar g \bar z$. \item[(A2)] Sufficiently smooth solutions $(\rho,w)$ and $(\hat \rho,\hat w)$ to the system \eqref{eq:sys1}--\eqref{eq:sys5} exist for parameters $0 \le \varepsilon, \hat \varepsilon \le \bar \varepsilon$ and $0 < \ubar \gamma \le \gamma,\hat \gamma \le \bar \gamma$ with $\ubar \gamma,\bar \gamma$ constant, which satisfy \begin{align} \label{eq:ass2} \ubar{\rho} \le \rho,\hat \rho \le \bar \rho \qquad \text{and} \qquad - \bar w \le w,\hat w \le\bar w. 
\end{align} \end{itemize} Conditions (A1) and (A2) imply uniform bounds for $P$ and its derivatives and ensure that the flow is subsonic; we refer to \cite{Dafermos2016} for details. Under these conditions, we will show the following stability result: Let $(\rho,w)$ and $(\hat \rho,\hat w)$ be sufficiently regular solutions of \eqref{eq:sys1}--\eqref{eq:sys3} with parameters $\varepsilon,\gamma$ and $\hat \varepsilon,\hat \gamma$, and boundary values $h_\partial$ and $\hat h_\partial$ as described in \eqref{eq:sys6}. Then \begin{align*} \|\rho(\tau) - \hat \rho(\tau)\|_{L^2}^2 &+ \varepsilon^2 \|w(\tau) - \hat w(\tau)\|^2_{L^2} + \int_0^\tau \|w(s)-\hat w(s)\|^3_{L^3} ds \\ &\le \hat C e^{\hat c \tau} \Big(\|\rho(0) - \hat \rho(0)\|_{L^2}^2 + \|\varepsilon w(0) - \varepsilon \hat w(0)\|_{L^2}^2 \\ & \qquad \qquad \qquad + |\gamma - \hat \gamma|^{3/2} + |\varepsilon^2 -\hat \varepsilon^2| + \int_0^\tau |h_\partial(s) - \hat h_\partial(s)| \, ds \Big), \end{align*} with constants $\hat c,\hat C$ depending only on the bounds in assumptions (A1)--(A2) and on bounds for time derivatives of $\hat \rho$ and $\hat w$. Note that the energy \eqref{eq:sys4} and dissipation functional \eqref{eq:sys5}, and as a consequence also the co-state variables \eqref{eq:sys3}, depend explicitly on the parameters $\varepsilon$ and $\gamma$, so their definition changes for the perturbed problem. A precise statement of our results and of the regularity assumption on the solutions is given in Section~\ref{sec:4}, where we also discuss various generalizations including perturbations in other model parameters and the choice of different boundary conditions.
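For later reference, let us record the system obtained by formally setting $\varepsilon=0$ in \eqref{eq:sys1}--\eqref{eq:sys3}:
\begin{align*}
a \partial_{\tau} \bar\rho + \partial_x \bar m = 0, \qquad
\partial_x \bar h + \gamma |\bar w| \bar w = 0, \qquad
\bar m = a \bar\rho \bar w, \qquad
\bar h = P'(\bar\rho) + g z,
\end{align*}
which, after elimination of $\bar w$ and $\bar h$, amounts to a nonlinear parabolic problem for the density $\bar\rho$ alone.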
Let us mention two immediate consequences of the above stability estimate, namely \begin{itemize} \item uniqueness of regular, bounded, subsonic solutions for specified initial and boundary values, and their stability with respect to perturbations in these problem data as well as the model parameters; \item convergence of solutions to those of the parabolic limit problem obtained by formally setting $\varepsilon=0$ in equations \eqref{eq:sys1}--\eqref{eq:sys6}. \end{itemize} Existence and uniqueness of solutions to the parabolic limit problem can be proven rigorously by variational arguments; see \cite{Bamberger79,Raviart70} or \cite{SchoebelKroehn2020}. This parabolic problem also serves as the basis for simulation codes utilized in the gas network community \cite{BrouwerGasserHerty2011,Osiadacz87}, and the stability estimate above allows us to obtain a quantitative justification for the use of this model. Due to the energy-based modelling, the results can be generalized almost verbatim to gas networks by utilizing appropriate coupling conditions; details will be discussed in Section~\ref{sec:5}. \subsection*{Main tools.} The proof of our main result is based on the observation that \eqref{eq:sys1}--\eqref{eq:sys3} can be written as an abstract port-Hamiltonian system \cite{VanDerSchaftMaschke2002} \begin{align} \label{eq:dhs} \mathcal{C} \partial_{\tau} {\boldsymbol u} + (\mathcal{J}+\mathcal{R}({\boldsymbol u})) {\boldsymbol z}({\boldsymbol u}) &= \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), \end{align} with ${\boldsymbol u}=(\rho,w)$ and ${\boldsymbol z}({\boldsymbol u})=(h,m)$ denoting the state and co-state variables, which are linked via the energy functional $\mathcal{H}({\boldsymbol u})=\mathcal{H}(\rho,w)$; moreover, $\mathcal{C}$, $\mathcal{J}$, $\mathcal{R}({\boldsymbol u})$, and $\mathcal{B}_\partial$ are appropriate operators, the last one incorporating the boundary conditions; see Section~\ref{sec:2}.
Any smooth function $\widehat\bu=(\hat \rho,\hat w)$, e.g., a solution of \eqref{eq:sys1}--\eqref{eq:sys3} with perturbed model parameters, may be considered as a solution of the system \begin{align} \label{eq:dhsp} \mathcal{C} \partial_{\tau} \widehat\bu + (\mathcal{J} + \mathcal{R}(\widehat\bu)) {\boldsymbol z}(\widehat\bu) &= \mathcal{B}_\partial {\boldsymbol z}(\widehat\bu) + \widehat\be, \end{align} with the same operators $\mathcal{C}$, $\mathcal{J}$, $\mathcal{R}(\cdot)$, $\mathcal{B}_\partial$, and the same energy functional $\mathcal{H}(\cdot)$ and state-to-co-state mapping ${\boldsymbol z}(\cdot)$, up to some perturbation $\widehat\be$. The fact that both problems share the same underlying Hamiltonian structure will allow us to estimate the difference between ${\boldsymbol u}$ and $\widehat\bu$ in terms of the perturbations $\widehat\be$ by means of relative energy estimates. For the stability analysis on the abstract level, we require some general conditions that will be verified in detail for the gas transport problem under investigation using the assumptions (A1)--(A2). The use of an abstract problem structure greatly simplifies the analysis and allows us to generalize our results in various directions. We briefly discuss the incorporation of perturbations in other model parameters and the extension to gas networks.
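To illustrate the mechanism behind relative energy estimates in the simplest possible setting, the following Python sketch (a toy example of our own, not the functional used below) checks that the Bregman-type relative energy associated with a strictly convex pressure potential is nonnegative and equivalent to a squared distance on a bounded density interval, assuming the ideal-gas potential $P(\rho)=c^2\rho(\log\rho-1)$.

```python
import numpy as np

# Toy relative energy for a strictly convex pressure potential P:
#   P(rho | rhohat) = P(rho) - P(rhohat) - P'(rhohat) (rho - rhohat).
# For the ideal-gas potential below, P''(rho) = c2 / rho, so on a
# bounded density interval the relative energy is squeezed between
# quadratic bounds with constants min P''/2 and max P''/2.
c2 = 1.0                                  # assumed speed of sound squared
P = lambda r: c2 * r * (np.log(r) - 1.0)  # ideal-gas pressure potential
dP = lambda r: c2 * np.log(r)             # its derivative

def rel_energy(rho, rhohat):
    return P(rho) - P(rhohat) - dP(rhohat) * (rho - rhohat)

lo, hi = 0.5, 2.0                         # density bounds as in (A2)
rho = np.linspace(lo, hi, 201)
rhohat = 1.3
H = rel_energy(rho, rhohat)

# quadratic bounds, analogous to condition (C0) below
c0, C0 = c2 / (2 * hi), c2 / (2 * lo)
assert np.all(H >= c0 * (rho - rhohat) ** 2 - 1e-12)
assert np.all(H <= C0 * (rho - rhohat) ** 2 + 1e-12)
```

The bounds follow from Taylor's theorem: the relative energy equals $\tfrac12 P''(\xi)(\rho-\hat\rho)^2$ for some intermediate $\xi$, so the extreme values of $P''$ on the admissible interval yield the two constants.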
Typical aspects that are addressed are: convergence to steady states, stable dependence of solutions on initial data and parameters, and asymptotic limits. Examples for the latter include the low Mach limit of the Euler and Navier-Stokes equations, which is investigated in \cite{FeireislNovotny2017} for instance. Long-time convergence of solutions of the damped Euler equations to Barenblatt solutions was investigated by Huang and coworkers in a series of papers \cite{GengHuang2019}. One particular aspect that we want to address in the present study is the parabolic limit of quasilinear hyperbolic equations; see \cite{MarcatiMilani1990,JuncaRascle2002,LattanzioTzavaras2013,LattanzioTzavaras2017} for some exemplary results in this direction. The latter reference as well as \cite{CarrilloPengWroblewskaKaminska2020,GiesselmannLattanzioTzavaras2017} strongly rely on the underlying dissipative Hamiltonian structure of many equations in fluid mechanics, which will also play a major role in our analysis below. The previous papers, however, use formulations in conservative variables, which makes it possible for one of the solutions to be merely a weak solution in the classical sense of hyperbolic conservation laws \cite{Dafermos2016}. Parabolic limits of hyperbolic $2\times2$ systems have also been studied in \cite{LinCoulombel2013,XuKawashima2014} using compensated compactness arguments \cite{HuangMarcatiPan2005, HuangPanWang2011}. This has the advantage that no additional regularity of solutions to the parabolic limit problem is required, but no quantitative information about the speed of convergence is obtained. Spectral estimates for the linear part of the problem are used in \cite{DuanLiuZhu2015} to derive convergence in the parabolic limit. Let us note that most of the above works consider only linear friction laws and unbounded domains or periodic boundary conditions.
In contrast to the work mentioned previously, our study is based on a formulation in primitive variables, which requires both solutions to have some minimal smoothness. On the other hand, this formulation allows us to incorporate boundary conditions more naturally and to extend our results to networks in a straightforward manner using appropriate coupling conditions at network junctions \cite{Reigstad2014,Egger2018} that guarantee energy conservation or dissipation. Similar formulations for compressible flow were also considered in the context of port-Hamiltonian systems; see \cite{VanDerSchaftMaschke2002,VanDerSchaftJeltsema2014} for the models and \cite{Cardoso2019,LiljegrenSailer2020} for corresponding discretization strategies. Other systems that fit into the general framework developed in this paper include the Euler-Korteweg system, the system of quantum hydrodynamics, and the Euler-Poisson equations. An overview of corresponding results can be found in \cite{GiesselmannLattanzioTzavaras2017}. \subsection*{Outline of the manuscript.} The remainder of the manuscript is organized as follows: Section~\ref{sec:2} is concerned with the perturbation analysis for the abstract system \eqref{eq:dhs}. In Section~\ref{sec:3}, we then briefly review the derivation of the model equations \eqref{eq:sys1}--\eqref{eq:sys3} starting from the usual balance equations of gas dynamics, and we show that they fit into the abstract form \eqref{eq:dhs}. In Section~\ref{sec:4}, we verify the assumptions required for our abstract analysis for the gas transport problem under investigation, which allows us to state and prove our main results. Their generalization to gas networks is discussed in Section~\ref{sec:5}.
\section{An abstract stability estimate} \label{sec:2} In this section, we consider abstract evolution problems of the form \begin{align} \label{eq:abs1} \mathcal{C} \partial_{\tau} {\boldsymbol u} + (\mathcal{J} + \mathcal{R}({\boldsymbol u})) \, {\boldsymbol z}({\boldsymbol u}) = \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), \\ {\boldsymbol z}({\boldsymbol u}) = \mathcal{C}^{-1} \mathcal{H}'({\boldsymbol u}), \label{eq:abs2} \end{align} with state and co-state variables ${\boldsymbol u}$ and ${\boldsymbol z}({\boldsymbol u})$ that are directly connected via the derivative $\mathcal{H}'({\boldsymbol u})$ of an associated energy functional $\mathcal{H}({\boldsymbol u})$. After briefly introducing a reasonable functional analytic setting, we derive stability estimates for solutions. \subsection{Notation and basic assumptions} Let $\mathbb{W} \subset \mathbb{V}$ be real Hilbert spaces, with $\mathbb{V}' \subset \mathbb{W}'$ denoting the corresponding dual spaces. We use $\langle \cdot,\cdot \rangle$ to denote the duality product on $\mathbb{V}' \times \mathbb{V}$ and $\mathbb{W}' \times \mathbb{W}$. Furthermore, let $\mathcal{H} : \mathbb{D} \subset \mathbb{V} \to \mathbb{R}$ be a smooth and strictly convex energy functional, with $\mathbb{D} \subset \mathbb{V}$ denoting some appropriate, e.g., convex and closed, subset. We consider abstract evolution problems of the form \eqref{eq:abs1}--\eqref{eq:abs2}, with operators satisfying the following conditions. 
\begin{assumption} \label{ass:1} $\mathcal{C}:\mathbb{V} \to \mathbb{V}'$ is linear, bounded, self-adjoint, and elliptic on $\mathbb{V}$, i.e., \begin{align} \langle \mathcal{C} {\boldsymbol u}, {\boldsymbol v} \rangle = \langle \mathcal{C} {\boldsymbol v}, {\boldsymbol u}\rangle \qquad &\forall {\boldsymbol u},{\boldsymbol v} \in \mathbb{V}, \label{eq:assC1}\\ c \|{\boldsymbol v}\|_\mathbb{V}^2 \le \langle \mathcal{C} {\boldsymbol v}, {\boldsymbol v}\rangle \le C \|{\boldsymbol v}\|_{\mathbb{V}}^2 \qquad &\forall {\boldsymbol v} \in \mathbb{V}, \label{eq:assC2} \end{align} with positive constants $c,C$ independent of ${\boldsymbol v}$. For any ${\boldsymbol u} \in \mathbb{D}$ the operator $\mathcal{R}({\boldsymbol u}) : \mathbb{W} \to \mathbb{W}'$ is linear, bounded, self-adjoint, and non-negative, i.e., \begin{alignat}{5} \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol w}, {\boldsymbol z} \rangle &= \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}, {\boldsymbol w}\rangle \qquad &&\forall {\boldsymbol w},{\boldsymbol z} \in \mathbb{W}, \label{eq:assR1}\\ \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol w}, {\boldsymbol w}\rangle &\ge 0 \qquad &&\forall {\boldsymbol w} \in \mathbb{W}.\label{eq:assR2} \end{alignat} Furthermore, $\mathcal{J} : \mathbb{W} \to \mathbb{W}'$ is linear and anti-symmetric, i.e., \begin{align} \langle \mathcal{J} {\boldsymbol w}, {\boldsymbol z} \rangle = - \langle \mathcal{J} {\boldsymbol z}, {\boldsymbol w} \rangle \qquad \forall {\boldsymbol w},{\boldsymbol z} \in \mathbb{W}, \label{eq:assJ} \end{align} and finally, the operator $\mathcal{B}_\partial: \mathbb{W} \to \mathbb{W}'$ is linear and bounded. 
\end{assumption} From conditions \eqref{eq:assC1}--\eqref{eq:assC2}, one can immediately see that \begin{align} \label{eq:scalprod} \langle {\boldsymbol u}, {\boldsymbol v} \rangle_\mathcal{C}:=\langle \mathcal{C} {\boldsymbol v}, {\boldsymbol u} \rangle \end{align} defines a scalar product on $\mathbb{V}$ and the associated norm $\|{\boldsymbol v}\|_\mathcal{C}^2 = \langle {\boldsymbol v},{\boldsymbol v}\rangle_\mathcal{C} = \langle \mathcal{C} {\boldsymbol v}, {\boldsymbol v} \rangle$ is equivalent to the standard norm $\|\cdot\|_\mathbb{V}$. The expression ${\boldsymbol z}({\boldsymbol u}) = \mathcal{C}^{-1} \H'({\boldsymbol u}) = \operatorname{grad}_\mathcal{C} \mathcal{H}({\boldsymbol u})$ then denotes the gradient of the functional $\mathcal{H}$ at ${\boldsymbol u}$ with respect to this scalar product. We further introduce the symbol \begin{align} \label{eq:hess} \mathcal{G}({\boldsymbol u}) = \mathcal{C}^{-1} \H''({\boldsymbol u}) \end{align} for the Hessian operator $\mathcal{G}({\boldsymbol u}) : \mathbb{V} \to \mathbb{V}'$, and note that \begin{align} \label{eq:hess2} \langle \mathcal{G}({\boldsymbol u}) {\boldsymbol v}, {\boldsymbol w} \rangle_\mathcal{C} = \langle \mathcal{H}''({\boldsymbol u}) {\boldsymbol v}, {\boldsymbol w} \rangle = \langle \mathcal{H}''({\boldsymbol u}) {\boldsymbol w}, {\boldsymbol v}\rangle = \langle \mathcal{G}({\boldsymbol u}) {\boldsymbol w}, {\boldsymbol v}\rangle_\mathcal{C}, \end{align} i.e., the Hessian is symmetric with respect to the scalar product induced by $\mathcal{C}$. \begin{notation} By a classical solution of \eqref{eq:abs1}--\eqref{eq:abs2} on $[0,T]$, we mean a function $$ {\boldsymbol u} \in C^1([0,T];\mathbb{V}) \cap C^0([0,T];\mathbb{D}) \quad \text{with} \quad {\boldsymbol z}( {\boldsymbol u}) \in C^0([0,T];\mathbb{W}) $$ that satisfies \eqref{eq:abs1}--\eqref{eq:abs2} for all $0 \le \tau \le T$ in the sense of $\mathbb{W}'$. 
\end{notation} \subsection{Power balance} As a direct consequence of the underlying port-Hamiltonian structure and our assumptions on the operators, we obtain the following power balance relation. \begin{lemma} Let ${\boldsymbol u}$ be a classical solution of \eqref{eq:abs1}--\eqref{eq:abs2}. Then \begin{align} \label{eq:powerbalance} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}) = -\mathcal{D}({\boldsymbol u}) + \langle \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle \end{align} with $\mathcal{D}({\boldsymbol u}) := \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle$ denoting the dissipation functional; i.e., the total energy of the system can only change via dissipation or power flowing over the ports. \end{lemma} \begin{proof} By formal computation and equations~\eqref{eq:abs1}--\eqref{eq:abs2}, we obtain \begin{align*} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}) &= \langle \mathcal{H}'({\boldsymbol u}), \partial_{\tau} {\boldsymbol u}\rangle = \langle \mathcal{C} \partial_{\tau} {\boldsymbol u}, {\boldsymbol z}({\boldsymbol u})\rangle \\ &= -\langle \mathcal{J} {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle - \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle + \langle \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle. \end{align*} The result then follows immediately from the assumptions on the operators. \end{proof} \subsection{Evolution of the relative energy} We now study the stability of solutions to \eqref{eq:abs1}--\eqref{eq:abs2} with respect to perturbations.
To this end, let $\widehat\bu$ denote a classical solution of \begin{align} \mathcal{C} \partial_{\tau} \widehat\bu + [\mathcal{J}+\mathcal{R}(\widehat\bu)] {\boldsymbol z}(\widehat\bu) &= \mathcal{B}_\partial {\boldsymbol z}(\widehat\bu) + \widehat\be, \label{eq:abs1p}\\ {\boldsymbol z}(\widehat\bu) &= \mathcal{C}^{-1} \mathcal{H}'(\widehat\bu), \label{eq:abs2p} \end{align} with appropriate perturbation described by the residual functional $\widehat\be \in C^0([0,T];\mathbb{W}')$. As a measure for the difference of ${\boldsymbol u}$ and $\widehat\bu$, we utilize the relative energy \cite{Dafermos2016}, defined by \begin{align} \label{eq:relenergy} \mathcal{H}({\boldsymbol u}|\widehat\bu) := \mathcal{H}({\boldsymbol u}) - \mathcal{H}(\widehat\bu) - \langle \H'(\widehat\bu), {\boldsymbol u}-\widehat\bu \rangle. \end{align} Using the particular problem structure and some elementary computations, we can prove the following basic identity for the temporal change of the relative energy. \begin{lemma} \label{lem:abs} Let ${\boldsymbol u}$, $\widehat\bu$ be classical solutions of \eqref{eq:abs1}--\eqref{eq:abs2} and \eqref{eq:abs1p}--\eqref{eq:abs2p}. Then \begin{align*} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}|\widehat\bu) = -\langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}) &- \mathcal{R}(\widehat\bu) {\boldsymbol z}(\widehat\bu), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle + \langle \mathcal{B}_\partial ({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle \\ & +\langle \mathcal{C} \partial_{\tau} \widehat\bu, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu)\rangle + \langle \widehat\be, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle.
\end{align*}\end{lemma} \begin{proof} By formal differentiation of the relative energy, we obtain \begin{align*} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}|\widehat\bu) &= \langle \H'({\boldsymbol u}), \partial_{\tau} {\boldsymbol u}\rangle - \langle \H'(\widehat\bu), \partial_{\tau} \widehat\bu \rangle - \langle \H'(\widehat\bu),\partial_{\tau} {\boldsymbol u} - \partial_{\tau} \widehat\bu\rangle - \langle \H''(\widehat\bu) \partial_{\tau} \widehat{\boldsymbol u}, {\boldsymbol u} - \widehat\bu\rangle \\ &= \langle \mathcal{H}'({\boldsymbol u}) - \mathcal{H}'(\widehat\bu), \partial_{\tau} {\boldsymbol u} - \partial_{\tau} \widehat\bu\rangle + \langle \mathcal{H}'({\boldsymbol u}) - \mathcal{H}'(\widehat\bu) - \mathcal{H}''(\widehat\bu) ({\boldsymbol u} - \widehat\bu), \partial_{\tau} \widehat\bu\rangle \\ &= \langle \mathcal{C} \partial_{\tau} {\boldsymbol u} - \mathcal{C} \partial_{\tau} \widehat\bu, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle + \langle \mathcal{C} \partial_{\tau} \widehat\bu, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu)\rangle. \end{align*} Here we used the symmetry of $\H''(\widehat\bu)$ in the second, and the definitions of the gradient and Hessian operators in the last step. 
We now use \eqref{eq:abs1} and \eqref{eq:abs1p} to replace the time derivatives in the first term, and arrive at \begin{align*} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}|\widehat\bu) = -\langle \mathcal{J} ({\boldsymbol z}({\boldsymbol u}) &- {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle -\langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}) - \mathcal{R}(\widehat\bu) {\boldsymbol z}(\widehat\bu), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle \\ & + \langle \mathcal{B}_\partial ({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle + \langle \widehat\be, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle \\ & +\langle \mathcal{C} \partial_{\tau} \widehat\bu, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu)\rangle. \end{align*} Due to the anti-symmetry property \eqref{eq:assJ} of the operator $\mathcal{J}$, the first term vanishes, and we already obtain the assertion of the lemma. \end{proof} \subsection{An abstract stability result} \label{sec:abs_st} We now derive quantitative estimates for the difference of the solutions ${\boldsymbol u}$ and $\widehat\bu$ to \eqref{eq:abs1}--\eqref{eq:abs2} and \eqref{eq:abs1p}--\eqref{eq:abs2p} with respect to perturbations in the right hand side and the initial and boundary values. To do so, we make some abstract assumptions that will later be verified for the gas transport problem under consideration. 
\begin{assumption} \label{ass:2} There exist constants $\hat c_0=\hat c_0(\mathbb{D})>0$, $ \hat C_0=\hat C_0(\mathbb{D})>0$ such that \begin{align} \label{eq:norm} \hat c_0 \|{\boldsymbol u} - \widehat\bu\|_{\mathcal{C}}^2 \le \mathcal{H}({\boldsymbol u}|\widehat\bu) \le \hat C_0 \|{\boldsymbol u}-\widehat\bu\|_{\mathcal{C}}^2 \qquad \text{for all } {\boldsymbol u}, \widehat\bu \in \mathbb{D} \subset \mathbb{V}. \tag{C0} \end{align} Moreover, there exists a \emph{relative dissipation} functional $\mathcal{D}(\cdot|\cdot): \mathbb{D} \times \mathbb{D} \rightarrow [0,\infty)$ and perturbation functionals $\mathcal{P}(\cdot): \mathbb{W}' \to \mathbb{R}$ and $\mathcal{P}_\partial(\cdot): \mathbb{W} \to \mathbb{R}$ such that \begin{align} -\langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}) - \mathcal{R}(\widehat\bu) {\boldsymbol z}(\widehat\bu), {\boldsymbol z}({\boldsymbol u})-{\boldsymbol z}(\widehat\bu) \rangle & \le \hat C_1 \mathcal{H}({\boldsymbol u}|\widehat\bu) - 2 \mathcal{D}({\boldsymbol u}|\widehat\bu), \label{eq:term1} \tag{C1}\\ \langle \mathcal{C} \partial_{\tau} \widehat\bu, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu)\rangle &\le \hat C_2 \mathcal{H}({\boldsymbol u}|\widehat\bu), \label{eq:term2} \tag{C2} \\ \langle \widehat\be, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle &\le \hat C_3 \mathcal{H}({\boldsymbol u}|\widehat\bu) + \mathcal{D}({\boldsymbol u}|\widehat\bu) + \mathcal{P}(\widehat\be), \label{eq:term3} \tag{C3} \\ \langle \mathcal{B}_\partial ({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle &\le \mathcal{P}_\partial({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)), \label{eq:term4} \tag{C4} \end{align} for classical solutions ${\boldsymbol u},\widehat\bu$ of \eqref{eq:abs1}--\eqref{eq:abs2} and 
\eqref{eq:abs1p}--\eqref{eq:abs2p} with positive constants $\hat C_i$, which may depend on the set $\mathbb{D}$, the constants $\hat c_0$, $\hat C_0$, and the solution $\widehat\bu$ and its derivatives, but not on ${\boldsymbol u}$. \end{assumption} Together with the relative energy identity stated in the previous lemma, these abstract conditions immediately lead to the following stability estimate. \begin{lemma}\label{lem:abs_st} Let Assumptions~\ref{ass:1} and \ref{ass:2} hold. Then any pair of classical solutions ${\boldsymbol u}$ and $\widehat\bu$ to the evolution equations \eqref{eq:abs1}--\eqref{eq:abs2} and \eqref{eq:abs1p}--\eqref{eq:abs2p} satisfies \begin{align*} \hat c_0 \|{\boldsymbol u}(\tau) &- \widehat\bu(\tau)\|_{\mathcal{C}}^2 + \int_0^\tau e^{\hat c (\tau-\sigma)}\mathcal{D} ({\boldsymbol u}|\widehat\bu) d\sigma \\ &\le \hat C_0 e^{\hat c \tau} \|{\boldsymbol u}(0) - \widehat\bu(0)\|^2_{\mathcal{C}} + \int_0^\tau e^{\hat c (\tau - \sigma)} \left[\mathcal{P}(\widehat\be(\sigma)) + \mathcal{P}_\partial({\boldsymbol z}({\boldsymbol u}(\sigma)) - {\boldsymbol z}(\widehat\bu(\sigma))) \right] \, d\sigma, \end{align*} with constants $\hat c = \hat C_1+\hat C_2 + \hat C_3$ and $\hat c_0,\hat C_0$ obtained from Assumption~\ref{ass:2}. \end{lemma} \begin{proof} From Lemma~\ref{lem:abs} and Assumption~\ref{ass:2}, we immediately obtain \begin{align*} \frac{d}{d\tau} \mathcal{H}({\boldsymbol u}|\widehat\bu) \le - \mathcal{D}({\boldsymbol u}|\widehat\bu) + (\hat C_1 + \hat C_2 + \hat C_3) \mathcal{H}({\boldsymbol u}|\widehat\bu) + \mathcal{P}(\widehat\be) + \mathcal{P}_\partial({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)). \end{align*} The assertion then follows by application of the Gronwall lemma \cite[Ch.~29]{Wloka}, the definition of $\hat c$, and the estimates for the relative energy in condition \eqref{eq:norm}.
\end{proof} \section{Application to gas networks} \label{sec:3} We now return to the gas transport problem stated in the introduction and show that it fits into the abstract framework discussed in the previous section. In addition, we collect some auxiliary results that will be useful for the stability analysis of the next section. \subsection{Variational formulation and canonical form} We may multiply \eqref{eq:sys1}--\eqref{eq:sys2} by appropriate test functions $q$, $r$, integrate over the spatial domain, and use integration-by-parts in the second equation to obtain \begin{align} (a \partial_{\tau} \rho, q) + (\partial_x m, q) &= 0, \label{eq:var1}\\ (\varepsilon^2 \partial_{\tau} w, r) - (h, \partial_x r) + (\gamma \frac{|w|}{a\rho} m, r) &= - h r\big|_{0}^\ell, \label{eq:var2} \end{align} where $(f,g) := \int_0^\ell f(x) g(x) \, dx$ is used to abbreviate the $L^2$-scalar product. Note that the boundary conditions \eqref{eq:sys6} for $h$ can be incorporated naturally in the last term of \eqref{eq:var2}. The two variational identities \eqref{eq:var1}--\eqref{eq:var2} hold for all relevant times $t>0$ and for all smooth test functions $q$, $r$, independent of time; in particular, they are satisfied by all smooth solutions of \eqref{eq:sys1}--\eqref{eq:sys3}.
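For later reference, we note that the boundary term in \eqref{eq:var2} stems from the integration-by-parts formula
\begin{align*}
(\partial_x h, r) = h r \big|_0^\ell - (h, \partial_x r),
\end{align*}
applied to the term carrying the spatial derivative of the total specific enthalpy $h$ in the momentum balance \eqref{eq:sys2}.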
In compact notation, the system \eqref{eq:var1}--\eqref{eq:var2} can be stated as \begin{align} \label{eq:dhvar} \langle \mathcal{C} \partial_{\tau} {\boldsymbol u}, {\boldsymbol v} \rangle + \langle \mathcal{J} {\boldsymbol z}({\boldsymbol u}), {\boldsymbol v} \rangle + \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}), {\boldsymbol v} \rangle &= \langle \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), {\boldsymbol v}\rangle, \end{align} with state variable ${\boldsymbol u}=(\rho,w)$, co-state variable ${\boldsymbol z}({\boldsymbol u}) = (h,m)$ defined by \eqref{eq:sys3}, time-independent test function ${\boldsymbol v}=(q,r)$, and operators $\mathcal{C}$, $\mathcal{J}$, $\mathcal{R}({\boldsymbol u})$, and $\mathcal{B}_\partial$ given by \begin{align*} \langle \mathcal{C} {\boldsymbol u}, {\boldsymbol v}\rangle &= (a \rho, q) + (\varepsilon^2 w, r), & \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}, {\boldsymbol v} \rangle &= (\gamma \frac{|w|}{a\rho} m, r), \\ \langle \mathcal{J} {\boldsymbol z}, {\boldsymbol v}\rangle &= (\partial_x m,q) - (h, \partial_x r), & \langle \mathcal{B}_\partial {\boldsymbol z}, {\boldsymbol v}\rangle &= - h r\big|_0^\ell. \end{align*} From these variational characterizations, it is not difficult to see that $\mathcal{J}$ is anti-symmetric and that $\mathcal{C}$ and $\mathcal{R}({\boldsymbol u})$ are symmetric and positive semi-definite, provided that the parameters $a$, $\varepsilon^2$, $\gamma$ and the density $\rho$ are positive. The operator $\mathcal{B}_\partial$ associated with the boundary terms has no particular structure, except that it is supported only at the boundary. \begin{remark} Equation \eqref{eq:dhvar} is the variational form of an abstract evolution problem \eqref{eq:abs1} with state and co-state variables ${\boldsymbol u}$ and ${\boldsymbol z}({\boldsymbol u})$, and energy functional $\mathcal{H}({\boldsymbol u})=\mathcal{H}(\rho,w)$.
Problem \eqref{eq:sys1}--\eqref{eq:sys3} thus corresponds to an abstract port-Hamiltonian system \eqref{eq:abs1}--\eqref{eq:abs2}. \end{remark} \subsection{Auxiliary results} A quick inspection of the above derivations shows that the operators $\mathcal{C}$, $\mathcal{R}({\boldsymbol u})$, $\mathcal{J}$, and $\mathcal{B}_\partial$ can be formally identified with \begin{align*} \mathcal{C} = \begin{pmatrix} a & 0 \\ 0 & \varepsilon^2 \end{pmatrix}, \qquad \mathcal{R}({\boldsymbol u}) = \begin{pmatrix} 0 & 0 \\ 0 & \frac{\gamma |w|}{a \rho}\end{pmatrix}, \qquad \text{and} \qquad \mathcal{J} -\mathcal{B}_\partial = \begin{pmatrix} 0 & \partial_x \\ \partial_x & 0 \end{pmatrix}. \end{align*} The latter identity follows rigorously by reversing the order of arguments in the derivation of the weak formulation. The energy of the system is here given by \begin{align*} \mathcal{H}({\boldsymbol u}) = \int_0^\ell a (\varepsilon^2 \rho \frac{w^2}{2} + P(\rho) + \rho g z ) dx, \end{align*} and by elementary computations, we obtain the formulas \begin{align*} {\boldsymbol z}({\boldsymbol u}) = \begin{pmatrix} \varepsilon^2 \frac{w^2}{2} + P'(\rho) + g z \\ a \rho w \end{pmatrix} \qquad \text{and} \qquad \mathcal{G}({\boldsymbol u}) = \begin{pmatrix} P''(\rho) & \varepsilon^2 w \\ a w & a \rho \end{pmatrix} \end{align*} for the gradient ${\boldsymbol z}({\boldsymbol u}) = \mathcal{C}^{-1} \mathcal{H}'({\boldsymbol u})$ and Hessian $\mathcal{G}({\boldsymbol u}) = \mathcal{C}^{-1} \mathcal{H}''({\boldsymbol u})$ of the energy functional. \begin{remark} Let us emphasize that the operators $\mathcal{C}$ and $\mathcal{R}(\cdot)$, the energy functional $\mathcal{H}(\cdot)$, and thus the functions ${\boldsymbol z}(\cdot)$, $\mathcal{G}(\cdot)$ explicitly depend on the model parameters $\varepsilon$ and $\gamma$.
\end{remark} \subsection{Functional analytic setting}\label{sec:funcana} As a next step, we briefly discuss the choice of suitable function spaces for the gas transport problem under consideration. We define \begin{align} \label{eq:spaces} \mathbb{V} := L^2(0,\ell) \times L^2(0,\ell) \qquad \text{and} \qquad \mathbb{W} :=H^1(0,\ell) \times H^1(0,\ell), \end{align} where $H^1(0,\ell)$ denotes the standard Sobolev space on the interval $(0,\ell)$. We further introduce the set of admissible states \begin{align*} \mathbb{D} = \{(\rho,w) \in \mathbb{V} : \text{(A1)--(A2) are satisfied}\}. \end{align*} Let us recall that classical solutions of \eqref{eq:sys1}--\eqref{eq:sys3} satisfy ${\boldsymbol u}=(\rho,w) \in C^1([0,T];\mathbb{V})\cap C^0([0,T];\mathbb{D})$ and ${\boldsymbol z}({\boldsymbol u}) = (h,m) \in C^0([0,T];\mathbb{W})$. Then $\mathcal{C}$ and $\mathcal{R}({\boldsymbol u})$ with ${\boldsymbol u} \in \mathbb{D}$ can be understood as self-adjoint and positive semi-definite bounded linear operators mapping from $\mathbb{V}$ or $\mathbb{W}$ to the dual spaces $\mathbb{V}'$ or $\mathbb{W}'$. For $\varepsilon$, $a$ uniformly positive and bounded, $\mathcal{C}$ induces a norm \begin{align} \label{eq:normC} \|{\boldsymbol u}\|_\mathcal{C}^2 = \|\sqrt{a} \rho\|^2_{L^2(0,\ell)} + \|\varepsilon w\|^2_{L^2(0,\ell)}, \end{align} which is equivalent to the standard norm on $\mathbb{V} = L^2(0,\ell) \times L^2(0,\ell)$. Adopting the previous notation, we write $\langle \cdot, \cdot\rangle$ for the duality products on $\mathbb{V}' \times \mathbb{V}$ and $\mathbb{W}' \times \mathbb{W}$, respectively, and use $(f,g)=\int_0^\ell f(x) g(x) \, dx$ to denote the scalar product of $L^2(0,\ell)$.
From the variational definition of the operator $\mathcal{J}$, one can see that \begin{align} \label{eq:skew} \langle \mathcal{J} {\boldsymbol w}, \widetilde {\boldsymbol w} \rangle = & (\partial_x m, \tilde h) - (h, \partial_x \tilde m) = -\langle \mathcal{J} \widetilde {\boldsymbol w}, {\boldsymbol w}\rangle, \end{align} for all ${\boldsymbol w}=(h,m)$, $\widetilde {\boldsymbol w}=(\tilde h, \tilde m) \in \mathbb{W}$, i.e., $\mathcal{J}$ is skew-symmetric on $\mathbb{W}$. The formula $\langle \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), {\boldsymbol v}\rangle = - h r\big|_0^\ell$ with ${\boldsymbol v}=(q,r)$ finally shows that the boundary operator $\mathcal{B}_\partial$ acts on the co-state variables. Note that the required boundary values are well-defined for functions ${\boldsymbol z}({\boldsymbol u}) = (h,m)$ and ${\boldsymbol v}=(q,r) \in \mathbb{W} = H^1(0,\ell) \times H^1(0,\ell)$. In summary, we thus have shown that \eqref{eq:sys1}--\eqref{eq:sys3} can be interpreted as an abstract port-Hamiltonian system \eqref{eq:abs1}--\eqref{eq:abs2}, and that, under conditions (A1)--(A2), Assumption~\ref{ass:1} is also valid. \section{Stability analysis and parabolic limit} \label{sec:4} In the following, we verify the conditions of Assumption~\ref{ass:2} for the gas transport problem \eqref{eq:sys1}--\eqref{eq:sys3}, and then utilize the abstract stability results to prove stability of solutions with respect to perturbations in the model parameters as well as in the initial and boundary values. As a case of particular interest, we will study convergence in the parabolic limit $\varepsilon \to 0$.
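Before verifying the conditions of Assumption~\ref{ass:2} in detail, let us briefly record the energy identity underlying our analysis: inserting the test function ${\boldsymbol v}={\boldsymbol z}({\boldsymbol u}(\tau))$ into \eqref{eq:dhvar} at a fixed time $\tau$, and using $\frac{d}{d\tau}\mathcal{H}({\boldsymbol u}) = \langle \mathcal{H}'({\boldsymbol u}), \partial_{\tau} {\boldsymbol u}\rangle = \langle \mathcal{C} \partial_{\tau} {\boldsymbol u}, {\boldsymbol z}({\boldsymbol u})\rangle$ together with the skew-symmetry \eqref{eq:skew}, one finds
\begin{align*}
\frac{d}{d\tau}\mathcal{H}({\boldsymbol u}) = -\langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle + \langle \mathcal{B}_\partial {\boldsymbol z}({\boldsymbol u}), {\boldsymbol z}({\boldsymbol u})\rangle,
\end{align*}
i.e., energy is dissipated by friction and exchanged with the exterior only through the boundary ports. The relative energy estimates derived below can be understood as a perturbed version of this identity.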
\subsection{Perturbed problem} Let $(\hat \rho,\hat w)$ denote a solution to the perturbed equations \begin{align} a \partial_{\tau} \hat \rho + \partial_x (a \hat \rho \hat w) &= 0, \label{eq:pert1} \\ \hat \varepsilon^2 \partial_{\tau} \hat w + \partial_x ( P'(\hat \rho) + \hat \varepsilon^2 \frac{\hat w^2}{2} + g z) + \hat \gamma \frac{|\hat w|}{ a\hat \rho} \hat m &= 0, \label{eq:pert2} \end{align} which are again assumed to hold for all $0 < x < \ell$ and all times $t>0$. The terms carrying spatial derivatives are the corresponding co-state variables \begin{align} \label{eq:pert3} \hat h(\hat \rho,\hat w) = \hat \varepsilon^2\frac{\hat w^2}{2} + P'(\hat \rho) + g z \qquad \text{and} \qquad \hat m(\hat \rho,\hat w) = a \hat \rho \hat w, \end{align} which are again directly related to the derivatives of the corresponding energy functional $\widehat{\mathcal{H}}(\hat \rho,\hat w) = \int_0^\ell a (\hat \varepsilon^2 \hat \rho \frac{\hat w^2}{2} + P(\hat \rho) + \hat \rho g z ) \, dx$. By some elementary manipulations and the same arguments as employed above, this system can again be written in the abstract form \begin{align} \mathcal{C} \partial_{\tau} \widehat\bu + (\mathcal{J} + \mathcal{R}(\widehat\bu)) {\boldsymbol z}(\widehat\bu) &= \mathcal{B}_\partial {\boldsymbol z}(\widehat\bu) + \widehat\be \\ {\boldsymbol z}(\widehat\bu) &= \mathcal{C}^{-1} \mathcal{H}'(\widehat\bu), \end{align} with the functionals $\mathcal{H}(\cdot)$ and ${\boldsymbol z}(\cdot)$, and the operators $\mathcal{C}$, $\mathcal{J}$, and $\mathcal{R}(\cdot)$ denoting those for the unperturbed problem with parameters $\varepsilon$ and $\gamma$, and residual $\widehat\be=(\widehat\be_1,\widehat\be_2)$ given by % \begin{align} \label{eq:beh} \widehat\be_1 = 0 \qquad \text{and} \qquad \widehat\be_2 = (\varepsilon^2- \hat\varepsilon^2) (\partial_{\tau} \hat w + \tfrac{1}{2} \partial_x |\hat w|^2) + (\gamma - \hat \gamma) |\hat w| \hat w.
\end{align} In the following subsection, we verify the abstract assumptions required for our stability analysis, without explicitly taking into account the special form of $\widehat\be_1$ and $\widehat\be_2$. \subsection{Verification of conditions (C0)--(C4)} \label{sec:aux} We always assume in the following that assumptions (A1)--(A2) are valid. Constants arising in the estimates may depend on the bounds in these conditions. Since we later consider the case $\varepsilon \to 0$, we will make explicit the dependence of the constants on this scaling parameter. \subsection*{Condition (C0)} From Taylor's formula, we know that \begin{align*} f({\boldsymbol u}|\widehat\bu) &= f({\boldsymbol u}) - f(\widehat\bu) - f'(\widehat\bu) ({\boldsymbol u}-\widehat\bu) \\ &= \int_0^1 (1-s) \langle f''(\widehat\bu + s ({\boldsymbol u}-\widehat\bu)) ({\boldsymbol u}-\widehat\bu), {\boldsymbol u}-\widehat\bu \rangle \, ds. \end{align*} Now let $f({\boldsymbol u}) = a(\varepsilon^2 \rho \frac{|w|^2}{2} + P(\rho))$ denote the integrand of the energy functional defined above; the linear term $a \rho g z$ drops out of the relative energy and can therefore be omitted. The Hessian of $f$ is given by \begin{align*} f''(\rho,w) = a \begin{pmatrix} P''(\rho) & \varepsilon^2 w \\ \varepsilon^2 w & \varepsilon^2 \rho \end{pmatrix}. \end{align*} By multiplying with ${\boldsymbol v}=(x,y)$ from the left and right, one can see that \begin{align*} \frac{1}{a} \langle f''(\rho,w) {\boldsymbol v}, {\boldsymbol v}\rangle &= P''(\rho) x^2 + 2 \varepsilon^2 w x y + \varepsilon^2 \rho y^2 \\ &= P''(\rho) x^2 + 2 \varepsilon w x (\varepsilon y) + \rho (\varepsilon y)^2. \end{align*} The second term in the second line can be estimated by \begin{align*} |2 \varepsilon w x (\varepsilon y) | &\le 2 \frac{\varepsilon^2 w^2}{\rho} x^2 + \frac{1}{2} \rho (\varepsilon y)^2.
\end{align*} From condition \eqref{eq:ass1}, one can see that $2 \frac{\varepsilon^2 w^2}{\rho} \le P''(\rho)/2$, which leads to the lower bound \begin{align*} \langle f''(\rho,w) {\boldsymbol v}, {\boldsymbol v}\rangle \ge \frac{a}{2} \left( P''(\rho) x^2 + \rho |\varepsilon y|^2 \right), \\ \intertext{and the corresponding upper bound} \langle f''(\rho,w) {\boldsymbol v}, {\boldsymbol v}\rangle \le \frac{3a}{2} \left( P''(\rho) x^2 + \rho | \varepsilon y|^2 \right). \end{align*} Using the uniform bounds for $\rho$, $P''(\rho)$, and $a$, we arrive at the following result. \begin{lemma} \label{lem:C0} Let (A1)--(A2) hold. Then \begin{align*} \hat c_0 \|{\boldsymbol u} - \widehat\bu\|_\mathcal{C}^2 \le \mathcal{H}({\boldsymbol u}|\widehat\bu) \le \hat C_0 \|{\boldsymbol u} - \widehat\bu\|_\mathcal{C}^2 \end{align*} with positive constants $\hat c_0,\hat C_0$ only depending on the bounds on $\rho$ and $P''(\rho)$ in (A1)--(A2). \end{lemma} \subsection*{Condition (C1)} Using ${\boldsymbol u}=(\rho,w)$ and $\widehat\bu=(\hat \rho,\hat w)$, and the definition of $\mathcal{R}({\boldsymbol u})$ for our particular problem, the left hand side of \eqref{eq:term1} can be expressed as \begin{align*} -\langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}({\boldsymbol u}) &- \mathcal{R}(\widehat\bu) {\boldsymbol z}(\widehat\bu), {\boldsymbol z}({\boldsymbol u}) -{\boldsymbol z}(\widehat\bu)\rangle \\ &= - \int_0^\ell \gamma \, (|w| w - |\hat w| \hat w) \, a \, (\rho w - \hat \rho \hat w) \, dx =: (*). \end{align*} The first term in the integrand can be written as $|w| w - |\hat w| \hat w = 2 \int_0^1 |\hat w+s(w-\hat w)| \, ds \, (w - \hat w)$, and one can observe that $\frac{|w|+|\hat w|}{4} \le \int_0^1 |\hat w + s (w-\hat w)| \, ds \le \frac{|w|+|\hat w|}{2}$.
The second term in the integrand can be expanded as $\rho w - \hat \rho \hat w = \hat \rho (w - \hat w) + (\rho - \hat \rho) w $, which leads to \begin{align*} (|w| w - |\hat w| \hat w) \, (\rho w - \hat \rho \hat w) &\ge \hat \rho \frac{|w|+|\hat w|}{2} |w-\hat w|^2 - (|w|+|\hat w|) \, |w-\hat w| \, |w| \, |\rho - \hat \rho| \\ &\ge \hat \rho \frac{|w|+|\hat w|}{4} |w - \hat w|^2 - (|w|+|\hat w|) \frac{|w|^2}{\hat \rho} |\rho - \hat \rho|^2. \end{align*} In summary, we thus arrive at the estimate \begin{align*} (*) \le - \frac{1}{4} \int_0^\ell \gamma a \hat \rho (|w|+|\hat w|) |w-\hat w|^2 dx + \int_0^\ell \gamma \frac{|w|^2}{\hat \rho} (|w| + |\hat w|) a |\rho - \hat \rho|^2 dx. \end{align*} The uniform bounds for $\hat \rho$, $w$, $\hat w$, $\gamma$, $a$ and condition \eqref{eq:norm} then lead to the following result. \begin{lemma} \label{lem:C1} Let assumptions (A1)--(A2) be valid. Then condition \eqref{eq:term1} holds with \begin{align} \label{eq:reldiss3} \mathcal{D} ({\boldsymbol u}|\widehat\bu) = \frac{1}{8} \int_0^\ell \gamma a \hat \rho (|w| + |\hat w|) (w- \hat w)^2 \, dx, \end{align} where ${\boldsymbol u}=(\rho,w)$ and $\widehat\bu=(\hat \rho,\hat w)$, and with constant $\hat C_1=2 \bar \gamma \bar w^{3} \ubar \rho^{-1} \hat c_0^{-1}$. \end{lemma} \begin{remark} By the elementary fact that $|w|+|\hat w| \ge |w-\hat w|$, one can see that \begin{align} \label{eq:reldiss4} \mathcal{D} ({\boldsymbol u}|\widehat\bu) \ge c_D \|w-\hat w\|_{L^3}^3, \end{align} with positive constant $c_D=\frac{\ubar \gamma \ubar a \ubar \rho}{8}$. Thus, the relative dissipation $\mathcal{D}({\boldsymbol u}|\widehat\bu)$ provides control over the velocity perturbation even in the case $\varepsilon \to 0$, where the velocity contribution to the relative energy $\mathcal{H}({\boldsymbol u}|\widehat\bu)$ disappears. This will later be used in our stability analysis.
\end{remark} \subsection*{Condition (C2)} Let us start by noting that \begin{align}\label{eq:zzG} {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu) = \begin{pmatrix} P'(\rho|\hat \rho) + \varepsilon^2 (w - \hat w)^2/2\\ a (\rho - \widehat \rho) (w - \widehat w) \end{pmatrix}, \end{align} which follows directly from the definitions of the gradient ${\boldsymbol z}({\boldsymbol u})$ and the Hessian $\mathcal{G}({\boldsymbol u})$ for the problem under investigation. By assumptions (A1)--(A2), the pressure potential $P$ is smooth and $\rho,\hat \rho$ are uniformly bounded, and consequently \begin{align*} P'(\rho|\hat \rho) \le C a |\rho - \hat \rho|^2, \end{align*} with some constant $C$ only depending on the bounds of the coefficients, the density, and the pressure potential. The second line in \eqref{eq:zzG} can be estimated by \begin{align*} a (\rho - \widehat \rho) (w - \widehat w) \le \frac{1}{\varepsilon} ( a^2 (\rho - \hat \rho)^2 + \varepsilon^2 (w - \hat w)^2 ) \end{align*} via Young's inequality. The left hand side of \eqref{eq:term2} can then be bounded by \begin{align*} \langle \mathcal{C} \partial_{\tau} \widehat\bu, &{\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) - \mathcal{G}(\widehat\bu) ({\boldsymbol u} - \widehat\bu)\rangle \\ &= \int_0^\ell a \partial_{\tau} \hat \rho \big(P'(\rho|\hat \rho) + \varepsilon^2 (w-\hat w)^2/2 \big)dx + \int_0^\ell \varepsilon^2 \partial_{\tau} \hat w a (\rho - \hat \rho) (w - \hat w) dx \\ &\le (\|a \partial_{\tau} \hat \rho\|_{L^\infty} + \|\varepsilon \partial_{\tau} \hat w\|_{L^\infty}) \int_0^\ell \big( (C+a) \, a |\rho - \hat \rho|^2 + \tfrac{3}{2} \varepsilon^2 |w - \hat w|^2 \big) \, dx. \end{align*} Together with the bounds \eqref{eq:norm}, we thus obtain the following result. \begin{lemma} \label{lem:C2} Let (A1)--(A2) hold.
Then \eqref{eq:term2} is valid with $\hat C_2 = C (\|\partial_{\tau} \hat \rho\|_{L^\infty} + \|\varepsilon \partial_{\tau} \hat w\|_{L^\infty})$ and constant $C \ge 0$ depending only on the bounds in the assumptions. \end{lemma} \subsection*{Condition (C3)} We start by expanding \begin{align*} \langle \widehat\be, {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle &= \int_0^\ell \widehat\be_1 \left(\frac{\varepsilon^2}{2} (|w|^2-|\hat w|^2) + P'(\rho) - P'(\hat \rho) \right) dx \\ & \qquad \qquad \qquad + \int_0^\ell \widehat\be_2 a (\rho w - \hat \rho \hat w) \, dx = (i)+(ii). \end{align*} Using the uniform bounds in assumption (A2) and smoothness of $P(\cdot)$, we obtain \begin{align*} (i) &\le (\bar w \varepsilon^2 \|w - \hat w\|_{L^2} + C_{P''} \|\rho - \hat \rho\|_{L^2}) \|\widehat\be_1\|_{L^2} \\ &\le \varepsilon^2 \|w-\hat w\|^2_{L^2} + \frac{1}{2}\|\sqrt{a}(\rho - \hat \rho)\|^2_{L^2} + C_1 \|\widehat\be_1\|^2_{L^2}, \end{align*} where we applied Young's inequality in the last step. Condition \eqref{eq:norm} then allows us to bound the first two terms by the relative energy. For the second term, we obtain \begin{align*} (ii) &= \int_0^\ell \widehat\be_2 a ( (\rho - \hat \rho) w +\hat \rho (w-\hat w)) dx \\ &\le \bar w \sqrt{\bar a} \|\widehat\be_2\|_{L^2} \|\sqrt{a}(\rho - \hat \rho)\|_{L^2} + \bar a \bar \rho \| \widehat\be_2\|_{L^{3/2}} \|(w - \hat w)\|_{L^3} \\ &\le \frac{1}{2}\|\sqrt{a}(\rho - \hat \rho)\|^2_{L^2} + C_2 \|\widehat\be_2\|_{L^2}^2 + \delta \|w-\hat w\|_{L^3}^3 + C(\delta) \|\widehat\be_2\|_{L^{3/2}}^{3/2}. \end{align*} Choosing $\delta=c_D$ and using \eqref{eq:reldiss4} allows us to bound the third term in the last line by the relative dissipation. In summary, we then obtain the following result. \begin{lemma} \label{lem:C3} Let (A1)--(A2) be valid.
Then condition \eqref{eq:term3} holds with $\hat C_3 = \hat c_0^{-1}$ and \begin{align*} \mathcal{P}(\widehat\be) = C_1 \|\widehat\be_1\|_{L^2}^2 + C_2 \|\widehat\be_2\|_{L^2}^2 + C_3 \|\widehat\be_2\|_{L^{3/2}}^{3/2}, \end{align*} with constants $C_1$, $C_2$, and $C_3$ only depending on the bounds in (A1)--(A2). \end{lemma} Using the specific form of the residual given in \eqref{eq:beh}, we may further estimate the perturbation functional $\mathcal{P}(\widehat\be)$ as follows. \begin{corollary} \label{cor:C3} Let the conditions of Lemma~\ref{lem:C3} be valid and $\widehat\be$ be defined as in \eqref{eq:beh}. Then \begin{align*} \mathcal{P}(\widehat\be) \le \hat C_3' |\varepsilon^2 - \hat \varepsilon^2|^{3/2} + \hat C_3'' |\gamma - \hat \gamma|^{3/2}, \end{align*} with constants $\hat C_3'$, $\hat C_3''$ only depending on the bounds in assumptions (A1)--(A2) as well as on $\|\partial_{\tau} \hat w\|_{L^\infty(0,T;L^2)}$ and $\|\partial_x \hat w\|_{L^\infty(0,T;L^2)}$. \end{corollary} \subsection*{Condition (C4)} Using the definition of the state and co-state variables, as well as the variational characterization of the boundary operator, we immediately obtain \begin{align*} \langle \mathcal{B}_\partial ({\boldsymbol z}({\boldsymbol u}) &- {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu) \rangle \\ &= - (h(\rho,w) - h(\hat \rho,\hat w)) (m(\rho,w) - m(\hat \rho,\hat w))\big|_0^\ell \\ &\le |h(\rho,w) - h(\hat \rho,\hat w)|_\partial |m(\rho,w) - m(\hat \rho,\hat w)|_\partial, \end{align*} with $h(\tilde \rho,\tilde w) = \varepsilon^2 \frac{\tilde w^2}{2} + P'(\tilde \rho)$ and $m(\tilde \rho,\tilde w)=a\tilde \rho \tilde w$ denoting the co-state mappings of the unperturbed problem, and $|f|_\partial = \sqrt{|f(0)|^2 + |f(\ell)|^2}$ denoting the corresponding $\ell^2$-norm of the boundary values. We thus obtain the following result. \begin{lemma} \label{lem:C4} Let assumptions (A1)--(A2) hold.
Then condition \eqref{eq:term4} is valid with \begin{align*} \mathcal{P}_\partial({\boldsymbol z}({\boldsymbol u})-{\boldsymbol z}(\widehat\bu)) &= |h(\rho,w) - h(\hat \rho,\hat w)|_\partial |m(\rho,w) - m(\hat \rho,\hat w)|_\partial. \end{align*} \end{lemma} While $h(\rho,w)$ and $m(\rho,w)$ amount to the natural boundary values of the unperturbed problem, the evaluations $h(\hat \rho,\hat w)$ and $m(\hat \rho,\hat w)$ of the unperturbed co-state mappings at the perturbed states do not have a physical meaning. We therefore decompose \begin{align*} h(\hat \rho,\hat w) &= \hat h(\hat \rho,\hat w) + (h(\hat \rho,\hat w) - \hat h(\hat \rho,\hat w)) \end{align*} into the natural boundary value $\hat h(\hat \rho,\hat w) = \hat \varepsilon^2 \frac{\hat w^2}{2} + P'(\hat \rho)$ and a corresponding perturbation $h(\hat \rho,\hat w) - \hat h(\hat \rho,\hat w)$ of the state to co-state mapping. The latter can be estimated by the bounds in assumptions (A1)--(A2), leading to the following result. \begin{corollary} \label{cor:C4} Let the assumptions of Lemma~\ref{lem:C4} hold and $h_\partial=h(\rho,w)|_{\{0,\ell\}}$ and $\hat h_\partial=\hat h(\hat \rho,\hat w)|_{\{0,\ell\}}$ denote the boundary values of the co-state variables of the unperturbed and perturbed system, respectively. Then condition \eqref{eq:term4} holds with \begin{align*} \mathcal{P}_\partial({\boldsymbol z}({\boldsymbol u})-{\boldsymbol z}(\widehat\bu)) &= \hat C_\partial \left( |h_\partial - \hat h_{\partial}|_\partial + |\varepsilon^2 - \hat \varepsilon^2| \right) \end{align*} with $\hat C_\partial$ depending only on the bounds in (A1)--(A2) and the Lipschitz bounds for $(\hat \rho,\hat w)$. \end{corollary} \subsection{Stability estimate} Having verified the conditions of our abstract stability analysis, we can now apply Lemma \ref{lem:abs_st} to obtain the following stability estimate.
\begin{theorem} \label{thm:main} Let assumptions (A1)--(A2) hold and let $(\rho,w)$ and $(\hat \rho, \hat w)$ denote corresponding classical solutions of \eqref{eq:sys1}--\eqref{eq:sys3} for parameters $\varepsilon,\gamma$ and $\hat \varepsilon,\hat \gamma$, respectively. Further assume that $(\hat \rho,\hat w)$ is Lipschitz continuous. Then \begin{align*} \|\rho(\tau) - \hat \rho(\tau)\|_{L^2(0,\ell)}^2 &+ \varepsilon^2 \|w(\tau) - \hat w(\tau)\|_{L^2(0,\ell)}^2 + \int_0^\tau \| w(s) - \hat w(s)\|_{L^3(0,\ell)}^3 ds\\ &\le \hat C e^{\hat c \tau} \big(\|\rho(0) - \hat \rho(0)\|_{L^2(0,\ell)}^2 + \varepsilon^2 \|w(0) - \hat w(0)\|_{L^2(0,\ell)}^2 \\ & \qquad\qquad\qquad \qquad + |\gamma - \hat \gamma|^{3/2} + |\varepsilon^2 - \hat \varepsilon^2| + \int_0^\tau |h_\partial(s) - \hat h_\partial(s)|_\partial ds \big), \end{align*} where $h_\partial = h(\rho,w)|_{\{0,\ell\}}$ and $\hat h_\partial = \hat h(\hat \rho,\hat w)|_{\{0,\ell\}}$ are the boundary values of the corresponding co-state variables. Moreover, the constants $\hat c$, $\hat C$ in this estimate only depend on the bounds in assumptions (A1)--(A2) and the Lipschitz bounds for $(\hat \rho,\hat w)$. \end{theorem} \begin{remark} Let us briefly discuss the conditions and conclusions of the theorem: The additional regularity for the reference solution $(\hat \rho,\hat w)$ is required in Lemma~\ref{lem:C2} and Corollary~\ref{cor:C3}. The reduction in the convergence rate with respect to $\gamma$ comes from the fact that the error in $w$ is estimated by the friction term rather than the kinetic energy, in order to obtain estimates that are uniform in $\varepsilon$. The further reduction of the convergence order in $\varepsilon$ is due to effects from the boundary. With minor modifications of the proofs, one could handle further perturbations, e.g., in the pressure potential $P(\cdot)$, the cross-section $a$, or the height function $z$, and also deal with other boundary conditions.
A careful inspection of the proofs would also allow one to relax the smoothness assumptions on $(\rho,w)$ and $(\hat \rho,\hat w)$ to some extent. \end{remark} \subsection{The parabolic limit problem} We now study convergence of solutions to \eqref{eq:sys1}--\eqref{eq:sys3} in the limit $\varepsilon \to 0$. For $\hat \varepsilon=0$ and $\hat \gamma=\gamma$, the resulting limit problem reads \begin{align} \label{eq:par1} a \partial_{\tau} \hat \rho + \partial_x (a \hat \rho \hat w) &= 0, \\ \partial_x (P'(\hat \rho) + g z) + \gamma |\hat w| \hat w &= 0. \label{eq:par2} \end{align} This is a degenerate parabolic problem, whose solvability can be deduced from the results in \cite{Bamberger79,Raviart70}; a detailed analysis can be found in \cite{SchoebelKroehn2020}. A formal application of Theorem~\ref{thm:main} to this limiting case directly leads to the following result. \begin{theorem} \label{thm:main2} Let (A1)--(A2) hold and $(\rho,w)$, $(\hat \rho,\hat w)$ denote classical solutions of \eqref{eq:sys1}--\eqref{eq:sys3} and \eqref{eq:par1}--\eqref{eq:par2}, respectively. Further assume that the initial and boundary values coincide, i.e., $\rho=\hat \rho$ at time $t=0$ and $\varepsilon^2 \frac{w^2}{2} + P'(\rho) = P'(\hat \rho)$ at the boundary $x\in\{0,\ell\}$, and that $(\hat \rho,\hat w)$ is Lipschitz continuous. Then \begin{align*} \|\rho(\tau) - \hat \rho(\tau)\|_{L^2(0,\ell)}^2 + \int_0^\tau \| w(s) - \hat w(s)\|_{L^3(0,\ell)}^3 ds &\le \hat C e^{\hat c \tau} \varepsilon^2 , \end{align*} with constants $\hat c,\hat C$ having the same properties as in Theorem~\ref{thm:main}. \end{theorem} \begin{remark} On bounded time intervals, the quadratic norm difference between solutions of the hyperbolic problem \eqref{eq:sys1}--\eqref{eq:sys3} and the parabolic limit problem \eqref{eq:par1}--\eqref{eq:par2} is thus bounded by $O(\varepsilon^2)$.
Let us note that a formal asymptotic analysis would predict a rate $O(\varepsilon^4)$, and such a rate has actually been proven in \cite{LattanzioTzavaras2013} for linear friction and unbounded domains. As mentioned in the previous remark, the reduction of the convergence order in our results is due to the nonlinear friction term and the perturbations coming from the boundary. \end{remark} \section{Extension to gas networks} \label{sec:5} As a next step, we now show that, using the underlying abstract framework, the results of the previous section can be extended almost verbatim to gas networks. We start by extending the rescaled model \eqref{eq:sys1}--\eqref{eq:sys6} to gas networks and then discuss the modifications needed in the stability and asymptotic analysis. \subsection{Network topology} Let $(\mathcal{V},\mathcal{E})$ denote a directed and connected finite graph with vertices $v \in \mathcal{V}$ and edges $e \in \mathcal{E}$, which are identified with intervals $(0,\ell^e)$. We denote by $\mathcal{E}(v)$ the set of edges incident to the vertex $v$, and decompose $\mathcal{V}=\mathcal{V}_0 \cup \mathcal{V}_\partial $ into the sets of interior and boundary vertices, characterized by $\mathcal{V}_0=\{v \in \mathcal{V}: |\mathcal{E}(v)|>1\}$ and $\mathcal{V}_\partial=\{v \in \mathcal{V}: |\mathcal{E}(v)|=1\}$. Here $|\mathcal{E}(v)|$ denotes the cardinality of the set $\mathcal{E}(v)$. We further associate to any vertex $v \in \mathcal{V}$ and edge $e \in \mathcal{E}(v)$ a number \begin{align*} n^e(v) = \begin{cases} 1 & \text{if } e=(\cdot,v), \\ -1 & \text{if } e=(v,\cdot). \end{cases} \end{align*} The vertex $v$ thus corresponds to the end point $\ell^e$ or the start point $0$ of the interval $(0,\ell^e)$ representing the edge $e$.
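For illustration, consider a junction $v_0$ connecting three pipes $e_1=(v_1,v_0)$, $e_2=(v_2,v_0)$, and $e_3=(v_0,v_3)$, i.e., the first two edges end at $v_0$ while the third one starts there. Then
\begin{align*}
n^{e_1}(v_0) = n^{e_2}(v_0) = 1 \qquad \text{and} \qquad n^{e_3}(v_0) = -1,
\end{align*}
so $n^e(v)$ plays the role of an outer normal at the vertex $v$ with respect to the parametrization of the edge $e$.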
\subsection{Gas transport on networks} After rescaling as outlined in the introduction, also see Appendix~\ref{sec:6}, the gas transport on every edge of the network is described by \begin{align} a^e \partial_{\tau} \rho^e + \partial_x m^e &= 0 , \qquad e \in \mathcal{E}, \label{eq:net1}\\ \varepsilon^2 \partial_{\tau} w^e + \partial_x h^e + \gamma^e |w^e| w^e &= 0, \qquad e \in \mathcal{E}. \label{eq:net2} \end{align} By the superscript $^e$ we denote functions or parameters restricted to the edge $e$. The corresponding co-state variables are defined accordingly by \begin{align} \label{eq:net3} h^e &= \varepsilon^2 \frac{|w^e|^2}{2} + P'(\rho^e) + g z^e, \qquad e \in \mathcal{E}, \\ m^e &= a^e \rho^e w^e, \qquad \qquad \qquad \qquad \quad \! e \in \mathcal{E}. \label{eq:net4} \end{align} These equations, which correspond to the conservation of mass and the balance of momentum, describe the flow of gas in the individual pipes. As outlined in \cite{Reigstad2014,Egger2018}, the coupling across pipe junctions can be modeled by the following conditions \begin{align} \sum_{e \in \mathcal{E}(v)} m^e(v) n^e(v) &= 0, \qquad v \in \mathcal{V}_0, \label{eq:net5}\\ h^e(v) &= h^v, \qquad e \in \mathcal{E}(v), \ v \in \mathcal{V}_0, \label{eq:net6} \end{align} which correspond to conservation of mass and continuity of the total specific enthalpy $h$ at pipe junctions. Note that $h^v$ thus corresponds to the unique value of the enthalpy at the junction $v \in \mathcal{V}_0$. A combination of the two conditions allows one to show that no energy is produced via flow over junctions \cite{Reigstad2014,Egger2018}. Similar to the case of a single pipe, we may again prescribe the enthalpy at the boundary vertices by \begin{align} \label{eq:net7} h^e(v) = h_\partial^v, \qquad v \in \mathcal{V}_\partial.
\end{align} \subsection{Weak formulation and canonical form} In a similar manner to Section~\ref{sec:3}, we multiply \eqref{eq:net1}--\eqref{eq:net2} with suitable test functions, integrate over the edges, use integration-by-parts, and then sum over all edges, which immediately leads to the variational equations \begin{align*} \sum_{e \in \mathcal{E}} \big[ (a^e \partial_{\tau} \rho^e, q^e)_e + (\partial_x m^e, q^e)_e \big] &= 0, \\ \sum_{e \in \mathcal{E}} \big[ (\varepsilon^2 \partial_{\tau} w^e, r^e)_e - (h^e, \partial_x r^e)_e + (\gamma^e \frac{|w^e|}{a^e \rho^e} m^e, r^e)_e \big] &= -\sum_{v \in \mathcal{V}} \sum_{e \in \mathcal{E}(v)} h^e(v) r^e(v) n^e(v), \end{align*} which hold for all time-independent piecewise regular test functions $q$, $r$, and all $t>0$ of relevance. In the last term, we used the elementary identity \begin{align*} \sum_{e \in \mathcal{E}} \sum_{v \in e} h^e(v) r^e(v) n^e(v) &= \sum_{v \in \mathcal{V}} \sum_{e \in \mathcal{E}(v)} h^e(v) r^e(v) n^e(v) \end{align*} to change the order of summation. In summary, we see that the weak formulation of \eqref{eq:net1}--\eqref{eq:net4} again amounts to an abstract port-Hamiltonian system \eqref{eq:abs1}--\eqref{eq:abs2} with operators \begin{align*} \langle \mathcal{C} {\boldsymbol u}, {\boldsymbol v}\rangle &= \sum_e \big[ (a^e \rho^e, q^e)_e + (\varepsilon^2 w^e, r^e)_e \big], \\ \langle \mathcal{R}({\boldsymbol u}) {\boldsymbol z}, {\boldsymbol v}\rangle &= \sum_e (\gamma^e \frac{|w^e|}{a^e \rho^e} m^e, r^e)_e, \\ \langle \mathcal{J} {\boldsymbol z}, {\boldsymbol v}\rangle &= \sum_e \big[ (\partial_x m^e, q^e)_e - (h^e, \partial_x r^e)_e \big], \end{align*} and boundary operator defined by \begin{align*} \langle \mathcal{B}_\partial {\boldsymbol z}, {\boldsymbol v}\rangle &= -\sum_{v \in \mathcal{V}} \sum_{e \in \mathcal{E}(v)} h^e(v) r^e(v) n^e(v). \end{align*} Let us note that the coupling and boundary conditions \eqref{eq:net5}--\eqref{eq:net7} have not been incorporated up to this point.
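Let us also record the precise meaning of the statement that no energy is produced at pipe junctions: if the test function is chosen as the co-state, i.e., $r^e = m^e$, and the coupling conditions \eqref{eq:net5}--\eqref{eq:net6} are imposed, then the contribution of every interior vertex $v \in \mathcal{V}_0$ to the boundary term vanishes, since
\begin{align*}
-\sum_{e \in \mathcal{E}(v)} h^e(v) \, m^e(v) \, n^e(v) = - h^v \sum_{e \in \mathcal{E}(v)} m^e(v) \, n^e(v) = 0.
\end{align*}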
At this stage, the above variational equations describe the gas transport in a collection of separate pipes. The corresponding function spaces $\mathbb{V}$ and $\mathbb{W}$ for the network are thus simply given by \begin{align*} \mathbb{V} = \prod_{e \in \mathcal{E}} \mathbb{V}^e \qquad \text{and} \qquad \mathbb{W} = \prod_{e \in \mathcal{E}} \mathbb{W}^e \end{align*} with $\mathbb{V}^e = L^2(0,\ell^e) \times L^2(0,\ell^e)$ and $\mathbb{W}^e= H^1(0,\ell^e) \times H^1(0,\ell^e)$ denoting the corresponding spaces for the individual pipes. \subsection{Verification of conditions (C0)--(C4)} By the simple \emph{additive} construction, conditions (C0)--(C3) can be obtained with the same arguments as for a single edge $e \in \mathcal{E}$ and summation over all edges. It thus remains to consider condition (C4) in detail. We start by splitting the boundary term via \begin{align*} &\langle \mathcal{B}_\partial ({\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)), {\boldsymbol z}({\boldsymbol u}) - {\boldsymbol z}(\widehat\bu)\rangle \\ &= -\sum_{v \in \mathcal{V}_0} \sum_{e \in \mathcal{E}(v)} (h^e(\rho^e,w^e) - h^e(\hat \rho^e,\hat w^e)) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \\ & \quad -\sum_{v \in \mathcal{V}_\partial} \sum_{e \in \mathcal{E}(v)} (h^e(\rho^e,w^e) - h^e(\hat \rho^e,\hat w^e)) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e = (i)+(ii), \end{align*} into contributions coming from the junctions $v \in \mathcal{V}_0$ and the boundary vertices $v \in \mathcal{V}_\partial$. For the latter, we can use the results for a single pipe derived in Section~\ref{sec:4}, and hence \begin{align*} (ii) \le \hat C_\partial \, ( |h_\partial - \hat h_\partial|_\partial + |\varepsilon^2 - \hat \varepsilon^2| ). \end{align*} Note that $|h_\partial|_\partial^2 = \sum_{v \in \mathcal{V}_\partial} |h_\partial(v)|^2$ is now the corresponding $\ell^2$-norm on $\mathbb{R}^{|\mathcal{V}_\partial|}$.
For the remaining junctions $v \in \mathcal{V}_0$, we proceed as follows: we make use of the coupling condition \eqref{eq:net6} and let $h^v$ and $\hat h^v$ denote the uniquely determined values of the corresponding co-state variable at the junction $v \in \mathcal{V}_0$. Then \begin{align*} \sum_{e \in \mathcal{E}(v)} (h^e(\rho^e,w^e) &- h^e(\hat \rho^e,\hat w^e)) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \\ &= \sum_{e \in \mathcal{E}(v)} (h^e(\rho^e,w^e) - h^v) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \\ &\qquad + \sum_{e \in \mathcal{E}(v)} (h^v - \hat h^v ) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \\ &\qquad + \sum_{e \in \mathcal{E}(v)} (\hat h^v - h^e(\hat \rho^e,\hat w^e)) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \\ &= (a) + (b) + (c). \end{align*} Due to the coupling condition \eqref{eq:net6}, the first term (a) vanishes. Since $h^v$ and $\hat h^v$ are both single-valued on $v \in \mathcal{V}_0$ and $m^e(\hat \rho^e,\hat w^e) = \hat m^e(\hat \rho^e,\hat w^e)$, we further obtain \begin{align*} (b) = (h^v - \hat h^v) \sum_{e \in \mathcal{E}(v)} (m^e(\rho^e,w^e) - \hat m^e(\hat \rho^e,\hat w^e)) = 0, \end{align*} where we used the mass conservation condition \eqref{eq:net5} for the perturbed and unperturbed problem, respectively. Using \eqref{eq:net6} for the perturbed problem, we finally get \begin{align*} (c) &= \sum_{e \in \mathcal{E}(v)} (\hat h^e(\hat \rho^e,\hat w^e) - h^e(\hat \rho^e,\hat w^e)) (m^e(\rho^e,w^e) - m^e(\hat \rho^e,\hat w^e)) n^e \le C |\varepsilon^2 - \hat \varepsilon^2|, \end{align*} with a constant $C$ depending only on the bounds $\bar a, \bar \rho$ and $\bar w$ in assumptions (A1)--(A2). By summation over all edges, we thus obtain the following result. \begin{lemma} \label{lem:C3net} Let (A1)--(A2) hold uniformly for all edges $e \in \mathcal{E}$.
Furthermore, let ${\boldsymbol u}=(\rho,w)$ and $\hat {\boldsymbol u}=(\hat \rho,\hat w)$ denote classical solutions of \eqref{eq:net1}--\eqref{eq:net7} with parameters and data $\varepsilon,\gamma,h_\partial$ and $\hat \varepsilon,\hat \gamma, \hat h_\partial$, respectively, and assume that $(\hat \rho,\hat w)$ is Lipschitz on every edge $e \in \mathcal{E}$. Then condition (C4) holds with perturbation functional \begin{align*} P_\partial({\boldsymbol z}({\boldsymbol u})-{\boldsymbol z}(\widehat\bu)) = \hat C_\partial (| h_\partial - \hat h_\partial| + |\varepsilon^2 - \hat \varepsilon^2|), \end{align*} and $\hat C_\partial$ depending only on the bounds in (A1)--(A2) and the Lipschitz bounds for $(\hat \rho,\hat w)$. \end{lemma} \subsection{Stability and parabolic limit for gas networks} By the considerations of the previous sections, we immediately see that the assertions of Theorem~\ref{thm:main} and \ref{thm:main2} and also the remarks concerning possible generalizations remain valid verbatim also for gas networks. For the parabolic limit $\varepsilon \to 0$, we state the corresponding result in detail. \begin{theorem} \label{thm:main3} Let assumptions (A1)--(A2) hold and let $(\rho,w)$, $(\hat \rho,\hat w)$ denote classical solutions of \eqref{eq:net1}--\eqref{eq:net7} with $\varepsilon>0$, $\hat \varepsilon=0$, and $\gamma=\hat \gamma$, $h_\partial=\hat h_\partial$. Further assume that $(\hat \rho,\hat w)$ is Lipschitz on every edge $e \in \mathcal{E}$. Then \begin{align*} \|\rho(\tau) - \hat \rho(\tau)\|_{L^2(\mathcal{E})}^2 + \int_0^\tau \| w(s) - \hat w(s)\|_{L^3(\mathcal{E})}^3 ds &\le \hat C e^{\hat c \tau} \varepsilon^2 , \end{align*} with constants $\hat c,\hat C$ of the same form as in Theorem~\ref{thm:main} and \ref{thm:main2}. 
\end{theorem} A similar generalization can also be made for the stability estimate of Theorem~\ref{thm:main}, and the remarks concerning possible generalizations made for a single pipe carry over almost verbatim to gas networks. \section{Summary} In this paper, we studied the stability of classical solutions to hyperbolic balance laws describing the gas transport in pipelines and pipeline networks. Our analysis was based on the formulation of a rescaled set of equations as an abstract port-Hamiltonian system involving state and co-state variables. Under some general smoothness assumptions on possible solutions, we established quantitative perturbation bounds via relative energy estimates. As a particular application, we proved convergence of sufficiently smooth solutions to solutions of the parabolic limit problem in the asymptotic high-friction, long-time, low-Mach limit $\varepsilon \to 0$. A key ingredient of our analysis was the proper treatment of boundary conditions, which allowed us to generalize our results almost verbatim to gas networks. Natural next steps for future investigation are the structure-preserving discretization and discretization error analysis, which can most probably be done with similar arguments. Another topic of interest might be the further investigation of the parabolic limit problem, which seems to be widely used in the gas network community.
\section{Introduction} First-order cosmological phase transitions are of particular interest because these violent phenomena in the early Universe can be the production source of gravitational waves (GWs), which can be probed by current and future GW experiments~\cite{Schwaller2015PRL}. The phase transitions are relevant to the spontaneous breakdown of symmetries in particle physics. As the temperature of the Universe drops to the symmetry breaking scale, the vacuum of the Universe transits from a symmetric phase to a broken one. Within the Standard Model (SM) of particle physics, there are two phase transitions: the electroweak phase transition (EWPT) at $\sim 100$~GeV and the QCD phase transition at $\sim 0.1$~GeV. However, both of them are found to be crossovers rather than first-order phase transitions~\cite{Onofrio2014PRL,Aoki2016Nature}. Extensions of the SM to render the EWPT first-order have been widely studied in the literature~\cite{Espinosa2008PRD,Barger2008PRD,Barger2009PRD,Espinosa2012NPB,Li2014JHEP,Chiang2014PLB,Profumo2015PRD,Kotwal2016PRD,Beniwal2017JHEP,Cline2017PRD,Alves2019JHEP,Ghosh2020, Gould2019PRD,Kozaczuk2020PRD,Jiang2016PRD,Chiang2018PRD,Niemi2019PRD,Chao2019JHEP,Basler2017JHEP,Dorsch2017JHEP,Bernon2018JHEP,Andersen2018PRL, Huang2016PRD,Huang2017PRD,Chala2018JHEP,Grzadkowski2018JHEP,Carena2020JHEP,Musolf2020JHEP,Grojean2005PRD,Cai2017JCAP,Chiang2020JHEP,Chiang2020JCAP}. Future space-based interferometers can be used to test these models since the GW signals from the EWPT peak around the mHz range. In addition to the SM symmetries, the solution to the strong CP problem via the Peccei-Quinn (PQ) mechanism~\cite{Peccei1977PRL,Peccei1977PRD} demands the existence of a global $U(1)_{\rm PQ}$ symmetry. The QCD axion~\cite{Weinberg1978PRL,Wilczek1978PRL,Kim1979PRL,Shifman1980NPB,Dine1981PLB,Zhitnitsky1980SJNP}, the pseudo-Goldstone boson associated with the PQ symmetry breaking, can serve as an attractive dark matter candidate.
In the conventional QCD axion scenarios, the axion decay constant $f_a$ is of the same order as the PQ symmetry breaking scale $f$, {\it i.e.,} $f_a\sim f$. The axion decay constant has been restricted to the range $10^{9}\lesssim f_a\lesssim 10^{12}$~GeV (see, for example, refs.~\cite{Marsh2016PR,Luzio2020PR} for recent reviews on the QCD axion). The lower bound comes from the SN~1987A neutrino burst duration observations~\cite{Mayle1988PLB,Raffelt1988PRL,Turner1988PRL}, while the upper bound is to ensure that the Universe is not over-closed by the axion dark matter~\cite{Preskill1983PLB,Abbott1983PLB,Dine1983PLB}. As a consequence, the classical QCD axion is nearly invisible since the QCD axion-gluon coupling is inversely proportional to the axion decay constant. However, the axion decay constant need not be close to the PQ symmetry breaking scale. The canonical association of the axion decay constant with the PQ scale can be circumvented with the help of the clockwork mechanism~\cite{Kaplan2016PRD}. In the clockwork axion model~\cite{Kaplan2016PRD,Choi2016JHEP,Higaki2016JHEPa,Higaki2016JHEPb,Giudice2017JHEP,Coy2017JHEP,Long2018JHEP,Agrawal2018JHEP}, $N+1$ complex scalar fields with global $U(1)$ symmetries are introduced and the axion decay constant can be exponentially enlarged with respect to the symmetry breaking scale~\cite{Kaplan2016PRD}. The clockwork mechanism allows a PQ symmetry breaking scale $f\lesssim 10^9$~GeV while keeping the axion decay constant $f_a$ consistent with the cosmological/astrophysical observations. The rich particle physics and cosmological phenomenology for $f\ll f_a$ in the clockwork axion model has been investigated in the literature~\cite{Higaki2016JHEPa,Higaki2016JHEPb,Long2018JHEP,Agrawal2018JHEP}. In this work, we are concerned with testing the clockwork axion model with GW observations. This is possible especially when the phase transition of the PQ symmetry breakdown is first-order.
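To make the scale hierarchy concrete, the number of clockwork sites needed for a given enhancement can be estimated from the relation $f_a \simeq q^N f$ with $q=3$ (derived in the next section); the numbers in the following sketch are purely illustrative:

```python
import math

q = 3               # clockwork charge
f = 1.0e5           # PQ breaking scale in GeV (illustrative)
fa_target = 1.0e12  # desired axion decay constant in GeV (illustrative)

# smallest number of gears N such that f_a = q**N * f reaches fa_target
N = math.ceil(math.log(fa_target / f) / math.log(q))
fa = q**N * f
print(N, fa)  # N = 15 suffices for this choice of scales
```

So a modest number of sites already bridges seven orders of magnitude between $f$ and $f_a$.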
However, the PQ phase transition in the conventional QCD axion models is only second-order. Attempts to make the PQ phase transition first-order have been made in recent works~\cite{Croon2019JHEP,Dev2019JCAP,Harling2020JHEP,Rose2020JHEP,Ghoshal2020}. One of the simplest scenarios is to introduce one or more Higgs(-like) doublets, with the PQ complex scalar coupled to the Higgs fields via the renormalizable operator $\lambda_m|\Phi|^2|H|^2$~\cite{Dev2019JCAP,Harling2020JHEP}. It is found that to obtain a first-order PQ phase transition, one needs a large Higgs portal coupling $\lambda_m\gtrsim 1$ and a small PQ scalar self-coupling $\lambda\sim 10^{-3}$~\cite{Dev2019JCAP}, which may conflict with constraints from the measured Higgs properties~\cite{Rose2020JHEP}. The realization of a first-order PQ phase transition may also be achieved in the radiative PQ symmetry breaking scenario~\cite{Rose2020JHEP,Ghoshal2020} or in composite axion models~\cite{Rose2020JHEP}. In this work, to make the PQ phase transition first-order, we will introduce in the scalar potential a dimension-6 operator that can be generated by decoupling a massive degree of freedom. This scenario has been investigated in the context of a first-order EWPT in the literature~\cite{Grojean2005PRD,Huang2016PRD,Cai2017JCAP,Chala2018JHEP,Ellis2019JCAP,Musolf2020JHEP}. We will show that future space-based interferometers, such as LISA~\cite{LISA2017,LISA2019CQG}, Taiji~\cite{Hu2017NSR,Ruan2020NA}, ALIA~\cite{ALIA2014JPCS}, DECIGO~\cite{DECIGO2017}, and BBO~\cite{BBO2006CQG}, and the ground-based GW observatories including the Einstein Telescope (ET)~\cite{ET2010CQG}, Cosmic Explorer (CE)~\cite{CE2017CQG}, and Advanced LIGO (aLIGO)~\cite{LIGO2019,LIGOSGW2019PRD,LIGOSGW2018PRL} can explore the PQ symmetry breaking scale of the clockwork axion model in a broad range of $10^{3}-10^{6}$~GeV.
For clockwork axion models with a PQ scale $f\gtrsim 10^{6}$~GeV, the domain walls produced from the phase transition would dominate the energy density of the Universe; such scenarios are therefore excluded. We find that GWs produced from the annihilation of domain walls with a PQ scale $f\simeq 2\times 10^5$~GeV can account for the signal in the stochastic GW background found in the analysis of the 12.5-year data collected by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav)~\cite{NANOGrav2020}. For the phase transition at the scale $f\simeq 2\times 10^5$~GeV, we also expect to find a footprint of the clockwork axion in the stochastic GW background in the LIGO O3 run~\cite{LIGOSGW2019PRD,LIGOSGW2018PRL}. This work is organized as follows. In Sec.~\ref{sec:CWaxion}, we briefly review the clockwork axion model. In Sec.~\ref{sec:pt}, we study in detail the phase transition in our model and perform a scan of the model parameter space. The nucleation temperature of the true vacuum and the GW parameters are calculated in Sec.~\ref{sec:bn}. The spectrum of GWs coming from the first-order phase transition and the annihilation of domain walls, along with their detection prospects in GW experiments, are analyzed in Sec.~\ref{sec:GWPT} and Sec.~\ref{sec:GWDW}, respectively. Finally, in Sec.~\ref{sec:summary}, we summarize our findings. \section{The clockwork axion model} \label{sec:CWaxion} In this section we briefly review the clockwork axion model, which has been widely studied in refs.~\cite{Kaplan2016PRD,Choi2016JHEP,Higaki2016JHEPa, Higaki2016JHEPb,Giudice2017JHEP,Coy2017JHEP,Long2018JHEP,Agrawal2018JHEP}. The clockwork model contains $N+1$ complex scalars, denoted as $\Phi_i(x)$ with $i=0,1,...,N$.
The potential of these scalars is given by \begin{equation} \label{eq:poten1} V(\Phi)=\sum_{j=0}^{N}\left(-m^{2}\left|\Phi_{j}\right|^{2}+\frac{\lambda}{4}\left|\Phi_{j}\right|^{4}\right)- \varepsilon \sum_{j=0}^{N-1}\left( \Phi_{j}^{\dagger} \Phi_{j+1}^{3}+\rm h.c.\right) ~, \end{equation} where the parameters $m^2$, $\lambda$, and $\varepsilon$ have been assumed to be real and universal. The first term respects a global $U(1)^{N+1}$ symmetry, which is explicitly broken by the $\varepsilon$-dependent term down to a global $U(1)$ symmetry \begin{equation} \mathrm{U}(1): \Phi_{i} \rightarrow \exp \left[i q^{N-i} \theta\right] \Phi_{i}, \end{equation} with $0\leq \theta <2\pi$ and $q\equiv 3$. The global $U(1)$ symmetry is identified as the PQ symmetry in the clockwork axion model. Even without the $\varepsilon$-dependent term, the global $U(1)$ symmetry of the potential could be spontaneously broken when the radial components of the $N+1$ complex scalars acquire a nonzero vacuum expectation value (VEV) $\left \langle \Phi_i \right \rangle=f/\sqrt{2}$, where $f$ is the $U(1)$ symmetry breaking scale and is assumed to be the same for all $\Phi_i$. Since the $U(1)^{N+1}$ symmetry is explicitly broken to $U(1)$ by the $\varepsilon$-dependent term, the spontaneous symmetry breaking of the potential \eqref{eq:poten1} leads to $N$ massive pseudo-Goldstone bosons and one massless Goldstone boson.
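The residual $U(1)$ invariance of the potential \eqref{eq:poten1} hinges on $3\,q^{N-i-1}=q^{N-i}$ for $q=3$, and is easy to verify numerically. The following sketch (with arbitrary illustrative parameter values) evaluates the potential before and after the transformation $\Phi_i \to e^{i q^{N-i}\theta}\Phi_i$:

```python
import cmath
import random
random.seed(1)

N, q = 4, 3
m2, lam, eps = 1.0, 0.5, 0.1  # illustrative parameter values
Phi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N + 1)]

def V(Phi):
    # potential of eq. (poten1): universal quadratic/quartic terms plus eps-links
    quad = sum(-m2 * abs(p)**2 + lam / 4 * abs(p)**4 for p in Phi)
    link = -eps * sum(2 * (Phi[j].conjugate() * Phi[j + 1]**3).real
                      for j in range(N))  # Phi_j^dag Phi_{j+1}^3 + h.c. = 2 Re(...)
    return quad + link

# residual U(1): Phi_i -> exp(i q^{N-i} theta) Phi_i leaves V unchanged for q = 3,
# since the phase of Phi_j^dag Phi_{j+1}^3 shifts by (-q^{N-j} + 3 q^{N-j-1}) theta = 0
theta = 0.37
Phi_rot = [cmath.exp(1j * q**(N - i) * theta) * p for i, p in enumerate(Phi)]
print(V(Phi), V(Phi_rot))
```

A generic phase rotation of a single site would instead change the link terms, which is the explicit breaking of $U(1)^{N+1}$ down to the single clockwork $U(1)$.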
After the spontaneous symmetry breaking, we parametrize the scalar field as $\Phi_i=fe^{i\pi_i/f}/\sqrt{2}$ and obtain the potential for the $N+1$ Goldstone bosons \begin{equation} \label{eq:Vpi} V(\pi)=-\frac{1}{2} \varepsilon f^{4} \sum_{i=0}^{N-1} \cos \frac{\pi_{i}-q \pi_{i+1}}{f} \simeq \frac{\varepsilon f^{2}}{4} \sum_{i=0}^{N-1}\left(\pi_{i}-q \pi_{i+1}\right)^{2} =\frac{1}{2}\sum_{i,j=0}^{N}\pi_j \left( M_{\pi}^2 \right)_{ji} \pi_i ~, \end{equation} where the constant term is omitted and the mass matrix $M_{\pi ij}^2$ is given by \begin{equation} M_{\pi}^{2}=m_{G}^{2}\left(\begin{array}{cccccc} 1 & -q & 0 & \cdots & & 0 \\ -q & 1+q^{2} & -q & \cdots & & 0 \\ 0 & -q & 1+q^{2} & \cdots & & 0 \\ \vdots & \vdots & \vdots & \ddots & & \vdots \\ & & & & 1+q^{2} & -q \\ 0 & 0 & 0 & \cdots & -q & q^{2} \end{array}\right), \end{equation} where $m_G^2=\varepsilon f^2/2$. One then rotates the $\pi_i$ fields to the mass eigenstates $a_i \equiv (a, A_1, \dots, A_N)$ by a real $(N+1)\times (N+1)$ orthogonal matrix $O$ so that the mass matrix is diagonalized as $O^{T} M_{\pi}^{2} O=\operatorname{diag}\left(m_{a}^{2},m_{A_1}^2, \ldots, m_{A_{N}}^{2}\right)$, where the mass eigenvalues of the $N+1$ Goldstone bosons $a_i$ are given by \begin{equation} \label{eq:massG} m_{a}^2=0~{\rm and}~m_{A_k}^2=\eta_k m_{G}^2 ~\mbox{with}~ \eta_{k}\equiv q^{2}+1-2q\cos\frac{k\pi}{N+1} ~\left( k=1,2,...,N \right) ~. \end{equation} The massless Goldstone boson $a$ is identified as the axion and the $N$ massive pseudo-Goldstone states $A_k$ are the so-called gear fields since they play the role of `gears' in the clockwork mechanism.
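As a numerical cross-check (a sketch using the explicit eigenvectors $O_{ik}$ quoted in eq.~\eqref{eq:rote0} below), one can verify that $M_\pi^2$ annihilates the clockwork-localized zero mode with components $\propto q^{-i}$, that the gear eigenvectors reproduce the eigenvalues $\eta_k$, and that the decay-constant enhancement $f_a/f=q^N/\mathcal{N}_0$ indeed exceeds $q^N$:

```python
import math

q, N = 3, 4  # q = 3 as in the model; N + 1 = 5 sites (illustrative size)
n = N + 1

# mass matrix M_pi^2 in units of m_G^2 = eps * f^2 / 2
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    M[i][i] = 1.0 if i == 0 else (q**2 if i == N else 1 + q**2)
    if i < N:
        M[i][i + 1] = M[i + 1][i] = -float(q)

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

# massless axion: components localized as q^{-i} (clockwork suppression)
v0 = [q**(-i) for i in range(n)]
res0 = matvec(M, v0)  # should vanish

# gear eigenvector for some k (eq. rote0) with eigenvalue eta_k
def eta(k):
    return q**2 + 1 - 2 * q * math.cos(k * math.pi / n)

k = 2
Nk = math.sqrt(2.0 / (n * eta(k)))
vk = [Nk * (q * math.sin(i * k * math.pi / n)
            - math.sin((i + 1) * k * math.pi / n)) for i in range(n)]
resk = matvec(M, vk)  # should equal eta(k) * vk

# decay-constant amplification: f_a / f = q^N / N_0 > q^N since N_0 < 1
N0 = math.sqrt((q**2 - 1) / (q**2 - q**(-2 * N)))
fa_over_f = q**N / N0
print(max(abs(r) for r in res0), fa_over_f)
```

The vanishing residual for $v_0$ makes the clockwork localization explicit: the massless direction is exponentially concentrated at site $0$.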
The matrix elements of $O$ are given by \begin{equation} \label{eq:rote0} O_{i 0}=\frac{\mathcal{N}_{0}}{q^{i}}, \quad O_{i k}=\mathcal{N}_{k}\left[q \sin \frac{i k \pi}{N+1}-\sin \frac{(i+1) k \pi}{N+1}\right], \end{equation} with $i=0,1,...,N$, $k=1,2,...,N$ and \begin{equation} \label{eq:rote1} \mathcal{N}_{0} \equiv \sqrt{\frac{q^{2}-1}{q^{2}-q^{-2 N}}} ~, \quad \mathcal{N}_{k} \equiv \sqrt{\frac{2}{(N+1) \eta_{k}}} ~. \end{equation} The $(N+1)$ $a_i$ fields are related to the $\pi_i$ fields by the rotation \begin{equation} \label{eq:piN} \pi_i=\sum_{j=0}^{N}O_{ij}a_{j} \equiv O_{i0}a+\sum_{j=1}^{N}O_{ij}A_j ~. \end{equation} The potential of the (pseudo-)Goldstone bosons in the physical basis is then given by the sum of the contributions from all sites \begin{equation} \label{eq:gearV} V(\pi)=\sum_{j=0}^{N} V_j(A_j)=\frac{1}{2}m_{G}^2\sum_{j=1}^{N}\eta_jA_j^2=\frac{1}{4}\varepsilon f^2\sum_{j=1}^{N}\eta_jA_j^2 ~. \end{equation} Here we have used the fact that $m_a^2=0$. The clockwork mechanism is illustrated as follows. Consider the effective Lagrangian in which the $N$-th site $\pi_N$ is coupled to the QCD topological term \begin{equation} \label{eq:qcdtopol} \mathcal{L}\supset\frac{\alpha_{s}}{8 \pi}\frac{\pi_{N}}{f} G_{\mu \nu}^{a} \tilde{G}^{\mu \nu, a} ~, \end{equation} where $G_{\mu \nu}^{a}$ is the gluon field strength tensor, as seen in the Kim-Shifman-Vainshtein-Zakharov (KSVZ)~\cite{Kim1979PRL,Shifman1980NPB} type and the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ)~\cite{Dine1981PLB,Zhitnitsky1980SJNP} type of axion models. Using eq.~\eqref{eq:piN}, the axion coupling to the topological term is then given by \begin{equation} \mathcal{L}\supset\frac{\alpha_{s}}{8 \pi}\frac{a}{f_{a}} G_{\mu \nu}^{a} \tilde{G}^{\mu \nu, a} ~, \end{equation} where we have defined \begin{equation} \label{eq:decayCont} f_a \equiv \frac{f}{O_{N0}}=\frac{q^Nf}{\mathcal{N}_{0}}\simeq q^Nf ~.
\end{equation} If the QCD topological term~\eqref{eq:qcdtopol} occurs at the `first' site $i=N$, we observe from eq.~\eqref{eq:rote0} that the coupling of the massless axion, which is localized at the `last' site $i=0$, is suppressed by a factor of $q^N$. In other words, the axion decay constant $f_a$ is amplified by a factor of $q^N$ compared to the symmetry breaking scale $f$, as given in eq.~\eqref{eq:decayCont}. With the clockwork mechanism, a low PQ symmetry breaking scale $f$ and a nearly invisible axion can be simultaneously achieved in an axion model. \section{Phase transition} \label{sec:pt} In this section, we discuss the phase transition associated with the PQ symmetry breakdown. Various realizations of a first-order phase transition of the global $U(1)$ symmetry (PQ symmetry) breaking have been discussed in refs.~\cite{Croon2019JHEP,Dev2019JCAP,Harling2020JHEP,Rose2020JHEP,Ghoshal2020}, which include adding one~\cite{Dev2019JCAP} or two~\cite{Harling2020JHEP} Higgs(-like) doublets to the scalar potential, radiative PQ symmetry breaking~\cite{Rose2020JHEP,Ghoshal2020}, and the composite axion models~\cite{Rose2020JHEP}. In this work, we make the first-order phase transition for the global $U(1)$ symmetry breaking possible by adding a dimension-6 operator to the scalar potential \begin{equation} \label{eq:poten2} V_{\Lambda}(\Phi)=V(\Phi)+\sum_{j=0}^{N}\frac{1}{\Lambda^2}\left|\Phi_{j}\right|^{6}, \end{equation} where $\Lambda \geq f$ is the cut-off scale of the theory. \subsection{The effective potential} Let us first consider the vacuum phase transition at the $j$-th site only. The complex scalar $\Phi_j$ can be expanded around the classical background as \begin{equation} \Phi_j=\phi_j e^{-i\pi_j/f}/\sqrt{2} ~. \end{equation} At finite temperature, the effective one-loop scalar potential is given by \begin{equation} \label{eq:Veff} V_{\rm eff}(\phi,T)=V_0(\phi)+V_{\rm CW}(\phi)+V_T(\phi,T)+V_{\rm ring}(\phi,T) ~.
\end{equation} Following ref.~\cite{Kaplan2016PRD}, we have assumed a common radial field $\phi_j \equiv \phi$, which develops a VEV $\langle \phi \rangle=f$ after the spontaneous $U(1)$ symmetry breaking. The tree-level scalar potential at zero temperature is given by \begin{equation} V_0(\phi)= -\frac{1}{2}m^2\phi^2+\frac{1}{4}\lambda \phi^4 +\frac{1}{8\Lambda^2}\phi^6, \end{equation} where the factor of $1/8$ arises from $|\Phi_j|^6=\phi^6/8$. By requiring the renormalization conditions \begin{equation} V_{\mathrm{CW}}(\phi=f)=V_{\mathrm{CW}}^{\prime}(\phi=f)=V_{\mathrm{CW}}^{\prime \prime}(\phi=f)=0 ~, \end{equation} the Coleman-Weinberg potential can be written as~\cite{Coleman1973PRD} \begin{equation} V_{\mathrm{CW}}(\phi)=\sum_{i} \frac{n_{i}}{64 \pi^{2}}\left[m_{i}^{4}(\phi)\left(\log \frac{m_{i}^{2}(\phi)}{m_{i}^{2}(f)}- \frac{3}{2}\right)+2 m_{i}^{2}(\phi) m_{i}^{2}(f)-\frac{m_{i}^{4}(f)}{2}\right] ~, \end{equation} where the index $i$ runs over $\{ \phi, A \}$, with $A = A_j$ the gear field at the $j$-th site, whose potential is given by eq.~\eqref{eq:gearV}, and the numbers of degrees of freedom are $n_i=\{ 1,1 \}$. The gear contributes to the effective potential at loop level. The gear's mass depends on its site (see eq.~\eqref{eq:massG}), with the value of $\eta_j$ falling in the range of $\sim 4$--$16$. Conservatively, we assume $\eta_j\equiv \eta\simeq 4$ for the gears to simplify the estimation. For small values of $\phi$, the field-dependent mass squared of $\phi$ is negative, leading to a complex effective potential. The imaginary part of the effective potential is related to the decay rate of the scalar~\cite{EJWeinberg1987PRD}. This part can be discarded in the calculation of the phase transition since it is found to be tiny compared to the real part around the transition temperature~\cite{Delaunay2008JHEP}.
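The on-shell renormalization conditions can be verified numerically. In the sketch below, the field-dependent masses are simple illustrative assumptions (we take the scalar mass from the curvature of the tree-level potential and a gear mass $m_A^2(\phi)=\eta\varepsilon\phi^2/2$ with $\eta=4$), not necessarily the exact choices used for the results in this paper; with the quoted form of $V_{\rm CW}$, both $V_{\rm CW}(f)$ and $V_{\rm CW}'(f)$ vanish for any such choice:

```python
import math

# benchmark-like values (illustrative); masses in GeV
f, kappa_m, kappa_l, eps, eta = 1.0e4, 0.15, 0.10, 0.51, 4.0
m_phi, Lam = kappa_m * f, f / kappa_l
m2 = m_phi**2 / 2 - 3 * f**4 / (4 * Lam**2)
lam = m_phi**2 / (2 * f**2) - 3 * f**2 / (2 * Lam**2)

def masses2(phi):
    # assumed field-dependent mass squares (illustrative):
    # scalar: curvature of V_0; gear: eta * eps * phi^2 / 2
    return [-m2 + 3 * lam * phi**2 + 15 * phi**4 / (4 * Lam**2),
            eta * eps * phi**2 / 2]

def V_CW(phi):
    # on-shell renormalized Coleman-Weinberg potential (sum over dofs, n_i = 1)
    tot = 0.0
    for mi2, mf2 in zip(masses2(phi), masses2(f)):
        tot += (mi2**2 * (math.log(mi2 / mf2) - 1.5)
                + 2 * mi2 * mf2 - mf2**2 / 2) / (64 * math.pi**2)
    return tot

h = 1e-4 * f
dV = (V_CW(f + h) - V_CW(f - h)) / (2 * h)  # numerical derivative at phi = f
print(V_CW(f), dV)
```

Both printed numbers are tiny compared to the natural scale $f^4$, confirming that the one-loop term does not shift the position or value of the tree-level minimum.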
The finite-temperature contributions to the effective potential at one-loop level are given by~\cite{Dolan1974PRD} \begin{equation} V_{\mathrm{T}}(\phi, T)=\frac{T^{4}}{2 \pi^{2}} \sum_{i} n_{i} J_{B, F}\left(m_{i}^{2}(\phi)/T^{2}\right) ~, \end{equation} where the thermal functions are defined as \begin{equation} J_{B, F}\left(z^{2}\right)=\int_{0}^{\infty} d x x^{2} \ln \left(1 \mp e^{-\sqrt{x^{2}+z^{2}}}\right), \end{equation} with the minus sign for bosons ($B$) and the plus sign for fermions ($F$). The ring diagram part of the one-loop finite-temperature potential from bosons is given by \begin{equation} V_{\text {ring }}(\phi, T)=\sum_{i} \frac{n_{i} T}{12 \pi}\left[m_{i}^{3}(\phi)-\left(m_{i}^{2}(\phi)+ \Pi_{i}(T)\right)^{\frac{3}{2}}\right] ~, \end{equation} where \begin{equation} \Pi_{\phi}(T)=\left( \frac{m_{\phi}^2}{4f^2}+\frac{2}{3}\varepsilon-\frac{3f^2}{4\Lambda^2} \right )T^2~{\rm and}~ \Pi_{A}(T)=\frac{2}{3}\varepsilon T^2 ~. \end{equation} \subsection{The high temperature expansion} \label{sec:highT} To gain analytic insight into the phase transition associated with the potential~\eqref{eq:poten2}, we study in this section the high temperature expansion of the potential, given by \begin{equation} \label{eq:vht1} V_{\rm HT}(\phi,T)= -\frac{1}{2}m^2\phi^2+\frac{1}{4}\lambda \phi^4 +\frac{1}{8\Lambda^2}\phi^6 +\frac{1}{2}CT^2\phi^2 ~, \end{equation} where \begin{equation} C=\frac{m_{\phi}^2}{4f^2}+\frac{2}{3}\varepsilon-\frac{3f^2}{4\Lambda^2} ~. \end{equation} Using the renormalization conditions \begin{equation} V_{\rm HT}^{\prime}(f,0)=0 \quad {\rm and} \quad V_{\rm HT}^{\prime \prime}(f,0)=m_{\phi}^{2} ~, \end{equation} where the prime denotes the derivative with respect to $\phi$ and $m_{\phi}$ is the mass of $\phi$, we obtain \begin{equation} m^{2}=\frac{m_{\phi}^{2}}{2}-\frac{3 f^{4}}{4 \Lambda^{2}}, \quad \lambda=\frac{m_{\phi}^{2}}{2 f^{2}} -\frac{3 f^{2}}{2 \Lambda^{2}} ~.
\end{equation} The signs of $m^2$ and $\lambda$ depend on two parameters \begin{equation} \kappa_{m}=\frac{m_{\phi}}{f} ~~{\rm and}~~ \kappa_{l}=\frac{f}{\Lambda} ~. \end{equation} From the potential~\eqref{eq:vht1} we see that, at zero temperature, a tree-level barrier separating the false and true vacua could arise by requiring that both $m^2$ and $\lambda$ be negative. We thus have \begin{equation} \label{eq:bound1} \kappa_m<\sqrt{\frac{3}{2}}\kappa_l ~. \end{equation} This bound can be relaxed when the thermal corrections are included. At finite temperature, the condition $m^2<0$ is generalized to \begin{equation} \frac{d^2V(0,T)}{d\phi^2}>0 ~. \end{equation} This requirement should be satisfied at least around the critical temperature. For the potential~\eqref{eq:vht1}, the bound on $m^2$ is loosened to \begin{equation} m^2\lesssim CT_c^2 ~. \end{equation} The critical temperature $T_{c}$, at which the local minimum of the potential at the true vacuum $\phi\neq 0$ is degenerate with that at the false vacuum $\phi=0$, is \begin{equation} \label{eq:vhttc2} T_{c}^{2}=\frac{\Lambda^{4} m_{\phi}^{4}+2 \Lambda^{2} m_{\phi}^{2} f^{4}-3 f^{8}}{16C\Lambda^{2} f^{4}} ~. \end{equation} The VEV at the critical temperature is given by \begin{equation} \label{eq:vhtfc2} f_{c}^{2}=\frac{3}{2} f^{2}-\frac{m_{\phi}^{2} \Lambda^{2}}{2 f^{2}} ~. \end{equation} Obviously, both $T_c^2$ and $f_c^2$ are required to be positive to trigger a first-order phase transition, giving the bounds~\cite{Grojean2005PRD} \begin{equation} \label{eq:bound2} \max \left(\frac{1}{\kappa_m}, \frac{\sqrt{3}}{\sqrt{\kappa_m^{2}+4\varepsilon /3}}\right)<\frac{1}{\kappa_l}< \frac{\sqrt{3}}{\kappa_m} ~. \end{equation} In order for the phase transition to proceed in the correct direction, the broken minimum should decrease faster than the symmetric one as the temperature keeps dropping. This condition can be expressed as $dV_{\rm HT}/dT^2>0$~\cite{Chiang2020JCAP}, which can be satisfied when $C>0$.
Combining with the bound~\eqref{eq:bound2}, we obtain the restriction $\varepsilon >0$. If we require the tighter upper bound~\eqref{eq:bound1} on $\kappa_m$, then $\varepsilon$ is bounded from below by \begin{equation} \varepsilon>\frac{9}{16}\kappa_l^2 ~. \end{equation} One should also ensure that the broken vacuum is the global minimum at zero temperature, {\it i.e.,} $V_{\rm HT}(f, 0) < V_{\rm HT}(0, 0)$, which gives the constraint \begin{equation} \label{eq:bound3} \kappa_l<\kappa_m ~. \end{equation} This constraint is valid even when the Coleman-Weinberg potential and the thermal contributions are both taken into account. Combining with the bound~\eqref{eq:bound1}, we find that $\kappa_l\lesssim \kappa_m$ is required to trigger a first-order phase transition. The high temperature approximation can be improved by including higher order thermal corrections~\cite{Bodeker2005JHEP}. The corresponding potential is given by \begin{equation} \label{eq:vht2} V_{\rm HT}(\phi,T)= -\frac{1}{2}m^2\phi^2+\frac{1}{2}CT^2\phi^2-ET\phi^3+\frac{1}{4}\lambda \phi^4 +\frac{1}{\Lambda^2}\left( T^4\phi^2+2T^2\phi^4+\phi^6 \right ), \end{equation} where $E=\varepsilon^{3/2}/(4\pi)$. The high temperature expansion~\eqref{eq:vht1} can approximate the one-loop effective potential~\eqref{eq:Veff} quite well when there exists a tree-level barrier separating the two vacua~\cite{Espinosa2012NPB,Chiang2020JCAP}. We further confirm this conclusion for the improved high temperature approximation~\eqref{eq:vht2} by numerically comparing with the effective potential~\eqref{eq:Veff} for various choices of parameters.
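The closed-form expressions \eqref{eq:vhttc2} and \eqref{eq:vhtfc2} can be checked numerically. The sketch below uses the benchmark parameters of Fig.~\ref{fig:potential} and the radial-field form of the dimension-6 term, $|\Phi|^6/\Lambda^2=\phi^6/(8\Lambda^2)$, and confirms that the two minima of $V_{\rm HT}$ are exactly degenerate at $T_c$:

```python
import math

# benchmark parameters as in Fig. [potential]; all masses in GeV
f, kappa_m, kappa_l, eps = 1.0e4, 0.15, 0.10, 0.51
m_phi, Lam = kappa_m * f, f / kappa_l

# renormalized parameters and thermal-mass coefficient of the high-T expansion
m2 = m_phi**2 / 2 - 3 * f**4 / (4 * Lam**2)
lam = m_phi**2 / (2 * f**2) - 3 * f**2 / (2 * Lam**2)
C = m_phi**2 / (4 * f**2) + 2 * eps / 3 - 3 * f**2 / (4 * Lam**2)

def V_HT(phi, T):
    # note |Phi|^6/Lambda^2 = phi^6 / (8 Lambda^2) in terms of the radial field
    return (-0.5 * m2 * phi**2 + 0.25 * lam * phi**4
            + phi**6 / (8 * Lam**2) + 0.5 * C * T**2 * phi**2)

# closed-form critical temperature and critical VEV
Tc2 = (Lam**4 * m_phi**4 + 2 * Lam**2 * m_phi**2 * f**4 - 3 * f**8) / (16 * C * Lam**2 * f**4)
fc2 = 1.5 * f**2 - m_phi**2 * Lam**2 / (2 * f**2)
Tc, fc = math.sqrt(Tc2), math.sqrt(fc2)
print(Tc, fc, V_HT(fc, Tc) / f**4)
```

Note that the $T_c$ obtained from the high temperature expansion need not coincide with the value $T_c=2.195$~TeV quoted for the full one-loop potential in Fig.~\ref{fig:potential}.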
\subsection{Parameter scan} \begin{figure} \centering \includegraphics[width=100mm,angle=0]{./fig/potential.pdf} \caption{Evolution of the effective potential with temperature, taking $f=10^4$~GeV, $\kappa_m=0.15$, $\kappa_l=0.10$, and $\varepsilon=0.51$.} \label{fig:potential} \end{figure} The phase transition would be first-order if there exists a sufficiently high and wide potential barrier separating the two degenerate vacua of the thermal effective potential at the critical temperature. As shown in section~\ref{sec:highT}, adding a dimension-6 operator could make the self-coupling $\lambda$ negative. The tree-level barrier could then arise if the parameter $m^2$ is negative (or $d^2V(0,T)/d\phi^2>0$) at the same time. The gear fields can also contribute to the barrier of the effective potential at loop level; this is given by the third term on the right hand side of the improved high temperature expansion~\eqref{eq:vht2}. In Fig.~\ref{fig:potential}, we show the evolution of the potential with temperature, using the parameters $f=10^4$~GeV, $\kappa_m=0.15$, $\kappa_l=0.10$, and $\varepsilon=0.51$. As shown in the figure, the two vacua become degenerate at the critical temperature $T_c=2.195$~TeV and there is a potential barrier between the two vacua. In search of parameter space that permits a first-order phase transition, we take $f=10^4$~GeV and make a random scan of the parameters in the following ranges: \begin{equation} 10^{-3}\leq\kappa_m\leq 1 ~,~~ 10^{-3}\leq\kappa_l\leq 1 ~,~{\rm and}~~ 10^{-3}\leq \varepsilon\leq 1 ~. \end{equation} The upper value of $\kappa_l$ is required by $f\leq \Lambda$, and the upper limit on $\varepsilon$ is to ensure the perturbativity of the theory. Given a set of parameters, we first check the various constraints discussed above. We start from an initial temperature given by eq.~\eqref{eq:vhttc2} and find the local minimum of the potential near the VEV given by eq.~\eqref{eq:vhtfc2}.
If the value of the potential at the symmetric minimum $\phi(T)=0$ is found to be larger (smaller) than that at the broken minimum $\phi(T)\neq 0$, the temperature is increased (decreased) in the next trial. The critical temperature is then determined by the degeneracy condition, {\it i.e.,} $V(0,T_c)=V(\phi_c,T_c)$. We find that for most of the sample points, the VEV at the critical temperature is larger than that given by eq.~\eqref{eq:vhtfc2}. We uniformly generate one million random samples of the input parameters, among which about 4.8\% are found to be able to trigger a first-order phase transition. \begin{figure} \centering \includegraphics[width=110mm,angle=0]{./fig/dist_tc.pdf} \caption{Distributions of the parameters that can trigger a first-order phase transition and the critical temperature distribution. The symmetry breaking scale is fixed at $f=10^4$~GeV.} \label{fig:dist_tc} \end{figure} \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/epkm_4.pdf} \includegraphics[width=75mm,angle=0]{./fig/klep_4.pdf} \caption{Scatter plots of parameter distributions in the $\varepsilon-\kappa_m$ and $\kappa_l-\varepsilon$ planes. The color represents the critical temperature in units of GeV.} \label{fig:dist_tc2} \end{figure} Fig.~\ref{fig:dist_tc} shows the distributions of the parameters and of the critical temperature. We have fixed the parameter $f=10^4$~GeV; for other choices of $f$, the distributions are found to be nearly the same. We observe that the distribution profile of $\kappa_m$ is similar to that of $\kappa_l$, which follows from the requirement $\kappa_l\lesssim \kappa_m$ discussed above. The parameter $\varepsilon$ peaks mainly around 0.1. The distribution of the critical temperature is concentrated below $\sim 2.5$~TeV and falls off quickly at higher temperatures.
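The scan logic can be sketched with a quick analytic pre-filter based on the high temperature expansion (conditions $C>0$, $T_c^2>0$, $f_c^2>0$, and $\kappa_l<\kappa_m$). This is only a rough proxy for the full numerical procedure described above, so the acceptance fraction need not match the quoted 4.8\%:

```python
import random
random.seed(42)

def first_order_ok(kappa_m, kappa_l, eps):
    # analytic necessary conditions from the high temperature expansion:
    # correct transition direction (C > 0), real Tc and fc, broken global minimum
    C = kappa_m**2 / 4 + 2 * eps / 3 - 3 * kappa_l**2 / 4
    if C <= 0:
        return False
    r = kappa_m**2 / kappa_l**2
    Tc2 = (r**2 + 2 * r - 3) * kappa_l**2 / (16 * C)  # in units of f^2
    fc2 = 1.5 - r / 2                                 # in units of f^2
    return Tc2 > 0 and fc2 > 0 and kappa_l < kappa_m

n_trials = 100_000
n_pass = sum(first_order_ok(random.uniform(1e-3, 1), random.uniform(1e-3, 1),
                            random.uniform(1e-3, 1)) for _ in range(n_trials))
print(n_pass / n_trials)
```

Because the pre-filter essentially requires $\kappa_l < \kappa_m < \sqrt{3}\,\kappa_l$, only a minority of uniformly drawn points survive, in qualitative agreement with the correlated $\kappa_m$ and $\kappa_l$ distributions in Fig.~\ref{fig:dist_tc}.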
Finally, we show the scatter distributions in the $\varepsilon-\kappa_m$ and $\kappa_l-\varepsilon$ planes in Fig.~\ref{fig:dist_tc2}, from which we can directly observe the variation of the critical temperature with respect to the parameters. We note that the existence of two degenerate vacua at the critical temperature does not guarantee that a first-order phase transition will successfully proceed. To achieve a successful first-order phase transition, the bubble nucleation of the true vacuum must be triggered as the temperature of the Universe drops from $T_c$ to a certain value. This further constrains the parameter space for the phase transition, as will be discussed in the next section. \section{Bubble nucleation} \label{sec:bn} At high temperatures the Universe is in the symmetric phase, where the vacuum is located at the origin of the scalar field. As the temperature decreases with the expansion of the Universe, the other minimum of the effective potential appears and becomes the global minimum once the temperature drops below the critical temperature. For a first-order phase transition, the symmetric and broken vacua are separated by a potential barrier. The tunneling from the metastable minimum to the stable one can proceed with the help of thermal fluctuations. The tunneling process leads to the decay of the false vacuum and the nucleation of the true vacuum. The tunneling rate per unit volume and time is approximately given by~\cite{Apreda2002NPB, Espinosa2008PRD} \begin{eqnarray} \Gamma(T)=A(T)e^{-S_{3}/T} ~, \end{eqnarray} where $A(T)\simeq [S_3/(2\pi T)]^{3/2}T^{4}$ and $S_3$ denotes the three-dimensional on-shell Euclidean action of the instanton. The probability of bubble nucleation per Hubble volume is defined as \begin{eqnarray} p(T)=\int _{T}^{T_{\rm c}}\frac{\Gamma(x)}{H^{4}(x)}\frac{dx}{x}\approx \left( \frac{T}{H} \right)^4e^{-S_{3}/T} ~.
\end{eqnarray} In a radiation dominated Universe, the Hubble parameter is given by \begin{equation} H=1.66 g_{*}^{1 / 2} T^{2} / M_{\mathrm{pl}} ~, \end{equation} where $g_{\ast}\simeq 110$ and $M_{\rm pl}=1.22\times 10^{19}$~GeV is the Planck mass. The potential barrier decreases as the temperature decreases, which enhances the probability of vacuum tunneling. The nucleation temperature $T_{\rm n}$ is defined as the temperature at which the probability of nucleating one bubble per horizon volume is of order one, i.e., $p(T)\sim 1$, which can be translated into the following criterion for determining the nucleation temperature~\cite{Espinosa2008PRD} \begin{eqnarray} \label{eq:bnc} \frac{S_3(T_{\rm n})}{T_{\rm n}}\simeq 4\ln\left( \frac{T_{\rm n}}{H} \right) ~. \end{eqnarray} For a successful bubble nucleation, the tunneling rate from the false vacuum to the true vacuum should be large enough to overcome the expansion rate of the Universe. It is this criterion that determines whether the first-order cosmological phase transition proceeds successfully or not. We see that, on the one hand, a first-order phase transition needs a barrier to separate the two vacua; this has been fully explored in section~\ref{sec:pt}. On the other hand, bubble nucleation may be hindered if the barrier is too high or if it decreases too slowly with temperature. In the following, we will determine the parameter space that satisfies the bubble nucleation criterion. To determine the bubble nucleation, we first have to obtain the Euclidean action of the $O(3)$ symmetric field configuration, $S_3(T)$, which can be written as \begin{eqnarray} S_{3}(T)=4\pi\int dr~r^2\left [ \frac{1}{2}\left ( \frac{d\phi}{dr} \right )^2+V_{\rm eff}(\phi,T) \right ] ~.
\end{eqnarray} By extremizing the Euclidean action, we obtain the following differential equation \begin{eqnarray}\label{ce} \frac{d^2\phi}{dr^2}+\frac{2}{r}\frac{d\phi}{dr}-\frac{dV}{d\phi}=0 ~, \end{eqnarray} with the boundary conditions \begin{equation} \left.\frac{d \phi}{dr}\right|_{r=0}=0, \quad \lim _{r \rightarrow \infty} \phi(r)=\phi_{\mathrm{false}} \equiv 0 ~. \end{equation} The equation of motion, eq.~(\ref{ce}), can be solved by the traditional overshooting/undershooting method~\cite{Apreda2002NPB}. In this work, we employ the $\textsf{CosmoTransitions~2.0.2}$ package~\cite{Wainwright2012PLB} to perform the numerical calculations of the bubble profile and the Euclidean action. Afterwards, we use eq.~\eqref{eq:bnc} to determine the nucleation temperature $T_{n}$. The first-order phase transition is usually completed after the percolation of the true vacuum bubbles. The production of GWs is significant at the percolation time (temperature), at which 34\% of the false vacuum has been converted to the true vacuum~\cite{Cai2017JCAP,Ellis2019JCAP,Wang2020JCAP}. In this work, we assume that percolation takes place soon after the nucleation of the true-vacuum bubbles, which leads to the commonly used condition $T_*\simeq T_n$, where $T_*$ is the GW generation temperature~\cite{Espinosa2008PRD,Ellis2020JCAP}. The stochastic GWs generated by the first-order phase transition can be characterized by two primary parameters~\cite{Kamionkowski1994PRD}.
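As an illustration of the overshooting/undershooting strategy (which $\textsf{CosmoTransitions}$ automates, together with the action integral), the following sketch solves eq.~(\ref{ce}) for a hypothetical quartic potential with a false vacuum at $\phi=0$; the potential and all numbers are illustrative only, not the clockwork $V_{\rm eff}$:

```python
from scipy.integrate import solve_ivp

# Hypothetical quartic potential with a false vacuum at phi = 0,
# a barrier top near phi ~ 0.87, and a true vacuum near phi ~ 2.88:
# V(phi) = 0.5 phi^2 - 0.5 phi^3 + 0.1 phi^4
def dV(phi):
    return phi - 1.5*phi**2 + 0.4*phi**3

PHI_TOP, PHI_TRUE = 0.867, 2.88

def profile(phi0, r_max=100.0):
    """Integrate phi'' + (2/r) phi' = dV/dphi from the release point phi0
    with phi'(0) = 0; a terminal event flags overshooting past phi = 0."""
    rhs = lambda r, y: [y[1], dV(y[0]) - 2.0*y[1]/r]
    overshoot = lambda r, y: y[0] + 0.5   # fires once phi runs past 0
    overshoot.terminal, overshoot.direction = True, -1
    return solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0],
                     events=overshoot, rtol=1e-8, atol=1e-10)

def shoot(iters=40):
    """Bisect the release point between the barrier top (undershoot side)
    and the true vacuum (overshoot side) to bracket the bounce."""
    lo, hi = PHI_TOP + 1e-3, PHI_TRUE - 0.01
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if profile(mid).status == 1:  # terminal event fired: overshoot
            hi = mid
        else:                         # field stalled before phi = 0: undershoot
            lo = mid
    return 0.5*(lo + hi)

phi0_bounce = shoot()
# S3(T) then follows from the bounce profile via the action integral, and Tn
# from S3/Tn ~ 4 ln(Tn/H); e.g. for Tn ~ 1 TeV and g* ~ 110 the right-hand
# side of eq. (bnc) is about 137.
```

The bisection converges to the release point of the bounce profile; the full calculation repeats this at each temperature and then integrates the profile to obtain $S_3(T)$.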
One of them is the latent heat normalized by the radiation energy density in the plasma \begin{eqnarray} \label{eq:ala} \alpha =\frac{\epsilon (T_n)}{\rho _{\rm rad}(T_n)} ~, \end{eqnarray} where $\rho_{\rm rad}=\pi^2g_{\ast}T^4/30$ is the radiation energy density in the plasma, and the latent heat associated with the phase transition is given by \begin{eqnarray} \epsilon (T)=T\frac{\partial \Delta V_{bs}(T)}{\partial T}- \Delta V_{bs}(T) ~, \end{eqnarray} where $\Delta V_{bs}(T)\equiv V_{\rm eff}(f(T),T)-V_{\rm eff}(0,T)$ is the potential difference between the broken phase and the symmetric phase at temperature $T$. The parameter $\alpha$ is related to the maximum available energy budget for gravitational wave emissions. The other parameter relevant to the GW production is defined as \begin{equation} \frac{\beta}{H_{n}}=\left.T_{n} \frac{d}{d T}\left(\frac{S_{3}(T)}{T}\right)\right|_{T=T_{n}} ~. \end{equation} The parameter $\beta$ represents the rate of time variation of the nucleation rate, whose inverse gives the duration of the bubble nucleation. Consequently, $\beta/H_n$ defines the characteristic frequency of the GW spectrum produced from the phase transition. \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/ab_3.pdf} \includegraphics[width=75mm,angle=0]{./fig/ab_4.pdf}\\ \includegraphics[width=75mm,angle=0]{./fig/ab_5.pdf} \includegraphics[width=75mm,angle=0]{./fig/ab_6.pdf} \caption{Distributions of the GW parameters $\alpha$ and $\beta/H_n$. The color bars indicate the nucleation temperature. 
For the upper left, upper right, lower left and lower right plots, the symmetry breaking scales are taken as $f=10^3$~GeV, $10^4$~GeV, $10^5$~GeV, and $10^6$~GeV, respectively.} \label{fig:ab} \end{figure} \begin{figure} \centering \includegraphics[width=110mm,angle=0]{./fig/dist_3.pdf}\\ \includegraphics[width=110mm,angle=0]{./fig/dist_4.pdf}\\ \includegraphics[width=110mm,angle=0]{./fig/dist_5.pdf}\\ \includegraphics[width=110mm,angle=0]{./fig/dist_6.pdf} \caption{Histograms of the parameters that satisfy the nucleation condition. The blue histograms represent the total samples, and the orange and green histograms denote those samples that are detectable by the BBO and LISA interferometers, respectively. From top to bottom, the symmetry breaking scales are fixed at $f=10^3$~GeV, $10^4$~GeV, $10^5$~GeV, and $10^6$~GeV. } \label{fig:dist_tn} \end{figure} In the scatter plots of Fig.~\ref{fig:ab}, we show the calculated results of $\alpha$ and $\beta/H_n$, with the corresponding nucleation temperature $T_n$ indicated by the colored dots. We find that about 15\% of the sample points that trigger a first-order phase transition (see Fig.~\ref{fig:dist_tc}) also satisfy the bubble nucleation condition. The plot shows that the nucleation temperature tends to be lower for larger $\alpha$, in agreement with the observation that the GWs can be stronger when they are produced at a lower nucleation temperature~\cite{Ellis2019JCAP,Ellis2020JCAP,Ellis2020}. The distributions of the parameters $\kappa_m$, $\kappa_l$, and $\varepsilon$ are shown by the blue histograms in Fig.~\ref{fig:dist_tn}. From the upper plots to the lower plots, the $U(1)$ symmetry breaking scale $f$ is taken as $10^3$~GeV, $10^4$~GeV, $10^5$~GeV, and $10^6$~GeV, respectively.
Compared with the parameter distributions for generating a first-order phase transition, as shown in Fig.~\ref{fig:dist_tc}, $\kappa_m$, $\kappa_l$, and $\varepsilon$ have been restricted to $\gtrsim 10^{-2}$ by the criterion of successful bubble nucleation. \section{Gravitational waves from phase transition} \label{sec:GWPT} \subsection{Gravitational wave sources} We first briefly review the numerical simulation results for the GW spectra produced from a first-order phase transition. During the percolation of the bubbles, there exist three processes that can produce GWs~\cite{Espinosa2010JCAP,No2011PRD,Caprini2016JCAP}: \begin{itemize} \item[1.] {\bf Bubble Collisions}. GWs produced from this process depend only on the dynamics of the scalar field. The GW spectrum from the collisions of bubble walls estimated by the numerical simulations~\cite{Huber2008JCAP} is given by \begin{equation} \label{eq:gwspt1} h^{2} \Omega_{\mathrm{col}}(f)=1.67 \times 10^{-5}\left(\frac{H_{n}}{\beta}\right)^{2}\left(\frac{\kappa_{\rm col}\alpha} {1+\alpha}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}}\left(\frac{0.11 v_{w}^{3}}{0.42+v_{w}^{2}}\right) \frac{3.8\left(f / f_{\mathrm{col}}\right)^{2.8}}{1+2.8\left(f / f_{\mathrm{col}}\right)^{3.8}} ~, \end{equation} where the red-shifted peak frequency of the GW spectrum from bubble collisions is \begin{equation} f_{\mathrm{col}}=16.5 \times 10^{-3} \mathrm{mHz}\left(\frac{0.62}{1.8-0.1 v_{w}+v_{w}^{2}}\right) \left(\frac{\beta}{H_{n}}\right)\left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}} ~. \end{equation} The efficiency factor $\kappa_{\rm col}$ indicates the fraction of the latent heat that is transformed into the kinetic energy of the bubbles. In the case of non-runaway bubbles, the bubble walls reach a terminal velocity and the latent heat transferred into the scalar field is negligible.
For runaway bubbles, however, most of the bubble energy is dissipated into the surrounding plasma and very little energy is deposited in the bubble walls~\cite{Bodeker2017JCAP}. Both cases lead to a negligible GW spectrum from the bubble collisions, and thus, in this work we do not take into account the contributions from bubble collisions. In the following, we will restrict ourselves to the case of non-runaway bubbles, in which the GWs can be effectively produced by the sound waves and turbulence. \item[2.] {\bf Sound waves.} They are generated after the bubble collisions. Numerical simulations indicate that the durations of sound waves and turbulence as active sources of GWs are typically much longer than the collisions of the bubble walls. The GW spectrum generated by sound waves propagating in the plasma is approximated by~\cite{Hindmarsh2015PRD} \begin{equation} \label{eq:gwspt2} h^{2} \Omega_{\mathrm{sw}}(f)=2.65 \times 10^{-6}\left(\frac{H_{*}}{\beta}\right)\left(\frac{\kappa_{\rm sw} \alpha} {1+\alpha}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}} v_{w} \left(\frac{f}{f_{\rm{sw}}}\right)^{3}\left(\frac{7}{4+3\left(f / f_{\mathrm{sw}}\right)^{2}}\right)^{7 / 2}, \end{equation} where the red-shifted peak frequency of the GW spectrum from sound waves is \begin{equation} f_{\mathrm{sw}}=1.9 \times 10^{-2} \mathrm{mHz} \frac{1}{v_{w}}\left(\frac{\beta}{H_{n}}\right) \left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}. \end{equation} The efficiency factor $\kappa_{\rm sw}$ indicates the fraction of the latent heat that is transformed into the bulk motion of the plasma. For non-runaway bubbles, the efficiency factor for the sound wave contribution is given by \begin{equation} \label{eq:ksw} \kappa_{\mathrm{sw}}\simeq \frac{\alpha}{0.73+0.083 \sqrt{\alpha}+\alpha} ~,~~{\rm for}~~v_w\sim 1.0 ~. \end{equation} \item[3.]
{\bf Turbulence.} Like the sound waves, turbulence forms in the plasma after the bubble collisions. Simulations show that only a small fraction $\delta \sim 5-10\%$ of the bulk motion from the bubble walls is converted into turbulence. Since the GWs from sound waves decay much faster, the GWs from turbulence could play a dominant role at high frequencies. The modeling of turbulence is far from settled. In this work, we adopt the following GW spectrum from turbulence~\cite{Caprini2009JCAP} \begin{equation} \label{eq:gwspt3} h^{2} \Omega_{\mathrm{turb}}(f)=3.35 \times 10^{-4}\left(\frac{H_{*}}{\beta}\right)\left(\frac{\kappa_{\mathrm{turb}} \alpha} {1+\alpha}\right)^{\frac{3}{2}}\left(\frac{100}{g_{*}}\right)^{1 / 3} v_{w} \frac{\left(\frac{f}{f_{\mathrm{turb}}}\right)^{3}}{\left[1+\frac{f}{f_{\mathrm{turb}}}\right]^{\frac{11}{3}}\left(1+\frac{8\pi f}{H_{0}}\right)} ~, \end{equation} where the red-shifted Hubble constant observed today is given by \begin{equation} H_{0}=16.5 \times 10^{-3} \mathrm{mHz}\left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}. \end{equation} The red-shifted peak frequency of the GW spectrum from turbulence is \begin{equation} \label{eq:fturb} f_{\text {turb }}=2.7 \times 10^{-2} \mathrm{mHz} \frac{1}{v_{w}}\left(\frac{\beta}{H_{n}}\right) \left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}. \end{equation} The efficiency factor for turbulence $\kappa_{\rm turb}$ is related to $\kappa_{\rm sw}$ by $\kappa_{\rm turb}=\delta\kappa_{\rm sw}$, where we take $\delta=0.1$ in this work.
\end{itemize} \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/GWspectraA.pdf} \includegraphics[width=75mm,angle=0]{./fig/GWspectraB.pdf}\\ \includegraphics[width=75mm,angle=0]{./fig/GWspectraC.pdf} \includegraphics[width=75mm,angle=0]{./fig/GWspectraD.pdf} \caption{GW spectra from the phase transition (curves) and various experimental sensitivities (colored patches) as a function of frequency. The solid, dashed, and dash-dotted curves represent the results in models $\rm M_1$, $\rm M_2$, and $\rm M_3$ (where M=A, B, C, and D). For the upper left, upper right, lower left and lower right plots, the symmetry breaking scales are taken as $f=10^3$~GeV, $10^4$~GeV, $10^5$~GeV, and $10^6$~GeV, respectively. The parameters used in these plots are summarized in Table~\ref{tab:i}.} \label{fig:GWsp} \end{figure} The amplitude and peak frequency of the GW spectrum also depend on the bubble wall velocity $v_w$, which is the expansion speed of the true vacuum. In this work, we take $v_w\simeq 1.0$ for the calculations of the GW spectra. The total stochastic GW spectrum is approximately given by adding up these three contributions: \begin{equation} h^{2} \Omega_{\mathrm{GW}} \simeq h^{2} \Omega_{\rm col}+h^{2} \Omega_{\mathrm{sw}}+h^{2} \Omega_{\mathrm{turb}}. \end{equation} Note that, as explained above, the bubble collision contribution has been neglected in our calculation.
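The fitting formulas above translate directly into code. The sketch below evaluates the sound-wave and turbulence contributions, eqs.~\eqref{eq:gwspt2} and \eqref{eq:gwspt3}, for assumed values of $\alpha$, $\beta/H_n$, and $T_n$ (frequencies in Hz, with $v_w=1$ and $g_*=110$ as in this work):

```python
import math

G_STAR, V_W = 110.0, 1.0  # values used in this work

def kappa_sw(alpha):
    # Sound-wave efficiency factor, eq. (ksw), valid for v_w ~ 1
    return alpha / (0.73 + 0.083*math.sqrt(alpha) + alpha)

def f_sw_peak(beta_over_H, Tn):
    # Red-shifted sound-wave peak frequency in Hz (1.9e-2 mHz = 1.9e-5 Hz)
    return 1.9e-5/V_W * beta_over_H * (Tn/100.0) * (G_STAR/100.0)**(1/6)

def omega_sw(f, alpha, beta_over_H, Tn):
    # h^2 Omega_sw(f), eq. (gwspt2)
    fp = f_sw_peak(beta_over_H, Tn)
    amp = 2.65e-6/beta_over_H * (kappa_sw(alpha)*alpha/(1+alpha))**2 \
          * (100.0/G_STAR)**(1/3) * V_W
    return amp * (f/fp)**3 * (7.0/(4.0 + 3.0*(f/fp)**2))**3.5

def omega_turb(f, alpha, beta_over_H, Tn, delta=0.1):
    # h^2 Omega_turb(f), eq. (gwspt3), with kappa_turb = delta * kappa_sw
    fp = 2.7e-5/V_W * beta_over_H * (Tn/100.0) * (G_STAR/100.0)**(1/6)
    H0 = 1.65e-5 * (Tn/100.0) * (G_STAR/100.0)**(1/6)  # red-shifted Hubble rate
    amp = 3.35e-4/beta_over_H * (delta*kappa_sw(alpha)*alpha/(1+alpha))**1.5 \
          * (100.0/G_STAR)**(1/3) * V_W
    return amp * (f/fp)**3 / ((1 + f/fp)**(11/3) * (1 + 8*math.pi*f/H0))

# Benchmark B1 of Table tab:i: alpha = 1.220, beta/H_n = 348.2, T_n = 969.2 GeV
fp_B1 = f_sw_peak(348.2, 969.2)
total = omega_sw(fp_B1, 1.220, 348.2, 969.2) + omega_turb(fp_B1, 1.220, 348.2, 969.2)
```

The $f^3$ rise below the peak and the high-frequency fall-off of both contributions follow from the shape factors in the last lines of each function.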
\subsection{Gravitational wave detections} For the experimental investigation of the stochastic GW signals, one often adopts the frequentist approach, in which the detectability of the signals is measured by the corresponding signal-to-noise ratio (SNR)~\cite{Caprini2016JCAP} \begin{equation} \rho=\sqrt{\mathcal{N}\mathcal{T}_{\rm obs} \int_{f_{\mathrm{min}}}^{f_{\mathrm{max}}} df\left[\frac{h^{2} \Omega_{\mathrm{GW}}(f)}{h^{2} \Omega_{\mathrm{exp}}(f)}\right]^{2}} ~, \end{equation} where $\mathcal{N}$ is the number of independent observatories of the experiment, $\mathcal{T}_{\rm obs}$ is the duration of the mission, and $h^2\Omega_{\mathrm{exp}}$ denotes the sensitivity of a GW experiment. Following ref.~\cite{Caprini2016JCAP}, we take the SNR threshold value $\rho_{\rm thr}=10$, above which the GW signal is detectable by the experiment. The future space-based GW interferometers, including LISA~\cite{LISA2017,LISA2019CQG}, Taiji~\cite{Hu2017NSR,Ruan2020NA}, ALIA~\cite{ALIA2014JPCS}, DECIGO~\cite{DECIGO2017}, and BBO~\cite{BBO2006CQG}, are able to explore GW signals with frequencies in the range of $\sim 10^{-5}-10^2$~Hz. Higher-frequency ($\sim 10-10^3$~Hz) GW signals are expected to be probed by ground-based GW observatories such as aLIGO~\cite{LIGO2019}, ET~\cite{ET2010CQG}, and CE~\cite{CE2017CQG}. We show the distributions of the parameters with their detectability by the BBO and LISA interferometers in Fig.~\ref{fig:dist_tn}. We use $\mathcal{N}=1$ for the auto-correlated experiment, LISA, while for the cross-correlated experiment, BBO, we take $\mathcal{N}=2$. Following ref.~\cite{Chiang2020JHEP}, we assume a mission duration of $\mathcal{T}_{\rm obs}=4$ years for both interferometers. The observation frequency ranges are set to $10^{-3}-10^{2}$~Hz and $10^{-5}-1$~Hz for BBO and LISA, respectively. The experimental sensitivities of BBO and LISA are summarized in appendix E of ref.~\cite{Chiang2020JHEP}.
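As a sanity check of the SNR definition, a minimal numerical sketch (with a hypothetical flat sensitivity curve in place of the actual BBO or LISA noise models of ref.~\cite{Chiang2020JHEP}):

```python
import numpy as np

YEAR = 3.156e7  # seconds

def snr(f, omega_signal, omega_exp, n_det=2, t_obs=4*YEAR):
    """SNR of a stochastic background against a sensitivity curve, with both
    h^2*Omega arrays sampled on the frequency grid f (in Hz)."""
    integrand = (omega_signal / omega_exp)**2
    # trapezoidal rule for the frequency integral
    integral = np.sum(0.5*(integrand[1:] + integrand[:-1]) * np.diff(f))
    return np.sqrt(n_det * t_obs * integral)

# If the signal sits exactly on the (hypothetical, flat) sensitivity curve over
# the whole band, the integrand is unity and rho = sqrt(N * T * (fmax - fmin)),
# far above the detection threshold rho_thr = 10.
f_grid = np.linspace(1e-3, 1.0, 2000)
rho = snr(f_grid, np.ones_like(f_grid), np.ones_like(f_grid))
```

In practice the signal array would be the total $h^2\Omega_{\rm GW}(f)$ from the previous subsection evaluated on the detector's frequency band.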
The orange and green histograms represent the detectable regions for BBO and LISA, respectively. As indicated in the figure, the BBO experiment can probe almost all of the parameter space that can successfully generate a first-order phase transition. The GW signals produced at the scale $f=10^{3}$~GeV in the clockwork model can be effectively detected by the LISA experiment. The nucleation temperature increases with $f$, leading to higher peak frequencies of the GW signals. As a result, it is difficult to probe the GW signals induced by the $U(1)$ symmetry breaking at a scale of $f=10^{6}$~GeV with the LISA interferometer. We will show in section~\ref{sec:GWDW} that the domain walls produced after the phase transition at the symmetry breaking scale $f\gtrsim 10^{6}$~GeV could dominate the energy density of the Universe, which is not compatible with cosmological observations. Thus, we do not further consider GW signals produced from the phase transition above the scale of $f=10^{6}$~GeV.
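The scaling of the peak frequency with $f$ can be checked directly against the benchmark points of Table~\ref{tab:i}; the short sketch below applies the sound-wave peak-frequency formula quoted in the previous subsection to four of the benchmarks (the pairing of benchmarks with detector bands is illustrative):

```python
def f_sw_peak(beta_over_H, Tn, g_star=110.0, v_w=1.0):
    # Red-shifted sound-wave peak frequency in Hz (1.9e-2 mHz = 1.9e-5 Hz)
    return 1.9e-5/v_w * beta_over_H * (Tn/100.0) * (g_star/100.0)**(1/6)

benchmarks = {   # (T_n [GeV], beta/H_n) from Table tab:i
    "A1": (115.4,    751.4),   # f = 10^3 GeV
    "B1": (969.2,    348.2),   # f = 10^4 GeV
    "C1": (8087.8,   660.1),   # f = 10^5 GeV
    "D3": (176070.9, 791.0),   # f = 10^6 GeV
}
peaks = {name: f_sw_peak(bH, Tn) for name, (Tn, bH) in benchmarks.items()}
# A1 peaks at tens of mHz (LISA/BBO band), while D3 peaks at tens of Hz,
# inside the ground-based (ET/CE/LIGO) band.
```

The monotonic rise of the peak frequency across A1, B1, C1, and D3 reflects the increase of $T_n$ with the symmetry breaking scale.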
\begin{table}[] \caption{A summary of models and parameters} \begin{tabular}{|c|c|l|l|l|l|l|l|l|} \hline \multicolumn{1}{|l|}{} & Models & $\kappa_m$ & $\kappa_l$ & $\varepsilon$ & $T_c$ {[}GeV{]} & $T_n$ {[}GeV{]} & $\alpha$ & $\beta/H_n$ \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Model A\\ $f=10^3$ GeV\end{tabular}} & $\rm A_1$ & 0.176 & 0.128 & 0.548 & 231.4 & 115.4 & 0.715 & 751.4 \\ \cline{2-9} & $\rm A_2$ & 0.120 & 0.092 & 0.094 & 207.1 & 126.6 & 0.133 & 2046.6 \\ \cline{2-9} & $\rm A_3$ & 0.449 & 0.332 & 0.309 & 417.9 & 383.3 & 0.025 & 5188.6 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Model B\\ $f=10^4$ GeV\end{tabular}} & $\rm B_1$ & 0.228 & 0.181 & 0.440 & 2287.3 & 969.2 & 1.220 & 348.2 \\ \cline{2-9} & $\rm B_2$ & 0.080 & 0.053 & 0.511 & 2524.1 & 645.5 & 0.261 & 1163.0 \\ \cline{2-9} & $\rm B_3$ & 0.101 & 0.071 & 0.086 & 1979.4 & 1597.2 & 0.054 & 4877.7 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Model C\\ $f=10^{5}$ GeV\end{tabular}} & $\rm C_1$ & 0.151 & 0.115 & 0.538 & 21686.2 & 8087.8 & 1.472 & 660.1 \\ \cline{2-9} & $\rm C_2$ & 0.029 & 0.019 & 0.092 & 9494.1 & 4607.1 & 0.763 & 9355.4 \\ \cline{2-9} & $\rm C_3$ & 0.317 & 0.227 & 0.807 & 29054.8 & 20595.1 & 0.307 & 907.8 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Model D\\ $f=10^{6}$ GeV\end{tabular}} & $\rm D_1$ & 0.018 & 0.015 & 0.018 & 70698.9 & 23487.0 & 1.658 & 39613.7 \\ \cline{2-9} & $\rm D_2$ & 0.071 & 0.055 & 0.173 & 136258.8 & 50724.8 & 1.698 & 3261.5 \\ \cline{2-9} & $\rm D_3$ & 0.262 & 0.187 & 0.714 & 270259.0 & 176070.9 & 0.380 & 791.0 \\ \hline \end{tabular} \label{tab:i} \end{table} In Fig.~\ref{fig:GWsp} we plot the GW spectrum as a function of frequency, with various choices of parameters shown in Table~\ref{tab:i}. The solid, dashed, and dash-dotted curves depict the spectra expected for model $\rm M_1$, $\rm M_2$, and $\rm M_3$, where M=A, B, C, and D. 
The upper left, upper right, lower left and lower right plots assume symmetry breaking scales of $f=10^3$~GeV, $10^4$~GeV, $10^5$~GeV, and $10^6$~GeV, respectively. As shown in the plots, the peak frequency increases with the symmetry breaking scale. We observe that the $U(1)$ symmetry breaking scales in the range of $10^3-10^6$~GeV are all within the reach of the BBO and ALIA interferometers. The LISA and Taiji interferometers can probe lower-frequency GW signals, corresponding to the symmetry breaking scale $f\lesssim 10^4$~GeV. On the other hand, the GW signals from the scale $f=10^6$~GeV could be detected by the ground-based GW observatories ET and CE. \subsection{LIGO searches on stochastic gravitational waves} \begin{figure} \centering \includegraphics[width=110mm,angle=0]{./fig/LIGOdist_5.pdf}\\ \includegraphics[width=110mm,angle=0]{./fig/LIGOdist_6.pdf} \caption{Histograms of the parameters that satisfy the nucleation condition. The grey histograms represent the total samples, and the blue, orange, and green histograms denote those samples that can be probed by the LIGO design, O3, and O2 runs, respectively. The top (bottom) row assumes the symmetry breaking scale to be $10^5$~GeV ($10^6$~GeV).} \label{fig:dist_LIGO} \end{figure} We have shown that for the symmetry breaking scale $f\sim 10^{6}$~GeV, the peak frequencies of the GW signals are right in the range covered by the ground-based GW observatories. Searches for the isotropic stochastic GW background have been undertaken by the LIGO and Virgo Collaborations. The results from a cross-correlation analysis of the data from the first two observing runs (O1 and O2) of Advanced LIGO are presented in refs.~\cite{LIGOSGW2018PRL,LIGOSGW2019PRD}. Since no evidence for a stochastic background was found, an upper limit on the normalized energy density in GWs of $\Omega_{\rm GW}<6.0\times 10^{-8}$ at 25 Hz (at the 95\% confidence level, for a flat background) has been obtained.
Here we take advantage of the results from the Advanced LIGO O2 run to put constraints on the clockwork axion model. We will also estimate the detection prospects for the ongoing Advanced LIGO observing runs, including the third phase (O3) and the design phase. The second observing run, O2, was completed in 2017 and ran for approximately 9 months. Following ref.~\cite{LIGOSGW2018PRL}, we use a duration of 12 months for the O3 run and assume 24 months for the design phase (2022+). The GW background may be detectable with an $\rm SNR=3$~\cite{LIGOSGW2018PRL}. \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/epkl_5.pdf} \includegraphics[width=75mm,angle=0]{./fig/epkl_6.pdf} \caption{Scatter plots of the parameter distributions on the $\varepsilon-\kappa_l$ plane. The grey scatter dots represent the total samples that satisfy the nucleation condition, while the blue, orange, and green scatter dots denote those samples that can be probed by the LIGO design, O3, and O2 runs, respectively. We fix $f=10^5$~GeV and $10^6$~GeV in the left and right plots, respectively.} \label{fig:epkl} \end{figure} We calculate the GW signals from the vacuum phase transitions at all sites, assuming $N=10$, and compare them with the LIGO sensitivities. The main results are shown in Fig.~\ref{fig:dist_LIGO} and Fig.~\ref{fig:epkl}. The green, orange and blue histograms in Fig.~\ref{fig:dist_LIGO} represent the detection ranges of the O2, O3 and design phases, respectively. The grey histograms are the total samples that can generate a successful phase transition. We fix the symmetry breaking scale to $f=10^{5}$~GeV and $10^6$~GeV in the upper and lower plots of Fig.~\ref{fig:dist_LIGO}, respectively. We further show the detectable regions for the LIGO run phases on the $\varepsilon-\kappa_l$ plane in Fig.~\ref{fig:epkl}.
Since no evidence for a stochastic GW background was found in the O2 run of LIGO, we interpret the green histograms in Fig.~\ref{fig:dist_LIGO} (and scatter points in Fig.~\ref{fig:epkl}) as the parameter regions that have been constrained by the LIGO O2 observations. Combining Fig.~\ref{fig:dist_LIGO} and Fig.~\ref{fig:epkl}, we observe that for the symmetry breaking scale of $f=10^{5}$~GeV, the parameter space with $\kappa_m\sim 0.06-0.001$, $\kappa_l\sim 0.04-0.001$, and $\varepsilon\sim 0.1-0.01$ has been excluded by the LIGO O2 run. Nearly half of the parameter space for $f=10^5$~GeV can be further tested by the LIGO O3 and design phases. \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/klb_5.pdf} \includegraphics[width=75mm,angle=0]{./fig/epb_5.pdf} \caption{The distribution of $\beta/H_n$ as a function of $\kappa_l$ (left) and $\varepsilon$ (right), with the color bar indicating the nucleation temperature. We fix $f=10^5$~GeV in both plots.} \label{fig:klepb} \end{figure} To see why lower values of $\kappa_m$, $\kappa_l$, and $\varepsilon$ for $f=10^5$~GeV can be effectively probed by the LIGO O2 run, we plot in Fig.~\ref{fig:klepb} the parameter $\beta/H_n$ as a function of the parameters $\kappa_l$ (left) and $\varepsilon$ (right), with the color indicating the nucleation temperature. We observe that $T_n$ tends to increase with both $\kappa_l$ and $\varepsilon$, while $\beta/H_n$ tends to decrease as $\kappa_l$ and $\varepsilon$ increase. For the parameters in the range accessible to the O2 run, we find $T_n\lesssim 10^4$~GeV and $\beta/H_n\gtrsim 5\times 10^3$. With eq.~\eqref{eq:fturb}, the peak frequencies for these parameters fall in the range of $10-100$~Hz, which is the most sensitive frequency band of LIGO. However, the amplitude of the GW signal is suppressed by a large value of $\beta/H_n$, since $h^2\Omega_{\rm GW}$ is inversely proportional to $\beta/H_n$ (with bubble collisions being neglected).
For the case of $f=10^{6}$~GeV, the LIGO sensitive frequency band is reached with a higher nucleation temperature, $T_n\gtrsim 10^5$~GeV, and a lower $\beta/H_n\lesssim 5\times 10^{2}$. Thus, the GW signals from symmetry breaking at the scale $f=10^{6}$~GeV can be sufficiently loud for the LIGO O2 run, as indicated in Fig.~\ref{fig:dist_LIGO} and Fig.~\ref{fig:epkl}. We observe that for $f=10^{6}$~GeV, most of the parameter space has been excluded by the LIGO O2 run, while the remaining regions can be further tested by the LIGO O3 and design phases. \section{Gravitational waves from domain wall annihilation} \label{sec:GWDW} \subsection{Gravitational wave spectrum} A network of cosmic strings and domain walls forms after the phase transition of the clockwork axion model. Numerical simulations~\cite{Higaki2016JHEPb} show that for a large number of sites $N$ ($\gtrsim 3$), this string-wall network can survive until the QCD phase transition. The QCD instanton effects at $T\lesssim 1$~GeV give rise to the QCD axion potential, $V_{\rm bias}\sim \Lambda_{\rm QCD}^4$ (where $\Lambda_{\rm QCD}=(332\pm 17)$~MeV~\cite{Tanabashi2018PRD} is the QCD confinement scale), which serves as an energy bias breaking the degeneracy of the discrete vacua and thus leads to the annihilation of the domain walls. In the scaling regime for long-lived domain walls, the evolution of the energy density of domain walls can be parameterized as \begin{equation} \label{eq:rhowall} \rho_{\rm wall}(t)=\mathcal{A}\frac{\sigma}{t} ~, \end{equation} where $\sigma\simeq 8m_A^2f^2$ is the tension of the domain wall~\cite{Higaki2016JHEPb,Long2018JHEP}, $t=1/(2H)$, and $\mathcal{A}\sim 1.0$ from the analysis of numerical simulations~\cite{Kawasaki2015PRD}.
The annihilation of domain walls becomes significant when the tension of the domain walls is comparable to the volume pressure $p_{\rm V}\sim V_{\rm bias}$, and the annihilation temperature of the domain walls is given by~\cite{Saikawa2017} \begin{equation} T_{\rm ann}\simeq 7.15\times 10^{-2}~{\rm GeV}~\varepsilon^{-1/4}\left(\frac{g_{*}\left(T_{\mathrm{ann}}\right)}{10}\right)^{-1/4} \left(\frac{f}{100~\rm{TeV}}\right)^{-3/2}\left(\frac{\Lambda_{\rm QCD}}{100~\rm{MeV}}\right)^{2}. \end{equation} Eq.~\eqref{eq:rhowall} shows that the energy density of domain walls in the scaling regime decreases as $\propto t^{-1}$. This decay is slower than that of dust, $\propto t^{-3/2}$, and of radiation, $\propto t^{-2}$. From the condition $\rho_c=\rho_{\rm wall}$, where $\rho_c$ is the critical density of the Universe, the energy density of the Universe would eventually be dominated by the domain walls at the temperature \begin{equation} T_{\rm dom}=5.44\times 10^{-2}~{\rm GeV}~\varepsilon^{1/4}\left(\frac{g_{*}\left(T_{\mathrm{ann}}\right)}{10}\right)^{-1/4} \left(\frac{f}{100~\rm{TeV}}\right)^{3/2}. \end{equation} Requiring that the domain walls annihilate before they dominate the Universe, {\it i.e.,} $T_{\rm ann}\gtrsim T_{\rm dom}$, we have \begin{equation} f\lesssim 100~{\rm TeV}~\varepsilon^{-1/6}\left(\frac{\Lambda_{\rm QCD}}{100~\rm{MeV}}\right)^{2/3}. \end{equation} We thus find an upper bound on the symmetry breaking scale, $f\lesssim 400$~TeV. The production of GWs from the annihilation of domain walls has been studied in the literature.
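These closed-form estimates are easy to evaluate numerically. A minimal sketch (with $f$ passed in TeV; note that the prefactor of the $f$ bound is rounded to $100$~TeV, as in the expression above, so the bound is conservative by an $\mathcal{O}(1)$ factor):

```python
LAMBDA_QCD = 0.332  # GeV, central value quoted in the text

def T_ann(eps, f_TeV, g_star=10.0):
    """Domain-wall annihilation temperature in GeV."""
    return 7.15e-2 * eps**-0.25 * (g_star/10.0)**-0.25 \
           * (f_TeV/100.0)**-1.5 * (LAMBDA_QCD/0.1)**2

def T_dom(eps, f_TeV, g_star=10.0):
    """Temperature at which the walls would dominate the energy density, in GeV."""
    return 5.44e-2 * eps**0.25 * (g_star/10.0)**-0.25 * (f_TeV/100.0)**1.5

def f_max_TeV(eps):
    """Upper bound on f, in TeV, from requiring T_ann > T_dom."""
    return 100.0 * eps**(-1/6) * (LAMBDA_QCD/0.1)**(2/3)

# For eps in the scanned range, f_max lands at a few hundred TeV, and the
# bound is self-consistent: annihilation precedes wall domination there.
```

For example, $\varepsilon=0.1$ gives $f_{\rm max}\approx 3\times 10^2$~TeV, consistent with the few-hundred-TeV bound quoted above.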
The peak amplitude of the GW spectrum is produced at the annihilation time of the domain walls, \begin{equation} \Omega_{\mathrm{GW}}\left(\nu_{\mathrm{peak}}\left(t_{\mathrm{ann}}\right)\right) \simeq \frac{8 \pi \tilde{\epsilon}_{\mathrm{gw}} G^{2} A^{2} \sigma^{2}}{3 H_{\mathrm{ann}}^{2}} ~, \end{equation} where the efficiency of the gravitational wave emission is $\tilde{\epsilon}_{\rm{gw}} \simeq 0.7 \pm 0.4$~\cite{Hiramatsu2014JCAP} and $A\simeq N$ from the numerical simulations~\cite{Higaki2016JHEPb}. With the expansion of the Universe, the amplitude is diluted as $\propto R(t)^{-4}$, where $R(t)$ is the scale factor of the Universe. The peak amplitude of the GW spectrum today is given by \begin{equation} h^2\Omega_{\rm{GW}}^{\rm peak}\left(t_{0}\right) =6.45 \times 10^{-6} \varepsilon \left(\frac{\tilde{\epsilon}_{\rm{gw}}}{0.7}\right )\left( \frac{A}{10} \right )^2 \left(\frac{g_{* s}\left(T_{\rm{ann}}\right)}{10}\right)^{-4/3}\left(\frac{f}{100~\rm{TeV}}\right)^{6} \left(\frac{T_{\rm{ann}}}{0.1~\rm{GeV}}\right)^{-4}, \end{equation} with the red-shifted peak frequency, obtained from $\nu_{\rm{peak }}\left(t_{\rm{ann }}\right) \simeq H_{\rm{ann }}$, given by \begin{equation} \label{eq:vpeak} \nu_{\rm peak}(t_0)\simeq 1.1 \times 10^{-8} \mathrm{~Hz}\left(\frac{g_{*}\left(T_{\mathrm{ann}}\right)}{10}\right)^{1/2} \left(\frac{g_{*s}\left(T_{\mathrm{ann}}\right)}{10}\right)^{-1 / 3}\left(\frac{T_{\mathrm{ann}}}{0.1~\rm{GeV}}\right). \end{equation} The analysis of the numerical simulations in ref.~\cite{Hiramatsu2014JCAP} shows that the frequency dependence of the GW spectrum can be approximately parameterized as \begin{equation} h^2\Omega_{\rm GW}= \begin{cases} \displaystyle h^2\Omega_{\rm GW}^{\rm peak}\left ( \nu \over \nu_{\rm peak} \right )^3~{\rm for~\nu<\nu_{\rm peak}} \\ \displaystyle h^2\Omega_{\rm GW}^{\rm peak}\left ( \nu_{\rm peak} \over \nu \right )~~{\rm for~\nu>\nu_{\rm peak}}.
\end{cases} \end{equation} \subsection{NANOGrav pulsar timing observations} The byproducts of the clockwork axion phase transition, the domain walls, dominantly annihilate at a temperature around $T_{\rm ann}\sim 0.1-1$~GeV, right after the QCD confinement. From eq.~\eqref{eq:vpeak}, we see that the peak frequency of GWs from domain wall annihilation is around $\sim 10^{-8}$~Hz, falling in the frequency band probed by pulsar timing observations. Searches for a nHz isotropic stochastic GW background via the observation of pulsars over long time spans are being performed by, for example, the European Pulsar Timing Array (EPTA)~\cite{EPTA2015} and NANOGrav~\cite{NANOGrav2018}. The constraints from 18 years of observations by EPTA and over 11 years by NANOGrav have restricted the GW background amplitude in the frequency band of $2-10$~nHz to $h^2\Omega_{\rm GW}\lesssim 5\times 10^{-9}$ and $10^{-9}$, respectively. These results constrain the symmetry breaking scale to $f\lesssim 300$~TeV, slightly tighter than the constraint from requiring domain walls to annihilate before they dominate the energy density of the Universe. \begin{figure} \centering \includegraphics[width=75mm,angle=0]{./fig/nano.pdf} \includegraphics[width=75mm,angle=0]{./fig/totgw.pdf} \caption{The GW spectra and various experimental sensitivities as a function of frequency. In both plots, the blue data points represent the analysis of the NANOGrav 12.5-year data, and the parameters are fixed at $\kappa_m=0.030$, $\kappa_l=0.024$, and $\varepsilon=0.1$. In the left plot, the red dashed, solid, and dash-dotted lines represent the GW spectrum from domain wall annihilation with a symmetry breaking scale of $f=300$~TeV, $f=200$~TeV, and $f=100$~TeV, respectively.
The black line in the right plot shows the total GW signal from domain wall annihilation and the first-order phase transition.} \label{fig:NANOGrav} \end{figure} The NANOGrav Collaboration has recently released its analysis of the 12.5-year data~\cite{NANOGrav2020}, in which a signal consistent with a common stochastic process was found within the frequency band $\sim 1-10$~nHz. Various possible sources of the signal have been proposed, including cosmic strings~\cite{Ellis2020NANO,Blasi2020,Buchmuller2020PLB,Samanta2020}, first-order cosmological phase transitions~\cite{Nakai2020,Addazi2020,Neronov2020,Bian2020,Li2020,Paul2020}, coherent oscillation of axionic fields~\cite{Ratzinger2020,Namba2020}, and primordial black holes~\cite{Vaskonen2020,Domenech2020}. As indicated by eq.~\eqref{eq:vpeak}, the nHz stochastic GWs can naturally be produced by the annihilation of domain walls in the clockwork axion model~\cite{Higaki2016JHEPb}. In the left plot of Fig.~\ref{fig:NANOGrav}, we show the GW relic energy density from the domain wall annihilation, taking $\varepsilon=0.1$. The red dashed, solid, and dash-dotted lines represent the results with a symmetry breaking scale of $f=300$~TeV, $f=200$~TeV, and $f=100$~TeV, respectively. We find that GWs from the model with a scale of $f=200$~TeV can explain the NANOGrav 12.5-year data (blue points). In addition to the nHz GW signals, the first-order phase transition of the clockwork axion model at the scale of $f=200$~TeV can also produce a GW signal peaked around $\sim 10$~Hz. We show the result in this scenario with the black line in the right plot of Fig.~\ref{fig:NANOGrav}. As we have shown in the previous section, most of the parameter space for the first-order phase transition at the scale $f=10^{3}$~TeV has been excluded by the LIGO O2 run. The O3 and the design phases of LIGO could probe half of the parameter space of the phase transition at the scale around $10^{2}$~TeV.
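The peak-amplitude and peak-frequency scalings given earlier in this section can be evaluated numerically. Below is a minimal sketch in Python; the prefactors and fiducial values are taken directly from the equations above, while the function names and default arguments are our own illustrative choices.

```python
import numpy as np

def nu_peak_today(T_ann_GeV, g_star=10.0, g_star_s=10.0):
    """Red-shifted peak frequency (Hz) of domain-wall GWs, eq. (eq:vpeak) scaling."""
    return (1.1e-8 * (g_star / 10.0)**0.5
            * (g_star_s / 10.0)**(-1.0 / 3.0) * (T_ann_GeV / 0.1))

def omega_gw_peak_today(f_TeV, T_ann_GeV, eps=0.1, eps_gw=0.7, A=10.0, g_star_s=10.0):
    """Peak amplitude h^2 Omega_GW today, using the quoted scaling relation."""
    return (6.45e-6 * eps * (eps_gw / 0.7) * (A / 10.0)**2
            * (g_star_s / 10.0)**(-4.0 / 3.0)
            * (f_TeV / 100.0)**6 * (T_ann_GeV / 0.1)**(-4))

def omega_gw_spectrum(nu, nu_pk, omega_pk):
    """Broken power law: rises as nu^3 below the peak, falls as 1/nu above it."""
    nu = np.asarray(nu, dtype=float)
    return np.where(nu < nu_pk, omega_pk * (nu / nu_pk)**3, omega_pk * (nu_pk / nu))
```

For $T_{\rm ann}=0.1$~GeV the peak sits near $10^{-8}$~Hz, in the PTA band, and the $f^6$ scaling makes the amplitude for $f=200$~TeV a factor of $2^6=64$ larger than for $f=100$~TeV.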
We thus expect another signal in the LIGO O3 run if the NANOGrav 12.5-year data are indeed induced by the annihilation of domain walls from the phase transition of the clockwork axion at the scale of $f=200$~TeV. \section{Conclusions} \label{sec:summary} We have shown in this work the opportunity to explore the clockwork axion model with gravitational wave (GW) observations. It is well known that GWs can be produced in violent first-order cosmological phase transitions; however, in the conventional QCD axion models the PQ phase transition is second-order. We show that the PQ phase transition can be first-order when the dimension-6 operator is included in the scalar potential. Based on a comprehensive scan of the parameter space, we find that the parameters $\kappa_m$, $\kappa_l$, and $\varepsilon$ must lie within the range $10^{-2}-1.0$ in order to trigger a first-order phase transition while allowing successful nucleation of the true vacuum bubble. We show that the GWs from the PQ phase transition at scales in the range of $10^3-10^6$~GeV can be probed by the BBO and ALIA interferometers. The LISA and Taiji interferometers could probe GW signals of lower frequencies, corresponding to a symmetry breaking scale of $f\lesssim 10^4$~GeV. On the other hand, the GW signals from the scale $f\gtrsim 10^5$~GeV could be detected by the ground-based GW observatories ET and CE. In fact, we find that for the symmetry breaking scale of $f=10^{5}$~GeV, the parameter space of $\kappa_m\sim 0.06-0.001$, $\kappa_l\sim 0.04-0.001$, and $\varepsilon\sim 0.1-0.01$ has been excluded by the LIGO O2 run. Nearly half of the parameter space for $f=10^5$~GeV can be further tested by the LIGO O3 and design phases. For the scale of $f=10^{6}$~GeV, we find that most of the parameter space has been excluded by the LIGO O2 run, with the remaining regions further testable by the LIGO O3 and design phases.
The QCD axion potential, which arises after the QCD confinement, can serve as the bias potential that annihilates the domain walls produced in the PQ phase transition. We show that the PQ scale should satisfy $f \lesssim 4\times 10^{5}$~GeV to ensure that the domain walls annihilate before they dominate the energy density of the Universe. We find that the GWs from the annihilation of domain walls at the scale $f=2\times 10^5$~GeV can induce the stochastic GW background signals indicated by the NANOGrav 12.5-year observation. The ongoing LIGO O3 run has a chance to observe the GW signals from the first-order PQ phase transition at this scale in the clockwork axion model. The future space-based GW interferometers, including LISA, Taiji, ALIA, DECIGO, and BBO, the ground-based GW experiments ET and CE, as well as the pulsar timing arrays, such as PPTA~\cite{Hobbs2013}, IPTA~\cite{Hobbs2013}, and SKA~\cite{Janssen2015}, will provide further opportunities to test the clockwork axion model. \acknowledgments{ BQL thanks Da Huang for helpful comments and suggestions. This work was supported in part by the Ministry of Science and Technology (MOST) of Taiwan under Grant Nos.~MOST-108-2112-M-002-005-MY3 and 109-2811-M-002-550. } \bigskip
\section{Introduction} \label{sec:intro} M dwarfs are intrinsically faint objects, but they dominate the Galaxy by number, making up $\sim 70\%$ of its stars \citep{2010AJ....139.2679B}. \cite{1999ApJ...519..802K} discovered the spectral classes $L$ and $T$ using the Two Micron All Sky Survey (2MASS; \citealt{2006AJ....131.1163S}). $T$-type objects are entirely brown dwarfs, while main-sequence (MS) stars earlier than type M are hydrogen-burning stars. M dwarf stars, which lie at the low-mass end of the MS in the Hertzsprung-Russell (H-R) diagram, are in between the former and the latter and show features of both. The M dwarf stars have lifetimes much longer than the Hubble time \citep{2010AJ....139.2679B}, which makes them valuable for tracing the chemical and dynamical history of the Galaxy. Previous studies have also used M dwarf stars to determine the initial mass function (IMF) at the low-mass end \citep{2008AJ....136.1778C, 2010AJ....139.2679B}. Furthermore, M dwarfs are primary candidates for exoplanet searches \citep{2018A&A...609A.117T}. Accurate and precise parameters, including effective temperatures and chemical compositions of planet-host stars, play a key role in the search for habitable exoplanets. M dwarf stars are classified using spectral features at wavelengths from 6300 to 9000 \rm{\AA} \citep{boeshaar1976, kirk1992, boeshaar1985}. One of the main difficulties is that the prominent molecular absorption bands in the spectra of M dwarfs are hard to predict with atmospheric models \citep{2015ApJ...804...64M}. Moreover, obtaining high-quality spectra for these faint objects is challenging. Generally, the measurement of equivalent widths (EWs) and spectral synthesis are the classical and most common methods to derive stellar parameters. However, synthesis is the favored method for measuring the stellar parameters of cool stars, since EWs are difficult to measure in their crowded absorption-line spectra (reviewed by \citealt{2019ARA&A..57..571J}).
With the development of new facilities, large surveys such as the Sloan Digital Sky Survey, Apache Point Observatory Galactic Evolution Experiment (SDSS/APOGEE; \citealt{2017AJ....154...94M}), the Transiting Exoplanet Survey Satellite (\textit{TESS}; \citealt{2018AJ....155..180M}) mission, and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; \citealt{2012RAA....12.1197C, 2012RAA....12..735D, 2012RAA....12..723Z}) provide ever-growing photometric and spectroscopic datasets of M dwarf stars. The LAMOST survey has provided nine million spectra in its Data Release 6 (DR6) at R $\sim 1800$, among which $\sim$ 600,000 spectra are M dwarf stars. However, most of these stars still lack stellar parameters. Many efforts have been made to derive the effective temperatures and chemical abundances of M dwarfs from high-resolution spectra, either in the optical or in the near-infrared (NIR) band \citep{2005MNRAS.356..963W, 2014A&A...564A..90R, 2018A&A...620A.180R, 2017ApJ...851...26V, 2019ApJ...871...63M}. The APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) measurements of $T_{eff}$ and metallicity \citep{2016AJ....151..144G} for M dwarfs have been determined with precisions of $\sim$ 100K and 0.18 dex, respectively \citep{2016MNRAS.460.2611S}, by fitting atmospheric models. Furthermore, the APOGEE data of SDSS Data Release 16 (DR16) \citep{2020AJ....160..120J} use new atmospheric grids that extend the effective temperature estimates down to 3000K. \citeauthor{2020ApJ...892...31B} (2020, hereafter B20) used \textit{The Cannon} \citep{2015ApJ...808...16N, 2017ApJ...836....5H} to derive the effective temperatures and metallicities for 5,875 APOGEE M dwarfs, with 87 sources from \citeauthor{2015ApJ...804...64M} (2015, hereafter M15) as the training dataset.
M15 estimated effective temperatures by comparing spectra with the BT-Settl atmospheric models \citep{2013MSAIS..24..128A} and calibrated their results using stars with determinations from interferometry \citep{2012ApJ...757..112B, 2013ApJ...779..188M}. Furthermore, \cite{2018A&A...620A.180R} determined parameters of 45 M dwarfs using high-resolution H-band spectra by fitting BT-Settl model grids. \cite{2020arXiv201200915D} obtained stellar parameters of five M dwarf systems by fitting BT-Settl atmospheric models and tested current stellar models. \cite{2020AJ....159..193G} presented effective temperatures, radii, masses, and luminosities for 29,678 M dwarfs from LAMOST DR1 using \textit{The Cannon}, with a typical $T_{eff}$ uncertainty of $\sim$110 K. They used stellar labels from the {\it TESS} Cool Dwarf Catalog \citep{2018AJ....155..180M}, in which $T_{eff}$ was determined from the color–$T_{eff}$ relations in M15. In other words, the effective temperatures of all previous studies rely on the BT-Settl atmospheric model. It is noted that the parameters derived in previous works display substantial systematic errors, since different works used different spectral regions and lines with various methods \citep{2019ARA&A..57..571J}. Recently, new techniques to derive stellar parameters with machine learning algorithms \citep{2019ApJ...879...69T, 2019ApJS..245...34X} have become efficient for the large datasets of spectroscopic surveys. Data-driven methods have been shown to be promising solutions for cool-star parameterization \citep{2019ARA&A..57..571J}. These methods perform well at transferring the known information from training datasets to entire datasets. In this work, we build a data-driven model for LAMOST spectra based on the Stellar LAbel Machine (SLAM; \citealt{2020ApJS..246....9Z}), trained with APOGEE stellar labels and with BT-Settl model atmospheres and synthetic spectra \citep{2013MSAIS..24..128A}, to estimate the stellar parameters of the entire dataset of LAMOST M dwarf stars.
This paper is organized as follows. In section \ref{sec:method}, we introduce how SLAM works. In section \ref{sec:data}, we describe the M dwarf spectra selected from the LAMOST and \textit{Gaia} surveys, as well as the training datasets from the APOGEE survey and the BT-Settl model. We then present the results in section \ref{sec:results} and compare them with previous works. In section \ref{sec:discussion}, we discuss the caveats of the results, assess their robustness and performance, and draw conclusions. \section{Method} \label{sec:method} Stellar LAbel Machine (SLAM), developed by \cite{2020ApJS..246....9Z}, has shown good performance in determining the stellar labels of LAMOST DR5. It is a data-driven model based on Support Vector Regression (SVR) \citep{vapnik1997support}, a robust non-linear regression model. The data-driven approach has been demonstrated to be one of the most practical ways to measure the stellar parameters of M dwarfs. Additionally, LAMOST data are well suited to data-driven methods, because the standard EW-based parameter estimation for cool stars is hard to perform on low-resolution spectra \citep{2019ARA&A..57..571J}. Meanwhile, the sheer size of the LAMOST dataset demands fast data-driven methods. \subsection{Support Vector Regression} \label{subsec:SVR} The support-vector machine (SVM, also support-vector network; \citealt{cortes1995support}) is one of the most important supervised machine learning algorithms for classification and regression. Its regression variant, support-vector regression (SVR), has been used in many astronomical studies, particularly in spectral data analysis \citep{2012MNRAS.426.2463L, 2014ApJ...790..110L, 2015ApJ...807....4L, 2015RAA....15.1137L}.
\subsection{SLAM} \label{subsec:SLAM} SLAM has three hyper-parameters, two of which, the penalty level ($C$) and the tube radius ($\epsilon$), come from the underlying SVR algorithm. The third one ($\gamma$) is the width of the radial basis function (RBF), the kernel adopted by SLAM. The architecture of SLAM consists of 3 steps: \begin{itemize} \item [1.]\textbf{Pre-processing}. We normalize the spectra of the training data; at the same time, we standardize both stellar labels and spectral fluxes so that their mean is 0 and variance is 1; \item [2.]\textbf{Training}. We train an SVR model at each wavelength pixel using the training dataset, with the stellar parameters as independent variables and the flux at the given wavelength as the dependent variable; \item [3.]\textbf{Prediction}. We apply the optimized SVR models to predict the stellar labels for observed spectra. \end{itemize} To choose the best-fit hyper-parameters at each wavelength (the training procedure), SLAM minimizes the $k$-fold cross-validated mean squared error (CV-MSE), which is defined as \begin{equation} \label{MSE} MSE_{j} = \frac{1}{m} \sum_{i=1}^{m}[f_j(\vec{\theta_{i}}) - f_{i,j}] ^ 2, \end{equation} where $\vec{\theta_i} = (T_{eff,i}, \log{g}_i, [M/H]_i)$ denotes the stellar label vector of the $i$th star in the training data, $f_j(\vec{\theta_i})$ is the model prediction for the $j$th pixel as a function of $\vec{\theta_i}$, and $f_{i,j}$ is the observed flux at the $j$th pixel of the $i$th training spectrum. In the prediction procedure, the posterior probability density function of $\vec{\theta}$ for an observed spectrum can be written as \begin{equation} p(\vec{\theta}| f_{obs}) \propto p(\vec{\theta}) \prod_{j=1}^n p(f_{j,obs}|\vec{\theta}), \label{posterior} \end{equation} where $p(f_{j,obs}|\vec{\theta})$ is the likelihood of the spectral flux $f_{j,obs}$ varying with $\vec{\theta}$ based on the trained SVR model and $p(\vec{\theta})$ is the prior of $\vec{\theta}$.
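The per-pixel training in steps 1--2, including the CV-MSE hyper-parameter search over $(C, \epsilon, \gamma)$, can be sketched with scikit-learn's \texttt{SVR}. The toy labels, pixel count, and hyper-parameter grids below are illustrative assumptions, not the actual SLAM configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_stars, n_pix = 80, 5                       # toy training-set sizes (illustrative)
labels = rng.uniform([3000, 4.5, -1.0], [4500, 5.5, 0.0], size=(n_stars, 3))

# Step 1: standardize stellar labels to zero mean and unit variance.
X = StandardScaler().fit_transform(labels)

# Toy "spectra": each pixel is a smooth nonlinear function of the labels.
W = rng.normal(size=(3, n_pix))
flux = np.tanh(X @ W) + 0.01 * rng.normal(size=(n_stars, n_pix))
Y = StandardScaler().fit_transform(flux)     # standardized fluxes

# Step 2: one SVR per wavelength pixel; hyper-parameters (C, epsilon, gamma)
# are chosen by minimizing the cross-validated MSE, as in eq. (MSE).
grid = {"C": [1.0, 10.0], "epsilon": [0.01, 0.1], "gamma": [0.1, 1.0]}
models = [GridSearchCV(SVR(kernel="rbf"), grid,
                       scoring="neg_mean_squared_error", cv=5).fit(X, Y[:, j])
          for j in range(n_pix)]

# Step 3 inverts this forward model: given an observed spectrum, the labels
# maximize the posterior. Here we only evaluate the forward prediction for
# the first five training stars to show the model is usable.
pred = np.column_stack([m.predict(X[:5]) for m in models])
```

The trained list `models` plays the role of the per-pixel forward model; actual label inference then optimizes over $\vec{\theta}$ rather than calling `predict` on known labels.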
The best estimate of the stellar labels can be found at the maximum of the posterior probability $p(\vec{\theta}| f_{obs})$. In practice, the logarithmic form of Eq.~(\ref{posterior}) below is used, adopting a Gaussian likelihood: \begin{equation} \begin{aligned} \ln p\left(\vec{\theta} \mid f_{obs}\right)=&-\frac{1}{2} \sum_{j=1}^{n} \frac{\left[f_{j, obs}-f_{j,model}(\vec{\theta})\right]^{2}}{\sigma_{j, obs}^{2}+\sigma_{j, model}(\vec{\theta})^{2}} \\ &-\frac{1}{2} \sum_{j=1}^{n} \ln \left[2 \pi\left(\sigma_{j, obs}^{2}+\sigma_{j, model}(\vec{\theta})^{2}\right)\right], \end{aligned} \end{equation} where $f_{j,obs}$ is the $j$th pixel of the observed spectrum, $f_{j,model}(\vec{\theta})$ is the SVR model-predicted spectral flux corresponding to the stellar label vector $\vec{\theta}$, $\sigma_{j,obs}$ is the uncertainty of the $j$th pixel of the observed spectrum, and $\sigma_{j,model}(\vec{\theta})$ is the uncertainty of the $j$th pixel of the model-predicted spectrum given the stellar labels $\vec{\theta}$. In practice, $\sigma_{j,model}(\vec{\theta})$ is replaced with CV-MSE$_j$, which is independent of $\vec{\theta}$. SLAM adopts Maximum Likelihood Estimation (MLE) with the Levenberg-Marquardt (LM; \citealt{1978LNM...630..105M}) least-squares optimizer to derive the most likely $\vec{\theta}$ for an observed spectrum. \section{Data} \label{sec:data} \subsection{LAMOST Data} \label{subsec:LAMOST Data} LAMOST (the Guo Shou Jing Telescope) is one of the most efficient spectroscopic survey telescopes, providing 9,919,106 low-resolution (R$\sim$1800) optical spectra, among which 8,966,416 are stellar spectra, in its sixth data release (DR6) \citep{2012RAA....12.1197C, 2012RAA....12..735D, 2012RAA....12..723Z}. Of these, 607,142 spectra are published in the LAMOST DR6 M dwarf catalog.
We first select stars from the LAMOST DR6\footnote{\url{http://dr6.lamost.org}} M dwarf catalog \citep{2014AJ....147...33Y, 2019MNRAS.485.2167G} cross-matched with \textit{Gaia} DR2 \citep{2018A&A...616A..10G}. Samples are then selected using the following criteria to obtain reliable \textit{Gaia} photometry ($G_{BP}-G_{RP}$ color and $G$-band magnitude), astrometry (parallax), and LAMOST spectra. \begin{enumerate} \item {\tt\string parallax/parallax\_error > 5}; \item {\tt\string phot\_bp\_mean\_flux/phot\_bp\_mean\_flux\_error} $>$ 20,\\ {\tt\string phot\_rp\_mean$\_$flux/phot$\_$rp$\_$mean$\_$flux$\_$error} $>$ 20,\\ and {\tt\string phot$\_$g$\_$mean$\_$flux/phot$\_$g$\_$mean$\_$flux$\_$error} $>$ 20; \item {\tt\string ruwe < 1.4}; \item signal-to-noise ratio (SNR) at the $i$-band of the LAMOST spectra larger than 5. \end{enumerate} \begin{figure}[hbt!] \centering \includegraphics[width=9cm]{figures/hrd_compare_paper.pdf} \caption{The panel shows the color-magnitude diagram, i.e. $G$-band absolute magnitude ($M_G$) versus $G_{BP}-G_{RP}$, of the selected M dwarf samples compared with the M giants from \cite{2019ApJS..244....8Z}. The red stars represent the M dwarf candidates from the LAMOST survey, while the blue stars denote the M giant samples from \cite{2019ApJS..244....8Z}. The horizontal dotted line represents $M_G = 5$. The coordinates of the selected quadrangle enclosed by the dotted line and the two dashed lines are [(0.7, 5), (1.4, 5), (4, 16.5), (5, 16.5)]. \label{fig:hrd_cut}} \end{figure} Criteria 1-3 aim to select stars with both accurate photometry and astrometry. {\tt\string ruwe} is the re-normalised unit-weight error, which measures the astrometric goodness-of-fit; criterion 3 selects stars with a small {\tt\string ruwe}. Criterion 4 selects spectra with a clear stellar spectral signature. Similar to B20, we further select main-sequence M dwarfs using the quadrangle cut shown in Fig. \ref{fig:hrd_cut}.
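As a sketch of how the cuts above translate into code, the following applies them as boolean masks with \texttt{numpy}. The \textit{Gaia}-archive-style column names and the dictionary-of-arrays table layout are our own assumptions for illustration.

```python
import numpy as np

def select_m_dwarfs(tab):
    """Apply the quality criteria and the CMD quadrangle cut.

    `tab` is a dict of numpy arrays keyed by Gaia-archive-style column
    names, plus `snr_i` (LAMOST i-band SNR), `M_G`, and `bp_rp`.
    """
    q = tab["parallax"] / tab["parallax_error"] > 5          # criterion 1
    for band in ("bp", "rp", "g"):                           # criterion 2
        q &= tab[f"phot_{band}_mean_flux"] / tab[f"phot_{band}_mean_flux_error"] > 20
    q &= tab["ruwe"] < 1.4                                   # criterion 3
    q &= tab["snr_i"] > 5                                    # criterion 4
    # Quadrangle with vertices (0.7, 5), (1.4, 5), (4, 16.5), (5, 16.5)
    # in the (BP-RP, M_G) plane; M_G increases towards fainter stars.
    mg, c = tab["M_G"], tab["bp_rp"]
    c_lo = 0.7 + (mg - 5.0) * (4.0 - 0.7) / (16.5 - 5.0)     # left dashed line
    c_hi = 1.4 + (mg - 5.0) * (5.0 - 1.4) / (16.5 - 5.0)     # right dashed line
    q &= (mg >= 5.0) & (mg <= 16.5) & (c >= c_lo) & (c <= c_hi)
    return q
```

A star at $M_G=10$, $G_{BP}-G_{RP}=2.5$ with clean astrometry passes all cuts, while the same star with {\tt ruwe} above 1.4 is removed.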
We display the selected M dwarf samples in the H-R diagram (Fig. \ref{fig:hrd_cut}). The $G$-band absolute magnitude is estimated from the Bayesian distance of \cite{Bailer-Jones2018}. To investigate the contamination by M giant stars, we plot 35,382 M giants in Fig. \ref{fig:hrd_cut} \citep{2019ApJS..244....8Z}. We remove stars with $M_G + A_G \le 5$ to exclude this contamination. Some of the M dwarf candidates are not located on the main sequence, but above or below it. There are $\sim 7,000$ stars above the quadrangle, which are likely pre-main-sequence stars or binaries, and about 1,000 stars below it, which might be white dwarf--MS binaries. We remove these stars and select only the 379,258 M dwarf samples that fall into the quadrangle for the estimation of the stellar parameters. Most M dwarf stars are located within a few hundred pc, so their interstellar extinction is mostly very low. A few stars with large extinction may nevertheless fall beyond the selection area and thus be removed mistakenly. \subsection{Training Dataset} \label{sec:training dataset} Since SLAM is a data-driven model that takes stellar labels as ground truth, reliable stellar parameters for the training datasets are needed. To date, various methods and training datasets have been developed to measure the labels (stellar parameters) of M dwarfs. In this work, we use APOGEE stellar parameters and BT-Settl synthetic spectra as two separate training datasets, described in subsections \ref{subsec:ASPCAP} and \ref{subsec:BT-Settl}, respectively. \subsubsection{APOGEE labels as training data} \label{subsec:ASPCAP} \cite{2016AJ....151..144G} presented ASPCAP, which fits observed near-infrared spectra to synthetic spectra made with the code FERRE \citep{2006ApJ...636..804A}.
The measurement of $T_{eff}$ for M dwarfs reaches a precision of 100K for $3550 < T_{eff} < 4200$K, and the mean precision of the ASPCAP metallicities is 0.18 dex for $-1.0<$ [M/H] $<0.2$ \citep{2016MNRAS.460.2611S}. For APOGEE DR16, \cite{2020AJ....160..120J} used the new MARCS stellar atmospheric models, which are continuous in $T_{eff}$ from 3000 to 4000K. The stellar parameters of DR16 are significantly improved for cool stars with $T_{eff} <$ 3500K, avoiding the discontinuities in ASPCAP at 3500K. As for the metallicity of DR16, the comparison with six well-studied open clusters shows only a small difference of 0.004 dex \citep{2020AJ....159..199D, 2020AJ....160..120J}. We cross-match our selected M dwarfs with the APOGEE DR16 catalog\footnote{\url{https://www.sdss.org/dr16/irspec/aspcap/}} and obtain 4,317 common stars with LAMOST spectra and APOGEE labels as training data. We further select 3,785 samples using the following criteria: \begin{itemize} \item[1.] $2800 < T_{eff} < 4500$K; \item[2.] $T_{eff}$ uncertainty smaller than 100K; \item[3.] $-2<[M/H]<0.5$ dex; \item[4.] [M/H] uncertainty smaller than 0.1 dex; \item[5.] $\log{g} >4$ dex. \end{itemize} \begin{figure*}[hbt!] \plotone{cv_errors.pdf} \caption{The figure displays how the cross-validation (CV) errors of the stellar labels change with the signal-to-noise ratio (SNR). In both panels, the dotted lines represent the CV-scatter, while the dashed lines denote the CV-bias. Note that the SNR for the APOGEE-labeled LAMOST spectra is the SNR in the $i$-band (SNR$_i$), while the SNR of the BT-Settl synthetic spectra is set artificially. Clearly, all of the CV errors decrease as the SNR increases. When SNR$_i > 100$, the typical CV-scatters of $T_{eff}$ and [M/H] are 50K and 0.10 dex for the ASPCAP labels, respectively. The typical CV-scatter for the BT-Settl labels is 40K when SNR $>100$.
} \label{fig:cv_error} \end{figure*} \subsubsection{BT-Settl as training data} \label{subsec:BT-Settl} Another independent training dataset consists of BT-Settl spectra. Unlike the empirical spectra with APOGEE stellar labels, the BT-Settl model atmospheres and synthetic spectra \citep{2013MSAIS..24..128A} are computed by solving the radiative transfer using the Mixing Length Theory \citep{1958ZA.....46..108B}. The BT-Settl model can be used to determine the parameters of objects ranging from moderately active very-low-mass stars (VLMs) and brown dwarfs to planetary-mass objects. It is thus a useful supplement that extends the effective temperature coverage below 3000K. Since a data-driven method is harder to train at the edges of the label space, we use $T_{eff}$ from 2200K to 7000K as the training labels. \section{Results} \label{sec:results} \begin{figure*}[hbt!] \centering \plotone{teff_ap_apogee.pdf} \plotone{teff_ap_b20.pdf} \plotone{teff_bt_b20.pdf} \caption{The top panels compare the ASPCAP-trained $T_{eff}$ (TEFF\_AP) with the APOGEE stellar labels. The top-right panel shows the distribution of their residuals. The middle panels show the comparison between TEFF\_AP and B20. The bottom panels show a similar comparison, but between the BT-Settl-trained $T_{eff}$ (TEFF\_BT) and B20.} \label{fig:teff} \end{figure*} \begin{figure*}[hbt!] \centering \plotone{teff_bt_gal.pdf} \caption{The left panel shows the comparison between the BT-Settl-trained $T_{eff}$ (TEFF\_BT) and \cite{2020AJ....159..193G}. The right panel displays the distribution of the residuals.} \label{fig:teffgal} \end{figure*} \begin{table*}[hbt!] \caption{Notation for the stellar parameters of LAMOST M dwarfs derived from the different training datasets. The LAMOST spectra with APOGEE labels are used to estimate both the effective temperature and the metallicity.
BT-Settl synthetic spectra are used only to determine $T_{eff}$.} \centering \begin{tabular}{c c c} \hline Training dataset & LAMOST spectra & BT-Settl \\ & with ASPCAP labels & synthetic spectra \\ \hline effective temperature & TEFF\_AP & TEFF\_BT \\ metallicity & M\_H\_AP & - \\ \hline \end{tabular} \label{table:names} \end{table*} Note that the APOGEE stellar parameters and BT-Settl depend on different atmospheric models and are not necessarily consistent with each other. Therefore, we use them as training data independently and predict two sets of stellar labels based on the two datasets. We use observed spectra (LAMOST spectra with APOGEE labels) and synthetic spectra (BT-Settl), respectively, as the training datasets. The stellar parameters ($T_{eff}$, [M/H]) of the LAMOST M dwarfs are named according to the training dataset, as shown in Table \ref{table:names}. First, we estimate $T_{eff}$ and [M/H] using the APOGEE labels as the training set, as described in subsections \ref{subsec: aspcap teff} ($T_{eff}$) and \ref{subsec: metal} ([M/H]). We further use the synthetic spectra from the BT-Settl models and obtain TEFF\_BT; more details are discussed in subsection \ref{subsec: bt_teff}. \subsection{Effective Temperature} \label{subsec: Teff} \subsubsection{APOGEE temperature} \label{subsec: aspcap teff} First of all, we combine the LAMOST spectra with the corresponding APOGEE labels as the training dataset to train the SLAM model. The effective temperature (TEFF\_AP) and metallicity (M\_H\_AP) are determined for the test dataset by applying this SLAM model. In this section, we discuss TEFF\_AP and leave M\_H\_AP to subsection \ref{subsec: metal}. A 10-fold cross-validation (CV) is performed to estimate the precision and accuracy of the ASPCAP-trained SLAM labels.
The CV-scatter and CV-bias denote the standard deviation and the mean deviation of the residuals, respectively, and can be written as \begin{equation} \label{CVbias} {\rm CV-bias} = \frac{1}{n} \sum_{i=1}^{n}(\theta_{i,SLAM} - \theta_{i}), \end{equation} and \begin{equation} \label{CVscatter} {\rm CV-scatter} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\theta_{i,SLAM} - \theta_{i})^2}, \end{equation} where $\theta_{i,SLAM}$ is the stellar label of the $i$th star predicted by SLAM and $\theta_{i}$ denotes the stellar label of the $i$th star taken as ground truth. A robust data-driven algorithm should have both a small CV-bias and a small CV-scatter. The CV results displayed in Fig. \ref{fig:cv_error} indicate that TEFF\_AP reaches a precision of 50K with no bias when SNR$_i >100$. B20 derived spectroscopic temperatures and metallicities for 5,875 M dwarfs from the APOGEE survey. We cross-match the LAMOST data with their results and obtain 1,913 common stars. Among them, B20, LAMOST, and APOGEE DR16 together have 1,286 common stars. We compare the stellar parameters in the two catalogs separately with those we obtain using SLAM. As illustrated in the top panel of Fig. \ref{fig:teff}, the $T_{eff}$ of APOGEE DR16 and TEFF\_AP are in good agreement, which is not surprising because TEFF\_AP is estimated from the stellar labels of APOGEE. The $\sim$50K scatter and 3K bias are consistent with the CV test results, as expected. The middle panel of Fig. \ref{fig:teff} shows the comparison between TEFF\_AP and B20; the residuals between the two have a dispersion of $\sim 60$K and an offset of 185K. This bias is mainly due to the 182K difference between the temperatures of APOGEE and B20, which results from the different stellar atmospheric models used in the stellar parametrization. \subsubsection{BT-Settl temperature} \label{subsec: bt_teff} We then set up an alternative training dataset using BT-Settl synthetic spectra so that the $T_{eff}$ coverage can extend below 3000K.
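The CV-bias and CV-scatter used throughout this section are simply the mean and root-mean-square of the CV residuals; a minimal sketch (the example residuals in the note below are made up):

```python
import numpy as np

def cv_bias(theta_slam, theta_true):
    """Mean deviation of SLAM-predicted labels from the reference labels."""
    return float(np.mean(np.asarray(theta_slam) - np.asarray(theta_true)))

def cv_scatter(theta_slam, theta_true):
    """Root-mean-square deviation of the SLAM-predicted labels."""
    d = np.asarray(theta_slam) - np.asarray(theta_true)
    return float(np.sqrt(np.mean(d**2)))
```

For example, residuals of $\pm 50$~K around zero give a CV-bias of 0~K and a CV-scatter of 50~K.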
We follow the preprocessing procedure developed by \cite{2020ApJS..246....9Z}\footnote{\url{https://github.com/hypergravity/astroslam}} to adjust the resolution and wavelength coverage to be the same as those of the LAMOST low-resolution spectra. The training labels of the model grids\footnote{\url{https://phoenix.ens-lyon.fr/Grids/BT-Settl/CIFIST2011b}} span $2200<{\rm T_{eff}}<7000$K, $-1.0<{\rm [M/H]}<0.0$ dex, and $2.5<{\rm \log g}<5.5$ dex with steps of 100K, 0.5 dex, and 0.5 dex, respectively. The original BT-Settl grid is too sparse, so we first interpolate it to obtain a new training dataset with a denser grid. Because SLAM is a forward model, it can be used to perform this interpolation. We randomly draw 15,000 points in the parameter space with $2800<{\rm T_{eff}}<4500$ K, $-1.0<{\rm [M/H]}<0.0$ dex, and $4.5<{\rm \log g}<5.5$ dex, following uniform distributions. The corresponding synthetic spectra are obtained from the SLAM model \textit{SLAM\_0}, which is trained on the sparse original BT-Settl grid. We then train the model \textit{SLAM\_1} with the 15,000 synthetic spectra interpolated from \textit{SLAM\_0} as the training dataset. Finally, we predict TEFF\_BT for the LAMOST spectra using \textit{SLAM\_1}. Similar to subsection \ref{subsec: aspcap teff}, a 10-fold cross-validation is used to test the performance and robustness of our model. Random Gaussian noise is added to the test spectra. The left panel of Fig. \ref{fig:cv_error} shows how the CV errors change with the signal-to-noise ratio (SNR). We obtain a random error of 40K with a CV-bias of 5K at SNR $> 100$. These CV errors indicate that our method works effectively. Fig. \ref{fig:cv_error} also shows that the CV-scatter varies with the SNR. Note that here the SNR is calculated from the noise artificially added to the fluxes of the normalized BT-Settl synthetic spectra. We compare the temperatures with the literature to assess the performance of our approach.
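The two-stage densification described above (sparse grid $\rightarrow$ forward model $\rightarrow$ dense random training set) can be sketched as follows. Here a \texttt{RegularGridInterpolator} over a toy one-pixel grid stands in for the forward model \textit{SLAM\_0}; this is an assumption made for illustration, not the actual SLAM implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(1)
# Sparse BT-Settl-like grid: steps of 100 K, 0.5 dex, 0.5 dex (toy fluxes).
teff = np.arange(2200.0, 7001.0, 100.0)
mh = np.arange(-1.0, 0.01, 0.5)
logg = np.arange(2.5, 5.51, 0.5)
grid_flux = rng.normal(size=(teff.size, mh.size, logg.size))  # one toy pixel

# Stand-in for SLAM_0: a forward model mapping labels -> flux.
slam0 = RegularGridInterpolator((teff, mh, logg), grid_flux)

# 15,000 uniform random label points in the restricted label box.
n = 15000
labels = np.column_stack([rng.uniform(2800.0, 4500.0, n),
                          rng.uniform(-1.0, 0.0, n),
                          rng.uniform(4.5, 5.5, n)])
dense_flux = slam0(labels)   # dense training set for SLAM_1
```

In the actual pipeline, each wavelength pixel has its own forward model and \textit{SLAM\_1} is then trained on the 15,000 interpolated spectra.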
Comparison with B20 shows that TEFF\_BT is in agreement with their results, with a scatter of $\sim$90K and an offset of $\sim$110K (see Fig. \ref{fig:teff}). \cite{2018AJ....155..180M} presented the {\it TESS} Cool Dwarf Catalog, providing 1,140,255 cool dwarfs with $T_{eff}$ determined on the basis of the empirical color-temperature relations of \cite{2015ApJ...804...64M}. We cross-match the LAMOST M dwarfs with this cool dwarf catalog and compare the stellar parameters. We find that TEFF\_BT agrees with the {\it TESS} catalog with a scatter of 114K and an offset of $\sim $100K. This is similar to the comparison with B20, which is not surprising since the $T_{eff}$ of both B20 and the {\it TESS} Cool Dwarf Catalog are calibrated with \cite{2015ApJ...804...64M}. Finally, TEFF\_BT is compared with \cite{2020AJ....159..193G}, who estimated the stellar parameters of $\sim$ 30,000 M dwarfs from LAMOST DR1 based on the {\it TESS} Cool Dwarf Catalog, as shown in Fig. \ref{fig:teffgal}. As expected, the residuals of $T_{eff}$ in Fig. \ref{fig:teffgal} show a small scatter of 65K and an offset of 50K. Compared to the other results, the smaller difference is due to the fact that both \cite{2020AJ....159..193G} and we used the same LAMOST spectra to estimate the effective temperatures. \subsection{Metallicity} \label{subsec: metal} \begin{figure*}[hbt!] \plotone{mh_apogee_ap.pdf} \plotone{mh_b20_ap.pdf} \plotone{mh_apogee_bt.pdf} \caption{The top-left panel shows the comparison between the APOGEE-trained [M/H] (M\_H\_AP) and the metallicities of APOGEE. The top-right panel displays the residuals of the metallicities, with a scatter of 0.13 dex and no bias. The middle panels display the comparison between M\_H\_AP and B20; their residuals show a 0.16 dex scatter and a 0.2 dex bias. The bottom panel shows the comparison between the metallicities derived from BT-Settl synthetic spectra and the [M/H] of APOGEE, which have a 0.35 dex scatter and a 0.16 dex offset.
} \label{fig:mh} \end{figure*} \begin{figure*}[hbt!] \centering \includegraphics[height=8.25cm]{hrd_teff_ap_mh_ap.pdf} \includegraphics[height=8.25cm]{hrd_bprp_ap_mh_ap.pdf} \includegraphics[height=8.25cm]{hrd_teff_bt.pdf} \caption{The top-left panel displays the H-R diagram of $\sim$ 9,000 M dwarf stars with SNR$_i > 50$, a sub-sample of our catalog, in the SLAM $T_{eff}$ (TEFF\_AP) versus {\it Gaia} $M_G$ plane, colored by M\_H\_AP. The top-right panel shows the same HRD as the top-left panel, but with the $G_{BP}-G_{RP}$ color on the $x$-axis. The bottom panel displays the contours drawn with the same stars as the left panel, but with TEFF\_BT as the temperature. The solid lines indicate the PARSEC isochrones from [M/H]=-1.0 dex to [M/H]=0.6 dex. The grey dashed curves represent the locations of 0.3, 0.5 and 0.7 $M_{\sun}$. } \label{fig:hrd_bt_kde} \end{figure*} Before estimating the metallicities of the LAMOST M dwarfs, we first compare the [M/H] of APOGEE DR16 with the [Fe/H] of B20: the residuals scatter by only 0.1 dex, although B20 is overall more metal-rich than APOGEE by 0.2 dex. We find that the metallicities of APOGEE and B20 are more consistent at higher metallicity. For stars with [M/H] $> -0.25$, the scatter between B20 and APOGEE is 0.07 dex with an offset of 0.15 dex, while for stars with [M/H] $< -0.25$, the scatter and bias are 0.15 and 0.25 dex, respectively. M\_H\_AP is determined using the [M/H] of APOGEE as the training labels. The prediction of M\_H\_AP is obtained at the same time as TEFF\_AP using SLAM. We find that M\_H\_AP agrees well with the APOGEE labels (see the top panel of Fig. \ref{fig:mh}). Their good agreement is exactly what we expect, since the APOGEE parameters are used as the training labels. Furthermore, M\_H\_AP is in agreement with the B20 metallicity, with a scatter of 0.16 dex, as shown in the middle panel of Fig. \ref{fig:mh}. We also find a trend similar to that in the comparison between APOGEE and B20.
For stars with [M/H] $> -0.25$, the scatter is $\sim$0.1 dex with a bias of 0.14 dex, but for stars with metallicities lower than -0.25 dex, the scatter is 0.2 dex with a 0.3 dex offset. Metallicities can also be estimated with the BT-Settl models together with $T_{eff}$. However, the derived BT-Settl [M/H] (hereafter, M\_H\_BT) shows no clear correlation with the observed data published in previous studies (see the bottom panel of Fig. \ref{fig:mh}). In the metal-poor regime ([M/H] $<-0.25$ dex), though with larger scatter, the correlation between M\_H\_BT and the APOGEE metallicities is apparent. For [M/H] $> -0.25$ dex, however, M\_H\_BT cannot distinguish the metallicities of stars well, probably owing to the lack of model grids with [M/H] $>0$. Therefore, the BT-Settl-trained [M/H] is not adopted in our catalog (Table \ref{tab:all}). \subsection{Hertzsprung-Russell Diagram} \label{subsec: hrd} Fig. \ref{fig:hrd_bt_kde} displays the Hertzsprung-Russell diagram (HRD) of the LAMOST M dwarfs in the \textit{Gaia} $M_G$ versus logarithmic $T_{eff}$ (TEFF\_AP) plane. Note that $M_G$ is estimated from the Bayesian distance of \cite{Bailer-Jones2018} with extinction removed. The 3D dust-reddening map from \textit{Bayestar} \citep{2019ApJ...887...93G} is used to estimate the visual extinction $A_V$ for each star, and $A_G$ is further estimated from $A_V$ using the extinction factor from \cite{2019ApJ...877..116W}. Fig. \ref{fig:hrd_bt_kde} shows the de-reddened HRD for a sub-sample of $\sim 9,000$ M dwarf stars in our catalog. As illustrated in the top-left panel, the metallicity (M\_H\_AP) can be clearly distinguished in the HRD. Moreover, the top-right panel shows the color-magnitude diagram in the $G_{BP}-G_{RP}$ versus $M_G$ plane; the gradient of metallicity is still very clear and shows a similar trend to that in the top-left panel. 
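The de-reddening of the absolute magnitudes described above can be sketched in a few lines. The $A_G/A_V$ conversion factor below is an illustrative value only, not necessarily the exact factor adopted in the text, and the input magnitudes and distances are placeholders:

```python
import numpy as np

def absolute_g(g_mag, dist_pc, a_v, k_g=0.789):
    """Extinction-corrected Gaia absolute magnitude M_G.

    g_mag   : apparent Gaia G-band magnitude
    dist_pc : Bayesian distance in parsec
    a_v     : visual extinction from a 3D dust map
    k_g     : assumed A_G/A_V conversion factor (illustrative value)
    """
    a_g = k_g * a_v                                # extinction in the G band
    return g_mag - 5.0 * np.log10(dist_pc) + 5.0 - a_g

# toy example: a G = 12 mag star at 100 pc with A_V = 0.1 mag
m_g = absolute_g(12.0, 100.0, 0.1)
```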
We further overlay the PARSEC (the Padova and Trieste Stellar Evolutionary Code) \citep{2012MNRAS.427..127B} theoretical tracks with an age of 1 Gyr and find that they agree well with our data, as shown in the bottom panel of Fig. \ref{fig:hrd_bt_kde}. The PARSEC version 1.2S\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/cmd}} \citep{2014MNRAS.444.2525C} provides revisions for very-low-mass stars (VLMs) based on the BT-Settl model, with a wide range of metallicities from -2.19 to +0.70 dex. As displayed in Fig. \ref{fig:hrd_bt_kde}, we find that: a) the HRD given by the PARSEC stellar model shows good agreement with our observed HRD for both TEFF\_AP and TEFF\_BT; b) stars above the isochrone of [M/H] = 0.6 are likely to be in binary systems. \subsection{Chromospheric Activity} \label{subsec: activity} Active M dwarfs are the only class of stars for which the magnetic field systematically affects the overall stellar parameters \citep{2021A&ARv..29....1K}. H$\alpha$ emission, an indicator of chromospheric activity, might therefore alter the reliability of the stellar parametrization. \cite{2015RAA....15.1182G} found that the fraction of active stars increases as the spectral subtype becomes later. In our work, $\sim 8$ percent of the M dwarf stars are magnetically active. Therefore, during data pre-processing, the pixels at the wavelength of the H$\alpha$ emission are masked to improve the precision of the stellar parameter estimation. 
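The H$\alpha$ masking step can be sketched as follows. The half-width of the masked window is an assumed value chosen for illustration; the text does not specify the exact window used in the pipeline:

```python
import numpy as np

HALPHA = 6563.0  # rest-frame H-alpha wavelength in Angstrom

def mask_halpha(wave, flux, half_width=10.0):
    """Flag pixels around the H-alpha line before parameter fitting.

    wave, flux : spectrum arrays (wavelength in Angstrom)
    half_width : assumed half-width of the masked window in Angstrom
    Returns the masked flux (bad pixels set to NaN) and the mask itself.
    """
    bad = np.abs(wave - HALPHA) < half_width
    masked = flux.copy()
    masked[bad] = np.nan              # excluded from the parameter estimation
    return masked, bad
```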
\subsection{The Catalog} \begin{table}[hb] \caption{The field definition of the stellar parameters catalog of LAMOST M dwarf stars.} \centering \begin{tabular}{lcc} \hline \multicolumn{1}{c}{Column} & Unit & \multicolumn{1}{c}{Description} \\ \hline source\_id & & \textit{Gaia} identification ID \\ obsid & & LAMOST unique spectra ID \\ ra\_obs & deg & LAMOST fiber pointing right ascension \\ dec\_obs & deg & LAMOST fiber pointing declination \\ snru & & LAMOST signal-to-noise ratio at SDSS $u$-band \\ snrg & & LAMOST signal-to-noise ratio at SDSS $g$-band \\ snrr & & LAMOST signal-to-noise ratio at SDSS $r$-band \\ snri & & LAMOST signal-to-noise ratio at SDSS $i$-band \\ snrz & & LAMOST signal-to-noise ratio at SDSS $z$-band \\ z & & LAMOST redshift \\ z\_err & & LAMOST redshift uncertainty \\ type & & magnetic activity \\ TEFF\_BT & K & effective temperature from BT-Settl-trained SLAM \\ TEFF\_BT\_ERR & K & uncertainty of effective temperature from BT-Settl-trained SLAM \\ TEFF\_AP & K & effective temperature from ASPCAP-trained SLAM \\ TEFF\_AP\_ERR & K & uncertainty of effective temperature from ASPCAP-trained SLAM \\ M\_H\_AP & dex & {[}M/H{]} from ASPCAP-trained SLAM \\ M\_H\_AP\_ERR & dex & uncertainty of {[}M/H{]} from ASPCAP-trained SLAM \\ \hline \end{tabular} \label{tab:catalog-header} \end{table} Table \ref{tab:catalog-header} shows the field definitions of the stellar parameter catalog of our results. The important information from both the LAMOST and \textit{Gaia} observations is presented in our catalog. TEFF\_AP, TEFF\_AP\_ERR, M\_H\_AP and M\_H\_AP\_ERR are the ASPCAP-trained SLAM effective temperature and metallicity with the corresponding uncertainties. TEFF\_BT and TEFF\_BT\_ERR are the BT-Settl-trained SLAM $T_{eff}$ with the estimated uncertainty. Note that the \textit{type} column is adopted from \citet{2015RAA....15.1182G}, which indicates the magnetic activity determined by measuring H$\alpha$ activity. 
The stellar parameters, including $T_{eff}$ and [M/H] with the corresponding uncertainties, of LAMOST M dwarfs estimated in this work are presented in Table \ref{tab:all}. \section{Discussion and conclusions} \label{sec:discussion} \subsection{The accuracy of metallicity assessed from Open Clusters} \begin{figure}[hbt!] \centering \includegraphics[width=10cm]{hyades.pdf} \caption{Metallicity distribution in the Hyades. The vertical line indicates the metallicity [Fe/H]=0.13 given by previous studies. } \label{fig:hyades} \end{figure} \begin{figure*}[hbt!] \plotone{mh_compare_opcl.pdf} \caption{The left panel shows the comparison of M\_H\_AP and [Fe/H] determined by \cite{2020AJ....159..199D}. The right panel displays the distribution of the [M/H] difference. } \label{fig:opencluster} \end{figure*} \begin{table}[hb] \caption{The comparison of [M/H] of LAMOST M dwarf stars with OCCAM. $<$M\_H\_AP$>$ is the mean metallicity in each cluster derived in our work, ${\rm [Fe/H]}_{\rm OCCAM}$ is the mean value given by OCCAM and $N$ is the number of member stars in the cluster.} \centering \begin{tabular}{lccr} \toprule Cluster & $<$M\_H\_AP$>$ & ${\rm [Fe/H]}_{\rm OCCAM}$ & $N$ \\ \hline ASCC 16 & -0.07 $\pm$ 0.03 & -0.04 $\pm$ 0.03 & 5 \\ ASCC 21 & -0.21 $\pm$ 0.03 & -0.12 $\pm$ 0.04 & 2 \\ Berkeley 19 & -0.19 & -0.01 & 1 \\ Berkeley 29 & 0.08 & 0.09 & 1 \\ Briceno 1 & -0.06 & -0.03 & 1 \\ Chupina 1 & -0.63 & -0.35 & 1 \\ Chupina 3 & -0.11 & -0.11 & 1 \\ Chupina 5 & -0.83 & -0.65 & 1 \\ Collinder 69 & -0.20 $\pm$ 0.08 & -0.21 $\pm$ 0.14& 5 \\ Collinder 70 & 0.01 & -0.01 & 1 \\ Koposov 62 & -0.05 & 0.07 & 1 \\ Melotte 20 & 0.05 $\pm$ 0.11 & 0.05 $\pm$ 0.12 & 21 \\ Melotte 22 & 0.00 $\pm$ 0.13 & 0.00 $\pm$ 0.09& 66 \\ NGC 2420 & -0.23 & -0.26 & 1 \\ NGC 2682 & -0.16 $\pm$ 0.25 & -0.22 $\pm$ 0.30 & 7 \\ NGC 752 & -0.17 $\pm$ 0.26 & -0.09 $\pm$ 0.11 & 6 \\ \hline \end{tabular} \label{tab:opcl} \end{table} To assess the accuracy of the metallicities estimated in our work, we select 23 
Hyades member stars from our catalog. Hyades is the nearest known open cluster \citep{2009A&A...497..209V}, with a metallicity of about +0.13 dex \citep{2006AJ....131.1057S}. We select all stars in a circle with a radius of 5 degrees, centered on a right ascension of 66.725$^\circ$ and a declination of 15.867$^\circ$, from {\it Gaia} DR2. Then, we regard stars that satisfy $4<${\tt\string pmra / parallax}$<6$, $-2.5<${\tt\string pmdec / parallax}$<0$ and $0.04<${\tt\string parallax}$<0.05$, where {\tt\string pmra} and {\tt\string pmdec} are {\it Gaia} proper motions, as member stars of the Hyades. We cross-match our catalog with this sample and obtain 23 M dwarf member stars. We find that the distribution of M\_H\_AP of the members has a mean of 0.0 dex and a standard deviation of 0.1 dex, as shown in Fig. \ref{fig:hyades}. This result tentatively verifies that the accuracy of M\_H\_AP is around 0.1 dex. Furthermore, we cross-match our catalog with the Open Cluster Chemical Analysis and Mapping (OCCAM) survey \citep{2020AJ....159..199D} and obtain 138 member stars belonging to 15 open clusters. Fig. \ref{fig:opencluster} compares the [Fe/H] given by \cite{2020AJ....159..199D} with M\_H\_AP. The left panel shows that M\_H\_AP matches very well with the metallicity of the corresponding clusters in the range from -0.7 to 0.4\,dex. A few stars with higher metallicities in the literature are assigned lower metallicities by our method, which may imply a limitation of the estimation in the metal-poor regime. The right panel of Fig. \ref{fig:opencluster} displays the distribution of the residuals between M\_H\_AP and the [Fe/H] of OCCAM. The mean value is 0.01 dex and the standard deviation is 0.1 dex. This illustrates that the uncertainty of the metallicity derived in this work is around 0.1 dex. Detailed information on the comparison of the metallicities of cluster members is given in Table \ref{tab:opcl}. 
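The kinematic membership cuts quoted above can be sketched as a boolean mask over the {\it Gaia} columns. Units follow the convention of the text, and the toy input values below are placeholders for illustration:

```python
import numpy as np

def hyades_member_mask(pmra, pmdec, parallax):
    """Boolean mask implementing the membership cuts quoted in the text:
    4 < pmra/parallax < 6, -2.5 < pmdec/parallax < 0 and
    0.04 < parallax < 0.05 (units as in the text)."""
    r_ra = pmra / parallax
    r_dec = pmdec / parallax
    return ((r_ra > 4.0) & (r_ra < 6.0)
            & (r_dec > -2.5) & (r_dec < 0.0)
            & (parallax > 0.04) & (parallax < 0.05))

# toy candidates: the first satisfies all three cuts, the second does not
pmra = np.array([0.225, 0.5])
pmdec = np.array([-0.045, 0.1])
parallax = np.array([0.045, 0.045])
members = hyades_member_mask(pmra, pmdec, parallax)
```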
\subsection{Precision of metallicity from wide binary} \begin{figure*}[hbt!] \plotone{mh_compare_widebinary.pdf} \caption{Comparison of metallicities of high-confidence binaries with separations less than 3,000 au. Left: comparison of the metallicities (M\_H\_AP) of the primary and secondary component. Right: distributions of the uncertainty-normalized [M/H] difference, compared to Gaussians with the corresponding $\sigma$.} \label{fig:wide binary} \end{figure*} We further assess the precision of the metallicity using M dwarf - M dwarf wide binaries (Qiu et al., in prep). We start with the catalog of initial wide binary candidates released by \cite{2020ApJS..246....4T}, which contains 807,611 candidates selected from {\it Gaia} DR2 within a distance of 4.0 kpc and with a maximum projected separation $s=1.0$ pc. This catalog contains many types of wide binaries (e.g., main sequence - main sequence, main sequence - white dwarf, white dwarf - white dwarf, etc.), and these wide binary stars may be polluted by visual binaries (chance alignments) at large separations, as described in Section 3.5 of \cite{2020ApJS..246....4T}. We finally choose 92 pairs of binary stars with separations less than 3,000 au to assess the precision of [M/H]. Fig. \ref{fig:wide binary} compares the metallicities (M\_H\_AP) of the two components of these binaries. The M\_H\_AP values of the components are expected to be consistent with each other if they are physically associated. The left panel shows that the pairs have similar [M/H]; only 2 of the 92 systems fall outside the 3-sigma range. These outliers are most likely not physical binary stars. The mean difference between the metallicities of the primary and secondary components is 0.01 dex with a scatter of 0.23 dex. The right panel shows the uncertainty-normalized metallicity difference, i.e., $\Delta {\rm [M/H]} / \sigma_{\Delta {\rm [M/H]}} = ({\rm [M/H]_1} - {\rm [M/H]_2})/\sqrt{\sigma_{\rm [M/H]_1}^2 + \sigma_{\rm [M/H]_2}^2}$. 
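The uncertainty-normalized difference defined above can be computed directly; if the quoted uncertainties are accurate, its standard deviation over many pairs should be close to unity:

```python
import numpy as np

def normalized_mh_diff(mh1, mh2, err1, err2):
    """Uncertainty-normalized metallicity difference of a binary pair:
    ([M/H]_1 - [M/H]_2) / sqrt(sigma_1**2 + sigma_2**2)."""
    return (mh1 - mh2) / np.sqrt(err1**2 + err2**2)

# toy pair: components differing by 0.2 dex, each with 0.1 dex uncertainty
d = normalized_mh_diff(0.1, -0.1, 0.1, 0.1)   # equals sqrt(2)
```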
If the derived $\sigma_{\rm [M/H]}$ values are accurate, the uncertainty-normalized metallicity difference should follow a Gaussian distribution with $\sigma =1$. As shown in the right panel, the distribution matches better with $\sigma \sim 0.8$, which suggests that the M\_H\_AP uncertainties may be overestimated by $\sim 20 \%$. \subsection{Summary} \label{subsec: summary} In this work, we have derived a spectroscopic catalog of stellar parameters for $\sim 300,000$ M dwarf stars. Precise effective temperatures and metallicities of M dwarf stars from LAMOST DR6 and {\it Gaia} DR2 are given in this catalog. Stars located within the range of $2800<T_{eff}<4500$K (for both TEFF\_AP and TEFF\_BT) are finally adopted in our catalog. Two versions of effective temperatures are obtained, with precisions of 40K for TEFF\_BT and 50K for TEFF\_AP at SNR $>50$. In particular, TEFF\_AP agrees with B20 with a 60K scatter and a 185K offset; this systematic offset comes from the different stellar atmospheric models used, namely BT-Settl and MARCS. This study provides a method using the BT-Settl model to obtain the parameters of LAMOST M dwarf stars. We also publish the code and the stellar parametrization pipeline on the website\footnote{\url{https://github.com/jiadonglee/MDwarfMachine}}. Note that the data-driven method to derive stellar labels strongly relies on the training datasets, which in this work are the BT-Settl models and the stellar parameters of APOGEE. We stress that SLAM can be used as an industrial framework to decode the stellar parameters of the cool atmospheres of M dwarf stars in upcoming surveys such as SDSS-V \citep{2017arXiv171103234K}. \acknowledgments We thank the anonymous referee for the very helpful comments. This work is supported by National Key R\&D Program of China No. 2019YFA0405500. C.L. thanks the National Natural Science Foundation of China (NSFC) with grant No. 11835057. 
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. % \vspace{5mm} \facilities{LAMOST, \textit{Gaia}} \software{astropy \citep{2018AJ....156..123A}, scipy \citep{2019arXiv190710121V}, scikit-learn \citep{scikit-learn}, SLAM \citep{2020ApJS..246....9Z}, TOPCAT \citep{2005ASPC..347...29T}}
\section{Introduction} \label{sec:intro} The recent detection of gravitational waves (GWs) has consecrated gravitational wave interferometers (GWIs) as a vital experimental tool for astronomy, cosmology, and fundamental physics~\cite{LIGOScientific:2018mvr,Abbott:2020niy}. The GWs detected thus far are a product of cataclysmic transient events, such as binary black hole mergers. These signals are strong, with a gravitational strain of the order of \(h\sim10^{-21}\), but very short-lived, lasting from a fraction of a second to several seconds. Much weaker signals can be detected if they are coherent over a longer time, such as the continuous GWs (CWs) emitted by rapidly spinning neutron stars~\cite{Riles:2017evm} or ultra-compact Galactic binaries~\cite{Nelemans:2001hp}. In the former case, recent searches for this type of signal have been performed in~\cite{Pisarski:2019vxw,Dergachev:2020fli,Steltner:2020hfd}; having detected no CWs, these searches set an upper limit of \(h\sim10^{-25}\) on the maximum strain for this type of signal at frequencies of about \(f\sim10^2\)~Hz. Another important source of CWs is the scattering of ultra-light bosons off black holes via a mechanism known as superradiance~\cite{Brito_2020}. Bosons with masses \(m\ll1\)~eV are predicted in theories beyond the Standard Model of particle physics, and are an excellent candidate for the cosmological dark matter, dubbed ultra-light dark matter (ULDM)~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah,Turner:1983he,Nelson:2011sf,Ferreira:2020fam}. In particular, spin-2 ULDM is especially interesting because it arises as a modification of gravity itself, even though it comes in the guise of an additional particle, the dark matter~\cite{Marzola:2017,Aoki:2017cnz}. Searches for CWs produced by superradiance have been carried out only for spin-0 bosons~\cite{Palomba:2019vxe,Ng:2020ruv}, whereas no limits on the properties of spin-2 ultra-light bosons from GWI data exist yet. 
In this work we show that, if ULDM has spin two, it interacts with GWIs in a way that, owing to its quasi-monochromaticity and persistence, closely resembles CWs. The spin-2 ULDM-CW signal can be detected by existing Earth-based facilities such as advanced LIGO~\cite{TheLIGOScientific:2014jea} / advanced Virgo~\cite{TheVirgo:2014hva} (HLV) in their entire accessible frequency range, approximately corresponding to masses \(4\times10^{-14}~\mathrm{eV}\lesssim m \lesssim4\times10^{-11}~\mathrm{eV}\). Furthermore, planned facilities such as LISA~\cite{Baker:2019nia}, DECIGO~\cite{Seto:2001qf}, and the BBO~\cite{Harry:2006fi} will extend this range down to \(m\sim\mathrm{few}\times10^{-19}~\mathrm{eV}\). The spin-2 ULDM-CW signal is produced by the coherent oscillations of the ULDM field, which is universally coupled to Standard Model fields, and is unrelated to superradiance; this is similar to dark photon dark matter, where the ULDM carries additional interactions~\cite{Pierce:2018xmy,Miller:2020vsl}\footnote{Other types of direct interactions between ULDM and matter have also been considered, see, e.g., \cite{Arvanitaki:2014faa,Morisaki:2018htj,Grote:2019uvn,Michimura:2020vxn}.} (notice however that in the spin-2 case the interaction cannot be tuned away). Moreover, regardless of the spin, if the ULDM field interacts only gravitationally, the signal is undetectable by GWIs~\cite{Aoki:2016kwl}. Our findings demonstrate that, in the case of a null result, GWIs can place some of the most stringent bounds on the spin-2 Yukawa fifth-force strength \(\al\) in the frequency ranges accessible to GWIs. This paper is structured as follows. In Section~\ref{sec:maths} we compute the strength and shape of the expected signal from spin-2 ULDM for the frequency ranges of interest for GWIs. In Section~\ref{sec:res} we present our results, and in Section~\ref{sec:out} we put them into context and give an outlook for future work. 
We work with units in which \(c=k_B=\hslash=1\), and use latin indices \((i,j,\ldots)\in[1,3]\) for spatial tensor components. \section{The shape and strength of the signal} \label{sec:maths} The behaviour of the spin-2 ULDM in sufficiently small regions inside the local dark matter halo is described by the oscillating tensor field~\cite{Marzola:2017} \begin{align} \Mij(t) &= \frac{\sqrt{2\rhoDM}}{m}\cos{(mt+\Upsilon)}\epij(\vr) \,, \label{eq:Mij} \end{align} where \(\rhoDM\) is the observed local dark matter energy density, for which we assume \(\rhoDM=0.3\)~GeV/cm\(^3\)~\cite{Piffl:2014mfa,Evans:2018bqy,2015ApJ...814...13M}, and \(\Upsilon\) is a random phase. The five polarisations of the spin-2 field are encoded in the \(\epij(\vr)\) tensor, which has unit norm and zero trace, is symmetric and is direction-dependent via the unit vector \(\vr\)~\cite{Maggiore:1900zz}. The solution \Eq{eq:Mij} assumes a single frequency \(2\pi f=m\) and a coherent polarisation structure. The latter is justified for scales shorter than the characteristic scale of the inhomogeneities of the ULDM field, which is given by the de~Broglie wavelength \(\ldB \deq 2\pi/mv = 1/fv\) where \(v\sim 10^{-3}\) is the effective velocity of the ULDM. Thus, owing to the fact that \(\ldB\) is much larger than the physical size of the GWIs and the distance between the HLV sites, we can safely neglect gradients (see~\cite{Armaleo:2020yml} for further discussion). The coherence of the oscillation frequency is instead guaranteed up to a coherence time that is given by\footnote{Notice the definition of the coherence time differs from the one that is commonly used in the ULDM literature by a factor~4. We adopt the definition used in the GW literature here.} \(\tcoh \deq 4\pi/mv^2 = 2/fv^2\). 
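As a back-of-the-envelope check of these scales (restoring factors of \(c\); the conversion constant \(1~\mathrm{eV}/h\simeq2.418\times10^{14}\)~Hz is standard):

```python
EV_TO_HZ = 2.418e14   # 1 eV / h in Hz
C = 2.998e8           # speed of light in m/s

def uldm_scales(m_ev, v=1e-3):
    """Oscillation frequency f = m/(2*pi), de Broglie wavelength
    l_dB = 1/(f*v) and coherence time t_coh = 2/(f*v**2) for a boson
    of mass m_ev (eV) moving at velocity v (in units of c)."""
    f = EV_TO_HZ * m_ev          # Hz, since 2*pi*f = m in natural units
    l_db = C / (f * v)           # metres
    t_coh = 2.0 / (f * v * v)    # seconds
    return f, l_db, t_coh

# lower edge of the HLV band: m ~ 4e-14 eV gives f ~ 10 Hz, a de Broglie
# wavelength far larger than the detectors, and t_coh ~ 2e5 s
f, l_db, t_coh = uldm_scales(4e-14)
```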
Given that a typical GWI observation run will last for much longer than \(\tcoh\), a more precise description of the ULDM field would be a superposition of plane waves, see~\cite{Pierce:2018xmy,Miller:2020vsl}; we neglect this for our order of magnitude estimates\footnote{This solution is also valid provided that the energy (or frequency) scale of the system is well below the ultra-violet cutoff of the effective field theory, that is \(f \sim m/2\pi \ll (M_\text{P}m^2)^{1/3}\)~\cite{Akrami:2015qga}; this is easily verified for all the values of the spin-2 mass \(m\) we consider in this work.}. In the ULDM reference frame \((\vp,\vq,\vr)\) the polarisations of the spin-2 field can be described as \(\epij(\vr) \deq \sum_\kappa \vep_\kappa {\cal Y}^\kappa_{ij}(\vr)\)~\cite{Maggiore:1900zz,Armaleo:2019gil}, where the summation runs over the five amplitudes \(\left\{ \vepCross, \vepPlus, \vepL,\vepR, \vepS \right\}\) that obey \(\sum_\kappa \vep_\kappa^2=1\)---the overall amplitude is fixed by the requirement that \(\Mij\) makes up all of the dark matter. The five polarisation matrices are given by \begin{align} {\cal Y}^\times_{ij} &\deq \frac{1}{\sqrt2} \left(p_i q_j + q_i p_j\right) \,, & {\cal Y}^+_{ij} &\deq \frac{1}{\sqrt2} \left(p_i p_j - q_i q_j\right) \,, & \nn\\ {\cal Y}^L_{ij} &\deq \frac{1}{\sqrt2} \left(q_i r_j + r_i q_j\right) \,, & {\cal Y}^R_{ij} &\deq \frac{1}{\sqrt2} \left(p_i r_j + r_i p_j\right) \,, & \nn\\ {\cal Y}^S_{ij} &\deq \frac{1}{\sqrt6} \left(3 r_i r_j - \delta_{ij}\right) \,. &&& \nn \end{align} Notice that, unlike for CWs, there is no propagation along the \(\vr\) direction, which in our case serves merely as a reference for the decomposition into tensor, vector, and scalar helicities according to their behaviour under a rotation about \(\vr\) (see also Appendix~\ref{app:not}). 
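A quick numerical sketch of the polarisation matrices, together with a Monte Carlo estimate of the geometric average \(\langle\Delta\varepsilon^2\rangle=2/5\) used later in the text (isotropic directions, polarisation amplitudes uniform on the unit 4-sphere; the sample size is an arbitrary choice):

```python
import numpy as np

def pol_matrices(theta, phi):
    """The five unit-norm, traceless polarisation matrices Y^kappa in the
    frame r = (sin t cos p, sin t sin p, cos t)."""
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    r = np.array([st * cp, st * sp, ct])
    p = np.array([ct * cp, ct * sp, -st])
    q = np.array([-sp, cp, 0.0])
    s2, s6 = np.sqrt(2.0), np.sqrt(6.0)
    return [(np.outer(p, q) + np.outer(q, p)) / s2,     # cross
            (np.outer(p, p) - np.outer(q, q)) / s2,     # plus
            (np.outer(q, r) + np.outer(r, q)) / s2,     # L
            (np.outer(p, r) + np.outer(r, p)) / s2,     # R
            (3.0 * np.outer(r, r) - np.eye(3)) / s6]    # scalar

rng = np.random.default_rng(0)
A = np.diag([1.0, -1.0, 0.0])    # n^i n^j - m^i m^j with n = x, m = y
acc = 0.0
n_mc = 20000
for _ in range(n_mc):
    theta = np.arccos(rng.uniform(-1.0, 1.0))   # isotropic r direction
    phi = rng.uniform(0.0, 2.0 * np.pi)
    eps = rng.normal(size=5)
    eps /= np.linalg.norm(eps)                  # uniform on the unit 4-sphere
    d_eps = sum(e * np.sum(A * Y) for e, Y in zip(eps, pol_matrices(theta, phi)))
    acc += d_eps**2
mean_sq = acc / n_mc                            # should be close to 2/5
```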
Spin-2 ULDM couples to Standard Model fields \(\Psi\) as~\cite{Marzola:2017} \begin{align}\label{eq:int} S_\text{int}[g,\Mij,\Psi] & \deq -\frac{\al}{2\mpl} \int\!\dd^4x\,\sqrt{-g} \Mij T_\Psi^{ij} \,, \end{align} where \(T_\Psi^{ij}\) is the stress tensor of the fields \(\Psi\) and \(\mpl\) is the reduced Planck mass. At leading (linear) order in \(\alpha\) the interaction \Eq{eq:int} can be absorbed into a redefinition of the metric \(g_{ij}\to g_{ij} + \al M_{ij}/\mpl\)~\cite{Armaleo:2020yml}. Therefore, the effect of spin-2 ULDM on the detector can be equivalently described by the gravitational effect of an oscillating metric perturbation \(\hij\) given by \begin{align} \hij(t) &= \frac{\al}{\mpl}\Mij(t) = \frac{\al\sqrt{2\rhoDM}}{m\mpl}\cos{(mt+\Upsilon)}\epij(\vx) \,. \end{align} The parameter \(\al\) is idiosyncratic for spin-2 ULDM because it is required by the self-consistency of the model, such as in bigravity~\cite{Babichev:2016bxi}. This parameter defines the inverse ULDM self-interaction strength: in the limit \(\al\rar0\) there is no ULDM at all, because the ULDM field becomes infinitely strongly coupled. Furthermore, spin-2 ULDM is ineluctably coupled universally to standard matter fields, so that ULDM will appear as a Yukawa-like fifth-force modification of the gravitational potential \(\Phi\) in the weak-field regime, for which \(\al\) quantifies the strength: \(\Phi\rar\Phi\left[1+\al^2\exp(-mr)\right]\). The strength of this fifth force for different values of the mass \(m\) (or, equivalently, frequency) is constrained by several experiments and tests of gravity~\cite{Murata:2014nra,Sereno:2006mw}: we call this maximal coupling \(\al=\al_Y\). In the reference frame of the detector, \((\vx,\vy,\vz)\), the response function \(D^{ij}\) is given by the differential change in the length of the detector arms directed along the unit vectors \(\vn\) and \(\vm\) as \(D^{ij} = (n^i n^j - m^i m^j)/2\)~\cite{Maggiore:1900zz}. 
The signal is the combination of the variation of the metric perturbation and the response function: \begin{align} h(t) &\deq D^{ij} h_{ij}(t) = \frac{\al\sqrt{\rhoDM}}{\sqrt2 m\mpl}\cos{(mt+\Upsilon)}\Delta\vep \deq h_s\sin{(mt)} + h_c\cos{(mt)}\,, \label{eq:signal} \end{align} where we defined \(\Delta\vep \deq \,\epij (n^i n^j - m^i m^j)\), and introduced the sine \(h_{s}\) and cosine \(h_{c}\) amplitudes. This is the central equation of the paper. The theoretical spin-2 ULDM-CW signal \Eq{eq:signal} presents two key features. First, the signal is inversely proportional to the spin-2 boson mass \(m\). This inverse linear scaling is also found in dark photon dark matter, where the spin-1 ULDM field carries additional charges such as baryon number \(B\) or baryon minus lepton number \(B-L\), through which the ULDM directly interacts with the mirrors of the detector~\cite{Pierce:2018xmy,Miller:2020vsl}. The inverse linear dependence should be compared with the generic inverse \emph{quadratic} dependence obtained for purely gravitational interactions~\cite{Aoki:2016kwl}. In other words, in the absence of non-gravitational interactions, the signal strength decays much more rapidly with increasing mass (or frequency). This makes it practically impossible to detect such a signal with future GWIs, let alone existing ones. Second, the spin-2 ULDM-CW signal has a unique geometric structure that sets it apart from other CWs. 
Explicitly we have \begin{align} \Delta\vep &= \sqrt{2} \vepCross\left[\left(\vp\cdot\vn\right) \left(\vq\cdot\vn\right) - \left(\vp\cdot\vm\right) \left(\vq\cdot\vm\right)\right] + \frac{\vepPlus}{\sqrt{2}}\left[\left(\vp\cdot\vn\right)^2 - \left(\vq\cdot\vn\right)^2 -\left(\vp\cdot\vm\right)^2 + \left(\vq\cdot\vm\right)^2\right] \nn\\ &~~+ \sqrt{2} \vepL\left[\left(\vq\cdot\vn\right) \left(\vr\cdot\vn\right) - \left(\vq\cdot\vm\right) \left(\vr\cdot\vm\right)\right] + \sqrt{2}\vepR\left[\left(\vp\cdot\vn\right) \left(\vr\cdot\vn\right) - \left(\vp\cdot\vm\right) \left(\vr\cdot\vm\right)\right] \nn\\ &~~+ \sqrt{\frac{3}{2}}\,\vepS\left[\left(\vr\cdot\vn\right)^2 - \left(\vr\cdot\vm\right)^2\right] \label{eq:signal_rpq} \\ &=\frac{\cos2\phi}{\sqrt{2}} \left[\vepPlus\left(\cos^2\theta+1\right) + \vepR\,\sin2\theta + \sqrt3\,\vepS\,\sin^2\theta\right] -\sqrt{2}\sin2\phi\left(\vepCross\,\cos\theta + \vepL\,\sin\theta\right) \,, \label{eq:signal_xy} \end{align} where in obtaining the last expression we have set \(\vn=\vx\) and \(\vm=\vy\), which we can always do for a single L-shaped detector, and we have defined the ULDM reference frame in terms of the detector's frame as \(\vr = (\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), \(\vp = (\cos\theta\cos\phi,\cos\theta\sin\phi,-\sin\theta)\), \(\vq = (-\sin\phi,\cos\phi,0)\); the origins of the two frames are connected by the vector \(r\vr\). Before moving on to our results, a comment is in order here. The detector is moving with respect to the ULDM, this motion being the result of three contributions: (1) the Earth is rotating about its axis with equatorial velocity of approximately \(v\sim10^{-6}\) (this only applies to Earth-bound detectors); (2) the Earth is moving along its orbit around the Sun with speed \(v\sim10^{-4}\); (3) the Solar System is moving through the dark matter halo at a speed of \(v\sim10^{-3}\) causing what is known as the dark matter wind. 
Therefore, in principle we should Lorentz-boost the ULDM frame to the reference frame of the detector. However, owing to the smallness of the velocities in question, the effect of the boost on \(r\vr\) amounts to less than a percent correction to the theoretical signal and can be safely neglected. The relative acceleration of the two frames also induces a Doppler frequency shift \(\Delta f_\text{Doppler}\) that affects the spin-2 ULDM-CW signal, and that needs to be accounted for when designing a data analysis pipeline~\cite{Miller:2020vsl,Frasca:2005ey,DAntonio:2018sff}. All-sky searches for CWs with Earth-bound GWIs resort to semi-coherent methods because it is not computationally feasible to analyse the data from the entire observation campaign in a fully coherent way\footnote{In the case of space-based detectors such as the upcoming LISA interferometer, owing to the sparse sampling frequency of around 1~Hz, compared to the HLV sampling of about \(10^4\)~Hz, this is not an issue.}~\cite{Brady:1998nj,Krishnan:2004sv,Antonucci:2008jp,PhysRevD.90.042002}. In semi-coherent methods the whole data set is broken into shorter time chunks of length \(\tchunk\), each of which is then analysed coherently but separately. One of the advantages of this approach is that, by choosing \(\tchunk<\tdop\deq1/\Delta f_\text{Doppler}\), the Doppler frequency shift can be neglected\footnote{To be more precise, within each chunk the instantaneous Doppler shift that would contribute to \(\dot{f}\) can be neglected, i.e., the frequency is held constant. Nevertheless, in CW searches, in order to identify viable source candidates for the follow-up steps in the hierarchical semi-coherent analysis, the predicted Doppler shift for each chunk and each location in the sky needs to be corrected for. Since there is no ``sky location'' in ULDM searches, this is not a concern.}. Moreover, one should ensure that \(\tchunk<\tcoh\) in order to have a stable ULDM configuration within a given chunk. 
The sensitivity for a coherent analysis over the whole observation campaign time \(\tobs\) scales as \(\tobs^{-1/2}\). In semi-coherent methods, assuming that all \(N\) chunks last the same time \(\tchunk\) and together cover the whole observation run such that \(\tobs=N\tchunk\), the sensitivity scales instead as \(N^{-1/4}\tchunk^{-1/2} = \tobs^{-1/4}\tchunk^{-1/4}\). Thanks to the coherence of the signal, even within the limitations of the semi-coherent methods, the actual strain sensitivity attained by the HLV collaboration for CW searches is more than three orders of magnitude below the design sensitivity \(h_0\) for transient events~\cite{Pisarski:2019vxw,Dergachev:2020fli,Steltner:2020hfd}. The semi-coherent techniques have been adapted and optimised, taking into account the coherence time and the geometry of the signal, for dark photon dark matter searches~\cite{Miller:2020vsl}. They can therefore be tailored for spin-2 ULDM-CW searches by replacing the average over the different polarisations of ULDM waves (which for the spin-1 dark photon case amounts to a factor \(\sqrt{2}/3\)~\cite{Pierce:2018xmy,Miller:2020vsl}) with \(\sqrt{\langle\Delta\varepsilon^2\rangle} = \sqrt{2/5}\). We define the effective theoretical strain amplitude \(h\) for the spin-2 ULDM-CW signal as the root mean square average, taken over all the polarisation angles and the random phase \(\Upsilon\), of the sine and cosine amplitudes of \Eq{eq:signal}: \begin{align}\label{eq:signalaver} h &\deq \langle h_s^2+h_c^2\rangle^{1/2} = \frac{\al\sqrt{\rhoDM}}{\sqrt{5}m\mpl} \,. \end{align} \section{Results} \label{sec:res} In order to estimate the values of \(\al\) accessible with GWIs, we compare the expected theoretical signal \(h\) of \Eq{eq:signalaver} with the design sensitivities of a number of current and planned GWIs (Fig.~\ref{fig:signal}). We find that the HLV detectors can nominally detect spin-2 ULDM for \(\al\gtrsim10^{-4}\), depending on the frequency (Fig.~\ref{fig:signal}). 
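The amplitude of \Eq{eq:signalaver} can be evaluated with standard unit conversions (\(\rhoDM=0.3\)~GeV/cm\(^3\) expressed in natural units, reduced Planck mass \(\simeq2.435\times10^{27}\)~eV); the numbers below are order-of-magnitude estimates only:

```python
import math

HBAR_C_EV_CM = 1.9733e-5                 # hbar*c in eV*cm
RHO_DM_EV4 = 0.3e9 * HBAR_C_EV_CM**3     # 0.3 GeV/cm^3 in eV^4
M_PL_EV = 2.435e27                       # reduced Planck mass in eV

def strain(alpha, m_ev):
    """Effective strain h = alpha*sqrt(rho_DM)/(sqrt(5)*m*M_pl),
    for a boson mass m_ev in eV (natural units, hbar = c = 1)."""
    return alpha * math.sqrt(RHO_DM_EV4) / (math.sqrt(5.0) * m_ev * M_PL_EV)

# e.g. alpha = 1e-4 at the lower edge of the HLV band (m ~ 4e-14 eV)
h = strain(1e-4, 4e-14)                  # of order 1e-22 -- 1e-21
```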
We expect that a dedicated semi-coherent search for the spin-2 ULDM-CW signal will improve the range of detectable \(\al\) by a few orders of magnitude, potentially down to \(\al\sim10^{-7}\) or less for frequencies of tens of Hz, corresponding to masses around the \(10^{-13}\)~eV mark; this is shown in Fig.~\ref{fig:signal} as the dotted line ``HLV opt''---the details on how we obtained this curve can be found in Appendix~\ref{app:opt}. In this frequency range, from \(f\sim10\)~Hz (\(m\sim4\times10^{-14}\)~eV) to \(f\sim10^3\)~Hz (\(m\sim4\times10^{-12}\)~eV) and beyond the planned experiments Einstein Telescope (ET)~\cite{Hild:2010id} and Cosmic Explorer (CE)~\cite{Evans:2016mbw} should reach sensitivities of order \(h_0\sim10^{-22}\text{---}10^{-23}\), further improving the chances to detect spin-2 ULDM. \begin{figure}[htbp] \centering \includegraphics[width=1.0\textwidth]{sensitivitiesVSalpha} \caption{Design sensitivity \(h=h_0\) for several current and planned GWIs, as a function of frequency (solid lines). The dotted line ``HLV opt'' is the optimised sensitivity obtained with a semi-coherent method tailored for spin-2 ULDM-CW searches (Appendix \ref{app:opt}). Overlaid as dashed lines are the signal strains \(h\) of \Eq{eq:signalaver} for different values of the parameter \(10^{-4}\leq\al\leq10^{-10}\). The dot-dashed black line is the spin-2 ULDM-CW strain corresponding to the maximal values of \(\al\) allowed by fifth force constraints, \(h=h(\al_Y)\) with \(\al_Y\) obtained from~\cite{Murata:2014nra,Sereno:2006mw}; the region above this line is excluded.}\label{fig:signal} \end{figure} Future facilities will also be able to probe much lower values of the ULDM mass. 
In the intermediate frequency range \(0.1~\mathrm{Hz}\lesssim f \lesssim1\)~Hz, corresponding to \(4\times10^{-16}~\mathrm{eV}\lesssim m \lesssim4\times10^{-15}\)~eV, the BBO and DECIGO detectors are expected to attain sensitivities of order \(h_0\sim10^{-23}\text{---}10^{-24}\)~\cite{Seto:2001qf,Harry:2006fi}. This means these GWIs could detect a spin-2 ULDM-CW signal down to \(\al\sim10^{-8}\) at those frequencies. In the low-frequency range, the planned space-based interferometer LISA will reach a sensitivity of \(h_0\sim10^{-21}\) for \(f\sim10^{-2}\)~Hz (\(m\sim4\times10^{-17}\)~eV), which means that it could detect spin-2 ULDM with \(\al\sim10^{-7}\) and below. These limits would be much improved with a dedicated pipeline for these interferometers, as is the case for HLV. We collect all the sensitivities as compiled in~\cite{Schmitz:2020syl} and compare them to the theoretical signal in Fig.~\ref{fig:signal}---notice that, strictly speaking, these sensitivities are valid only for the standard tensor modes of GWs, namely \(\vepCross\) and \(\vepPlus\) in our notation, but the differences are small and not relevant for our order of magnitude estimates~\cite{Zhang:2019oet}. \section{Conclusion and outlook} \label{sec:out} GWIs are a unique tool to understand the nature of gravity. In this work we have shown that GWIs have the potential to test the properties of gravity \emph{and} dark matter by detecting or constraining spin-2 ULDM. In particular, we expect that the existing HLV facilities could detect spin-2 ULDM for values of the coupling parameter as small as \(\al\sim10^{-7}\) for frequencies of \(f\sim\mathrm{few}~\times100\)~Hz (that is, a Yukawa range \(\lambda\deq1/(2\pi f)\sim10^4\)~m). A null result would place the most stringent limits on the strength of the Yukawa-like fifth force modification of the inverse-square law of gravitational interaction, quantified by \(\al\), provided that the fifth force is carried by the dark matter.
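The frequency-mass dictionary used throughout (e.g.\ \(f\sim10\)~Hz for \(m\sim4\times10^{-14}\)~eV) is simply the Compton relation \(f=mc^2/h\); a minimal conversion sketch (ours, not the paper's code):

```python
# Sketch (ours): the signal frequency is the Compton frequency of the ULDM
# field, f = m c^2 / h, so f[Hz] = m[eV] / (4.1357e-15 eV s).

H_EV_S = 4.135667e-15  # Planck constant in eV*s

def mass_to_freq(m_ev):
    """ULDM mass in eV -> CW signal frequency in Hz."""
    return m_ev / H_EV_S

def freq_to_mass(f_hz):
    """CW signal frequency in Hz -> ULDM mass in eV."""
    return f_hz * H_EV_S
```

This reproduces the anchor points quoted in the text: \(m=4\times10^{-14}\)~eV gives \(f\approx10\)~Hz, and \(f=10^{-2}\)~Hz gives \(m\approx4\times10^{-17}\)~eV.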
Looking forward, future GWIs in the same frequency range can push this limit even further by up to two orders of magnitude, whereas planned facilities such as DECIGO and the BBO (\(f\sim0.1\)~Hz), and the milli-Hertz space-based LISA interferometer are expected to attain \(\al\lesssim10^{-7}\text{---}10^{-8}\) in their respective frequency ranges; these limits can be significantly improved with a dedicated pipeline for the spin-2 ULDM-CW signal. Our results complement our previous studies on the bounds on the spin-2 ULDM coupling \(\al\) coming from PTAs~\cite{Armaleo:2020yml} and individual pulsar timing data~\cite{Armaleo:2019gil}, which cover the frequency range \(10^{-9}~\mathrm{Hz}\lesssim f \lesssim10^{-3}\)~Hz, and for which comparable limits on \(\al\) were obtained. Our findings should be compared with existing limits on spin-2 ULDM coming from superradiance. By measuring the spin and mass of known black holes and other astrophysical objects, the mass ranges \(6.4\times10^{-22}~\mathrm{eV}\lesssim m \lesssim7.7\times10^{-21}~\mathrm{eV}\), \(1.8\times10^{-20}~\mathrm{eV}\lesssim m \lesssim1.8\times10^{-16}~\mathrm{eV}\) and \(2.2\times10^{-14}~\mathrm{eV}\lesssim m \lesssim2.8\times10^{-11}~\mathrm{eV}\) are excluded, or else these black holes and celestial objects would not be there~\cite{Stott:2020gjj}. These bounds are valid provided that any additional interactions that the bosons might possess are small enough not to interfere with the onset and development of superradiance. In particular, these limits are valid if \(10^{-30}\,\mathrm{eV}/m\ll\al\ll1\)~\cite{Brito:2020lup}, which is satisfied for most of the parameter space we are considering. Therefore, the limits we can obtain from GWIs will both independently exclude some of the parameter space probed by superradiance and test regions not accessible to it.
Spin-2 ULDM can also be detected thanks to the CW signal that superradiance produces, which is physically unrelated to and distinct from the signal we have described in this work~\cite{Brito:2020lup}; no such searches have been carried out yet for spin-2 ULDM. In order to fully take advantage of GWI data to test spin-2 ULDM, a dedicated pipeline should be developed. As we have shown in Sec.~\ref{sec:maths} the signal \Eq{eq:signal} has a peculiar geometric structure that is explicitly given in \Eq{eq:signal_rpq}. Moreover, the ULDM signal is expected to be coherent for a time \(\tcoh = 2/fv^2\). An optimised analysis molded onto the shape of this signal can not only improve the sensitivity of GWIs to spin-2 ULDM-CW, but also discriminate between ULDM and other sources of CWs at different frequencies, such as fast-spinning Galactic neutron stars (at high frequencies) or ultra-compact Galactic binaries (in the milli-Hertz band), CWs coming from superradiance, and other variants of ULDM, furthering our grasp of dark matter and gravity. \acknowledgments The Authors would like to thank C.~Palomba for valuable discussions on the semi-coherent methods used in GW full-sky searches, R.~Brito and V.~Cardoso for an update on the status of spin-2 superradiance, and N.~Tamanini, A.~Klein and R.~Sturani for useful correspondence on CW searches with LISA. FU is supported by the European Regional Development Fund (ESIF/ERDF) and the Czech Ministry of Education, Youth and Sports (MEYS) through Project CoGraDS - \verb|CZ.02.1.01/0.0/0.0/15_003/0000437|. The work of DLN and JMA has been supported by CONICET, ANPCyT and UBA.
\section{Introduction} Yoshida and Soda have studied \cite{Yoshida:2017fao} how a possible cosmological axion background would affect measurements of the electromagnetic memory effect \cite{winicour,bieriED,stromED,Mao:2017wvx,Hamada:2017atr,Hamada:2018cjj}. Not surprisingly, a new radiation mode became observable. On the other hand, the extension of electromagnetic memory to non-Abelian theories has been studied in \cite{Pate:2017vwa,ymmemo2,Jokela:2019apz,Campoleoni:2019ptc}. The purpose of this article is to study how non-Abelian memory would be affected by a simultaneous excitation of a color singlet axion-like (called axionic in the following) degree of freedom. Physically, non-Abelian theories develop a gap and their fields do not propagate as massless radiation. Nevertheless, classical radiation-like color field configurations appear in the framework of analysing the dynamics of collisions of ultrarelativistic large nuclei in terms of the McLerran-Venugopalan model \cite{Ayala:1995kg} and color glass condensate (CGC) \cite{iancu}. A single collision can be interpreted as a burst of classical non-Abelian radiation for which the memory effect can be formulated. However, to obtain physical gauge choice independent results one has to average over an ensemble of color field configurations. We shall set up the problem by studying the case in which there is one large nucleus, the wave function of which is excited by a weak probe, like a single nucleon. We assume that an axionic degree of freedom is also excited, write down the coupled equations of motion, solve \cite{Gelis:2005pt} for the fluctuations of the gauge fields induced by the axion and finally compute the effect on the memory, i.e.\ the total transverse kick of a test particle. The outcome is that on the classical level there indeed is a new parity violating mode, in analogy with \cite{Yoshida:2017fao}. However, when going over to quantum theory, the effect averages itself out in the infinitely contracted nucleus limit.
We list several effects which could contribute in the finite width limit, but we are so far unable to compute them. These can be addressed in future studies. In cosmology, the motivation for studying an axionic background is, for example, dark matter. In non-Abelian field theory, QCD, the motivation is the anomalous non-conservation of the axial U(1) current. There is an extensive literature on the appearance of these phenomena in nucleus-nucleus collisions \cite{Kharzeev:2015kna}. Axion-like effects can appear in the single nucleus case in a very subtle way in polarized deep inelastic scattering \cite{Tarasov:2020cwl}. Non-Abelian gauge fields together with axions appear also in studies of inflationary cosmology \cite{Maleknejad:2012fw,Lozanov:2018kpk}. The rest of this paper is organized as follows. The coupled equations of motion and a solution iterative in the axion coupling are written down in Section~\ref{eomsec}; leading order equations are solved in Section~\ref{O0}, and next-to-leading ones in Section~\ref{O1}, at the end of which axion-corrected gauge fields are summarized. The effect on the memory is derived in Section~\ref{memo} and Section~\ref{sec:conclusions} contains our conclusions. Two appendices contain additional details. \medskip \paragraph{Conventions} Color conventions are $D_\mu={\partial}_\mu-igA_\mu\equiv D_\mu(A)$, $A_\mu=A_\mu^a T_a$, $[T_a,T_b]=if_{abc}T_c$, $a,b,c=1,...,N_c^2-1$, $F_{\mu\nu}=i/g [D_\mu,D_\nu]={\partial}_\mu A_\nu-{\partial}_\nu A_\mu-ig[A_\mu,A_\nu]\equiv F_{\mu\nu}(A)$. Under a unitary gauge transformation $U(x)$ \begin{equation} A_\mu\to A_\mu^\prime=UA_\mu U^\dagger+i/g\, U{\partial}_\mu U^\dagger,\quad F_{\mu\nu}\to F_{\mu\nu}^\prime=UF_{\mu\nu} U^\dagger. \la{gt} \end{equation} The adjoint representation is defined by $(T^a)_{bc}=-if_{abc}$.
For commutators of color matrices $M=M_a T^a$ in any representation we use $[M,N]_c=if_{abc}M_aN_b\equiv M^\rmi{adj}_{cb}N_b=(M^\rmi{adj}N)_c$, where in the RHS $M^\rmi{adj}$ is in the adjoint representation and $N=(N_b)$ is a color vector. In a related projection we may have a matrix equation $UMU^\dagger = J$ in any representation, where $M,J$ are Lie algebra elements, $M=M_a T_a$, ${\rm Tr\,} T_aT_b=T_R\delta_{ab}$. Then the $a$-component of the equation is \begin{equation} J_a = \fra1{T_R}{\rm Tr\,}[T_aUT_bU^\dagger]M_b\equiv C(U)_{ab}M_b \la{cu} \end{equation} and the equation is written as a matrix$\times$vector equation $C(U)M=J$ with a new adjoint matrix $C(U)$. The transformation $C(U)$ forms a representation of the group in the sense that $C(UV)=C(U)C(V)$. However, if $U,\,T_a$ are in the adjoint representation, the new matrix is exactly the same as the original $U$ matrix, \begin{equation} C(U)=\fra1{T_R}{\rm Tr\,}[T_aUT_bU^\dagger]=U_{ab}\ . \la{uadj} \end{equation} This is easy to verify infinitesimally, by writing $U\approx 1+i\theta_a T_a$ and doing the trace. The matrix equation $UMU^\dagger = J$ then has become a matrix$\times$vector equation $UM=J$. The metric convention is mostly plus; in light cone coordinates, $v\equiv v^\mu =(v^+,v^-,{\bf v})$, $v^+ = \fra1{\sqrt2}(v^0+v^1)=-v_-$, $v^i=v_i$, $v\cdot u=-v^+u^--v^- u^++{\bf v}\cdot{\bf u}$, and $v_T=|{\bf v}|$. Here we have taken $x^1=x_1=x_L$ as the longitudinal coordinate, $x^i=x_i=(x^2,x^3)$ are then the transverse ones. \section{Equations of motion}\la{eomsec} The action of $N_f=0$ QCD with a pseudoscalar axion $\chi$ is \begin{equation} S[A^\mu_a,\chi]=\int d^4x\left[-\fra14 F_{\mu\nu}^a F^{\mu\nu}_a-\fra{\lambda}4 \chi F_{\mu\nu}^a \tilde F^{\mu\nu}_a-\fra{f^2}2{\partial}_\mu\chi {\partial}^\mu\chi - V(\chi)+J_\mu^a A^\mu_a \right] \ . \la{sax} \end{equation} We focus the attention on the coupling of axions with the gluon sector and omit quarks from consideration.
Here $F^a_{\mu\nu}$ is the field tensor of SU($N_c$) Yang-Mills theory, $\tilde F^a_{\mu\nu}$ its dual, $\lambda$ is a dimensionless parameter counting the number of axion interactions, $f$ is a parameter of mass dimension one, $V(\chi)$ is the axion potential, often chosen as $ \mu^4(1- \cos\chi)$ to retain some shift symmetry, and $J$ is a color current. Without the axion this action and its extensions have been used to study \cite{ymmemo2,Jokela:2019apz} the possibility and even phenomenology of YM memory \cite{Pate:2017vwa,Campoleoni:2019ptc} in heavy ion collisions in which particularly large densities of gluons and thus classical gluon fields are involved. The aim of the present study is to investigate how the possible existence of an axionic interaction would affect these considerations. In the cosmological context the effect of a cosmic axion background on the usual U(1) electromagnetic memory has been studied in \cite{Yoshida:2017fao,Mao:2017wvx,Hamada:2018cjj}. While the usual memory is of $E$-type \cite{winicour}, parity-breaking properties of the axion lead also to the appearance of $B$-type memory. Interactions between YM fields and axions have also been studied in cosmology in the context of inflation \cite{Maleknejad:2012fw,Lozanov:2018kpk}. In the spirit of \cite{ymmemo2,Jokela:2019apz} we shall assume the classical YM fields are those appearing in a nuclear wave function excited by a weak probe. Associating an axion with these phenomena is speculative. However, in the study of deep inelastic scattering on polarized hadrons a momentum structure $\epsilon^{{\mu\nu}\alpha\beta}p_\mu q_\alpha \chi(p,q)$, analogous to that in \nr{sax}, naturally enters. This is due to the appearance of the chiral triangle anomaly in polarized DIS, discussed, for example, in \cite{Tarasov:2020cwl}. We suggest that it would be useful to study how the assumption of a particle-like axion state would fit in the framework of classical fields in large nuclei in the infinite momentum frame.
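The adjoint identity of Eq.~\nr{uadj}, $C(U)_{ab}=U_{ab}$, which is used repeatedly below, is easy to confirm numerically beyond the infinitesimal check. The following pure-Python sketch (ours) does so for SU(2), where $(T_a)_{bc}=-i\epsilon_{abc}$ and ${\rm Tr\,}T_aT_b=2\delta_{ab}$, i.e.\ $T_R=2$, taking $U$ to be a finite rotation about the 2-axis:

```python
import math

# Sketch (ours, SU(2) only): check C(U)_{ab} = (1/T_R) Tr[T_a U T_b U^dag]
# = U_{ab} when U and the generators T_a are both in the adjoint rep.

def eps(a, b, c):                        # Levi-Civita symbol, indices 0..2
    return ((a - b) * (b - c) * (c - a)) // 2

# adjoint generators (T_a)_{bc} = -i eps_{abc}
T = [[[-1j * eps(a, b, c) for c in range(3)] for b in range(3)]
     for a in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

# normalization Tr T_a T_b = 2 delta_{ab}, so T_R = 2
tr_err = max(abs(trace(matmul(T[a], T[b])) - (2.0 if a == b else 0.0))
             for a in range(3) for b in range(3))

# finite adjoint element: U = exp(i theta T_2), a rotation about the 2-axis
th = 0.7
c, s = math.cos(th), math.sin(th)
U = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]
Udag = [[U[j][i] for j in range(3)] for i in range(3)]  # U is real orthogonal

C = [[trace(matmul(matmul(T[a], U), matmul(T[b], Udag))) / 2.0
      for b in range(3)] for a in range(3)]
cu_err = max(abs(C[a][b] - U[a][b]) for a in range(3) for b in range(3))
```

Both `tr_err` and `cu_err` vanish to machine precision, confirming Eq.~\nr{uadj} for a finite group element.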
The action \nr{sax} leads to the equations of motion \begin{eqnarray} D_\mu F^{\mu\nu} & = & J^\nu-\lambda \,{\partial}_\mu\chi\,\tilde F^{\mu\nu} ,\qquad D_\mu\,\tilde F^{\mu\nu}=0 \la{eom1} \\ f^2\square \chi-V'(\chi) & = & \fra\lambda 4 F_{\mu\nu} \,\tilde F^{\mu\nu} \ . \la{eom2} \end{eqnarray} Defining the axion current \begin{equation} j_\rmi{ax}^\nu=-{\partial}_\mu\chi\,\tilde F^{\mu\nu} \end{equation} we have automatically ($\chi$ is a color singlet) \begin{equation} D_\nu j_\rmi{ax}^\nu =0 \ , \la{axcurrcons} \end{equation} so that the current $J^\nu$ has to satisfy the condition \begin{equation} D_\nu J^\nu=0 \ . \end{equation} We shall further split the current $J$ into components corresponding to a nucleus $A$ and a probe $p$ moving along opposite light cones: \begin{equation} J=J_A+j_p \ , \end{equation} where $J_A$ has only a $+$ and $j_p$ only a $-$ component. To approximately solve the equations \nr{eom1} and \nr{eom2} we write \begin{equation} A_\mu = A_\mu^{(0)}+\lambda A_\mu^{(1)}+\ldots \equiv A_\mu +\lambda a_\mu+\ldots \qquad \chi = \chi_0+\lambda \chi_1+\ldots \end{equation} and iterate to first order in $\lambda$, treating $j_p^\nu$ as ${\cal O}(\lambda^1)$. The intent is to include only the fluctuations caused by the axion; more generally, there will be quantum fluctuations with different momentum spectra.
Expanding in $\lambda$, \begin{eqnarray} D_\mu(A +\lambda a)F^{\mu\nu}( A +\lambda a)&=& J_A^\nu+\lambda j_p^\nu + \lambda j_\rmi{ax}^\nu \la{full} \\ f^2\square (\chi_0+\lambda \chi_1)-V'(\chi_0+\lambda \chi_1) &=&\fra\lambda 4 F_{\mu\nu}(A +\lambda a) \,\tilde F^{\mu\nu}(A +\lambda a) \\ D_\nu(A +\lambda a)(J_A^\nu+\lambda j_p^\nu + \lambda j_\rmi{ax}^\nu) & = & 0 \ , \end{eqnarray} leads to: \vspace{4mm} ${\cal O}(\lambda^0)$ equations: \begin{eqnarray} D_\mu(A)F^{\mu\nu}(A) & = & J_A^\nu \la{01} \\ f^2 \square \chi_0-V'(\chi_0) & = & 0 \la{02}\\ D_\nu(A)J^\nu_A & = & 0 \ .\la{03} \end{eqnarray} ${\cal O}(\lambda^1)$ equations (with one order $\lambda$ correction) \begin{eqnarray} [D^2a^\nu - D^\nu D\cdot a+2ig F^{\mu\nu} a_\mu]_c & = & -{\partial}_\mu\chi_0\cdot \tilde F^{\mu\nu}_c \la{11} \\ f^2\square\chi_1-\chi_1 V''(\chi_0) & = & F_{\mu\nu}^a\tilde F^{\mu\nu}_a+\lambda\tilde F^{\mu\nu} D_\mu a_\nu \la{12} \\ D_\nu(A)(j_p^\nu+j_\rmi{ax}^\nu)-ig a_\nu J_A^\nu & = & 0 \ . \la{13} \end{eqnarray} In \nr{11}-\nr{13} $D^\nu,F^{\mu\nu},\tilde F^{\mu\nu}$ are adjoint representation matrices, $F=F^a T^a$, $T^a_{bc}=-if_{abc}$, evaluated at $A_\mu$, the solution of \nr{01}; $a_\mu^c$ is a color vector. The order $\lambda$ correction in \nr{12} is included since the leading term actually vanishes. Note that $\lambda$ is here intended as a parameter counting axionic interactions. Perturbatively, there are also quantum fluctuations of order $g$, which are neglected here; the background field $A$ is taken to be a purely classical YM field. Quantum effects will enter by integrating over an ensemble of color currents. We shall now apply these equations to a process in which a weakly interacting probe $p$ moves along the $x^-$ axis and collides with a large nucleus $A$ moving along the $x^+$ axis, see Fig.~\ref{burst} (Left).
On an event-by-event basis the collision excites the nucleus to an effective color field configuration together with, as is assumed in this work, a weak axionic configuration. We shall solve these configurations in order to check whether they can be represented in the framework of a memory effect. We shall work here in the very high energy approximation of a Lorentz contracted infinitely thin nuclear sheet. The thickness parameter $\epsilon$ is taken to zero at the end of the computation. In usual discussions of the memory effect the coordinate $x^\mu=(u,r,\theta,\phi)$, $u=t-r$, with the line element \begin{equation} ds^2=-du^2-2du\,dr +r^2(d\theta^2+\sin^2\theta d\phi^2) \end{equation} is a natural one to use, with, for the angular part, a more general metric $ds^2=h_{AB}\,d\theta^A d\theta^B$, $A,B=1,2$, on $S^2$. The relation to the light cone coordinates is simply $x_L=r\cos\theta$ with $\theta\to0$ so that the surface of $S^2$ is flattened. Then \begin{equation} \sqrt2 x^- = u+r-r\cos\theta\approx u \end{equation} and the $t,x_L$ space-time diagram and the flat space Penrose diagram can be qualitatively related as in Fig.~\ref{burst}. In the Penrose diagram null infinity is brought to a finite distance by a conformal transformation; in the $t,x_L$ space-time diagram the dominant field configuration is $x^+$ independent and the ``null infinity'' is at some large value of $x^+$. \section{Leading order equations}\la{O0} The ${\cal O}(\lambda^0)$ equation for the gauge field $A^\mu$ is, in color matrix$\times$color vector notation, \begin{equation} D_\mu F^{\mu\nu} = J_A^\nu=\delta^{\nu +}\rho(x^-,{\bf x}) \ . \la{YM} \end{equation} Here $\rho$ is the color current of a nucleus $A$ moving in the $x^+$ direction in the infinite momentum frame.
In the extreme thin sheet approximation one is tempted to write $\rho(x^-,{\bf x})=\delta(x^-)\rho({\bf x})$, but when integrating over $x^-$ one should first take a finite range and then let the upper limit go to zero, see remarks around \nr{wrong}. Thus $\rho$ is concentrated in the range $0<x^-<\epsilon$, $\epsilon\to0$. It is also crucial for the following that there is no $x^+$ dependence, due to infinite time dilatation. Below we shall keep track of $x^+$ dependence, too. For a nucleus moving in the $x^+$ direction it is convenient to choose the light cone gauge\footnote{The logic of naming gauges is as follows. Since one $\pm$ component is fixed ($A^-=0$) the gauge is a light cone (LC) gauge. This comes in two variants, either as a longitudinal LC gauge, also called $A^+$ gauge ($A^+$ nonzero) or as a transverse LC gauge, also called $A^i$ gauge ($A^i$ nonzero).} $A^-=-A_+=0$. Then a current with only a $+$ component and no $x^+$ dependence automatically satisfies $D_\mu J^\mu={\partial}_+ J^+(x^-,{\bf x})=0$, as required by \nr{YM}. Imposing first just the gauge condition $A^-=0$, $A^\mu=(A^+,0,A^2,A^3)$, the field tensor is, in the $(+,-,2,3)$ basis, \begin{eqnarray} F_{\mu\nu}&=&\left( \begin{array}{cccc}0 & -{\partial}_+ A^+ & {\partial}_+A_2&{\partial}_+A_3 \\ {\partial}_+ A^+ & 0 &F_{-2} & F_{-3} \\ -{\partial}_+A_2 & -F_{-2} &0 &F_{23}\\ -{\partial}_+A_3 & -F_{-3} &-F_{23} &0 \end{array} \right) \nonumber \\ &=& \fr1{\sqrt2}\left( \begin{array}{cccc}0 & -\sqrt2E_L & E_2-B_3 &E_3+B_2 \\\sqrt2 E_L & 0 &E_2+B_3 & E_3-B_2 \\- E_2+B_3 & -E_2-B_3 &0 & -\sqrt2B_L\\ -E_3-B_2 &-E_3+B_2 &\sqrt2 B_L &0 \end{array} \right) \ . \la{aminus} \end{eqnarray} Here the second form of $F_{\mu\nu}$ records what the color electric and magnetic fields would be with the usual 3d associations $F_{0i}=E_i,\,\,F_{ij}=-\epsilon_{ijk}B_k$, where $i,j,k=1,2,3$, and remembering that $x_1\equiv x_L$ is the longitudinal coordinate.
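The light cone entries of \nr{aminus} can be cross-checked mechanically (our own sketch, not part of the paper): build the Cartesian $F_{\mu\nu}$ from arbitrary $E_i$, $B_i$ with the associations above, transform with $x^{\pm}=(x^0\pm x^1)/\sqrt2$, and compare with the quoted $E$, $B$ form:

```python
import math

# Sketch (ours): verify the (+,-,2,3)-basis field tensor of Eq. (aminus)
# by explicit coordinate transformation of the Cartesian F_{mu nu}.

def eps(a, b, c):                        # Levi-Civita symbol, indices 0..2
    return ((a - b) * (b - c) * (c - a)) // 2

def f_cartesian(E, B):
    """Cartesian F_{mu nu}: F_{0i} = E_i, F_{ij} = -eps_{ijk} B_k."""
    F = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        F[0][i + 1] = E[i]
        F[i + 1][0] = -E[i]
        for j in range(3):
            F[i + 1][j + 1] = -sum(eps(i, j, k) * B[k] for k in range(3))
    return F

def to_lightcone(F):
    """F'_{ab} = (dx^m/dx'^a)(dx^n/dx'^b) F_{mn}, rows ordered (+,-,2,3)."""
    r = 1.0 / math.sqrt(2.0)
    J = [[r, r, 0, 0], [r, -r, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    return [[sum(J[a][m] * J[b][n] * F[m][n]
                 for m in range(4) for n in range(4))
             for b in range(4)] for a in range(4)]

EL, E2, E3 = 0.3, -1.2, 0.5              # arbitrary test field values
BL, B2, B3 = 0.7, 0.4, -0.9
Flc = to_lightcone(f_cartesian((EL, E2, E3), (BL, B2, B3)))

r, s2 = 1.0 / math.sqrt(2.0), math.sqrt(2.0)
expected = [[v * r for v in row] for row in [
    [0.0, -s2 * EL, E2 - B3, E3 + B2],
    [s2 * EL, 0.0, E2 + B3, E3 - B2],
    [-E2 + B3, -E2 - B3, 0.0, -s2 * BL],
    [-E3 - B2, -E3 + B2, s2 * BL, 0.0],
]]
diff = max(abs(Flc[a][b] - expected[a][b])
           for a in range(4) for b in range(4))
```

The transformed tensor agrees entry by entry with the second form of \nr{aminus}.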
These explicit forms emphasize the strong effect of the approximation of $x^+$ independence, putting ${\partial}_+=0$. Firstly, the longitudinal electric field vanishes, $E_L=0$. Secondly, the transverse electric and magnetic fields are related: \begin{equation} (E_2,E_3)=(B_3,-B_2) \ , \ E_i=\epsilon_{ij}B_j \ , \ B_i=-\epsilon_{ij}E_j \ , \ E_iB_i=0 \ . \la{fields} \end{equation} and orthogonal, $\tilde F^{\mu\nu} F_{\mu\nu}=0$ (see below). With no $x^+$ dependence, the only nonzero components of $F_{\mu\nu}$ are $F_{ij}$ and $F_{-i}= F^{i+}={\partial}_-A_i+D_iA^+$ and the field tensor is \begin{equation} F_{\mu\nu}=\left( \begin{array}{cccc}0 & 0 & 0 &0 \\ 0 & 0 &F_{-2} & F_{-3} \\ 0 & -F_{-2} &0 &F_{23}\\ 0 & -F_{-3} &-F_{23} &0 \end{array} \right) \ . \la{fmn} \end{equation} For the dual tensor we have ($\tilde F^{+i}=-\epsilon_{ij}F_{-j}$), \begin{equation} \tilde F^{\mu\nu}= \fra12 \epsilon^{{\mu\nu}\alpha\beta}F_{\alpha\beta}= \left( \begin{array}{cccc}0 & F_{23} & -F_{-3} &F_{-2} \\ -F_{23} & 0 & {\partial}_+A_3 & -{\partial}_+A_2 \\ F_{-3} &- {\partial}_+A_3 &0 & -{\partial}_+A^+\\ -F_{-2} &{\partial}_+A_2 & {\partial}_+A^+ &0 \end{array} \right)= \sqrt2\left( \begin{array}{cccc}0 & 0 & B_2 &B_3 \\ 0 & 0 &0 &0 \\ -B_2 & 0 &0 &0\\ -B_3 &0&0 &0 \end{array} \right) \end{equation} with \begin{equation} \tilde F^{\mu\nu} F_{\mu\nu} = \fra12\epsilon^{{\mu\nu}\alpha\beta}F_{\mu\nu} F_{\alpha\beta}= 4\,\Big[{\partial}_+A_3\cdot F_{-2} -{\partial}_+A_2\cdot F_{-3}- {\partial}_+ A^+ \cdot F_{23}\Big]. \la{bdote} \end{equation} The second form of the dual follows from $x^+$ independence and choosing $A^+$ gauge in which $F_{23}=0$ (see below). One sees concretely how the vanishing of $\tilde FF$ follows from $x^+$ independence. We can now return to solving the YM equations \nr{YM} choosing $A^-=0$ and assuming $x^+$ independence. The $\nu=-$ component of \nr{YM}, $D_+F^{+-}+D_iF^{i-}=J_A^-=0$, is identically satisfied since the field tensor components vanish. 
The $+,i$ components of \nr{YM} are \begin{eqnarray} D_iF^{i+} & = & D_i{\partial}_-A_i + D_iD_i A^+=\rho(x^-,{\bf x}) \la{am0} \\ D_jF^{ji} & = & 0 \ . \la{amj} \end{eqnarray} We recall that an equation $DF=\rho$ is short for a matrix equation $D_{ab}F_b=\rho_a$. Using the associations in \nr{fmn} we can equally write \begin{equation} D_iF^{i+}=\sqrt2 D_iE_i=\sqrt2 \epsilon_{ij}D_iB_j=\rho \ . \end{equation} \begin{figure}[!t] \begin{center} \includegraphics[width=0.56\textwidth]{spacetimeplot_clean.pdf}\hspace{1cm} \includegraphics[width=0.36\textwidth]{EM_kick.pdf} \end{center} \caption{\small {\bf{Left:}} A weak probe $p$ moving along the $x^-$ axis (current $j_p^\nu$) collides with a large nucleus moving along the $x^+$ axis (current $J_A^\nu$) spread over a distance $\epsilon$ in the $x^-$ direction. As discussed in Sections~\ref{O0}~and~\ref{O1}, a colored field configuration with an axionic component with the current $j_\rmi{ax}^\nu$ is excited in the strip $0<x^-<\epsilon$. Computations are carried out in the limit $\epsilon\to0$. Memory is the kick experienced by a test particle (vertical red line, Section~\ref{memo}). {\bf{Right:}} For comparison, a Penrose diagram presentation of the memory effect in electrodynamics. A radiator at $r=0$ sends a pulse of radiation to null infinity ${\cal I}^+$ during the time interval $u_i<u<u_f$. The time integrated pulse of transverse electric field gives a total momentum kick in \protect\nr{p} to a test charge at null infinity.
}\la{burst} \end{figure} Further discussion of \nr{am0} and \nr{amj} splits naturally into two branches: one can find solutions with either the longitudinal light cone gauge (LLC) $A^i=0$, which we call $A^+$ gauge, this being the only non-zero component: \begin{eqnarray} &&A^\mu =(\tilde A^+(x^-,x^i),0,0,0),\quad D_\mu =({\partial}_+,{\partial}_-+ig\tilde A^+,{\partial}_i) \nonumber \\ &&D^\mu =(-{\partial}_--ig\tilde A^+,-{\partial}_+,+{\partial}_i),\quad D^2=\square-2ig\tilde A^+{\partial}_+ \nonumber \\ &&F^{+-}=F^{-i}=0, \quad F_{-i}={\partial}_i \tilde A^+=\sqrt2 \tilde E_i=\epsilon_{ij}\sqrt2 \tilde B_j \ , \la{LLC} \end{eqnarray} or the transverse light cone gauge (TLC) $A^+=0$, which we call $A_i$ gauge, these being the only non-zero components: \begin{eqnarray} &&A^\mu=(0,0,A^i(x^-,x^j)=\fra i g U{\partial}_iU^\dagger),\quad {\partial}_- U=igU\tilde A^+ \la{ai} \\ &&D_\mu =({\partial}_+,{\partial}_-,{\partial}_i-igA_i),\quad D^2=-2{\partial}_+{\partial}_-+D_iD_i \nonumber \\ &&F^{+-}=F^{-i}=0, \quad F_{-i}={\partial}_- A^i=\sqrt2 E_i=\epsilon_{ij}\sqrt2 B_j \ . \la{TLC} \end{eqnarray} Whenever confusion might arise, quantities in $A^+$ gauge will be appended with a tilde. In these equations, $\tilde A^+$ is determined from the equation of motion \nr{am0} with $A^i=0$: \begin{equation} D_\mu F^{\mu +}=D_iF^{i+}={\partial}_i{\partial}_i \tilde A^+(x^-,x^i)=\tilde\rho(x^-,x^i)\ , \la{poisson} \end{equation} {\emph{i.e.}}, by inverting a 2d transverse Poisson equation, \begin{equation} \tilde A^+(x^-,{\bf x})=\int d^2y\,G({\bf x}-{\bf y})\tilde\rho(x^-,{\bf y})=\fr1{2\pi}\int d^2y\log(|{\bf x}-{\bf y}|\Lambda) \tilde\rho(x^-,{\bf y}) \ , \la{A+green} \end{equation} where $\Lambda$ is an IR cutoff parameter. A key role in the following is played by the adjoint matrix $U(x^-,{\bf x})$ transforming from the $A^+$ to the $A_i$ gauge, {\emph{i.e.}}, transforming $\tilde A^+$ to zero.
According to \nr{gt} this matrix $U$ has to satisfy \begin{equation} {\partial}_-U^\dagger +ig\tilde A^+U^\dagger = D_-U^\dagger =0 \ , \la{D-U} \end{equation} {\emph{i.e.}}, \begin{equation} U(x^-,x^i)=P\exp\biggl[ig\int_0^{x^-} dy^- \tilde A^+(y^-,x^i)\biggr]U(0,x^i) \ , \la{U} \end{equation} which transforms $\tilde A^+$ to zero. Note that we define $U$ with $+ig$ in the exponent. Since \begin{equation} {\partial}_+U(x^-,{\bf x})=0 \ , \end{equation} $A^-=0$ is intact and one transforms from the longitudinal to the transverse LC gauge. Since the first order matrix equation \nr{D-U} is homogeneous, its solution \nr{U} could be multiplied by an arbitrary matrix function $M(x^+,{\bf x})$. The transverse field $A_i$ given in \nr{ai} is then generated and the field tensors transform, in matrix notation, as \begin{equation} F^{i+}={\partial}_-A^i=U\tilde F^{i+}U^\dagger= U{\partial}_i\tilde A^+U^\dagger \ . \la{Ftrans} \end{equation} In component form we can as well write, using \nr{uadj}, \begin{equation} {\partial}_-A_{ia}=U_{ab}\, {\partial}_i\tilde A^+_b \ , \ {\partial}_i\tilde A^+_a=(U^{-1})_{ab}\,{\partial}_-A_{ib}= U_{ba}{\partial}_-A_{ib} \ . \la{etrafo} \end{equation} As discussed above, $\rho$ and $A^+$ are confined in the range $0<x^-<\epsilon\to 0$, due to Lorentz contraction, and one might be tempted to insert $\delta(x^-)$ for the $x^-$ dependence. However, then $U$ in \nr{U} would be $\sim$ \begin{equation} \exp[-igA^+(0,{\bf x})]\theta(x^-)+\theta(-x^-) \la{wrong} \end{equation} and this form does not satisfy \nr{D-U}. One should keep the path ordered integral and only at the end take the range to zero. The field configuration excited by a weak probe is thus very simple: just radiation-like, mutually orthogonal color electric and magnetic fields. The situation is quite different in glasma, the state excited in a collision of two large systems \cite{Lappi:2006fp}. Then also longitudinal fields are excited.
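The failure of the unordered form \nr{wrong} is easy to see numerically. In the toy sketch below (ours, SU(2) fundamental representation, $g=1$, with an invented non-commuting profile for $\tilde A^+(y^-)$) the path-ordered product reduces to the plain exponential when the profile commutes with itself at different $y^-$, and visibly differs from it when it does not:

```python
import math

# Sketch (ours): path-ordered vs. naive exponential for 2x2 color matrices.
# Only the ordered product Pexp[i int A] solves d_- U = i U A (cf. Eq. (D-U)).

S1 = ((0.0, 1.0), (1.0, 0.0))            # Pauli matrices sigma_1, sigma_3
S3 = ((1.0, 0.0), (0.0, -1.0))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pexp(Afun, x, steps=4000):
    """U(x) = P exp[i int_0^x dy A(y)]; later slices multiply from the right,
    so that U(x+dx) = U(x)(1 + i dx A(x)), i.e. dU/dx = i U A."""
    dy, U = x / steps, [[1.0, 0.0], [0.0, 1.0]]
    for k in range(steps):
        A = Afun((k + 0.5) * dy)
        step = [[(1.0 if i == j else 0.0) + 1j * dy * A[i][j]
                 for j in range(2)] for i in range(2)]
        U = matmul(U, step)
    return U

def exp_asigma(a1, a3):
    """Closed form of exp[i (a1 sigma_1 + a3 sigma_3)]."""
    r = math.hypot(a1, a3)
    c, s = math.cos(r), (math.sin(r) / r if r else 1.0)
    return [[c + 1j * s * a3, 1j * s * a1], [1j * s * a1, c - 1j * s * a3]]

def dist(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

# commuting profile A(y) = cos(y) sigma_3: ordering is irrelevant
commuting = dist(pexp(lambda y: [[math.cos(y) * S3[i][j] for j in range(2)]
                                 for i in range(2)], 2.0),
                 exp_asigma(0.0, math.sin(2.0)))
# non-commuting profile A(y) = cos(y) sigma_1 + sin(y) sigma_3: it is not
noncomm = dist(pexp(lambda y: [[math.cos(y) * S1[i][j] + math.sin(y) * S3[i][j]
                                for j in range(2)] for i in range(2)], 2.0),
               exp_asigma(math.sin(2.0), 1.0 - math.cos(2.0)))
```

`commuting` vanishes up to discretization error, while `noncomm` is of order one, which is precisely why the finite range $0<x^-<\epsilon$ must be kept until the end.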
Consider then the axion equation \nr{eom2} or its expanded versions \nr{02} and \nr{12}. Since the leading term for $F\tilde F$ vanishes, the equation to order $\lambda^0$ and to order $\lambda^1$ is simply the free scalar equation, \begin{equation} \square \chi -m^2\chi =0 \ . \end{equation} The simplest approximation $V(\chi)=\fra12 f^2m^2\chi^2$, where $m$ is the axion mass, is used for the potential. Actually the axion will induce an order $\lambda^2$ inhomogeneous term $\fra{\lambda^2}{f^2}{\partial}_+(a_i\sqrt2 B_i)$ on the RHS (see Eq.~\nr{newffd} below). Thus to ${\cal O}(\lambda)$ the axion is simply a plane wave state $\chi_0(k)\, e^{ik\cdot x}$, $2k^+k^-=k_T^2+m^2$. Since the inhomogeneous term is of higher order the normalization is unknown; actually it is too much to expect that the normalization could be determined. If the axion is an exponential in time, $\sim e^{imt}$, then along $x^-=0$, just after the nuclear sheet, where we really need it, \begin{equation} \chi_0(mx^+,0,{\bf x})=\chi_0({\bf x})e^{i\fra{m}{\sqrt2} x^+} \ . \la{chimt} \end{equation} We will also need the combination \begin{equation} \fra1{{\partial}_+}{\partial}_+\chi_0=\chi_0( mx^+,0,{\bf x})-\chi_0(0,0,{\bf x})\equiv \bar\chi_0( mx^+,0,{\bf x}) \ , \la{invplus} \end{equation} using the normalization \nr{invd+} of the inverse ${\partial}_+$. This vanishes when $m\to0$ \cite{Hill:2015vma}. Of course, the $x^+$ integration constant in \nr{invd+} is basically unknown. At this point one may also compare the situation in the QCD and cosmological contexts, in view of Fig.~\ref{burst}. In the cosmological context \cite{Yoshida:2017fao} one has an emitter at $r=0$ which during a time interval $u_f-u_i$ sends a pulse to null infinity. Actually the emitted radiation is cosmological background radiation and null infinity is here, where the radiation is observed.
In the course of its propagation the radiation passes through a (tentative) axionic dark matter background and this affects the polarization properties of the radiation so that not only $E$-mode but also $B$-mode radiation is observed. This is a memory effect. There is only one universe, but the observed effect is still an average over many subsystems. In the QCD case, the field configuration is excited by a probe colliding with the nucleus. The transverse radiation potential $A_i$ grows within the shock wave $0<x^-<\epsilon$ essentially $\sim \theta_\epsilon(x^-,{\bf x})$, so that the fields, i.e.\ the derivatives of $A_i$, are $\sim \delta_\epsilon(x^-,{\bf x})$ and can produce a finite result when integrated over $0<x^-<\epsilon\to0$. The required parity violation resides in the anomalous non-conservation of the axial current. It has been extensively discussed, in the form of the chiral magnetic effect, mainly in the central region of nucleus-nucleus collisions, less so in phenomena involving a single nuclear sheet (see, however, \cite{Tarasov:2020cwl}). \section{Next-to-leading order equations}\la{O1} Inserting the computed background field into \nr{11} we have the fluctuation equation \begin{equation} D^2a^\nu - D^\nu D\cdot a+2ig F^{\mu\nu} a_\mu =j^\nu_p+ j^\nu_\rmi{ax} \ , \end{equation} where \begin{equation} j^\nu_p=\delta^{\nu -}\delta(x^+)\rho_p({\bf x}),\quad j^\nu_\rmi{ax}= \epsilon_{ik}{\partial}_k A^+(\delta^{\nu i}{\partial}_+\chi_0-\delta^{\nu +}{\partial}_i\chi_0) \la{currs} \end{equation} and \begin{equation} j^\nu_p+ j^\nu_\rmi{ax}=(\sqrt2 B_i{\partial}_i\chi_0,\,\,\delta(x^+)\rho_p({\bf x}), -\sqrt2 B_i{\partial}_+\chi_0) \ , \la{curr} \end{equation} where we introduced the magnetic field from \nr{fields}, $\sqrt2 B_i=-\epsilon_{ij}{\partial}_j\tilde A^+$, and recall $\chi_0=\chi_0( mx^+,0,{\bf x})$ in \nr{chimt}.
We will discuss these equations in the $A^+$ gauge for the background, with the fluctuation field in the gauge $a^-=0$: \begin{equation} A^\mu+a^\mu=(A^+(x^-,{\bf x})+a^+,0,a^i) \ , \end{equation} in which they have the explicit form ($D\cdot a=D_\mu a^\mu = {\partial}_+a^+ +{\partial}_i a^i$) \begin{eqnarray} & & \nu = - \quad {\partial}_+({\partial}_+a^+ +{\partial}_i a^i) = j_p^- \la{eqq1} \\ & & \nu = i \quad \square a_i-2igA^+{\partial}_+a_i- {\partial}_i({\partial}_+a^+ +{\partial}_i a^i) = j^i_\rmi{ax} \la{eqq2} \\ & & \nu = + \quad \square a^+-2igA^+{\partial}_+a^++ ( {\partial}_-+igA^+) ({\partial}_+a^+ +{\partial}_i a^i)+2ig{\partial}_iA^+\cdot a_i = j^+_\rmi{ax} \ .\la{eqq3} \end{eqnarray} We shall restore the tilde in the notation for the $A^+$ gauge when quantities in the $A^i$ gauge simultaneously start entering, after Eq.~\nr{tutudag}. One is interested in solving these equations for the fluctuation field $a^\mu=(a^+,a^-=0, a^i)$ by integrating over the region depicted in Fig.~\ref{burst}, starting from vanishing values at $x^-=-\infty$ and then integrating in the direction of $x^-$. The main effect is what happens when crossing the nucleus, in the range $0<x^-<\epsilon\to0$. Note that this $x^-$ integration is in exact analogy with the integration over $u$ at large $r$ when computing the ED memory in $(u,r,\theta_A)$ coordinates. Let us first check the current conservation condition \nr{13} explicitly. First, $a_\nu J_A^\nu=0$ since $J_A$ has only the $+$ component and $a_+=-a^-=0$. Contracting the current \nr{curr} with $D_\nu=({\partial}_+,D_-,{\partial}_i)$ cancels the axionic terms (as it should, according to \nr{axcurrcons}), but the condition \begin{equation} D_-j_p^-=({\partial}_-+igA^+)j_p^-=({\partial}_-+igA^+)\delta(x^+)\rho({\bf x})=0 \la{jpcons} \end{equation} remains.
It is satisfied whenever $A^+=0$, but on the nuclear sheet at $0<x^-<\epsilon$ we are back to Eq.~\nr{D-U}: the collision with the nuclear sheet rotates the color of the probe by multiplying $j_p^-$ by the conjugate of the matrix $U^\dagger$ in \nr{U}. We thus have to write the probe current in the form \begin{equation} j_p^-=\delta(x^+)[\theta(x^-)U^\dagger(x^-,{\bf x})\rho_p({\bf x})+\theta(-x^-)\rho_p({\bf x})] \ . \end{equation} Returning to the fluctuation equations \nr{eqq1}-\nr{eqq3}, one first sees that the $\nu=-$ equation \nr{eqq1} can be integrated to \begin{equation} D\cdot a ={\partial}_+a^+ +{\partial}_i a^i={1\over{\partial}_+}j_p^-= \theta(x^+)[\theta(x^-)U^\dagger(x^-,{\bf x})\rho_p({\bf x})+\theta(-x^-)\rho_p({\bf x})] \ , \la{Daval} \end{equation} using \begin{equation} {1\over{\partial}_+}f(x^+)=\int_0^{x^+} dy^+\,f(y^+) \ . \la{invd+} \end{equation} The lower limit is at $x^+=0$ since that is when the collision takes place. In view of \nr{jpcons}, $D\cdot a$ is also covariantly conserved: \begin{equation} D_-(D\cdot a)=0\ , \la{Dacons} \end{equation} as is expected of a ``time'' $x^-$ independent constraint. Before the collision, at $x^-<0$, we have $A^+=0$ and all the fields are simple to solve. First, \begin{equation} (-2{\partial}_+{\partial}_-+{\partial}_T^2)a_i-\theta(x^+)\theta(-x^-){\partial}_i\rho_p({\bf x})=0 \la{aieq1} \end{equation} so that \begin{equation} a_i=\theta(x^+)\theta(-x^-){{\partial}_i\over {\partial}_T^2}\rho_p({\bf x})= \theta(x^+)\theta(-x^-)\int{d^2y\over2\pi}\,{x_i-y_i\over|{\bf x}-{\bf y}|^2}\rho_p({\bf y}) \ , \ a^+=0 \ , \ D\cdot a = {\partial}_i a^i \ . \la{initvals} \end{equation} Actually, for this solution ${\partial}_+{\partial}_- a_i\sim \delta(x^+)\delta(x^-)$, so that it only satisfies \nr{aieq1} away from the collision point $x^+=x^-=0$. The most general solution of \nr{Dacons} would contain $U^\dagger$ multiplied by a matrix function $M(x^+,{\bf x})$, independent of $x^-$ \cite{Gelis:2008rw}.
Writing $\square$ explicitly and dividing the last two equations, \nr{eqq2} and \nr{eqq3}, by $-2{\partial}_+$, they become \begin{eqnarray} &&\nu=i\quad {\partial}_- a_i+igA^+a_i=-\fra1{2{\partial}_+}\left(-{\partial}_T^2 a_i+{\partial}_i (D\cdot a) +j^i_\rmi{ax}\right) \la{eqq2p} \\ &&\nu=+\quad {\partial}_- a^++igA^+a^+=-\fra1{2{\partial}_+}\left(-{\partial}_T^2 a^+ -2ig {\partial}_iA^+\cdot a_i-D_-(D\cdot a) +j^+_\rmi{ax}\right) \ .\la{eqq3p} \end{eqnarray} We are particularly interested in integrating these across the nuclear sheet, $0<x^-<\epsilon$. Inserting what we learnt of $D\cdot a$, the equations in this range, and for $x^+>0$, are, in the $A^+$ gauge, \begin{eqnarray} \nu=i\qquad D_-a_i={\partial}_- a_i+igA^+a_i &=&\fra1{2{\partial}_+}\left({\partial}_T^2 a_i-{\partial}_i (U^\dagger \rho_p) +\sqrt2 B_i{\partial}_+\chi_0\right) \la{eqq2q}\\ \nu=+\quad D_- a^+= {\partial}_- a^++igA^+a^+&=&\fra1{2{\partial}_+}\left({\partial}_T^2 a^++2ig {\partial}_iA^+ \cdot a_i-\sqrt2 B_i{\partial}_i\chi_0\right) \ .\la{eqq3q} \end{eqnarray} As a check of the consistency of the equations \nr{eqq2q} and \nr{eqq3q} one may verify that their solutions indeed satisfy \begin{equation} D_- D\cdot a = D_-{\partial}_+ a^+ + D_- {\partial}_i a^i = 0 \ . \end{equation} The equations \nr{eqq2q} and \nr{eqq3q} are first-order inhomogeneous matrix equations which are solved by first solving the homogeneous equation and then adding a particular solution. If $M,F$ are vectors and $A$ a matrix, the equation is of the type \begin{equation} {\partial}_xM(x)+A(x)M(x)=F(x) \ . \end{equation} The homogeneous equation ${\partial}_xM(x)+A(x)M(x)=0$ is solved by \begin{equation} M_0(x)=P\exp\left[-\int_0^x dy\,A(y)\right]M_0(0)\equiv U^\dagger(x)M_0(0) \end{equation} and the general solution is ($C$ is a constant fixed by the initial condition) \begin{equation} M_a(x)=C\,U^\dagger_{ab}(x)M_b(0)+U^\dagger_{ab}(x)\int_0^x dy\,U_{bc}(y)F_c(y) \ .\la{gensol} \end{equation} Consider now the equation \nr{eqq2q} for $a_i$.
In it $A^+$ and $B_i$, as a spatial derivative of $A^+$, contain a $\delta(x^-)$ singularity, regulated by $\epsilon$. We expect that these singular terms dominate over the two transverse spatial derivative terms on the RHS. We shall therefore neglect these regular transverse terms (as was done in \cite{Gelis:2005pt} in an analogous computation). Note that their sum ${\partial}_T^2 a_i-{\partial}_i (U^\dagger \rho_p)$ vanishes if $a_i=\fra1{{\partial}_T^2}{\partial}_i(U^\dagger\rho_p)$. The equation then basically becomes an equation for ${\partial}_+a_i$, but the RHS also depends on ${\partial}_+\chi_0$. Dividing out ${\partial}_+$ we have to use \nr{invplus} for the inverse. The equation for $a_i$ then becomes \begin{equation} {\partial}_- a_i+ig\tilde A^+a_i=\fra1{\sqrt2}\tilde B_i \,\bar\chi_0(mx^+) \quad , \quad \sqrt2 \tilde B_i= -\epsilon_{ij}{\partial}_j \tilde A^+ \ . \la{aieq} \end{equation} To solve this, we first need the homogeneous solution for $a_i$ with the initial condition \nr{initvals}: \begin{equation} a_i^{(0)}(x^-,{\bf x})= U^\dagger(x^-,{\bf x}) a_i(0,{\bf x})\ , \ a_i(0,{\bf x})=\fra{1}{{\partial}_T^2}{\partial}_i\rho_p({\bf x}) \ . \la{aihom} \end{equation} Then \nr{gensol} gives the transverse fluctuation field in $A^+$ gauge: \begin{eqnarray} a_i(x^+,x^-,{\bf x}) & = & U^\dagger(x^-,{\bf x})\,a_i(0,{\bf x}) \nonumber \\ & & +\,\,U^\dagger(x^-,{\bf x}) \int_0^{x^-} dy^- U(y^-,{\bf x})\fra1{\sqrt2}\tilde B_i(y^-,{\bf x}) \,\bar\chi_0(mx^+,y^-,{\bf x}) \ .\la{aisol1} \end{eqnarray} Here the upper limit $x^-$ is within the range $0<x^-<\epsilon$. As the final step, we want this solution at the exit from the nuclear sheet, at $\epsilon\to0$. That the integral does not vanish in this limit follows from the fact that there effectively is a $\delta(x^-)$ singularity in $B_i$ on the nuclear sheet: a large background field $A_i\sim\theta(x^-)$ is created and $B_i$ is a derivative thereof. 
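The solution \nr{aisol1} is an instance of the general formula \nr{gensol}. As a sanity check of that structure, the sketch below (a toy non-commuting $2\times2$ matrix $A(x)$ and source $F(x)$, arbitrary stand-ins and not the actual CGC fields) builds $U^\dagger(x)=P\exp[-\int_0^x A]$ as an ordered product of short slices and compares the result with a brute-force integration of ${\partial}_xM+A(x)M=F(x)$:

```python
import numpy as np

# Toy non-commuting matrix A(x) and source F(x): arbitrary stand-ins,
# NOT the CGC fields of the text.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Amat = lambda x: 1j * (np.cos(3 * x) * sx + x * sz)
Fvec = lambda x: np.array([np.sin(x), np.exp(-x)], dtype=complex)
M0 = np.array([1.0, 0.5], dtype=complex)

def mexp(X):
    """Matrix exponential by truncated series; adequate since ||X|| << 1 here."""
    X2 = X @ X
    return np.eye(2) + X + X2 / 2 + X2 @ X / 6

def gensol(x, n=4000):
    """M(x) = Udag(x) M(0) + Udag(x) * integral_0^x dy U(y) F(y),
    the analogue of Eq. (gensol), with Udag(x) = P exp[-int_0^x A]
    built as an ordered product of short slices."""
    h = x / n
    Udag = np.eye(2, dtype=complex)   # later slices multiply on the left
    U = np.eye(2, dtype=complex)      # U(y) = inverse of Udag(y), kept alongside
    integral = np.zeros(2, dtype=complex)
    for k in range(n):
        y = (k + 0.5) * h             # midpoint evaluation within each slice
        half = mexp(Amat(y) * h / 2)
        integral += (U @ half) @ Fvec(y) * h   # U evaluated mid-slice
        U = U @ half @ half
        Udag = mexp(-Amat(y) * h) @ Udag
    return Udag @ (M0 + integral)

def direct(x, n=4000):
    """Brute-force RK4 integration of dM/dx = -A(x) M + F(x) for comparison."""
    h, M = x / n, M0.copy()
    f = lambda y, m: -Amat(y) @ m + Fvec(y)
    for k in range(n):
        y = k * h
        k1 = f(y, M); k2 = f(y + h / 2, M + h / 2 * k1)
        k3 = f(y + h / 2, M + h / 2 * k2); k4 = f(y + h, M + h * k3)
        M = M + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return M

assert np.allclose(gensol(1.0), direct(1.0), atol=1e-5)
```

The ordered product converges to the path-ordered exponential as the slice width goes to zero; for a commuting $A$ it reduces to the ordinary exponential.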
In the second term, according to \nr{Ftrans}, the field derivatives are related by ${\partial}_-A_i=U{\partial}_i\tilde A^+ U^\dagger$ or, in terms of color vector components (see \nr{uadj}), by \begin{equation} {\partial}_- A^i_a=\fra1{N_c}{\rm Tr\,}(T_a\,U\, T_b\, U^\dagger){\partial}_i \tilde A^+_b =U_{ab}\,{\partial}_i\tilde A^+_b\ . \la{tutudag} \end{equation} The axion is effectively constant in the $y^-$ integration so that in the second term we can write (tildes are now restored) \begin{equation} \int_0^{\epsilon} dy^- U(y^-,{\bf x})\,{\partial}_j \tilde A^+(y^-,{\bf x})\bar\chi_0\approx\int_0^{\epsilon\to0}dy^-{\partial}_- A_j(y^-,{\bf x})\bar\chi_0 \approx A_j(\epsilon,{\bf x})\bar\chi_0(x^+,0,{\bf x}) \ . \la{eAchi} \end{equation} Note that we are automatically led to the adjoint color vector component \begin{equation} A_{jb}=\fra1{N_c}{\rm Tr\,} T_b A_j \end{equation} of the background field $A_j=\fra{i}{g}U{\partial}_jU^\dagger$ in the $A_i$ gauge. Thus, the transverse axion-induced fluctuation field, in the $A^+$ gauge, at the exit from the nucleus is \begin{equation} \tilde a_{ia}(x^+,\epsilon,{\bf x})=U^\dagger_{ab}(\epsilon,{\bf x}) \Big[-\fra{1}{2} \epsilon_{ij}A_{jb}(\epsilon,{\bf x})\bar\chi_0(mx^+,0,{\bf x})+\fra1{\bf {\partial}^2}{\partial}_i\rho_{pb}({\bf x})\Big] \ . \la{aisol} \end{equation} We have written down the color components explicitly to emphasize the fact that one should take the color component $b$ of the vector $A_j$, not the matrix. In the second term the color index $b$ comes from the color density $\rho_{pb}$ of the incident probe. This is rotated by the matrix $U^\dagger$ while crossing the nucleus. In the axionic first term the axion is a color singlet and the color index $b$ is that of a gluonic transverse field $\sim A_{ib}$ excited from the background. Its color is further rotated by $U^\dagger$ while traversing the sheet.
When gauge transforming the background plus fluctuation system from the $A^+$ to the $A_i$ gauge some correction terms arise, relative to transforming only the background field \cite{Ayala:1995kg,Gelis:2008rw,Jeon:2013zga}. These terms are computed in Appendix~\ref{trafoLC}, but they can be neglected in the thin sheet limit. The result in the $A_i$ gauge is thus simple to obtain: just left multiply the $A^+$ gauge result by $U$. This cancels the matrix $U^\dagger$ on the right-hand side and \begin{equation} a_{ia}(x^+,\epsilon,{\bf x})= -\fra{1}{2} \epsilon_{ij}A_{ja}(\epsilon,{\bf x})\bar\chi_0(mx^+,0,{\bf x}) +\fra1{\bf {\partial}^2}{\partial}_i\rho_{pa}({\bf x}) \ . \la{aisolAi} \end{equation} This result implies that the axion has induced an $x^+$ dependence \begin{equation} {\partial}_+ a_{ia}=-\fra12\epsilon_{ij}{\partial}_+(A_{ja}\bar\chi_0)= i\fra{m}{\sqrt2}\chi_0(mx^+)(-\fra12\epsilon_{ij}A_{ja}) \la{xpdep} \end{equation} to the transverse field. There is also an interesting property of probe-nuclear sheet interactions contained in the second term on the RHS of \nr{aisolAi}: the non-axionic transverse fluctuation is not at all affected by the sheet \cite{Kajantie:2019nse}. The last term is simply the vacuum solution \nr{initvals} before the collision. The $\tilde a^+$ fluctuation (actually one only needs its derivative ${\partial}_+\tilde a^+$) should now be solved from the equation \nr{eqq3q} \begin{equation} {\partial}_- \tilde a^++ig\tilde A^+\tilde a^+=\fra1{{\partial}_+}\left[+ig {\partial}_i\tilde A^+ \cdot \tilde a_i-\fra{1}{\sqrt2} \tilde B_i{\partial}_i\chi_0\right] \ , \la{ap1} \end{equation} where the non-singular term ${\partial}_T^2\tilde a^+$ has been neglected. The solution can be written down directly from the general formula \nr{gensol}, noting that since the initial condition is $a^+(0)=0$ (Eq.~\nr{initvals}), there is no homogeneous solution.
Inserting $\tilde a_i$ from \nr{aisol}, one first obtains a non-axionic contribution from the homogeneous term of $\tilde a_i$. Written for ${\partial}_+\tilde a^+$ this is \begin{equation} {\partial}_+\tilde a^+=U^\dagger(x^-)\int_0^{x^-} dy^- \left(U(y^-)ig{\partial}_i \tilde A^+\cdot U^\dagger(y^-) a_i(0) \right)=U^\dagger\,igA_ia_i(0)=-{\partial}_iU^\dagger a_i(0) \ , \end{equation} in agreement with \cite{Gelis:2005pt} (there $U^\dagger$ is defined as $U$). The relevant new axionic terms come from the inhomogeneous axionic term in \nr{aisol} and the last term in \nr{ap1}. The full result requires one more $y^-$ integral and is, with color indices, \begin{eqnarray} {\partial}_+ \tilde a^+_a & = & U^\dagger_{ab}\left[igA_{ibe} a_{ie}(0)- \fra12 \epsilon_{ij}ig\int_0^\epsilon dy^-{\partial}_-A_{ibe} A_{je}\bar\chi_0+\fra12\epsilon_{ij}A_{jb}{\partial}_i\chi_0 \right]\nonumber \\ & = & U^\dagger_{ab}\left[igA_{ibe} a_{ie}(0)-\fra12\epsilon_{ij}({\partial}_iA_{jb})\bar\chi_0 +\fra12\epsilon_{ij}A_{jb}{\partial}_i\chi_0 \right] \ , \la{apsol} \end{eqnarray} where the arguments $x^-=\epsilon,\,\,y^-,{\bf y}$ are omitted, $\chi_0=\chi_0(mx^+,0,{\bf x})$ is as given in \nr{chimt}, and $\bar\chi_0$ has the value at $x^+=0$ subtracted. Using symmetries, the $y^-$ integral is simply $\fra12 A_{ibe}A_{je}$, so that the whole middle term is $-\fra14 \epsilon_{ij}ig A_{ibe} A_{je}\bar\chi_0$ ($A_i$ appears both as a matrix and a vector here). However, one further has $igA_{ibe}A_{je}=ig[A_i,A_j]_b={\partial}_iA_{jb}-{\partial}_jA_{ib}$ since the background solution has $F_{ij}=0$. Thus the middle term reduces to $-\fra14 \epsilon_{ij}ig A_{ibe} A_{je}\bar\chi_0=-\fra12\epsilon_{ij}{\partial}_iA_{jb}\bar\chi_0$. With a different relative sign the axionic terms would combine to $\pm \fra12\epsilon_{ij}{\partial}_i(A_{jb}\bar\chi_0)$. Compare also with \nr{xpdep} for ${\partial}_+a_i$.
Note also how the sources of the two terms are different: the middle term comes from the interaction with the transverse fluctuation, the last term from the interaction of the $a^+$ fluctuation with the background field, see \nr{ap1}. In summary, at the exit from the crossing of the nuclear sheet, in the $A_i$ gauge, the total transverse field and the $x^+$ derivative of the longitudinal gluon field are given by \begin{equation} A_{ia}(\epsilon,{\bf x})+a_{ia}(x^+,\epsilon,{\bf x})=\Big(\delta_{ij}-\fra1{2}\epsilon_{ij}\bar\chi_0(mx^+,0,{\bf x})\Big) A_{ja}(\epsilon,{\bf x})+a_{ia}(0,{\bf x}) \ .\la{aisolsumm} \end{equation} \begin{equation} {\partial}_+a^+_b(x^+,\epsilon,{\bf x})=\fra12\epsilon_{ij}A_{jb}\,{\partial}_i\chi_0(mx^+,0,{\bf x})-\fra12\epsilon_{ij}({\partial}_iA_{jb})\bar\chi_0+igA_{ibe} a_{ie}(0)\ . \la{Elong} \end{equation} In the $A^+$ gauge the fluctuation fields are in \nr{aisol} and \nr{apsol}. The corresponding field tensors, in the $+,-,2,3$ basis, are, in the $A_i$ gauge, \begin{equation} F_{\mu\nu}(A+a)=\left( \begin{array}{ccc}0 & -{\partial}_+ a^+ & {\partial}_+a_i \\ {\partial}_+ a^+ &0 & {\partial}_-(A_i+a_i)+ D_ia^+ \\ -{\partial}_+a_i & {\rm antis} & D_i a_j-D_j a_i \end{array} \right) \ , \la{newffd} \end{equation} or in the $A^+$ gauge \begin{equation} F_{\mu\nu}(\tilde A+\tilde a)=\left( \begin{array}{ccc}0 & -{\partial}_+ \tilde a^+ & {\partial}_+\tilde a_i \\ {\partial}_+ \tilde a^+ &0 & {\partial}_i (\tilde A^++\tilde a^+)+ D_-\tilde a^i \\ -{\partial}_+\tilde a_i & {\rm antis} & {\partial}_i \tilde a_j-{\partial}_j \tilde a_i \end{array} \right) \ . \la{newffdAp} \end{equation} All the fields are evaluated at $x^-=\epsilon\to0$, just after crossing the thin nuclear sheet, $a_{ia}(0,{\bf x})= {\partial}_T^{-2}{\partial}_i\rho_{pa}({\bf x})$ (Eq.~\nr{aihom}), and $\bar\chi_0$ is given in \nr{invplus}. Axion induced effects are as follows.
The large transverse gauge field $A_i$ is corrected by a perpendicular vector, the $-\fra12\epsilon_{ij}A_j\bar\chi_0$ term in \nr{aisolsumm}. This implies that the length of the color vector $A_{ia}$ is only changed by a very small color-independent amount \begin{equation} A_{ia}A_{ia}\to (1+\fra14\bar\chi_0^2(mx^+,\epsilon,{\bf x}))A_{ia}A_{ia}\approx A_{ia}A_{ia}\ . \la{length} \end{equation} This correction may decouple in the limit $m\to0$. The $a^+$ fluctuation is corrected by a term with a very similar structure in \nr{Elong}. The corrections to the color electric and magnetic fields induced by the axion can be read from \nr{newffd},\nr{newffdAp} together with \nr{aminus}: \begin{equation} \sqrt2 E_i={\partial}_-A_i+{\partial}_+a_i+{\partial}_-a_i+D_ia^+,\quad B_i=-\epsilon_{ij}(E_j-\sqrt2{\partial}_+a_j), \end{equation} \begin{equation} \sqrt2 E_L={\partial}_+a^+,\quad B_L=-D_i a_j+D_j a_i. \end{equation} In particular, a nonzero $F\tilde F$ is induced: \begin{equation} \fra14F_{\mu\nu} \tilde F^{\mu\nu} = -\epsilon_{ij}{\partial}_+ \tilde a_i \,{\partial}_j\tilde A^+ ={\partial}_+\tilde a_i\,\sqrt2\tilde B_i = -\epsilon_{ij}{\partial}_+ a_i \,{\partial}_- A_j ={\partial}_+a_i\,\sqrt2 B_i \end{equation} in the two gauges. These are analogous to writing ${\bf E}\cdot{\bf B}={\partial}_t{\bf A}\cdot {\bf B}$ (3d vectors) in electrodynamics. One sees how the $x^+$ dependence of the transverse fluctuation leads to a nonzero $F\tilde F$. The term ${\partial}_+a^+$ in \nr{bdote} does not contribute since it is multiplied by $F_{23}$, which is also of first order. Using \nr{xpdep} we can further write \begin{equation} \fra14 F_{\mu\nu}^a \tilde F^{\mu\nu}_a = {\partial}_+ a_{ia}\cdot\sqrt2 B_{ia} =-\fra12i\,m\chi_0\,\epsilon_{ij}A_{ja}B_{ia} =\fra12 i\,m\chi_0(mx^+)A_{ia}E_{ia} \ .\la{eqq5} \end{equation} Remember that here $A_i,E_i,B_i$ are independent of $x^+$.
This is valid as it stands in $A_i$ gauge but going over to $A^+$ gauge, where there is no background $A_i$ field, one must transform $A_{ja}$ in \nr{eqq5} to $U^\dagger_{ab}A_{jb}=(U^\dagger A_jU)_a =-\fra{i}{g}(U^\dagger{\partial}_jU)_a$. This is in agreement with $\tilde A_{ia}=(U^\dagger A_iU)_a +\fra{i}{g}(U^\dagger{\partial}_iU)_a=0$. Many of the qualitatively important effects induced by the axion seem to come from the $x^+$ dependence of the fluctuations. The large background fields were independent of $x^+$. \section{Memory}\la{memo} In the set-up of Fig.~\ref{burst} the memory of YM radiation is the permanent effect this radiation burst has on some property of a test particle crossed by the burst. Without the axion the simplest type of memory \cite{Pate:2017vwa,ymmemo2,Jokela:2019apz,Campoleoni:2019ptc} is the transverse momentum change of the test particle, caused by the transverse electric field of the burst. We set out to study how the introduction of an axion-like particle would modify this pattern. We have now computed the color fields in the infinitesimally thin nuclear sheet approximation and the response of a test particle to these fields can, in principle, be computed from Wong's equations \cite{wong}. Wong's equations give the motion $x^\mu=x^\mu(\tau)$ of a particle of mass $M$ with an adjoint color vector $Q_a$ in a given background field. Defining first \begin{equation} p^\mu=Mu^\mu=M{dx^\mu\over d\tau} \end{equation} they are ($Q\cdot F\equiv Q_aF_a$) \begin{equation} {dp^\mu\over d\tau}=gQ\cdot F^{\mu\nu}{dx_\nu\over d\tau}\ , \ {dQ^a\over d\tau}=-gf_{abc}u^\mu A^b_\mu Q^c \ .\la{eom} \end{equation} Note that the equation for $p_\mu$ explicitly conserves the mass shell condition $p_\mu p^\mu=-2p^+p^-+p_i^2=-M^2$. The color equation expresses its covariant conservation: In matrix form ($u^\mu{\partial}_\mu={\partial}_\tau$) \begin{equation} \dot Q-igu^\mu A_\mu Q=u^\mu({\partial}_\mu-igA_\mu)Q =u^\mu D_\mu Q=0 \ . 
\end{equation} The $-,\,i,\,+$ components of the equations of motion are \begin{eqnarray} M{dp^-\over d\tau} & = &-gQ\cdot (F_{+-}p^-+F_{+i}\,p^i)=gQ\cdot (p^-{\partial}_+ a^+-p^i {\partial}_+ a^i) \ . \la{mshell1} \\ M{dp^i\over d\tau} & = & gQ\cdot \left(p^-F_{i-}+p^+F_{i+}+p^jF_{ij}\right) \nonumber \\ & = & gQ\cdot\left[-p^-\left({\partial}_-(A_i+a_i)+D_ia^+\right)-p^+{\partial}_+ a_i+p^j(D_ia_j-D_ja_i)\right] \la{mshell2} \\ & = & g\tilde Q\cdot\left[-p^-\left({\partial}_i(\tilde A^++\tilde a^+)+D_-\tilde a^i\right) -p^+{\partial}_+ \tilde a_i+p^j({\partial}_i\tilde a_j-{\partial}_j\tilde a_i)\right] \la{mshell3} \\ p^+ & = & {p_ip_i+M^2\over 2p^-} \ . \la{mshell} \end{eqnarray} We know the fields from the front of the nucleus at $x^-=0$ to its tail end at $x^-=\epsilon$, and we should compute the cumulative effect, integrated over the nuclear sheet, on a test particle starting at $x^-=0$ with some initial velocity $u^-(0)$, the fate of the red line in Fig.~\ref{burst}. All the fields are constructed on the basis of the large background transverse field $A_i(x^-,{\bf x})$ together with the axion $\chi_0(x^+,x^-,{\bf x})$. There is no reason to expect any strong variation as a function of $x^-$ in the axion wave function, so that one can as well set $x^-=0$ there. The transverse field $A_i(x^-,{\bf x})$ grows rapidly across the nuclear sheet, behaving as $\theta_\epsilon(x^-)$. There the integral over the burst, over the range $0<x^-<\epsilon$, will only produce something of the order of $\epsilon$. However, there is one term containing a singularity in the range of integration: the $x^-$ derivative ${\partial}_-(A_i+a_i)$ in $dp^i/d\tau$ goes as $\delta(x^-)$ and produces a finite result in the limit $\epsilon\to0$. This then feeds into the behavior of $p^+$. Similarly, the $A^+$ gauge equation \nr{mshell3} has a $\delta(x^-)$ singularity in $\tilde A^+$; this will be discussed in Appendix~\ref{aplusgauge}.
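As an aside, two conservation laws are built into Wong's equations \nr{eom}: the antisymmetry of $F^{\mu\nu}$ preserves the mass shell $p_\mu p^\mu=-M^2$, and the antisymmetry of $f_{abc}$ preserves $Q_aQ_a$. A minimal numerical sketch exhibits both; all field profiles below are toy assumptions (a constant chromo-electric field and a constant color-rotating potential, prescribed independently of each other), not the CGC fields of this paper:

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric: p.p = -M^2 on shell
EPS = np.zeros((3, 3, 3))               # SU(2) structure constants f_abc = eps_abc
for a, b, c, s in [(0,1,2,1), (1,2,0,1), (2,0,1,1),
                   (0,2,1,-1), (2,1,0,-1), (1,0,2,-1)]:
    EPS[a, b, c] = s

g, M = 1.0, 1.0
# Toy background, an assumption of this sketch: a constant chromo-electric
# field in color direction 3 and a constant potential A^1_0 that rotates the
# color charge.  F and A are prescribed independently here; only their
# antisymmetries matter for the conservation check.
F = np.zeros((3, 4, 4))
F[2, 3, 0], F[2, 0, 3] = 1.0, -1.0
A = np.zeros((3, 4)); A[0, 0] = 0.7

def rhs(state):
    p, Q = state[:4], state[4:]
    u = p / M                                       # u^mu = dx^mu/dtau
    u_low = ETA @ u                                 # u_mu
    dp = g * np.einsum('a,amn,n->m', Q, F, u_low)   # dp^mu/dtau = g Q.F^{mu nu} u_nu
    uA = np.einsum('m,bm->b', u, A)                 # u^mu A^b_mu
    dQ = -g * np.einsum('abc,b,c->a', EPS, uA, Q)   # dQ_a/dtau = -g f_abc (u.A^b) Q_c
    return np.concatenate([dp, dQ])

def evolve(state, h=1e-3, nsteps=2000):
    for _ in range(nsteps):                         # classical RK4 in tau
        k1 = rhs(state); k2 = rhs(state + h/2*k1)
        k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
        state = state + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return state

s = evolve(np.concatenate([[M, 0, 0, 0], [0, 0, 1.0]]))  # at rest, Q along color 3
p, Q = s[:4], s[4:]
assert abs(p @ ETA @ p + M**2) < 1e-8   # mass shell conserved (antisymmetry of F)
assert abs(Q @ Q - 1.0) < 1e-8          # |Q| conserved (antisymmetry of f_abc)
```

Both invariants hold for any antisymmetric $F$ and any $A$, which is why they survive the singular thin-sheet fields as well.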
We thus conclude that in the thin sheet limit we can concentrate on the ${\partial}_-(A_i+a_i)$ term in $dp^i/d\tau$; the rest will produce ${\cal O}(\epsilon)$ effects. However, this is a limit, and in serious modeling the ${\cal O}(\epsilon)$ effects should be quantitatively studied. There are also further ${\cal O}(\epsilon)$ effects, like the one coming from the careful gauge transformation between the $A^+$ and $A_i$ gauges when fluctuations are also included, studied in Appendix~\ref{trafoLC}. Quantitative conclusions are only possible by numerical means. Solving Wong's equations numerically has been extensively studied \cite{Moore:1997sn,Dumitru:2006pz,Li:2020uhl}. Consider then Eq.~\nr{mshell1} for $M\,dp^-/d\tau$. The RHS is, from the point of view of the axion, particularly interesting since it is entirely induced by the $x^+$ dependence of the axionic fluctuation. Its coefficients, given in \nr{xpdep} and \nr{Elong}, have a reasonably simple structure, but the equation is not obviously integrable. It is nevertheless non-singular and produces negligible ${\cal O}(\epsilon)$ effects. We thus have \begin{equation} {dp^-\over d\tau}=0\quad \Rightarrow\quad p^-=Mu^-=M{dx^-\over d\tau}={\rm constant} \quad\Rightarrow\quad x^-(\tau)={p^-\over M}\tau \ . \end{equation} Of course, it will be a very interesting problem to ultimately sort out how the now neglected coefficients in \nr{xpdep} and \nr{Elong} affect the constancy of $p^-$, but this requires good numerical control of the fields as well as better knowledge of the axion wave function. Assume then that the test particle is initially at rest, $p^-=M/\sqrt2$. The equations of motion conserve the mass shell condition, so that at all times during the motion across the sheet \begin{equation} \sqrt2 p^-=E-p_L =\sqrt{p_L^2+p_T^2+M^2}-p_L=M_Te^{-y} = M \ . \la{plpt} \end{equation} From this one can solve \begin{equation} p_L={p_T^2\over 2M}, \quad y=\log{\sqrt{p_T^2+M^2}\over M} \ .
\end{equation} so that the momentum of the test particle is (in $(E,p_L,p_2,p_3)$ coordinates) \begin{equation} p^\mu=\left({p_T^2\over 2M}+M,{p_T^2\over 2M},p_i\right) \ , \la{pmuT} \end{equation} where $p_ip_i=p_T^2$. Computing $p_i$ as a function of time then gives the fate of the red line in Fig.~\ref{burst}. Passage through the sheet develops some $p_T$ and, associated with this, some $p_L$. The latter is negligible in the non-relativistic limit, $p_T\ll M$. How this affects the U(1) memory analogy is discussed later, after Eq.~\nr{longmem}. Thus the primary quantity is the transverse motion; the rest follows from it. The equation for $x_i(\tau)$ is \begin{equation} \dot p_i=M\ddot x_i(\tau)= -g Q\cdot E_i(\tau,{\bf x}), \la{transeq} \end{equation} where the color electric field is $E_i=F_{-i}/\sqrt2$ as given by \nr{newffdAp} or \nr{newffd}. Solving this for $x_i(\tau)$, one gets $x_L(\tau)$ by integrating $\dot x_L=\fra12 \dot x_i\dot x_i$ and finally (from $E=p_L+M$) $x^0(\tau)=\tau + x_L(\tau)$. To do the first integral over $\tau$ or $x^-$ it is simplest to use the $A_i$ gauge, since then $\sqrt2 E_i={\partial}_- A_i$ and one integral can be carried out immediately. Including just the $x^-$ derivative term in \nr{mshell2}, the $p^i$ equation is simply \begin{equation} {\partial}_- p_i(x^-)=-gQ_aF^{i+}_a(A+a)=-gQ_a{\partial}_-(A_a^i+a_a^i)\ . \la{derpi} \end{equation} In the $A_i$ gauge $D_-Q_a={\partial}_-Q_a=0$ and the color does not rotate. This is a key property of the gauge choice, since then we can immediately integrate over $x^-$. Choosing the initial value $p_i(0)=0$ and taking the transverse fluctuation from \nr{aisolsumm} (the weak probe initial field $a_{ia}(0,{\bf x})$ is inessential and can be neglected), the final result for the transverse kick is\footnote{Of course, one can as well use the $A^+$ gauge, {\emph{i.e.}}, start from \nr{mshell3}. This is done in Appendix~\ref{aplusgauge}.
For the consistency of the approach it is important that a gauge invariant answer is obtained.} \begin{equation} p_i(x^+,{\bf x})=-gQ_a\left(A_a^i+a_a^i\right)=-g\left[\delta_{ij}-\fra12 \epsilon_{ij}\bar\chi_0(mx^+,0,{\bf x})\right]Q_aA_{ja}({\bf x}) \ . \la{p} \end{equation} Thus, in analogy with \cite{Yoshida:2017fao}, there is a new parity breaking mode. Both $A_{ia}$ and $p_i$ have the same parity ($P-$) and since $\chi_0$ is pseudoscalar, the $\epsilon_{ij}$-term has opposite parity. For given colors \nr{p} is a definite prediction for the transverse kick on an event-by-event basis, for one element of the nuclear color densities, given concretely in \nr{kickmag}. Note that $A_{ia}$ as an adjoint color vector is real. Colors and momentum dependencies remain unspecified, though, and in this sense this result may be mathematically correct but is unphysical. Geometrically, the axion dependent correction in \nr{p} is a small perpendicular addition to the 2d vector $A_i$. Thus to first order, as already discussed in Eq.\nr{length}, the length of the vector $A_i$ is unchanged. The result \nr{p} should now be (complex) squared and averaged over an ensemble of color densities. For a Gaussian ensemble, see Eq.~\nr{ensemble}. In this heavy ion collision analogue model, this averaging is concretely carried out in Appendix~\ref{aplusgauge}. 
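The geometric statement above, that the axionic term in \nr{p} is a small perpendicular addition to $A_i$ whose length therefore changes only at second order in $\bar\chi_0$ (cf.~\nr{length}), can be checked in a few lines; the vector and the value of $\bar\chi_0$ below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=2)          # stand-in for the 2d transverse vector A_{ja}
chi = 0.05                      # stand-in for the small pseudoscalar \bar\chi_0
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # epsilon_{ij}

kick = (np.eye(2) - 0.5 * chi * eps) @ A    # bracket of Eq. (p), up to -g Q_a
# (eps A) is perpendicular to A and has the same length, so
# |kick|^2 = (1 + chi^2/4) |A|^2: no first-order change, as in Eq. (length).
assert np.isclose((eps @ A) @ A, 0.0)                        # perpendicularity
assert np.isclose(kick @ kick, (1 + chi**2 / 4) * (A @ A))   # length at O(chi^2)
```

The identity holds for any 2d vector, which is why the first-order axionic correction drops out of the squared kick before any ensemble averaging.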
One has to evaluate expectation values of the type (one takes different points $x=(x^+,{\bf x}),\,y=(y^+,{\bf y})$ since there will be a logarithmic divergence when ${\bf y}\to{\bf x}$) \begin{eqnarray} &&\langle p_i(x^+,{\bf x})p_i(y^+,{\bf y})\rangle \la{pixpiy} \\ &&\hspace{-8mm}=g^2\left\langle\Bigl[\delta_{jk}-\fra12\epsilon_{jk}\left(\bar\chi_0(mx^+,0,{\bf x})-\bar\chi_0(my^+,0,{\bf y})\right)+\fra14\delta_{jk}\bar\chi_0(x)\bar\chi_0(y)\Bigr] Q_aQ_b A_{ja}({\bf x})A_{kb}({\bf y})\right\rangle_\rho\la{pp}\nonumber \\ &&\underset{y\to x}{\longrightarrow} g^2 \Bigl[1+\fra14\bar\chi_0^2(mx^+,0,{\bf x})\Bigr]\langle{\rm Tr\,}\left[A_i({\bf x})A_i({\bf y})\right]\rangle_\rho \ , \end{eqnarray} where on the second line we have taken $y\to x$ in the axionic factor and used the fact that the field expectation values are diagonal in color, so that one can replace $Q_aQ_a\to C_A=N_c$, the adjoint Casimir, and write the color sum as a trace. Eq.~\nr{pp} shows that, unless there are some special effects in the $x^+$ direction, the first order axionic correction to the displacement memory vanishes. Note that this vanishing happens on the tree level, even before computing the expectation value. There will be corrections at next order. Physically, when traversing the thin nuclear sheet the color neutral axion had to pick up an adjoint color vector, and there was only one available, $A_i$. To relate the result to phenomenology, the leading memory term was evaluated in \cite{Jokela:2019apz}: \begin{equation} g^2\langle{\rm Tr\,}\left[A_i({\bf x})A_i({\bf y})\right]\rangle_\rho = \lim_{{\bf y}\to{\bf x}}\langle p_i({\bf x})p_i({\bf y}) \rangle={1\over\pi} Q_s^2 \log{Q_s\over\Lambda} \la{memfin} \end{equation} where $Q_s$ is a saturation scale (of the order of 2 GeV) and $\Lambda$ (of the order of $\Lambda_\rmi{QCD}\approx m_\pi$) regulates the divergence at ${\bf y}\to{\bf x}$.
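For orientation, with the representative numbers quoted above ($Q_s$ of the order of 2 GeV, $\Lambda$ of the order of $m_\pi\approx0.14$ GeV; both only order-of-magnitude assumptions), Eq.~\nr{memfin} gives an r.m.s. transverse kick of the order of the saturation scale itself:

```python
import numpy as np

Qs, Lam = 2.0, 0.14                      # GeV: representative values from the text
p2 = Qs**2 / np.pi * np.log(Qs / Lam)    # <p_i p_i> from Eq. (memfin), in GeV^2
prms = np.sqrt(p2)                       # r.m.s. transverse kick, in GeV
assert 3.0 < p2 < 4.0                    # ~3.4 GeV^2
assert 1.7 < prms < 2.0                  # ~1.8 GeV, comparable to Q_s
```

The logarithm makes the estimate only mildly sensitive to the precise choice of the infrared scale $\Lambda$.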
The derivation was carried out in the $A_i$ gauge and required a computation of the expectation value of a string of $U$ matrices. This has been carried out more accurately in \cite{Lappi:2017skr}. As shown in \cite{Kolbe:2020hem}, it is simplest to use the $A^+$ gauge; then one gets not only the expectation value but the entire distribution, see Appendix~\ref{aplusgauge}. The results coincide, which shows the consistency of the scheme. Further, this analogue model predicts that in addition to the transverse displacement memory there is a longitudinal memory due to \nr{plpt}: \begin{equation} \langle p_L\rangle_\rho = \left\langle {p_T^2\over 2M} \right\rangle_\rho \ , \la{longmem} \end{equation} where $M$ is the mass of the test particle. This effect is there already on an event-by-event basis, see \nr{pmuT}, and survives averaging over an ensemble of collisions. The appearance of the infrared sensitive quantity $M$ indicates that the longitudinal component of the memory is not as controllable as the transverse one. That the longitudinal memory does not appear in the usual discussions \cite{bieriED},\cite{Mao:2017wvx,Hamada:2018cjj,Hamada:2017atr},\cite{Campoleoni:2019ptc} is due to the fact that this analogue model is inherently relativistic, with equal $E_i,\, B_i$, while in the usual Lorentz force ${\bf E}+\fra{1}{c}{\bf v}\times c{\bf B}$ with ${\bf E}\sim c{\bf B}$ (3-vectors) the magnetic field term is negligible at non-relativistic velocities. It is this term which produces longitudinal motion. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.3\textwidth]{paraplot.png} \end{center} \caption{The velocities in Eq. \protect\nr{vels} plotted on the $(v_T,v_L)$-plane as functions of $p_T/M$. }\la{ellips} \end{figure} To analyse this from another angle, note that from \nr{pmuT} the velocities are \begin{equation} v_T={2Mp_T\over p_T^2+2M^2},\quad v_L={p_T^2\over p_T^2+2M^2},\quad v_L(1-v_L) =\fra12 v_T^2 \ .
\la{vels} \end{equation} These velocities form an ellipse, plotted on the $(v_T,\,v_L)$-plane in Fig.~\ref{ellips}. In the nonrelativistic limit $p_T\ll M$ we have $v_T=p_T/M,\,\,v_L=p_T^2/(2M^2)=\fra12 v_T^2$, so that the longitudinal effect is negligible. Increasing $p_T$, $v_T$ grows and reaches its maximum value $v_T=1/\sqrt2$ at $p_T=M\sqrt2,\,\,v_L=\fra12$. Increasing $p_T$ further, $v_T$ decreases and finally vanishes as $2M/p_T$ when $v_L\to1$. It may seem paradoxical that the transverse velocity decreases in the large momentum limit; this is of course due to the increasing energy. In discussions of U(1) memory \cite{bieriED} there are two types of memory: an ordinary memory caused by a radial electric field (sourced by a collection of charged particles which end up at future timelike infinity, not at null infinity) and a null memory caused by transverse electric and magnetic fields (sourced by massless charged particles going to null infinity). In this analogue model there are longitudinal fields sourced by the axion, but all sources go to null infinity. In this sense the model is the analogue of the null memory only, caused by transverse fields. The longitudinal memory in \nr{longmem} is a relativistic effect in the observation of null memory. Longitudinal fields are also induced by the axion, but we are so far unable to compute their effect. Longitudinal fields enter in the central region of nucleus-nucleus collisions \cite{Lappi:2006fp,Kharzeev:2015kna,Lappi:2017skr} or in phenomenological analyses of $\eta^\prime$ production \cite{jeonetapr}. \section{Conclusions}\label{sec:conclusions} This article was motivated by a cosmological study \cite{Yoshida:2017fao} in which the effect of a cosmological axion background on electromagnetic memory was computed. This motivated the question of how an axion-like particle, here simply called the axion, coupling to QCD matter would affect the Yang-Mills radiation memory in \cite{Pate:2017vwa,ymmemo2,Jokela:2019apz,Campoleoni:2019ptc}.
Operationally, answering this required sorting out how an axion could coexist with a Color Glass Condensate (CGC). Another physical way of expressing the problem is as follows. Assume there is a speculative axion-like degree of freedom in QCD matter, interacting with QCD as axions are expected to do. How does it affect the motion of a test quark passing through a large nucleus, in the infinite momentum frame? This problem has been studied in the setting of a weak probe, a proton or an off-shell photon, exciting the wave function of a single nucleus. Of course, there is no observational evidence of this type of dynamics. Subtle analyses of spin effects in deep inelastic scattering may, nevertheless, lead to some related effects \cite{Tarasov:2020cwl}. The effect of the axion on the CGC, in the form of axion-induced fluctuations of the gauge potentials in the CGC, has been computed in the thin sheet limit. These effects can be measured by the effect of the nuclear sheet on the motion of a test quark. There is a clear parity violating memory effect on the classical event-by-event level, but upon summing over an ensemble of events the axionic effect averages out. The treatment is inherently relativistic, and the memory, in addition to the usual transverse one \cite{bieriED}, also has a longitudinal component. In any case, the memory here is an analogue of the null memory only; all the charges reach null infinity. There are many finite width effects that could modify the averaging out of the axionic signal: with bigger width there is more space and time for interesting phenomena to take place. However, these are accessible only by numerical computations, equations for which have been written down. Proceeding further would be an entirely new project. Technically, the work presented here is largely based on the methods and approximations developed in \cite{Gelis:2005pt}, where gluon production in the process was also computed.
The same can be done here, too, at the amplitude level, but averaging over an ensemble would require computing very complicated correlators. The level of complication is set by a related computation of $\eta'$ production \cite{jeonetapr}. Little is known of the axion beyond the assumed free massive pseudoscalar field equation. It has one interesting effect: through its mass it induces an $x^+$ (``time'') dependence in the CGC. Inherently, due to time dilatation, the CGC is $x^+$ independent. Altogether, the whole appearance of the axion is of course speculative, but maybe this is a useful theoretical exercise anyway. \vspace{0.8cm} {\bf Acknowledgements} We thank F.~Gelis, I.~Kolbe, T.~Lappi, and R.~Paatelainen for discussions on Color Glass Condensate and L.~Bieri, D.~Garfinkle, C.~Heissenberg, D.~Nichols, and B.~Oblak for discussions on gravitational radiation memory. N.~J. and M.~S. have been supported in part by the Academy of Finland grant no.~1322307. M.~S. is also supported by the Finnish Cultural Foundation.
\section{Introduction} In the previous decades, many states were found experimentally in the mass range of heavy hadrons that cannot be easily explained within a quark model and a standard quarkonium picture \cite{Olsen:2014qna,Tanabashi:2018oca}. Understanding the structure of these states is one of the important aims of hadron physics. For most of these states, only their masses are experimentally known, and information about their quantum numbers and decay modes is very scarce \cite{Olsen:2017bmm, Lebed:2016hpi}. Among all the exotics, the content of the $X(3872)$ in particular has been the subject of various studies since its discovery. The mass of the $X(3872)$, which is very close to the $D^0D^{*0}$ threshold \cite{Choi:2003ue, Acosta:2003zx, Abazov:2004kp, Aubert:2004ns, Chatrchyan:2013cld, Aaij:2013zoa, Aaij:2020qga, Aaij:2020xjx}, and the isospin-breaking decay $X(3872) \rightarrow J/\psi \rho$ make it an ideal candidate for a $D\bar D^*$ hadronic molecule \cite{Chen:2016qju, Guo:2017jvc}. Another important measurement that might give insight into the structure of the $X(3872)$ is its radiative decay into $\psi(2S)\gamma$ and $J/\psi\gamma$. The ratio of the branching ratios of these decays is measured as \cite{Aubert_2009, Aaij_2014}: \begin{equation} \label{eq:d1} R_{\psi\gamma}=\frac{B_{r}(X\to\psi(2S)\gamma)}{B_{r}(X\to J/ \psi \gamma)}=2.46\pm 0.64\pm 0.29 \, . \end{equation} Since the phase space for the decay $X \rightarrow \psi(2S)\gamma$ is much smaller than that for the $X \rightarrow J/\psi \gamma$ decay, this implies that the amplitude for $X \rightarrow \psi(2S)\gamma$ is much larger. This is naturally expected in the quark model, where $X \rightarrow \psi(2S)\gamma$ is a $\Delta L=1$ transition. But, in the quark model, the predicted mass of the $1^{++}$ ($J^{PC}$) charm-anticharm state, called $\chi_{c1}(2P)$, is around $3.95$ GeV \cite{Godfrey:1985xj, Ebert:2011jc}, about 70 MeV higher than the observed state.
On the other hand, in Ref.~\cite{Guo:2014taa}, triangular $D D^{*} \bar{D}^{*}$ and simple $D\bar{D}^{*}$ loop contributions to the radiative amplitude were computed. It was concluded that the observed ratio allows the $X(3872)$ to be a hadronic molecule with a dominant $D\bar{D}^{*}$ component. In Ref.~\cite{Cincioglu:2019gzd}, the effects of short-range contributions to the radiative decays of the $X(3872)$ were analyzed. It was demonstrated that possible constructive or destructive interferences between the meson-loop and the short-distance contact term are important in determining whether the charmonium content of the $X(3872)$ is nontrivial. In \cite{Cincioglu:2016fkm}, an effective theory using heavy quark spin symmetry (HQSS) was presented in which the $X(3872)$ is described as a superposition of molecular components and a compact core, taken to be the 2P charmonium state throughout this study\footnote{ There are plenty of studies in the literature that considered the $X(3872)$ as a mixture \cite{ Dong:2009uf, Takizawa:2012hy, Chen:2013upa}. }. An important source of uncertainty in the model is the bare mass of the compact component. In \cite{Cincioglu:2016fkm}, the bare mass was taken to be the charmonium mass predicted by potential quark models. This work aims to draw attention to the effect of the bare charmonium mass on the predictions of the model, in particular on the pole trajectories and, as will be shown, on the molecular weight of the observed state. It is noted that the bare mass is UV regulator dependent and is a free parameter in the presented scheme. Although the bare mass is not physically observable, it is still theoretically relevant, because its value can be obtained from schemes that ignore the coupling of charmonium states to the mesons ($d=0$). A major problem is to set the UV regulator so as to match the quark model and the EFT approaches. Thus, all calculations have been performed with two different UV cutoffs, spanning a physically motivated range of values.
The expectation is that the cutoff dependence will be absorbed into the low energy constants (LECs), so that predictions for observables become at most mildly regulator dependent. This paper is organized as follows: In section II, the main points of the model proposed in \cite{Cincioglu:2016fkm} are presented, followed by our results and discussion. \section{Formalism} Due to the presence of heavy quarks in the $X(3872)$, it can be described within an HQSS framework. To describe the molecular component of the $X(3872)$, the interactions of the $D$ and $D^*$ mesons are needed. In HQSS, these mesons group into a doublet, which can be written as: \begin{equation} H^{(Q)}_a = \frac{1+\slashed{v}}{2} (P^{*(Q)} _{a \mu} \gamma ^{\mu} - P^{(Q)}_a \gamma _5) \, . \label{d2} \end{equation} Due to the very low momentum exchange between the mesons in the molecule, contact interactions are sufficient to describe the $D$ mesons' interactions in the $X(3872)$ \cite{Nieves:2012tt}. Other interactions, such as one-pion exchange and coupled channels, are of subleading order \cite{Nieves:2012tt}; their contribution can therefore be safely ignored. Since the interaction among the heavy hadrons forming a molecule is nonperturbative, the potential should be iterated by solving the Lippmann-Schwinger equation (LSE). With a contact interaction, the LSE shows ill-defined ultraviolet behavior and consequently requires regularization.
As a regulator function, a Gaussian function $f_{\Lambda}(\vec{p})$ is employed \begin{equation} \langle \vec{p}^{\prime} ; D^{(*)}\bar{D}^{(*)} \vert V_{\Lambda} \vert \vec{p};D^{(*)} \bar{D}^{(*)} \rangle=C_{0X} f_{\Lambda}(\vec{p}^{\prime}) f_{\Lambda}(\vec{p}) \end{equation} \begin{equation} \langle \vec{p}; D^{(*)}\bar{D}^{(*)} \vert V_{c\bar{c}; \Lambda} \vert \Psi_{c\bar{c}}(2P) \rangle = d f_{\Lambda}(\vec{p}) \end{equation} where $d$ and $C_{0X}$ are the LECs of the related interactions, and in this paper we take cutoff values $\Lambda = 0.5$--$1.0$ GeV. At leading order, the contact interaction of four heavy mesons can be described as below \cite{Nieves:2012tt} \begin{equation} \begin{array}{l l l l l} \mathcal{L}_{4H}&=&D_{0a} Tr\left[ \bar{H}^{(Q)a} H^{(Q)}_a \gamma_\mu \right] Tr\left[ H^{(\bar{Q})b} \bar{H}^{(\bar{Q})}_b \gamma^\mu \right] \\ & + & D_{0b} Tr\left[ \bar{H}^{(Q)a} H^{(Q)}_a \gamma_\mu \gamma_5\right] Tr\left[ H^{(\bar{Q})b} \bar{H}^{(\bar{Q})}_b \gamma^\mu \gamma_5\right] \\ &+& E_{0a} Tr\left[ \bar{H}^{(Q)a}\vec{\tau}^b_a H^{(Q)}_b \gamma_\mu \right] Tr\left[ H^{(\bar{Q})r} \vec{\tau}^s_r \bar{H}^{(\bar{Q})}_s \gamma^\mu \right] \\ &+&E_{0b} Tr\left[ \bar{H}^{(Q)a}\vec{\tau}^b_a H^{(Q)}_b \gamma_\mu \gamma_5\right] Tr\left[ H^{(\bar{Q})r} \vec{\tau}^s_r \bar{H}^{(\bar{Q})}_s \gamma^\mu \gamma_5\right] \, , \label{d3} \end{array} \end{equation} where $D_{0i} $ and $E_{0i} $ are LECs. To include the compact core component, it is necessary to identify the HQSS multiplet that can be used to describe it.
For this purpose, the $P$-wave quarkonium multiplet is used, which can be written as \cite{Casalbuoni:1992yd}: \begin{equation} J^{\mu} = \frac{ 1+\slashed{v} }{2} \left( \chi_2 ^{\mu \alpha} \gamma _{\alpha} +\frac{ i }{\sqrt{2}} \epsilon ^{ \mu \alpha \beta \gamma} \chi _{1 \gamma} v_{\alpha} \gamma _{\beta} + \frac{1}{\sqrt{3}} \chi _0 (\gamma ^{\mu}-v ^{\mu}) + h ^{\mu} \gamma _5\right) \frac{1-\slashed{v}}{2} ~. \label{d4} \end{equation} HQSS restricts the possible contact interactions between the $J^\mu$ multiplet and the $D$-meson multiplets. The only interaction term between two $D$ mesons and $J^\mu$ that contains no derivatives at leading order, and the only Lagrangian consistent with HQSS, is: \begin{equation} \mathcal{L}_{HHQ\bar{Q}}=\frac{d}{2}Tr[H^{a(\bar{Q})}\bar{J}_{\mu} H_a^{(Q)}\gamma^{\mu}]+\frac{d}{2}Tr[\bar{H}^{a(Q)}J_{\mu} \bar{H}_a^{(\bar{Q})}\gamma^{\mu}] \, , \label{d5} \end{equation} where the parameter $d$ is an unknown LEC that causes the molecular and compact components to mix (for more details, see e.g. \cite{Cincioglu:2016fkm, Hanhart:2014ssa, Colangelo:2003sa}). To study the weights of the molecular and compact components in the $X(3872)$ for bound states, a method put forward by Weinberg \cite{Weinberg:1962hj, Weinberg:1965zz} can be used. The method is crucial for examining the interaction couplings and the probabilistic interpretation of the components. For small binding energies (in $s$-wave), the approach is model independent. With the help of the sum rule \cite{Garcia-Recio:2015jsa, Weinberg:1965zz} \begin{equation} -1=\sum_{ij}g_i g_j \left(\delta_{ij}\left[\frac{\partial G_i ^{II}(E)}{\partial E}\right]_{E=E_R} + \left[G_i ^{II}(E)\frac{\partial V_{ij}(E)}{\partial E} G_j ^{II}(E)\right]_{E=E_R} \right) \, , \label{d6} \end{equation} a probabilistic interpretation of the compositeness condition can be made; moreover, Eq.~(\ref{d6}) is valid for both bound states and resonances \cite{Garcia-Recio:2015jsa}.
For resonance (bound) states, $G$ should be taken as $G^{II}$ ($G^I$). Each term in Eq.~(\ref{d6}) can be identified separately\footnote{ The imaginary parts of $\tilde{X}_i$ and $\tilde{Z}$ must cancel each other.}: % \begin{equation} X_i =Re\tilde{X}_i =Re \left(- g_i^2 \left[\frac{\partial G_i^{II}(E)}{\partial E}\right]_{E=E_R} \right) \label{d7} \end{equation} \begin{equation} Z =Re\tilde{Z} =Re \left(-\sum_{ij} \left[ g_i G_i^{II}(E) \frac{\partial V_{ij}(E)}{\partial E} G_j^{II}(E)g_j\right]_{E=E_R} \right) \label{d8} \end{equation} With the definitions in Eqs.~(\ref{d7}) and (\ref{d8}), we obtain the compositeness and the elementariness, respectively. $\tilde{X}_i$ quantifies the probability of finding a two-body component in the wave function of a hadron, and $\tilde{Z}$ is related to all other components and is thus understood as the elementariness. Hence $Z$ close to 1 signifies that the compact component dominates the bound state. On the other hand, in the case of a resonance, probabilistic interpretations are not entirely accurate, because $\tilde{X}_i$ acquires negative imaginary values. In Refs.~\cite{Guo:2015daa, Aceti:2014ala}, however, it was claimed that the absolute value of $\tilde{X}_i$ can be used as a measure of the weight of the $i$-th channel\footnote{In Ref.~\cite{Guo:2015daa} a probabilistic interpretation of the compositeness relation at the pole of a resonance, with only positive coefficients thanks to a suitable transformation of the S matrix, has been derived. The absolute value of $\tilde X_{i}$ gives the weight of finding a specific component in the wave function of a hadron, but this is only valid when $Re(E_R)>M_{i,th}$, with $E_R$ the resonance energy and $M_{i,th}$ the corresponding threshold of channel $i$.}. The $T$-matrix, which can be obtained as a solution of the LSE, develops poles on the complex energy plane.
Close to a pole, the elements of the $T$-matrix are approximately % \begin{equation} T_{ij} \approx \frac{g_i g_j}{E-E_R} \, , \label{d9} \end{equation} where $g_i$ is the coupling of the state to the $i$-th channel. For the specific $(1^{++})$ state considered here, the $T$-matrix, which gives the dynamics of the system, can be written as \cite{Cincioglu:2016fkm}: \begin{equation} T(E) = \frac{\Sigma_{c\bar{c}} }{1 - G^0_{c\bar{c}} \Sigma_{c\bar{c}}} \left( \begin{array}{cc} f^2_{\Lambda}(E)[\frac{1}{d^2 G_{QM}^2} - \frac{1-G^0 _{c\bar{c}} \Sigma_{c\bar{c}}}{G_{QM} \Sigma_{c\bar{c}}}] & f_{\Lambda}(E) \frac{1}{d G_{QM}} \\ f_{\Lambda}(E) \frac{1}{d G_{QM}} & 1 \end{array} \right) \, , \label{d10} \end{equation} where $\Sigma_{c\bar{c}}$, $ G^0_{c\bar{c}}$, $ f_{\Lambda}$\footnote{ All calculations have been performed with a UV cutoff $\Lambda=0.5$--$1$ GeV. } , and $ G_{QM}$ \footnote{ QM stands for non-relativistic quantum mechanics.} are the charmonium self-energy induced by the meson loops, the non-relativistic bare charmonium propagator, the Gaussian regulator, and the diagonal meson loop function, respectively. In Eq.~(\ref{d10}), the first channel is of molecular type, while the second is charmonium \cite{Baru:2010ww}. Besides, the poles of the transition matrix are given by the zeros of the inverse of the dressed propagator \cite{Cincioglu:2016fkm} \begin{equation} 1 - G^0 _{c\bar{c}}(E_R) \Sigma _{c\bar{c}}(E_R) = 0 \, , \label{d11} \end{equation} where $\Sigma_{c\bar c}$ is the quarkonium self-energy \begin{equation} \Sigma_{c\bar c}(E) = \left[V_{c\bar c}^{\rm QM}\right]^t G_{\rm QM}(E)\Gamma_{c\bar c}(E) \, , \label{d12} \end{equation} with the dressed vertex function $\Gamma_{c\bar c}$ given by \begin{equation} \Gamma_{c\bar c}(E) = \left( 1-V^{\rm QM} G_{\rm QM}(E)\right)^{-1}V_{c\bar c}^{\rm QM} \, .
\label{d13} \end{equation} where $V^{QM}$ and $V_{c\bar c}^{QM}$ are the molecular contact potential and the $\chi_{c_1}(2P)-D\bar{D}^{(*)}$ transition amplitudes, respectively (see Eq.~(30) and Eq.~(34) of Ref.~\cite{Cincioglu:2016fkm}). Finally, for an arbitrary $E$, the mesonic loop function is given by~\cite{Albaladejo:2013aka} \begin{align} G_{\rm QM}(E) & = \int \frac{\text{d}^3 \vec{q}}{(2\pi)^3} \frac{e^{-2\vec{q}^{\,2}/\Lambda^2}}{E-M_1-M_2 - \vec{q}^{\,\,2}/2\mu + i0^+} \nonumber\\ & = -\frac{\mu\Lambda}{(2\pi)^{3/2}} + \frac{\mu k}{\pi^{3/2}}\phi\left(\sqrt{2}k/\Lambda\right)-i \frac{\mu k}{2\pi}e^{-2k^2/\Lambda^2}~,\label{eq:gmat_gr} \end{align} with $\mu^{-1}=M_1^{-1}+M_2^{-1}$, $k^2= 2\mu (E-M_1-M_2)$ and $\phi(x)$ the Dawson integral, given by: \begin{equation} \phi(x) = e^{-x^2}\int_{0}^{x} e^{y^2} \text{d}y~. \end{equation} Poles on the complex energy plane represent the observable states. The mass and width of a state can be obtained from the pole position. Poles can be located on different Riemann sheets; indeed, $G_{\rm QM}(E)$ has two Riemann sheets. On the first Riemann sheet (FRS), $0\leqslant{\rm Arg}(E-M_1-M_2)< 2\pi$, there is a discontinuity $G_{\rm QM}^I(E+ i\epsilon)-G_{\rm QM}^I(E-i \epsilon) = 2i\,{\rm Im}G_{\rm QM}^I(E+i\epsilon)$ for $E> (M_1+M_2)$. Poles located on the FRS, on the real axis and below the threshold, are named bound states. On the second Riemann sheet (SRS), $2\pi\leqslant {\rm Arg}(E-M_1-M_2)< 4\pi$, one finds $G_{\rm QM}^{II}(E- i\epsilon) = G_{\rm QM}^I(E+i\epsilon)$ for real energies above threshold. Poles located below the real axis and above the threshold on the SRS are called resonances\footnote{ More detailed information about poles can be found in Refs.~\cite{Guo:2017jvc, Hanhart:2014ssa}. There is no restriction on the location of the poles on the second (unphysical) Riemann sheet.
Hermitian analyticity requires that if there is a pole at a complex value of $s$ (resonance), there must be another pole at its complex conjugate value, $s^*$ (anti-resonance). In this study, the properties of the conjugate pole are not given, since this pole corresponds to the same resonance.}. For a narrow resonance in the SRS, the pole with a negative imaginary part (the pole located in the lower half-plane) is closer to the physical Riemann sheet than the pole with a positive imaginary part; thus, it influences the observables more strongly in the vicinity of the resonance region \cite{Baru:2010ww}. Moreover, when the real part of the poles reaches the threshold with increasing $d$, the resonance and anti-resonance poles become equally important. Only such nearby poles significantly influence the resonance behavior in the experimentally accessible region and could be extracted from experimental data in a phenomenological study. However, when they have large imaginary parts, they lose their width interpretation. The position of the pole might give further insight into the structure of the state. It appears that if the bound state is mostly compact, there are two near-threshold poles, one on the first Riemann sheet and the other on the second; if it is a predominantly molecular state, there is a single near-threshold pole on the first Riemann sheet \cite{Baru:2010ww, Morgan:1992ge}. We look carefully at the trajectories of the 2P charmonium poles located in the near-threshold region of the scattering amplitudes. There are qualitative differences between the pole trajectories of resonances that couple to the related continuum channel as the bare 2P charmonium mass is changed. \subsection{General remarks} In the model of Ref.~\cite{Cincioglu:2016fkm}, two poles are expected in the $1^{++}$ sector. One is located in the FRS as a bound state.
This state is identified as the $X(3872)$ bound state in the FRS, and its mass is fixed at $3871.69$ MeV to determine the LEC $C_{0X}$, defined as \begin{equation} C_{0X}= C_{0A}+C_{0B}, \qquad C_{0\phi}= D_{0\phi}+3 E_{0\phi}, \qquad \text{for} \qquad \phi=a,\, b. \end{equation} The other pole is identified as a dressed $\chi _{c1}(2P)$. The dependence of the second pole position on the bare 2P charmonium mass results from the non-relativistic bare propagator: \begin{equation} G^0_{c\bar{c}}(E)= \frac{1}{E-\stackrel{\circ}{m}_{c\bar c}} \, , \label{d14} \end{equation} where $\stackrel{\circ}{m}_{c\bar c}$ is the mass of the bare 2P charmonium state. As mentioned above, it is dressed by the $D\bar{D}^{*}$ meson loops, which give rise to the physical mass of the charmonium state as $d$ increases. On the other hand, there are some uncertainties in the model of Ref.~\cite{Cincioglu:2016fkm}. One of the largest is the mass of the bare charmonium state. As can be seen in Table~\ref{tab:1}, recent constituent quark models give the mass of the $\chi_{c1}(2P)$ in quite a broad range\footnote{ Furthermore, if the compact component is identified as a tetraquark, its mass is completely unknown.}. In Ref.~\cite{Cincioglu:2016fkm}, the bare $\chi_{c1}(2P)$ mass is taken to be 3906 MeV from Ref.~\cite{Ebert:2011jc}, the prediction closest to the experimental mass of the $\chi_{c2}(2P)$ state. Another uncertainty is a sizable error in the predicted masses, which can reach up to $10\%$. Besides, the models in Table~\ref{tab:1} predict differences between the bare $\chi_{c1}(2P)$ mass and the two-meson threshold ranging from around 35 MeV to 82 MeV\footnote{To see the effects of a bare mass below the threshold (for $3865$ MeV see Tables \ref{tab:01} and \ref{tab:05}), we also examined hypothetical values below the threshold.
In contrast to the values above the threshold, as $C_{0X}$ decreases and $d$ increases, the pole moves towards the threshold, gaining a small width.}. Also, it might be interesting to see the impact of a bare charmonium mass $10$ MeV below the $3906$ MeV value on the properties of the $1^{++}$ hidden charm poles (charmonium content of the $X(3872)$, dressed charmonium $\chi_{c1}(2P)$, $DD^*- \chi_{c1}(2P)$ coupling, etc.). For $m_{c\bar{c}}^0=3906$ MeV, the bare state is just $\sim35$ MeV above the $DD^*$ threshold, and the dressed $\chi_{c1}(2P)$ pole moves below threshold in the SRS with a relatively small $X(3872)$ charmonium content. However, as larger bare $\chi_{c1}(2P)$ masses are taken, a larger charmonium content is needed to move the $\chi_{c1}(2P)$ state below the $DD^*$ threshold on the SRS. \begin{table} \centering \begin{tabular}{c|c|c} Ref. & $m_{\chi_{c1}(2P)}$ [MeV] & $\stackrel{\circ}{m}_{c\bar c}-m_{DD^*}$ [MeV] \\[2pt] \hline \hline \cite{Ebert:2011jc} &3906 & 35 \\[2pt] \hline \cite{Sungu:2018eej}& 3924 & 53 \\ [2pt] \hline \cite{Barnes:2005pb}& 3925 & 54 \\ [2pt] \hline \cite{Ebert:2002pp}& 3929 & 58 \\ [2pt] \hline \cite{Gui:2018rvv, Deng:2016stx}&3937& 66 \\ [2pt] \hline \cite{Segovia:2013wma, Ortega:2012rs}& 3947 & 76 \\[2pt] \hline \cite{Godfrey:1985xj}& 3953 & 82 \\ [2pt] \hline \hline \end{tabular} \caption{The $\chi_{c1}(2P)$ masses in the literature.
Constituent quark model~\cite{Segovia:2013wma, Ortega:2012rs}, Regge trajectory~\cite{Ebert:2011jc}, relativistic quark model~\cite{Ebert:2002pp}, non-relativistic potential model~\cite{Barnes:2005pb, Godfrey:1985xj}, non-relativistic quark model - $^3P_0$ model~\cite{Gui:2018rvv, Deng:2016stx}, and QCD sum rule~\cite{Sungu:2018eej} were used to obtain these mass values.} \label{tab:1} \end{table} \section{Results and Discussion} In Tables~\ref{tab:2}--\ref{tab:8} and Table I of Ref.~\cite{Cincioglu:2016fkm}\footnote{ Table I includes the properties of the $\chi_{c1}(2P)$ state, taking the bare charmonium mass as 3906 MeV. }, the properties of the poles found in the $1^{++}$ hidden charm sector are studied as a function of the LEC $d$, which controls the admixture of charmonium and $DD^*$ molecule. The location of the dressed $\chi_{c1}(2P)$ pole depends on the mixing parameter $d$ and on the bare charmonium mass. Within the scheme presented here and in Ref.~\cite{Cincioglu:2016fkm}, the bare charmonium mass is a free parameter, not an observable; as mentioned above, it gets dressed by the $D^{(*)}D^{(*)}$ meson loops, which give rise to the physical mass of the charmonium state. The coupling of the $\chi_{c1}(2P)$ state to the $D^{(*)}$ mesons causes the bare mass to be renormalized; in the effective theory, the difference between the bare and the physical charmonium masses is a finite renormalization. This shift depends on the UV regulator, since the bare mass itself depends on the renormalization scheme. To show the cutoff dependence of the properties of the poles found in the $1^{++}$ hidden charm sector, all calculations have been carried out with UV cutoff values $\Lambda=0.5\--1$ GeV. The results obtained for both cutoffs as a function of $d$ are qualitatively similar, though some quantitative differences appear, as can be seen in Tables~\ref{tab:2}--\ref{tab:8} and Tables~\ref{tab:9}--\ref{tab:14} of the appendix.
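The cutoff dependence just discussed enters through the Gaussian-regulated loop function $G_{\rm QM}(E)$ of Eq.~(\ref{eq:gmat_gr}). As a sanity check, below threshold ($E=M_1+M_2-B$ with $B>0$, so $k=i\kappa$) the closed form is real and, using $\phi(ix)=\frac{i\sqrt{\pi}}{2}e^{x^2}{\rm erf}(x)$, collapses to an ${\rm erfc}$ expression that can be compared against direct numerical integration. The standard-library Python sketch below uses illustrative placeholder numbers only ($\mu\simeq 0.967$ GeV for a $D\bar D^{*}$ pair and a few small binding energies), not the fitted parameters of the tables:

```python
import math

def g_qm_closed(B, mu, lam):
    """Eq. (gmat_gr) below threshold: k = i*kappa, kappa = sqrt(2*mu*B).

    The Dawson-function form reduces to the real expression
    G = -mu*lam/(2*pi)^(3/2) + mu*kappa/(2*pi) * exp(x^2) * erfc(x),
    with x = sqrt(2)*kappa/lam.
    """
    kappa = math.sqrt(2.0 * mu * B)
    x = math.sqrt(2.0) * kappa / lam
    return (-mu * lam / (2.0 * math.pi) ** 1.5
            + mu * kappa / (2.0 * math.pi) * math.exp(x * x) * math.erfc(x))

def g_qm_direct(B, mu, lam, n=20000):
    """Direct radial integral of Eq. (gmat_gr); real below threshold."""
    qmax = 8.0 * lam          # the Gaussian tail is negligible beyond this
    h = qmax / n
    s = 0.0
    for i in range(1, n):     # trapezoid rule; integrand vanishes at both ends
        q = i * h
        s += q * q * math.exp(-2.0 * q * q / (lam * lam)) / (-B - q * q / (2.0 * mu))
    return s * h / (2.0 * math.pi ** 2)
```

For both cutoffs used in the text, $\Lambda=0.5$ and $1$ GeV, the two evaluations agree to better than $10^{-4}$ relative accuracy; note that $G_{\rm QM}$ is negative below threshold, consistent with an attractive $C_{0X}<0$ generating the bound-state pole through $1-C_{0X}G_{\rm QM}=0$.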
Moreover, it is observed that the behavior of the physical $\chi_{c1}(2P)$ state changes around $\stackrel{\circ}{m}_{c\bar c} \approx 3908.5$ MeV. For the values $\stackrel{\circ}{m}_{c\bar c} = 3896, \, 3906$ and $3908.5$ MeV, the trajectories of the corresponding dressed $\chi_{c1}(2P)$ pole are depicted in Fig.~\ref{fig:1}. When $d$ is $0$, the pole in the SRS is on the real axis. With increasing $d$, the pole gradually moves away from the real axis, gaining width. As $d$ continues to increase, the pole moves below the threshold and, at some point, reaches the real axis again. The pole reaches the real axis with $\tilde X_{X(3872)} \sim 0.39$ for $\stackrel{\circ}{m}_{c\bar c} =3896$ MeV, $\tilde X_{X(3872)} \sim 0.43$ for $\stackrel{\circ}{m}_{c\bar c} = 3906$ MeV and $\tilde X_{X(3872)} \sim 0.47$ for $\stackrel{\circ}{m}_{c\bar c} = 3908.5$ MeV. As can be seen in Tables~\ref{tab:2}--\ref{tab:3}, the charmonium content $\vert \tilde{X}_{\chi_{c1}} \vert $ is about 35$\%$ for $m_{c\bar{c}}^0=3896$ MeV and about 83$\%$ for $m_{c\bar{c}}^0=3908.5$ MeV. When the conjugate pair coincides on the real axis, two poles appear in the SRS below threshold, located at $m_R-i0$. As $d$ increases, one pole moves along the real axis toward the threshold, while the second departs from the threshold and leaves the real axis, forming another conjugate pair. These newly formed poles are either far below the threshold or above the threshold but deep in the complex plane. This means that, in that region, the width interpretation is lost. Since they will not produce any observable effects, their behavior is not illustrated in Fig.~\ref{fig:1}, and their properties are not given in Tables~\ref{tab:2} and \ref{tab:3}. Additionally, bare charmonium masses smaller than $3908.5$ MeV lead to similar trajectories.
In the case of $\stackrel{\circ}{m}_{c\bar c} \gtrsim 3908.5$ MeV, the behavior of the pole trajectories, depicted in Fig.~\ref{fig:2}, is different from that in Fig.~\ref{fig:1}. With increasing $d$, the poles stay in the SRS, above or below the threshold, but do not reach the real axis until rather large values of $d$. For bare masses larger than $3908.5$ MeV and up to around $3930$ MeV, the pole trajectories cross the threshold. For those that cross the threshold, the molecular weight of the $X(3872)$ state decreases as the bare charmonium mass increases. Besides this, for values of $d$ with $ d> d^{\rm{crit}}$, there exists a double pole in the SRS. One of the poles approaches the threshold as $\Sigma_{c\bar c}^{\prime}$ decreases and moves along the real axis in the SRS; once it becomes quite close to the threshold, where the SRS and the FRS are connected, it might have visible effects in scattering observables, as the line shape will be determined by both the bound-state pole and this virtual state. The other moves away from the real axis, gaining width, but eventually reaches the real axis again far from the threshold. As $d$ increases, these pole trajectories (illustrated with crosses) lie either below the threshold or above it but far from the real axis. As mentioned above, since these poles will not have any observable consequences, their details are also not included in Tables~\ref{tab:4}--\ref{tab:8} in the appendix. In the $d\gg d^{\rm{crit}}$ limit, the $X(3872)$ appears to be a 2P charmonium state with molecular weight $X_{X(3872)} \sim 0$, the mirror in the FRS of the pole found in the SRS. Also, the proximity of the bare mass to the threshold is essential within the model.
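The molecular weights quoted above follow from the sum rule of Eqs.~(\ref{d6})--(\ref{d8}). A stripped-down, single-channel illustration makes the mechanics explicit: for a bound state generated by the energy-independent contact term alone (the $d=0$ limit), $\partial V/\partial E=0$ kills the second term of Eq.~(\ref{d6}), so the residue of $T=C_{0X}/(1-C_{0X}G_{\rm QM})$ extracted numerically as in Eq.~(\ref{d9}) must yield $X=-g^2\,\partial G_{\rm QM}/\partial E=1$, Weinberg's purely molecular case. This is a sketch: the reduced mass, cutoff, and 4 MeV binding energy below are illustrative placeholders, not the parameters of the full two-channel model of Eq.~(\ref{d10}):

```python
import math

MU, LAM = 0.967, 1.0      # illustrative reduced mass and UV cutoff (GeV)
B = 0.004                 # illustrative binding energy (GeV)

def g_loop(E):
    """Gaussian-regulated loop of Eq. (gmat_gr) for E < 0, measured from threshold."""
    kappa = math.sqrt(-2.0 * MU * E)
    x = math.sqrt(2.0) * kappa / LAM
    return (-MU * LAM / (2.0 * math.pi) ** 1.5
            + MU * kappa / (2.0 * math.pi) * math.exp(x * x) * math.erfc(x))

EB = -B
C0X = 1.0 / g_loop(EB)    # tune the contact term so that 1 - C0X*G(EB) = 0

def t_matrix(E):
    return C0X / (1.0 - C0X * g_loop(E))

eps = 1e-7
g2 = eps * t_matrix(EB + eps)                       # residue at the pole, Eq. (d9)
h = 1e-6
dG = (g_loop(EB + h) - g_loop(EB - h)) / (2.0 * h)  # dG/dE at the pole
X = -g2 * dG                                        # compositeness, Eq. (d7)
Z = 1.0 - X                                         # via the sum rule; Eq. (d8) gives 0 here
```

With these placeholder numbers $X\simeq 1$ and $Z\simeq 0$; in the full model it is the energy dependence induced by the exchange of the bare $\chi_{c1}(2P)$ that generates a nonzero elementariness.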
As mentioned in the introduction, the radiative decays of the $X(3872)$ were analyzed in Ref.~\cite{Cincioglu:2019gzd} within the effective theory of Ref.~\cite{Cincioglu:2016fkm} to constrain the charmonium content of the $X(3872)$. In the case of destructive interference between the meson loops and the counter-term, modeled by a charm quark loop, a strong restriction on the charmonium admixture was found. In this work, the contribution from the short-range interaction to the ratio $R_{\psi\gamma}$ depends on $\tilde{Z}_{X(3872)}$\footnote{In the presented study, it is defined as $\tilde{Z}_{X(3872)}=1-\tilde{X}_{X(3872)}$.}, which is the weight of finding the charmonium component $\chi_{c1}(2P)$ in the physical wave function of the $X(3872)$, and on the position of the $\chi_{c1}(2P)$ pole, as can be seen from Eq.~(4) of that reference. It was claimed that the behavior of the predictions for the ratio of radiative decays is different for $\tilde{Z}_{X(3872)}\lesssim 0.55$ and for $\tilde{Z}_{X(3872)}\gtrsim 0.55$\footnote{As can be seen from Fig.~2 of that reference, there is a bump around $\tilde{Z}_{X(3872)}\sim$ 0.55.}. Indeed, in the vicinity of $C_{0X}=0$ ($\tilde{Z}_{X(3872)}\sim 0.55$), where $C_{0X}$ controls the four-meson contact interaction, the dressed $\chi_{c1}(2P)$ pole moves below threshold, with its mass decreasing rapidly while remaining quite wide. As $\tilde{Z}_{X(3872)}$ increases, the mass and width of the charmonium state decrease, up to $\tilde{Z}_{X(3872)} \sim 0.57$ (the $\chi_{c1}(2P)$ pole reaches the real axis at that weight), while $C_{0X}$ increases and takes large positive values, which creates a strong repulsive force between the $D$ and $D^*$ mesons. Thus, the contribution of the molecular component to the $X(3872)$ is suppressed. However, this behavior of the radiative branching ratio is valid only for $\stackrel{\circ}{m}_{c\bar c}=3906$ MeV.
For larger (smaller) values of the bare charmonium mass, smaller (larger) $\chi_{c1}(2P)$ contents are required for the pole to reach the real axis. For instance, the $\chi_{c1}(2P)$ pole appears on the real axis below threshold when $\tilde{Z}_{X(3872)}\sim$ 0.61 for $\stackrel{\circ}{m}_{c\bar c}=3896$ MeV and when $\tilde{Z}_{X(3872)}\sim$ 0.39 for $\stackrel{\circ}{m}_{c\bar c}=3937$ MeV, as can be seen in Tables~\ref{tab:2}--\ref{tab:8} of the appendix. Due to these non-trivial effects, it is difficult to put a firm restriction on the charmonium admixture in the $X(3872)$. \begin{center} \begin{figure}[h] \includegraphics[width=1.1\textwidth]{f1.png} \caption{\small{ Pole trajectories of the $\chi_{c1}(2P)$, located in the SRS, for the bare charmonium masses $\stackrel{\circ}{m}_{c\bar c}=3896, \, 3906$ and 3908.5 MeV. The $D\bar{D}^{*}$ threshold is shown as a vertical black dashed line. The lines and the crosses are obtained with the help of Tables~\ref{tab:2}, \ref{tab:3} and Table I of Ref.~\cite{Cincioglu:2016fkm}. The crosses show the trajectories which come close to the threshold after colliding with the real axis, while the solid circles show the trajectories which move away from the threshold. Note that the properties of the poles that depart from the threshold are not given in Tables~\ref{tab:2}, \ref{tab:3}.} \label{fig:1}} \end{figure} \end{center} \begin{center} \begin{figure}[h] \includegraphics[width=1.1\linewidth]{f2.png} \caption{\small{ Pole trajectories of the $\chi_{c1}(2P)$, located in the SRS, for the bare charmonium masses $\stackrel{\circ}{m}_{c\bar c}=3910, \, 3925, \, 3937, \,3947$ and 3953 MeV. The $D\bar{D}^{*}$ threshold is shown as a vertical black dashed line. To present a general picture, poles located far above the real axis, which do not have any observable effects, are also illustrated, with crosses. The circles show the other pole trajectories, which approach the threshold from deep in the complex plane.
The lines and the circles are obtained with the help of the values in Tables~\ref{tab:4}--\ref{tab:8}. Note that the properties of the poles that depart from the threshold are not given in Tables~\ref{tab:4}--\ref{tab:8}.} \label{fig:2}} \end{figure} \end{center} \subsection*{Acknowledgement} This research has been supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under the grant no. F117090. \clearpage \section{Appendix} \label{appendix} In this appendix, the tables corresponding to the pole trajectories shown in Figs.~\ref{fig:1} and \ref{fig:2} are collected. The calculations have been carried out with a UV cutoff $\Lambda =0.5-1$ GeV. In the tables, $d^{crit}$ indicates the point at which $C_{0X}$ is zero; beyond $d^{crit}$, $C_{0X}$ becomes positive, which means that the interaction becomes repulsive. Details of the values in Fig.~\ref{fig:1} are given in Tables~\ref{tab:2} and \ref{tab:3}, and those of Fig.~\ref{fig:2} in Tables~\ref{tab:4}--\ref{tab:8}. Moreover, the properties of the $1^{++}$ hidden charm poles are compiled as a function of the mixing parameter $d$ when a UV cutoff is used to regularize the molecular interactions (see Tables~\ref{tab:9}--\ref{tab:14}). Note that the $X(3872)$ is assumed to be a bound state in the FRS; therefore, the pole position of the $X(3872)$ is fixed at 3871.69 MeV in the FRS. Finally, the $\chi_{c1}(2P)$ pole is located in the SRS.
Numerical results for a UV cutoff $\Lambda=1$ GeV: \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.01 & -0.792 & 0.891 &0.978 & (3865.0, 0.1) & 0.02-0.05i &0.002 &1.00 \\ [2pt] 0.05 & -0.862& 0.723 & 0.645 & (3866.1, 2.0) & 0.11-0.20i & 0.055& 1.05-0.01i \\[2pt] 0.10 & -1.084 & 0.503 &0.312 & (3868.7, 3.17) & 0.25-0.20i & 0.122 & 1.11+0.06i \\[2pt] 0.20 & -1.968& 0.288 & 0.102 & (3870.8, 1.38) & 0.21-0.07i & 0.071 & 1.03+0.06i \\[2pt] 0.40 & -5.508 & 0.150 & 0.028 &(3871.46, 0.39) & 0.11-0.03i & 0.022& 1.01+0.02i \\[2pt] 1.00 & -30.285 & 0.061 & 0.005 & (3871.65, 0.06)& 0.05-0.01i &0.004 & 1.00 \\[2pt] 3.00 & -266.251& 0.020 & 0.000 & (3871.69, 0.007) & 0.015-0.003i & 0.00 & 1.00 \\[2pt] 10.00 & -2950.37& 0.006 & 0.000 & (3871.69, 0.000) & 0.005-0.001i & 0.00 & 1.00 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For a bare charmonium mass $m^0_{c\bar{c}} = 3865$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$.
}} \label{tab:01} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 &1.00 & (3896.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -0.768 & 0.88 & 0.96 & (3896.7, 2.2) & 0.01-0.19i & 0.03 & 0.98+0.02i \\[2pt] 0.10 & -0.707 & 0.83 &0.86 & (3898.6, 9.6) & 0.00-0.36i & 0.10 & 0.93+0.07i \\[2pt] 0.20 & -0.464& 0.70 & 0.60 & (3900.2, 50.0) & 0.16+0.63i & 0.35 & 0.80+0.29i \\[2pt] 0.30 & -0.058 & 0.57 & 0.40 & (3821.0, 123.1) & 0.82+1.02i & $>$1 & 0.64+1.79i \\[2pt] 0.305 & -0.034 & 0.56 & 0.39 & (3797.6, 94.2) & 1.10+1.25i & $>$1 & 0.62+3.02i \\[2pt] 0.307 & -0.024 & 0.56 & 0.39 & (3784.8, 60.5) & 1.48+1.60i & $>$1 & 0.60+5.33i \\[2pt] 0.3075& -0.021& 0.56 & 0.39 &(3781.1, 43.9) & 1.79+1.88i & $>$1 & 0.59+7.59i \\[2pt] 0.3078& -0.019& 0.56 & 0.39 &(3778.8, 28.1) & 2.28+2.35i & $>$1 & 0.59+12.12i \\[2pt] 0.30798& -0.019& 0.56 & 0.39 &(3777.4, 7.9) & 4.35+4.39i & $>$1 & 0.59+43.38i \\[2pt] 0.309& -0.014& 0.56 & 0.39 &(3734.2, 0.0) & 0.00+2.25i & $>$1 & -4.84 \\[2pt] 0.31& -0.008& 0.56 & 0.38 &(3810.7, 0.0) & 1.73 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 4.44 \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.55 &0.38 & (3818.8, 0.0) & 1.45 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 3.45 \\[2pt] 0.35 & 0.206 & 0.52 & 0.33 &(3853.3, 0.0) & 0.66 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.57 \\[2pt] 0.40 & 0.510 & 0.47 & 0.27 &(3861.6, 0.0) & 0.47 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.33 \\[2pt] 0.50 & 1.241 & 0.40 & 0.19 &(3866.8, 0.0) & 0.33 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.18 \\[2pt] 1.00 & 7.328 & 0.21 & 0.06& (3870.8, 0.0)&0.14 &$\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.04 \\[2pt] 1.50 & 17.475 & 0.15 & 0.03 & 
(3871.3, 0.0)& 0.10 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$& 1.02 \\[2pt] 2.00 & 31.680 & 0.11 & 0.02 & (3871.5, 0.0) & 0.07 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.01 \\[2pt] 3.00 & 72.265 &0.07& 0.01 &(3871.6, 0.0) & 0.05 & 0.00 & 1.00 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3896 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.311708 $ fm$^{1/2}$). }} \label{tab:2} \end{table} \begin{table} \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 &1.00 & (3908.0, 0.0) & 0.00 & 0.00 & 1.00 \\ [2pt] 0.05 & -0.775 & 0.89 & 0.98 & (3909.6, 1.8) & 0.01+0.15i & 0.01 &0.99+0.01i \\[2pt] 0.10 & -0.735 & 0.87 & 0.93 & (3910.6, 7.6) & 0.03+0.30i & 0.06 & 0.97+0.05i \\[2pt] 0.20 & -0.574 & 0.79 & 0.77 & (3915.0, 35.8) & 0.14+0.54i & 0.21 & 0.89+0.18i \\[2pt] 0.30 & -0.306& 0.70 & 0.60 & (3910.4, 104.9) & 0.35+0.71i & 0.49 & 0.79+0.44i \\[2pt] 0.35 & -0.132 & 0.65 &0.53 &(3886.1, 172.1) & 0.55+0.80i & 0.83 & 0.73+0.78i \\[2pt] 0.38 & -0.014 & 0.63 &0.49 &(3827.9, 230.5) & 0.80+0.97 & $>$1 & 0.60+1.55i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.63 & 0.48&(3811.2, 236.5) &0.87+1.04i &$>$1 & 0.54+1.84i \\[2pt] 0.39 & 0.027 & 0.62 & 0.47 &(3756.2, 233.0) & 1.13+1.34i & $>$1 & 0.16+3.31i \\[2pt] 0.391 & 0.031 & 0.62 & 0.47 &(3739.8, 225.7) & 1.24+1.47i & $>$1 & -0.05+4.02i \\[2pt] 0.393& 0.039& 0.62 & 0.47 & (3672.0, 162.6) & 1.98+2.58i & $>$1 & -3.05+12.07i \\[2pt] 0.3932& 0.040 & 0.62 & 0.47 & (3652.2, 131.8) & 2.49+3.39i & $>$1 & -6.48+20.28i \\[2pt] 0.39334& 0.041& 0.62 & 0.47 & (3616.6, 49.3) & 6.50+9.19i & $>$1 & -54.59+148.92i \\[2pt] 
0.393345& 0.041& 0.62 & 0.47 & (3611.2, 27.1) & 10.84+14.09 & $>$1 & -104.57+383.37i \\[2pt] 0.393346& 0.041& 0.62 & 0.47 & (3609.5, 16.3) & 15.72+18.77 & $>$1 & -136.33+742.74i \\[2pt] 0.395& 0.048& 0.62 & 0.47 & (3741.6, 0.0) & 2.14 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 6.25 \\[2pt] 0.398& 0.061& 0.61 & 0.46 & (3774.3, 0.0) & 1.58 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 3.85 \\[2pt] 0.40 & 0.069& 0.61 & 0.46 &(3785.7, 0.0) & 1.43 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 3.31 \\[2pt] 1.00 & 4.572 & 0.31 & 0.12 & (3869.4, 0.0) & 0.22 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.09 \\[2pt] 2.00 & 20.654 & 0.16 & 0.03 &(3871.2, 0.0) & 0.11 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.02 \\[2pt] 3.00 & 47.458 & 0.11 & 0.02 &(3871.5, 0.0) & 0.07 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$ & 1.01 \\[2pt]\hline \hline \end{tabular} }} \caption{ \small{ For $m^0_{c\bar{c}} =$ 3908.5 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.383565$ fm$^{1/2}$).}} \label{tab:3} \end{table} \begin{table} \centering{ \scalebox{0.9}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 & 1.00& (3910.0, 0.0) & 0.00 & 0.00 &1.00 \\ [2pt] 0.05 & -0.776 & 0.89 & 0.98& (3910.6, 1.8) & 0.01+0.15i &0.01 &0.99+0.01i \\[2pt] 0.10 & -0.737 & 0.87 & 0.94 & (3912.1, 7.5) & 0.04+0.29i & 0.05 & 0.97+0.04i \\[2pt] 0.20 & -0.583 & 0.80 &0.79 & (3916.6, 34.8) & 0.14+0.53i & 0.20 & 0.89+0.17i \\[2pt] 0.30 & -0.325& 0.71 & 0.62 & (3913.7, 100.7) & 0.34+0.70i & 0.46 & 0.80+0.42i \\[2pt] 0.35 & -0.158 & 0.67 &0.55 &(3895.0, 164.3) & 0.51+0.78i &0.74 & 0.75+0.70i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.63 &0.49 &(3818.6, 249.8) & 
0.85+1.01i & $>$1 & 0.54+1.72i \\[2pt] 0.40 & 0.035& 0.63 & 0.48 &(3163.8, 0.0) & 0.86 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 2.22 \\[2pt] 0.402 & 0.044& 0.62 & 0.48 &(3514.7, 0.0)& 3.30& $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 15.71 \\[2pt] 0.40205 & 0.044& 0.62 & 0.48 &(3529.6, 0.0)&3.61& $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 18.40 \\[2pt] 0.4021 & 0.044& 0.62 & 0.48 &(3545.8, 0.0)&3.97 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 21.75 \\[2pt] 0.403 & 0.048& 0.62 & 0.48 &(3694.4, 0.0) & 2.82 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 0.52 \\[2pt] 0.403 & 0.050& 0.62 & 0.48 &(3714.3, 0.0) & 2.42 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 0.52 \\[2pt] 0.45 & 0.254 & 0.59 &0.42 & (3838.0, 0.0)& 0.78& $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.73 \\[2pt] 0.50 & 0.499 & 0.55 & 0.37 &(3852.3, 0.0)& 0.59 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.46\\[2pt] 1.00& 4.362 & 0.32 & 0.13 &(3869.2, 0.0)& 0.23 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$ & 1.10 \\[2pt] 2.00 & 19.815 & 0.17 & 0.36 &(3871.1, 0.0)& 0.11 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.03 \\[2pt] 3.00 & 45.569 & 0.11 & 0.02 &(3871.5, 0.0)& 0.07 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.01 \\[2pt] \hline \hline \end{tabular} }} \caption{ \small{For $m^0_{c\bar{c}} =$ 3910 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.391302$ fm$^{1/2}$).}} \label{tab:4} \end{table} \begin{table} \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 & 1.00& (3925.0, 0.0) & 0.00 & 0.00 & 1.00 \\[2pt] 0.05 & -0.780 & 0.90 & 0.99& (3925.5, 1.5) & 0.03+0.12i & 0.01 & 1.00+00i \\[2pt] 0.10 & -0.752 & 0.89 & 0.97 & 
(3926.8, 6.2) & 0.06+0.24i &0.03 & 0.98+0.03i \\[2pt] 0.20 & -0.641 & 0.84 & 0.88 & (3931.5, 27.5) & 0.15+0.45i &0.13 & 0.94+0.11i \\[2pt] 0.30 & -0.456& 0.79 & 0.76 & (3935.9, 73.3) & 0.29+0.62i & 0.29 & 0.87+0.26i \\[2pt] 0.40 & -0.196& 0.72 & 0.64 &(3927.9, 172.3) & 0.50+0.74i & 0.59 & 0.79+0.56i \\[2pt] 0.45 & -0.039 & 0.69 & 0.59& (3898.6, 280.6) & 0.69+0.84i & $>$1 & 0.67+0.96i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000& 0.68 &0.57 &(3882.1, 325.8) & 0.76+0.90i & $>$1 & 0.58+1.18i \\[2pt] 0.48 & 0.064 & 0.67 &0.56 &(3373.4, 0.0)& 0.99 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 2.44 \\[2pt] 0.50 & 0.137 & 0.66 &0.54 &(3749.4, 0.0) & 1.20 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 2.66 \\[2pt] 0.60 & 0.544 & 0.60 & 0.44& (3842.3, 0.0) & 0.65 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $&1.53 \\[2pt] 1.00 & 2.913 & 0.43 & 0.22& (3866.2, 0.0) & 0.33 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $&1.18 \\[2pt] 2.00 & 14.017 & 0.23 & 0.07& (3870.6, 0.0) & 0.16 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $&1.05 \\[2pt] 4.00 &58.44 & 0.12 & 0.02& (3871.4, 0.0) & 0.08 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$&1.01 \\[2pt] \hline \hline \end{tabular} }} \caption{ \small{ For $m^0_{c\bar{c}} =$ 3925 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.461594$ fm$^{1/2}$).}} \label{tab:5} \end{table} \begin{table} \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 & 1.00& (3937.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -0.781 & 0.90 &0.99 & (3937.42, 1.36) & 0.03+0.11i & 0.01 & 1.00+0.01i \\[2pt] 0.10 & -0.758 & 0.89 & 0.98 & (3938.6, 5.54) &0.07+0.21i & 0.02 &0.99+0.02i \\[2pt] 0.20 & -0.668 & 0.86 
&0.92 & (3943.2, 23.9) & 0.16+0.40i & 0.09 & 0.96+0.09i \\[2pt] 0.30 & -0.517 & 0.82 &0.83 & (3949.1, 61.5) & 0.28+0.56i & 0.22 & 0.91+0.20i \\[2pt] 0.40 & -0.305 & 0.77 &0.73 &(3951.7, 134.3) & 0.44+0.68i & 0.42 & 0.85+0.39i \\[2pt] 0.50 & -0.033& 0.72 & 0.63 & (3930.6, 311.5) & 0.70+0.82i &0.94 & 0.67+0.88i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.71 &0.62 &(3923.5, 352.8) & 0.74+0.86i & $>$1 & 0.59+1.01i \\[2pt] 0.53 & 0.060 & 0.70 & 0.61 &(3081.0, 0.0) & 0.47 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.39 \\[2pt] 0.54 & 0.092 & 0.70 & 0.60 &(3457.4, 0.0) & 0.94 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 2.24 \\[2pt] 0.60 & 0.299 & 0.67 & 0.55 & (3795.6, 0.0) & 0.87 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$ & 1.86 \\[2pt] 1.00 & 2.233 & 0.49 &0.30 & (3862.4, 0.0) & 0.40 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.24 \\[2pt] 2.00 & 11.297 & 0.28 & 0.10 & (3870.0, 0.0) & 0.19 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.07 \\[2pt] 4.00 & 47.554 & 0.15 & 0.03 & (3871.3, 0.0) & 0.09 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.02 \\[2pt] \hline \hline \end{tabular} }} \caption{ \small{ For $m^0_{c\bar{c}} = $ 3937 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.510903$ fm$^{1/2}$).}} \label{tab:6} \end{table} \begin{table} \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 &1.00 & (3947.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt]
0.05 & -0.782 & 0.90 & 1.00 & (3947.4, 1.3) & 0.04 + 0.10i & 0.01 & 0.99 + 0.01i \\[2pt] 0.10 & -0.762 &0.89 & 0.98 & (3948.6, 5.1) &0.08 + 0.19i &0.02 & 0.99 + 0.02i \\[2pt] 0.20 & -0.683 &0.87 & 0.93 & (3953.0, 21.8) & 0.16 + 0.37i & 0.08 & 0.97 + 0.07i\\ [2pt] 0.30 & -0.553 & 0.84 & 0.86 & (3959.3, 54.7) & 0.27 + 0.52i & 0.18 & 0.93 + 0.17i\\ [2pt] 0.40 & -0.369 & 0.80 &0.78 &(3965.2, 115.0) & 0.42 + 0.63i & 0.34 &0.88 + 0.31i \\ [2pt] 0.50 & -0.134 & 0.75 &0.70 &(3963.3, 237.0) & 0.61 + 0.73i & 0.64 & 0.78 + 0.60i \\ [2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.73 &0.66 &(3953.8, 365.3) & 0.74 + 0.83i & 0.99 & 0.61 + 0.91i \\ [2pt] 0.60 & 0.155 & 0.71 & 0.61 & (3628.0, 0.0)& 1.00 & $\vert \tilde{X}_{\chi_{c1}} \vert <1$& 2.24 \\ [2pt] 1.00 & 1.832 & 0.54 &0.37 & (3858.0, 0.0) & 0.46 &$\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.30 \\ [2pt] 2.00 & 9.692 & 0.32 & 0.13 & (3869.4, 0.0)& 0.22 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.09 \\ [2pt] 4.00 & 41.135 & 0.17 & 0.03 & (3871.2, 0.0)& 0.11 &$\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.03 \\ [2pt] 6.00 & 93.538 & 0.11 & 0.02 & (3871.5, 0.0)& 0.07 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.01 \\ [2pt] \hline \hline \end{tabular} }} \caption{ \small{ For $m^0_{c\bar{c}} = $ 3947 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.548624$ fm$^{1/2}$).
}} \label{tab:7} \end{table} \begin{table} \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -0.789 & 1.00 &1.00 & (3953.0, 0.0) & 0.00 & 0.00 & 1.00 \\ [2pt] 0.05 & -0.783 & 0.90 & 1.00 & (3953.4, 1.2) & 0.04+0.09i &0.01 & 1.00+0.00i \\[2pt] 0.10 & -0.764& 0.89 & 0.99 & (3954.5, 4.9) & 0.08+0.18i &0.02 & 0.99+0.02i \\[2pt] 0.20 & -0.692 & 0.88 & 0.94& (3958.8, 20.7) & 0.17+0.35i & 0.07 &0.97+0.07i \\ [2pt] 0.30 & -0.570 & 0.85 &0.88 & (3965.3, 51.4) & 0.27+0.50i & 0.16 & 0.94+0.15i \\ [2pt] 0.40 & -0.400 & 0.81 & 0.81 &(3972.2, 106.2) & 0.41+0.61i &0.30 & 0.89+0.28i \\ [2pt] 0.50 & -0.182 & 0.77 &0.73 &(3975.2, 210.5) & 0.58+0.70i & 0.54 & 0.81+0.51i \\ [2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.74 &0.67 &(3970.6, 370.1) & 0.74+0.82i &0.95 &0.61+0.87i \\ [2pt] 0.60 & 0.085 & 0.73 &0.65 & (3185.6, 0.0) & 0.50 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.40 \\ [2pt] 0.65 & 0.237 & 0.70 &0.61 & (3716.5, 0.0) & 0.93 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 2.00 \\ [2pt] 1.00 & 1.638 & 0.57 &0.40 & (3854.6, 0.0) & 0.50 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.33 \\ [2pt] 2.00 & 8.919 & 0.34 &0.14 & (3868.9, 0.0) & 0.24 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.10 \\ [2pt] 4.00 & 38.041 & 0.18 &0.04 & (3871.1, 0.0) & 0.12 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.03 \\ [2pt] 6.00 & 86.578 & 0.12 &0.02 & (3871.4, 0.0) & 0.08 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $& 1.01 \\ [2pt] \hline\hline \end{tabular} }} \caption{ \small{ For $m^0_{c\bar{c}} =$ 3953 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=$0.57006 fm$^{1/2}$). 
}} \label{tab:8} \end{table} \clearpage Numerical results for an UV cutoff $\Lambda=0.5$ GeV: \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.01 & -1.941 & 1.052 &0.995 & (3865.0, 0.21) & -0.02i &0.001 &1.00 \\ [2pt] 0.05 & -2.011& 0.997 & 0.894 & (3865.3, 0.11) & 0.04-0.02i & 0.013& 1.01-0.01i \\[2pt] 0.10 & -2.232 & 0.868 &0.677 & (3866.1, 1.65) & 0.05-0.20i & 0.048 & 1.05-0.01i \\[2pt] 0.20 & -3.118& 0.619& 0.344 & (3868.5, 2.80) & 0.17-0.25i & 0.110 & 1.11+0.03i \\[2pt] 0.40 & -6.657 & 0.359 & 0.116 &(3870.7, 1.34) & 0.18-0.12i & 0.070 & 1.04+0.06i\\[2pt] 1.00 & -31.434 & 0.151 & 0.021 & (3871.5, 0.25)& 0.08-0.04i & 0.015 & 1.01+0.01i \\[2pt] 3.00 & -267.4& 0.051 & 0.002 & (3871.67, 0.03) & 0.03-0.01i & 0.002 & 1.00 \\[2pt] 10.00 & -2951.5& 0.015 & 0.000 & (3871.69, 0.00) & 0.01 & 0.000 & 1.00 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3865 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$. 
}} \label{tab:05} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3896.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.917 & 1.05 & 0.99 & (3896.2, 0.4) & 0.02-0.18i & 0.006 & 0.99+0.04i \\[2pt] 0.10 & -1.856 & 1.03 &0.96 & (3896.8, 1.8) & 0.04-0.16i & 0.025 & 0.98+0.02i \\[2pt] 0.20 & -1.613& 0.98 & 0.87 & (3899.2, 7.95) & 0.10+0.30i & 0.09 & 0.93+0.07i \\[2pt] 0.30 & -1.207 & 0.91 & 0.75 & (3903.1, 21.1) & 0.19+0.4i & 0.21 & 0.86+0.16i \\[2pt] 0.40 & -0.639 & 0.83 & 0.63 &(3908.2, 48.3) & 0.33+0.47i & 0.40& 0.78+0.33i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.77 &0.53 & (3917.1, 105.9) & 0.48+0.55i & 0.80 & 0.53+0.65i \\[2pt] 0.50 & 0.091 & 0.76 & 0.52 &(3920.2, 118.4) & 0.49+0.57i & 0.89 & 0.43+0.69i \\[2pt] 0.70 & 2.039 & 0.63 & 0.36 &(3856.6, 0.0) & 0.35 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.28 \\[2pt] 1.00 & 6.179 & 0.49 & 0.21& (3866.9, 0.0)&0.23 &$\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.13 \\[2pt] 2.00 & 30.53 & 0.27 & 0.06 & (3870.7, 0.0) & 0.113 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.03 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3896 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.488589 $ fm$^{1/2}$).}} \label{tab:9} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] 
\hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3910.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.925& 1.05 & 0.99 & (3910.1, 0.3) & 0.03-0.05i & 0.003 & 0.99+0.02i \\[2pt] 0.10 & -1.886 & 1.04 &0.98 & (3910.6, 1.4) & 0.06-0.11i & 0.013 & 0.99+0.01i \\[2pt] 0.20 & -1.731& 1.024 & 0.94 & (3912.6, 5.8) & 0.13+0.22i & 0.05 & 0.97+0.04i \\[2pt] 0.30 & -1.474 & 0.99 & 0.88 & (3916.0, 14.2) & 0.21+0.30i & 0.11 & 0.93+0.10i \\[2pt] 0.40 & -1.113 & 0.94 & 0.81 &(3921.0, 28.4) & 0.29+0.37i & 0.21& 0.88+0.18i \\[2pt] 0.50 & -0.650 & 0.90 & 0.73 &(3928.5, 52.3) & 0.39+0.42i & 0.36 & 0.81+0.31i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.85 &0.64 & (3946.1, 100.5) & 0.51+0.48i & 0.67 & 0.57+0.52i \\[2pt] 0.70 & 0.586 & 0.80 & 0.58 &(3979.1, 139.1) & 0.52+0.54i & 0.89 & 0.23+0.46i \\[2pt] 1.00 & 3.213 & 0.67 & 0.40& (3855.4, 0.0)&0.32 &$\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.24 \\[2pt] 2.00 & 18.665& 0.40 & 0.14 & (3869.2, 0.0) & 0.174 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.07 \\[2pt] 3.00 & 44.419& 0.28 & 0.07 & (3870.6, 0.0) & 0.117 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.03 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3910 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.613349 $ fm$^{1/2}$). 
}} \label{tab:10} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3925.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.928& 1.05 & 0.99 & (3925.1, 0.2) & 0.03-0.04i & 0.002 & 0.99+0.02i \\[2pt] 0.10 & -1.900 & 1.05 &0.99 & (3925.5, 1.1) & 0.07-0.08i & 0.008 & 0.99+0.07i \\[2pt] 0.20 & -1.789& 1.04 & 0.97 & (3927.3, 4.5) & 0.13+0.16i & 0.03 & 0.98+0.03i \\[2pt] 0.30 & -1.604 & 1.02 & 0.93 & (3930.2, 10.7) & 0.21+0.23i & 0.07 & 0.96+0.06i \\[2pt] 0.40 & -1.345 & 0.99 & 0.89 &(3934.7, 20.4) & 0.28+0.28i & 0.14& 0.93+0.12i \\[2pt] 0.50 & -1.012 & 0.96 & 0.84 &(3941.0, 34.9) & 0.36+0.33i & 0.23 & 0.88+0.20i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.89 &0.72 & (3970.9, 91.9) & 0.53+0.42i & 0.59 & 0.61+0.44i \\[2pt] 1.00 & 1.763 & 0.79 & 0.57& (3822.8, 0.0)&0.33 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.26 \\[2pt] 2.00 & 12.868& 0.52 & 0.25 & (3866.4, 0.0) & 0.23 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.12 \\[2pt] 4.00 & 57.286& 0.29 & 0.07 & (3870.6, 0.0) & 0.12 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.04 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3925 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.723529 $ fm$^{1/2}$). 
}} \label{tab:11} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3937.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.930& 1.05 & 0.99 & (3937.1, 0.2) & 0.03-0.03i & 0.001 & 0.99+0.00i \\[2pt] 0.10 & -1.907 & 1.05 &0.99 & (3937.5, 0.9) & 0.07-0.06i & 0.006 & 0.99+0.00i \\[2pt] 0.20 & -1.817& 1.04 & 0.98 & (3939.1, 3.8) & 0.13+0.12i & 0.02 & 0.98+0.02i \\[2pt] 0.30 & -1.665 & 1.03 & 0.95 & (3941.8, 9.0) & 0.20+0.18i & 0.06 & 0.97+0.05i \\[2pt] 0.40 & -1.454 & 1.01 & 0.92 &(3945.9, 16.7) & 0.27+0.23i & 0.11& 0.94+0.01i \\[2pt] 0.50 & -1.182 & 0.99 & 0.89 &(3951.5, 27.7) & 0.34+0.27i & 0.17 & 0.91+0.15i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.91 &0.75 & (3988.2, 84.8) & 0.53+0.37i & 0.54 & 0.62+0.39i \\[2pt] 1.00 & 1.083 & 0.86 & 0.66& (3762.3, 0.0)&0.23 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.16 \\[2pt] 1.50 & 4.860& 0.72 & 0.47 & (3852.0, 0.0) & 0.31 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.21 \\[2pt] 2.00 & 10.1478& 0.61 & 0.33 & (3869.0, 0.0) & 0.33 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.48 \\[2pt] 4.00 & 46.404& 0.35 & 0.11 & (3869.9, 0.0) & 0.14 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.05 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3937 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.800832 $ fm$^{1/2}$). 
}} \label{tab:12} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3947.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.931& 1.05 & 0.99 & (3947.1, 0.2) & 0.03-0.02i & 0.001 & 0.99+0.00i \\[2pt] 0.10 & -1.911 & 1.05 &0.99 & (3947.5, 0.8) & 0.06-0.05i & 0.005 & 0.99+0.00i \\[2pt] 0.20 & -1.833& 1.04 & 0.98 & (3949.0, 3.4) & 0.13+0.10i & 0.02 & 0.98+0.02i \\[2pt] 0.30 & -1.702 & 1.03 & 0.96 & (3951.8, 7.9) & 0.20+0.15i & 0.05 & 0.97+0.04i \\[2pt] 0.40 & -1.518 & 1.02 & 0.94 &(3955.3, 14.5) & 0.26+0.19i & 0.10& 0.95+0.08i \\[2pt] 0.50 & -1.282 & 1.00 & 0.91 &(3960.6, 23.6) & 0.33+0.23i & 0.14 & 0.93+0.12i \\[2pt] 0.70 & -0.653 & 0.968 & 0.84 &(3977.5, 50.9) & 0.45+0.28i & 0.31 & 0.82+0.25i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.93 &0.78 & (4001.6, 79.0) & 0.53+0.33i & 0.50 & 0.64+0.36i \\[2pt] 1.00 & 0.682 & 0.90 & 0.72& (3660.8, 0.0)&0.12 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.05 \\[2pt] 2.00 & 8.543& 0.66 & 0.39 & (3859.3, 0.0) & 0.28 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.18 \\[2pt] 4.00 & 41.13& 0.16 & 0.03 & (3871.1, 0.0) & 0.11 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.02 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3947 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.859959 $ fm$^{1/2}$). 
}} \label{tab:13} \end{table} \begin{table}[h] \centering{ \scalebox{0.8}{ \begin{tabular}{c |c c c | c c c c} d [fm$^{1/2}$] & C$_{0X}$ [$fm^{2}$] & $g_{D\bar{D}^*}^{X(3872)}$[GeV$^{-1/2}$] & $\tilde{X}_{X(3872)}$ & ($m_{\chi_{c1}},\Gamma_{\chi_{c1}} $)[MeV] & $g_{D\bar{D}^*}^{\chi_{c1}}$[GeV$^{-1/2}$] &$\vert \tilde{X}_{\chi_{c1}} \vert $ & $\tilde{Z}_{\chi_{c1}}$ \\ [2pt] \hline \hline 0.00 & -1.938 & 1.00 &1.00 & (3953.0, 0.0) & 0.00 &0.00 &1.00 \\ [2pt] 0.05 & -1.931& 1.05 & 0.99 & (3953.1, 0.2) & 0.03-0.02i & 0.001 & 0.99+0.00i \\[2pt] 0.10 & -1.913 & 1.05 &0.99 & (3953.5, 0.8) & 0.06-0.04i & 0.005 & 0.99+0.00i \\[2pt] 0.20 & -1.840& 1.04 & 0.98 & (3954.9, 3.2) & 0.13+0.10i & 0.02 & 0.99+0.01i \\[2pt] 0.30 & -1.719 & 1.04 & 0.97 & (3957.4, 7.3) & 0.19+0.13i & 0.04 & 0.98+0.04i \\[2pt] 0.40 & -1.549 & 1.03 & 0.95 &(3961.1, 13.4) & 0.26+0.17i & 0.08& 0.96+0.07i \\[2pt] 0.50 & -1.331 & 1.01 & 0.92 &(3966.1, 21.7) & 0.32+0.20i & 0.13 & 0.93+0.11i \\[2pt] 0.70 & -0.748 & 0.979 & 0.86 &(3981.9, 45.8) & 0.44+0.26i & 0.28 & 0.84+0.23i \\[2pt] $\emph{d}^{\emph{crit}}$ & 0.000 & 0.94 &0.79 & (4009.2, 75.7) & 0.53+0.31i & 0.48 & 0.65+0.34i \\[2pt] 1.00 & 0.489 & 0.91 & 0.75 & (3550.4, 0.0)&0.06 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.01 \\[2pt] 2.00 & 7.769& 0.69 & 0.43 & (3856.6, 0.0) & 0.29 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.19 \\[2pt] 4.00 & 36.89& 0.42 & 0.16 & (3868.9, 0.0) & 0.17 & $\vert \tilde{X}_{\chi_{c1}} \vert <1 $ & 1.08 \\[2pt] \hline \hline \end{tabular} } } \caption{\small{For $m^0_{c\bar{c}} = $ 3953 MeV bare charmonium mass, dressed mass value of the $\chi_{1c}(2P)$ and its other properties as a function of $d$ ($ d^{crit}=0.893559 $ fm$^{1/2}$). }} \label{tab:14} \end{table} \clearpage
\section{Introduction} Information retrieval has traditionally used sparse representations such as TF-IDF or BM25 to retrieve relevant documents for a given query. However, these approaches suffer from the lexical gap problem \cite{berger_lexical_gap}. To overcome this issue, dense representations have been proposed \cite{end-to-end-retrieval}: Queries and documents are mapped to a dense vector space, and relevant documents are retrieved, e.g., by cosine similarity. Improvements over sparse lexical approaches have been shown for various datasets \cite{end-to-end-retrieval,guo2020multireqa,guu2020realm,gao2020complementing}. However, previous work demonstrated this advantage only on fixed, rather small indexes. The largest dataset on which it has been shown is the MS MARCO \cite{msmarco} passage retrieval dataset, where retrieval is performed over an index of 8.8 million text passages. In production scenarios, however, index sizes quickly reach hundreds of millions of documents. We show in this paper that the performance of dense representations can degrade more quickly with increasing index size than that of sparse representations. For a small index of, e.g., 100k documents, a dense approach might clearly outperform sparse approaches, yet with a larger index of several million documents, the sparse approach can outperform the dense one. We show theoretically and empirically that this effect is closely linked to the dimensionality of the representations: Using fewer dimensions increases the chance of false positives, and this effect becomes more severe with increasing index size. \section{Related Work} A common choice for dense retrieval is to fine-tune a transformer network such as BERT \cite{devlin2018bert} on a training corpus of queries and relevant documents \cite{guo2020multireqa,guu2020realm,gao2020complementing,dpr,luan2020sparse}.
Recent work showed that combining dense approaches with sparse, lexical approaches can further boost performance \cite{luan2020sparse,gao2020complementing}. While these approaches have been tested on various information retrieval and question answering datasets, their performance was only evaluated on fixed, rather small indexes. \newcite{guo2020multireqa} evaluated approaches on eight different datasets with index sizes between 3k and 454k documents. We are not aware of previous work that compares sparse and dense approaches for increasing index sizes and relates the effect to the dimensionality. The only work we are aware of that systematically studies the encoding size for dense approaches is \cite{luan2020sparse}, but it only examined the connection to the document length. \section{Theory} \label{sec_theory} Dense retrieval approaches map queries and documents\footnote{We use \textit{document} as a cover term for text of any length.} to fixed-size dense vectors. The most relevant documents for a given query can then be found using cosine similarity. Using as few dimensions as possible is desirable, as it decreases the memory required to store an index of millions of vectors and leads to faster retrieval. However, as we show, lower-dimensional representations can have issues with large indexes. Given a query vector $q \in \mathbb{R}^k$, we search our index of document vectors $d_1, ..., d_n \in \mathbb{R}^k$ for the document that maximizes: $$\text{cossim}(q, d_i) = \cos(\theta) = \frac{q \cdot d_i}{\left\| q \right\| \left\| d_i \right\|}$$ Note: In the following, we only show the case for cosine similarity. The proof extends to other similarity functions such as the dot product and any $p$-norm (Manhattan, Euclidean) as long as the vector space is finite: a finite $n$-dimensional vector space can be mapped to an $(n+1)$-dimensional vector space with vectors of unit length.
In that case, the dot product in $n$ dimensions is equivalent to cosine similarity in $n+1$ dimensions. Similarly, any $p$-norm in $n$ dimensions can be rewritten as cosine similarity in $n+1$ dimensions. \textbf{Theorem:} The probability of false positives increases (I) with the index size $n$ and (II) with decreasing dimensionality $k$. \textbf{Proof (I):} Consider a query $q$ and the relevant document $d_r$. For simplicity, we assume only a single relevant document; if multiple documents are relevant, we consider only the one with the highest cosine similarity. For no false positive to be returned, $\text{cossim}(q, d_{r})$ must be greater than $\text{cossim}(q, d_i)$ for all $i \neq r$. Assume the vectors are independent. Then, the probability of a false positive is $$P(\text{false positive}) = 1 - (1-P(\text{false positive}_i))^{n-1}$$ for an index with $n-1$ negative elements, with $P(\text{false positive}_i)$ the probability that a single element is a false positive, i.e.\ $\text{cossim}(q, d_i) > \text{cossim}(q, d_{r})$. This probability is monotonically increasing in $n$. \textbf{Proof (II):} While the previous proof, that the chance of false positives increases with larger index sizes, is straightforward, the more interesting aspect is the relation to the dimensionality, i.e., what is the probability $P(\text{false positive}_i) = P(\text{cossim}(q, d_i) > \text{cossim}(q, d_{r}))$ for a random $d_i$? We show that this probability decreases with more dimensions. Without loss of generality, we assume that the vectors are of unit length. The vectors then lie on a $k$-dimensional sphere with radius 1. A false positive occurs if $\text{cossim}(q, d_i) > \text{cossim}(q, d_{r})$ or, equivalently, if $1-\text{cossim}(q, d_i) < 1 - \text{cossim}(q, d_{r})$. That is, we intersect the sphere in $k$ dimensions with a hyperplane in $k-1$ dimensions; the area of the cut-off portion is determined by $1-\text{cossim}(q, d_{r})$. All vectors within this cut-off portion (i.e.\ the spherical cap) are false positives.
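The compounding effect of the index size is easy to check numerically. The sketch below assumes a fixed, illustrative per-document false-positive probability ($10^{-6}$; this value is chosen for illustration and is not measured in this paper):

```python
def p_false_positive(p_single: float, n: int) -> float:
    """P(at least one of the n-1 negatives outscores the relevant
    document), assuming independent documents."""
    return 1.0 - (1.0 - p_single) ** (n - 1)

# Illustrative per-document false-positive probability (assumed value)
p_i = 1e-6
for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"n = {n:>10,}  P(false positive) = {p_false_positive(p_i, n):.4f}")
```

For $n = 10^4$ the overall probability stays around 1\%, but for $n = 10^7$ it approaches certainty: the same per-document error rate yields very different retrieval quality at scale.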
The probability that a random vector is returned as a false positive is: $$P(\text{false positive}_i) = A_{cap} / A_{sphere}$$ with $A_{cap}$ the surface area of the spherical cap and $A_{sphere}$ the surface area of the sphere in $k$ dimensions. Denote the surface area of the sphere in $k$ dimensions by $A_k$; the surface area of the cap is then \cite{Li2011ConciseFF}: $$A_{cap} = \frac{1}{2} A_k I_{\sin^2 \theta}\left(\frac{k-1}{2}, \frac{1}{2}\right)$$ with $I_x(a,b)$ the regularized incomplete beta function and $\theta$ the polar angle, i.e.\ the angle between $q$ and the relevant document $d_r$. Hence: \begin{gather} \label{eq_false_pos} P(\text{false positive}_i) = \frac{1}{2} I_{\sin^2 \theta}\left(\frac{k-1}{2}, \frac{1}{2}\right) \end{gather} For a constant cosine similarity between query $q$ and relevant document $d_r$, $I_{\sin^2 \theta}\left(\frac{k-1}{2}, \frac{1}{2}\right)$ is monotonically decreasing in the dimension $k$. In conclusion, more dimensions decrease the probability of false positives. Combining (I) and (II) shows that a low-dimensional representation might work well for small index sizes. However, with more indexed documents, the probability of false positives increases faster for low-dimensional representations than for high-dimensional ones. Hence, at some index size, higher-dimensional representations might outperform lower-dimensional ones. \section{Empirical Investigation} In the proof, we have assumed that vectors are independent and uniformly distributed over the space, which gives us a lower bound on the false positive rate. However, in practice, dense representations are neither independent nor uniformly distributed. As shown in \cite{ethayarajh-2019-contextual,li-etal-2020-sentence}, dense representations derived from pre-trained Transformers like BERT map to an anisotropic space, i.e., the vectors occupy only a narrow cone in the vector space.
This drastically increases the chance that an irrelevant document is closer to the query embedding than the relevant document. Hence, we study how actual dense models are impacted by increasing index sizes and lower-dimensional representations. \subsection{Dataset} We conduct our experiments on the MS MARCO passage dataset \cite{msmarco}. It consists of over 1 million unique real queries from the Bing search engine, together with 8.8 million paragraphs from heterogeneous web sources. Most of the queries have only one passage judged as relevant, even though more can exist. The development set consists of 6980 queries, and the performance is evaluated using mean reciprocal rank (MRR@10). To better compare the relative performance differences, we compute a rank-aware error rate: $$\text{Err} = \frac{1}{n} \sum_{i=1}^n \left(1 - \frac{1}{\text{rank}_i}\right)$$ with $\text{rank}_i$ being the rank of the relevant document for the $i$-th query. To be compatible with MRR@10, we set $\text{rank}_i = \infty$ for $\text{rank}_i > 10$. We then define the relative error rate as $\text{Err}_{\text{Dense}} / \text{Err}_{\text{BM25}}$. A relative error rate of $50\%$ indicates that the dense approach makes only half of the errors of BM25 retrieval. \subsection{Model} For sparse, lexical retrieval, we use ElasticSearch, which is based on BM25. For dense retrieval, we use a DistilRoBERTa-base model \cite{sanh2020distilbert} as a bi-encoder: The query and the passage are passed independently to the transformer model, and the output is averaged to create fixed-size representations. We train this using the InfoNCE loss \cite{InfoNCE}: $$L = -\log \frac{\exp(\tau \cdot \text{cossim}(q, p_+))}{\sum_i \exp(\tau \cdot \text{cossim}(q, p_i))} $$ with $q$ the query and $p_+$ the relevant passage. We use in-batch negative sampling, i.e., the other passages in a batch serve as negative examples. We found that $\tau=20$ performs well.
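The loss above can be sketched in plain numpy. This is a hedged re-implementation for illustration only, not the actual training code (which operates on transformer outputs); the vectors below are hypothetical.

```python
import numpy as np

def info_nce(query, passages, pos_idx, tau=20.0):
    """InfoNCE over in-batch candidates; passages[pos_idx] is the positive."""
    q = query / np.linalg.norm(query)
    p = passages / np.linalg.norm(passages, axis=1, keepdims=True)
    logits = tau * (p @ q)                  # tau * cossim(q, p_i)
    m = logits.max()                        # numerically stable log-softmax
    lse = m + np.log(np.exp(logits - m).sum())
    return -(logits[pos_idx] - lse)

# The loss is near zero when the positive dominates all in-batch negatives.
passages = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
loss = info_nce(np.array([1.0, 0.0]), passages, pos_idx=0)
assert 0.0 < loss < 1e-6
```

When all candidates score equally, the loss reduces to $\log$ of the batch size, as for any softmax cross-entropy over uniform logits.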
We train the model in two setups: 1) only with random (in-batch) negatives, and 2) with one additional hard-negative passage per query. We use the hard-negative passages provided by the MS MARCO dataset, which were retrieved using lexical search. Models are trained with a batch size of 128 using the Adam optimizer and a learning rate of $2e-5$. DistilRoBERTa produces representations with 768 dimensions. We also experiment with lower-dimensional representations: There, we add a linear projection layer on top of the mean pooling operation to down-project the representation to either 128 or 256 dimensions. Dense retrieval is performed using cosine similarity with exact search. Models were trained using the SBERT framework \cite{reimers-2019-sentence-bert}.\footnote{\url{https://www.SBERT.net}} \section{Experiments} First, we study the impact of increasing index sizes with real text passages. Then, we study the performance when random noise is added. \subsection{Increasing Index Size} In the first experiment, we start with an index that only contains the 7433 relevant passages for the 6980 queries. Then, we step-wise add randomly selected passages from the MS MARCO corpus to the index until all 8.8 million passages are indexed.
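The rank-aware error rate and the relative error rate defined in the previous section can be sketched as follows; the ranks below are hypothetical toy values, not results from the experiments.

```python
def error_rate(ranks, cutoff=10):
    """Rank-aware error rate; ranks beyond the cutoff count as full errors
    (rank_i = infinity), matching the MRR@10 convention."""
    return sum(1.0 - (1.0 / r if r <= cutoff else 0.0) for r in ranks) / len(ranks)

# Hypothetical ranks of the relevant passage for three queries.
dense_ranks = [1, 2, 50]
bm25_ranks = [1, 5, 11]
relative = error_rate(dense_ranks) / error_rate(bm25_ranks)  # Err_Dense / Err_BM25
assert abs(error_rate(dense_ranks) - 0.5) < 1e-12
assert abs(relative - 0.5 / 0.6) < 1e-12
```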
\begin{table}[h] \centering \footnotesize \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Model} & \textbf{10k} & \textbf{100k} & \textbf{1M} & \textbf{8.8M} \\ \hline BM25 & 79.93 & 63.88 & 40.14 & 17.56 \\ \hline \multicolumn{5}{|l|}{Trained without hard negatives} \\ \hline \quad 128 dim & 87.50 & 68.63 & 39.76 & 15.71 \\ \quad 256 dim & 88.82 & 70.79 & 41.74 & 17.08 \\ \quad 768 dim & 88.99 & 71.06 & 42.24 & 17.34 \\ \hline \multicolumn{5}{|l|}{Trained with hard negatives} \\ \hline \quad 128 dim & 90.32 & 77.92 & 54.45 & 27.34 \\ \quad 256 dim & 91.10 & 78.90 & 55.51 & 28.16 \\ \quad 768 dim & 91.48 & 79.42 & 56.05 & 28.55 \\ \hline \end{tabular} \caption{Dev performance (MRR@10 $\times 100$) on MS MARCO passage dataset with different index sizes. Higher score = better.} \label{table_increase} \end{table} \begin{table}[h] \centering \footnotesize \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Model} & \textbf{10k} & \textbf{100k} & \textbf{1M} & \textbf{8.8M} \\ \hline \multicolumn{5}{|l|}{Trained without hard negatives} \\ \hline \quad 128 dim & 62.3 & 86.8 & 100.6 & 102.2 \\ \quad 256 dim & 55.7 & 80.9 & 97.3 & 100.6 \\ \quad 768 dim & 54.9 & 80.1 & 96.5 & 100.3 \\ \hline \multicolumn{5}{|l|}{Trained with hard negatives} \\ \hline \quad 128 dim & 48.2 & 61.1 & 76.1 & 88.1 \\ \quad 256 dim & 44.3 & 58.4 & 74.3 & 87.1 \\ \quad 768 dim & 42.5 & 57.0 & 73.4 & 86.7 \\ \hline \end{tabular} \caption{Relative error rate (\%) of dense approaches in comparison to BM25 retrieval. Lower score = better.} \label{table_increase_err} \end{table} Table \ref{table_increase} shows the MRR@10 performance for the different systems. Increasing the index naturally decreases the performance for all systems, as retrieving the correct passages from a larger index is more challenging. The dense approach trained without hard negatives clearly outperforms BM25 for an index with 10k - 1M entries, but with all 8.8 million passages it performs worse than BM25. 
Table \ref{table_increase_err} shows the relative error rate in comparison to BM25 retrieval. For small index sizes, we observe that dense approaches drastically reduce the error rate compared to BM25 retrieval. With increasing index sizes, the gap closes. \subsection{Index with Random Noise} MS MARCO is sparsely labeled, i.e., there is usually only a single passage labeled as relevant even though multiple passages would be considered relevant by humans \cite{trec-dl-2019}. To rule out that the drop in performance is due to the retrieval of relevant but unlabeled passages, we perform an experiment where we add random, irrelevant noise to the index. Our index consists only of the relevant passages and a large number of irrelevant, randomly generated strings.\footnote{Strings are generated randomly using lowercase characters a-z and space.} We also evaluate the popular DPR system by \newcite{dpr}, which is a BERT-based dense retriever trained on the Natural Questions (NQ) dataset \cite{NQ_dataset}. We use the NQ dev set, which consists of 1772 questions from Google search logs. DPR encodes the passage as \texttt{Title [SEP] Paragraph}. We create a random string for the paragraph and combine it with a title obtained in one of three ways: 1) a randomly generated string, 2) one of the more than 6 million real Wikipedia article titles, selected at random, 3) one of the 1772 article titles found in the NQ dev set, selected at random.
\begin{table}[h] \centering \footnotesize \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Model} & \textbf{100k} & \textbf{1M} & \textbf{10M} & \textbf{100M} \\ \hline BM25 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ \hline \multicolumn{5}{|l|}{Dense without hard negatives - MS MARCO} \\ \hline \quad 128 dim & 2.71\% & 4.41\% & 6.69\% & 9.73\% \\ \hline \quad 256 dim & 2.39\% & 4.03\% & 6.16\% & 9.04\% \\ \hline \quad 768 dim & 2.13\% & 3.72\% & 5.77\% & 8.52\% \\ \hline \multicolumn{5}{|l|}{Dense with hard negatives - MS MARCO} \\ \hline \quad 128 dim & 2.87\% & 4.20\% & 6.00\% & 8.11\% \\ \hline \quad 256 dim & 2.45\% & 3.72\% & 5.59\% & 7.38\% \\ \hline \quad 768 dim & 2.12\% & 3.32\% & 5.09\% & 7.03\% \\ \hline \multicolumn{5}{|l|}{DPR \cite{dpr} - Natural Questions} \\ \hline \quad rnd title & 0.17\% & 0.28\% & 0.34\% & 0.51\% \\ \hline \quad all titles & 2.48\% & 5.59\% & 9.31\% & 12.08\% \\ \hline \quad dev titles & 4.18\% & 5.36\% & 6.66\% & 8.01\% \\ \hline \end{tabular} \caption{Percentage of queries for which a random string passage is ranked higher than the relevant passage. 100k/1M/10M/100M indicates the number of random passages in the index.} \label{table_rnd_noise} \end{table} We count for how many queries a random string is ranked higher than the relevant passage. The results are shown in Table \ref{table_rnd_noise}. We observe that BM25 does not rank any randomly generated passage higher than the relevant passage for the MS MARCO dataset. The chance that a random passage contains words matching the query is small. For the dense retrieval models, we observe for quite a large number of queries that a random string passage is ranked higher than the relevant passage. As proven in Section \ref{sec_theory}, the error increases with larger index sizes and fewer dimensions. For DPR, we observe an extreme dependency on the title. 
With 100 million index entries consisting of a real Wikipedia article title and a random paragraph, such an entry is retrieved at the top position for about 12.08\% of all questions. The error rates far exceed the estimate from equation (\ref{eq_false_pos}), confirming that the representations are not uniformly distributed over the complete vector space but are concentrated in a small region. In the appendix (Figure \ref{fig_umap}), we plot the representations for the queries, the relevant passages, and the random strings. \section{Conclusion} We have proven and shown empirically that the probability of false positives in dense information retrieval depends on the index size and on the dimensionality of the used representations. These approaches can even retrieve completely irrelevant, randomly generated passages with high probability. It is important to understand the limitations of dense retrieval: 1) Dense approaches work better for smaller, clean indexes. With increasing index size, the difference to sparse approaches can decrease. 2) Evaluation results with smaller indexes cannot be transferred to larger index sizes. A system that is state-of-the-art for an index of 1 million documents might perform badly on larger indexes. 3) The false positive rate increases with fewer dimensions. 4) The empirically found error rates far exceed the mathematical lower-bound error rates, indicating that only a small fraction of the available vector space is effectively used. \section*{Acknowledgments} This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1) and has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
\section{Introduction} Cellular motility plays an essential role in biological processes \cite{Bray_2000}. For instance, wounds cannot heal without the motility of various kinds of cells such as fibroblasts~\cite{Velnar_2009}. This is not limited to the cells of the human body. Unicellular microorganisms like bacteria can also move on surfaces or swim in fluid media~\cite{Berg_1975,Harshey_2003,Jarrell_2008}. One characteristic of cell movement is the temporal change in the cell speed and the direction of motion. These changes can be due to fluctuations at the cellular scale or due to decision-making processes inside the cell. The latter happens for microorganisms in response to chemical/physical stimuli in their environment~\cite{Dusenbery_2011,Adler_1966}. Regardless of why the cell speed changes, studying the mechanics of cells and the effect of variable speed is important because it helps to better understand the biological mechanisms which depend on cell motility. Any microscopic particle, from cells and microorganisms to synthetic particles, that is able to move by a self-propulsion mechanism is known as a self-propelled or active particle~\cite{Schimansky_1995, Ramaswamy_2010, Marchetti_2013}. Active particles take up energy from their environment and convert it into motion. Thus, active particles are intrinsically out of equilibrium. Because of the simplicity of active systems from a theoretical and experimental point of view, and at the same time their rich physics, studying active particles has received much attention in recent years~\cite{Howse_2007,Lauga_2009,Walther_2013,Elgeti_2015,Volpe_2016,Zottl_2016}. In order to describe the motion of active particles, several models have been developed, among which active Brownian motion is the most prominent~\cite{Schweitzer_2003}.
This model generalizes Brownian motion by adding a self-propulsion force and orientational diffusivity for the direction of motion (see~\cite{Romanczuk_2012} and references therein). In this model, the self-propulsion force (and consequently the self-propulsion speed) is assumed to be constant. Although this assumption is adequate for many applications, it can be violated in reality. In this regard, some studies have investigated temporal changes in the self-propulsion speed. Peruani et al.~\cite{Peruani_2007} studied the general aspects of fluctuations in the speed and the direction of motion of self-propelled particles. They derived the characteristic time scales and different temporal regimes of motion. Babel et al.~\cite{Babel_2014} also studied statistical properties of active Brownian motion with time-dependent self-propulsion for three deterministic forms of the speed. In this paper, we generalize active Brownian motion by considering a time-dependent self-propulsion speed, but in a stochastic fashion. We then derive analytical solutions for the first two moments of displacement. Different temporal regimes are investigated and the effective diffusion coefficient is obtained. We illustrate that the behavior of the active-passive Brownian model presented in this study includes the characteristics of both standard Brownian motion and active Brownian motion simultaneously. Finally, we study run-and-tumble particles (RTPs), a well-known model for motile bacteria such as \emph{Escherichia coli}. We show that our model can describe the diffusion characteristics of non-interacting RTPs.
\section{Model and Results} Consider an active particle at position ${\mathbf{r}} = (x,y)$ moving with time-dependent self-propulsion speed $v(t)$ in direction ${\mathbf{u}}(\varphi)=(\cos\varphi,\sin\varphi)$, which undergoes translational and rotational diffusion; see Fig.~\ref{fig: ABP in plane} for a schematic illustration. The motion of this particle is described by the two-dimensional Langevin equations \begin{align} \label{eq: translational Langevin eq} \dot{{\mathbf{r}}} &= v(t) {\mathbf{u}}(\varphi) + \sqrt{2 D_t}\,{\boldsymbol{\xi}}(t)\\ \label{eq: rotational Langevin eq} \dot{\varphi} &= \sqrt{2D_r}\,\eta(t), \end{align} where ${\boldsymbol{\xi}}(t) = (\xi_x(t),\xi_y(t))$ and $\eta(t)$ are independent Gaussian white noises of zero mean and unit variance. The noises can originate from the local thermal fluctuations of the bath containing the particle. The constants $D_t$ and $D_r$ are the translational and rotational diffusion constants, respectively. The above equations are a simple generalization of the active Brownian particle (ABP) model, in which the speed is constant. The solution of Eqs.~(\ref{eq: translational Langevin eq}) and (\ref{eq: rotational Langevin eq}) depends on the functional form of the self-propulsion speed $v(t)$. Here, we consider the special case that $v(t)$ at any moment takes only the values $0$ or $s$ in a random fashion. Under this assumption, the above Langevin equations describe a Brownian particle with two states: active and passive (inactive). We call this model an active-passive Brownian particle. The reason behind such an assumption for $v(t)$ comes from the observation that a typical biological microorganism is not always active and there are instances of its inactivity.
Since the transition from one state to the other occurs randomly, we also assume the following transition relations for the self-propulsion speed \begin{equation} \label{eq: v(t)} v(t): 0\ce{<=>[\alpha][\beta]}s, \end{equation} where $\alpha$ and $\beta$ are transition rates. The propulsion speed $v(t)$ defined by Eq.~(\ref{eq: v(t)}) is a well-known discrete-state Markov process called the random telegraph process. \begin{figure}[t!] \centering \includegraphics[width=0.3\textwidth]{ABP} \caption{Schematic of a particle (small blue disk) at position ${\mathbf{r}}$ which is heading in the ${\mathbf{u}}$ direction specified by the angle $\varphi$ with respect to the $x_1$-axis.} \label{fig: ABP in plane} \end{figure} In order to characterize the dynamics of an active-passive Brownian particle, we determine the first two displacement moments $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle$ and $\langle ({\mathbf{r}}(t)-{\mathbf{r}}_0)^2 \rangle$ where ${\mathbf{r}}_0\equiv {\mathbf{r}}(t=0)$. The second moment is also known as the mean square displacement (MSD). One way to derive these two moments is to derive their components along the coordinate axes. For this purpose, we rewrite the vector translational Langevin equation, Eq.~(\ref{eq: translational Langevin eq}), along the coordinate axes \begin{align} \label{eq: translational Langevin equation 2} \begin{split} \dot{x}_i &= v(t){\mathbf{e}}_i.{\mathbf{u}}(\varphi) + \sqrt{2D_t}\,{\mathbf{e}}_i.{\boldsymbol{\xi}}(t) \quad,\quad i=1,2 \\ \end{split} \end{align} where ${\mathbf{e}}_1=(1,0)$ and ${\mathbf{e}}_2=(0,1)$.
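The dynamics defined by Eqs.~(\ref{eq: translational Langevin eq})--(\ref{eq: v(t)}) can also be simulated directly. The following is a minimal Euler-Maruyama sketch with per-step telegraph switching; the parameter values are illustrative assumptions, not part of the derivation.

```python
import numpy as np

def simulate(T=10.0, dt=1e-3, s=3.0, alpha=0.18, beta=0.18,
             Dt=0.24, Dr=0.18, seed=0):
    """Euler-Maruyama integration of the Langevin equations with a
    telegraph-process speed v(t) switching between 0 and s."""
    rng = np.random.default_rng(seed)
    r = np.zeros(2)
    phi = 0.0           # initial heading along the x-axis
    v = 0.0             # start in the passive state (v0 = 0)
    for _ in range(int(T / dt)):
        # Telegraph switching: 0 -> s with rate alpha, s -> 0 with rate beta.
        if v == 0.0:
            if rng.random() < alpha * dt:
                v = s
        elif rng.random() < beta * dt:
            v = 0.0
        u = np.array([np.cos(phi), np.sin(phi)])
        r += v * u * dt + np.sqrt(2.0 * Dt * dt) * rng.normal(size=2)
        phi += np.sqrt(2.0 * Dr * dt) * rng.normal()
    return r

end = simulate()
assert end.shape == (2,) and np.isfinite(end).all()
```

Averaging such trajectories over many noise realizations reproduces the displacement moments derived below.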
The first two moments for both directions are then readily obtained as follows \begin{equation} \label{eq: first moment of x} \langle x_i(t)-x_i(0) \rangle = \int_0^t dt_1 \langle v(t_1) \rangle \langle {\mathbf{e}}_i.{\mathbf{u}}(\varphi(t_1)) \rangle, \end{equation} \begin{equation} \label{eq: second moment of x} \langle [x_i(t)-x_i(0)]^2 \rangle = 2D_t t + \int_0^t\,dt_1 \int_0^t\,dt_2 \langle v(t_1) v(t_2) \rangle \langle {\mathbf{e}}_i.{\mathbf{u}}(\varphi(t_1)) {\mathbf{e}}_i.{\mathbf{u}}(\varphi(t_2)) \rangle, \end{equation} for $i=1,2$. Note that the prerequisite for writing the integrands of Eqs. (\ref{eq: first moment of x}) and (\ref{eq: second moment of x}) as the product of two averages is the independence of speed $v(t)$ from angle $\varphi(t)$. To compute these moments, we need six averages $\langle v(t) \rangle$, $\langle v(t_1) v(t_2) \rangle$, $\langle \cos\varphi(t)\rangle$, $\langle \sin\varphi(t)\rangle$, $\langle \cos\varphi(t_1) \cos\varphi(t_2) \rangle$, and $\langle \sin\varphi(t_1) \sin\varphi(t_2) \rangle$ (see appendices \ref{sec: Random Telegraph Process} and \ref{sec: Angular Ensemble Averages} for the details of deriving these averages). After substituting the expressions $\langle v(t) \rangle$, $\langle \cos\varphi(t)\rangle$, and $\langle \sin\varphi(t)\rangle$ into Eq.~(\ref{eq: first moment of x}), the mean position is obtained as \begin{equation} \label{eq: average of r(t)} \langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle = s {\mathbf{u}}_0 \left[ \frac{\alpha}{\alpha+\beta} \left( \frac{1-e^{-D_r t}}{D_r} \right) + \left(\delta_{v_0,s}-\frac{\alpha}{\alpha+\beta} \right) \left( \frac{1-e^{-(\alpha+\beta+D_r) t}}{\alpha+\beta+D_r} \right) \right], \end{equation} where ${\mathbf{u}}_0 = (\cos\varphi_0,\sin\varphi_0)$ is the initial direction of motion of the active particle at $t=0$. Notice that different behavioral regimes come from the exponential terms in Eq.~(\ref{eq: average of r(t)}). \begin{figure}[t!] 
\centering \includegraphics[width=0.7\textwidth]{MeanPosition} \caption{Temporal behavior of $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle$ for a typical particle of size $R=1 \,\mu m$ with $D_t=0.24\, \mu m^2/s$, $\alpha=\beta=D_r=0.18\,s^{-1}$, and $s=3 \,\mu m/s$.} \label{fig: MeanPosition} \end{figure} For the short-time regime, $t \ll (\alpha+\beta+D_r)^{-1}$, \begin{equation} \label{eq: M(near zero)} \langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle = s {\mathbf{u}}_0 \left[ \frac{1}{2}\alpha t^2 + \delta_{v_0,s}\left( t-\frac{1}{2} (\alpha+\beta+D_r) t^2 \right) + O(t^3) \right]. \end{equation} This equation shows different temporal behaviors depending on the initial speed $v_0$: for $v_0=0$ we have $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle \sim t^2 $, while for $v_0=s$ we have $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle \sim t$. For the long-time regime, $t \gg D_r^{-1}$, \begin{equation} \label{eq: M(infinity)} \langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle \approx s {\mathbf{u}}_0 \left[ \frac{\alpha}{\alpha+\beta} \frac{1}{D_r} + \left(\delta_{v_0,s}-\frac{\alpha}{\alpha+\beta} \right) \frac{1}{\alpha+\beta+D_r} \right]. \end{equation} This constant value similarly depends on the initial speed $v_0$. To show a typical temporal behavior of $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle$, we consider a disk-shaped particle of radius $R=1\,\mu m$ moving in water at room temperature. Based on the Stokes-Einstein relation, we have $D_t=k_BT/6\pi \eta R$ and $D_r=k_BT/8\pi \eta R^3$, where $k_B$ is the Boltzmann constant and $\eta$ is the viscosity of water, which is about $0.9\,mPa\cdot s$ at room temperature. We assume the active-state speed $s=3\,\mu m/s$ with the transition rates $\alpha=\beta=D_r$. The parameters are chosen in the range of real and synthetic active particles \cite{Volpe_2016}.
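The quoted diffusion constants follow directly from these Stokes-Einstein relations. A quick numerical check, assuming room temperature $T = 293\,K$:

```python
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0              # room temperature, K (assumed)
eta = 0.9e-3           # viscosity of water, Pa*s
R = 1e-6               # particle radius, m

Dt = kB * T / (6 * math.pi * eta * R)       # translational diffusivity, m^2/s
Dr = kB * T / (8 * math.pi * eta * R**3)    # rotational diffusivity, 1/s

assert abs(Dt * 1e12 - 0.24) < 0.01         # approximately 0.24 um^2/s
assert abs(Dr - 0.18) < 0.01                # approximately 0.18 s^-1
```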
Figure~\ref{fig: MeanPosition} shows the different regimes of $\langle {\mathbf{r}}(t)-{\mathbf{r}}_0 \rangle$, once for a particle that starts moving from rest and once for a particle with initial speed $s$. This plot illustrates the expected asymptotic temporal behaviors. In a similar manner, after substituting the expressions $\langle v(t_1) v(t_2) \rangle$, $\langle \cos\varphi(t_1) \cos\varphi(t_2) \rangle$, and $\langle \sin\varphi(t_1) \sin\varphi(t_2) \rangle$ into Eq.~(\ref{eq: second moment of x}), the mean square displacement is obtained as \begin{equation} \label{eq: MSD} \langle ({\mathbf{r}}(t)-{\mathbf{r}}_0)^2 \rangle = A_0 + A_1 t + A_2 e^{-D_r t} + A_3 e^{-(\alpha+\beta)t }+ A_4 e^{-(\alpha+\beta+D_r)t}, \end{equation} where \begin{align} \label{eq: MSD's coefficients} \nonumber A_0 &= \frac{s^2}{(\alpha+\beta)^2} \bigg\{-2\frac{\alpha^2}{D_r^2} -2\frac{\alpha\beta}{(\alpha+\beta+D_r)^2} \\ \nonumber &\quad+ (\delta_{v_0,s}(\alpha+\beta)-\alpha) \bigg[ \frac{\alpha}{\alpha+\beta-D_r}\left(\frac{1}{D_r}-\frac{1}{\alpha+\beta}\right) - \frac{\beta}{\alpha+\beta+D_r}\left(\frac{1}{D_r}-\frac{1}{\alpha+\beta}\right) + \frac{1}{D_r} \bigg] \bigg\}, \\ \nonumber A_1 &= 4D_t + 2 \alpha \frac{s^2}{(\alpha+\beta)^2} \bigg\{ \frac{\alpha}{D_r} + \frac{\beta}{\alpha+\beta+D_r}\bigg\},\\ \nonumber A_2 &= 2 \frac{s^2}{(\alpha+\beta)^2} \frac{\alpha}{D_r} \bigg\{ \frac{\alpha}{D_r} - \frac{\delta_{v_0,s}(\alpha+\beta)-\alpha}{\alpha+\beta-D_r}\bigg\},\\ \nonumber A_3 &= \frac{s^2}{(\alpha+\beta)^2} (\delta_{v_0,s}(\alpha+\beta)-\alpha) \bigg\{ \frac{\alpha}{\alpha+\beta}\frac{1}{\alpha+\beta-D_r} - \frac{\beta}{\alpha+\beta} \frac{1}{D_r} \\ \nonumber &\quad - \frac{\alpha}{D_r} \bigg( \frac{1}{\alpha+\beta} - \frac{1}{\alpha+\beta-D_r} \bigg) -\frac{\beta}{\alpha+\beta+D_r} \bigg( \frac{1}{\alpha+\beta} + \frac{1}{D_r} \bigg)\bigg\}, \\ A_4 &= 2 \frac{s^2}{(\alpha+\beta)^2} \frac{\beta}{\alpha+\beta+D_r} \bigg\{
\frac{\delta_{v_0,s}(\alpha+\beta)-\alpha}{D_r} + \frac{\alpha}{\alpha+\beta+D_r}\bigg\}. \end{align} There are three time scales related to the three different exponents in the MSD. In particular, $D_r^{-1}$ is the characteristic time scale of the rotational diffusion, after which correlations in the direction of motion vanish. Interestingly, the transition rates $(\alpha,\beta)$ appear along with $D_r$ and not with the translational diffusivity $D_t$. The value of $(\alpha,\beta)$ determines the distance between the smallest time scale, $(\alpha+\beta+D_r)^{-1}$, and the largest time scale, $D_r^{-1}$. The MSD shows a linear asymptotic behavior for times far enough from the interval $[(\alpha+\beta+D_r)^{-1},D_r^{-1}]$, while for other times it shows a nonlinear behavior. The form of the nonlinear segment depends on the problem parameters. For instance, a ballistic regime, MSD$\,\sim t^2$, can be seen in the nonlinear segment. To show a typical behavior of the MSD, we consider the same parameters used for Fig.~\ref{fig: MeanPosition}. Figure~\ref{fig: MSD} shows various behavioral regimes of the MSD, once when the particle starts moving from rest and once when it has the initial speed $s$. In many experiments, the translational diffusivity $D_t$ is negligible compared to the self-propelling speed. For example, in wild-type run-and-tumble bacteria, $D_t$ can be safely set to zero~\cite{Cates_2013}. If $D_t=0$, there is no linear regime anymore for short times $t \ll (\alpha+\beta+D_r)^{-1}$; instead, we observe ballistic-diffusion and super-diffusion regimes, see Fig.~\ref{fig: MSD2}. It is worth mentioning that for $v_0=s$ with $\beta = 0$, the results of active Brownian motion are recovered, because the particle starts with a non-zero speed and keeps it, since no transition to the zero-speed state can occur. \begin{figure}[t!]
\centering \includegraphics[width=0.7\textwidth]{MSD} \caption{Temporal behavior of MSD for a typical particle of size $R=1 \,\mu m$ with $D_t=0.24\, \mu m^2/s$, $\alpha=\beta=D_r=0.18\,s^{-1}$, and $s=3 \,\mu m/s$.} \label{fig: MSD} \end{figure} Among all terms in Eq.~(\ref{eq: MSD}), the second term, $A_1t$, is the most important one, since it is related to the effective diffusion coefficient, which is generally defined as \cite{Pottier_2014} \begin{equation} {D}_{eff} = \lim_{t\rightarrow\infty} \boldsymbol{(}\langle {\mathbf{r}}^2(t) \rangle-\langle {\mathbf{r}}(t) \rangle ^2 \boldsymbol{)}/2dt, \end{equation} where $d$ is the spatial dimension. The effective diffusion coefficient for the active-passive Brownian particle is then obtained as \begin{equation} \label{eq: effective diffusion coefficient} D_{eff} = D_t + \frac{s^2}{2D_r} \frac{\alpha}{(\alpha+\beta)^2} \left( \alpha+ \beta \frac{D_r}{\alpha+\beta+D_r}\right). \end{equation} For $\beta=0$, this relation reduces to the effective diffusion coefficient of ABPs with self-propulsion speed $s$, which is $D_{eff} = D_t + s^2/2D_r$. The second term on the right-hand side (RHS) of Eq.~(\ref{eq: effective diffusion coefficient}) has a positive sign. This indicates that the active-passive Brownian particle diffuses more than an ordinary Brownian particle due to its activity in some time intervals. On the other hand, since the second term on the RHS of Eq.~(\ref{eq: effective diffusion coefficient}) is always less than $s^2/2D_r$, the active-passive Brownian particle diffuses slower than an ABP with the same parameters. Therefore, the two-state active-passive Brownian particle model has both the characteristics of ordinary and active Brownian motion. The results of our model can also be linked to the well-known class of active particles called run-and-tumble particles (RTPs). The RTP model describes the motion of motile bacteria such as \emph{Escherichia coli}~\cite{Schnitzer_1993}.
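Eq.~(\ref{eq: effective diffusion coefficient}) and the two bounds just discussed can be checked numerically; the parameter values below are the illustrative ones used in the figures.

```python
def d_eff(Dt, Dr, s, alpha, beta):
    """Effective diffusion coefficient of the active-passive Brownian particle."""
    return Dt + (s**2 / (2.0 * Dr)) * alpha / (alpha + beta)**2 * (
        alpha + beta * Dr / (alpha + beta + Dr))

Dt, Dr, s = 0.24, 0.18, 3.0
D_apb = d_eff(Dt, Dr, s, alpha=0.18, beta=0.18)
D_abp = Dt + s**2 / (2.0 * Dr)     # active Brownian particle (beta = 0 limit)

# beta = 0 recovers the ABP result; otherwise D_t < D_eff < D_t + s^2/(2 D_r).
assert abs(d_eff(Dt, Dr, s, alpha=0.18, beta=0.0) - D_abp) < 1e-12
assert Dt < D_apb < D_abp
```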
A single bacterium in a medium swims at an almost constant speed in an almost straight line until a tumbling of its flagella suddenly occurs and changes its orientation. Tumbling decorrelates the orientation and happens randomly with rate $\alpha_0$. For a run-and-tumble particle with speed $s$ that also undergoes diffusion with coefficients $D_t$ and $D_r$, the effective diffusion coefficient reads~\cite{Cates_2013} \begin{equation} D_{eff} = D_t + \frac{s^2}{2(D_r+\alpha_0)}. \end{equation} Comparing $D_{eff}$ in this equation with that of Eq.~(\ref{eq: effective diffusion coefficient}) shows the possibility of finding values of $\alpha,\beta$ so that both effective diffusivities are equal. In fact, we should solve the following equation \begin{equation} \frac{\alpha}{(\alpha+\beta)^2} \left( \alpha+ \beta \frac{D_r}{\alpha+\beta+D_r}\right) = \frac{D_r}{D_r+\alpha_0}, \end{equation} to find $\alpha$ and $\beta$. Note that all parameters are non-negative. For any given values of $D_r$ and $\alpha_0$, the above equality turns into an implicit function of the form $f(\alpha,\beta)=0$. It can be shown for $\alpha>0$ that $f(\alpha,\beta)$ is continuously differentiable and satisfies $\partial f/\partial \beta \neq 0$. Thus, according to the implicit function theorem~\cite{Krantz_2013}, $\beta$ can be obtained uniquely as $\beta=g(\alpha)$, where $g$ is a continuous real function. Consequently, a run-and-tumble particle can be mapped to an active-passive Brownian particle. Of course, this equivalence holds from the perspective of diffusion at large time scales, and a proof of strict equivalence between the two models is still lacking. \begin{figure}[t!]
\centering \includegraphics[width=0.7\textwidth]{MSD2} \caption{Temporal behavior of MSD when all the parameters are the same as in Fig.~\ref{fig: MSD} except for $D_t$, which is set to zero.} \label{fig: MSD2} \end{figure} \section{Summary} We investigated the motion of an active particle with a stochastic self-propulsion speed $v(t)$. We assumed that $v(t)$ is a two-state process with one passive state and one active state, and called this model an active-passive Brownian particle. The idea behind this choice for $v(t)$ comes from the fact that motile biological entities do not move continuously and their activity (which, e.g., can be due to their swimming) turns on and off randomly. The path statistics of the motion of such a particle are then obtained by calculating the first two moments of displacement. An active-passive Brownian particle exhibits a dual behavior: it diffuses faster than a Brownian particle but slower than an active Brownian particle with the same parameters. Finally, we have shown that run-and-tumble motion can be mapped to active-passive Brownian motion by choosing proper parameters so that both possess the same effective diffusion coefficient. \section*{Appendices}
\section{Introduction} The advent of topological band theory has led to the burgeoning field of ``topological phases of matter'' which manifest exotic properties, such as surface conduction of electronic states, and wave propagation insensitive to backscattering and disorder~\cite{PhysRevLett.61.2015, PhysRevLett.98.106803, RevModPhys.82.3045, ryu2010topological}. In classical structures~\cite{kane2014topological, susstrunk2015observation, hadad2018self, li2018weyl, PhysRevB.95.125104, PhysRevResearch.2.023173, PhysRevB.99.125116, PhysRevLett.103.248101}, enormous efforts have been devoted to topological states that emulate their quantum analogs and enable many pioneering applications~\cite{PhysRevLett.121.094301, smirnova2020nonlinear, li2014granular, nash2015topological, PhysRevLett.120.068003, luo2021observation, PhysRevX.9.021054, 2021arXiv210412778L, 2020arXiv201201639Z}. To date, most of the studies of classical structures have been limited to linear topological band theory, with a few exceptions in the weakly nonlinear regime~\cite{PhysRevB.93.155112, hadad2018self, PhysRevB.100.014302, PhysRevE.97.032209} where perturbation theory is available. In 1D problems, the topological invariant called Berry phase~\cite{PhysRevLett.62.2747} is quantized by symmetries expressed as matrix operators. Due to bulk-boundary correspondence, topologically protected evanescent modes emerge on system boundaries. Although varied topological physics has been explored in linear systems, nonlinear dynamics are more ubiquitous in nature, such as biochemical processes~\cite{van1992stochastic}, fluid dynamics~\cite{RevModPhys.68.215}, and metamaterials~\cite{chen2014nonlinear, PhysRevB.101.104106}, etc. They give rise to rich properties like bifurcation~\cite{RevModPhys.63.991}, instability, solitons~\cite{PhysRevLett.42.1698}, and chaos~\cite{RevModPhys.57.617, RevModPhys.85.869}. The question naturally arises: can topological invariants and phases be extended to nonlinear systems? 
In this paper, we present a systematic study of topological attributes in 1D generalized nonlinear Schr\"{o}dinger equations beyond Kerr nonlinearities~\cite{PhysRevB.93.155112}. The nonlinear parts of the interactions are comparable to the linear ones and perturbation theory breaks down, a situation we designate the ``strongly nonlinear regime''. We limit our considerations to the amplitude range~\cite{narisetti2010perturbation, zaera2018propagation} in which chaos does not occur. Consequently, nonlinear bulk modes~\cite{vakakis2001normal, fronk2017higher} are remarkably distinct from sinusoidal waves (e.g., Figs.\ref{fig1}(b) and \ref{SIfig8}(c)). We develop a proper definition of the Berry phase for nonlinear bulk modes. By adopting a symmetry-based analytic treatment, we demonstrate the quantization of the Berry phase in reflection-symmetric systems, regardless of the availability of a linear analysis. The emergence of nonlinear topological edge modes is associated with a quantized Berry phase that protects them from defects. Finally, instead of localizing exponentially on lattice boundaries, topological edge modes exhibit anomalous behavior, decaying to a plateau governed by the stable fixed points of the nonlinearities. \section{Quantized Berry phase of nonlinear bulk modes} Generalized nonlinear Schr\"{o}dinger equations are widely studied in classical systems such as nonlinear optics~\cite{smirnova2020nonlinear, PhysRevLett.95.013902} and electrical circuits~\cite{hadad2018self}. Their equations of motion are summarized in the general form of Eqs.(\ref{1}) below. We study nonlinear bulk modes, from which we define the Berry phase and demonstrate its quantization in reflection-symmetric models. The considered model is a nonlinear SSH~\cite{PhysRevLett.42.1698} chain composed of $N$ classical dimer fields $\Psi_n = (\Psi_n^{(1)}, \Psi_n^{(2)})^\top$ ($\top$ is the matrix transpose) coupled by nonlinear interactions, as represented pictorially in Fig.\ref{fig1}(a). 
The chain dynamics is governed by the 1D generalized nonlinear Schr\"{o}dinger equations, \begin{eqnarray}\label{1} & {} & \mathrm{i}\partial_t\Psi^{(1)}_n = \epsilon_0\Psi^{(1)}_n+f_1(\Psi^{(1)}_n, \Psi_n^{(2)}) +f_2(\Psi^{(1)}_{n}, \Psi^{(2)}_{n-1}), \nonumber \\ & {} & \mathrm{i}\partial_t\Psi^{(2)}_n = \epsilon_0\Psi^{(2)}_n+f_1(\Psi^{(2)}_n, \Psi^{(1)}_n) +f_2(\Psi^{(2)}_{n}, \Psi^{(1)}_{n+1}) , \end{eqnarray} subject to periodic boundary conditions (PBC), where $\epsilon_0\ge 0$ is the on-site potential, and $f_i(x,y)$ for $i=1$ and $i=2$ stand for intracell and intercell nonlinear couplings, respectively. $f_i(x,y)$ are real-coefficient general polynomials of $x$, $x^*$, $y$, and $y^*$ ($*$ represents complex conjugation), which ensures time-reversal symmetry~\cite{RevModPhys.82.3045}. Given a nonlinear solution $\Psi_n(t)$, time-reversal symmetry demands a partner solution $\Psi_n^*(-t)$, as demonstrated in App.~B1. For systems such as those with Bose-Einstein condensates~\cite{RevModPhys.73.307}, $|\Psi(\vec r, t)|^2$ corresponds to a particle number density and third-order nonlinearities are thus limited to $|\Psi|^2 \Psi$ to enforce particle number conservation; in our case the fields do not correspond to particle densities and more general nonlinearities are thus permitted. In the linear regime, the polynomials are approximated as $f_i(x,y)\approx c_i y$ ($c_{i=1,2}>0$) to have ``gapped'' two-band models when $c_1\neq c_2$. The bulk mode eigenfunctions are sinusoidal in time, and the Berry phase is quantized by reflection symmetry. In the ``strongly nonlinear regime'', where nonlinear interactions become comparable to the linear ones, nonlinear bulk modes are significantly different from sinusoidal waves (e.g., Figs.\ref{fig1}(b) and \ref{SIfig8}(c)), and the frequencies naturally deviate from their linear counterparts. The nonlinearities become increasingly important as the bulk mode amplitude rises. 
Hence, the frequency of a nonlinear bulk mode is controlled both by the wavenumber and the amplitude. We thus define the nonlinear band structure~\cite{hadad2018self, PhysRevB.93.155112} $\omega = \omega(q\in[-\pi,\pi], A)$ as the frequencies of nonlinear bulk modes for a given amplitude $A$. We consider the simple case in which nonlinear bulk modes are always non-degenerate (i.e., different modes at the same wavenumber have different frequencies) unless they reach the topological transition amplitude at which the nonlinear bands merge at the band-touching frequency. Hence, given the amplitude, frequency, and wavenumber, a nonlinear bulk mode is uniquely defined. Extending the notion of gapped linear models, we call the lattice a ``gapped two-band nonlinear model''. In what follows, we define the Berry phase for nonlinear bulk modes of the upper band by adiabatically evolving the wavenumber across the Brillouin zone. The considered nonlinear bulk mode is spatially and temporally periodic. It takes the traveling plane-wave ansatz, \begin{eqnarray}\label{1.2} \Psi_{q} = (\Psi_q^{(1)}(\omega t-qn), \Psi_q^{(2)}(\omega t-qn+\phi_q))^\top, \end{eqnarray} where $\omega$ and $q$ are the frequency and wavenumber, respectively. $\Psi_q^{(j=1,2)}(\theta)$ are $2\pi$-periodic wave components, where the phase conditions are fixed by requiring ${\rm Re\,}\Psi_{q}^{(j)}(\theta=0)=A$, and $A\overset{\rm def}{=}\max({\rm Re\,}\Psi_q^{(j)})$ is the amplitude. This is analogous to the phase condition ${\rm Re\,}\Psi(t=0)= \max({\rm Re\,}\Psi(t))$ adopted for the linear Schr\"{o}dinger equation in order to have the eigenfunctions $\Psi(t) = |\Psi| e^{-\mathrm{i}\epsilon t/\hbar}$. Following this condition, $\phi_q$ in Eq.(\ref{1.2}) characterizes the relative phase between the two wave components. Nonlinear bulk modes are not sinusoidal. They fulfill $\mathrm{i} \partial_t \Psi_{q} = H(\Psi_{q})$, where $H(\Psi_{q})$ is the nonlinear function determined by Eqs.(\ref{1}) and is elaborated in App.~A. 
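The $2\pi$-periodic wave components can be decomposed into Fourier harmonics; a minimal sketch (our own illustration) extracts $\psi_l = (2\pi)^{-1}\int_0^{2\pi} e^{\mathrm{i}l\theta}\,\Psi(\theta)\,d\theta$ with a rectangle rule, which is exact for band-limited periodic samples:

```python
import cmath
import math

def fourier_component(wave, l, n=256):
    """l-th Fourier component psi_l = (2*pi)^(-1) * integral of
    e^{i l theta} wave(theta) d theta over one period, via a rectangle rule
    (exact for harmonics of order well below n)."""
    acc = 0j
    for k in range(n):
        theta = 2 * math.pi * k / n
        acc += cmath.exp(1j * l * theta) * wave(theta)
    return acc / n
```

Applied to a nonlinear bulk mode, the coefficients $\psi_l$ with $|l|>1$ quantify the departure from a sinusoidal wave.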
Given the band index and the amplitude $A$ of a nonlinear bulk mode, we find that $\omega$, $\phi_q$, and the waveform are determined by the wavenumber $q$. We adopt the ansatz in Eq.(\ref{1.2}) for a number of reasons. First, typical studies on weakly nonlinear bulk modes~\cite{PhysRevB.99.125116, fronk2017higher, PhysRevE.97.032209, PhysRevB.101.104106, narisetti2010perturbation, zaera2018propagation, vakakis2001normal} reveal that the dynamics of all high-order harmonics are controlled by the single variable $\theta=\omega t-qn$: $\Psi_q^{(j)} = \sum_l \psi_{l,q}^{(j)} e^{-\mathrm{i}l(\omega t -qn)}$, where $\psi_{l,q}^{(j)} = (2\pi)^{-1}\int_0^{2\pi} e^{\mathrm{i} l\theta} \Psi_q^{(j)}d\theta$ is the $l$-th Fourier component of $\Psi_q^{(j)}$. Second, numerical experiments such as the shooting method (see Figs.\ref{fig1}(b), \ref{SIfig8}(a,c), and Refs.~\cite{renson2016numerical, ha2001nonlinear, peeters2008nonlinear}) manifest non-dispersive, plane-wave-like bulk modes in the strongly nonlinear regime. Finally, it is demonstrated in App.~C3 that the analytic solutions of nonlinear bulk modes at high-symmetry wavenumbers are in perfect agreement with Eq.(\ref{1.2}). We realize the adiabatic evolution of the wavenumber $q(t')$ traversing the Brillouin zone from $q(0)=q$ to $q(t) = q+2\pi$, while the amplitude $A$ remains unchanged during this process. According to the nonlinear extension of the adiabatic theorem~\cite{RevModPhys.82.1959, PhysRevLett.90.170404, PhysRevLett.98.050406, PhysRevA.81.052112}, a system $H(\Psi_{q})$ initially in one of the nonlinear modes $\Psi_{q}$ will stay as an instantaneous nonlinear mode of $H(\Psi_{q(t)})$ throughout this procedure, provided that the nonlinear mode $\Psi_{q}$ is stable~\cite{PhysRevLett.98.050406} within the amplitude range considered in this paper. The stability of nonlinear bulk modes is confirmed in App.~C via the algorithm of self-oscillation~\cite{fronk2017higher, PhysRevB.99.125116, PhysRevE.97.032209}. 
Therefore, the only degree of freedom is the phase of the mode. At time $t$, the mode is $\Psi_{q(t)}(\int_0^t \omega(t', q(t'))dt' -\gamma(t))$, where $\gamma(t)$ defines the phase shift of the nonlinear bulk mode in the adiabatic evolution. The dynamics of $\gamma$ is depicted by $(d\gamma/dt) (\partial\Psi_q/\partial\theta)=(dq/dt) (\partial\Psi_q/\partial q)$. After $q$ traverses the Brillouin zone, the wave function acquires an extra phase $\gamma$, dubbed the Berry phase of nonlinear bulk modes, \begin{eqnarray}\label{1.3} \gamma = \oint_{\rm BZ}dq \frac{\sum_{l\in\mathcal{Z}} \left( l |\psi_{l,q}^{(2)} |^2 \frac{\partial\phi_q}{\partial q}+\mathrm{i}\sum_{j}\psi_{l,q}^{(j)*}\frac{\partial\psi_{l,q}^{(j)}}{\partial q}\right)} {\sum_{l'\in\mathcal{Z}} l' \left(\sum_{j'}|\psi_{l',q}^{(j')}|^2\right)},\quad \end{eqnarray} where $j, j'=1,2$ denote the two wave components; the mathematical derivations are in App.~A. In general, $\gamma$ is \emph{not quantized} unless additional symmetry properties are imposed on the model, which we discuss below. We note that the eigenmodes of linear problems are sinusoidal in time, which reduces Eq.(\ref{1.3}) to the conventional form~\cite{RevModPhys.82.1959} $\gamma = \oint_{\rm BZ}\mathrm{d}q\,\mathrm{i}\langle\Psi_q|\partial_q|\Psi_q\rangle$. We now demonstrate that the Berry phase defined in Eq.(\ref{1.3}) is quantized by reflection symmetry. The model in Eqs.(\ref{1}) respects reflection symmetry, which means that the nonlinear equations of motion are invariant under the reflection transformation, \begin{eqnarray}\label{3.11} (\Psi^{(1)}_n, \Psi^{(2)}_n) \to (\Psi^{(2)}_{-n}, \Psi^{(1)}_{-n}). \end{eqnarray} Given a nonlinear bulk mode $\Psi_q$ in Eq.(\ref{1.2}), the reflection transformation demands a partner solution $\Psi_{-q}' = (\Psi^{(2)}_q(\omega t +qn), \Psi^{(1)}_q(\omega t+qn-\phi_q))^\top$ that also satisfies the model. 
On the other hand, a nonlinear bulk mode of wavenumber $-q$ is by definition denoted as $ \Psi_{-q} = (\Psi_{-q}^{(1)}(\omega t+qn), \Psi_{-q}^{(2)}(\omega t+qn+\phi_{-q}))^\top $. Since there is no degeneracy of nonlinear bulk modes, $\Psi_{-q}'$ and $\Psi_{-q}$ have to be identical, which imposes the constraints \begin{eqnarray}\label{4} \phi_{-q} = -\phi_q \mod 2\pi, \quad{\rm and}\quad \Psi^{(2)}_q = \Psi^{(1)}_{-q}. \end{eqnarray} Thus, the Fourier components of nonlinear bulk modes satisfy $\psi_{l,q}^{(2)} = \psi_{l,-q}^{(1)}$. This relationship, together with Eqs.(\ref{4}), is the key to quantizing the Berry phase in Eq.(\ref{1.3}) (details in App.~B2), \begin{eqnarray}\label{4.12} \gamma = \frac{1}{2}\oint_{\rm BZ} \frac{d\phi_q}{dq} dq =\phi_\pi-\phi_0 = 0{\rm \,\, or\,\,}\pi\mod 2\pi, \end{eqnarray} where $\phi_{q=0,\pi}$ are the relative phases of the upper-band nonlinear modes at the high-symmetry points. They are determined by comparing the frequencies $\omega(\phi_q=0)$ and $\omega(\phi_q=\pi)$ for $q=0$ and $\pi$. $\gamma=\pi$ if $\omega(\phi_0=0)$ and $\omega(\phi_\pi = \pi)$ belong to the same band, whereas $\gamma=0$ if they are in different bands. Interestingly, $\gamma$ undergoes a topological transition at the critical amplitude $A=A_c$ if the frequencies merge, $\omega(\phi_\pi = 0, A_c) = \omega(\phi_\pi=\pi, A_c)$. This transition is exemplified by the minimal model of a nonlinear topological lattice in Sec.\uppercase\expandafter{\romannumeral3}. It is worth emphasizing that despite all the discussions of nonlinear Schr\"{o}dinger equations and the quantization of the Berry phase, the model is purely classical in the sense of $\hbar$ being zero. \begin{figure}[htbp] \includegraphics[scale=0.52]{fig1v8.png} \caption{The minimal model of the nonlinear SSH chain. (a), schematic illustration of the lattice subjected to PBC. The unit cell is enclosed by the black dashed box. Red and blue bonds represent intracell and intercell couplings. 
(b), a nonlinear bulk mode computed by the \emph{shooting method}~\cite{renson2016numerical, ha2001nonlinear, peeters2008nonlinear} with amplitude $A=1.5$ and wavenumber $q=4\pi/5$. Red and blue curves are the wave functions of the $n=1$ and $3$ sites, respectively. The orange curve shows the noticeable difference between the nonlinear mode and a sinusoidal function. (c), frequency profile of the nonlinear bulk mode in (b). (d), nonlinear band structures $\omega = \omega(q,A)$ plotted for bulk mode amplitudes from $A=0$ to $1.1$. The red curves touch at the topological transition amplitude $A_c = 0.8944$ at $\omega=\epsilon_0=1.5$. The inset elaborates the gap-closing transition amplitude $A_c$ at which band inversion occurs. }\label{fig1} \end{figure} Having established the quantized Berry phase, we now explore additional properties for vanishing on-site potential, $\epsilon_0=0$. The model's linear limit respects charge-conjugation symmetry~\cite{ryu2010topological, kane2014topological}, which demands that the states appear in $\pm\omega$ pairs and that the topological mode has zero energy. To have $\pm\omega$ pairs of modes in the nonlinear problem, we require the parity of the interactions to satisfy $f_i(-x, y) =-f_i(x, -y)= f_i(x,y)$. Consequently, the system is invariant under the transformation $(\Psi^{(1)}_n(\omega t), \Psi^{(2)}_n(\omega t)) \to (-\Psi^{(1)}_{n}(-\omega t), \Psi^{(2)}_{n}(-\omega t))$. Given a nonlinear mode $\Psi_{\omega}$ defined in Eq.(\ref{1.2}), this transformation demands a partner solution $\Psi_{-\omega} = (-\Psi^{(1)}_q(-\omega t -qn), \Psi^{(2)}_q(-\omega t-qn+\phi_q))^\top$. Therefore, nonlinear modes always appear in $\pm\omega$ pairs. Similar to charge-conjugation-symmetric models in linear systems~\cite{ryu2010topological}, the frequencies of nonlinear topological modes are zero, which is illustrated in the following minimal model. 
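In the linear limit, Eq.(\ref{1.3}) reduces to the Zak phase of the Bloch Hamiltonian with off-diagonal element $h(q)=c_1+c_2 e^{-\mathrm{i}q}$, and the quantization can be checked directly. The sketch below (our own illustrative code, valid only under that linear-limit assumption) evaluates the discretized Wilson loop for the upper band:

```python
import cmath
import math

def zak_phase(c1, c2, nq=400):
    """Discretized Berry (Zak) phase of the upper SSH band via a Wilson loop.

    Upper-band Bloch vector: u(q) = (h/|h|, 1)/sqrt(2) with h(q) = c1 + c2 e^{-iq}.
    The loop integral i * oint <u_q| d_q |u_q> dq is evaluated as
    -Im log of the product of neighboring overlaps, which is gauge invariant.
    """
    qs = [2 * math.pi * k / nq for k in range(nq + 1)]
    us = []
    for q in qs:
        h = c1 + c2 * cmath.exp(-1j * q)
        us.append((h / abs(h), 1.0))
    prod = 1.0 + 0j
    for k in range(nq):
        a, b = us[k], us[k + 1]
        # normalized overlap <u_k|u_{k+1}> (each vector has squared norm 2)
        prod *= (a[0].conjugate() * b[0] + a[1] * b[1]) / 2
    return (-cmath.phase(prod)) % (2 * math.pi)
```

With the intercell coupling dominant ($c_1<c_2$) the loop returns $\gamma=\pi$; with the couplings flipped it returns $\gamma=0$, matching the quantization in Eq.(\ref{4.12}).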
\section{Topological transition and bulk-boundary correspondence in the minimal model} We now clarify the nonlinear extension of bulk-boundary correspondence~\cite{PhysRevB.100.014302, PhysRevA.97.043602} by demonstrating topological edge modes in a minimal model that respects time-reversal symmetry, where the couplings are specified as \begin{eqnarray}\label{9} f_i (x,y) = c_i y + d_i [({\rm Re\,}y)^3+\mathrm{i} ({\rm Im\,}y)^3], \end{eqnarray} with $c_i, d_i>0$ for $i= 1,2$. This interaction offers numerically stable nonlinear bulk and topological edge modes and can be realized in passive photonic and active electrical-circuit metamaterials (Sec.\uppercase\expandafter{\romannumeral4} and App.~E). We are interested in attributes unique to nonlinear systems, in particular the topological phase transition induced by bulk mode amplitudes. Thus, the parameters satisfy $c_1<c_2$ and $d_1>d_2$ ($c_1>c_2$ and $d_1<d_2$) to induce a topological-to-non-topological phase transition (non-topological-to-topological transition) as amplitudes increase. We abbreviate them as ``T-to-N'' and ``N-to-T'' transitions, and they are converted into one another by simply flipping the intracell and intercell couplings. In the remainder of this paper, a semi-infinite lattice subjected to the open boundary condition (OBC) is always considered whenever we refer to topological edge modes. We first study the case $c_1<c_2$ and $d_1>d_2$, in which a T-to-N transition occurs. Fig.\ref{fig1}(d) numerically illustrates the nonlinear band structures and the topological transition by considering $\epsilon_0=1.5$, $c_1=0.25$, $c_2=0.37$, $d_1=0.22$, and $d_2=0.02$. Given that the Berry phase $\gamma(A=0)=\pi$, the lattice is topologically nontrivial in the linear limit. As amplitudes rise, the topological invariant $\gamma(A<A_c)=\pi$ cannot change until it becomes ill-defined when the nonlinear bandgap closes at the transition amplitude $A_c$. 
The bandgap reopens above $A_c$, allowing the well-defined Berry phase to take the trivial value $\gamma(A>A_c)=0$, as depicted in the inset of Fig.\ref{fig1}(d). $A_c$ is numerically computed by solving the bandgap-closing equation $\omega(\phi_\pi = 0, A_c)=\omega(\phi_\pi=\pi, A_c)$. We propose a convenient approximation~\cite{detroux2014harmonic} $f(\Psi_{n'}^{(j')}, \Psi_n^{(j)})\approx (c_i+\frac{3}{4}d_i A^2)\Psi_n^{(j)}$ to estimate the transition amplitude $A_c\approx \sqrt{-4(c_2-c_1)/3(d_2-d_1)}$. The good agreement between this approximation and the numerical solutions is shown in App.~C. We highlight that $A_c^2\max(d_1,d_2) /\max(c_1, c_2) \approx 0.5$, which demonstrates that nonlinear and linear interactions are comparable in the strongly nonlinear regime. \widetext \begin{figure}[htbp] \includegraphics[scale=0.55]{fig2v7.png} \caption{Nonlinear edge excitations of the model subjected to the T-to-N transition, where the parameters satisfy $c_1<c_2$ and $d_1>d_2$. (a-d) and (e-h) show lattice boundary responses in the small-amplitude topological regime and the large-amplitude non-topological regime, respectively. The magnitudes of the Gaussian tone bursts are $S=7\times 10^{-2}$ in (a) and $S=56\times 10^{-2}$ in (e), respectively. (b) and (f), spatial-temporal profiles of $|{\rm Re\,}\Psi_n^{(1)}(t)|$ for all $45$ sites, where $|{\rm Re\,}\Psi_n^{(1)}(t)|$ denotes the strength of the lattice excitations. (c) and (g), spatial profiles of the frequency spectra of the responding modes, where the time domain for performing Fourier analysis is from $250T$ to $500T$. White dashed lines mark the top and bottom of the linear bandgap. In (g), modes in the bandgap are triggered by energy absorption~\cite{fronk2017higher} from nonlinear bulk modes. (d) and (h), red and blue curves for the spatial profiles of the $\omega=\epsilon_0$ wave component of the excitations. 
The analytic prediction of the topological mode $\psi_n^{(1)}(\epsilon_0)$ is depicted by the black dashed curve in (d). }\label{fig2} \end{figure} \endwidetext Bulk-boundary correspondence has been extended to weakly nonlinear Newtonian~\cite{PhysRevB.100.014302} and Schr\"{o}dinger~\cite{PhysRevA.97.043602} systems by showing topological boundary modes guaranteed by a topologically non-trivial Berry phase. In the strongly nonlinear problem, we utilize analytic approximations and numerical experiments to doubly confirm this correspondence by identifying nonlinear topological edge modes. In the former, the lattice is composed of $N=45$ unit cells with OBCs on both ends to mimic a semi-infinite lattice, and the parameters are carried over from Fig.\ref{fig1}. The topological mode and its frequency are denoted as $\Psi_n=(\Psi_n^{(1)}, \Psi_n^{(2)})^\top$ and $\omega_{\rm T}$, respectively. Analogous to the linear SSH chain~\cite{PhysRevLett.42.1698}, the analytic scheme is to assume $\Psi_{n}^{(1)}\gg\Psi_{n}^{(2)}$, which is numerically verified in Fig.\ref{fig2}(d). We make one further approximation by truncating the equations of motion to the fundamental harmonic. Therefore, the nonlinear topological edge mode is approximated as $\Psi_n \approx (\psi_{1,n}^{(1)}, 0)^\top e^{-\mathrm{i}\epsilon_0 t}$, where $\psi_{1,n}^{(1)}$ are the fundamental harmonic components. By doing so, we find $\omega_{\rm T} = \epsilon_0$, and \begin{eqnarray}\label{Recur} \left(c_1+\frac{3}{4}d_1|\psi_{1,n}^{(1)}|^2\right) |\psi_{1,n}^{(1)}|=\left(c_2+\frac{3}{4}d_2|\psi_{1,n+1}^{(1)}|^2\right) |\psi_{1,n+1}^{(1)}|. \nonumber \\ \end{eqnarray} From Eq.(\ref{Recur}), the semi-infinite lattice hosts topological evanescent modes when $|\Psi_{1}^{(1)}|<\sqrt{-4(c_2-c_1)/3(d_2-d_1)}\approx A_c$, whereas no such mode exists for $|\Psi_{1}^{(1)}|>\sqrt{-4(c_2-c_1)/3(d_2-d_1)}\approx A_c$. 
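The recursion in Eq.(\ref{Recur}) can be iterated numerically from a given boundary amplitude. The sketch below (our own illustrative code, using the parameter values of Fig.\ref{fig1}) solves each step by bisection, exploiting that the left-hand side of the update is monotone in the unknown amplitude; amplitudes below $A_c$ decay geometrically to the stable fixed point $0$, while in the N-to-T case (couplings swapped) amplitudes above $A_c$ relax to the plateau at $A_c$:

```python
def next_amp(a, c1, c2, d1, d2, tol=1e-12):
    """Solve (c2 + 3/4 d2 x^2) x = (c1 + 3/4 d1 a^2) a for x >= 0.

    This is one step of the recursion relation; the left-hand side is
    monotone in x, so bisection applies.
    """
    target = (c1 + 0.75 * d1 * a * a) * a
    lo, hi = 0.0, max(a, 1.0) * 10
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (c2 + 0.75 * d2 * mid * mid) * mid < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def edge_profile(a0, c1, c2, d1, d2, nsites=50):
    """Iterate the recursion from the boundary amplitude a0 into the bulk."""
    amps = [a0]
    for _ in range(nsites - 1):
        amps.append(next_amp(amps[-1], c1, c2, d1, d2))
    return amps
```

The nonzero fixed point of the update is exactly $A_c=\sqrt{4(c_2-c_1)/(3(d_1-d_2))}$, the estimated transition amplitude.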
In App.~D, the frequency and the analytic expression are evaluated in the weakly nonlinear regime, where they are perfectly in line with the method of multiple scales~\cite{fronk2017higher, narisetti2010perturbation, zaera2018propagation, SNEE2019100487}. The numerical verification is accomplished by applying a Gaussian-profile signal $S_n =\delta_{n1} S e^{-\mathrm{i}\omega_{\rm ext} t-(t-t_0)^2/\tau^2} (1, 0)^\top$ on the first site, where the carrier frequency $\omega_{\rm ext} = \epsilon_0=1.5$, $T=2\pi/\omega_{\rm ext}$, $\tau=3T$ controls the Gaussian spread, and $t_0 = 15T$ denotes the trigger time. Figs.\ref{fig2}(b) and (f) together verify bulk-boundary correspondence~\cite{PhysRevB.100.014302, PhysRevA.97.043602} by identifying the presence and absence of topological boundary excitations below and above the critical amplitude $A_c$, respectively. In Fig.\ref{fig2}(d), the flattened part near the lattice boundary is a manifestation of the nonlinearities. One may find it unusual that the frequency of the topological mode, $\omega_{\rm T}=\epsilon_0$, is independent of the amplitude, although this result is in agreement with Refs.~\cite{hadad2018self, PhysRevB.93.155112, PhysRevB.100.014302} in the weakly nonlinear regime. Here we propose an explanation for this intriguing result. Because the evanescent mode fades to zero in the bulk, the ``tail'' of this mode eventually enters the small-amplitude regime where nonlinearities are negligible and linear analysis becomes effective. Linear topological theory~\cite{PhysRevLett.42.1698} demands the tail of the mode to oscillate at $\omega_{\rm T}=\epsilon_0$, which in turn requires the frequency of the nonlinear topological mode to be independent of the amplitude. Topological protection manifests itself in multiple aspects. As visualized in Fig.\ref{fig1}(d), the frequencies of topological modes stay in the bandgap and are distinct from those of nonlinear bulk modes. 
The appearance and disappearance of these modes are captured by the topological invariant, which cannot change continuously under variation of the system parameters. Lastly, topological modes are insensitive to defects, which is numerically verified in App.~D. When $\epsilon_0=0$, the model manifests nonlinear bulk modes in $\pm\omega$ pairs. Topologically protected nonlinear boundary modes do not oscillate in time, in contrast to the $\epsilon_0 \ne 0$ systems. Thus, we obtain exact solutions of nonlinear topological modes via the recursion relation, $f_1(\Psi_n^{(1)},\Psi^{(2)}_n) +f_2(\Psi_n^{(1)},\Psi^{(2)}_{n-1}) = f_1(\Psi_n^{(2)},\Psi^{(1)}_n) +f_2(\Psi_n^{(2)},\Psi^{(1)}_{n+1}) =0$. This is the nonlinear analog of charge-conjugation-symmetric systems. \widetext \begin{figure}[htbp] \includegraphics[scale=0.55]{fig6v9.png} \caption{Nonlinear boundary responses of the lattice subjected to the N-to-T transition, where the parameters satisfy $c_1>c_2$ and $d_1<d_2$. (a-d) and (e-h) exhibit lattice boundary excitations in the small-amplitude non-topological regime and the large-amplitude topological regime, respectively. The magnitudes of the Gaussian signals are $S=0.1$ in (a) and $S=2.5$ in (e), respectively. (b) and (f), spatial-temporal profiles of $|{\rm Re\,}\Psi_n^{(1)}(t)|$ for $45$ sites. (c) and (g), frequency spectra of the lattice excitations for $45$ sites. Fourier analysis is executed from $250T$ to $500T$. White dashed lines encircle the linear bandgap. (d) and (h), red and blue curves for the spatial distributions of the $\omega=\epsilon_0$ mode component of the lattice excitations. The analytic result of the anomalous topological modes $\psi_n^{(1)}(\epsilon_0)$ is captured by the black dashed curve in (h). }\label{fig3} \end{figure} \endwidetext In the second case, $c_1>c_2$ and $d_1<d_2$, an N-to-T (non-topological-to-topological) transition occurs as the amplitude rises. 
We exemplify boundary excitations in Fig.\ref{fig3} by letting $\epsilon_0=8$, $c_1=0.37$, $c_2=0.25$, $d_1=0.02$, and $d_2=0.22$. A Gaussian signal is applied on the first site of the lattice, where the carrier frequency $\omega_{\rm ext} = \epsilon_0=8$, $T=2\pi/\omega_{\rm ext}$, the Gaussian spread $\tau = 10T$, and the trigger time $t_0 = 25T$. In the small-amplitude regime, we consider a chain of $N=45$ unit cells. As shown in Fig.\ref{fig3}(b), the lattice is free of topological modes for $|\Psi_{1}^{(1)}|<A_c=0.8944$. In the large-amplitude regime, the lattice is constructed from $N=120$ unit cells. Anomalous topological edge modes emerge when $|\Psi_{1}^{(1)}|>A_c$ (see Figs.\ref{fig3}(f,h)). In contrast to conventional topological modes that shrink to zero over space, $\Psi_n^{(1)}$ decays to the plateau $A_c$ governed by the stable fixed point of Eq.(\ref{Recur}), whereas $\Psi_n^{(2)}$ increases to $A_c$ by absorbing energy~\cite{fronk2017higher} from $\Psi_n^{(1)}$. Theoretical analysis predicts that the plateau should extend to infinity, but in practice it reaches only about site $60$, limited by the finite lifetime of topological modes due to energy conversion into bulk modes, as elaborated in Fig.\ref{SIfig11}. Despite the strong nonlinearities ($|\Psi_1^{(1)}|/A_c\sim 10$, and $|\Psi_1^{(1)}|^2\max(d_1,d_2) /\max(c_1,c_2) \sim 10$), this mode is stable over a finite lifetime of more than 400 periods. This model thus serves as a prototype combining long lifetime, high energy storage, and long-distance transmission of topological modes with efficient frequency conversion from Gaussian inputs to monochromatic signals. Although T-to-N and N-to-T transitions are converted into one another simply by the choice of unit cell, the topological modes behave qualitatively differently (Figs.\ref{fig2}(d) and \ref{fig3}(h)) due to the distinction in the fixed points of Eq.(\ref{Recur}). 
The modes converge to the stable fixed point $0$ in the T-to-N transition ($A_c$ in the N-to-T transition), but this fixed point becomes unstable in the N-to-T transition (T-to-N transition). \section{Proposals for experimental implementations} Having established nonlinear topological band theory, it is natural to ask whether any realistic physical systems enjoy these unconventional properties. Classical passive and active structures are proposed here to realize the minimal model of Eq.(\ref{9}), as detailed in App.~E. \emph{Topological photonics~\cite{d2008ultraslow, Christodoulides:88}, passive system}: Our theoretical prototype can readily be tested in a 1D array of optical waveguides. Each unit cell is composed of two waveguides that guide electro-magnetic modes along the axial direction, and the permittivity and permeability are nonlinearly modulated by the fields. Hence, adjacent electro-magnetic fields are coupled nonlinearly. It can be shown that the propagation of electro-magnetic fields along the axial $z$-direction is described by a four-field extension of the generalized nonlinear Schr\"{o}dinger equations, where the $z$-coordinate takes the place of the time-like differential variable~\cite{d2008ultraslow, Christodoulides:88}. Consequently, this photonic system realizes the minimal model of Eq.(\ref{9}). \emph{Topoelectrical circuit~\cite{hadad2018self}, active system}: The second promising direction is to construct a ladder of cascaded diatomic unit cells composed of two LCR resonators and two capacitors $C_{j=1,2}\ll C$. The inductances are connected to external power sources which are nonlinear functions of $V_n^{(j=1,2)}$. The equations of motion of the unit-cell voltages $V_n^{(j=1,2)}$ are captured by Eqs.(\ref{1}) and Eq.(\ref{9}), and nonlinear topological attributes can be studied here. \begin{figure}[htbp] \includegraphics[scale=0.3]{fig4_arxiv_v13.png} \caption{Experimental proposals for passive and active nonlinear topological metamaterials. (a) 1D array of nonlinear optical waveguides. 
The nearest-neighbor electro-magnetic fields are coupled nonlinearly. (b) The unit cell of the nonlinear active topoelectrical circuit, where the inductances are connected to external alternating power sources nonlinearly controlled by the voltage fields $V_n^{(j=1,2)}$. }\label{expProposal} \end{figure} \section{Conclusions} In this paper, we extend topological band theory to strongly nonlinear Schr\"{o}dinger equations beyond Kerr-type nonlinearities. A proper definition of the Berry phase is constructed for nonlinear bulk modes, and its quantization is demonstrated in reflection-symmetric models. The topological invariant undergoes transitions induced by the mode amplitude. These results can be extended to higher-dimensional systems with arbitrarily complex unit cells, but we leave the full proof for future work. The appearance (disappearance) of topological modes is associated with a change in the Berry phase to its topological (non-topological) value. As amplitudes increase, T-to-N (topological-to-non-topological) and N-to-T (non-topological-to-topological) transitions take place for different choices of unit cells. Anomalous topological modes decrease away from lattice boundaries to a plateau controlled by the stable fixed point of the nonlinearities. A rich variety of problems can be studied following this paper, such as the nonlinear extension of topological chiral edge modes in 2D systems~\cite{PhysRevLett.61.2015}, and higher-order topological states~\cite{benalcazar2017quantized}. Experimental characterizations of photonic, acoustic, and electrical metamaterials with built-in nonlinearities can also be studied in the future. \begin{acknowledgements} D. Z. would like to thank Xueda Wen, Junyi Zhang, Feng Li, and Biao Wu for insightful discussions. This work is supported by the National Key R$\&$D Program of China (Grant No. 2020YFA0308800), the NSF of China (Grants Nos. 11734003, 12061131002), and the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. 
XDB30000000). \end{acknowledgements}
\section{Introduction} \IEEEPARstart{M}{edical} imaging is an effective and widely used diagnostic tool in modern medicine; common modalities include ultrasound imaging, magnetic resonance imaging (MRI), X-ray and computed tomography (CT). Among them, ultrasound imaging is distinguished by its low cost, lack of radiation, and ability to record continuously and dynamically, which makes it superior to the others. In actual ultrasound diagnosis, doctors usually judge whether there is a lesion by observing the shape, the degree of blood flow, and the contour smoothness of the region of interest in the ultrasound images. This indicates that high-resolution (HR) ultrasound images are conducive to improving the accuracy of medical diagnosis. However, due to the limitation of acoustic diffraction in medical equipment, it is hard to obtain HR ultrasonic data. Thus, in terms of improving the resolution of ultrasound data, image super-resolution (SR) turns out to be a feasible approach, which is of great importance for medical clinical diagnosis based on visual perception \cite{hudson2015dynamic,morin2013semi}. In the last couple of years, deep learning networks have been applied to a variety of medical image processing tasks, including CT image segmentation \cite{skourt2018lung}, MRI image deblurring \cite{lim2020deblurring} and ultrasound image SR \cite{choi2018deep,lu2018unsupervised}. Umehara \emph{et al.} \cite{umehara2018application} were among the first to apply deep neural networks to medical images: they improved the resolution of CT images with the pioneering image SR model, SRCNN \cite{dong2016image}. Recent works on bio-medical image segmentation and ultrasound image SR \cite{Olaf2015Unet,van2019deep} utilized the classical ``U-net'' structure to develop task-specific deep models. Since there is no fully connected layer, the overall structure of U-Net is made up of many convolution and deconvolution layers. 
Here the convolution layers act as an encoder while the deconvolution layers act as a decoder. However, the pooling operations and the single-scale structure of such U-Net models may fail to make full use of multi-level image details and multi-scope context information. A recent work \cite{kim2016accurate} suggested that better SR results can be obtained with a deeper and wider network of good generalization performance. In practice, this principle is not always applicable to the medical imaging field, because numerous medical LR-HR sample pairs are usually unavailable for supervised training. Therefore, how to cope with the lack of supervision samples becomes one of the keys to improving medical image SR performance. Departing from plain CNNs, Ledig \emph{et al.} \cite{ledig2017photo} introduced adversarial learning for image generation to produce photo-realistic SR results, forming a new network structure named SRGAN (SR generative adversarial network). The SRGAN model was later applied by Choi \emph{et al.} \cite{choi2018deep} to high-speed ultrasound image SR. Moreover, Yochai \emph{et al.} \cite{blau2018perception} recently found that although GANs can achieve better reconstruction effects, perceptual quality and distortion metrics tend to be at odds with each other. The aforementioned deep SR methods all rely on supervised learning with numerous LR-HR sample pairs and are not suitable for unsupervised or self-supervised scenarios. Moreover, these methods do not consider the consistency from LR to SR and then back to LR. Thus, in this work, motivated by zero-shot natural image SR (ZSSR) \cite{ZSSR} and CycleGAN \cite{zhu2017unpaired}, we present a novel self-supervised CycleGAN framework for ultrasound image SR, whose structure differs substantially from those of ZSSR \cite{ZSSR} and CycleGAN \cite{zhu2017unpaired}.
In our approach, for LR to SR, we first construct a deep multi-scale encoder-decoder \cite{liu2020exploring} to super-resolve the LR input. Then, for the path back to LR, we use a specially designed CNN with random noise input to degrade the generated SR image. For HR to LR and then back to SR, the same two structures are used again in reverse order. Owing to the cycle consistency structure, our proposed model greatly reduces the artifacts in the SR results compared to ZSSR \cite{ZSSR}. Moreover, our model integrates a multi-level feature loss when super-resolving ultrasound images, so as to better balance visual similarity to real data against reconstruction accuracy. Extensive experimental comparisons on different ultrasound datasets show that the proposed approach not only yields good subjective visual quality but also achieves better objective evaluation metrics. Note that this work is a completely new development of our previous conference paper \cite{liu2020exploring}, with two substantial differences: a self-supervised learning mechanism replaces the previous supervised one, and a CycleGAN structure with a richer variety of image losses, including the cycle consistency loss, replaces the previous PatchGAN model. Overall, our current work substantially improves on the conference version and obtains much better results. To the best of our knowledge, few works address deep SR for single ultrasound images, let alone explore self-supervision and cycle adversarial learning in the absence of LR-HR training pairs to achieve accurate reconstruction with perception consistency.
The contributions of this work can be summarized as follows: \begin{itemize} \item By introducing a self-supervision mechanism with cycle adversarial learning, we put forward, for the first time, a self-supervised CycleGAN framework for single ultrasound image SR, which leads to accurate reconstruction with perception consistency. \item Thanks to its self-supervised character, the proposed approach can adapt to ideal as well as non-ideal ultrasound images. \item We adopt both an LR cycle loss and an HR cycle loss, together with other multi-level image losses, to jointly supervise ultrasound image SR reconstruction. The experimental results indicate that this comprehensive loss recovers multi-level, degradation-consistent details of ultrasound images. \item We evaluate our approach on different public ultrasound datasets and provide competitive results against other state-of-the-art methods. We also provide an ablation study of the proposed approach, which may be helpful for future research on ultrasound image SR. \end{itemize} \section{Related Works} \label{sec:RW} \subsection{Natural Image SR} Although image SR is a classic low-level vision task, it remains an active research topic, and many new methods have emerged in recent years, especially deep-learning-based ones. Since the advent of SRCNN, the first deep image SR network presented by Dong \emph{et al.} \cite{dong2016image}, many early deep SR models have followed the pipeline of feature extraction, nonlinear mapping and image reconstruction. However, such shallow neural networks have limited ability to capture multi-level features of the input images. Observing that edge priors are conducive to image SR, Liang \emph{et al.} \cite{liang2016incorporating} first used Sobel edges together with LR images to train a deep SR model, although the resulting performance improvement was not obvious.
Later, based on a structural analogy with multi-resolution wavelet analysis, Liu \emph{et al.} \cite{liu2019single} proposed a multi-scale deep encoder-decoder model guided by a phase congruency edge map for single image SR and reported convincing comparative results. In addition, Wang \emph{et al.} \cite{wang2019multi} proposed a multi-memory residual block to progressively extract and retain inter-frame temporal correlations for video SR. Ma \emph{et al.} \cite{ma2020image} recently proposed a dense discriminative network composed of several aggregation modules for image SR. Applying an adversarial learning strategy to improve reconstruction quality, Ledig \emph{et al.} \cite{ledig2017photo} built on the GAN framework to present SRGAN for image SR. In this model, the generator uses several residual blocks for efficient SR reconstruction, while the discriminator forces the generator to produce SR outputs close to the real HR labels. Considering that batch normalization (BN) may weaken the diversity of features, Lim \emph{et al.} \cite{lim2017enhanced} presented the EDSR model by removing the BN layers from the original deep residual blocks; they also removed the ReLU layer after the summation of different paths so as to keep the paths flexible. Recently, Park \emph{et al.} \cite{park2018srfeat} presented a GAN-like model, SRFeat, which uses two discriminators to distinguish not only the generated images but also hierarchical features in the feature domain. This additional discrimination network forces the generator to attend to feature approximation while generating SR images. Completely different from the above supervised methods, Shocher \emph{et al.} proposed a zero-shot image SR approach (ZSSR) \cite{ZSSR} that works in an unsupervised way.
The ZSSR approach does not need HR label data prepared in advance and can, in theory, adapt to both known and unknown imaging conditions. However, since it exploits the pattern similarity of the image itself, it easily produces artifacts when applied to non-natural images such as medical ones. \vspace{-2ex} \subsection{Ultrasound Image SR} In contrast to the vigorous development of natural image processing, medical image SR has not attracted comparable attention. Zhao \emph{et al.} \cite{zhao2016single} implemented ultrasound image SR by deriving an analytical solution based on $\ell_2$ norm regularization. Diamantis \emph{et al.} \cite{diamantis2018super} focused on axial imaging; they developed a location-based approach to bring SR to axial ultrasound imaging and recognized that the accuracy of ultrasonic axial imaging is closely related to the image-based localization precision of single scatterers. Umehara \emph{et al.} \cite{umehara2018application} conjectured that the SRCNN approach might also suit medical images; they applied it to chest CT image SR, and the results supported their viewpoint. Similarly to ZSSR \cite{ZSSR}, Lu \emph{et al.} \cite{lu2018unsupervised} proposed to exploit multi-scale contextual features extracted from the test image itself to train an image-specific network, which they describe as unsupervised, and used dilated convolution and residual learning to improve convergence and accuracy. More recently, the U-Net \cite{Olaf2015Unet} deep network was applied by Van Sloun \emph{et al.} \cite{van2019deep} to super-resolve vascular images from high-density contrast-enhanced ultrasound data. To enhance detail reconstruction in SR, Choi \emph{et al.} \cite{choi2018deep} slightly modified the SRGAN \cite{ledig2017photo} model to improve the transverse resolution of ultrasound images.
Although the performance of GAN-based approaches is generally good, a recent study \cite{zhu2019make} has shown that the generated SR images can easily contain unrealistic artificial details. This phenomenon has also been observed in our experiments (see Fig. \ref{fig5} and Fig. \ref{fig6}). In addition, Liu \emph{et al.} \cite{Liu2019MedicalIS} proposed dense connections with a blended attention structure for MRI image SR. Although they reported quite good experimental results, their method does not consider the image generation consistency of HR-to-LR and LR-to-HR. \section{Methodology} \subsection{Self-supervised ultrasound image SR} Unlike other low-level vision tasks, image SR seeks a mapping function that takes an LR image in LR image space to a corresponding HR image in HR image space. Because images come from diverse sources, this mapping is usually complex and variable, so how accurately the mapping between high and low resolution can be obtained has a great impact on SR performance. For natural images, this mapping can be learned from a large number of pre-assembled LR-HR training pairs through supervised learning. For ultrasound medical images the situation is very different: ultrasound images usually come from clinical diagnosis and, due to privacy, it is difficult to obtain a large number of training pairs for supervised learning. Even if such samples were available, the varying imaging conditions and acquisition scenes would make it difficult to learn an accurate mapping from ultrasound LR images to HR ones in a supervised way. However, owing to the internal characteristics of ultrasound images, their edge and texture variations are relatively small compared with natural images, and their content patterns are strongly repetitive.
Therefore, it is possible to exploit the relationship between local regions and the global image to construct training sample pairs and to learn the resolution mapping at a specific down-sampling scale through self-supervised learning; at this point, a general lightweight CNN can meet the requirements. Multi-scale analysis naturally excels at capturing the relationship between local regions and the global image, so a multi-scale deep SR network should, in theory, further improve the performance of this self-supervised learning method (described in detail in the following sections). Our self-supervised ultrasound image SR approach proceeds as follows: first, the test ultrasound image is augmented, and the augmented images are called ``HR fathers''; these ``HR fathers'' are then down-sampled at a specified reduction factor to obtain the ``LR sons''; next, a CycleGAN SR network is constructed, which uses a multi-scale structure as the generator and accounts for the perception consistency from LR to HR and back to LR (introduced in detail below); the LR-HR pairs obtained before are then used for network training; finally, after the CycleGAN is well trained, the test ultrasound image is fed to the generator as the LR input to obtain its SR reconstruction. The augmentation operations on the test ultrasound image include a series of down-samplings with different reduction factors, as well as 4 rotations ($0^{\circ}$, $90^{\circ}$, $180^{\circ}$, $270^{\circ}$) and their mirror reflections in the vertical and horizontal directions. In addition, for robustness, we can also train several SR networks for certain intermediate down-sampling factors.
The SR images generated by these networks, together with their down-scaled LR versions, can also be added to the target training set as additional LR-HR example pairs. \subsection{Multi-scale Generator} Based on wavelet multi-resolution analysis (MRA) theory \cite{mallat1999wavelet} and motivated by the work \cite{liu2019single}, we use a deep structure to emulate wavelet multi-resolution analysis and construct a multi-scale deep network for ultrasound image SR. To accommodate any image size, our multi-scale model adopts a fully convolutional structure composed entirely of encoders (convolution layers) and decoders (deconvolution layers). The detailed structure of our multi-scale generator is shown in Fig. \ref{fig1}. Note that this figure makes clear that the input LR image is treated as the low-frequency component of the multi-scale analysis of the HR image. Table \ref{table1} gives the detailed parameters of our three-scale deep network. The objective of multi-scale encoder-decoder learning is to find the optimized network parameters $\Theta_j$ of the mapping function $F_j$ in each scale-$j$ branch so that the final reconstruction approximates the original HR image under a given measure (e.g., the $\ell_2$ norm). This may be formulated as: \begin{equation} \tilde{\Theta} = \arg\min_{\Theta} \Big\| \mathrm{conv}\Big( \mathop{\mathrm{concat}}\limits_{j} \big(\cdots,\, y + F_j(y, \Theta_j),\, \cdots \big) \Big) - f \Big\|_2, \label{eq1} \end{equation} where $f$ and $y$ are the HR image and the LR input, respectively, $j$ denotes a specific scale, $\mathrm{concat}(\cdot)$ denotes the concatenation operation and $\mathrm{conv}(\cdot)$ represents the final output convolution in Fig. \ref{fig1}.
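As a toy numerical illustration of Eq. (\ref{eq1}) (not the actual network), the following sketch abstracts each scale branch $F_j$ as an arbitrary detail map, the concatenation as a stacking over scales, and the final convolution as a weighted fusion; all names and values here are hypothetical:

```python
import numpy as np

# Toy sketch of Eq. (1): per-scale reconstructions y + F_j(y) are stacked
# ("concat") and fused by a weighted sum standing in for the final conv.
def multiscale_reconstruction(y, details, fuse_weights):
    recons = [y + d for d in details]           # y + F_j(y, Theta_j) per scale
    stacked = np.stack(recons, axis=0)          # concatenation over scales j
    return np.tensordot(fuse_weights, stacked, axes=1)  # fusion "convolution"

def eq1_objective(y, details, fuse_weights, f):
    # l2 distance between the fused reconstruction and the HR image f
    return float(np.linalg.norm(multiscale_reconstruction(y, details, fuse_weights) - f))

rng = np.random.default_rng(0)
y = rng.random((8, 8))                 # toy LR input
f = y + 0.1                            # toy HR target: LR plus a flat detail
details = [np.full_like(y, 0.1)] * 3   # each branch happens to recover it
w = np.array([1/3, 1/3, 1/3])          # equal fusion weights
print(round(eq1_objective(y, details, w, f), 6))  # → 0.0
```

Training then amounts to minimizing this objective over the branch parameters, which in the real model are the encoder-decoder weights listed in Table \ref{table1}.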
\begin{figure}[t] \centering \includegraphics[width=3.2in, height=1.4in]{fig3.pdf} \caption{The structure of our multi-scale generator.} \vspace{-3ex} \label{fig1} \end{figure} \begin{table}[h] \renewcommand{\arraystretch}{1.5} \vspace{-2ex} \caption{The specific parameters of the three-scale generator} \label{table1} \centering \begin{threeparttable}[b] \begin{tabular}{c|c|c} \hline \bfseries scale1 & \bfseries scale2 & \bfseries scale3 \\ \hline (conv3-32)$\times$2 & (conv3-32)$\times$2 & (conv3-32)$\times$2 \\ & (conv3-32)$\times$2 & (conv3-32)$\times$2 \\ & & (conv3-64)$\times$2 \\ & & (deconv3-64)$\times$2 \\ & (deconv3-32)$\times$2 & (deconv3-32)$\times$2 \\ (deconv3-32)$\times$2 & (deconv3-32)$\times$2 & (deconv3-32)$\times$2 \\ \hline \end{tabular} \end{threeparttable} \vspace{-2ex} \end{table} In the multi-scale network, the LR image $I_{LR}$ is first fed into three encoder-decoder streams to recover image details at different scales. Since LR images can be treated as the low-frequency components of HR ones (see Eq. (\ref{eq1})), the reconstructed image at each scale is obtained by adding the corresponding detail image directly to the LR input. Finally, the super-resolved ultrasound image $I_{SR}$ is obtained by concatenating and fusing the reconstructed images of the three scales. This multi-scale deep encoder-decoder structure serves as the generator of our CycleGAN-based perception consistency SR framework for ultrasound images, described at length below. \subsection{CycleGAN based Perception Consistency SR} Different from a traditional GAN \cite{goodfellow2014generative}, which contains one generator and one discriminator, CycleGAN \cite{zhu2017unpaired} employs two generators and two discriminators to distinguish the generated images from real ones, equipped with a cycle consistency loss for reliable image generation.
For medical image SR, cycle consistency is particularly significant, because redundant or artificial details introduced during image generation would seriously damage the accuracy of disease diagnosis. This fact is an important motivation for using the CycleGAN framework for ultrasound image SR. Since the original task of CycleGAN is image translation, it is easy to find a great deal of natural images (paired or unpaired) for training, whereas for ultrasound image SR, obtaining numerous paired LR and HR ultrasound images is quite difficult. Therefore, we need to build not only an LR-to-HR generation model but also an HR-to-LR one. Although the multi-scale deep encoder-decoder network described above can serve as the LR-to-HR generator, the HR-to-LR generator still needs to be carefully designed and trained. As discussed in \cite{liu2020fast}, HR-to-LR generation is precisely the complex image degradation process, which may involve multiple degradation factors such as noise, blurring and resolution reduction. Inspired by the work \cite{bulat2018learn}, we introduce Gaussian noise alongside the image input and construct a fully convolutional network (FCN) to degrade the high-resolution ultrasound image to an LR one. The detailed structure of our HR-to-LR ultrasound image generation network is shown in Fig. \ref{fig2}. \begin{figure}[t] \vspace{-1ex} \centering \includegraphics[width=3.3in, height=1.2in]{nca_fig2htl.pdf} \caption{The detailed structure of our HR-to-LR ultrasound image generation network.} \vspace{-2ex} \label{fig2} \end{figure} Note that although the actual output of the HR-to-LR network is $1/4$ the size of the input image, for the convenience of later computing the HR consistency loss, we up-sample the output image by a factor of $4$.
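To make the data flow of this degradation path concrete, here is a minimal numpy sketch (our own simplification, not the trained network): average pooling stands in for the learned FCN, Gaussian noise conditions the input, and nearest-neighbour repetition mimics the $4\times$ up-sampling of the output used for the consistency loss:

```python
import numpy as np

def high_to_low(hr, noise_std=0.05, factor=4, rng=None):
    # Noise-conditioned degradation: add a Gaussian noise map, then 4x
    # average pooling stands in for the learned HR-to-LR FCN.
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = hr + rng.normal(0.0, noise_std, hr.shape)
    h, w = hr.shape
    return noisy.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample_nearest(lr, factor=4):
    # 4x nearest-neighbour upsampling applied before the HR-side loss
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

hr = np.ones((16, 16))      # toy "HR" image
lr = high_to_low(hr)
print(lr.shape, upsample_nearest(lr).shape)  # → (4, 4) (16, 16)
```

The sketch only fixes the tensor shapes of the path; in the actual model the pooling is replaced by the learned convolutional degradation of Fig. \ref{fig2}.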
Our perception consistency ultrasound image SR model contains two sets of GANs, each of which uses two generators (one for LR and the other for HR) and one patch discriminator. The two generators are the multi-scale encoder-decoder and the HR degradation network described above, while the discriminator consists of an input layer and four convolutional blocks, each containing a convolutional layer, a ReLU layer and a batch normalization layer. The detailed structure of the discriminator is shown in Fig. \ref{fig3}. The input to the discriminator is either the pair of the produced SR image and the HR label, or the pair of the generated LR image and the LR label, all of size $64\times64$. The output of the discriminator is an array $X$, where each $X_{ij}$ signifies whether patch $ij$ in the image is real or fake. \begin{figure}[h] \vspace{-2ex} \centering \includegraphics[width=3.5in, height=1.2in]{cycle_discriminator.pdf} \caption{The detailed structure of the discriminator.} \vspace{-1ex} \label{fig3} \end{figure} Our overall model can be viewed as a CycleGAN framework with two parts: an LR cycle consistency GAN and an HR cycle consistency GAN. In addition, the cycle consistency loss is combined with image measurement losses at multiple levels. The architecture of our proposed model is illustrated in Fig. \ref{fig4}; the detailed structures of the low-to-high generator, the high-to-low generator and the discriminator are given in Fig. \ref{fig1}, Fig. \ref{fig2} and Fig. \ref{fig3}, respectively. \begin{figure*}[tb] \centering \includegraphics[width=6.4in, height=2.9in]{cycle_zssr.pdf} \caption{The proposed perception consistency ultrasound image SR model. The low-to-high generator (blue box) is the multi-scale encoder-decoder in Fig. \ref{fig1} and the high-to-low generator (green box) is the HR-to-LR degradation network in Fig. \ref{fig2}; the discriminator structure is shown in Fig.
\ref{fig3}.} \vspace{-3ex} \label{fig4} \end{figure*} \vspace{-2ex} \subsection{Loss Function} To ensure perceptual consistency before and after ultrasound image generation, we first introduce cycle losses for the generated cycle-HR and cycle-LR images. Since recent works \cite{ledig2017photo,isola2017image} argued that using an MSE loss in deep image generation training tends to produce over-smooth results, we use the $\ell_1$ loss instead of the MSE ($\ell_2$) loss as the metric of the pixel differences between a generated image and its ground truth. Besides the $\ell_1$ pixel proximity loss, we incorporate three further loss functions to supervise SR or degradation so that the result approximates the ground truth at multiple levels of detail. Given a set of LR and HR image pairs $\{x_i,y_i\}_{i=1}^N$, and writing the low-to-high mapping as $G: LR \rightarrow HR$ and the high-to-low one as $F: HR \rightarrow LR$, the $\ell_1$ pixel-wise loss for both mappings is: \begin{equation} \mathcal{L}_{pixel}=\frac{1}{N}\sum_{i=1}^N\big(||G(x_i)-y_i||_1+||F(y_i)-x_i||_1\big). \label{eq4} \end{equation} Since the perceptual loss is more beneficial for retaining image features, we also employ it when acquiring super-resolved or degraded ultrasound images. Specifically, we use a feature extraction function $\phi(\cdot)$ to transform $y_i$ and $x_i$ into a common feature space, in which the distance between the two features can easily be calculated.
The perceptual (feature) loss can then be expressed as: \begin{equation} \begin{split} \mathcal{L}_{percp}=\frac{1}{N}\sum_{i=1}^N\big(||\phi(G(x_i))-\phi(y_i)||_2\\+||\phi(F(y_i))-\phi(x_i)||_2\big), \label{eq5} \end{split} \end{equation} where the mapping $\phi(\cdot)$ used in practice is the output of the 12th convolution layer of the VGG \cite{simonyan2014very} network. We also apply the adversarial loss \cite{goodfellow2014generative} to both the low-to-high and the high-to-low generation networks. For the low-to-high generator $G: LR \rightarrow HR$ and its discriminator $D_{hr}$, the adversarial loss for the generator is: \begin{equation} \mathcal{L}_{g\_adv}=\frac{1}{N}\sum_{i=1}^N -\log(D_{hr}(G(x_i))). \label{eq7} \end{equation} Similarly, the adversarial loss $\mathcal{L}_{f\_adv}$ for the high-to-low generator $F: HR \rightarrow LR$ and its discriminator $D_{lr}$ can be computed, so the total adversarial loss for the two generation mappings is: \begin{equation} \mathcal{L}_{adv}=\frac{1}{N}\sum_{i=1}^N \big(-\log(D_{hr}(G(x_i)))-\log(D_{lr}(F(y_i)))\big). \label{eq8} \end{equation} Although the adversarial loss forces the distribution of the generated SR images to approximate that of the target HR data, it does not guarantee that the learned mapping takes an individual input $x_i$ to the expected target output $y_i$. In view of this, we introduce LR-to-HR-to-LR and HR-to-LR-to-HR cycle losses to ensure perception consistency for accurate ultrasound image reconstruction.
Thus, the total cycle consistency loss is formulated as: \begin{equation} \mathcal{L}_{cyc}=\frac{1}{N}\sum_{i=1}^N\big(||F(G(x_i))-x_i||_1+||G(F(y_i))-y_i||_1\big). \label{eq9} \end{equation} Finally, the total loss of our overall model is the weighted sum of all the above losses: \begin{equation} \mathcal{L}_{total}=\alpha\mathcal{L}_{pixel}+\beta\mathcal{L}_{percp}+\gamma\mathcal{L}_{adv}+\eta\mathcal{L}_{cyc}, \label{eq10} \end{equation} where $\alpha$, $\beta$, $\gamma$ and $\eta$ are weighting coefficients controlling the relative importance of the different losses. In Section \uppercase\expandafter{\romannumeral4}, an ablation study of some of these losses shows that the cycle structure and the consistency loss play an important role in arriving at high-quality SR results. \section{Experimental results and analysis} \subsection{Datasets} Two publicly available ultrasound image datasets, CCA-US\footnotemark[1] and US-CASE\footnotemark[2], are used in this work for the SR experiments and comparisons. The CCA-US data were acquired from ten volunteers of different ages and body weights (mean age: $27.5 \pm 3.5$ years; mean weight: $76.5 \pm 9.7$ kg) with a Sonix OP ultrasound scanner, and comprise 84 B-mode ultrasound images of the common carotid artery (CCA). US-CASE is a free ultrasound library offered by SonoSkills and Hitachi Medical Systems Europe, containing 125 ultrasound images of the liver, heart, mediastinum, etc. The well-known PSNR [dB], IFC \cite{sheikh2005information} and SSIM \cite{wang2004image} metrics are used to evaluate the objective quality of the super-resolved ultrasound images. Our code for this work can be found at \url{https://github.com/hengliusky/UltraSound_SSSR}.
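For reference, the PSNR metric above can be computed as in the following sketch (the standard definition, assuming 8-bit images with peak value 255; SSIM and IFC require their reference implementations and are omitted here):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 16.0            # constant error of 16 gray levels, so MSE = 256
print(round(psnr(ref, noisy), 2))  # → 24.05
```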
\footnotetext[1]{http://splab.cz/en/download/databaze/ultrasound} \footnotetext[2]{http://www.ultrasoundcases.info/Cases-Home.aspx} \vspace{-4ex} \subsection{Training Details} The original LR input can be any ultrasound image from the two datasets mentioned above. As described in Section \uppercase\expandafter{\romannumeral3}, we obtain ``HR fathers'' and ``LR sons'' from the image itself. We follow the ZSSR \cite{ZSSR} strategy of training with randomly augmented crops instead of full images; specifically, we take fixed-size random crops from each father-son pair, with the crop size typically set to $64 \times 64$ pixels. During training, we use the total loss described in Eq. (\ref{eq10}) with the Adam optimizer, starting from a learning rate of 0.001. The weighting coefficients $\alpha$, $\beta$, $\gamma$ and $\eta$ of the loss function are empirically set to 5, 0.1, 5 and 0.3, respectively. We also adopt the learning-rate adjustment policy of ZSSR \cite{ZSSR} to gradually reduce the learning rate, and stop training when it reaches 0.000001. To stabilize training, we follow the strategy of \cite{shrivastava2017learning} and update the discriminator with historically generated images to avoid model oscillation. Finally, we combine self-ensemble and back-projection techniques to obtain the corrected median image as the final super-resolved result. \vspace{-2ex} \subsection{Experimental Comparisons and Analysis} Different ultrasound image SR methods are comparatively evaluated through 4$\times$ SR experiments. Note that the code and datasets of most current medical image SR methods are not released. For example, Choi \emph{et al.} \cite{choi2018deep} and Lu \emph{et al.} \cite{lu2018unsupervised} respectively employ a slightly modified SRGAN \cite{ledig2017photo} and a convolutional network with residual connections for ultrasound image SR.
But they do not release their code or ultrasound datasets. Fortunately, many recent natural image SR approaches, including methods identical or very similar to those of Choi \emph{et al.} and Lu \emph{et al.}, such as SRCNN \cite{dong2016image}, SRGAN \cite{ledig2017photo}, EDSR \cite{lim2017enhanced} (a convolutional network with residual connections) and SRFeat \cite{park2018srfeat}, are publicly available. We therefore believe that the comparison results correctly reflect the ultrasound image SR performance of the corresponding methods. In addition, for fairness, we use the two public datasets, CCA-US and US-CASE, for the comparisons. \begin{table}[b] \renewcommand{\arraystretch}{1.5} \vspace{-2ex} \caption{A comparison of PSNR and IFC scores under the test ultrasound dataset from US-CASE and CCA-US. The bold numbers indicate the best results.} \label{table2} \renewcommand\tabcolsep{8.0pt} \centering \vspace{-2ex} \begin{threeparttable}[t] \begin{tabular}{l|l l l l} \hline \multirow{2}{*}{\textbf{DataSets}} & \multicolumn{2}{c}{US-CASE} & \multicolumn{2}{c}{CCA-US} \\ \cline{2-5} & PSNR & IFC & PSNR & IFC \\ \hline Bicubic & 20.913 & 1.213 & 26.300 & 1.055 \\ SRCNN\cite{dong2016image} & 20.673 & 0.972 & 25.636 & 1.009 \\ SRGAN\cite{ledig2017photo} & 25.331 & 1.127 & 29.069 & 1.102 \\ \hline Our proposed & \textbf{30.404}& \textbf{2.670} & \textbf{34.900} & \textbf{2.317} \\ \hline \end{tabular} \end{threeparttable} \vspace{-2ex} \end{table} We provide quantitative evaluation comparisons in Table \ref{table2} and Table \ref{table3}, and visual comparison examples in Fig. \ref{fig5}, Fig. \ref{fig6} and Fig. \ref{fig7}. Moreover, in terms of running efficiency, we compare our approach with the other methods in inference speed, model capacity and data processing throughput; the results are shown in Table \ref{table4}. \begin{table}[t] \renewcommand{\arraystretch}{1.5} \caption{A comparison of PSNR and SSIM scores under two datasets.
The bold numbers indicate the best results.} \label{table3} \renewcommand\tabcolsep{8.0pt} \centering \vspace{-1ex} \begin{threeparttable}[htpb] \begin{tabular}{l|l l l l} \hline \multirow{2}{*}{\textbf{DataSets}} & \multicolumn{2}{c}{US-CASE} & \multicolumn{2}{c}{CCA-US} \\ \cline{2-5} & PSNR & SSIM & PSNR & SSIM \\ \hline EDSR\cite{lim2017enhanced} & 25.290 & 0.740 & 27.432 & 0.804 \\ SRFeat\cite{park2018srfeat} & 25.602 & 0.721 & 28.864 & 0.808 \\ ZSSR\cite{ZSSR} & \textbf{32.670} & 0.872 & 34.882 & 0.918 \\ \hline Our proposed & 32.491 & \textbf{0.876} & \textbf{35.222} & \textbf{0.919} \\ \hline \end{tabular} \end{threeparttable} \vspace{-3ex} \end{table} \begin{table*}[t] \renewcommand{\arraystretch}{1.5} \caption{Evaluation of the running efficiency of all methods. The bold numbers indicate the best results.} \label{table4} \centering \vspace{-1ex} \begin{threeparttable}[b] \begin{tabular}{l|l l l l l l} \hline & \textbf{SRCNN\cite{dong2016image}} & \textbf{SRGAN\cite{ledig2017photo}} & \textbf{SRFeat\cite{park2018srfeat}}& \textbf{EDSR\cite{lim2017enhanced}}& \textbf{ZSSR\cite{ZSSR}} &\textbf{Our proposed} \\ \hline Platform & MATLAB & TensorFlow & TensorFlow & TensorFlow & PyTorch & PyTorch \\ Test Image Size & 600*488 & 150*112 & 150*112 & 150*112 & 150*112 & 600*448\\ Inference Time & 188ms & 53ms & 136ms & \textbf{49ms} & 169ms & 176ms\\ Throughput (Kb/ms) & 4.189 & 0.929 & 0.362 & 1.00 & 0.290 &\textbf{4.474}\\ Model Capacity & \textbf{270KB} & 9.1MB & 37.2MB & 9.1MB & 3.6MB & 1.1MB \\ \hline \end{tabular} \end{threeparttable} \vspace{-2ex} \end{table*} \begin{figure*}[t] \captionsetup[subfigure]{justification=centering} \centering \vspace{-1ex} \subfloat[Ground Truth] {\includegraphics[height=1.0in,width=1.3in]{fig5-top-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.0in,width=1.0in]{fig5-top-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRCNN: \protect\\ 34.31/1.60]
{\includegraphics[height=1.0in,width=1.0in]{fig5-top-srcnn-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRGAN: \protect\\ 20.83/1.61] {\includegraphics[height=1.0in,width=1.0in]{fig5-top-srgan-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR: \protect\\ 35.31/2.25] {\includegraphics[height=1.0in,width=1.0in]{fig5-top-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[The proposed method: \protect\\ 36.43/2.34] {\includegraphics[height=1.0in,width=1.0in]{fig5-top-cycle_zssr-eps-converted-to.pdf}} \\ \vspace{-2ex} \subfloat[Ground Truth] {\includegraphics[height=1.0in,width=1.3in]{fig5-bottom-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.0in,width=1.0in]{fig5-bottom-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRCNN: \protect\\ 28.70/1.34] {\includegraphics[height=1.0in,width=1.0in]{fig5-bottom-srcnn-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRGAN: \protect\\ 31.16/1.87] {\includegraphics[height=1.0in,width=1.0in]{fig5-bottom-srgan-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR: \protect\\ 30.17/2.312] {\includegraphics[height=1.0in,width=1.0in]{fig5-bottom-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[The proposed method: \protect\\ 33.11/2.58] {\includegraphics[height=1.0in,width=1.0in]{fig5-bottom-cycle_zssr-eps-converted-to.pdf}} \vspace{-2ex} \caption{The comparisons of visual effects and PSNR/IFC metrics for 4$\times$ super-resolved ultrasound images under CCA-US dataset by (b,h) Ground truth (c,i) SRCNN, (d,j) SRGAN, (e,k) ZSSR and (f,l) the proposed method. 
The green arrows and circles highlight the differences between the images} \label{fig5} \vspace{-2ex} \end{figure*} \begin{figure*}[!h] \captionsetup[subfigure]{justification=centering} \centering \subfloat[Ground Truth] {\includegraphics[height=1.0in,width=1.3in]{fig6-top-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.0in,width=1.0in]{fig6-top-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRCNN \protect\\ 26.89/1.90] {\includegraphics[height=1.0in,width=1.0in]{fig6-top-srcnn-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRGAN \protect\\ 24.59/1.87] {\includegraphics[height=1.0in,width=1.0in]{fig6-top-srgan-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR \protect\\ 30.77/2.93] {\includegraphics[height=1.0in,width=1.0in]{fig6-top-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[The proposed method: \protect\\ 30.65/2.89] {\includegraphics[height=1.0in,width=1.0in]{fig6-top-cycle_zssr-eps-converted-to.pdf}} \\ \vspace{-1ex} \subfloat[Ground Truth] {\includegraphics[height=1.0in,width=1.3in]{fig6-bottom-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.0in,width=1.0in]{fig6-bottom-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRCNN: \protect\\ 29.70/1.96] {\includegraphics[height=1.0in,width=1.0in]{fig6-bottom-srcnn-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[SRGAN: \protect\\ 25.98/2.02] {\includegraphics[height=1.0in,width=1.0in]{fig6-bottom-srgan-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR: \protect\\ 32.18/3.06] {\includegraphics[height=1.0in,width=1.0in]{fig6-bottom-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[The proposed method: \protect\\ 32.42/2.98] {\includegraphics[height=1.0in,width=1.0in]{fig6-bottom-cycle_zssr-eps-converted-to.pdf}} \hspace{0.1ex} \vspace{-2ex} \caption{The comparisons of visual effects and PSNR/IFC metrics for 4$\times$ super-resolved ultrasound images under US-CASE dataset by (b,h) Ground truth (c,i) SRCNN, (d,j) 
SRGAN, (e,k) ZSSR and (f,l) the proposed method. The green arrows and circles highlight the differences between the images.} \label{fig6} \vspace{-2ex} \end{figure*} Table \ref{table2} lists the PSNR and IFC comparison results on a test set of 20 ultrasound images randomly selected from the two datasets mentioned above (10 images from each dataset). Compared with SRCNN \cite{dong2016image} and SRGAN \cite{ledig2017photo}, our method achieves the best results on the test images from both the CCA-US and US-CASE datasets. Table \ref{table3} lists the PSNR and SSIM comparison results on the whole US-CASE and CCA-US datasets. We can see that our proposed method attains the best or second-best PSNR on the two ultrasound datasets compared with EDSR \cite{lim2017enhanced}, SRFeat \cite{park2018srfeat} and ZSSR \cite{ZSSR}. As for SSIM, our method always achieves the best results. On the whole, the performance of our method is better than that of the others. In addition, the results in the two tables suggest that self-supervised learning methods (including ours and ZSSR) may be more promising for the SR task than supervised ones.
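For reference, the PSNR values compared above follow the standard definition $10\log_{10}(\mathrm{MAX}^{2}/\mathrm{MSE})$. A minimal sketch of the computation (ours, assuming 8-bit images; the toy arrays below are hypothetical and not data from the experiments):

```python
# Standard PSNR between two 8-bit images (illustrative sketch with toy data).
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
deg = ref.copy()
deg[0, 0] = 16                     # one corrupted pixel -> MSE = 16**2 / 16 = 16
print(round(psnr(ref, deg), 2))    # -> 36.09
```

SSIM and IFC are perceptual metrics with more involved definitions; library implementations (e.g. in image-processing toolkits) are typically used in practice.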
\begin{figure*}[t] \captionsetup[subfigure]{justification=centering} \centering \subfloat[Ground Truth] {\includegraphics[height=1.4in,width=1.8in]{fig7-top-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.4in,width=1.4in]{fig7-top-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR: \protect\\ 32.17/0.87] {\includegraphics[height=1.4in,width=1.4in]{fig7-top-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[Ours: \protect\\ 32.32/0.88] {\includegraphics[height=1.4in,width=1.4in]{fig7-top-cycle_zssr-eps-converted-to.pdf}} \vspace{-2ex} \subfloat[Ground Truth] {\includegraphics[height=1.4in,width=1.8in]{fig7-bottom-gt-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[HR] {\includegraphics[height=1.4in,width=1.4in]{fig7-bottom-gt_crop-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[ZSSR: \protect\\ 32.20/0.87] {\includegraphics[height=1.4in,width=1.4in]{fig7-bottom-zssr-eps-converted-to.pdf}} \hspace{0.1ex} \subfloat[Ours: \protect\\ 31.56/0.86] {\includegraphics[height=1.4in,width=1.4in]{fig7-bottom-cycle_zssr-eps-converted-to.pdf}} \hspace{0.1ex} \caption{The comparisons of visual effects and PSNR/SSIM metrics with ZSSR. (a) from CCA-US dataset, (e) from US-CASE dataset. The green arrows and circles highlight the differences between the images.} \label{fig7} \vspace{-2ex} \end{figure*} According to Fig. \ref{fig5} and Fig. \ref{fig6}, it is clear that, compared with the other methods, our method achieves better SR visual effects. In particular, carefully observing the local details of the SR images in Fig. \ref{fig5} and Fig. \ref{fig6}, we can see that the results of our method are more accurate than the others and do not introduce artifacts or noise. In addition, Fig. \ref{fig7} shows further visual comparisons with ZSSR \cite{ZSSR}. From the figure, it is easy to see that ZSSR tends to introduce unwanted artifacts. For example, in Fig.
\ref{fig7} (a,b), artificial pore structures often appear in the images generated by ZSSR. These artifacts might cause misdiagnosis by clinicians. Our CycleGAN framework can effectively alleviate this issue and achieve relatively accurate visual effects, although its PSNR/SSIM may decrease slightly. Furthermore, according to Table \ref{table4}, the throughput of our proposed model is the best among all compared SR methods, meaning that our model can concurrently process a larger amount of image data than the others. Moreover, the table shows that the proposed model is lightweight, since its capacity is only slightly higher than that of the simplest model, SRCNN. In general, our proposed method has good visual effects and favorable objective evaluation indicators, which is of great value for ultrasound visual diagnosis in the medical industry. \subsection{Ablation Study} In order to analyze the impact of the components of the loss function (Eq. \ref{eq10}) on ultrasound image SR performance, we develop several variants of our model: (1) GAN alone, where the other losses including the cycle loss are removed, (2) cycle loss alone, (3) GAN with only the forward cycle (LR-SR-LR), (4) GAN with only the backward cycle (HR-LR-SR) and (5) GAN with the full cycle. These variants are trained under the same conditions as our original model. The results are presented in Table \ref{table5}. \begin{table}[t] \renewcommand{\arraystretch}{1.5} \caption{Ablation study on CCA-US dataset.
The best results are indicated in bold.} \label{table5} \renewcommand\tabcolsep{10.0pt} \centering \vspace{-1ex} \begin{threeparttable}[] \begin{tabular}{l|l l} \hline \multirow{2}{*}{\textbf{DataSets}} & \multicolumn{2}{c}{CCA-US} \\ \cline{2-3} & PSNR &IFC \\ \hline GAN alone & 33.968 & 2.203 \\ Cycle alone &{34.721} & 2.298 \\ GAN + forward cycle & 34.282 & 2.221\\ GAN + backward cycle & 34.519 & 2.262 \\ GAN + cycle & {34.839} & 2.303 \\ \hline Ours & \textbf{34.900} & \textbf{2.317}\\ \hline \end{tabular} \end{threeparttable} \vspace{-2ex} \end{table} From Table \ref{table5}, it is obvious that using only the GAN (adversarial) loss substantially reduces performance, while quite good performance can be achieved using only the cycle loss. Meanwhile, the forward cycle loss and the backward cycle loss both contribute to the performance, and combining the cycle loss with the GAN loss achieves better results. Finally, all four proposed losses affect the final reconstruction performance. Thus, we can conclude that the cycle structure is extremely beneficial to ultrasound image SR. \section{Conclusion} In this work, we propose a novel perception-consistency ultrasound image SR approach for the medical industry, based on a self-supervised CycleGAN framework. Firstly, we analyze the multi-scale pattern characteristics between local parts and the whole image for ultrasound data and propose a self-supervised learning strategy to obtain LR-HR pairs when numerous ultrasound training images are unavailable. Then we introduce a CycleGAN framework with a synthetic imaging loss, comprising the pixel-wise loss, the perceptual feature loss, the adversarial loss and, most importantly, the cycle consistency loss, to guarantee that the image as a whole and its details keep perception consistency not only in the LR-to-SR-to-LR cycle but also in the HR-to-LR-to-SR one.
The evaluation results on two ultrasound datasets clearly demonstrate that the proposed self-supervised CycleGAN approach achieves the best performance not only in objective quantitative results and running efficiency but also in visual effects. In the meantime, it should be noted that ultrasound SR may place more emphasis on reconstruction accuracy than natural-image SR does. Therefore, our near-future work will center on extending the proposed approach to natural image tasks, such as background subtraction \cite{sakkos2018end} and image defogging \cite{liu2017large}, and on analyzing the relationship between reconstruction accuracy and visual effects. \section*{Acknowledgment} We thank all the students in our Lab of AHUT for their help in discussions. This work was supported in part by the National Natural Science Foundation of China under Grant No. 61971004, the Natural Science Foundation of Anhui Province under Grant No. 2008085MF190 and Grant No. 1808085QF210, and also by the Key Project of Natural Science of Anhui Provincial Department of Education under Grant No. KJ2019A0083. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{./IEEEtran} \input{./bare_jrnl.bbl} \end{document}
\section{Introduction} Since Hawking radiation can be emitted from a black hole \cite{hawking_1974_black_hole_explosions}, it is widely believed that a black hole carries the Bekenstein-Hawking (BH) entropy $A/(4G)$, where $A$ is the area of the horizon and $G$ is the gravitational constant. The origin of the BH entropy has been explored for a long time from various points of view. In field theory, it has been suggested that the BH entropy derives from quantum entanglement \cite{Bombelli_1986_entropy_for_BH,Srednicki_1993_entropy_area}. It has also been pointed out that entanglement may be the origin of the BH entropy in quantum gravity \cite{Susskind_1994,Fiola_1994,Emparan_2006,Azeyanagi_2008}. Moreover, in string theory, special D-branes correspond to extremal black holes in the classical regime. The logarithm of the number of BPS states of the branes approaches the corresponding BH entropy \cite{Strominger_1996}. Recently, soft hair at the horizon \cite{Hotta_2001,Hotta_2002,Hawking_2016} has attracted much attention as a possible origin of the BH entropy \cite{Afshar_2016,Mirbabayi_2016,Hotta_2016,Mao_2017,Ammon_2017,Bousso_2017,Hotta_2018,Chu_2018,Haco_2018,Raposo_2019,Grumiller_2020,Averin_2020}. In a near-horizon region of a black hole, asymptotic symmetries emerge and generate microstates which contribute to the BH entropy in the standard way of statistical mechanics. In 2001, supertranslation and superrotation with non-vanishing charges were discovered as horizon asymptotic symmetries of a Schwarzschild black hole in $(1+3)$-dimensional general relativity \cite{Hotta_2001,Hotta_2002}. Supertranslation is a time translation depending on the position on the horizon, while superrotation is a 2-dimensional general coordinate transformation on the horizon. In 2016, Hawking, Perry and Strominger rediscovered these symmetries and named the microstates generated by the transformations soft hair \cite{Hawking_2016}.
Their work has stimulated interest in the quest for other symmetries at the horizon \cite{Mao_2017,Grumiller_2020}. Exploring asymptotic symmetries near a boundary such as a horizon requires considerable effort. In the first stage, we fix an asymptotic condition on the metric components near the boundary. In the second stage, we solve asymptotic Killing equations for the metric components so that the asymptotic behavior of the metric is preserved under diffeomorphisms generated by vector fields. In the third stage, we check whether the charges associated with the diffeomorphisms are integrable. If the charges are not integrable, it is required to repeat the above three stages until an appropriate asymptotic condition is found. In the fourth stage, if the charges satisfy the integrability condition, we should finally check whether the charges take various values for solutions of the Einstein equations. At this stage, it often happens that all the charges vanish, implying that all the diffeomorphisms we have selected may be gauge freedoms. In this case, to find non-vanishing charges, we restart from the first stage. Although there are several ways to construct a charge in general relativity, such as the Regge-Teitelboim method \cite{REGGE_1974} and the covariant phase space method \cite{Lee_Wald_1990,Wald_1993,Iyer_Wald_1994,Iyer_Wald_1995,Wald_Zoupas_2000} developed by Iyer, Lee, Wald and Zoupas, all of them require such trial and error. See also Ref.~\cite{Kijowski} for an early study related to this method. In this paper, we take a shorter route to find non-trivial asymptotic properties and propose an approach that does not impose the asymptotic behavior of metrics by hand. For a given background metric $\bar{g}_{\mu\nu}$ of interest, we consider a set of metrics which are diffeomorphic to it so that purely gravitational properties of asymptotic symmetries can be analyzed. In this case, some of these symmetries cannot be gauged away.
For example, a diffeomorphism associated with a Lorentz boost is not a gauge freedom since it changes the energy and momentum of a black hole. As a guiding principle to find a non-trivial diffeomorphism, at the first stage, we adopt a condition under which the charges take non-vanishing values for some metrics generated by the diffeomorphism. Analyzing this condition at the background metric, we can find the candidates for vector fields generating a non-trivial diffeomorphism. A key advantage of our protocol is that the diffeomorphisms generated by these vector fields cannot be gauged away by construction. As a consequence, it helps us to find a minimal non-trivial diffeomorphism as a building block of asymptotic symmetries. After identifying the minimal Lie algebra $\mathcal{A}$ spanned by the vector fields and their commutators, we first check the integrability condition at the background metric $\bar{g}$. Our approach of finding a non-trivial diffeomorphism satisfying the integrability condition at the background metric may reduce the trial and error required in the conventional approach. Finally, we check the integrability condition for a set of metrics connected to the background metric by diffeomorphisms generated by the Lie algebra $\mathcal{A}$. If the integrability condition is satisfied, the charges can be calculated as an integral along a path from the background metric to other metrics. To demonstrate our approach, we investigate asymptotic symmetries on the Rindler horizon in $(1+3)$-dimensional Rindler spacetime. We derive a general condition for vector fields that generate diffeomorphisms and along which the variations of the corresponding charges do not vanish at the background metric. We show that supertranslations and superrotations on the Rindler horizon generate diffeomorphisms which cannot be gauged away, confirming the results of prior research \cite{Hotta_2016}.
Furthermore, we find a new class of non-trivial diffeomorphisms, which we term superdilatation. This superdilatation includes two classes of diffeomorphisms. One of them is an extension of dilatation in the direction perpendicular to the horizon. The other is an extension of dilatation in the time direction. We explicitly calculate the expression of charges for an example of the superdilatation algebra. This paper is organized as follows: In Sec.~\ref{sec:conventional}, we briefly review a conventional approach requiring much effort in trial and error. In Sec.~\ref{sec:Wald_method}, we briefly review the covariant phase space method, which is adopted in this paper. In Sec.~\ref{sec:setup}, we explain our approach to construct a building block of asymptotic symmetries. In Sec.~\ref{sec:asymptotic_sym_in_Rindler}, we find a new symmetry on the Rindler horizon called superdilatation by using our approach. In Sec.~\ref{sec:summary}, we present the summary of this paper. In this paper, we set the speed of light to unity: $c=1$. \section{A conventional approach dependent on luck} \label{sec:conventional} A standard approach to explore the asymptotic symmetries requires setting the asymptotic form of metrics near the boundary. The success of exploration severely depends on this metric setting. If an inappropriate metric is chosen, then we completely fail to find the non-trivial symmetries. If we have a deep insight to fix the metric, the non-trivial symmetries appear in the theory. In order to explain this situation, we first make a brief review of the conventional approach with the canonical method \cite{REGGE_1974}. For example, the authors in Ref.~\cite{brown1986} analyzed asymptotic symmetries in $(1+2)$-dimensional asymptotic anti-de Sitter (AdS) spacetime. 
The background metric $\bar{g}_{\mu\nu}$ is given by \begin{align} \left( \begin{array}{ccc} \bar{g}_{tt} & \bar{g}_{tr} & \bar{g}_{t\phi} \\ \bar{g}_{rt} & \bar{g}_{rr} & \bar{g}_{r\phi} \\ \bar{g}_{\phi t} & \bar{g}_{\phi r} & \bar{g}_{\phi\phi} \end{array} \right) & = \left( \begin{array}{ccc} -\left(\frac{r^{2}}{l^{2}} + 1 \right) & 0 & 0 \\ 0 & \left(\frac{r^{2}}{l^{2}} + 1 \right)^{-1} & 0 \\ 0 & 0 & r^{2} \end{array} \right), \end{align} where $l = (-1/\Lambda)^{1/2}$. It describes the exact AdS metric which is a solution of the Einstein equations with negative cosmological constant $\Lambda$. The exact AdS spacetime has six Killing vectors, thus the goal of exploration of the asymptotic symmetries is to get at least six asymptotic Killing vectors. The AdS boundary is located at $r = \infty$. Near the AdS boundary, we set the asymptotic form of the metric as \begin{align} g_{\mu\nu} & = \bar{g}_{\mu\nu} + \delta g_{\mu\nu} \label{set_of_metric_with_asy_behav_1}. \end{align} Let us consider two forms of the metric. One of them is the following ansatz: \begin{align} \left(\delta g_{\mu\nu}\right) & = \left( \begin{array}{ccc} 0 & 0 & A\left(\frac{r^{2}}{l^{2}}+1\right) \\ 0 & 0 & 0 \\ A\left(\frac{r^{2}}{l^{2}}+1\right) & 0 & A^{2}\left(\frac{r^{2}}{l^{2}}+1\right) \end{array}\label{eq_asy_1} \right),\quad (|A| <|l|). \end{align} It can be shown that the vector field preserving the above metric is given by a linear combination of $\partial_{t}$ and $\partial_{\phi}$, which is denoted by $\xi$. Thus, in this case, we have only two asymptotic Killing vectors. The variation of associated charges $J[\xi]$ is \begin{align} \delta J[\xi] = 4\pi\xi^{\phi}\delta A. \end{align} The charges are integrable and calculated as \begin{align} J[\partial_{t}] & =0, \\ J[\partial_{\phi}] & = 4\pi A, \end{align} where the integral constants are chosen such that $J[\xi]=0$ at the AdS spacetime. 
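Before moving on, a quick cross-check (ours, not part of the original analysis): the exact AdS metric above should solve the Einstein equations with $\Lambda = -1/l^{2}$, i.e. $G_{\mu\nu} + \Lambda g_{\mu\nu} = 0$, which can be verified symbolically, e.g. with SymPy:

```python
# Symbolic check (ours) that the exact AdS_3 metric satisfies
# G_{mu nu} + Lambda g_{mu nu} = 0 with Lambda = -1/l^2.
import sympy as sp

t, r, phi, l = sp.symbols('t r phi l', positive=True)
x = [t, r, phi]
d = 3
f = r**2/l**2 + 1
g = sp.diag(-f, 1/f, r**2)                       # exact AdS_3 metric
ginv = g.inv()

# Christoffel symbols Gamma^a_{mn}
Gamma = [[[sum(ginv[a, s]*(sp.diff(g[s, m], x[n]) + sp.diff(g[s, n], x[m])
                           - sp.diff(g[m, n], x[s]))/2 for s in range(d))
           for n in range(d)] for m in range(d)] for a in range(d)]

def riemann(a, b, m, n):                         # R^a_{b mn}
    return (sp.diff(Gamma[a][n][b], x[m]) - sp.diff(Gamma[a][m][b], x[n])
            + sum(Gamma[a][m][s]*Gamma[s][n][b] - Gamma[a][n][s]*Gamma[s][m][b]
                  for s in range(d)))

Ric = sp.Matrix(d, d, lambda m, n: sum(riemann(a, m, a, n) for a in range(d)))
Rscal = sum(ginv[m, n]*Ric[m, n] for m in range(d) for n in range(d))
Einstein = Ric - sp.Rational(1, 2)*Rscal*g
check = (Einstein + (-1/l**2)*g).applyfunc(sp.simplify)
print(check)                                     # zero 3x3 matrix
```

The same script also confirms the intermediate facts $R_{\mu\nu}=-(2/l^{2})g_{\mu\nu}$ and $R=-6/l^{2}$ for this background.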
In order to get more than one non-vanishing charge, we should replace Eq.~\eqref{eq_asy_1} with another form. A successful one is the following: \begin{align} \left(\delta g_{\mu\nu}\right) & = \left( \begin{array}{ccc} {\mathcal O}(1) & {\mathcal O}(r^{-3}) & {\mathcal O}(1) \\ {\mathcal O}(r^{-3}) & {\mathcal O}(r^{-4}) & {\mathcal O}(r^{-3}) \\ {\mathcal O}(1) & {\mathcal O}(r^{-3}) & {\mathcal O}(1) \end{array} \right). \label{eq_asy_2} \end{align} The solution of the asymptotic Killing equation is given by \begin{align} \xi =\left( \begin{array}{c} \xi^{t} \\ \xi^{r} \\ \xi^{\phi} \end{array} \right)=\left( \begin{array}{c} lT(t,\phi)+\frac{l^{3}}{r^{2}}\overline{T}(t,\phi) + {\mathcal O}(r^{-4}) \\ rR(t,\phi) + {\mathcal O}(r^{-1}) \\ \Phi(t,\phi) + \frac{l^{2}}{r^{2}}\overline{\Phi}(t,\phi) + {\mathcal O}(r^{-4}) \end{array} \right), \end{align} where the functions $T(t,\phi),\overline{T}(t,\phi),R(t,\phi),\Phi(t,\phi)$ and $\overline{\Phi}(t,\phi)$ satisfy \begin{align} l\partial_{t}T(t,\phi) & = \partial_{\phi}\Phi(t,\phi) = -R(t,\phi),\ \partial_{\phi}T(t,\phi) = l\partial_{t}\Phi(t,\phi), \\ \overline{T}(t,\phi) & = -\frac{l}{2}\partial_{t}R(t,\phi),\ \overline{\Phi}(t,\phi) = \frac{1}{2}\partial_{\phi}R(t,\phi). \end{align} It is shown that the charges associated with the vector fields are integrable. Surprisingly, the algebra of the charges is a direct sum of two Virasoro algebras which are infinite dimensional Lie algebras in contrast to the first case. Unfortunately, however, there is no systematic way to find such a successful asymptotic form in Eq.~\eqref{eq_asy_2}. The conventional approach is shown schematically in FIG.~\ref{flowchart_1}. In the first step, we determine an asymptotic form of the metric near the boundary. In the second step, we solve asymptotic Killing equations for the metric components so that the asymptotic form of the metric is preserved under diffeomorphisms generated by vector fields. 
In the third step, we check whether the charges associated with the diffeomorphisms are integrable. If the charges are not integrable, we have to repeat the above three steps until we successfully find an appropriate asymptotic condition. In the fourth step, if the charges are integrable, we check whether they take various values for solutions of the Einstein equations. If they do, we obtain non-trivial charges. However, if not, we have to restart from the first step since all the diffeomorphisms generated by the vector fields we have selected are gauge freedoms. Such a failure often happens in the conventional approach. As we have seen, we have to determine the asymptotic form of the metric by trial and error. This usually takes much effort and may turn out not to serve the purpose in the end. So far, we have reviewed the conventional approach with the canonical method. The same approach has been taken in studies of asymptotic symmetries using the covariant phase space method. In other words, the flow chart in Fig.~\ref{flowchart_1} is often adopted in the covariant phase space method, e.g., in Refs.~\cite{Hollands_2005,Ishibashi_2005}. In the next section, we introduce the covariant phase space method which we adopt in this paper. In Sec.~\ref{sec:setup}, we explain our approach, which may reduce the above efforts. \newpage \begin{figure}[H] \includegraphics[width=15cm,keepaspectratio]{flow_chart_conventional.pdf} \caption{A flow chart of the conventional approach.} \label{flowchart_1} \end{figure} \newpage \section{A brief review of the covariant phase space method}\label{sec:Wald_method} In Sec.~\ref{sec:conventional}, we briefly reviewed the canonical method to explore the asymptotic symmetries. In this paper, we will use the covariant phase space method \cite{Lee_Wald_1990,Wald_1993,Iyer_Wald_1994,Iyer_Wald_1995,Wald_Zoupas_2000}.
An advantage of the method is that the calculations are covariant and independent of local coordinates, without recourse to the Arnowitt-Deser-Misner (ADM) decomposition \cite{ADM_1959}. Here, let us briefly review the covariant phase space method to calculate the charge corresponding to a diffeomorphism. Although the covariant phase space method can be applied to all diffeomorphism-invariant theories, we focus on Einstein gravity. Consider the Einstein-Hilbert action \begin{align} S & = \int_{\mathcal M}\mathrm{d}^{d}x {\mathcal L}_{EH}, \end{align} where the Lagrangian density is given by ${\mathcal L}_{EH} \coloneqq \frac{1}{16\pi G}\sqrt{-g}R$, $\int_\mathcal{M}d^dx$ denotes the integral over a $d$-dimensional spacetime $\mathcal{M}$, and $g$ and $R$ are the determinant of the metric $g_{\mu\nu}$ and the Ricci scalar, respectively. The variation of ${\mathcal L}_{EH}$ is given by \begin{align} \delta {\mathcal L}_{EH} & = -\frac{\sqrt{-g}}{16\pi G}G^{\mu\nu}\delta g_{\mu\nu} + \partial_{\mu}\Theta^{\mu}(g,\delta g), \end{align} where $G_{\mu\nu}$ is the Einstein tensor and $\Theta$ is the pre-symplectic potential defined by \begin{align} \Theta^{\mu}(g, \delta g) & = \frac{\sqrt{-g}}{16\pi G}\left(g^{\mu\alpha}\nabla^{\beta}\delta g_{\alpha\beta} - g^{\alpha\beta}\nabla^{\mu}\delta g_{\alpha\beta}\right). \label{eq:presymplectic_potential} \end{align} In the following, for notational simplicity, the metric $g_{\mu\nu}$ is abbreviated as $g$ in the arguments of functions. The Einstein-Hilbert action is invariant under the Lie derivative along an arbitrary vector field $\xi$ up to a total derivative term.
Therefore, for an infinitesimal transformation of the metric $\delta_\xi g_{\mu\nu}=\pounds_\xi g_{\mu\nu}$, where $\pounds_{\xi}$ represents the Lie derivative with respect to $\xi$, the corresponding Noether current is given by \begin{align} J^{\mu}[\xi] :=\Theta^{\mu}(g, \pounds_{\xi}g) -\xi^{\mu}{\mathcal L}_{EH}, \end{align} which satisfies \begin{align} \partial_\mu J^{\mu}[\xi]=\frac{\sqrt{-g}}{16\pi G} G^{\mu\nu}\pounds_{\xi}g_{\mu\nu}. \end{align} For a solution $g_{\mu\nu}$ of the Einstein equations, the current is conserved: \begin{align} \partial_\mu J^\mu[\xi]\approx 0, \end{align} where $\approx$ means that the equality holds for any solution of the equations of motion, i.e., the Einstein equations. By the Poincar\'e lemma, there exists a 2-form $Q^{\mu\nu}[\xi]$ on the spacetime satisfying \begin{align} \label{Noether} J^\mu[\xi]\approx \partial_{\nu}Q^{\mu\nu}[\xi]. \end{align} More generally, as shown in the Appendix of Ref.~\cite{Iyer_Wald_1995}, we have \begin{align} J^{\mu}[\xi] = \partial_{\nu}Q^{\mu\nu}[\xi] + \mathcal{C}\indices{^{\mu}_{\nu}}\xi^{\nu}, \end{align} where $\mathcal{C}\indices{^{\mu}_{\nu}}$ is a constraint satisfying $\mathcal{C}\indices{^{\mu}_{\nu}} \approx 0$. In the case of Einstein gravity, the 2-form is given by \begin{align} Q^{\mu\nu}[\xi] = -\frac{\sqrt{-g}}{8\pi G}\nabla^{[\mu}\xi^{\nu]} \label{eq:Komar}, \end{align} while $\mathcal{C}\indices{^{\mu}_{\nu}}$ is given by \begin{align} \mathcal{C}\indices{^{\mu}_{\nu}} = \frac{\sqrt{-g}}{8\pi G}G\indices{^{\mu}_{\nu}}, \end{align} where the bracket $[\ \ ]$ on indices denotes antisymmetrization, defined as \begin{align} A_{[\mu_{1} \cdots \mu_{d}]} \coloneqq \frac{1}{d!}\sum_{\sigma \in S_{d}}(-1)^{\sigma}A_{\mu_{\sigma(1)}\cdots\mu_{\sigma(d)}}, \end{align} where $S_{d}$ is the permutation group of degree $d$.
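To see Eq.~\eqref{eq:Komar} at work, the following sketch (ours; an illustrative aside, not taken from this paper) verifies symbolically that $\xi = \partial_{t}$ is a Killing vector of the Schwarzschild metric and evaluates the antisymmetrized derivative $\nabla_{[r}\xi_{t]}$ entering the 2-form $Q^{\mu\nu}[\xi]$:

```python
# Symbolic check (ours): xi = d/dt is Killing for Schwarzschild, and the
# antisymmetrized derivative nabla_[r xi_t] feeding the Komar 2-form.
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
G, M = sp.symbols('G M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*G*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild metric
ginv = g.inv()

# Christoffel symbols Gamma^a_{mn}
Gamma = [[[sum(ginv[a, s]*(sp.diff(g[s, m], x[n]) + sp.diff(g[s, n], x[m])
                           - sp.diff(g[m, n], x[s]))/2 for s in range(4))
           for n in range(4)] for m in range(4)] for a in range(4)]

xi_lower = [g[mu, 0] for mu in range(4)]         # xi^mu = delta^mu_t, lowered

def nabla(mu, nu):                               # nabla_mu xi_nu
    return sp.diff(xi_lower[nu], x[mu]) - sum(Gamma[a][mu][nu]*xi_lower[a]
                                              for a in range(4))

# Killing equation: symmetric part of nabla_mu xi_nu vanishes
killing = all(sp.simplify(nabla(m, n) + nabla(n, m)) == 0
              for m in range(4) for n in range(4))
# Antisymmetric part entering Q^{mu nu}: nabla_[r xi_t]
antisym = sp.simplify(sp.Rational(1, 2)*(nabla(1, 0) - nabla(0, 1)))
print(killing, antisym)                          # -> True -G*M/r**2
```

Raising the indices and integrating the resulting 2-form over a large sphere reproduces the standard Komar mass associated with $\partial_t$, up to the overall normalization conventions.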
The corresponding Noether charge of $\xi$ is given by \begin{align} Q[\xi] & \coloneqq \int_{\Sigma}(\mathrm{d}^{d-1}x)_{\mu}J^{\mu}[\xi] \nonumber \\ & \approx \int_{\Sigma}(\mathrm{d}^{d-1}x)_{\mu}\partial_{\nu}Q^{\mu\nu}[\xi] \nonumber \\ & =\oint_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}Q^{\mu\nu}[\xi], \label{Q_xi} \end{align} where $\Sigma$ is a $(d-1)$-dimensional submanifold embedded in ${\mathcal M}$, $\partial\Sigma$ is the boundary of $\Sigma$ and the integral measure is defined as \begin{align} (\mathrm{d}^{d-p}x)_{\mu_{1}\dots\mu_{p}} \coloneqq \frac{\epsilon_{\mu_{1}\dots\mu_{p}\mu_{p+1}\dots\mu_{d}}}{d!(d-p)!}\mathrm{d} x^{\mu_{p+1}}\wedge\dots\wedge\mathrm{d} x^{\mu_{d}}. \label{int_measure} \end{align} In Eq.~\eqref{int_measure}, $\epsilon_{{\mu_{1}}\cdots\mu_{d}}$ is the $d$-dimensional Levi-Civita symbol defined as \begin{align} \epsilon_{\mu_{1}\cdots\mu_{d}} &= \epsilon_{[\mu_{1}\cdots\mu_{d}]} \\ \epsilon_{1\cdots d} &=1. \end{align} In the third line in Eq.~\eqref{Q_xi}, we have used Stokes' theorem. Let $\delta_1 g$ and $\delta_2 g$ be arbitrary linearized perturbations of metric $g$ in question. Let $\delta_i f(g)$ denote the variation of a function $f(g)$ with respect to each perturbation $\delta_i g$. With these notations, the pre-symplectic current is defined by \begin{align} \omega^{\mu}(g, \delta_{1}g, \delta_{2}g) := \delta_{1}\Theta^{\mu}(g, \delta_{2}g) - \delta_{2}\Theta^{\mu}(g,\delta_{1}g). \end{align} We further define the pre-symplectic form $\Omega(g, \delta_{1}g, \delta_{2}g)$ as \begin{align} \Omega(g, \delta_{1}g, \delta_{2}g) := \int_{\Sigma}(\mathrm{d}^{d-1}x)_{\mu}\omega^{\mu}(g,\delta_{1}g, \delta_{2}g), \end{align} which is a 2-form on the field configuration space. Let $H[\xi]$ denote the charge which generates an infinitesimal transformation along a vector field $\xi$. 
The variation of the charge with respect to an arbitrary perturbation $\delta g$ is given by \cite{Lee_Wald_1990,Wald_1993,Iyer_Wald_1994,Iyer_Wald_1995,Wald_Zoupas_2000} \begin{align} \delta H[\xi] & = \Omega(g, \delta g, \pounds_{\xi}g) =\int_{\Sigma}(\mathrm{d}^{d-1}x)_{\mu}\omega^{\mu}(g,\delta g, \pounds_{\xi}g). \end{align} The variation of the Noether current can be recast into \begin{align} \delta J^{\mu}[\xi] \approx \omega^{\mu}(g, \delta g, \pounds_{\xi}g) -\partial_{\nu}[2\xi^{[\mu}\Theta^{\nu]}(g,\delta g)]\label{eq_omega}, \end{align} where $g_{\mu\nu}$ is assumed to be the solution of the Einstein equations, while $\delta g_{\mu\nu}$ does not necessarily satisfy the linearized Einstein equations. Equation~\eqref{eq_omega} can be rewritten as \begin{align} \omega^\mu(g,\delta g, \pounds_\xi g) \approx \delta\mathcal{C}\indices{^{\mu}_{\nu}}\xi^{\nu}+\partial_\nu S^{\mu\nu}\left(g,\delta g,\pounds_\xi g\right), \end{align} where we have defined \begin{align} S^{\mu\nu}\left(g,\delta g,\pounds_\xi g\right) & \coloneqq \delta Q^{\mu\nu}[\xi]+2\xi^{[\mu}\Theta^{\nu]}(g,\delta g)\nonumber \\ & = \frac{\sqrt{-g}}{8\pi G}\left( -\frac{1}{2}\delta g^{\alpha}_{\ \alpha}\nabla^{[\mu}\xi^{\nu]} + \delta g^{\alpha[\mu}\nabla_{\alpha}\xi^{\nu]} - \nabla^{[\mu}\delta g^{\nu]\alpha}\xi_{\alpha} + \xi^{[\mu}\nabla_{\alpha}\delta g^{\nu]\alpha} - \xi^{[\mu}\nabla^{\nu]}\delta g^{\alpha}_{\ \alpha}\right). \label{eq_definition_S} \end{align} Thus, if $H[\xi]$ exists, it satisfies \begin{align} \delta H[\xi] \approx \int_{\Sigma}(\mathrm{d}^{d-1}x)_{\mu}\delta \mathcal{C}\indices{^{\mu}_{\nu}}\xi^{\nu} + \oint_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}S^{\mu\nu}(g,\delta g, \pounds_{\xi}g). \label{eq:H_for_on_shell} \end{align} When $\delta g_{\mu\nu}$ is a solution of the linearized Einstein equation, $\delta \mathcal{C}\indices{^\mu_\nu}=0$. 
Therefore, we get \begin{align} \delta H[\xi] \approx \oint_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}S^{\mu\nu}(g,\delta g, \pounds_{\xi}g). \end{align} Since $H[\xi]$ does not always exist, we need to impose an additional condition on $g_{\mu\nu}$ and $\xi$ to obtain the charge $H[\xi]$, which is referred to as the integrability condition; it was introduced in Ref.~\cite{Wald_Zoupas_2000}. As the simplest example, let us first consider whether the charges are integrable for a set of solutions of the Einstein equations $g_{\mu\nu}(\lambda_1,\lambda_2)$, which is smoothly parameterized by two real parameters $\lambda_1$ and $\lambda_2$. A linearized perturbation $\delta_i g_{\mu\nu}(\lambda_1,\lambda_2)$ is defined by $\delta_i g_{\mu\nu}(\lambda_1,\lambda_2)\coloneqq \frac{\partial }{\partial \lambda_i}g_{\mu\nu}(\lambda_1,\lambda_2)$. If $H[\xi]$ exists, due to the equality of mixed partial derivatives, we have \begin{align} 0&=\left(\frac{\partial}{\partial \lambda_{1}}\frac{\partial}{\partial \lambda_{2}}-\frac{\partial}{\partial \lambda_{2}}\frac{\partial}{\partial \lambda_{1}}\right)H[\xi]\bigg|_{g(\lambda_1,\lambda_2)}\nonumber\\ &=(\delta_{1}\delta_{2}-\delta_{2}\delta_{1})H[\xi]|_{g(\lambda_1,\lambda_2)}. \end{align} As long as there is no topological obstruction, this is a necessary and sufficient condition for the charge $H[\xi]$ to exist.
Similarly, for a general set of solutions $g_{\mu\nu}$ of the Einstein equations, for $H[\xi]$ to exist, it must hold that \begin{align} \label{condition1} 0 & = (\delta_{1}\delta_{2}-\delta_{2}\delta_{1})H[\xi]\nonumber \\ & = -\int_{\partial \Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}\left(\xi^{[\mu}\delta_{1}\Theta^{\nu]}(g,\delta_{2}g)-\xi^{[\mu}\delta_{2}\Theta^{\nu]}(g,\delta_{1}g)\right) \nonumber \\ & =-\int_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}\xi^{[\mu}\omega^{\nu]}(g,\delta_{1}g,\delta_{2}g)\nonumber \\ & \approx-\int_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}\xi^{[\mu}\partial_\alpha S^{\nu]\alpha}(g,\delta_{1}g,\delta_{2}g) \end{align} for arbitrary linearized perturbations $\delta_1 g$ and $\delta_{2}g$ of the metric in question, where we have used Eq.~\eqref{eq:H_for_on_shell}. This is a necessary condition for $H[\xi]$ to exist. It is also a sufficient condition if the space of $g_{\mu\nu}$ does not have any topological obstruction \cite{Wald_Zoupas_2000}. By shifting the charge by a constant, it is always possible to make the charges vanish at a reference metric $g^{(0)}_{\mu\nu}$. By using a smooth one-parameter set of solutions $g_{\mu\nu}(\lambda)$ such that $g_{\mu\nu}(0)=g^{(0)}_{\mu\nu}$ and $g_{\mu\nu}(1)=g_{\mu\nu}$, the charge is given by \begin{align} H[\xi] = \int_{0}^{1}\mathrm{d}\lambda\int_{\partial \Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}(\partial_{\lambda} Q^{\mu\nu}[\xi](g,\partial_{\lambda}g) + 2\xi^{[\mu}\Theta^{\nu]}(g,\partial_{\lambda} g)).\label{eq_charge_int_along_path} \end{align} Note that the charge defined in Eq.~\eqref{eq_charge_int_along_path} is independent of the choice of the path $g_{\mu\nu}(\lambda)$ as long as Eq.~\eqref{condition1} is satisfied. In later sections, we adopt this method. \section{Our approach} \label{sec:setup} In this section, we explain our approach to exploring the asymptotic symmetries. A guiding principle is proposed to determine $\delta g_{\mu\nu}$.
This choice of $\delta g_{\mu\nu}$ ensures that we obtain the non-trivial charges of the asymptotic symmetries as long as the integrability of the charges is satisfied. As a consequence, we can identify the diffeomorphisms which cannot be gauged away. We consider a Lie algebra $\mathcal{A}$ of vector fields, and a set of metrics connected to the fixed background metric $\bar{g}_{\mu\nu}$, which is a solution of the Einstein equations, by all the diffeomorphisms generated by $\mathcal{A}$. For an arbitrary variation $\delta$ and an arbitrary element $g_{\mu\nu}$ of this set, there exists a vector field $\chi$ in the algebra such that \begin{align} \label{set2} \delta g_{\mu\nu} = \pounds_{\chi} g_{\mu\nu}. \end{align} With this set of metrics, we can analyze the properties of the background metric $\bar{g}_{\mu\nu}$ since all the metrics are diffeomorphic to it. It should be noted that, as opposed to the conventional approach, we do not need to check whether the variation of the metric satisfies the linearized Einstein equations, since the Einstein equations are invariant under diffeomorphisms. Hereafter, $g_{\mu\nu}$ denotes a metric connected to $\bar{g}_{\mu\nu}$ via a diffeomorphism generated by the Lie algebra $\mathcal{A}$. A schematic picture of the set of metrics is shown in FIG.~\ref{fig:configuration_space}. \begin{figure}[htbp] \centering \includegraphics[width=7.5cm]{configuration_space.pdf} \caption{A schematic picture of the set of metrics we analyze in this paper. The vector fields $\xi$ and $\eta$ are elements of a Lie algebra $\mathcal{A}$. Metrics are connected to the background metric $\bar{g}_{\mu\nu}$ by diffeomorphisms generated by $\mathcal{A}$. For any metric $g_{\mu\nu}$, there exists a smooth path $g_{\mu\nu}(\lambda)$ from $\bar{g}_{\mu\nu}$ to $g_{\mu\nu}$.} \label{fig:configuration_space} \end{figure} Here, we provide a guiding principle to find a Lie algebra $\mathcal{A}$ as a building block of the symmetries.
In most cases, even if the charges are integrable, the diffeomorphisms generated by the Lie algebra correspond to a gauge freedom. For example, consider an algebra formed by vector fields with support in a finite spatial region in $\Sigma$ far away from the boundary $\partial\Sigma$. In this case, although the charges are trivially integrable, all Poisson brackets of charges vanish, implying that the diffeomorphisms generated by the algebra are a gauge freedom, since metrics connected by them are physically indistinguishable. As we have already mentioned in Sec.~\ref{sec:conventional}, such a failure often happens in the conventional approach. In order to find a non-trivial algebra of charges, it is required that \begin{align} \label{delH} \delta_{\eta}H[\xi] \neq 0, \end{align} or equivalently, $\{H[\xi],H[\eta]\}\neq 0$ holds for some vector fields $\eta,\xi$ in the algebra. Here, $\delta_\eta$ denotes a variation of the metric such that $\delta_\eta g_{\mu\nu}=\pounds_\eta g_{\mu\nu}$. From Eq.~\eqref{eq_definition_S}, the left-hand side of Eq.~\eqref{delH} can be recast into \begin{align} \label{non_triviality} \int _{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}S^{\mu\nu}(g,\pounds_\eta g,\pounds_\xi g) & \neq 0\nonumber \\ \iff \int_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}\sqrt{-g}\left[(2\nabla^{\alpha}\eta^{\mu}\nabla_{\alpha}\xi^{\nu}-\nabla_{\alpha}\eta^{\alpha}\nabla^{\mu}\xi^{\nu}\right. & \left.+\nabla_{\alpha}\xi^{\alpha}\nabla^{\mu}\eta^{\nu}) \nonumber\right. \\ & \left.-C_{\alpha\beta}^{\ \ \ \mu\nu}\xi^{\alpha}\eta^{\beta}\right] \neq 0, \end{align} where $C_{\alpha\beta\mu\nu}$ is the Weyl tensor. The diffeomorphisms associated with the algebra cannot be gauged away as long as there exist $\eta,\xi$ and $g_{\mu\nu}$ satisfying Eq.~\eqref{non_triviality}.
In particular, here we adopt the following sufficient condition for Eq.~\eqref{non_triviality}: \begin{align} \int _{\partial \Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}S^{\mu\nu}(\bar{g},\pounds_{\eta} \bar{g},\pounds_{\xi} \bar{g}) \neq 0\label{eq_non-triviality_background} \end{align} as the guiding principle. Of course, the integrability condition in Eq.~\eqref{condition1} must be satisfied. It can be recast into \begin{align} 0=\int_{\partial \Sigma}\left(\mathrm{d}^{d-2}x\right)_{\mu\nu}\xi^{[\mu}\partial_\alpha S^{\nu]\alpha}\left(g,\pounds_\eta g,\pounds_\chi g\right), \quad \forall \xi,\eta,\chi \in \mathcal{A},\label{eq_integrability} \end{align} where we have used Eq.~\eqref{set2}. For a given background metric $\bar{g}_{\mu\nu}$, it takes considerable effort to find an appropriate Lie algebra $\mathcal{A}$ such that Eqs.~\eqref{eq_integrability} and \eqref{non_triviality} hold. This corresponds to the difficulty of finding an appropriate asymptotic behavior of $\delta g_{\mu\nu}$ by trial and error in the conventional approach. We propose the following six steps as a practical and useful way to overcome these difficulties: \begin{enumerate}[Step~1] \item Fix a background metric $\bar{g}_{\mu\nu}$ of interest. \item For the fixed background metric $\bar{g}_{\mu\nu}$, find two vector fields $V_1$ and $V_2$ satisfying Eq.~\eqref{eq_non-triviality_background}. These are the candidates generating non-trivial diffeomorphisms whose charges are integrable. \item Introduce the minimal Lie algebra $\mathcal{A}$ including $V_1,V_2$ by calculating their commutators.
Check whether the integrability condition at the background metric, i.e., \begin{align} \int_{\partial \Sigma}\left(\mathrm{d}^{d-2}x\right)_{\mu\nu}\xi^{[\mu}\partial_\alpha S^{\nu]\alpha}\left(\bar{g},\pounds_\eta \bar{g},\pounds_\chi \bar{g}\right)=0, \quad\forall \xi,\eta,\chi \in \mathcal{A},\label{eq_integrability_background} \end{align} is satisfied for the algebra $\mathcal{A}$ as a necessary condition for Eq.~\eqref{eq_integrability}. If it holds, go to the next step. Otherwise, go back to Step~2. \item Construct the set of metrics $g_{\mu\nu}$ which are connected to the background metric $\bar{g}_{\mu\nu}$ via diffeomorphisms generated by $\mathcal{A}$. \item Check the integrability condition in Eq.~\eqref{condition1}. If it is satisfied, go to the following step. If not, go back to Step~2. \item Calculate the charges by using Eq.~\eqref{eq_charge_int_along_path}. Here, we fix the reference metric to be the background metric: $g^{(0)}_{\mu\nu}=\bar{g}_{\mu\nu}$. \end{enumerate} An advantage of the above algorithmic protocol is that Steps~2 and 3 can be carried out by using only the background metric $\bar{g}_{\mu\nu}$. In particular, it should be noted that no trial and error is required to calculate the left-hand side of Eq.~\eqref{eq_non-triviality_background}. Furthermore, the diffeomorphisms generated by $\mathcal{A}$ cannot be gauged away, since the corresponding charge algebra has a non-vanishing Poisson bracket by construction. This may significantly reduce the effort involved in finding an appropriate algebra and asymptotic behaviors of the metric in the conventional approach. In other words, Eq.~\eqref{eq_non-triviality_background} is the guiding principle to find a non-trivial charge algebra. Such a guiding principle does not exist in the conventional approach. A flow chart of our approach is shown in Fig.~\ref{flowchart}.
\begin{figure}[H] \centering \includegraphics[width=18.5cm, keepaspectratio]{flow_chart_our_revised.pdf} \caption{Flow chart of our approach.} \label{flowchart} \end{figure} Indeed, our approach is quite powerful. In the following section, we will apply it to the Rindler spacetime as a demonstration. We successfully find a new class of asymptotic symmetries on the Rindler horizon. \section{Asymptotic symmetries on Rindler horizon}\label{sec:asymptotic_sym_in_Rindler} In this section, we demonstrate our approach in the case where the background metric is the $(1+3)$-dimensional Rindler spacetime. In particular, we will investigate asymptotic symmetries on the Rindler horizon. \vspace{\baselineskip} \uline{Step 1: Fix a background metric $\bar{g}_{\mu\nu}$.} \\ Here, the background metric is fixed to be the Rindler metric given by \begin{align} \mathrm{d}\bar{s}^{2} = -\kappa^{2}\rho^{2}\mathrm{d}\tau^{2} + \mathrm{d}\rho^{2} + \mathrm{d} y^{2}+\mathrm{d} z^{2}, \end{align} where $-\infty<\tau<\infty$, $0<\rho<\infty$, $-\infty<y<\infty$, $-\infty<z<\infty$ and $\kappa > 0$ is a constant. The Rindler horizon is located at $\rho = 0$. \vspace{\baselineskip} \uline{Step 2: Select two vector fields $V_1$ and $V_2$ satisfying Eq.~\eqref{eq_non-triviality_background}.}\\ Since we are interested in asymptotic symmetries in Rindler spacetime, we will analyze diffeomorphisms which map the Rindler spacetime into itself. Let $\xi$ be a generator of such a diffeomorphism. Through an infinitesimal diffeomorphism generated by $\xi$, a point $x$ of the spacetime is mapped as \begin{align} x^\mu\mapsto x^\mu +\epsilon\xi^\mu +\mathcal{O}(\epsilon^2)\quad (\epsilon \to 0). \end{align} Since the Rindler horizon is located at $\rho=0$ in our coordinate system, unless the $\rho$-component of the vector field $\xi$ vanishes in the limit $\rho\to 0$, a point inside (resp.\ outside) the Rindler horizon can be mapped to the outside (resp.\ inside).
Therefore, we require that the vector field $\xi$ has the following asymptotic behavior \begin{align} \xi^\tau=\mathcal{O}(1),\quad \xi^\rho=\mathcal{O}(\rho),\quad\xi^y=\mathcal{O}(1),\quad \xi^z=\mathcal{O}(1)\quad (\rho \to 0 ) \end{align} near the Rindler horizon. We assume that the vector fields have supports in a finite region near the Rindler horizon. In general, the components of the vector fields $V_{1}$ and $V_{2}$ can be written for $\rho \to 0$ as \begin{align} V_{1} & =(X^{\tau}(\tau,y,z)+{\mathcal O}(\rho),X^{\rho}(\tau,y,z)\rho+{\mathcal O}(\rho^{2}),X^{A}(\tau,y,z)+{\mathcal O}(\rho)),\nonumber \\ V_{2}&=(Y^{\tau}(\tau,y,z)+{\mathcal O}(\rho),Y^{\rho}(\tau,y,z)\rho+{\mathcal O}(\rho^{2}),Y^{A}(\tau,y,z)+{\mathcal O}(\rho)), \end{align} where $A$ runs over $y$ and $z$. Equation~\eqref{eq_non-triviality_background} is evaluated as \begin{align} & \int_{\partial\Sigma}(\mathrm{d}^{d-2}x)_{\mu\nu}S^{\mu\nu}(\bar{g},\pounds_{V_2}\bar{g},\pounds_{V_1}\bar{g})\nonumber \\ & =\frac{1}{8\pi G\kappa}\int_{\mathbb R^{2}} \left[(2\kappa^{2}Y^{\tau} + \partial_{\tau}Y^{\rho})\partial_{A}X^{A} + \partial_{\tau}X^{\rho}\partial_{\tau}Y^{\tau} - (X \leftrightarrow Y)\right] \mathrm{d} y\mathrm{d} z \label{eq_non-triviality_Rindler}, \end{align} where we took the limit $\rho\to 0$ in the second line since the Rindler horizon is located at $\rho= 0$. From this formula, we can identify several candidates for vector fields which yield a non-trivial charge algebra. As a known example, consider the case where $X^\rho=Y^\rho=0$. If $Y^\tau$ and $\partial_AX^A$ do not vanish, the corresponding Poisson bracket does not vanish. In this case, the vector fields $V_1$ and $V_2$ correspond to a special class of diffeomorphisms called superrotation and supertranslation, respectively, which are shown to be integrable on the Rindler horizon in Ref.~\cite{Hotta_2016}. See Appendix~\ref{sec:app_integrability} for detailed calculations of the charges.
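The fall-off condition $\xi^{\rho}=\mathcal{O}(\rho)$ imposed above can be illustrated by a small symbolic computation (an illustrative check with \texttt{sympy}, not part of the derivation): a radial component proportional to $\rho$ generates a flow that fixes the surface $\rho=0$, whereas a radial component of order one carries points across the horizon.

```python
import sympy as sp

lam = sp.symbols('lambda')
c = sp.symbols('c', nonzero=True)
rho = sp.Function('rho')

# xi^rho = O(rho): the flow d rho / d lambda = c*rho fixes the horizon rho = 0.
sol_ok = sp.dsolve(sp.Eq(rho(lam).diff(lam), c * rho(lam)),
                   rho(lam), ics={rho(0): 0})
assert sol_ok.rhs == 0          # a point on the horizon stays on the horizon

# xi^rho = O(1): the flow d rho / d lambda = c crosses the horizon.
sol_bad = sp.dsolve(sp.Eq(rho(lam).diff(lam), c), rho(lam), ics={rho(0): 0})
assert sol_bad.rhs == c * lam   # the horizon point is carried to rho = c*lambda
```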
Another interesting candidate, which we will investigate in detail here, is the case where $X^\rho=X^A=0$ and $Y^\tau=Y^A=0$. If $\int \mathrm{d} y \mathrm{d} z\,\partial_\tau X^\tau \partial_\tau Y^\rho\neq 0$, the Poisson bracket does not vanish. The vector field $V_1=(X^\tau + \mathcal{O}(\rho),0,0,0)$ generates a class of dilatation transformations in the time direction since $\partial_\tau X^\tau\neq 0$ must hold. On the other hand, the vector field $V_2=(0,\rho Y^\rho+\mathcal{O}(\rho^{2}),0,0)$ generates a dilatation in the $\rho$ direction. We term these two transformations superdilatations since the generators depend on the position in spacetime in general. As a particular case, we will analyze the charges corresponding to the two vector fields that behave as $\rho \to 0$ as \begin{align} V_1&=(\tau T_{1}(y,z)+\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})), \nonumber\\ V_2 &=(\mathcal{O}(\rho^{2}), \tau\rho T_{2}(y,z)+\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})) \label{eq_vector_fields_st_sd}, \end{align} where $T_{1}$ and $T_{2}$ are arbitrary functions of $y,z$. \vspace{\baselineskip} \uline{Step 3: Construct the Lie algebra including $V_1$ and $V_2$ and check the integrability at the background metric $\bar{g}_{\mu\nu}$.}\\ Since the vector fields in Eq.~\eqref{eq_vector_fields_st_sd} satisfy \begin{align} [V_1,V_2]=V_3, \end{align} where \begin{align} V_3 =(\mathcal{O}(\rho^{2}), \tau\rho T_{3}(y,z)+\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})),\quad T_3(y,z)\coloneqq T_1(y,z)T_2(y,z), \end{align} the algebra $\mathcal{A}$ defined by \begin{align} &\mathcal{A}\nonumber \\ &\coloneqq \left\{V=(\tau T_{1}(y,z)+\mathcal{O}(\rho^{2}), \tau\rho T_{2}(y,z)+\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2}))\mid T_1,T_2 \text{ are arbitrary functions of $y,z$}\right\} \end{align} forms a closed algebra.
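The closure relation $[V_1,V_2]=V_3$ with $T_3=T_1T_2$ can be checked at leading order in $\rho$ with a short symbolic computation (the $\mathcal{O}(\rho^{2})$ tails are dropped; \texttt{sympy} is assumed available):

```python
import sympy as sp

tau, rho, y, z = sp.symbols('tau rho y z')
T1, T2 = sp.Function('T1')(y, z), sp.Function('T2')(y, z)
coords = (tau, rho, y, z)

V1 = [tau * T1, 0, 0, 0]        # leading part of V_1: superdilatation in time
V2 = [0, tau * rho * T2, 0, 0]  # leading part of V_2: superdilatation in rho

def lie_bracket(X, Y):
    """[X, Y]^mu = X^nu d_nu Y^mu - Y^nu d_nu X^mu."""
    return [sp.simplify(sum(X[n] * sp.diff(Y[m], coords[n])
                            - Y[n] * sp.diff(X[m], coords[n]) for n in range(4)))
            for m in range(4)]

V3 = lie_bracket(V1, V2)
# Only the rho-component survives, and it equals tau*rho*T1*T2, i.e. T3 = T1*T2.
assert V3[0] == 0 and V3[2] == 0 and V3[3] == 0
assert sp.simplify(V3[1] - tau * rho * T1 * T2) == 0
```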
A straightforward calculation shows that Eq.~\eqref{eq_integrability_background}, i.e., the integrability condition at the background metric, is satisfied. \vspace{\baselineskip} \uline{Step 4: Calculate the set of metrics.}\\ Since we investigate the asymptotic symmetries near the Rindler horizon, let us identify the asymptotic behavior of all the diffeomorphisms $\phi^\mu(x)$ generated by the Lie algebra $\mathcal{A}$. Here, we first calculate the asymptotic behavior of the diffeomorphisms of the form $\phi_\xi^\mu(x)\coloneqq \exp[\xi](x^\mu)$ for $\xi\in\mathcal{A}$, where $\exp[\ ]$ denotes the exponential map. Introducing a real parameter $\lambda$ and calculating the integral curve $\varphi^{\mu}_{\xi;\lambda}(x)\coloneqq \exp[\lambda \xi](x^\mu)$ of the vector field $\xi$, the diffeomorphism $\phi_{\xi}^\mu(x)$ is given by $\phi_{\xi}^\mu(x)=\varphi^\mu_{\xi;\lambda=1}(x)$. The integral curve is the solution of the following differential equation: \begin{align} \frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{\xi;\lambda}^\mu(x) =\xi^\mu (\varphi_{\xi;\lambda}(x)),\quad \varphi_{\xi;0}^\mu(x)=x^\mu.\label{eq_integral_curve_ODE} \end{align} Any vector field $\xi$ of the algebra $\mathcal{A}$ can be decomposed into two parts: \begin{align} \xi^\mu(x) &=\Xi^\mu(x)+h^\mu(x),\\ \Xi^\mu(x)&\coloneqq (\tau F_{1}(y,z), \tau\rho F_{2}(y,z),0,0),\\ h^\mu(x)&\coloneqq (\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})) \quad (\rho \to 0), \end{align} where $F_1$ and $F_2$ are arbitrary functions of $(y,z)$. When $\xi=\Xi$, the solution of the differential equation is straightforwardly calculated as \begin{align} \varphi^\mu_{\Xi;\lambda}(x)=\left(\tau e^{F_{1}(y,z)\lambda}, \rho\exp\left(\frac{F_{2}(y,z)}{F_{1}(y,z)}\tau\left(e^{F_{1}(y,z)\lambda}-1\right)\right), y,z\right).
\end{align} In Appendix~\ref{Flow_proof}, it is proven that \begin{align} \varphi_{\xi;\lambda}^\mu(x)= \varphi_{\Xi;\lambda}^\mu(x)+(\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})) \quad (\rho \to 0). \end{align} This is the asymptotic behavior of the integral curve. Thus, the asymptotic behavior of the diffeomorphism $\phi_\xi^\mu(x)=\exp[\xi](x^\mu)$ is given by \begin{align} \phi_\xi^\mu(x)&=\varphi_{\xi;\lambda=1}^\mu(x)\nonumber\\ &=\phi_{\Xi}^\mu(x)+(\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2}))\nonumber\\ &=\left(\tau e^{F_{1}(y,z)}, \rho\exp\left(\frac{F_{2}(y,z)}{F_{1}(y,z)}\tau\left(e^{F_{1}(y,z)}-1\right)\right), y,z\right)+(\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2}))\label{eq:asymp_single} \end{align} as $\rho\to 0$. So far, we have calculated the asymptotic behavior of the diffeomorphisms of the form $\phi_\xi^\mu(x)=\exp[\xi](x^\mu)$ for $\xi\in \mathcal{A}$. In general, diffeomorphisms generated by $\mathcal{A}$ and connected to the identity transformation are given by a product of such maps, i.e., \begin{align} (\phi_{\xi^{(1)}}\circ \phi_{\xi^{(2)}}\circ\cdots \circ \phi_{\xi^{(N)}})(x) \end{align} for some $N$. Let us analyze the asymptotic behavior for $N=2$.
For two vector fields \begin{align} \left(\xi^{(i)}\right)^{\mu}(x) & =(\tau F_{1}^{(i)}(y,z)+\mathcal{O}(\rho^2), \tau\rho F_{2}^{(i)}(y,z)+\mathcal{O}(\rho^2),\mathcal{O}(\rho^2),\mathcal{O}(\rho^2)),\quad i=1,2, \end{align} as $\rho\to 0$, Eq.~\eqref{eq:asymp_single} implies that \begin{align} &(\phi_{\xi^{(1)}}\circ \phi_{\xi^{(2)}})^\mu(x)\nonumber\\ &=\left(\tau e^{\tilde{F}_{1}(y,z)}, \rho\exp\left(\frac{\tilde{F}_{2}(y,z)}{\tilde{F}_{1}(y,z)}\tau\left(e^{\tilde{F}_{1}(y,z)}-1\right)\right),y,z\right)+(\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})), \end{align} where we have defined \begin{align} \tilde{F}_1(y,z)&\coloneqq F_1^{(1)}(y,z)+F_1^{(2)}(y,z),\nonumber\\ \tilde{F}_2(y,z)&\coloneqq \frac{\tilde{F}_1(y,z)}{e^{\tilde{F}_{1}(y,z)}-1}\left(\frac{F_{2}^{(2)}(y,z)}{F_{1}^{(2)}(y,z)}\left(e^{F_{1}^{(2)}(y,z)}-1\right) +\frac{F_{2}^{(1)}(y,z)}{F_{1}^{(1)}(y,z)} e^{F^{(2)}_1(y,z)}\left(e^{F_{1}^{(1)}(y,z)}-1\right)\right). \end{align} Repeating the same argument, it is shown that the asymptotic behavior of a general diffeomorphism $\chi_{(F_1,F_2)}$ is characterized by two real functions $F_1$ and $F_2$ of $(y,z)$ as \begin{align} \chi_{(F_1,F_2)}^\mu(x) =\left(\tau e^{F_{1}(y,z)}, \rho\exp\left(\frac{F_{2}(y,z)}{F_{1}(y,z)}\tau\left(e^{F_{1}(y,z)}-1\right)\right), y,z\right)+(\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^2),\mathcal{O}(\rho^{2}),\mathcal{O}(\rho^{2})) \end{align} for $\rho \to 0$.
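The closed-form flow used above can be verified symbolically: for $\xi=\Xi$, the claimed $\varphi^\mu_{\Xi;\lambda}$ indeed satisfies Eq.~\eqref{eq_integral_curve_ODE} (a consistency check with \texttt{sympy}, not a replacement for the proof in Appendix~\ref{Flow_proof}):

```python
import sympy as sp

tau, rho, y, z, lam = sp.symbols('tau rho y z lambda')
F1, F2 = sp.Function('F1')(y, z), sp.Function('F2')(y, z)

# The claimed flow of Xi = (tau*F1, tau*rho*F2, 0, 0):
tau_l = tau * sp.exp(F1 * lam)
rho_l = rho * sp.exp((F2 / F1) * tau * (sp.exp(F1 * lam) - 1))

# d tau_l / d lambda must equal Xi^tau at the flowed point, i.e. tau_l * F1.
assert sp.simplify(sp.diff(tau_l, lam) - tau_l * F1) == 0
# d rho_l / d lambda must equal Xi^rho at the flowed point, i.e. tau_l * rho_l * F2.
assert sp.simplify(sp.diff(rho_l, lam) - tau_l * rho_l * F2) == 0
# Initial condition at lambda = 0.
assert sp.simplify(tau_l.subs(lam, 0) - tau) == 0
assert sp.simplify(rho_l.subs(lam, 0) - rho) == 0
```

The $y$- and $z$-components are trivially constant since $\Xi^y=\Xi^z=0$ and $F_1$, $F_2$ depend only on $(y,z)$.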
Thus, the asymptotic behavior of the components of the metrics in question is characterized by arbitrary functions $F_1$ and $F_2$ of $(y,z)$ as \begin{align} \left(g_{\mu\nu}^{(F_1,F_2)}(x)\right) & \coloneqq \left(\frac{\partial \chi_{(F_{1},F_{2})}^{\alpha}}{\partial x^{\mu}}\frac{\partial \chi_{(F_{1},F_{2})}^{\beta}}{\partial x^{\nu}}\bar{g}_{\alpha\beta}(\chi_{(F_{1},F_{2})}(x))\right)\nonumber \\ & = \begin{pmatrix} J_{11}\rho^{2} & J_{12}\rho & J_{1y}\rho^{2} & J_{1z}\rho^{2} \\ J_{12}\rho & J_{22} & J_{2y}\rho & J_{2z}\rho \\ J_{1y}\rho^{2} & J_{2y}\rho & 1 & 0 \\ J_{1z}\rho^{2} & J_{2z}\rho & 0 & 1 \end{pmatrix} + (\text{higher-order terms}),\label{eq:asym_metric} \end{align} where we have defined \begin{align} J_{11} (\tau,y,z)&\coloneqq e^{2f(y,z)\tau}\left(-\kappa^{2}e^{2F_{1}(y,z)}+f^{2}(y,z)\right),\nonumber \\ J_{12}(\tau,y,z) & \coloneqq f(y,z)e^{2f(y,z)\tau},\ J_{1A}(\tau,y,z)\coloneqq \tau e^{2f(y,z)\tau}(-\kappa^{2}\partial_{A}F_{1}(y,z)e^{2F_{1}(y,z)} + f(y,z)\partial_{A}f(y,z)) \nonumber ,\\ J_{22}(\tau,y,z) & \coloneqq e^{2f(y,z)\tau},\ J_{2A}(\tau,y,z) \coloneqq \tau\partial_{A}f(y,z)e^{2f(y,z)\tau}, \end{align} and \begin{align} f (y,z)\coloneqq \frac{F_{2}(y,z)}{F_{1}(y,z)}\left(e^{F_{1}(y,z)}-1\right). \end{align} As explicit calculations show, the second term in Eq.~\eqref{eq:asym_metric} affects neither the integrability condition nor the expression of the charges. \vspace{\baselineskip} \uline{Step 5: Check the integrability condition.}\\ A straightforward but lengthy calculation shows that the integrand of Eq.~\eqref{eq_integrability} is $\mathcal{O}(\rho)$ as $\rho\to 0$ for any metric given in Eq.~\eqref{eq:asym_metric}. Therefore, the integrability condition is satisfied.
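The leading components of Eq.~\eqref{eq:asym_metric} can be reproduced by pulling back the Rindler metric along the leading part of $\chi_{(F_1,F_2)}$ (a partial check with \texttt{sympy}; only $J_{11}$, $J_{12}$ and $J_{22}$ are compared here, and the $\mathcal{O}(\rho^{2})$ tails of $\chi$ are dropped):

```python
import sympy as sp

tau, rho, y, z, kappa = sp.symbols('tau rho y z kappa', positive=True)
F1, f = sp.Function('F1')(y, z), sp.Function('f')(y, z)
coords = (tau, rho, y, z)

# Leading part of chi_{(F1,F2)}, with f = (F2/F1)(e^{F1} - 1) as defined above.
chi = [tau * sp.exp(F1), rho * sp.exp(f * tau), y, z]
# Rindler metric evaluated at the image point chi(x) (it depends only on rho).
gbar = sp.diag(-kappa**2 * chi[1]**2, 1, 1, 1)

def pullback(mu, nu):
    return sp.simplify(sum(sp.diff(chi[a], coords[mu]) * sp.diff(chi[b], coords[nu])
                           * gbar[a, b] for a in range(4) for b in range(4)))

J11 = sp.exp(2 * f * tau) * (-kappa**2 * sp.exp(2 * F1) + f**2)
J12 = f * sp.exp(2 * f * tau)
J22 = sp.exp(2 * f * tau)
assert sp.simplify(pullback(0, 0) - J11 * rho**2) == 0   # g_{tau tau}
assert sp.simplify(pullback(0, 1) - J12 * rho) == 0      # g_{tau rho}
assert sp.simplify(pullback(1, 1) - J22) == 0            # g_{rho rho}
```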
\vspace{\baselineskip} \uline{Step 6: Calculate the charges.}\\ To calculate the charges for $V_{1}, V_{2}$ defined in Eq.~\eqref{eq_vector_fields_st_sd}, we need $Q^{\tau\rho}, \Theta^{\tau}$ and $\Theta^{\rho}$ in Eq.~\eqref{eq_charge_int_along_path}. Since the integrability condition is satisfied, the parametrization of the metric in Eq.~\eqref{eq_charge_int_along_path} can be taken arbitrarily. In order to calculate the charges at the metric $g_{\mu\nu}^{(F_1,F_2)}(x)$ given in Eq.~\eqref{eq:asym_metric}, we here adopt the following parametrization: \begin{align} g_{\mu\nu}(x;\lambda) & =\frac{\partial \chi_{(\lambda F_{1},\lambda F_{2})}^{\alpha}}{\partial x^{\mu}}\frac{\partial \chi_{(\lambda F_{1},\lambda F_{2})}^{\beta}}{\partial x^{\nu}}\bar{g}_{\alpha\beta}(\chi_{(\lambda F_{1},\lambda F_{2})}(x)). \end{align} For $\lambda=1$, $ g_{\mu\nu}(x;\lambda=1)=g_{\mu\nu}^{(F_1,F_2)}(x)$, while for $\lambda =0$, $ \left(g_{\mu\nu}(x;\lambda=0)\right)=\left(\bar{g}_{\mu\nu}(x)\right)$ up to the higher-order terms in Eq.~\eqref{eq:asym_metric}, which do not affect the charges, as shown in the following. From Eq.~\eqref{eq:Komar}, we get \begin{align} Q^{\tau\rho}\left[V_{1}\right]\biggl|_{\left(g_{\mu\nu}(x;\lambda)\right)} & = \frac{ T_{1}}{8\pi G \kappa}e^{-\lambda F_{1}}\left(\kappa^{2}e^{2\lambda F_{1}}\tau + \frac{f}{2}\right)+\mathcal{O}(\rho), \\ Q^{\tau\rho}\left[V_{2}\right] \biggl|_{\left(g_{\mu\nu}(x;\lambda)\right)}& = \frac{T_{2}}{16\pi G \kappa}e^{-\lambda F_{1}}+\mathcal{O}(\rho) \end{align} as $\rho\to 0$. On the other hand, from Eq.~\eqref{eq:presymplectic_potential}, we have \begin{align} \Theta^{\tau} & =\mathcal{O}(\rho),\\ \Theta^{\rho} & =-\frac{\kappa }{8\pi G}\partial_{\lambda} (e^{\lambda F_{1}})+\mathcal{O}(\rho) \end{align} as $\rho \to 0$. Thus, the second term in Eq.~\eqref{eq:asym_metric} does not contribute to the expression of the charges.
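For $V_2$, the $\lambda$-integral reduces to the total $\lambda$-derivative of $Q^{\tau\rho}[V_2]$, since $V_2^{\tau}=\mathcal{O}(\rho^{2})$ suppresses the $\Theta$ term on the horizon; this elementary integration can be checked symbolically (a sketch with \texttt{sympy}, using only the leading $Q^{\tau\rho}[V_2]$ above):

```python
import sympy as sp

lam, T2 = sp.symbols('lambda T2')
F1 = sp.Symbol('F1', nonzero=True)
G, kappa = sp.symbols('G kappa', positive=True)

# Leading term of Q^{tau rho}[V_2] along the chosen path g(x; lambda).
Q_V2 = T2 * sp.exp(-lam * F1) / (16 * sp.pi * G * kappa)

# Integrating its lambda-derivative from 0 to 1 gives the integrand density
# T2 (e^{-F1} - 1) / (16 pi G kappa) of the charge H[V_2].
density = sp.integrate(sp.diff(Q_V2, lam), (lam, 0, 1))
assert sp.simplify(density
                   - T2 * (sp.exp(-F1) - 1) / (16 * sp.pi * G * kappa)) == 0
```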
From Eq.~\eqref{eq_charge_int_along_path}, the charges are evaluated as \begin{align} H[V_{1}] & = \frac{1}{16\pi G\kappa}\int \mathrm{d} y\mathrm{d} z\ T_{1}(y,z)\frac{F_{2}(y,z)}{F_{1}(y,z)}\left(1-e^{-F_{1}(y,z)}\right), \label{eq_charge_V_1_Rindler} \\ H[V_{2}] & =\frac{1}{16\pi G\kappa}\int \mathrm{d} y\mathrm{d} z\ T_{2}(y,z)\left(e^{-F_{1}(y,z)}-1\right), \label{eq_charge_V_2_Rindler} \end{align} where the reference point of the charges is chosen so that they vanish at the background metric, which corresponds to the case $F_1=F_2=0$. The transformation generated by the vector fields $V_1$ and $V_2$ is an example of superdilatation. Since the Rindler horizon can be interpreted as the horizon of a Schwarzschild black hole in the limit of infinite black hole mass, it would be interesting to investigate a similar asymptotic symmetry on the latter. To the authors' knowledge, the algebra of charges corresponding to the superdilatation on the horizon has been investigated neither in the Rindler spacetime nor in the Schwarzschild spacetime in prior research. \section{Summary}\label{sec:summary} In this paper, we have proposed a useful approach to construct integrable charges which form a non-trivial algebra in general spacetimes. Our approach, based on the guiding principle in Eq.~\eqref{eq_non-triviality_background}, may significantly reduce the effort involved in finding proper asymptotic conditions by trial and error in the conventional approach. In particular, a key idea of our approach is to use Eq.~\eqref{eq_non-triviality_background} to find an algebra of symmetries with a non-vanishing Poisson bracket at the background metric $\bar{g}_{\mu\nu}$. The metrics connected to the background metric through a diffeomorphism generated by the Lie algebra $\mathcal{A}$ satisfying Eq.~\eqref{eq_non-triviality_background} can be physically distinguished from each other since the Poisson brackets do not vanish.
In our analysis, we have investigated a set of metrics which are connected to a fixed background metric by diffeomorphisms generated by a Lie algebra of vector fields. Since all the metrics are diffeomorphic to the background metric, it is possible to investigate the properties of the asymptotic symmetries of the background spacetime. The set in our approach is different from that in the conventional approach, where the set of metrics is defined by their asymptotic behaviors. As an explicit example, we have analyzed the asymptotic symmetries on the Rindler horizon in $(1+3)$-dimensional Rindler spacetime. Equation~\eqref{eq_non-triviality_Rindler} is the general result of Eq.~\eqref{eq_non-triviality_background} for arbitrary vector fields evaluated at the Rindler horizon. From this formula, we can read off candidate transformations which yield a non-trivial charge algebra. It is shown that the supertranslation and superrotation on the Rindler horizon, which are known to be integrable \cite{Hotta_2016}, can be found in our approach. In addition, we have found a new class of symmetries, which generates position-dependent dilatations in time and in the direction perpendicular to the horizon. We have termed such a transformation superdilatation. For a concrete example of a superdilatation algebra, we have shown that the charges are integrable. The explicit expressions of the charges are given in Eqs.~\eqref{eq_charge_V_1_Rindler} and \eqref{eq_charge_V_2_Rindler}. Of course, our analysis here in $(1+3)$-dimensional Rindler spacetime can be directly extended to the $(1+D)$-dimensional case with any $D\geq 2$. It remains an open problem whether there are such dilatation-like asymptotic symmetries in other setups. It will also be interesting to investigate whether known results can be reproduced with our approach, e.g., a class of dilatations at null infinity of asymptotically flat spacetime \cite{Haco2017}.
So far, we have started with two vector fields $V_1$ and $V_2$ satisfying Eq.~\eqref{eq_integrability_background} and constructed a minimal Lie algebra $\mathcal{A}$ spanned by the vector fields and their commutators. This approach enables us to find building blocks of the asymptotic symmetries. To proceed with the classification of symmetries in general relativity, it will be quite interesting to investigate how the charge algebra changes by adding other elements to $\mathcal{A}$. It is also interesting to derive a condition under which Eq.~\eqref{non_triviality} holds at a particular metric $g_{\mu\nu}$ but not at the background metric $\bar{g}_{\mu\nu}$. Although we have used our approach to analyze the asymptotic symmetries on the Rindler horizon, it is applicable to an arbitrary spacetime. For background spacetimes without symmetry, it may turn out that the left-hand side of Eq.~\eqref{eq_non-triviality_background} vanishes, suggesting that there is no asymptotic symmetry. We expect that our approach will be helpful to investigate other important spacetimes, such as black holes, the de Sitter spacetime and the anti-de Sitter spacetime. \begin{acknowledgments} The authors thank Ursula Carow-Watamura, Hiroyuki Kitamoto, Kohei Miura, Kengo Shimada, Naoki Watamura, Satoshi Watamura, Masaki Yamada and Kazuya Yonekura for useful discussions. This research was partially supported by JSPS KAKENHI Grants No. JP18J20057 (K.Y.), No. JP19K03838 (M.H.) and No. 21H05188 (M.H.), and by the Graduate Program on Physics for the Universe of Tohoku University (T.T. and K.Y.). \end{acknowledgments}
\section{Introduction} \graphicspath{{figures/}} The question of how we perceive the world around us has been an intriguing topic since ancient times. For example, we can consider the philosophical debate around the concept of \emph{entelechy}, which started with the early studies of the Aristotelian school to answer this question, while, on the side of phenomenology and its relation to the natural sciences, we can think of the theory initiated by Husserl. A well-known and accepted theory of perception is the one formulated within Gestalt psychology \cite{wertheimer1938laws, kohler1992gestal}. Gestalt psychology is a theory for understanding the principles underlying the configuration of local forms giving rise to a meaningful global perception. The main idea of Gestalt psychology is that the mind constructs the whole by grouping similar fragments rather than simply summing the fragments as if they were all different. In terms of visual perception, such similar fragments correspond to point stimuli with the same (or very similar) values of features of the same type. As an enlightening example from vision science, we tend to group same-colored objects in an image and to perceive them as an ensemble rather than together with the objects of different colors. There have been many psychophysical studies which attempted to provide quantitative parameters describing the tendencies of the mind in visual perception based on Gestalt psychology. A particularly important one is the pioneering work of Field \textit{et al.} \cite{field1993contour}, where the authors proposed a representation, called the \emph{association field}, modelling specific Gestalt principles. Furthermore, they also showed that the brain is more likely to perceive together fragments which are similarly oriented and aligned along a curvilinear path than fragments whose orientations change rapidly.
The presented model for neuronal activity is a geometrical abstraction of the orientation-sensitive V1 hypercolumnar architecture observed by Hubel and Wiesel \cite{hubel1959receptive, hubel1962receptive, hubel1963shape}. This abstraction generates a good \emph{phenomenological} approximation of the V1 neuronal connections existing in the hypercolumnar architecture, as reported by Bosking \textit{et al.} \cite{bosking1997orientation}. In this framework, the corresponding projections of the neuronal connections in V1 onto a 2D image plane are considered to be the association fields described above, and the neuronal connections are modeled as the horizontal integral curves generated by the model geometry. The projections of such horizontal integral curves were shown to produce a close approximation of the association fields, see Figure~\ref{fig:association}. For this reason, the approach considered by Citti, Petitot and Sarti and used in this work is referred to as cortically inspired. We remark that the presented model for neural activity is a phenomenological model which provides a mathematical understanding of early perceptual mechanisms at the cortical level by starting from the very structure of receptive profiles. Nevertheless, it has been very useful for many image-processing applications, see for example \cite{BCGPR18, zhangNumerical2016}. \begin{figure}[th] \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\linewidth]{figures/association1.png} \caption{Association fields} \label{fig:association1} \end{subfigure} \hspace{2.5cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\linewidth]{figures/association2.png} \caption{Projected horizontal integral curves} \label{fig:association2} \end{subfigure} \caption{Projections of horizontal integral curves approximate the association fields from the experiment of Field, Hayes and Hess \cite{field1993contour}.
They are generated by the sub-Riemannian model geometry proposed by Citti and Sarti \cite{citti2006cortical}. Figures are adapted from \cite{field1993contour} and \cite{citti2006cortical}.} \label{fig:association} \end{figure} In this work, we follow this approach for a better understanding of the visual perception biases due to visual distortions, often referred to as visual illusions. Visual illusions are described as mismatches between reality and its visual perception. They result either from a neural conditioning introduced by external agents such as drugs, microorganisms or tumors \cite{levi1990visual,gaillard2003persisting}, or from self-inducing mechanisms evoking visual distortions via proper neural functionality applied to specific stimuli \cite{hine1995illusion,prinzmetal2001ponzo}. The latter type of illusion is due to the effects of neurological and biological constraints on the visual system \cite{purves2008visual}. In this work, we consider illusions induced by contrast induction and orientation misalignments, with a particular focus on the well-known Poggendorff illusion and its variations, see Figure \ref{fig:Poggendorff}. This is a geometrical optical illusion \cite{westheimer2008illusions,Weintraub1971} in which a misaligned oblique perception is induced by the presence of a central bar \cite{day1976components}. \begin{figure}[htb!] \centering \includegraphics[height=5cm]{figures/Poggendorff_illusion_wiki.png} \caption{The original Poggendorff illusion: The red colored line is aligned with the black line, although the blue one is falsely perceived as its continuation.
Source: Wikipedia.} \label{fig:Poggendorff} \end{figure} \subsection{The functional architecture of the primary visual cortex} \label{sec:V1} It has been known since the celebrated experiments of Hubel and Wiesel \cite{hubel1959receptive, hubel1962receptive, hubel1963shape} that neurons (simple cells) in the primary visual cortex (V1) perform boundary (hence orientation) detection and propagate their activations through the cortical connectivity, in accordance with the psychophysical results of Field, Hayes and Hess \cite{field1993contour}. Hubel and Wiesel showed that simple cells have a spatial arrangement based on the so-called \emph{hypercolumns} in V1. In this arrangement, simple cells which are sensitive to different orientations at the same retinal location are found in the same vertical column constructed on the cortical surface, while adjacent columns contain simple cells sensitive to nearby retinal positions. Several models have been proposed to describe the functional architecture of V1 and the neural connectivity within it. Koenderink et al.\ \cite{koenderink1984structure, koenderink1987representation} focused on differential geometric approaches to study the visual space, modelling the invariance of simple cells with respect to suitable symmetries in terms of a family of Gaussian functions. Hoffman \cite{hoffman1970higher, hoffman1989visual} provided the basic framework of vision models by interpreting the hypercolumn architecture of V1 as a fiber bundle. Following a similar reasoning, Petitot and Tondut \cite{petitot1999vers} further developed this modelling, providing a new model, coherent with both the structure of orientation-sensitive simple cells and the long-range neural connectivity between V1 simple cells. In their model, they first observed that the simple cell orientation selectivity induces a contact geometry (associated with the first Heisenberg group) rendered by the fibers of orientations.
Moreover, they showed that a specific family of curves, found via a constrained minimization approach in the contact geometry, fits the aforementioned association fields reported by Field et al.\ \cite{field1993contour}. In \cite{citti2006cortical, citti2014neuromathematics}, Citti and Sarti further developed the model of Petitot and Tondut by introducing a group-based approach, later refined by Boscain, Gauthier \textit{et al.} \cite{BDGR12, boscainHypoelliptic2014}. See also the monograph \cite{PGbook}. The so-called Citti-Petitot-Sarti (CPS) model exploits the natural sub-Riemannian (sR) structure of the group of rotations and translations $\text{SE}(2)$ as the V1 model geometry. In this framework, simple cells are modelled as points of the three-dimensional space $\mathcal M = \mathbb{R}^2\times \mathbb P^1$. Here, $\mathbb P^1$ is the projective line, obtained by identifying antipodal points in $\mathbb S^1$. The response of simple cells to two-dimensional visual stimuli is identified by lifting them to $\mathcal M$ via a Gabor wavelet transform. Neural connectivity is then modelled in terms of \emph{horizontal integral curves} given by the natural sub-Riemannian structure of $\mathcal M$. Activity propagation along neural connections can further be modelled in terms of diffusion and transport processes along the horizontal integral curves. In recent years, the CPS model has been exploited as a framework for several cortical-inspired image processing problems by various researchers. We mention the large corpus of literature by Duits \textit{et al.}, see, e.g., \cite{duits2009line, duits2010left1, duits2010left2}, and the state-of-the-art image inpainting and image recognition algorithms developed by Boscain, Gauthier, \textit{et al.} \cite{bohiFourier2016,BCGPR18}.
Some extensions of the CPS model geometry and its applications to other image processing problems can be found in \cite{bekkers2014multi, barbieri2014cortical, citti2016sub, baspinar2018geometric, janssen2018design, franceschiello2018neuromathematical,lafarge2020roto, baspinar2020sub}. \subsection{Mean-field neural dynamics \& visual illusions} Understanding neural behaviors is in general a very challenging task. Reliable responses to stimuli are typically measured at the level of population assemblies composed of a large number of coupled cells. This motivates reducing, whenever possible, the dynamics of a neuronal population to a neuronal mean-field model, which describes the large-scale dynamics of the population as the number of neurons goes to infinity. These mean-field models, inspired by the pioneering work of Wilson and Cowan \cite{wilson1972excitatory, wilson1973mathematical} and Amari \cite{amari1977dynamics}, are low dimensional in comparison to the corresponding large-scale population networks. Yet, they capture the same dynamics underlying the population behaviors. In the framework of the CPS model for V1 discussed above, several mathematical models have been proposed to describe the neural activity propagation favoring the creation of visual illusions, including Poggendorff-type illusions. In \cite{franceschiello2018neuromathematical}, for instance, illusions are identified with suitable strain tensors, responsible for the perceived displacement from the grey levels of the original image. In \cite{franceschiello2019geometrical}, illusory patterns are identified by a suitable modulation of the geometry of $\text{SE}(2)=\mathbb{R}^2\times S^1$ and computed as the associated geodesics via the fast-marching algorithm.
In \cite{bertalmio2020cortical,bertalmio2020visual,SSVMproceeding2019}, a variant of the Wilson-Cowan (WC) model, based on a variational principle and adapted to the $\mathcal M$ geometry of V1, was employed to model the neuronal activity and generate illusory patterns for different illusion types. The modelling considered in these works is strongly inspired by the integro-differential model first studied in \cite{bertalmio2007perceptual} for perception-inspired Local Histogram Equalization (LHE) techniques and later applied in a series of works, see, e.g., \cite{bertalmio2009implementing,BertalmioFrontiers2014}, for the study of contrast and assimilation phenomena. By further incorporating a cortical-inspired modelling, the authors showed in \cite{bertalmio2020cortical,bertalmio2020visual,SSVMproceeding2019} that cortical LHE models are able to replicate visual misperceptions induced not only by local contrast changes, but also by orientation-induced biases similar to the ones in Figure \ref{fig:Poggendorff}. Interestingly, the cortical LHE model \cite{bertalmio2020cortical,bertalmio2020visual,SSVMproceeding2019} was further shown to outperform both standard and cortical-inspired WC models, and was rigorously shown to correspond to the minimization of a variational energy, which suggests more efficient representation properties \cite{Attneave1954,Barlow1961}. One major limitation of the modelling considered in these works is the use of neuronal interaction kernels (essentially, isotropic 3D Gaussians) which are not compatible with the natural sub-Riemannian structure of V1 proposed in the CPS model. \subsection{Main contributions} In this work, we encode the sub-Riemannian structure of V1 into both WC and LHE models by using a sub-Laplacian procedure associated with the geometry of the space $\mathcal M$ described in Section \ref{sec:V1}.
As in \cite{bertalmio2020cortical,bertalmio2020visual,SSVMproceeding2019}, the lifting procedure associating to a given two-dimensional image the corresponding neuronal response in $\mathcal M$ is performed by means of all-scale cake wavelets, introduced in \cite{duits2005perceptual,duits2007invertible}. A suitable gradient-descent algorithm is applied to compute the stationary states of the neural models. Within this framework, we study the family of Poggendorff visual illusions induced by local contrast and orientation alignment of the objects in the input image. In particular, we aim to reproduce such illusions with the proposed models in a way which is qualitatively consistent with the psychophysical experience. Our findings show that it is possible to reproduce Poggendorff-type illusions with both the sR cortical-inspired WC and LHE models. Compared with the results in \cite{bertalmio2020cortical, bertalmio2020visual}, where the cortical WC model endowed with a Riemannian (isotropic) 3D kernel was shown to fail to reproduce Poggendorff-type illusions, this shows that employing the natural sub-Laplacian in the computation of the flows improves the capability of these cortical-inspired models to reproduce orientation-dependent visual illusions. \section{Cortical-inspired modelling} \label{sec:cortical_modelling} In this section we recall the fundamental features of CPS models. The theoretical criterion underpinning the model relies on the so-called neurogeometrical approach introduced in \cite{petitot1999vers, citti2006cortical,sarti2008symplectic}. According to this approach, the functional architecture of V1 is modelled by a geometrical structure inspired by its neural connectivity. \subsection{Receptive profiles} A simple cell is characterized by its \emph{receptive field}, which is defined as the domain of the retina to which the simple cell is sensitive.
Once a receptive field is stimulated, the corresponding retinal cells generate spikes which are transmitted to V1 simple cells via retino-geniculo-cortical paths. The response function of each simple cell to a spike is called its \emph{receptive profile} (RP), denoted by $\psi_{(\zeta,\theta)}:Q\to \mathbb C$. It is essentially the impulse response function of a V1 simple cell, i.e., the measurement of the response of the corresponding V1 simple cell to a stimulus at a point\footnote{Note that we omit the coordinate maps between the image plane and retina surface, and the retinocortical map from the retina surface to the cortical surface. In other words, we assume that the image plane and the retinal surface are identical and denote both by $Q\subset\mathbb{R}^2$.} $\zeta=(x,y)\in Q$. In this study we assume the response of simple cells to be linear. That is, for a given visual stimulus $f:Q\to \mathbb{R}$ we assume the response of the simple cell at V1 coordinates $(\zeta,\theta)$ to be \begin{equation}\label{eq:firstEq} a_0(\zeta,\theta) = \langle f, \psi_{(\zeta,\theta)} \rangle_{L^2(Q)} =\int_Q \psi_{(\zeta,\theta)}(u)f(u)\,du. \end{equation} This procedure defines the cortical stimulus $a_0:\mathcal M\to \mathbb C$ associated with the image $f$. We note that receptive field models consisting of cascades of linear filters and static non-linearities, although not perfect, may be more adequate to account for responses to stimuli \cite{koenderink1987representation, bekkers2018roto, lindeberg2013computational}. Several mechanisms, such as, e.g., response normalization, gain control, cross-orientation suppression, or intra-cortical modulation, might intervene to change radically the shape of the profile. Therefore, the above static and linear model for the receptive profiles should be considered as a first approximation of the complex behavior of a real dynamic receptive profile, which cannot be perfectly described by static wavelet frames.
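The linear response model \eqref{eq:firstEq} amounts to correlating the stimulus with the receptive profile of each cell. A minimal numpy sketch, using a Gabor profile as the RP (one of the candidate profiles discussed below; all filter parameters here are illustrative, not fitted to physiology):

```python
import numpy as np

def gabor_rp(theta, size=32, sigma=4.0, freq=0.15):
    """Illustrative Gabor receptive profile psi_(zeta,theta) on a square patch."""
    c = size // 2
    y, x = np.mgrid[-c:size - c, -c:size - c]
    xr = x * np.cos(theta) + y * np.sin(theta)  # carrier oscillates along xr
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def response(f, theta):
    """a_0(zeta, theta) = <f, psi_(zeta,theta)> for the cell centered on the patch."""
    psi = gabor_rp(theta, size=f.shape[0])
    return np.vdot(psi, f)  # discrete L^2 inner product (profile conjugated)

# a vertical step edge drives the theta = 0 cell much more than the theta = pi/2 one
f = (np.mgrid[0:32, 0:32][1] > 16).astype(float)
r0, r90 = abs(response(f, 0.0)), abs(response(f, np.pi / 2))
```

The response is linear in $f$ by construction; in the present work, Gabor profiles are replaced by cake wavelets, which additionally allow exact reconstruction.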
Regarding the form of the RP, in \cite{citti2006cortical}, a simplified basis of Gabor functions was proposed as a good candidate for modelling the position-orientation sensitive receptive profiles, for neuro-physiological reasons \cite{daugman1985uncertainty, barbieri2012uncertainty}. This basis has then been extended to take into account additional features such as scale \cite{sarti2008symplectic}, velocity \cite{barbieri2014cortical}, and frequency-phase \cite{baspinar2020sub}. On the other hand, Duits et al.\ \cite{duits2007invertible} proposed the so-called \emph{cake kernels} as a good alternative to Gabor functions, and showed that cake kernels are adequate for obtaining the simple cell output responses used to perform certain image processing tasks, such as image enhancement and completion based on sR diffusion processes. In this study, we employ cake kernels as models of the position-orientation RPs to obtain the initial simple cell output responses to an input image, and we use the V1 model geometry $\mathcal M$ to represent the output responses. We model the activity propagation along the neural connectivity by combining a diffusion process based on the natural sub-Laplacian with a Wilson-Cowan type integro-differential system. \subsection{Horizontal connectivity and sub-Riemannian diffusion} \label{sec:connectivity} Neurons in V1 present two types of connections: local and lateral. Local connections connect neurons belonging to the same hypercolumn. Lateral connections, on the other hand, account for the connectivity between neurons belonging to different hypercolumns, along a specific direction. In the CPS model these are represented\footnote{This expression does not yield smooth vector fields on $\mathcal M$. Indeed, e.g., $X_1(\zeta, 0)=-X_1(\zeta,\pi)$ even though $0$ and $\pi$ are identified in $\mathbb P^1$.
Although in the present application such a difference is inconsequential, since we are only interested in the direction (which is smooth) and not in the orientation, this problem can be solved by defining $X_1$ in an appropriate atlas for $\mathcal M$ \cite{BDGR12}.} by the vector fields \begin{equation}\label{eq:horizontal_VFs} X_1 = \cos\theta\partial_x + \sin\theta\partial_y, \qquad X_2=\partial_\theta. \end{equation} The above observations lead to modelling the dynamics of the neuronal excitation $\{Z_t\}_{t\ge 0}$ starting from a neuron $(\zeta,\theta)$ via the following stochastic differential equation \begin{equation} dZ_t = X_1 du_t + X_2 dv_t, \qquad Z_0 = (\zeta,\theta), \end{equation} where $u_t$ and $v_t$ are two one-dimensional independent Wiener processes. As a consequence, in \cite{BDGR12} the cortical stimulus $a_0$ induced by a visual stimulus $f_0$ is assumed to evolve according to the Fokker-Planck equation \begin{equation}\label{eq:heat-sr} \partial_t \psi = \mathcal L \psi, \qquad \mathcal L = X_1^2+\beta^2X_2^2. \end{equation} Here, $\beta>0$ is a constant encoding the coherence between the units of the spatial and orientation dimensions. The operator $\mathcal L$ is the sub-Laplacian associated with the sub-Riemannian structure on $\mathcal M$ with orthonormal frame $\{X_1,X_2\}$, as presented in \cite{citti2006cortical, BDGR12}. It is worth mentioning that this operator is not elliptic, since $\{X_1,X_2\}$ is not a basis of $T\mathcal M$. However, $\operatorname{span}\{X_1,X_2,[X_1,X_2]\}=T\mathcal M$. Hence, $\{X_1,X_2\}$ satisfies the Hörmander condition and $\mathcal L$ is a hypoelliptic operator \cite{hormander1967hypoelliptic}, which models the activity propagation between neurons in V1 as a diffusion concentrated in a neighborhood of the (horizontal) integral curves of $X_1$ and $X_2$. A direct consequence of hypoellipticity is the existence of a smooth kernel for \eqref{eq:heat-sr}.
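The stochastic interpretation above can be explored by direct sampling; a minimal Euler-Maruyama sketch (step sizes, horizon and the value of $\beta$ are illustrative). The empirical law of the sampled endpoints approximates the smooth heat kernel discussed next.

```python
import numpy as np

def sample_endpoints(zeta0=(0.0, 0.0), theta0=0.0, beta=1.0,
                     n_steps=100, dt=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama for dZ = X1 du + X2 dv started at (zeta0, theta0):
    spatial noise acts only along (cos theta, sin theta), angular noise along theta."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, zeta0[0])
    y = np.full(n_paths, zeta0[1])
    th = np.full(n_paths, theta0)
    s = np.sqrt(dt)
    for _ in range(n_steps):
        du = s * rng.standard_normal(n_paths)  # increment of u_t, drives X1
        dv = s * rng.standard_normal(n_paths)  # increment of v_t, drives X2
        x += np.cos(th) * du
        y += np.sin(th) * du
        th = (th + beta * dv) % np.pi          # orientations live in P^1
    return x, y, th

# started at theta = 0, the short-time spread is much larger along x than along y:
# the diffusion concentrates around the horizontal directions
x, y, th = sample_endpoints()
```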
That is, there exists a function $(t,\xi,\nu)\in \mathbb{R}_+\times\mathcal M\times\mathcal M\mapsto k_t(\xi,\nu)$ such that the solution of \eqref{eq:heat-sr} with initial datum $a_0$ reads \begin{equation}\label{eq:evolution} \psi(t, \xi) = e^{t\mathcal L} a_0(\xi) = \int_{\mathcal M} k_t(\xi,\nu)a_0(\nu)\,d\nu. \end{equation} An analytic expression for $k_t$ can be derived in terms of Mathieu functions \cite{duits2010left1, zhangNumerical2016}. This expression is however cumbersome to manipulate, and it is usually more efficient to resort to different schemes for the numerical implementation of \eqref{eq:heat-sr}, see, e.g.,~Section~\ref{sec:Algorithms}. \subsection{Reconstruction on the retinal plane} Activity propagation evolves the lifted visual stimulus in time. In order to obtain a meaningful result on the two-dimensional image plane, we have to transform the evolved lifted image back to it. We achieve this by using the projection given by \begin{equation}\label{eq:reconstruction} f(\zeta,T) = \int_0^\pi a(\zeta,\theta,T)\,d\theta, \end{equation} where $f: \mathbb{R}^2\times (0,T]\rightarrow \mathbb{R}$ and $0<T<\infty$ denote the processed image and the final time of the evolution, respectively. One easily checks that this formula yields $f(\cdot, 0) = f_0$ under the assumption \begin{equation} \int_{0}^\pi \psi_{(\zeta,\theta)}(u)\,d\theta = \delta_\zeta(u), \end{equation} where $\delta_\zeta$ denotes the Dirac delta centered at $\zeta$. \section{Describing neuronal activity via Wilson-Cowan-type models} \label{sec:wc_models} In neurophysiological experiments, reliable neural responses to visual stimuli are generally observed at the neuronal population level: the information processing and the response produced are obtained by integrating the individual dynamics of the neurons interacting within the population.
Modelling neuronal populations can be done via coupled differential systems (networks) consisting of a large number of equations, whose average behavior can in principle be used to represent the population behavior. This requires high computational power and challenging analytical approaches, due to the high dimension of the network. A different, mesoscopic, approach consists in considering the average network behavior as the number of neurons in the network tends to infinity. The asymptotic limit of the network can thus be written in terms of the probability distribution (density) of the state variables. This asymptotic limit is the so-called mean-field limit. It has been successfully used as a reference framework in several papers, see, e.g., \cite{Faugeras2009,Bressloff2002,WCreview2009}, and will also be the approach considered in this work. \subsection{Wilson-Cowan (WC) model} Let $a(\zeta,\theta, t)$ denote the evolving activity of the neuronal population located at $\zeta\in\mathbb{R}^2$ and sensitive to the orientation $\theta\in\mathbb{P}^1$ at time $t\in(0,T]$. By using the shorthand notation $\xi=(\zeta,\theta),\; \eta=(\nu, \phi)\in \mathcal M$, the Wilson-Cowan (WC) model on $Q\subset\mathbb{R}^2$ can be written as follows: \begin{equation}\label{eq:WCRef} \partial_t a(\xi,t) = -(1+\lambda) a(\xi,t)+ \frac{1}{2M}\int_{Q\times [0,\pi)}\omega_{\xi}(\eta)\sigma\Big( a(\eta,t) \Big )d\eta + \lambda a_0(\xi)+\mu(\xi). \end{equation} Here $\mu:\mathcal M\to \mathbb{R}$ is a smoothed version of the simple cell output response $a_0$, obtained via a Gaussian filtering, while $\lambda>0$ and $M>0$ are fixed positive constants. Following the standard formulation of WC models studied, e.g., in \cite{Faugeras2009,sarti2015constitution}, the role of the time-independent external stimulus $h:Q\times[0,\pi)\to \mathbb{R}$ is played here by $h(\xi):= \lambda a_0(\xi)+\mu(\xi)$, while the decay and interaction-strength parameters of those formulations correspond here to $1+\lambda$ and $1/(2M)$, respectively.
The function $\sigma:\mathbb{R}\to [-1, 1]$ stands for a nonlinear saturation function, which we choose as the sigmoid: \begin{equation}\label{eq:sigma} \sigma(r) := -\min\Big(1, \max(\alpha (r-1/2),\,-1) \Big),\quad \alpha>1. \end{equation} The connectivity kernel $\omega_{\xi}$ models the interaction between neurons in $\mathcal M$. Its definition should thus take into account the different types of interactions happening between connected neurons in V1; e.g., it should model at the same time both local and lateral connections, via the sub-Riemannian diffusion described in Section \ref{sec:connectivity}. In \cite{bertalmio2020cortical,bertalmio2020visual} the authors showed that \eqref{eq:WCRef} does not arise from a variational principle. That is, there exists no energy functional $E:L^2(\mathcal M)\to \mathbb R$ such that \eqref{eq:WCRef} can be recast as the problem \begin{equation}\label{eq:variationalFormula} \partial_t a (\xi,t)= -\nabla E (a(\xi,t)),\qquad a(\xi,0)=a_0 = L f_0. \end{equation} Under such a formulation, stationary states $a^*$ are (local) minima of $E$. The interest of considering an evolution model following a variational principle in the sense of \eqref{eq:variationalFormula} lies in its connection with the optimization-based approaches considered in \cite{Olshausen2000}, which describe the efficient coding problem as an energy minimization problem involving natural image statistics and biological constraints that force the final solution to show the least possible redundancy. Under this interpretation, the non-variational model \eqref{eq:WCRef} is suboptimal in reducing redundant information in visual stimuli, see \cite[Section 2.1]{bertalmio2020cortical} for more details.
\subsection{Local Histogram Equalization (LHE) model} In order to build a model which complies with the efficient neural coding described above, the authors of \cite{bertalmio2020cortical,bertalmio2020visual} showed that \eqref{eq:WCRef} can be transformed into a variational problem by replacing the term $\sigma(a(\eta,t))$ with $\hat{\sigma}(a(\xi, t)- a(\eta,t))$ for a suitable choice of the nonlinear sigmoid function $\hat{\sigma}$, thus enforcing non-linear activations on local contrast rather than on local activity. The corresponding model reads: \begin{equation}\label{eq:LHERef} \partial_t a(\xi,t) = -(1+\lambda)a(\xi,t) + \frac{1}{2M}\int_{Q\times [0,\pi)}\omega_{\xi}(\eta)\hat{\sigma}\Big( a(\xi,t) - a(\eta, t) \Big )d\eta + \lambda a_0(\xi)+\mu(\xi), \end{equation} where $\hat{\sigma}(r) := -\sigma ( r+1/2 )$, with $\sigma$ as in \eqref{eq:sigma}. This model was first introduced in \cite{bertalmio2007perceptual} as a variational reformulation of the Local Histogram Equalization (LHE) procedure for RGB images. The corresponding energy $E:L^2(\mathcal M)\to \mathbb R$ for which \eqref{eq:variationalFormula} holds is: \begin{multline}\label{eq:lhe-energy} E(a) = \frac{\lambda}{2} \int_{Q\times[0,\pi)}|a(\xi)- a_0(\xi) |^2\,d\xi + \frac{1}{2} \int_{Q\times[0,\pi)}|a(\xi)- \mu(\xi) |^2\,d\xi\\ +\frac1{2M}\int_{Q\times[0,\pi)}\int_{Q\times[0,\pi)}\omega_\xi(\eta) \Sigma\left( a(\xi)-a(\eta)\right)\,d\xi\,d\eta , \end{multline} where $\Sigma:\mathbb R\to \mathbb R$ is any (even) primitive function of $\hat{\sigma}$. As is clear from \eqref{eq:lhe-energy}, the local histogram equalization properties of the model are due to the averaging of activations localized by the kernel $\omega_\xi$, which should thus be adapted to the natural geometry of $\mathcal M$ (see Section \ref{sec:kernel} for a more detailed discussion).
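Both nonlinearities are clipped affine functions and are straightforward to implement; the following sketch checks that $\hat\sigma(r)=-\sigma(r+1/2)$ reduces to the odd clipping $r\mapsto\min(1,\max(\alpha r,-1))$, so that it indeed acts symmetrically on local contrast (the value of $\alpha$ is illustrative):

```python
import numpy as np

ALPHA = 5.0  # any alpha > 1

def sigma(r):
    """sigma(r) = -min(1, max(alpha*(r - 1/2), -1)), as defined above."""
    return -np.minimum(1.0, np.maximum(ALPHA * (r - 0.5), -1.0))

def sigma_hat(r):
    """sigma_hat(r) = -sigma(r + 1/2): the saturation applied to local contrast."""
    return -sigma(r + 0.5)

r = np.linspace(-2.0, 2.0, 401)
clipped = np.clip(ALPHA * r, -1.0, 1.0)   # odd, saturating in [-1, 1]
```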
\subsection{A sub-Riemannian choice of the interaction kernel $\omega_\xi$} \label{sec:kernel} In \eqref{eq:WCRef} and \eqref{eq:LHERef}, the geometric structure of the underlying space $\mathcal{M}$ is captured by the connectivity kernel $\omega_\xi$, which characterizes the activity propagation along neural connections in V1. In \cite{bertalmio2020cortical, bertalmio2020visual}, simple 3-dimensional Gaussian-type kernels were considered. In those works, this choice was shown to suffice to reproduce a large number of contrast- and orientation-dependent Poggendorff-like illusions via the LHE model \eqref{eq:LHERef}, but not via the WC model \eqref{eq:WCRef}. Here, motivated by the discussion in Section~\ref{sec:connectivity}, we study the effect of a more natural choice for the interaction kernel $\omega_\xi$, which we set as $\omega_\xi(\eta) = k_\tau(\xi,\eta)$, where $k_\tau:\mathcal M\times\mathcal M\to \mathbb R$ is the sub-Riemannian heat kernel evaluated at time $\tau>0$. Indeed, 3-dimensional isotropic Gaussian kernels, which are obtained via the Euclidean heat equation, are not coherent with the intrinsically anisotropic neuronal connectivity structure of V1. Recalling \eqref{eq:evolution}, this choice of $\omega_\xi$ allows us to rewrite the WC equations \eqref{eq:WCRef} as \begin{equation}\tag{sR-WC}\label{eq:sr-wc} \partial_t a(\xi,t) = -(1+\lambda) a(\xi,t)+ \frac{1}{2M} e^{\tau\mathcal L}\left[\sigma\left( a(\cdot,t) \right)\right](\xi) + \lambda a_0(\xi)+\mu(\xi). \end{equation} Using this formulation, the evaluation of the interaction term at a point $(\xi,t)\in\mathcal{M}\times (0,T]$ can be done by solving the sub-Riemannian heat equation and letting it evolve for a certain inner time $\tau>0$. This avoids dealing directly with the explicit expression of $k_\tau$, whose numerical implementation is very delicate, as explained, e.g., in \cite{zhangNumerical2016}.
A similar simplification is not readily available for the LHE equation \eqref{eq:LHERef}, due to the dependence on $\xi$ of the integrand function. In this setting, we follow the discussion in \cite{bertalmio2007perceptual} and replace the non-linearity $\hat{\sigma}$ by a polynomial approximation of sufficiently large order $n$. Namely, we look for a polynomial approximation of $\hat\sigma$ of the form $\hat\sigma(r) \approx c_0 + c_1 r + \ldots + c_n r^n$, which, by the binomial theorem, allows us to write \begin{equation}\label{eq:RTerm} \begin{split} \hat{\sigma} \Big ( a(\xi,t)- a(\eta,t) \Big ) & \approx \sum_{i=0}^n\underbrace{\Big [ (-1)^{i}\sum_{j=i}^{n} c_j\binom{j}{i}a^{j-i}(\xi,t)\Big ]}_{=: C_i(\xi, t)} a^i(\eta,t) \\ & = \sum_{i=0}^n C_i(\xi, t)a^i(\eta,t). \end{split} \end{equation} This allows us to approximate the interaction term in~\eqref{eq:LHERef} as \begin{equation}\label{eq:approxFilteringLHE} \begin{split} \int_{Q\times [0,\pi)}k_\tau(\xi,\eta)\hat{\sigma}\Big( a(\xi,t) - a(\eta, t) \Big )d\eta &\approx \sum_{i=0}^nC_i(\xi,t)\int_{Q\times [0,\pi)}k_\tau(\xi,\eta)a^i(\eta, t)\,d\eta\\ &= \sum_{i=0}^nC_i(\xi,t) \: e^{\tau \mathcal{L}}\left[a^i(\cdot, t)\right](\xi). \end{split} \end{equation} Finally, the resulting (approximated) sub-Riemannian LHE equation reads: \begin{equation} \tag{sR-LHE}\label{eq:sr-lhe} \partial_t a(\xi,t) = -(1+\lambda) a(\xi,t)+ \frac{1}{2M} \sum_{i=0}^nC_i(\xi,t) \: e^{\tau\mathcal{L}}\left[a^i(\cdot, t)\right](\xi) + \lambda a_0(\xi)+\mu(\xi). \end{equation} \section{Discrete modelling and numerical realisation}\label{sec:Algorithms} In this section, we report a detailed description of how models \eqref{eq:sr-wc} and \eqref{eq:sr-lhe} can be formulated in a fully discrete setting, providing, in particular, some insights on how the sub-Riemannian evolution can be realised.
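As a first insight, the re-expansion of $\hat\sigma(a(\xi,t)-a(\eta,t))$ into the coefficients $C_i$ of \eqref{eq:RTerm} is a direct application of the binomial theorem and can be checked numerically; in this sketch the polynomial coefficients $c_j$ are arbitrary placeholders rather than an actual fit of $\hat\sigma$:

```python
import numpy as np
from math import comb

def C_coeffs(c, x):
    """C_i(x) such that sum_j c_j (x - y)^j = sum_i C_i(x) y^i (binomial theorem):
    C_i(x) = (-1)^i * sum_{j >= i} c_j * binom(j, i) * x^(j - i)."""
    n = len(c) - 1
    return [(-1) ** i * sum(c[j] * comb(j, i) * x ** (j - i)
                            for j in range(i, n + 1))
            for i in range(n + 1)]

c = [0.0, 1.5, 0.0, -0.7]                 # placeholder odd cubic approximation
poly = lambda r: sum(cj * r ** j for j, cj in enumerate(c))
x, y = 0.8, -0.3                          # stand-ins for a(xi, t) and a(eta, t)
lhs = poly(x - y)
rhs = sum(Ci * y ** i for i, Ci in enumerate(C_coeffs(c, x)))
```

After this re-expansion, only the powers $a^i(\eta,t)$ need to be diffused, which is what makes the interaction term of \eqref{eq:sr-lhe} computable via $n+1$ heat steps per iteration.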
We further add a self-contained section regarding the gradient-descent algorithm used to perform the numerical experiments reported in Section \ref{sec:experiments}; for more details see \cite{SSVMproceeding2019,bertalmio2020cortical}. \subsection{Discrete modelling and lifting procedure via cake wavelets} \label{sec:Discrete_Gabor_coeff} The sub-Riemannian diffusion $\operatorname{e}^{\tau \mathcal{L}}$ is discretised with final time $\tau = m\Delta\tau$, where $m$ and $\Delta\tau$ denote the number of iterations and the time-step, respectively. For $N\in\mathbb{N}^+$ and $\Delta x,\Delta y\in \mathbb{R}^+$ denoting the spatial sampling sizes, we then discretise the given gray-scale image function $f_0$ associated to the retinal stimulus on a uniform square spatial grid $Q:=\{ (x_i,y_j) = (i\Delta x, j\Delta y): i,j = 1,2,\dots,N\}\subset\mathbb{R}^2$ and denote, for each $i,j= 1,2,\dots,N$, the brightness value at the point $\zeta_{i,j}:=(x_i,y_j)\in Q$ by \begin{equation} \label{eq:discrete_f0} F_0[i,j]=f_0(x_i, y_j)=f_0(\zeta_{i,j}). \end{equation} As far as the orientation sampling is concerned, we use a uniform orientation grid with points $\Theta:=\{\theta_k := k\Delta\theta, k=1,\dots, K\}$, for $K\in\mathbb{N}^+$ and $\Delta\theta=\pi/K$. We can then define the discrete version of the simple cell response $a_0(x_i, y_j, \theta_k)$ to the visual stimulus located at $\zeta_{i,j}\in Q$ with local orientation $\theta_k\in\Theta$ at time $t=0$ of the evolution as \begin{equation}\label{eq:discrete-lift} A_0[i,j,k]=a(x_i,y_j,\theta_k, 0) = a(\zeta_{i,j},\theta_k, 0) = (L f_0)_{i,j,k}, \end{equation} where $L$ denotes the lifting operator, mapping functions defined on $Q$ to functions defined on $Q\times\Theta$, to be specified below. To do so, we consider in the following the image lifting procedure based on cake kernels introduced in \cite{duits2007invertible} and used, e.g., in \cite{bekkers2014multi,SSVMproceeding2019,bertalmio2020cortical}.
We write the cake kernel centered at $\zeta_{i,j}$ and rotated by $\theta_k$ as \begin{equation} \Psi_{[i,j,k]}[\ell,m]=\psi_{(\zeta_{i,j},\theta_k)}(x_\ell, y_m) , \end{equation} where $\ell,m\in\{1,2,\dots,N\}$. We can then write the lifting operation applied to the initial image $f_0$ for all $\zeta_{i,j}\in Q$ and $\theta_k\in\Theta$ as: \begin{equation} \label{eq:lifting_op} (Lf_0)_{i,j,k}=A_0[i,j, k]=\sum_{\ell,m}\Psi_{[i,j,k]}[\ell,m]\,f_0[\ell,m]. \end{equation} Finally, we consider a time-discretisation of the interval $(0,T]$ at the time nodes $\mathcal{T}:=\{t_p:=p\Delta t,\ p=1,\ldots, P \}$, for $P\in\mathbb{N}^+$ and $\Delta t:=T/P$. The resulting fully-discretised neuronal activation at $\zeta_{i,j}=(x_i,y_j)\in Q$, $\theta_k\in\Theta$ and $t_p \in \mathcal{T}$ will thus be denoted by: \begin{equation} A_p[i,j,k]=a(\zeta_{i,j},\theta_k, t_p). \end{equation} \subsection{Sub-Riemannian heat diffusion}\label{sec:sr-heat-discrete} Let $g:\mathcal M\to\mathbb{R}$ be a given cortical stimulus, and set $G[i,j,k]=g(\zeta_{i,j}, \theta_k)$. In this section we describe how to compute \begin{equation} \label{eq:sR_diff} \exp_\tau G[i,j,k] \approx e^{\tau\mathcal L}g(\zeta_{i,j},\theta_k). \end{equation} The main difficulty here is due to the degeneracy arising from the anisotropy of the sub-Laplacian. Indeed, developing the computations in \eqref{eq:heat-sr}, we have \begin{equation} \mathcal L = \mathcal D^T \ell \mathcal D, \qquad \mathcal D = \left(\begin{array}{c}\partial_x\\ \partial_y \\ \partial_\theta\end{array}\right), \qquad \ell = \left( \begin{array}{ccc} \cos^2\theta & \cos\theta\sin\theta & 0 \\ \cos\theta\sin\theta & \sin^2\theta & 0 \\ 0 & 0 & \beta^2 \end{array} \right). \end{equation} In particular, it is straightforward to deduce that the eigenvalues of $\ell$ are $(0,\beta^2, 1)$.
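The degeneracy is easy to inspect numerically: the coefficient matrix $\ell$ has a one-dimensional kernel at every $\theta$, its eigenvalues being $(0,\beta^2,1)$ (the values of $\beta$ and $\theta$ below are illustrative):

```python
import numpy as np

def ell(theta, beta=0.7):
    """Coefficient matrix of L = D^T ell D in the (x, y, theta) frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s, 0.0],
                     [c * s, s * s, 0.0],
                     [0.0, 0.0, beta ** 2]])

# the upper-left 2x2 block is the rank-one projection onto (cos theta, sin theta),
# so one eigenvalue vanishes identically: the operator is never elliptic
eigs = np.linalg.eigvalsh(ell(0.4))     # ascending: (0, beta^2, 1) for beta < 1
```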
The discretisation of such anisotropic operators can be done in several ways, see for example \cite{duits2010left1, duits2010left2, mirebeau2014anisotropic, baspinar2020sub}. In our implementation, we follow the method presented in \cite{boscainHypoelliptic2014}, which is tailored to the group structure of $\text{SE}(2)$, a double cover of $\mathcal M$, and based on the non-commutative Fourier transform, see also \cite{BCGPR18}. It is convenient to assume for the following discussion that $\Delta x = \Delta y = 1/\sqrt{N}$ and $\Delta\theta = \pi/K$. The ``semi-discretised'' sub-Laplacian $\mathcal L_K$ can then be defined by \begin{equation} \label{eq:semi_discrete_sR} \mathcal L g \approx \mathcal L_K G := D^2 G +\Lambda_K G, \end{equation} where by $\Lambda_K$ we denote the central difference operator discretising the second derivative along the $\theta$ direction, i.e., the operator \begin{equation} \partial_\theta^2 G[i,j,k] \approx \Lambda_K G[i,j,k] = \frac{g(\zeta_{i,j}, \theta_{k-1}) -2g(\zeta_{i,j}, \theta_{k}) +g(\zeta_{i,j}, \theta_{k+1}) }{(\Delta\theta)^2}. \end{equation} The operator $D$ is the diagonal operator defined by \begin{equation} D G[i,j,k] = \left(\cos (k\Delta\theta) \partial_x + \sin(k\Delta\theta) \partial_y\right) g(\zeta_{i,j}, \theta_k). \end{equation} The full discretisation is then achieved by discretising the spatial derivatives as \begin{gather} \partial_x G[i,j,k] \approx \frac{\sqrt{N}}{2} \left(g(\zeta_{i+1,j}, \theta_{k})-g(\zeta_{i-1,j}, \theta_{k})\right),\\ \partial_y G[i,j,k] \approx \frac{\sqrt{N}}{2} \left(g(\zeta_{i,j+1}, \theta_{k})-g(\zeta_{i,j-1}, \theta_{k})\right). \end{gather} Under the discretisation $\mathcal{L}_K$ of $\mathcal{L}$ defined in \eqref{eq:semi_discrete_sR}, we now resort to Fourier methods to compute efficiently the solution of the sub-Riemannian heat equation \begin{equation} \label{eq:sR_heat_semidisc} \partial_t \psi = \mathcal L_K \psi,\qquad \psi|_{t=0}=g.
\end{equation} In particular, let $\hat G[r,s,k]$ be the discrete Fourier transform (DFT) of $G$ w.r.t.\ the variables $i,j$, i.e., \begin{equation} \hat G[r,s,k] = \frac1N \sum_{i,j=1}^{N} G[i,j,k] e^{\frac{\iota 2\pi }N \left( (r-1)(i-1) + (s-1)(j-1) \right)}. \end{equation} A straightforward computation shows that \begin{equation} \begin{split} \widehat{DG}[r,s,k] = & \iota \sqrt{N} d[r,s,k]\hat G[r,s,k],\\ d[r,s,k] := & \cos (k\Delta\theta) \sin\left(\frac{2\pi r}N\right) + \sin(k\Delta\theta)\sin\left(\frac{2\pi s}N\right). \end{split} \end{equation} Hence, \eqref{eq:sR_heat_semidisc} is mapped by the DFT to the following completely decoupled system of $N^2$ ordinary linear differential equations on $\mathbb C^K$: \begin{equation}\label{eq:discr-sys} \begin{cases} \frac{d}{dt} \Psi_t[r,s,\cdot] = \left(\Lambda_K - N \operatorname{diag}_k d[r,s,k]^2 \right) \Psi_t[r,s,\cdot],\\ \Psi_0[r,s,k] = \hat G[r,s,k], \end{cases} \qquad r,s\in \{1,\ldots, N\}, \end{equation} which can be solved efficiently through a variety of standard numerical schemes. We chose the semi-implicit Crank-Nicolson method \cite{crank1947practical} for its good stability properties. Let us remark that the operators on the r.h.s.\ of the above equations are periodic tridiagonal matrices, i.e., tridiagonal matrices with additional non-zero values at positions $(1,K)$ and $(K,1)$. Thus, the linear system appearing at each step of the Crank-Nicolson method can be solved in linear time w.r.t.\ $K$ via a variation of the Thomas algorithm. The desired solution $\exp_\tau G$ can then simply be recovered by applying the inverse DFT to the solution of \eqref{eq:discr-sys} at time $\tau$. \subsection{Discretisation via gradient descent} We follow \cite{bertalmio2007perceptual, bertalmio2020cortical, bertalmio2020visual} and discretise both models \eqref{eq:sr-wc} and \eqref{eq:sr-lhe} via a simple explicit gradient descent scheme.
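Before detailing the time stepping, the Fourier-based heat step of Section~\ref{sec:sr-heat-discrete} can be condensed into a short numpy sketch; dense linear solves stand in for the periodic Thomas algorithm, and the constants follow the unit-spacing conventions above (this is an illustration, not the actual Julia implementation):

```python
import numpy as np

def sr_heat_step(G, tau, beta=1.0, dtau=0.01):
    """Approximate exp_tau G: the DFT in (i, j) decouples the theta-systems,
    each of which is evolved in time by Crank-Nicolson."""
    N, _, K = G.shape
    dth = np.pi / K
    # periodic second-difference matrix discretising d^2/dtheta^2
    Lam = (np.eye(K, k=1) + np.eye(K, k=-1) - 2.0 * np.eye(K)) / dth ** 2
    Lam[0, -1] = Lam[-1, 0] = 1.0 / dth ** 2
    th = dth * np.arange(K)
    sin_freq = np.sin(2.0 * np.pi * np.arange(N) / N)
    Gh = np.fft.fft2(G, axes=(0, 1))
    steps = max(1, int(round(tau / dtau)))
    I = np.eye(K)
    for r in range(N):
        for s in range(N):
            d = np.cos(th) * sin_freq[r] + np.sin(th) * sin_freq[s]
            A = beta ** 2 * Lam - np.diag(N * d ** 2)  # symbol at frequency (r, s)
            P = np.linalg.solve(I - 0.5 * dtau * A, I + 0.5 * dtau * A)
            Gh[r, s] = np.linalg.matrix_power(P, steps) @ Gh[r, s]
    return np.real(np.fft.ifft2(Gh, axes=(0, 1)))
```

The scheme preserves constants (the symbol vanishes at zero frequency) and the total activation, mirroring the mass conservation of the continuous diffusion, while smoothing out the non-constant modes.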
Denoting the discretised version of the local mean average $\mu(\xi)$ appearing in the models by $U[i,j,k]= \mu(i\Delta x, j\Delta y, k\Delta\theta)$, the time stepping reads, for all $p\geq 1$, \begin{equation} \label{eq:grad_desc_discr} A_{p}[i,j,k] =A_{p-1}[i,j,k]+ \Delta t \Big(-(1+\lambda)A_{p-1}[i,j,k] + A_0[i,j,k] + \lambda U[i,j,k]+ SA_{p-1}[i,j,k]\Big ), \end{equation} where $SA_{p-1}$ is defined depending on the model by: \begin{equation}\label{eq:SA} SA_{p-1}[i,j,k] = \exp_\tau \sigma(A_{p-1})[i,j,k] \qquad\text{or}\qquad SA_{p-1}[i,j,k] = \sum_{\ell=0}^n C_{\ell,p-1}[i,j,k] \exp_\tau A_{p-1}[i,j,k], \end{equation} with $C_{\ell,p-1}$ being the discretised version of the coefficient $C_\ell$ in \eqref{eq:RTerm} at time $t_{p-1}$. A sufficient condition on the time-step $\Delta t$ guaranteeing the convergence of the numerical scheme \eqref{eq:grad_desc_discr} is $\Delta t\leq 1/(1+\lambda)$ (see \cite{bertalmio2007perceptual}). \subsection{Pseudocode} Our algorithmic procedure consists of three main numerical sub-steps. The first one is the lifting of the two-dimensional input image $f_0$ to the space $\mathcal{M}$ via \eqref{eq:lifting_op}. The second one is the Fourier-based procedure described in Section \ref{sec:sr-heat-discrete} to compute the sub-Riemannian diffusion \eqref{eq:sR_diff}, which is used as the kernel describing the neuronal interactions along the horizontal connections. This step is intrinsically linked to the last, iterative, procedure, based on computing the gradient descent update \eqref{eq:grad_desc_discr}-\eqref{eq:SA} describing the evolution of neuronal activity in the cortical framework, both for \eqref{eq:sr-wc} and \eqref{eq:sr-lhe}. We report the simplified pseudo-code below. The Julia package used to produce the following examples is freely available at the following webpage \url{https://github.com/dprn/srLHE}. 
\begin{algorithm*}[H] \SetAlgoLined \KwData{Initial image $f_0[i,j]$ \\ \textbf{Parameters}: $\lambda, \alpha, \sigma_\mu, K, \beta, \Delta t, T, M $, \texttt{tol}} \KwResult{Processed image $F[i,j]$} Compute lift $A_0[i,j,k] \leftarrow Lf_0[i,j,k]$ via \eqref{eq:discrete-lift}\; Initialize iteration index $p\leftarrow 0$\; \Repeat{${\|A_p - A_{p-1}\|}/{\|A_p\|} < \texttt{tol}$}{ $p\leftarrow p+1$\; Compute interaction term $SA_{p-1}$ via \eqref{eq:SA}\; Compute $A_{p}$ via GD update \eqref{eq:grad_desc_discr}\; } Projection on retinal plane $F[i,j] \leftarrow \sum_{k=1}^K A_p[i,j,k]$\; \caption{sR-WC and sR-LHE pseudocode} \label{algo:pseudocode} \end{algorithm*} \section{Numerical Experiments} \label{sec:experiments} In this section we present the results obtained by applying models \eqref{eq:sr-wc} and \eqref{eq:sr-lhe}, via Algorithm \ref{algo:pseudocode}, to the two Poggendorff-type illusions reported in Figure~\ref{fig:Poggendorff_tests}. Our results are compared to the ones obtained by applying the corresponding WC and LHE 3-dimensional models with a 3D Gaussian kernel, as described in \cite{bertalmio2020cortical,bertalmio2020visual}. The objective of the following experiments is to understand whether the output produced by applying \eqref{eq:sr-wc} and \eqref{eq:sr-lhe} to the images in Figure~\ref{fig:Poggendorff_tests} agrees with the illusory effects perceived. Since the quantitative assessment of the strength of these effects is a challenging problem, the outputs of Algorithm \ref{algo:pseudocode} have to be evaluated by visual inspection. Namely, for each output, we consider whether the continuation of a fixed black stripe on one side of a central bar connects with a segment on the other side. 
Differently from inpainting-type problems, we stress that for these problems the objective is to replicate the perceived wrong alignments, due to contrast and orientation effects, rather than the collinear continuation of the lines, and/or to investigate when both types of completion can be reproduced. \textbf{Testing data: Poggendorff-type illusions.} We test the \eqref{eq:sr-wc} and \eqref{eq:sr-lhe} models on a grayscale version of the Poggendorff illusion in Figure~\ref{fig:Poggendorff} and on its modification reported in Figure~\ref{fig:pogg2}, where the background consists of a grating pattern: in this case the perceived bias also depends on the contrast between the central surface and the background lines. \begin{figure}[th!] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[height=5cm]{figures/poggendorff_N200_w30_pi3_g0_7.png} \caption{Poggendorff illusion.} \label{fig:pogg_test} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[height=5cm]{figures/Poggendorff_gratings.png} \caption{Poggendorff gratings.} \label{fig:pogg2} \end{subfigure} \caption{Gray-scale Poggendorff-type illusions. Figure \ref{fig:pogg_test} is the standard $200\times 200$ Poggendorff illusion, with a 30 pixel-wide central bar and an incidence angle of $\pi/3$ between the black lines and the central bar. Figure \ref{fig:pogg2} is a variation of the classical Poggendorff illusion where a further background grating is present.} \label{fig:Poggendorff_tests} \end{figure} \textbf{Parameters.} Images in Figure \ref{fig:Poggendorff_tests} have size $N \times N$ pixels, with $N=200$. The lifting procedure to the space of positions and orientations is obtained by discretising $[0,\pi)$ into $K=16$ orientations (this is in agreement with the standard range of 12-18 orientations typically considered to be relevant in the literature \cite{Chariker2016,Pattadkal2018}). 
The relevant cake wavelets are then computed following \cite{bekkers2014multi}, setting the frequency band $\texttt{bw}=5$ for all experiments. The scaling parameter $\beta$ appearing in \eqref{eq:heat-sr} is set\footnote{This parameter adjusts for the different spatial and orientation sampling. A single spatial unit is equal to $\sqrt{2}$ pixel edges, whereas a single orientation unit is $1$ pixel edge.} to $\beta = {K}/(N^2\sqrt{2})$, and the parameter $M$ appearing in \eqref{eq:sr-wc}, \eqref{eq:sr-lhe} is set to $M=1$. Parameters varying from test to test are: the slope $\alpha>0$ of the sigmoid functions $\sigma$ in \eqref{eq:sigma} and $\hat{\sigma}$, the fidelity weight $\lambda>0$, the variance $\sigma_\mu$ of the 2D Gaussian filtering used to compute the local mean average $\mu$ in \eqref{eq:sr-wc} and \eqref{eq:sr-lhe}, the gradient descent time-step $\Delta t$, the time step $\Delta \tau$ and the final time $\tau$ used to compute the sub-Riemannian heat diffusion $\operatorname{e}^{\tau \mathcal{L}}$. \subsection{Poggendorff gratings} In Figure \ref{fig:WC-gratings} we report the results obtained by applying \eqref{eq:sr-wc} to the Poggendorff grating image in Figure~\ref{fig:pogg2}. We compare them with the ones obtained by the cortical-inspired WC model considered in \cite{bertalmio2020cortical,bertalmio2020visual}, where the interaction kernel is an isotropic 3D Gaussian; these results are reported in Figure \ref{fig:gratings-WCold}. In~Figure~\ref{fig:gratings-srWC}, we observe that the sR diffusion encoded in \eqref{eq:sr-wc} favours the propagation of the grating throughout the central gray bar, so that the resulting image agrees with our perception of misalignment. We stress that such an illusion could not be reproduced via the cortical-inspired isotropic WC model proposed in \cite{bertalmio2020cortical,bertalmio2020visual}. The use of the appropriate sub-Laplacian diffusion is thus crucial in this example to replicate the illusion. 
\begin{figure}[th] \centering \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/wcPreviousRes.png} \caption{(WC)} \label{fig:gratings-WCold} \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/wc_tau_5_sigma_6_5_lambda_0_01_alpha_20.png} \caption{(sR-WC)} \label{fig:gratings-srWC} \end{subfigure} \caption{Model output for the Poggendorff gratings in Figure \ref{fig:pogg2} via WC models. Figure~\ref{fig:gratings-WCold}: result of the WC model proposed in \cite{bertalmio2020cortical,bertalmio2020visual}. Figure~\ref{fig:gratings-srWC}: result of \eqref{eq:sr-wc} with parameters $\lambda=0.01$, $\alpha = 20$, $\sigma_\mu = 6.5$, $\Delta t=0.1$, $\Delta\tau=0.01$, $\tau = 5$.} \label{fig:WC-gratings} \end{figure} We further report in Figure \ref{fig:LHE-gratings} the result obtained by applying \eqref{eq:sr-lhe} to the same image. We observe that in this case both the \eqref{eq:sr-lhe} model and the cortical LHE model introduced in \cite{bertalmio2020cortical,bertalmio2020visual} reproduce the illusion. Note that both \eqref{eq:sr-wc} and \eqref{eq:sr-lhe} further preserve fidelity w.r.t.~the given image outside the target region, which is not the case for the cortical LHE model presented in \cite{bertalmio2020cortical,bertalmio2020visual}. \begin{figure}[t] \centering \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/LHE_oldStyle.png} \caption{(LHE)} \label{fig:gratings-LHEold} \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/pogg_gratings_alpha8_tau5_0_sigma1_lambda2.png} \caption{(sR-LHE)} \label{fig:gratings-srLHE} \end{subfigure} \caption{Model output for the Poggendorff gratings in Figure \ref{fig:pogg2} via LHE models. Figure~\ref{fig:gratings-LHEold}: result of the LHE model proposed in \cite{bertalmio2020cortical,bertalmio2020visual}. 
Figure~\ref{fig:gratings-srLHE}: result of \eqref{eq:sr-lhe} with parameters $\alpha=8$, $\tau=5$, $\lambda=2$, $\sigma_\mu = 1$, $\Delta t=0.15$, $\Delta\tau=0.01$. } \label{fig:LHE-gratings} \end{figure} \subsection{Dependence on parameters: inpainting vs. perceptual completion} The capability of the \eqref{eq:sr-lhe} model to reproduce visual misperceptions depends on the chosen parameters. This fact was already observed in \cite{bertalmio2020visual} for the cortical-inspired LHE model proposed therein, endowed with a standard Gaussian filter. There, LHE was shown to reproduce illusory phenomena only when the chosen standard deviation of the Gaussian filter was large enough (w.r.t.\ the overall size of the image). On the contrary, the LHE model was shown to perform geometrical completion (inpainting) for small values of the standard deviation. Roughly speaking, this corresponds to the fact that perceptual phenomena -- such as geometrical optical illusions -- can be modelled only when the interaction kernel is wide enough for the information to cross the central gray line. This is in agreement with the psycho-physical experiments in \cite{Weintraub1971}, where the width of the central missing part of the Poggendorff illusion is shown to be directly correlated with the intensity of the illusion. In the case under consideration here, the parameter encoding the width of the interaction kernel is the final time $\tau$ of the sub-Riemannian diffusion used to model the activity propagation along neural connections. To support this observation, in Figure~\ref{fig:LHE_inpainting} we show that the completion obtained via \eqref{eq:sr-lhe} shifts from a geometrical one (inpainting), when $\tau$ is small, to a perceptual one, when $\tau$ is sufficiently large. 
As far as the \eqref{eq:sr-wc} model is concerned, we observed that, despite the improved capability of replicating the Poggendorff gratings, the transition from perceptual completion to inpainting could not be reproduced. In agreement with the efficient representation principle, this supports the idea that visual perceptual phenomena are better encoded by variational models such as \eqref{eq:sr-lhe} than by non-variational ones such as \eqref{eq:sr-wc}. \begin{figure}[htb!] \centering \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=.9\textwidth]{figures/pogg_gratings_alpha6_tau0_1_sigma1_lambda2.png} \caption{($\tau=0.1$)} \label{fig:inp-srlhe-1} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=.9\textwidth]{figures/pogg_gratings_alpha6_tau0_5_sigma1_lambda2.png} \caption{($\tau=0.5$)} \label{fig:inp-srlhe-2} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=.9\textwidth]{figures/pogg_gratings_alpha6_tau5_0_sigma1_lambda2_bigdtau.png} \caption{($\tau=2.5$)} \label{fig:inp-srlhe-3} \end{subfigure} \caption{Sensitivity to the parameter $\tau$ for the \eqref{eq:sr-lhe} model for the visual perception of Figure \ref{fig:pogg2}. The completion inside the central gray bar changes from geometrical (inpainting type) to illusory (perception type). Parameters: $\tau$ varies from $0.1$ to $5$, $\alpha=6$, $\lambda=2$, $\sigma_\mu = 1$, $\Delta t=0.15$, $\Delta\tau=0.01$. 
} \label{fig:LHE_inpainting} \end{figure} \subsection{Poggendorff illusion} \begin{figure}[ht] \centering \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/pogg_orig_LHE_sigma2_sigmaw12_lambda1_alpha5.png} \caption{(LHE)} \label{fig:orig-LHEold} \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=5cm]{figures/pogg_orig_norm_tau2_5_sigma2_5_lambda0_5_alpha8.png} \caption{(sR-LHE)} \label{fig:orig-srLHE} \end{subfigure} \medskip \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=3cm]{figures/pogg_orig_LHE_sigma2_lambda0_7_alpha5_zoom.png} \caption{(LHE), zoomed.} \label{fig:orig-LHEold-z} \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[height=3cm]{figures/pogg_orig_zoom_tau2_5_sigma2_5_lambda0_5_alpha8.png} \caption{(sR-LHE), zoomed.} \label{fig:orig-srLHE-z} \end{subfigure} \caption{Model output for the Poggendorff illusion in Figure \ref{fig:pogg_test} via LHE models. Figure~\ref{fig:orig-LHEold}: result of the LHE model proposed in \cite{bertalmio2020cortical,bertalmio2020visual} (with parameters $\sigma_\mu=2$, $\sigma_\omega=12$, $\lambda=0.7$, $\alpha= 5$). Figure~\ref{fig:orig-srLHE}: result of \eqref{eq:sr-lhe} with parameters $\alpha=8$, $\tau=2.5$, $\lambda=0.5$, $\sigma_\mu = 2.5$, $\Delta t=0.15$, $\Delta\tau=0.1$. Figure~\ref{fig:orig-srLHE-z} (resp.\ Figure~\ref{fig:orig-LHEold-z}): zoom and renormalization on $[0,1]$ of the central region of the result in Figure~\ref{fig:orig-srLHE} (resp.~Figure~\ref{fig:orig-LHEold}). } \label{fig:pogg_orig_LHE} \end{figure} In Figure \ref{fig:pogg_orig_LHE} we report the results obtained by applying LHE methods to the standard Poggendorff illusion in Figure~\ref{fig:pogg_test}. 
In particular, in Figure~\ref{fig:orig-LHEold} we show the result obtained via the LHE method of \cite{bertalmio2020cortical,bertalmio2020visual}, while in Figure~\ref{fig:orig-srLHE} we show the result obtained via \eqref{eq:sr-lhe}, with two close-ups in Figures \ref{fig:orig-LHEold-z} and \ref{fig:orig-srLHE-z} showing the central region renormalised onto the set of values $[0,1]$. As shown by these preliminary examples, the continuations computed by both LHE models agree with our perception, as the reconstructed connection in the target region links the two misaligned segments, while somehow ``stopping'' the connection of the collinear one. This phenomenon, as well as a more detailed study of how the choice of the parameters used to generate Figure \ref{fig:pogg_test} (such as the incidence angle, the width of the central gray bar, or the distance between lines) affects the result, in a similar spirit to \cite{Retsa2020}, where psycho-physics experiments were performed on analogous images, is an interesting topic for future research. \FloatBarrier \section{Conclusion} In this work we presented a sub-Riemannian version of the Local Histogram Equalization mean-field model previously studied in \cite{bertalmio2020cortical, bertalmio2020visual}, here denoted by \eqref{eq:sr-lhe}. The model considered is a natural extension of existing ones, where the kernel used to model neural interactions was simply chosen to be a 3D Gaussian, while in \eqref{eq:sr-lhe} this is chosen as the sub-Riemannian heat kernel formulated in the space of positions and orientations given by the primary visual cortex (V1). 
A numerical procedure based on Fourier expansions is described to compute such evolution efficiently and in a stable way, and a gradient-descent algorithm is used for the numerical discretisation of the model. We tested the \eqref{eq:sr-lhe} model on orientation-dependent Poggendorff-type illusions and showed that (i) in the presence of a sufficiently wide interaction kernel, model \eqref{eq:sr-lhe} is capable of reproducing the perceived misalignments, in agreement with previous work (see Figures~\ref{fig:LHE-gratings} and \ref{fig:pogg_orig_LHE}); (ii) when the interaction kernel is too narrow, \eqref{eq:sr-lhe} favours a geometric-type completion (inpainting) of the illusion (see Figure~\ref{fig:LHE_inpainting}), due to the limited amount of diffusion considered. We also considered \eqref{eq:sr-wc}, a similar model obtained by using the sub-Riemannian interaction kernel in the standard Wilson-Cowan equations. We showed that the introduction of such a cortical-based kernel improves the capability of WC-type models to reproduce Poggendorff-type illusions, in comparison to the analogous results reported in \cite{bertalmio2020cortical,bertalmio2020visual}, where the cortical version of WC with a standard 3D Gaussian kernel was shown to fail to replicate the illusion. Finally, we stress that, in agreement with the standard range of 12-18 orientations typically considered to be relevant in the literature \cite{Chariker2016,Pattadkal2018}, all the aforementioned results have been obtained by considering $K=16$ orientations. The LHE and WC models previously proposed were unable to obtain meaningful results with fewer than $K=30$ orientations. \section*{Acknowledgments} LC, VF and DP acknowledge the support of a public grant overseen by the French National Research Agency (ANR) as part of the \textit{Investissement d'avenir program}, through the iCODE project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02. 
VF acknowledges the support received from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 794592. \bibliographystyle{ieeetr}
\section{Optimality conditions of the variational principle}\label{app:A} For the sake of brevity, we forgo the explicit written dependencies on $x$, $t$, and $\bs{\alpha}$ in the following derivation. Using index notation, we start by expanding Equation \ref{Eqn:min_princ}: \begin{align*} \mathcal{G}(\dot{\bm{U}},\dot{\bm{Y}}, \lambda) &= \inner{\udot{i}}{\udot{j}}\left(\y{i}^T\y{j}\right) + \inner{\u{i}}{\u{j}}\left(\ydot{i}^T\ydot{j}\right) +2 \inner{\udot{i}}{\u{j}}\left(\y{i}^T\ydot{j}\right) \\ &-2 \inner{\udot{i}}{\mathcal{L}(\u{j})}\left(\y{i}^T\y{j}\right) -2 \inner{\u{i}}{\mathcal{L}(\u{j})}\left(\ydot{i}^T\y{j}\right) \\ &+ \inner{\mathcal{L}(\u{i})}{\mathcal{L}(\u{j})}\left(\y{i}^T\y{j}\right) -2 \inner{\udot{i}}{\bm{F}'\y{i}} -2 \inner{\u{i}}{\bm{F}'\ydot{i}} \\ &+2 \inner{\mathcal{L}(\u{i})}{\bm{F}'\y{i}} + \big\|\bm{F}'\big\|_F^2 + \lambda_{ij}\left(\inner{\u{i}}{\udot{j}}-\phi_{ij}\right). \end{align*} The first order optimality condition requires that the derivatives of $\mathcal{G}$ with respect to $\dot{\bm{U}}$, $\dot{\bm{Y}}$, and $\lambda$ vanish. The derivative of $\mathcal{G}$ with respect to $\lambda$ produces the time derivative of the orthonormality constraint given by Equation \ref{eq:orthodot}. Provided that the f-OTD modes are orthonormal at $t=0$, the time integration of Equation \ref{eq:orthodot} reproduces the orthonormality condition of the f-OTD modes for $t>0$: $\inner{\bm{u}_i}{\bm{u}_j}=\delta_{ij}$. To take the derivative of $\mathcal{G}$ with respect to $\udot{k}$ we use the Fr\'{e}chet differential as follows: \begin{equation*} \mathcal{G}'|_{\dot{\bm{U}}} \triangleq \lim_{\epsilon \rightarrow 0 } \frac{\mathcal{G}(\dot{\bm{U}}+\epsilon \dot{\bm{U}}',\dot{\bm{Y}},\lambda)-\mathcal{G}(\dot{\bm{U}},\dot{\bm{Y}},\lambda)}{\epsilon}. 
\end{equation*} Using the above definition we have: \begin{align*} \mathcal{G}'|_{\udot{k}} &= 2 \inner{\dot{\bm{u}}'}{\udot{j}}\left(\y{k}^T\y{j}\right) +2 \inner{\dot{\bm{u}}'}{\u{j}}\left(\y{k}^T\ydot{j}\right) -2 \inner{\dot{\bm{u}}'}{\mathcal{L}(\u{j})}\left(\y{k}^T\y{j}\right) \\ &-2 \inner{\dot{\bm{u}}'}{\bm{F}'\y{k}} + \lambda_{jk}\inner{\dot{\bm{u}}'}{\u{j}} = 0. \end{align*} The above equation can be written as $\inner{\dot{\bm{u}}'}{\nabla_{\dot{\bm{u}}_k} \mathcal{G}} = 0$, and we observe that, for any arbitrary direction $\dot{\bm{u}}'$, we must satisfy $\nabla_{\dot{\bm{u}}_k} \mathcal{G}=\bm{0}$. This leads to the following condition: \begin{equation}\label{eqn:udot_optimality} \nabla_{\dot{\bm{u}}_k} \mathcal{G} = 2 \udot{j}\left(\y{k}^T\y{j}\right) +2 \u{j}\left(\y{k}^T\ydot{j}\right) -2 \mathcal{L}(\u{j})\left(\y{k}^T\y{j}\right) -2 \bm{F}'\y{k} + \lambda_{jk}\u{j} = \bm{0}. \end{equation} To eliminate $\lambda_{jk}$, we take the inner product of $\u{l}$ with Equation \ref{eqn:udot_optimality} to obtain \begin{align*} \left\langle\u{l}, \nabla_{\dot{\bm{u}}_k} \mathcal{G}\right\rangle &= 2\phi_{lj}(\y{k}^T\y{j}) +2 \delta_{lj}\left(\y{k}^T\ydot{j}\right) -2 \inner{\u{l}}{\mathcal{L}(\u{j})}\left(\y{k}^T\y{j}\right) \\ &-2 \inner{\u{l}}{\bm{F}'\y{k}} + \lambda_{jk}\delta_{lj} = 0, \end{align*} where we have used $\inner{\u{l}}{\udot{j}}=\phi_{lj}$ and $\inner{\u{l}}{\u{j}}=\delta_{lj}$. Rearranging for $\lambda_{lk}$ gives \begin{equation*} \lambda_{lk} = 2\left[-\phi_{lj}(\y{k}^T\y{j}) - \left(\y{k}^T\ydot{l}\right) + \inner{\u{l}}{\mathcal{L}(\u{j})}\left(\y{k}^T\y{j}\right) + \inner{\u{l}}{\bm{F}'\y{k}}\right]. \end{equation*} Dividing Equation \ref{eqn:udot_optimality} by 2 and substituting $\lambda_{lk}$ gives \begin{equation*} \left[\udot{j} - \mathcal{L}(\u{j}) + \inner{\u{l}}{\mathcal{L}(\u{j})}\u{l} -\phi_{lj}\bm{u}_l \right]\left(\y{k}^T\y{j}\right) - \bm{F}'\y{k} + \inner{\u{l}}{\bm{F}'\y{k}}\u{l} = \bm{0}. 
\end{equation*} Rearranging the above equation for $\udot{j}$ we get \begin{equation*} \boxed{\udot{j} = \mathcal{L}(\u{j}) - \inner{\u{l}}{\mathcal{L}(\u{j})}\u{l} + \left[\bm{F}'\y{k}-\inner{\u{l}}{\bm{F}'\y{k}}\u{l}\right]C_{kj}^{-1}+\phi_{lj}\bm{u}_l,} \end{equation*} where $C_{kj}=\y{k}^T\y{j}$. Similarly, the first order optimality condition of $\mathcal{G}$ with respect to $\ydot{k}$ requires that \begin{align} \pfrac{\mathcal{G}}{\ydot{k}} = \inner{\u{k}}{\u{j}}\ydot{j} + \inner{\udot{j}}{\u{k}}\y{j} - \inner{\u{k}}{\mathcal{L}(\u{j})}\y{j} - \inner{\bm{F}'}{\u{k}} = \bm{0}. \nonumber \end{align} Again, we use $\inner{\u{k}}{\u{j}}=\delta_{kj}$ and $\inner{\udot{j}}{\u{k}}=-\phi_{jk}$. Rearranging for $\ydot{k}$ gives \begin{equation*} \boxed{\ydot{k} = \inner{\u{k}}{\mathcal{L}(\u{j})}\y{j} + \inner{\bm{F}'}{\u{k}} + \phi_{jk}\y{j}.} \end{equation*} \section{Equivalence of reductions}\label{app:B} We prove the equivalence by starting from the evolution equations for $\{\bm{U},\bm{Y}\}$, using the matrix differential equation for the rotation matrix $\bm{R}$, and recovering the evolution equations for $\{\tilde{\bm{U}},\tilde{\bm{Y}}\}$. To this end, we substitute $\bm{U}=\tilde{\bm{U}}\bm{R}$ and $\bm{Y}=\tilde{\bm{Y}}\bm{R}$ into the quasimatrix form of Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff}. The evolution equation for the orthonormal modes becomes: \begin{align*} \dot{\bm{U}} &= \dot{\tilde{\bm{U}}}\bm{R} + \tilde{\bm{U}}\dot{\bm{R}} \\ &= \mathcal{L}(\tilde{\bm{U}})\bm{R} - \tilde{\bm{U}}\bm{R}\inner{\tilde{\bm{U}}\bm{R}}{\mathcal{L}(\tilde{\bm{U}})\bm{R}} + [\bm{F}'\tilde{\bm{Y}}\bm{R} - \tilde{\bm{U}}\bm{R}\inner{\tilde{\bm{U}}\bm{R}}{\bm{F}'\tilde{\bm{Y}}\bm{R}}] + \tilde{\bm{U}}\bm{R}\bs{\Phi}. 
\end{align*} Substituting $\dot{\bm{R}}=\bm{R}\bs{\Phi}-\tilde{\bs{\Phi}}\bm{R}$ and solving for $\dot{\tilde{\bm{U}}}$ yields \begin{align*} \dot{\tilde{\bm{U}}} &= \big[ \mathcal{L}(\tilde{\bm{U}})\bm{R} - \tilde{\bm{U}}\bm{R}\inner{\tilde{\bm{U}}\bm{R}}{\mathcal{L}(\tilde{\bm{U}})\bm{R}} + [\bm{F}'\tilde{\bm{Y}}\bm{R} - \tilde{\bm{U}}\bm{R}\inner{\tilde{\bm{U}}\bm{R}}{\bm{F}'\tilde{\bm{Y}}\bm{R}}] \\ &+ \tilde{\bm{U}}\bm{R}\bs{\Phi} - \tilde{\bm{U}}[\bm{R}\bs{\Phi}-\tilde{\bs{\Phi}}\bm{R} \big]\bm{R}^T. \end{align*} Simplifying the above equation, and using $\inner{\tilde{\bm{U}}\bm{R}}{\cdot}=\bm{R}^T\inner{\tilde{\bm{U}}}{\cdot}$ and $\bm{R}^{-1}=\bm{R}^T$ (since $\bm{R}$ is an orthogonal matrix), results in: \begin{align*} \dot{\tilde{\bm{U}}} = \mathcal{L}(\tilde{\bm{U}}) - \tilde{\bm{U}}\inner{\tilde{\bm{U}}}{\mathcal{L}(\tilde{\bm{U}})} + [\bm{F}'\tilde{\bm{Y}} - \tilde{\bm{U}}\inner{\tilde{\bm{U}}}{\bm{F}'\tilde{\bm{Y}}}]\tilde{\bm{C}}^{-1} + \tilde{\bm{U}}\tilde{\bs{\Phi}}, \end{align*} where $\tilde{\bm{C}}=\bm{R}\bm{C}\bm{R}^T$ and $\tilde{\bm{C}}^{-1}=\bm{R}\bm{C}^{-1}\bm{R}^T$; note that $\bm{C}$ and $\tilde{\bm{C}}$ are similar matrices and thus have the same eigenvalues. Following a similar procedure, the evolution equation for the coefficients becomes: \begin{align*} \dot{\bm{Y}} &= \dot{\tilde{\bm{Y}}}\bm{R} + \tilde{\bm{Y}}\dot{\bm{R}} \\ &= \tilde{\bm{Y}}\bm{R}\inner{\mathcal{L}(\tilde{\bm{U}})\bm{R}}{\tilde{\bm{U}}}\bm{R} + \inner{\bm{F}'}{\tilde{\bm{U}}}\bm{R} + \tilde{\bm{Y}}\bm{R}\bs{\Phi}. 
\end{align*} Substituting $\dot{\bm{R}}=\bm{R}\bs{\Phi}-\tilde{\bs{\Phi}}\bm{R}$ and solving for $\dot{\tilde{\bm{Y}}}$ yields \begin{align*} \dot{\tilde{\bm{Y}}} &= \big[\tilde{\bm{Y}}\bm{R}\bm{R}^T\inner{\mathcal{L}(\tilde{\bm{U}})}{\tilde{\bm{U}}}\bm{R} + \inner{\bm{F}'}{\tilde{\bm{U}}}\bm{R} + \tilde{\bm{Y}}\bm{R}\bs{\Phi} - \tilde{\bm{Y}}[\bm{R}\bs{\Phi}-\tilde{\bs{\Phi}}\bm{R}] \big]\bm{R}^T \\ &= \tilde{\bm{Y}}\inner{\mathcal{L}(\tilde{\bm{U}})}{\tilde{\bm{U}}} + \inner{\bm{F}'}{\tilde{\bm{U}}} + \tilde{\bm{Y}}\tilde{\bs{\Phi}}. \end{align*} Thus, we have shown that the evolutions of $\{\bm{U}(x,t),\bm{Y}(t)\}$ and $\{\tilde{\bm{U}}(x,t),\tilde{\bm{Y}}(t)\}$ according to Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} are equivalent. \section{Exactness of f-OTD}\label{app:C} We start by substituting $\bm{V}'(x,t) = \bm{U}(x,t)\bm{Y}(t)^T$ into the quasimatrix form of Equation \ref{eqn:sensitivity}: \begin{align*} \dot{\bm{U}}\bm{Y}^T + \bm{U}\dot{\bm{Y}}^T = \mathcal{L}(\bm{U})\bm{Y}^T + \bm{F}'. \end{align*} Next, we substitute $\dot{\bm{Y}}^T$ from Equation \ref{eqn:evol_coeff} \begin{equation*} \dot{\bm{U}}\bm{Y}^T + \bm{U}\left( \bm{L}_r\bm{Y}^T + \inner{\bm{U}}{\bm{F}'} \right) = \mathcal{L}(\bm{U})\bm{Y}^T + \bm{F}', \end{equation*} where we have used $\bm{L}_r(t) = \inner{\bm{U}(x,t)}{\mathcal{L}(\bm{U}(x,t))}$. We multiply by $\bm{Y}$ from the right and rearrange to get \begin{equation*} \dot{\bm{U}}\bm{C} = \mathcal{L}(\bm{U})\bm{C} - \bm{U}\bm{L}_r\bm{C} + \left( \bm{F}'\bm{Y} - \bm{U}\inner{\bm{U}}{\bm{F}'\bm{Y}} \right), \end{equation*} where we have used $\bm{C} = \bm{Y}^T\bm{Y}$. Finally, we multiply by $\bm{C}^{-1}$ from the right to get \begin{equation*} \dot{\bm{U}} = \mathcal{L}(\bm{U}) - \bm{U}\bm{L}_r + \left( \bm{F}'\bm{Y} - \bm{U}\inner{\bm{U}}{\bm{F}'\bm{Y}} \right)\bm{C}^{-1}, \end{equation*} which is the same as the evolution Equation \ref{eqn:evol_modes} for the orthonormal basis. 
Here we have shown that the evolution of $\bm{V}'(x,t)$ under Equation \ref{eqn:sensitivity} is equivalent to the evolution of $\bm{U}(x,t)$ under Equation \ref{eqn:evol_modes}. That is, when $r=d$, $\bm{Y}(t)^T$ is a linear transformation that exactly maps the orthonormal subspace $\bm{U}(x,t)$ to $\bm{V}'(x,t)$. \section{f-OTD Derivation for Tensor Sensitivities}\label{app:E} We start by considering the third-order quasitensor $\tilde{\bm{V}}'=[\tilde{\bm{v}}'_{ij}] \in \mathbb{R}^{\infty \times n_s \times n_r}$ that we seek to flatten into a quasimatrix $\bm{V}'=[\bm{v}'_m]\in\mathbb{R}^{\infty\times d}$. For ease of reference, we rewrite the tensor evolution Equation \ref{eqn:rxn_sens_tensor} below: \begin{equation*} \pfrac{\tilde{\bm{v}}_{ij}'}{t} + \left( \bm{u}\cdot\nabla \right)\tilde{\bm{v}}_{ij}' = \tilde{\kappa}_{ik}\nabla^{2}\tilde{\bm{v}}_{kj}' + \tilde{\mathcal{L}}_{\bm{s}_{ik}} \tilde{\bm{v}}_{kj}'+\tilde{\bm{s}}'_{ij}, \end{equation*} where $i,k=1,2,\dots,n_s$ and $j=1,2,\dots,n_r$. We define the indices $m(i,j)=j+(i-1)n_r$ and $n(i',j')=j' + (i'-1)n_r$, where $i'=1,2,\dots,n_s$ and $j'=1,2,\dots,n_r$. In the above equation, the terms $\tilde{\bm{v}}_{ij}'$ and $\tilde{\bm{s}}'_{ij}$ are flattened by replacing the index pair $ij$ with the single index $m$: $\bm{v}'_{m(i,j)} = \tilde{\bm{v}}_{ij}'$ and $\bm{s}'_{m(i,j)} = \tilde{\bm{s}}_{ij}'$. Next, we define a new diffusion coefficient matrix $\kappa_{mn}\in\mathbb{R}^{d\times d}$ such that the $m$th diagonal entry is equal to the diffusion coefficient of the $i$th species. That is, $\kappa_{mn}$ is independent of the parameter index $j$ and remains constant across all sensitivities of a given species $i$. Finally, the linearized reactive source term is defined as $\mathcal{L}_{\bm{s}_{m(i,j)n(i',j')}} = \tilde{\mathcal{L}}_{\bm{s}_{ii'}} \delta_{j j'}$, where $\delta_{jj'}$ is the Kronecker delta and $n$ is a dummy index corresponding to $\bm{v}'_n$. 
From this definition, $\delta_{jj'}$ results in a non-zero contribution to the summation over $n$ only for sensitivities with respect to parameter $j'=j$. Putting this all together, the above equation can be written as: \begin{equation}\label{eqn:} \pfrac{\bm{v}'_m}{t} + (\bm{w}\cdot \nabla)\bm{v}'_m = \kappa_{mn}\nabla^2 \bm{v}'_n+ \mathcal{L}_{\bm{s}_{mn}}\bm{v}'_n + \bm{s}'_m, \end{equation} where $\mathcal{L}_{\bm{s}_{mn}}\bm{v}'_n$ should be interpreted as a matrix-vector multiplication for any $(x_1,x_2)$ point in the physical space. As a result of the parametric dependence of the linear operator, Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} do not hold for the tensor-flattened equation. Therefore, we must derive new evolution equations for the f-OTD modes and coefficients for tensor-flattened quantities. Substituting the approximation $\bm{v}'_m=\sum_{i=1}^{r} \bm{u}_iY_{mi}$ into the above equation, it is straightforward to show that the evolution equations for the f-OTD modes and coefficients are: \begin{align}\label{eq:uspec} \dot{\bm{u}}_i = &-\left[(\bm{w}\cdot\nabla)\bm{u}_i - \bm{u}_j\inner{\bm{u}_j}{(\bm{w}\cdot\nabla)\bm{u}_i}\right] + \left[ \nabla^2\bm{u}_k - \bm{u}_j\inner{\bm{u}_j}{\nabla^2\bm{u}_k} \right]Y_{nk}\kappa_{mn}Y_{ml}C^{-1}_{il} \nonumber \\ & + \left[ \mathcal{L}_{\bm{s}_{mn}}\bm{u}_k - \bm{u}_j\inner{\bm{u}_j}{\mathcal{L}_{\bm{s}_{mn}}\bm{u}_k} \right]Y_{nk}Y_{ml}C^{-1}_{il} + \left[ \bm{s}'_m - \bm{u}_j\inner{\bm{u}_j}{\bm{s}'_m} \right]Y_{ml}C^{-1}_{il}, \end{align} and \begin{align}\label{eq:yspec} \dot{Y}_{mj} = &-\inner{\bm{u}_j}{(\bm{w}\cdot \nabla)\bm{u}_i}Y_{mi} + \inner{\bm{u}_j}{\nabla^2\bm{u}_i}Y_{ni}\kappa_{mn} \nonumber \\ &+\inner{\bm{u}_j}{\mathcal{L}_{\bm{s}_{mn}}\bm{u}_i}Y_{ni} +\inner{\bm{u}_j}{\bm{s}'_m}, \end{align} where $\bm{Y}=[Y_{mi}]$ and the indices $m,n=1,2,\dots,d$ and $i,j,k=1,2,\dots,r$. 
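The index map $m(i,j)=j+(i-1)n_r$ and the Kronecker-delta structure of $\kappa_{mn}$ and $\mathcal{L}_{\bm{s}_{mn}}$ amount to Kronecker products with the $n_r\times n_r$ identity. The following Python sketch, with hypothetical toy dimensions and a made-up linearised operator (the names \texttt{n\_s}, \texttt{n\_r}, \texttt{L\_tilde} are illustrative, not from the paper's code), makes the flattening concrete:

```python
import numpy as np

n_s, n_r = 3, 4                      # toy numbers of species and parameters (illustrative)
d = n_s * n_r                        # total number of flattened sensitivities

def m(i, j):
    """1-based flattened index from the text: m(i, j) = j + (i - 1) * n_r."""
    return j + (i - 1) * n_r

# Toy linearised reaction operator acting on species, and per-species diffusivities.
L_tilde = np.arange(float(n_s * n_s)).reshape(n_s, n_s)
kappa_species = np.array([0.1, 0.2, 0.3])

# kappa_{mn} = kappa_i * delta_{mn}  and  L_{s,mn} = L_tilde_{ii'} * delta_{jj'}
# are exactly Kronecker products with the n_r x n_r identity.
L_s = np.kron(L_tilde, np.eye(n_r))
kappa = np.kron(np.diag(kappa_species), np.eye(n_r))

# Spot-check entries against the index map (converting to 0-based indexing).
i, ip, j = 2, 3, 1
assert L_s[m(i, j) - 1, m(ip, j) - 1] == L_tilde[i - 1, ip - 1]
assert kappa[m(i, j) - 1, m(i, j) - 1] == kappa_species[i - 1]
```

In particular, the block-diagonal form shows why a sensitivity with respect to parameter $j$ only couples to other sensitivities with $j'=j$.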
\clearpage \section{Reactive source term specification}\label{app:D} \begin{table}[hbtp] \centering \def\arraystretch{1.25}% \begin{tabular}{l} \hline $\bm{s}_1 = (\alpha_1 [13][2])/(\alpha_2 + [2]) -\alpha_3[1][15]$ \\ $\bm{s}_2 = -(\alpha_1 [13][2])/(\alpha_2 + [2])$ \\ $\bm{s}_3 = (\alpha_4 [9][4])/(\alpha_5 + [4]) - \alpha_6[3] - (\alpha_7[17][3])/(\alpha_8 +[3]) $ \\ $\bm{s}_4 = (\alpha_4 [9][4])/(\alpha_5 + [4])$ \\ $\bm{s}_5 = (\alpha_9 [9][6])/(\alpha_{10} + [6]) - \alpha_{11}[5]- (\alpha_{12}[17][5])/(\alpha_{13} +[5]) $ \\ $\bm{s}_6 = -(\alpha_9 [9][6])/(\alpha_{10} + [6])$ \\ $\bm{s}_7 = (\alpha_{14} [24][8])/(\alpha_{15} + [8]) - \alpha_{16}[7][15] -\alpha_{17}[16][7]$ \\ $\bm{s}_8 = -(\alpha_{14} [24][8])/(\alpha_{15} + [8])$ \\ $\bm{s}_9 = (\alpha_{18} [25][10])/(\alpha_{19} + [10]) -\alpha_{20}[9][15]$ \\ $\bm{s}_{10} = -(\alpha_{18} [25][10])/(\alpha_{19} + [10])$ \\ $\bm{s}_{11} = (\alpha_{21} [9][12])/(\alpha_{22} + [12]) - (\alpha_{23}[21][11])/(\alpha_{24} +[11]) $ \\ $\bm{s}_{12} = -(\alpha_{21} [9][12])/(\alpha_{22} + [12])$ \\ $\bm{s}_{13} = (\alpha_{25} [9][14])/(\alpha_{26} + [14]) - \alpha_{27}[13][15] -\alpha_{28}[13][19]$ \\ $\bm{s}_{14} = -(\alpha_{25} [9][14])/(\alpha_{26} + [14])$ \\ $\bm{s}_{15} = -(\alpha_3[1] + \alpha_{16}[7] + \alpha_{20}[9] +\alpha_{27}[13])[15]$ \\ $\bm{s}_{16} = -\alpha_{17}[16][7]$ \\ $\bm{s}_{17} = (\alpha_{29} [9][18])/(\alpha_{30} + [18]) -\alpha_{31}[17][19]$ \\ $\bm{s}_{18} = -(\alpha_{29} [9][18])/(\alpha_{30} + [18])$ \\ $\bm{s}_{19} = -\alpha_{31}[17][19] -\alpha_{28}[13][19]$ \\ $\bm{s}_{20} =0$\\ $\bm{s}_{21} = (\alpha_{32} [20][22])/(\alpha_{33} + [22]) -\alpha_{34}[21][23]$ \\ $\bm{s}_{22} = -(\alpha_{32} [20][22])/(\alpha_{33} + [22])$ \\ $\bm{s}_{23} = -\alpha_{34}[21][23]$ \\ \hline \end{tabular} \caption{Reactive source terms with species concentration denoted by $[\cdot]$. 
Each $\bm{s}_i$ is scaled by $10^2$ to adjust the reaction time scale to that of the flow, and the parameter values are assigned as follows: $\alpha_1 = 2.54\times 10^{-2}$, $\alpha_2 = 160$, $\alpha_3 = 3.74\times10^{-5} $, $\alpha_4 = 0.449$, $\alpha_5 = 1.12 \times 10^{5}$, $\alpha_6 = 5.13 \times 10^{-4}$, $\alpha_7 = 2.36 \times 10^{-2}$, $\alpha_8 = 14.6$, $\alpha_9 = 6.24\times 10^{-2}$, $\alpha_{10} = 140.5$, $\alpha_{11} = 3.93 \times 10^{-4}$, $\alpha_{12} = 2.36 \times 10^{-2}$, $\alpha_{13} = 14.6$, $\alpha_{14} = 5.523$, $\alpha_{15} = 160$, $\alpha_{16} = 8.01 \times 10^{-4}$, $\alpha_{17} = 1.11 \times 10^{-3}$, $\alpha_{18} = 3.105$, $\alpha_{19} = 1060$, $\alpha_{20} = 1.65 \times 10^{-3}$, $\alpha_{21} = 8.177$, $\alpha_{22} = 3160$, $\alpha_{23} = 3.456$, $\alpha_{24} = 2.50 \times 10^{5}$, $\alpha_{25} = 1.80 \times 10^{-5}$, $\alpha_{26} = 50$, $\alpha_{27} = 3.70\times 10^{-6}$, $\alpha_{28} = 3.00 \times 10^{-8}$, $\alpha_{29} = 9.01 \times 10^{-2}$, $\alpha_{30} = 3190$, $\alpha_{31} = 1.52 \times 10^{-9}$, $\alpha_{32} = 2.77 \times 10^{-2}$, $\alpha_{33} = 18$, and $\alpha_{34} = 2.22 \times 10^{-4}$.} \label{tab:rxn} \end{table} \section{Conclusions} We present a real-time reduced-order modeling approach for the computation of sensitivities in evolutionary systems governed by time-dependent ordinary/partial differential equations. The computational cost of solving the f-OTD equations of rank $r$ is roughly equivalent to that of solving $r$ forward sensitivity equations. We demonstrated that the rank of f-OTD for two diverse applications is much smaller than the number of parameters. In contrast to adjoint-based methods, f-OTD requires solving a system of \emph{forward} equations and therefore does not require any I/O operations. We showed that a single set of f-OTD modes can be formulated to compress the sensitivities of multi-variable PDEs.
We demonstrated this capability by computing sensitivities of multiple species with respect to reaction parameters in a turbulent reactive flow. In contrast to traditional ROM approaches, f-OTD extracts the low-rank approximation directly from the sensitivity equations, whereas data-driven approaches, such as POD or DMD, require the full-dimensional sensitivity data. Moreover, the low-rank subspace in the data-driven approach is fine-tuned to particular operating conditions, whereas the f-OTD subspace evolves with the dynamics of the system and does not require such fine-tuning. As such, f-OTD is an \emph{on the fly} model compression that is achieved by extracting instantaneous correlated structures in the solution. \section{Demonstration Cases} \subsection{R\"{o}ssler system} We first present a simple demonstration of f-OTD by computing sensitivities of the R\"{o}ssler system. The R\"{o}ssler system is governed by: \begin{equation*} \frac{dv_1}{dt} = -v_2 - v_3, \quad \quad \frac{dv_2}{dt} = v_1 + \alpha_1v_2, \quad \quad \frac{dv_3}{dt} = \alpha_2 + v_3(v_1 - \alpha_3). \end{equation*} In the above equations, we set $\alpha_1=\alpha_2=0.1$ and $\alpha_3=14$, which are common values used to study the chaotic behavior of the attractor. The goal is to calculate the sensitivity of $\bm{v}$ with respect to the model parameters $\bs{\alpha} = (\alpha_1, \alpha_2,\alpha_3)$, i.e., $\partial\bm{v}/\partial\bs{\alpha}$.
To this end, we take the derivative of the above system of equations with respect to model parameter $\alpha_i$ to obtain the sensitivity equation \begin{equation}\label{eqn:sensitivity_finite} \frac{d\bm{V}'}{dt} = \bm{L} \bm{V}' + \bm{F}', \end{equation} where \[ \bm{L} = \begin{bmatrix} 0 & -1 & -1 \\ 1 & \alpha_1 & 0 \\ v_3 & 0 & v_1-\alpha_3 \end{bmatrix}, \quad \bm{V}' = \begin{bmatrix} \vline & \vline & \vline \\ \bm{v}'_1 & \bm{v}'_2 & \bm{v}'_3 \\ \vline & \vline & \vline \end{bmatrix}, \quad \bm{F}' = \begin{bmatrix} 0 & 0 & 0\\ v_2 & 0 & 0 \\ 0 & 1 & -v_3 \end{bmatrix}, \] $\bm{v}'_i$ is the sensitivity of the position with respect to $\alpha_i$, $\bm{L} \in \mathbb{R}^{n\times n}$, and $\bm{F}' \in \mathbb{R}^{n\times d}$. \begin{figure} \centering \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/ros_attractor.pdf} \label{fig:ros_att} } \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/ros_error.pdf} \label{fig:ros_error} } \caption{(a) Chaotic R\"{o}ssler attractor with optimal f-OTD subspace shown in green and OTD subspace shown in black for $r=2$. Red arrows depict the orthonormal sensitivity vectors that define each subspace. (b) Percent error for $e(t)$ plotted versus time for the f-OTD and OTD subspaces.} \end{figure} We choose a subspace with dimension $r=2$ for the low-rank approximation of the three-dimensional ($d=3$) sensitivities ($\bm{V}'$). Although OTD modes are based on perturbations of the initial condition (IC) in all directions of the phase space, rather than on parametric sensitivities, it is instructive to contrast OTD with f-OTD to better understand the latter. To this end, we build two real-time ROMs using OTD modes and f-OTD modes.
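Since $n=d=3$ here, the full sensitivity system can be integrated directly at negligible cost, and it serves as the reference against which both ROMs are compared. A minimal sketch follows (our own RK4 implementation; the initial condition and step size are illustrative, not the paper's):

```python
import numpy as np

# Integrate the Rossler state v and the full sensitivity dV'/dt = L V' + F'
# jointly with classical RK4 (illustrative sketch only).
a1, a2, a3 = 0.1, 0.1, 14.0

def g(v):
    return np.array([-v[1] - v[2], v[0] + a1 * v[1], a2 + v[2] * (v[0] - a3)])

def L(v):
    return np.array([[0.0, -1.0, -1.0],
                     [1.0, a1, 0.0],
                     [v[2], 0.0, v[0] - a3]])

def F(v):  # columns are dg/d(alpha_1), dg/d(alpha_2), dg/d(alpha_3)
    return np.array([[0.0, 0.0, 0.0],
                     [v[1], 0.0, 0.0],
                     [0.0, 1.0, -v[2]]])

def rhs(z):
    v, V = z[:, 0], z[:, 1:]          # state (col 0) and 3x3 sensitivity matrix
    return np.column_stack([g(v), L(v) @ V + F(v)])

def rk4_step(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.column_stack([np.array([1.0, 1.0, 1.0]), np.zeros((3, 3))])
dt = 1e-3
for _ in range(1000):                 # integrate to t = 1
    z = rk4_step(z, dt)
V_prime = z[:, 1:]                    # full sensitivities dv/dalpha
```

The f-OTD (or OTD) approximation can then be compared against `V_prime` at matching times.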
In the case of OTD, we solve the OTD evolution equation and project the forced sensitivity Equation \ref{eqn:sensitivity_finite} onto the OTD modes, resulting in: \begin{equation*} \frac{d\bm{U}_{otd}}{dt} = (\bm{I}-\bm{U}_{otd}\bm{U}_{otd}^T)\bm{L}\bm{U}_{otd} \quad \mbox{and} \quad \frac{d\bm{Y}_{otd}}{dt}=\bm{Y}_{otd}\bm{U}^T_{otd}\bm{L}^T\bm{U}_{otd} + \bm{F}'^T\bm{U}_{otd}. \end{equation*} We also solved the f-OTD evolution Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} for the finite-dimensional system. Both OTD and f-OTD modes are initialized with the same subspace, and the evolution equations are solved for $T_f = 10$ units of time. These subspaces are initialized by first solving the full-dimensional sensitivity Equation \ref{eqn:sensitivity_finite} for a single time step $\Delta t=10^{-2}$ and then computing the OTD and f-OTD subspaces as the first two left singular vectors of $\bm{V}'(x,t=\Delta t)$. In Figure \ref{fig:ros_att}, both OTD and f-OTD subspaces are visualized along with the attractor of the R\"{o}ssler system. The OTD subspace is shown at only one instant for clarity; that instant corresponds to the case where the nonlinear dynamics is in the $v_1-v_2$ plane. At this point, the OTD subspace is oriented such that it nearly coincides with the $v_1-v_2$ plane. This result is to be expected since the OTD subspace follows the sensitivities associated with perturbations in the IC, and we know that the IC-perturbed solutions lie on the \emph{same} attractor. On the other hand, the f-OTD subspace is correctly oriented along the most sensitive subspace for perturbations in the model parameters, i.e., $\delta\bs{\alpha}=(\delta \alpha_1,\delta \alpha_2,\delta \alpha_3)$, which lead to perturbations in the attractor itself. That is, the perturbed solutions lie on \emph{different} attractors, which can readily be seen as $\delta \bs{\alpha}$ results in nonzero $\delta v_3$, despite $v_3\simeq 0$.
This results in the f-OTD subspace having a large out-of-plane component in the $v_3$ direction, which the OTD subspace fails to capture in Figure \ref{fig:ros_att}. In Figure \ref{fig:ros_error}, the percent errors $e(t)$ are shown for OTD and f-OTD, which confirms that f-OTD performs significantly better than OTD. This simple example demonstrates that the OTD basis is not optimal and may be inaccurate for reduced-order modeling of the forced sensitivity equation. \subsection{Chaotic Kuramoto-Sivashinsky equation} The objective of this example is to evaluate the performance of f-OTD in computing sensitivities of a chaotic system with many positive Lyapunov exponents and a high-dimensional parametric space. The intent of this example is not to compute the gradient of a time-averaged quantity for a chaotic system, but rather to compute the solution of the sensitivity equation for a chaotic system with many more unstable directions than the rank of the f-OTD subspace. For computing sensitivities of time-averaged quantities, one can use f-OTD in conjunction with Ruelle’s linear response formula \cite{R97,EHL04} to compute ensemble sensitivities. To this end, we consider the sensitivity of the Kuramoto-Sivashinsky (KS) equation with respect to a time-dependent forcing parameter $\alpha(t)$. The KS equation is a fourth-order PDE given by: \begin{align}\label{eqn:ks_pde} \pfrac{\bm{v}}{t} + \frac{1}{2}\pfrac{\bm{v}^2}{x} + \pfrac{^2 \bm{v}}{x^2} + \nu\pfrac{^4 \bm{v}}{x^4} = \alpha(t)\sin{\left( 2\pi x/L \right)}, \quad x\in[0,L], \end{align} where $\bm{v}=\bm{v}(x,t)$. Approximately 110 positive Lyapunov exponents exist for the parameters used in this study: $\nu=1$ and $L=1000$. The space-time solution of Equation \ref{eqn:ks_pde} for these parameters is shown in Figure \ref{fig:ks_nonlin}.
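For reference, the right-hand side of the KS equation can be evaluated pseudo-spectrally on a periodic grid. The sketch below is ours (the paper uses an ETDRK4 integrator with $2^{13}$ Fourier modes; the grid size and forcing handling here are only illustrative):

```python
import numpy as np

# Pseudo-spectral evaluation of the KS right-hand side on a periodic grid:
# dv/dt = -(1/2)(v^2)_x - v_xx - nu * v_xxxx + alpha * sin(2*pi*x/L)
L_dom, n, nu = 1000.0, 512, 1.0
x = np.linspace(0.0, L_dom, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L_dom / n)   # wavenumbers

def ks_rhs(v, alpha=0.0):
    v_hat = np.fft.fft(v)
    # nonlinear term: -(1/2) ik (v^2)^; linear terms: (k^2 - nu k^4) v^
    rhs_hat = -0.5j * k * np.fft.fft(v ** 2) + (k ** 2 - nu * k ** 4) * v_hat
    return np.real(np.fft.ifft(rhs_hat)) + alpha * np.sin(2.0 * np.pi * x / L_dom)
```

For a single Fourier mode $v=\sin(\kappa x)$ with $\kappa=2\pi/L$, this reproduces the analytic value $-(\kappa/2)\sin(2\kappa x) + (\kappa^2-\nu\kappa^4)\sin(\kappa x)$ to machine precision.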
\begin{figure} \centering \includegraphics[width=\textwidth]{figures/ks_nonlin_sln.pdf} \caption{Solution of the chaotic Kuramoto-Sivashinsky equation $\bm{v}(x,t)$ solved on a domain of length $L=1000$ for $T_f=100$ units of time. } \label{fig:ks_nonlin} \end{figure} Here $\alpha(t)$ represents an infinite-dimensional parametric space. To compute the sensitivities numerically, we consider a discrete representation of $\alpha(t)$ in the interval $t_i\in[0,T_s]$, where $T_s\leq T_f$ is a subset of the full integration time $T_f$, and $t_i$ is a discrete instance in time. To this end, we consider the value of $\alpha(t)$ at discrete times $t_i=(i-1)\times \Delta t$, where $\Delta t$ is the time step. This results in a vector, $\bs{\alpha} = (\alpha_1,\alpha_2,\dots,\alpha_d)$, where $\alpha_{i} = \alpha(t_i)$ and $d=T_s/\Delta t$ is the number of instances in time (i.e., the number of parameters). In general, $\Delta t$ can be chosen independently of the numerical time-integration step size; however, for simplicity, we use the same value of $\Delta t$ for both the parametric discretization and the numerical integration of the nonlinear solver and f-OTD equations. In this example, we consider $\Delta t = 10^{-2}$ and $T_s=10$, which results in $d=1000$ parameters. Further decrease in $\Delta t$ did not change our results. This leads to the sensitivity of $\bm{v}$ with respect to the value of $\alpha(t)$ at 1000 evenly spaced instances in time. We evolve these sensitivities over the interval $t\in[0,T_f]$ with $T_f=100$. We also choose $\alpha(t)=0$ for $t\in[0,T_f]$, and therefore, the nonlinear solution $\bm{v}(t)$ is the solution of the unforced KS equation. We consider the time-discrete form of Equation \ref{eqn:ks_pde} and differentiate with respect to design parameter $\alpha_i$.
This leads to an evolution equation for the sensitivity of $\bm{v}$ with respect to $\alpha_i$, in which the linear operator and forcing terms are: \begin{equation}\label{eqn:ks_sens} \mathcal{L}(\bm{v}'_i)= -\left[\pfrac{(\bm{v} \bm{v}'_i)}{x} + \pfrac{^2 \bm{v}'_i}{x^2} + \nu\pfrac{^4 \bm{v}'_i}{x^4}\right] \ \mbox{and} \ \bm{f}'_i = \delta(t-t_i)\sin\left( 2\pi x/L \right), \ i=1,2,\dots, d, \end{equation} where $\delta(t-t_i)=0$ for $t\neq t_i$ and $\delta(t-t_i)=1$ for $t= t_i$. Our goal is to solve Equation \ref{eqn:ks_sens} using f-OTD. \begin{figure} \centering \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/ks_error.pdf} \label{fig:ks_error} } \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/ks_sing_val.pdf} \label{fig:ks_sing_val} } \caption{(a) Comparison of the reconstruction error between the f-OTD approximation ($e(t)$) and the optimal rank-$r$ approximation ($e_u(t)$) for different reduction sizes. The resolved error, $e_r(t)$, dominates the f-OTD error for long-term integration. The error decreases as the number of modes increases. (b) Comparison of singular values between f-OTD and the optimal low-rank decomposition for $r=5$.} \end{figure} We discretize the KS equation and the f-OTD equations using $n=2^{13}=8192$ Fourier modes and use the fourth-order exponential time-differencing Runge-Kutta (ETDRK4) time-stepping scheme \cite{KT05}. We verify our solution by directly solving Equation \ref{eqn:ks_sens} for all 1000 sensitivities. Further decreasing $\Delta t$ and increasing the number of Fourier modes did not change our results. We also compare the f-OTD error with that of the optimal instantaneous same-rank approximation of the full sensitivities, which is obtained by computing the SVD of $\bm{V}'(x,t)$ at each time. In Figure \ref{fig:ks_error}, we compare the reconstruction error of f-OTD ($e(t)$) with the reconstruction error of the same-rank SVD ($e_u(t)$).
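The two error measures can be made concrete with a small synthetic example. In the sketch below, an SVD basis stands in for the f-OTD modes, the matrix stands in for a snapshot of $\bm{V}'$, and the percent Frobenius-norm definitions of $e(t)$ and $e_u(t)$ are our assumption:

```python
import numpy as np

# Reconstruction errors: f-OTD-style projection error e vs. optimal rank-r
# SVD truncation error e_u (synthetic stand-in data; illustrative only).
rng = np.random.default_rng(2)
N, d, r = 200, 50, 5
V = rng.standard_normal((N, r)) @ rng.standard_normal((r, d))  # rank-r signal
V += 1e-3 * rng.standard_normal((N, d))                        # small noise

U_svd, S, Vt = np.linalg.svd(V, full_matrices=False)
V_opt = (U_svd[:, :r] * S[:r]) @ Vt[:r]        # optimal rank-r approximation
e_u = 100 * np.linalg.norm(V - V_opt) / np.linalg.norm(V)

U = U_svd[:, :r]                               # stand-in for the f-OTD modes
Y = V.T @ U                                    # coefficients from projection
e = 100 * np.linalg.norm(V - U @ Y.T) / np.linalg.norm(V)
```

With a genuine f-OTD basis $e \geq e_u$ always holds, since the SVD truncation is the optimal same-rank approximation; here the two coincide because the basis is taken from the SVD itself.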
We also show the resolved error $e_r(t)$, which measures the discrepancy between the f-OTD approximation and the optimal same-rank approximation. We compute these errors for $r=1,3$ and 5. While the optimal low-rank approximation with a single mode captures approximately 99\% of the energy of the full sensitivity (see Figure \ref{fig:ks_sing_val}), the f-OTD approximation performs poorly with only a single mode, which is a drastic compression of 1000 sensitivities. This is a direct result of the memory effect from the lost interactions with the unresolved modes ($e_r(t)$), which ultimately dominate the error for long-term integration. By increasing the number of f-OTD modes, both $e(t)$ and $e_r(t)$ decrease. It is possible to control the error in real time through an adaptive strategy that adds/removes modes according to an appropriate criterion. For example, a candidate criterion could be $p=\sigma^2_r(t)/\sum_{i=1}^r \sigma^2_i(t)$, where for $p<p_{th}$ the last mode can be removed and for $p>p_{th}$ a new mode can be added. See \cite{Babaee:2017aa} for similar strategies for adaptive mode addition and removal. In Figure \ref{fig:ks_sing_val}, we compare the 15 largest instantaneous singular values of the quasimatrix $\bm{V}'(x,t)$ with those obtained from f-OTD with rank $r=5$, which shows that f-OTD closely captures the most dominant subspace. In Figures \ref{fig:ks_y1} and \ref{fig:ks_y2}, the orthonormalized coefficients of the first two dominant f-OTD modes for the case of $r=5$ are compared to the right singular vectors from the instantaneous SVD of $\bm{V}'(x,t)$. These coefficients represent the hidden parametric space: for example, $\hat{\bm{y}}_1$ is a series of weights that represent the contribution of each of the $d=1000$ sensitivities to the most dominant direction of the full sensitivity matrix, $\hat{\bm{u}}_1$.
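The candidate mode-addition/removal criterion can be sketched as follows. The two separate thresholds (a dead band to avoid oscillatory add/remove cycles) and their numeric values are our additions; the paper's criterion uses a single threshold $p_{th}$:

```python
import numpy as np

# Sketch of an adaptive rank criterion: p = sigma_r^2 / sum_i sigma_i^2.
# Threshold values are arbitrary placeholders, not from the paper.
def adapt_rank(sigma, p_low=1e-8, p_high=1e-3):
    """Return suggested rank change (-1, 0, +1) from current singular values."""
    p = sigma[-1] ** 2 / np.sum(sigma ** 2)
    if p < p_low:
        return -1          # last mode carries negligible energy: remove it
    if p > p_high:
        return +1          # last mode is still energetic: add a mode
    return 0               # keep the current rank
```

Such a check would be evaluated at every time step (or every few steps) on the instantaneous f-OTD singular values.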
Due to the chaotic nature of this problem, we observe that these coefficients can be highly time-dependent, especially for the lower-energy modes; see $\hat{\bm{y}}_2$. Nevertheless, we have demonstrated that f-OTD extracts the most dominant subspace and associated coefficients of the sensitivity matrix for a chaotic system with a large number of unstable directions and parameters. \begin{figure}[btp] \centering \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/y1.pdf} \label{fig:ks_y1} } \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/y2.pdf} \label{fig:ks_y2} } \caption{Kuramoto-Sivashinsky: The first two columns of the orthonormalized design-variables matrix shown at different instances in time: (a) $\hat{\bm{y}}_1(t)$ (b) $\hat{\bm{y}}_2(t)$. The horizontal axis corresponds to the $i^{th}$ design parameter $\alpha_i$.} \end{figure} \subsection{Species transport equation: Turbulent reactive flow} In this example, we show how a single set of f-OTD modes can lead to significant computational gains for computing sensitivities in problems with multiple coupled field variables, where each field variable has a different linear operator. We consider a species transport problem, where parameter identification via sensitivity analysis plays an important role in allocating computational and experimental resources to reduce parameter uncertainty. Moreover, sensitivity analysis is used to create reduced reaction mechanisms for complex chemical systems involving a large number of species and reactions. See references \cite{BOR15,LCR19,LLC20}.
\begin{figure}[btp] \centering \includegraphics[width=.7\textwidth]{figures/rnx_set_up.pdf} \caption{Schematic of the flow visualized with a passive scalar.} \label{fig:rxn_setup} \end{figure} \subsubsection{Problem setup} To this end, we consider a 2D incompressible turbulent reactive flow: \begin{equation}\label{eq:ADR} \pfrac{\bm{v}_i}{t} + \left( \bm{w}\cdot\nabla \right)\bm{v}_i = \tilde{\kappa}_{ik}\nabla^{2}\bm{v}_{k} + \bm{s}_i, \end{equation} where $\bm{w}=(\bm{w}_{x_1}(x_1,x_2,t),\bm{w}_{x_2}(x_1,x_2,t))$ is the velocity field from the 2D incompressible Navier-Stokes equations, $\bm{v}_i=\bm{v}_i(x_1,x_2,t)$ is the concentration of species $i$, $\tilde{\kappa}_{ik}\in\mathbb{R}^{n_s\times n_s}$ is the diffusion coefficient matrix, and $\bm{s}_i=\bm{s}_i(\bm{v}_1,\bm{v}_2,\dots,\bm{v}_{n_s};\bs{\alpha})$ is the nonlinear reactive source term. We choose a diagonal diffusion coefficient matrix, where the $i$th diagonal entry is the diffusion coefficient of the $i$th species, and $n_s$ is the number of species. For the reactive source term $\bm{s}_i$, we consider the biological reactions used in \cite{LYTK15}. These terms are listed in Table \ref{tab:rxn} in Appendix \ref{app:D} for reference. A schematic of the flow is shown in Figure \ref{fig:rxn_setup}, where $L$ and $H$ are the channel length and height, respectively. The no-slip boundary condition is enforced at the top and bottom walls, while the outflow boundary condition is enforced downstream. At the inlet, a parabolic velocity profile with average velocity $\overline{w}$ is prescribed. The Reynolds number based on the reference length $H/2$ and the kinematic viscosity $\nu$ is $Re=\overline{w}H/(2\nu)=1000$. The inlet boundary condition is $\bm{v}_i(0,x_2,t) = \frac{1}{2}\big(\tanh\left((x_2+H/2)/\delta\right) -\tanh\left((x_2-H/2)/\delta\right) \big)$ for all species, where $\delta = 0.1$. The velocity field is governed by the two-dimensional incompressible Navier-Stokes equations.
Since the velocity field is independent of the species, we solved it once using the spectral/hp element method with 4008 quadrilateral elements and polynomial order 5. For more details on the spectral element method, see for example \cite{KS05,Babaee:2013ab,Babaee:2013aa}. We then solve the species transport equations and f-OTD equations in the rectangular domain shown by dashed lines in Figure \ref{fig:rxn_setup}. In the rectangular domain, we used structured spectral elements with 50 elements in the $x_1$ direction and 15 elements in the $x_2$ direction, with spectral polynomials of order 5 in each direction. The velocity field was interpolated onto this grid. The f-OTD equations, which are presented in the next sections, and the species transport equation are integrated forward in time using RK4 with $\Delta t = 5 \times 10^{-4}$. \subsubsection{f-OTD formulation} Our goal is to calculate the sensitivity of the species concentrations with respect to the reaction parameters $\bs{\alpha}=(\alpha_1,\alpha_2,\dots,\alpha_{n_r})$, where $n_r$ is the number of reaction parameters. To this end, we take the derivative of the above equation with respect to reaction parameter $\alpha_j$ to obtain an evolution equation for the sensitivity: \begin{equation}\label{eqn:rxn_sens_tensor} \pfrac{\tilde{\bm{v}}_{ij}'}{t} + \left( \bm{w}\cdot\nabla \right)\tilde{\bm{v}}_{ij}' = \tilde{\kappa}_{ik}\nabla^{2}\tilde{\bm{v}}_{kj}' + \tilde{\mathcal{L}}_{\bm{s}_{ik}} \tilde{\bm{v}}_{kj}'+\tilde{\bm{s}}'_{ij}, \end{equation} where $\tilde{\bm{v}}'_{ij}=\partial \bm{v}_i/\partial \alpha_j \in \mathbb{R}^{\infty\times 1}$ is the sensitivity of the concentration of species $\bm{v}_i$ with respect to reaction rate $\alpha_j$, $\tilde{\mathcal{L}}_{\bm{s}_{ik}}=\partial \bm{s}_i/ \partial \bm{v}_k$ is the linearized reactive source term, and $\tilde{\bm{s}}'_{ij}= \partial \bm{s}_i/ \partial \alpha_j$.
In the above equation, $\tilde{\mathcal{L}}_{\bm{s}_{ik}} \tilde{\bm{v}}_{kj}'$ should be interpreted as a matrix-matrix multiplication at any point $(x_1,x_2)$ in the physical space. In this notation, sensitivities are represented by a quasitensor, i.e., $\tilde{\bm{V}}' =[\tilde{\bm{v}}'_{ij}]$ with $i=1,2, \dots, n_s$ and $j=1,2,\dots, n_r$, where $\tilde{\bm{V}}' \in \mathbb{R}^{\infty \times n_s \times n_r}$ is the third-order quasitensor depicted in the left-hand side of Figure \ref{fig:tensor_flatten}. Here $\tilde{\cdot}$ denotes terms associated with the tensor equation. In the discrete representation of $\tilde{\bm{V}}'$, the dimension $\infty$ is replaced with the number of grid points. Solving for the sensitivities $\tilde{\bm{v}}_{ij}'$ using the adjoint method would require solving $n_s$ AEs: one adjoint field for each species. See for example \cite{BOR15,LCR19,LLC20}. \begin{figure} \centering \includegraphics[width=.8\textwidth]{figures/tensor_flatten.pdf} \caption{Schematic of the tensor flattening from a 3D quasitensor to a 2D quasimatrix.} \label{fig:tensor_flatten} \end{figure} To solve for sensitivities using f-OTD, one could also solve for $n_s$ sets of f-OTD modes, i.e., one set of f-OTD modes for each species. This straightforward approach would only exploit the correlations between the sensitivities of each species separately, i.e., correlations between $\bm{v}'_{ij}$ for a fixed $i$, while leaving the correlations between sensitivities of different species unexploited. In this example, we demonstrate how a single set of f-OTD modes can be used to accurately model the entire sensitivity tensor. Therefore, the compression ratio, both in terms of memory and computational cost, in comparison to the full sensitivity equation is $r/d$. In comparison to AE, the compression ratio is $r/n_s$. Also, f-OTD is a forward system and does not impose any I/O operations.
To this end, we flatten the sensitivity tensor, as shown in Figure \ref{fig:tensor_flatten}, which results in a quasimatrix of size $\infty\times d$. Here, $d=n_s\times n_r$, where $n_s=23$ and $n_r=34$. This leads to a total of $d=782$ sensitivity equations that we seek to compute. In Appendix \ref{app:E}, we show that the flattened sensitivity evolution equation is: \begin{equation}\label{eqn:rxn_sens_flatten} \pfrac{\bm{v}'_m}{t} + (\bm{w}\cdot \nabla)\bm{v}'_m = \kappa_{mn}\nabla^2 \bm{v}'_n + \mathcal{L}_{\bm{s}_{mn}}\bm{v}'_n + \bm{s}'_m, \end{equation} where $m(i,j)=j+(i-1)n_r$ and $n(i',j')=j' + (i'-1)n_r$, resulting in $m,n=1,2,\dots,d$. Equation \ref{eqn:rxn_sens_tensor} is a tensor evolution equation, whereas Equation \ref{eqn:rxn_sens_flatten} is the equivalent matrix evolution equation. The tensor flattening carried out here is similar to the unfolding carried out in the Tucker tensor decomposition \cite{KB09}. However, unlike the Tucker decomposition, we do not consider flattening the tensor in the other two dimensions of species and parameters. Each $\bm{y}_k(t)$ is a vector of size $(n_s n_r)\times 1$ and contains coefficients for species and parameters. Once the sensitivity tensor is flattened to a quasimatrix, we use f-OTD to extract low-rank structure from the quasimatrix. In Equation \ref{eqn:rxn_sens_flatten}, the linear operator changes from one species to another due to the different diffusion coefficients $\kappa_{mn}$. In Appendix \ref{app:E}, we show how the f-OTD evolution equations can be derived for this case, which differs from the previous demonstration cases. \begin{figure} \centering \subfigure[]{ \includegraphics[width=.45\textwidth]{figures/rxn_error.pdf} \label{Fig:Jet_Error} } \subfigure[]{ \includegraphics[width=.485\textwidth]{figures/rxn_sing_vals.pdf} \label{Fig:Jet_sing_vals} } \caption{(a) Percent error plotted as a function of time. Error decreases as the number of modes $r$ increases.
(b) Singular values plotted as a function of time for $r=8$.} \end{figure} We solve Equations \ref{eq:uspec} and \ref{eq:yspec} for different f-OTD ranks along with the species transport equation (Equation \ref{eq:ADR}). In Figure \ref{Fig:Jet_Error}, the f-OTD error $(e(t))$ and the optimal low-rank approximation error $(e_u(t))$ are shown for three different ranks, $r=2,5$, and $8$. Again, we observe that the growth of $e(t)$ surpasses $e_u(t)$ for long-term integration as a direct result of the lost interactions with the unresolved modes. However, with only 5-8 modes, f-OTD approximates 782 sensitivities with an error on the order of 0.1\%. These results can be explained by studying Figure \ref{Fig:Jet_sing_vals}, where we observe that more than 99\% of the system energy is captured by the reduction. The \% energy is calculated from the singular values as \% En. $=\sum_{i=1}^{r}\sigma_i^2/\sum_{i=1}^{d}\sigma_i^2\times 100$, and can be used to get a sense of the dimensionality of the system when expressed in the time-dependent basis. This allows the f-OTD algorithm to extract the latent features associated with the most dominant singular values and successfully approximate the full sensitivity tensor with a high degree of accuracy. \begin{figure}[tbp] \centering \includegraphics[width=.9\textwidth]{figures/rxn_modes.pdf} \caption{First three orthonormal f-OTD modes shown for $r=8$. Each row shows the modes at a different instance in time.} \label{fig:rxn_modes} \end{figure} In Figure \ref{fig:rxn_modes}, the time-dependent evolution of the three most dominant f-OTD modes is shown. These modes are energetically ranked, where low mode numbers correspond to larger (higher-energy) structures and high mode numbers correspond to finer (lower-energy) structures in the flow. As opposed to static bases, such as POD or DMD, the f-OTD modes evolve with the flow and exploit the instantaneous correlations between sensitivities.
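The \%-energy diagnostic used above is a one-line function of the singular values; the spectrum below is illustrative only:

```python
import numpy as np

# %En = sum_{i<=r} sigma_i^2 / sum_{i<=d} sigma_i^2 * 100
def percent_energy(sigma, r):
    return 100.0 * np.sum(sigma[:r] ** 2) / np.sum(sigma ** 2)

sigma = np.array([10.0, 0.5, 0.1, 0.01])   # illustrative singular values
```

A rapidly decaying spectrum like this one is what makes a small rank $r$ sufficient.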
While this system is low-dimensional in the time-dependent basis, when expressed in a POD or DMD basis the system is high-dimensional, and many modes are needed to capture the complex spatio-temporal evolution of $\bm{V}'$. See \cite{B19} for a comparison of time-dependent bases with POD and DMD. \begin{figure}[hbt!] \centering \includegraphics[width=.9\textwidth]{figures/Y_matrix.pdf} \caption{Orthonormalized f-OTD coefficients $\hat{\bm{y}}_1(t)$ and $\hat{\bm{y}}_2(t)$ visualized as a matrix with rows corresponding to species concentration and columns corresponding to reaction parameters. The color map shows the most dominant sensitivities at different time instances.} \label{fig:y_matrix} \end{figure} To demonstrate the interpretability of the f-OTD decomposition, we show how the hidden parameter space represented by $\hat{\bm{Y}}(t)$ can be used to identify the most important reaction parameters. In this context, importance refers to a parameter for which a small change in its value elicits a large change in the response of the system (i.e., one to which the system is highly sensitive). To demonstrate this capability of f-OTD, the first two sensitivity coefficients are visualized as matrices in Figure \ref{fig:y_matrix}, where each $\hat{\bm{y}}_i$ is a $d\times 1$ vector that has been reshaped into an $n_s\times n_r$ matrix. In this form, the sensitivity coefficients are visualized using a heat map, with rows corresponding to species $i$ and columns corresponding to reaction parameter $j$. Using this heat map, Figure \ref{fig:y_matrix} shows that only a handful of sensitivities are non-zero, while the majority have zero contribution for the entire duration of the simulation.
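Both the flattening map $m(i,j)=j+(i-1)n_r$ and the inverse reshape used for the heat maps in Figure \ref{fig:y_matrix} are row-major reshapes. A small sketch (the grid size is a placeholder):

```python
import numpy as np

n_s, n_r = 23, 34
d = n_s * n_r                       # 782 flattened sensitivities

def m_index(i, j):                  # 1-based indices, as in the text
    return j + (i - 1) * n_r

# The map is exactly a row-major reshape of (grid, n_s, n_r) -> (grid, d).
N = 10                              # placeholder for the number of grid points
V_tensor = np.arange(N * d, dtype=float).reshape(N, n_s, n_r)
V_flat = V_tensor.reshape(N, d)

# Inverse: a d-vector of coefficients reshapes to the n_s x n_r heat map.
y_hat = np.arange(d, dtype=float)
heat_map = y_hat.reshape(n_s, n_r)
```

NumPy's default C (row-major) ordering makes the last index $j$ vary fastest, which matches the definition of $m(i,j)$.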
\section{Introduction} Sensitivity analysis is required in a diverse set of evolutionary systems that are governed by differential equations of the form $\dot{\bm{v}} = \bm{g}(\bm{v};\bs \alpha)$, where $\dot{(\sim)}=d(\sim)/dt$, $\bm{v} \in \mathbb{R}^n$ is the state-space variable and $\bs \alpha \in \mathbb{R}^d$ is the vector of design parameters. These sensitivities, denoted by $\bm{v}'_i = \partial \bm{v}/\partial \alpha_i$, $i=1, \dots, d$, are needed in numerous applications such as gradient-based optimization \cite{G00,G99}, optimal control \cite{BMT01}, grid adaptivity \cite{DK19}, and parameter identification \cite{EC11}, to name a few. The sensitivities are commonly computed via finite differences (FD), or by directly solving a sensitivity equation (SE) or adjoint equation (AE). The computational cost of using FD or SE scales linearly with the number of parameters, making them impractical when sensitivities with respect to a large number of parameters are needed. On the other hand, the computational cost of solving AE is independent of the number of parameters, as it requires solving a single ordinary/partial differential equation (ODE/PDE). While AE is certainly a preferred approach for computing sensitivities for stationary problems, for time-dependent problems the \emph{forward-backward} workflow of the adjoint solver can pose several challenges. In particular, solving AE imposes a significant storage cost, as the AE must be solved backward in time. Moreover, the adjoint operator utilizes the forward time-resolved solution of the nonlinear dynamical system, i.e., $\bm{v}$. As a result, the dynamical system must be solved forward in time and its time-resolved solution must be stored. The adjoint equation is then solved backward in time, with the nonlinear state read from disk at every time step. This workflow is not adequate for problems where real-time sensitivities are required, e.g., grid adaptivity for time-dependent problems \cite{DK19}.
Moreover, for high-dimensional dynamical systems, i.e., $n \sim \mathcal{O}(10^{10})$, the I/O operations imposed by the AE workflow can become a severe limitation. I/O limitations are becoming more restrictive on emerging high-performance computing architectures, and they are one of the major challenges in the transition from current sub-petascale and petascale computing to exascale computing \cite{AM_DOE_14}. For example, in high-fidelity simulations of turbulent reactive flows, the solution can only be stored at every 400th time step in order to maintain the I/O overhead at a reasonable level, while important events such as the ignition kernel occur rapidly, on the order of 10 simulation time steps \cite{BAB12}. Storing the time-resolved solution, as AE requires, already exceeds acceptable I/O levels for these problems, and this trend will become even more unfavorable at exascale. This alone gives rise to a growing need for algorithms that can accurately compute sensitivities while minimizing or eliminating I/O requirements, and this is one of the motivations of the method presented in this paper. \\ Sensitivities of a dynamical system with respect to different parameters are often highly correlated, and therefore they are amenable to low-rank approximations. To this end, a new low-dimensional model was recently presented in \cite{Babaee_PRSA} that can describe transient instabilities in high-dimensional nonlinear dynamical systems. This approach is based on a time-dependent basis known as the optimally time-dependent (OTD) modes. The evolution equation for the OTD modes is obtained by minimizing the functional \begin{equation} \mathcal{F}(\dot {\bm{u}}_1,\dot{\bm{u}}_2, \dots,\dot{\bm{u}}_r) =\sum_{i=1}^{r}\big\Vert \dot {\bm{u}}_i-\bm{L}(\bm{v}(t),t)\bm{u}_i(t)\big\Vert ^{2},\label{eq:functional} \end{equation} subject to the orthonormality of the OTD modes, i.e.
$\bm{u}_i^T\bm{u}_j = \delta_{ij}$, where $\bm{u}_i \in \mathbb{R}^n, i=1, \dots, r$ are the OTD modes. In the above functional, $\left\Vert \bm{u}\right\Vert ^{2}= \bm{u}^T\bm{u}$ and $\bm{L}(\bm{v}(t),t):=\nabla_{\bm{v}}\bm{g}$ is the instantaneous linearized operator. The optimality condition of the above variational principle leads to a closed-form evolution equation for the OTD subspace: $\dot{\bm{U}} = (\bm{I}-\bm{U}\bm{U}^T)\bm{L} \bm{U}$, where $\bm{U} = [\bm{u}_1 | \bm{u}_2 | \dots |\bm{u}_r ] \in \mathbb{R}^{n\times r}$ and $\bm{I}\in \mathbb{R}^{n \times n}$ is the identity matrix. It was later shown that the OTD subspace converges exponentially fast to the eigendirections of the Cauchy–Green tensor associated with the most intense finite-time instabilities \cite{BFHS17}. In this sense, the OTD reduction can be interpreted as a low-rank subspace that approximates the evolution of perturbed initial conditions in all directions of the phase space. One of the computational advantages of OTD is that it only requires solving forward equations. Moreover, the computational complexity of the OTD reduction scales linearly with the number of modes. OTD has also been used for flow control \cite{BMS18}, building precursors for bursting phenomena \cite{FS16}, as well as the detection of edge manifolds in infinite-dimensional dynamical systems \cite{BDSH20}. We note that time-dependent bases have also been developed in the context of stochastic reduced-order modeling; see for example \cite{SL09,CHZI13,MNZ15,Babaee:2017aa,PB20}. Our objective in this paper is to approximate sensitivities with respect to a large number of parameters using forward low-rank systems similar to OTD. In particular, we seek to reduce the computational cost of solving forward sensitivity equations by exploiting the \emph{correlations} between various sensitivities. However, OTD is not adequate when applied to systems subject to perturbations in a parametric space.
These perturbations are governed by the forced linear sensitivity equation: $\partial \bm{v}'_i/\partial t = \bm{L} \bm{v}'_i + \partial \bm{g}/\partial \alpha_i $, and in general, the OTD subspace is not an optimal basis for the evolution of $\bm{v}'_i$. To this end, we present a new approach based on a time-dependent basis for solving time-varying linear systems forced by a high-dimensional function. The contributions of this paper are twofold: (i) We present a new variational principle, whose optimality conditions lead to forward, real-time, low-rank evolution equations for the approximation of the forced sensitivity equation. We coin this approach ``forced OTD'', which we will simply refer to as f-OTD. (ii) We extend the application of the presented method to compute \emph{tensor-like} sensitivities. An example of tensor-like sensitivities arises in reactive flows, where the goal is to compute the sensitivity of $n_s$ species with respect to $m$ parameters. In these systems, the full sensitivities can be represented as a third-order tensor, where the first dimension is the number of grid points, the second dimension represents the species ($n_s$), and the third dimension represents the parameters ($m$). We show that, with a single set of orthonormal modes, we can approximate sensitivities by exploiting correlations between \emph{all} sensitivities. We compare the computational cost of f-OTD against adjoint-based sensitivities, where one adjoint equation must be solved for each species \cite{BOR15,LCR19,LLC20}. We show how the presented approach can be used for computing sensitivities with respect to a large number of parameters by solving forward low-rank evolution equations without the need to store the state variables. In the sections that follow, we present the formulation of the f-OTD method and demonstrate a number of outcomes.
We start in Section 2 with the variational principle whose optimality conditions lead to a set of closed-form evolution equations for a low-rank approximation of the forced sensitivity equation. In Section 3, we present three demonstration cases: (1) sensitivity with respect to model parameters in the R\"{o}ssler system; (2) sensitivity with respect to an infinite-dimensional forcing parameter in the chaotic Kuramoto-Sivashinsky equation; and (3) sensitivity with respect to reaction parameters for species transport in a turbulent reacting flow. In Section 4, we present the conclusions and implications of our work. \section*{Acknowledgments} We gratefully acknowledge the support received from the NASA Transformational Tools and Technologies Project, grant no. 80NSSC18M0150. We would also like to thank Dr. Joseph Derlaga for his support and intellectual insight throughout the process. \bibliographystyle{unsrt} \section{Methodology}\label{sec:Methodology} \subsection{Preliminaries} Let $\bm{u}(x,t)$ denote a time-dependent field variable. We denote the spatial domain by $D \subset \mathbb{R}^m$, where $m=$ 1, 2, or 3; the spatial coordinate is denoted by $x\in D$, and the function is evaluated at time $t$. We define the inner product of functions $\bm{u}(x,t)$ and $\bm{v}(x,t)$ as \begin{equation*} \inner{\bm{u}(x,t)}{\bm{v}(x,t)} = \displaystyle \int\limits_{D} \bm{u}(x,t) \bm{v}(x,t) dx \end{equation*} and the $L_2$ norm induced by the above inner product as \begin{equation*} \left\Vert\bm{u}(x,t)\right\Vert_{2} = \inner{\bm{u}(x,t)}{\bm{u}(x,t)}^{\frac{1}{2}}.
\end{equation*} We introduce a quasimatrix notation to represent a set of functions in matrix form, and denote the quasimatrix $\bm{U}(x,t)\in \mathbb{R}^{\infty\times r}$ as \cite{BT04}: \begin{equation*} \bm{U}(x,t) = \bigg[\bm{u}_1(x,t)\ \Big| \ \bm{u}_2(x,t) \ \Big| \ \dots \ \Big| \ \bm{u}_r(x,t) \bigg]_{\infty \times r}, \end{equation*} where the first dimension is infinite and represents the continuous state space contained by $D$, and the second dimension is discrete. Similarly, we use the term quasitensor for tensors whose first dimension is infinite. For example, $\bs{\mathcal{T}} \in \mathbb{R}^{\infty \times r_1 \times r_2}$ is a third-order quasitensor. Following this definition, we define the inner product between quasimatrices $\bm{U}(x,t)\in\mathbb{R}^{\infty\times r}$ and $\bm{V}(x,t)\in \mathbb{R}^{\infty\times d}$ as \begin{equation*} \bm{S}(t) = \inner{\bm{U}(x,t)}{\bm{V}(x,t)}, \end{equation*} where $\bm{S}(t)\in\mathbb{R}^{r\times d}$ is a matrix with components $S_{ij}(t) = \inner{\bm{u}_i(x,t)}{\bm{v}_j(x,t)}$, and $\bm{v}_j(x,t)$ is the $j$th column of $\bm{V}(x,t)$. The discrete analogue of this operation is the matrix multiplication $\bm{U}(t)^T \bm{V}(t)$, where $\bm{U}(t)\in\mathbb{R}^{n\times r}$ and $\bm{V}(t)\in\mathbb{R}^{n\times d}$ are space-discrete with $n$ grid points. Correspondingly, the Frobenius norm of a quasimatrix is defined as: \begin{equation*} \Big \Vert\bm{U}(x,t)\Big \Vert_{F} = \sqrt{\tr \inner{\bm{U}(x,t)}{\bm{U}(x,t)}}. \end{equation*} Similarly, we define the inner product between a quasimatrix $\bm{U}(x,t)$ and a function $\bm{v}(x,t)$ as: \begin{equation*} \bm{g}(t) = \inner{\bm{U}(x,t) }{\bm{v}(x,t)}, \end{equation*} where $\bm{g}(t)=(g_1(t),g_2(t),\dots,g_r(t))^T \in \mathbb{R}^{r\times 1}$ is a vector with components $g_i(t)=\inner{\bm{u}_i(x,t)}{\bm{v}(x,t)}$.
The discrete analogue of this operation is the matrix-vector multiplication $\bm{U}(t)^T \bm{v}(t)$, where $\bm{v}(t)\in\mathbb{R}^{n\times 1}$ is space-discrete with $n$ grid points. Finally, we define the multiplication between a quasimatrix and a vector \begin{equation*} \bm{c}(x, t) = \bm{U}(x,t)\bm{b}(t), \end{equation*} where $\bm{b}(t) = (b_1(t),b_2(t),\dots,b_r(t))^T\in\mathbb{R}^{r\times 1}$ is an arbitrary vector and $\bm{c}(x,t)\in\mathbb{R}^{\infty\times 1}$ is a function given by $\bm{c}(x, t)=b_i(t)\bm{u}_i(x,t)$. We use index notation, where repeated indices imply summation. We consider the nonlinear partial differential equation (PDE) for the evolution of $\bm{v}(x,t)$: \begin{equation}\label{eq:gennon} \pfrac{\bm{v}(x,t)}{t} = \mathcal{N}\left(\bm{v}(x,t); \bs{\alpha} \right), \quad t\in [0, T_f], \end{equation} where $\mathcal{N}$ is in general a nonlinear differential operator. Our goal is to compute the sensitivity of $\bm{v}(x,t)$ with respect to the design parameters $\bs{\alpha}$, which can either be infinite-dimensional, i.e., a function $\bs{\alpha} = \bs{\alpha}(x,t)$, or finite-dimensional, i.e., a vector $\bs{\alpha} = (\alpha_1, \alpha_2, \dots, \alpha_d)$. For simplicity of exposition, we consider the finite-dimensional parametric space. Differentiating Equation \ref{eq:gennon} with respect to the design parameter $\alpha_i$ leads to an evolution equation for the sensitivity of the dynamical system: \begin{equation}\label{eqn:sensitivity} \pfrac{\bm{v}_{i}'(x,t)}{t} = \mathcal{L}\left(\bm{v}_{i}'(x,t)\right) + \bm{f}'_{i}(x,t;\bs{\alpha}), \end{equation} where $\bm{v}_{i}' = \partial \bm{v}/\partial \alpha_i$ is the sensitivity of $\bm{v}(x,t)$ with respect to $\alpha_i$, $\mathcal{L} (\cdot) = \partial \mathcal{N}/ \partial \bm{v} \, (\cdot)$ is the linearized operator, and $\bm{f}'_{i} = \partial\mathcal{N}/ \partial \alpha_i$ is the forcing term.
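As a sanity check of Equation \ref{eqn:sensitivity}, consider the scalar toy ODE $dv/dt = \mathcal{N}(v;\alpha) = -\alpha v + \sin t$ (an illustrative choice of ours, not from the paper), for which $\mathcal{L}(w) = -\alpha w$ and $f' = \partial\mathcal{N}/\partial\alpha = -v$. The sketch below integrates the state and its sensitivity jointly and verifies the result against a central finite difference in the parameter:

```python
import numpy as np

# Toy check of the forced sensitivity equation dv'/dt = L(v') + f'
# for dv/dt = -a*v + sin(t): here L(w) = -a*w and f' = dN/da = -v.
def rhs(v, t, a):
    return -a * v + np.sin(t)

def sens_rhs(vp, v, a):
    return -a * vp - v  # L(v') + f'

a, dt, T = 0.7, 1e-4, 2.0
steps = int(round(T / dt))

def solve(a):
    v = 1.0
    for k in range(steps):
        v += dt * rhs(v, k * dt, a)
    return v

# Integrate the state and its sensitivity v' = dv/da jointly (forward Euler)
v, vp = 1.0, 0.0
for k in range(steps):
    dv, dvp = rhs(v, k * dt, a), sens_rhs(vp, v, a)
    v, vp = v + dt * dv, vp + dt * dvp

# The integrated sensitivity matches a central finite difference in a
eps = 1e-5
fd = (solve(a + eps) - solve(a - eps)) / (2 * eps)
assert abs(vp - fd) < 1e-5
```

Because the discrete sensitivity recursion is exactly the derivative of the discrete Euler map with respect to $\alpha$, the agreement with the finite difference holds to within the finite-difference truncation error.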
\subsection{Variational Principle for Reduced Order Modeling} Different sensitivities in a dynamical system tend to be highly correlated at any given time; therefore, these sensitivities can potentially be approximated effectively by a low-rank time-dependent subspace. In this section, we present a real-time reduced order modeling strategy that aims to extract this subspace and utilize it for building sensitivity ROMs. In particular, we present a variational principle whose first-order optimality conditions lead to the evolution equations of a time-dependent subspace and its coefficients. We estimate the sensitivities using the low-rank decomposition: \begin{equation} \bm{V}'(x,t) = \bm{U}(x,t)\bm{Y}(t)^T + \bm{E}(x,t), \end{equation} where $\bm{V}'(x,t) = \big[\bm{v}'_1(x,t)\ \big| \ \bm{v}'_2(x,t) \ \big| \ \dots \ \big| \ \bm{v}'_d(x,t) \big]_{\infty \times d}$ is the quasimatrix of sensitivities, $\bm{U}(x,t) = \big[\bm{u}_1(x,t)\ \big| \ \bm{u}_2(x,t) \ \big| \ \dots \ \big| \ \bm{u}_r(x,t) \big]_{\infty \times r}$ is a quasimatrix representing a rank-$r$ time-dependent orthonormal basis, in which $\inner{\bm{u}_i(x,t)}{\bm{u}_j(x,t)} = \delta_{ij}$, $\bm{Y}(t)=\big[\bm{y}_1(t)\ \big| \ \bm{y}_2(t) \ \big| \ \dots \ \big| \ \bm{y}_r(t) \big]_{d\times r}$ is the coefficient matrix, and $\bm{E}(x,t) \in \mathbb{R}^{\infty \times d}$ is the approximation error. The f-OTD decomposition is shown schematically in Figure \ref{fig:schematic}. \begin{figure}[tbp] \centering \includegraphics[width=1\textwidth]{figures/schematic.pdf} \caption{Overview of the reduced order modeling strategy. Shown on the left in red is the full-dimensional system of sensitivities that we seek to model using the f-OTD low-rank approximation. Shown on the right is the low-rank approximation, which consists of a set of temporally evolving orthonormal modes (green) and hidden design variables (gray).
The hidden design variables are coefficients that map the orthonormal basis to each sensitivity in the full-dimensional system. That is, each of the $d$ sensitivities is approximated as a linear combination of the $r$ orthonormal modes, where $r \ll d$. It is important to note that the orthonormal basis and hidden design variables are model-driven and evolve based on the linear sensitivity dynamics. Thus, the proposed method only requires solving a system of $r$ PDEs and $r$ ODEs for the modes and coefficients, respectively.} \label{fig:schematic} \end{figure} We formulate a variational principle with control parameters $\dot{\bm{U}}(x,t)$ and $\dot{\bm{Y}}(t)$, which seeks to optimally update the subspace $\bm{U}(x,t)$ and its coefficients $\bm{Y}(t)$ by minimizing the distance between the time derivative of the low-rank approximation and the full-dimensional sensitivity dynamics: \begin{equation}\label{eq:functional} \mathcal{F}(\dot{\bm{U}}(x,t), \dot{\bm{Y}}(t)) = \left\Vert \pfrac{(\bm{U}(x,t) \bm{Y}(t)^T)}{t} - \mathcal{L}\left(\bm{U}(x,t) \bm{Y}(t)^T\right)-\bm{F}'(x,t;\bs{\alpha}) \right\Vert_{F}^{2}, \end{equation} where $\bm{F}'(x,t) = \big[\bm{f}'_1(x,t)\ \big| \ \bm{f}'_2(x,t) \ \big| \ \dots \ \big| \ \bm{f}'_d(x,t) \big]_{\infty \times d}$. Taking the time derivative of the orthonormality condition leads to the following constraint for the minimization problem: \begin{equation}\label{eq:orthodot} \inner{\dot{\bm{u}}_i(x,t)}{\bm{u}_j(x,t)} + \inner{\bm{u}_i(x,t)}{\dot{\bm{u}}_j(x,t)} = 0. \end{equation} We denote $\phi_{ij}(t) = \inner{\bm{u}_i(x,t)}{\dot{\bm{u}}_j(x,t)}$ and collect these entries in $\bs{\Phi}(t)=[\phi_{ij}(t)]\in \mathbb{R}^{r\times r}$. It is easy to see that $\bs{\Phi}(t)$ must be a skew-symmetric matrix in order to satisfy Equation \ref{eq:orthodot}, i.e., $\phi_{ji}(t) = -\phi_{ij}(t)$.
Incorporating this constraint leads to the following unconstrained optimization functional: \begin{align}\label{Eqn:min_princ} \mathcal{G}(\dot{\bm{U}}(x,t), \dot{\bm{Y}}(t),\lambda(t)) &=\left\Vert \pfrac{(\bm{U}(x,t) \bm{Y}(t)^T)}{t} - \mathcal{L}\left(\bm{U}(x,t)\right) \bm{Y}(t)^T-\bm{F}'(x,t;\bs{\alpha}) \right\Vert_F^{2}\\ \nonumber &+ \sum_{i,j=1}^r \lambda_{ij}(t) \big( \inner{\bm{u}_i(x,t)}{\dot{\bm{u}}_j(x,t)} - \phi_{ij}(t) \big), \end{align} where $\lambda(t) = [\lambda_{ij}(t)] \in \mathbb{R}^{r \times r}$ is the matrix of Lagrange multipliers. To derive the optimality conditions, we follow a procedure similar to the one recently presented in \cite{B19}. In Appendix \ref{app:A}, we show that minimizing the above functional with respect to $\dot{\bm{U}}(x,t)$ and $\dot{\bm{Y}}(t)$ leads to closed-form evolution equations for the modes and the corresponding sensitivity coefficients: \begin{align}\label{eqn:evol_modes} \pfrac{\bm{u}_i(x,t)}{t} &= \mathcal{L}\left(\bm{u}_i\right) - \inner{\bm{u}_j}{\mathcal{L}\left(\bm{u}_i\right)}\bm{u}_j+ \big[ \bm{F}'\bm{y}_k - \inner{\bm{u}_j}{\bm{F}'\bm{y}_k}\bm{u}_j\big] C_{ik}^{-1} - \phi_{ij}\bm{u}_j, \end{align} \begin{align}\label{eqn:evol_coeff} \frac{d\bm{y}_i(t)}{dt} &= \inner{\bm{u}_i}{\mathcal{L}(\bm{u}_j)} \bm{y}_j + \inner{\bm{F}'}{\bm{u}_i} - \phi_{ij}\bm{y}_j, \end{align} where $\bm{C}(t)=[C_{ik}(t)] \in \mathbb{R}^{r\times r}$ is the low-rank \emph{correlation} matrix, in which $C_{ik}(t) = \bm{y}_i(t)^T \bm{y}_k(t)$. These equations are initialized by solving Equation \ref{eqn:sensitivity} for a single time step and computing the singular value decomposition (SVD) of $\bm{V}'(x,t=\Delta t)$, such that $\bm{U}(x,t=\Delta t)$ contains the first $r$ left singular vectors and $\bm{Y}(t=\Delta t)$ is the product of the first $r$ right singular vectors and the corresponding singular values; see Section \ref{sec:approx_error}.
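To illustrate Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff}, the following sketch integrates their discrete analogues for a small toy linear system with constant, randomly generated matrices (hypothetical data of ours, forward Euler, $\phi_{ij}=0$) and compares $\bm{U}\bm{Y}^T$ against a direct solve of the full sensitivity system:

```python
import numpy as np

# Discrete f-OTD sketch (phi_ij = 0, forward Euler) for the toy system
# dV'/dt = L V' + F with constant, randomly generated L and F.  With
# r = d and an exact SVD initialization, U Y^T should track V' closely.
rng = np.random.default_rng(1)
n, d = 8, 3
L = 0.5 * rng.standard_normal((n, n))
F = rng.standard_normal((n, d))

V = rng.standard_normal((n, d))                   # full sensitivities V'(0)
u, s, vt = np.linalg.svd(V, full_matrices=False)
U, Y = u, vt.T * s                                # rank-d initialization: V = U Y^T

dt = 1e-4
for _ in range(2000):
    Vn = V + dt * (L @ V + F)                     # full dynamics
    Lr = U.T @ (L @ U)                            # low-rank operator <U, L(U)>
    C = Y.T @ Y                                   # correlation matrix
    G = L @ U + np.linalg.solve(C, (F @ Y).T).T   # L U + F' Y C^{-1}
    Un = U + dt * (G - U @ (U.T @ G))             # project onto complement of span(U)
    Yn = Y + dt * (Y @ Lr.T + F.T @ U)
    V, U, Y = Vn, Un, Yn

rel_err = np.linalg.norm(V - U @ Y.T) / np.linalg.norm(V)
assert rel_err < 1e-2
```

The small residual is purely the mismatch between the two explicit time discretizations; in exact arithmetic the rank-$d$ reduction reproduces the full dynamics (see the exactness result below in the text).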
We show in Section \ref{Sec:Eq} that the skew-symmetric matrix $\phi_{ij}$ can be taken to be zero, i.e., $\phi_{ij}=0$. In the following, we make several observations about Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff}: (i) Equation \ref{eqn:evol_modes} determines the evolution of the f-OTD subspace. For $\phi_{ij}=0$, the right-hand side of Equation \ref{eqn:evol_modes} is equal to the projection of $\mathcal{L}\left(\bm{U}\right) + \bm{F}'\bm{Y}\bm{C}^{-1}$ onto the orthogonal complement of the space spanned by $\bm{U}$. Therefore, if $\mathcal{L}\left(\bm{U}\right) + \bm{F}'\bm{Y}\bm{C}^{-1}$ is in the span of $\bm{U}$, the f-OTD subspace does not evolve, i.e., $\dot{\bm{U}}=\bm{0}$. However, when $\mathcal{L}\left(\bm{U}\right) + \bm{F}'\bm{Y}\bm{C}^{-1}$ is not in the span of $\bm{U}$, the f-OTD subspace evolves optimally to follow the right-hand side. Equation \ref{eqn:evol_coeff} is the f-OTD reduced order model (ROM) that determines the evolution of the sensitivities within the f-OTD subspace. (ii) We observe that if we set $\bm{F}'(x,t)=\bm{0}$ in the above equations, we recover the OTD evolution equations presented in \cite{Babaee_PRSA}. However, unlike the OTD equations, where the evolution of the OTD modes is independent of the evolution of the coefficients ($\bm{Y}$), there is a two-way nonlinear coupling between the f-OTD evolution equations for $\bm{U}$ and $\bm{Y}$. (iii) From the above equations, it is clear that f-OTD extracts the low-rank approximation directly from the sensitivity evolution equation. In that sense, it is different from data-driven low-rank approximations such as proper orthogonal decomposition \cite{A91,ABGA15,PTB20} or dynamic mode decomposition \cite{S10,LKB18}, in which the low-rank subspace is extracted from preexisting data. The need to generate data simply does not exist in the f-OTD workflow.
(iv) The computational cost of solving the f-OTD Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} is roughly equivalent to that of solving $r$ forward sensitivity equations. This is because the evolution of the f-OTD modes described by Equation \ref{eqn:evol_modes} inherits the same differential operators as the sensitivity equation, and the cost of evolving each f-OTD mode is therefore roughly that of solving the sensitivity equation for a single parameter. Equation \ref{eqn:evol_coeff} is an ODE, and its computational cost is negligible compared to that of the f-OTD modes, which are governed by PDEs. \subsection{Equivalence}\label{Sec:Eq} It is important to note that the choice of $\phi_{ij}$ in Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} is not unique, and any skew-symmetric matrix yields an equivalent reduction. Similar to the OTD equations \cite{Babaee_PRSA}, we choose $\phi_{ij}=0$, which corresponds to the dynamically orthogonal (DO) condition. This property is summarized in the theorem below. \begin{theorem} Let $\{\bm{U}(x,t),\bm{Y}(t)\}$ and $\{\tilde{\bm{U}}(x,t),\tilde{\bm{Y}}(t)\}$ represent two reductions that satisfy Equations \ref{eqn:evol_modes} and \ref{eqn:evol_coeff} with corresponding skew-symmetric matrices $\bs{\Phi}(t)$ and $\tilde{\bs{\Phi}}(t)$, respectively. If the reductions are equivalent at $t=0$, i.e., they are initially related by an orthogonal rotation matrix $\bm{R}_0\in\mathbb{R}^{r\times r}$ as $\bm{U}(x,0)=\tilde{\bm{U}}(x,0)\bm{R}_0$ and $\bm{Y}(0)=\tilde{\bm{Y}}(0)\bm{R}_0$, then the two reductions will remain equivalent for $t>0$, with the rotation matrix $\bm{R}(t)$ governed by $\dot{\bm{R}}=\bm{R}\bs{\Phi}-\tilde{\bs{\Phi}}\bm{R}$. \end{theorem} For a proof of the above theorem, see Appendix \ref{app:B}. \subsection{Exactness of f-OTD} For the case where the full sensitivity quasimatrix is of rank $d$, the rank-$d$ f-OTD equations are exact.
To show this, we start by considering an arbitrary perturbation subspace, $\bm{V}'(x,t)\in\mathbb{R}^{\infty\times d}$, governed by the quasimatrix form of Equation \ref{eqn:sensitivity}: \begin{align*} \pfrac{\bm{V}'}{t} = \mathcal{L}(\bm{V}') + \bm{F}'(x,t), \quad \bm{V}'(x,0) = \bm{V}'_0(x), \end{align*} where the columns of $\bm{V}'(x,t)$ are mutually orthogonal, i.e., $\inner{\bm{v}_i'}{\bm{v}_j'}=0$ for $i\neq j$, and the evolution of an orthonormal subspace, $\bm{U}(x,t)\in\mathbb{R}^{\infty\times d}$, governed by the quasimatrix form of Equation \ref{eqn:evol_modes}: \begin{align*} \pfrac{\bm{U}}{t} = \mathcal{L}(\bm{U}) - \bm{U}\bm{L}_r(t) + \left( \bm{F}'\bm{Y} - \bm{U}\inner{\bm{U}}{\bm{F}'\bm{Y}} \right)\bm{C}^{-1}, \quad \bm{U}(x,0) = \bm{U}_0(x). \end{align*} The corresponding matrix of sensitivity coefficients is governed by the matrix form of Equation \ref{eqn:evol_coeff}: \begin{align*} \frac{ d\bm{Y}}{dt} = \bm{Y}\bm{L}_r^T + \inner{\bm{F}'}{\bm{U}}, \quad \bm{Y}(0)=\bm{Y}_0, \end{align*} where $\bm{L}_r(t) = \inner{\bm{U}(x,t)}{\mathcal{L}(\bm{U}(x,t))}$ is the $r\times r$ low-rank linear operator. We can show that if the two subspaces are initially equivalent, i.e., $\bm{U}_0(x)$ can be mapped to $\bm{V}'_0(x)$ via the linear transformation $\bm{Y}_0^T$, then $\bm{V}'(x,t)$ and $\bm{U}(x,t)$ remain equivalent for all time $t$ and are related by the linear transformation $\bm{Y}(t)^T$. This leads to the following theorem: \begin{theorem} Let $\bm{V}'(x,t) \in \mathbb{R}^{\infty\times d}$ be an arbitrary subspace evolved by the linear dynamics of Equation \ref{eqn:sensitivity}, and $\bm{U}(x,t)\in \mathbb{R}^{\infty\times d}$ be an orthonormal subspace evolved by Equation \ref{eqn:evol_modes}. If initially $\bm{V}'_0(x)$ and $\bm{U}_0(x)$ are equivalent, i.e.
$\bm{V}'_0(x)=\bm{U}_0(x)\bm{Y}_0^T$, then the perturbation subspace can be exactly determined via the linear transformation $\bm{V}'(x,t)=\bm{U}(x,t)\bm{Y}(t)^T$ for all time $t$, where $\bm{Y}(t)$ is governed by Equation \ref{eqn:evol_coeff}. \end{theorem} For a detailed proof of the theorem, refer to Appendix \ref{app:C}. \subsection{Approximation error}\label{sec:approx_error} The approximation error of estimating sensitivities using f-OTD can be expressed as $e(t) = \| \bm{V}'(x,t) - \bm{U}(x,t)\bm{Y}(t)^T \|_F$. This error can be analyzed and better understood by considering two types of error: (i) the resolved error, denoted by $e_r(t)$, and (ii) the unresolved error, denoted by $e_u(t)$. The resolved error is the discrepancy between the rank-$r$ f-OTD approximation of the sensitivities and the optimal rank-$r$ approximation: $e_r(t) = \| \bm{U}(x,t)\bm{Y}(t)^T - \tilde{\bm{U}}(x,t)\tilde{\bm{Y}}(t)^T \|_F$, where $\tilde{\bm{U}}(x,t) \in \mathbb{R}^{\infty \times r}$ and $\tilde{\bm{Y}}(t) \in \mathbb{R}^{d \times r}$ are the optimal rank-$r$ orthonormal modes and their coefficients, respectively. The unresolved error is the error of the optimal rank-$r$ approximation: $e_u(t)=\| \tilde{\bm{U}}(x,t) \tilde{\bm{Y}}(t)^T - \bm{V}'(x,t) \|_F$, which is a direct result of truncating the $d-r$ least energetic modes. Thus, the optimal rank-$r$ approximation is obtained by minimizing: \begin{equation}\label{eq:functional_opt} \mathcal{E}_u(\tilde{\bm{U}}(x,t), \tilde{\bm{Y}}(t)) = \left\Vert \tilde{\bm{U}}(x,t) \tilde{\bm{Y}}(t)^T - \bm{V}'(x,t) \right\Vert_{F}, \end{equation} subject to the orthonormality condition of the $\tilde{\bm{U}}(x,t)$ modes.
The optimal decomposition can be obtained by performing an instantaneous SVD of the sensitivity matrix, where $\tilde{\bm{U}}(x,t)$ is the matrix of the $r$ most dominant left singular vectors of $\bm{V}'(x,t)$ and $\tilde{\bm{Y}}(t) = \tilde{\bm{Z}}(t)\tilde{\bm{\Sigma}}(t)$, where $ \tilde{\bm{Z}}(t) \in \mathbb{R}^{d \times r}$ and $ \tilde{\bm{\Sigma}}(t) =\mbox{diag}(\tilde{\sigma}_1(t),\tilde{\sigma}_2(t), \dots, \tilde{\sigma}_r(t))$ are the matrix of the $r$ most dominant right singular vectors and the matrix of singular values, respectively. It is straightforward to show that: $e_u(t) = (\sum_{i=r+1}^d \tilde{\sigma}_i^2(t))^{1/2}$. The error $e_u(t)$ represents the minimum error that any rank-$r$ approximation can achieve, and therefore, it amounts to a lower bound for the f-OTD error: $e(t) \geq e_u(t)$. On the other hand, as with any reduced order model of a time-dependent system, the unresolved subspace induces a \emph{memory error} in the f-OTD approximation. This means that the unresolved error \emph{drives} the resolved error $e_r(t)$, and under appropriate conditions, it has been shown for similar time-dependent basis low-rank approximations that $e_r(t)$ can be bounded by $e_r(t) \leq c_1 e^{c_2t}\int_{t_0}^t e_u(s)ds$ for $c_1,c_2>0$ \cite{KL07}. The interplay between $e_u(t)$ and $e_r(t)$ can be more rigorously studied within the Mori-Zwanzig formalism \cite{CHK02}. These error estimates can guide an adaptive f-OTD, in which modes are added or removed to maintain the error below some threshold value \cite{Babaee:2017aa}; however, these aspects are beyond the scope of this paper. Since sensitivities can be either very small or very large, with errors following the same trend, we report relative error percentages: \begin{equation}\label{eq:error} \text{\% Error} = \frac{e(t)}{\left\Vert\bm{V}'(x,t)\right\Vert_{F}} \times 100.
\end{equation} Similar quantities are computed for $e_u(t)$ and $e_r(t)$. \subsection{Mode ranking} In this section we present a procedure to rank the f-OTD modes and their coefficients according to their significance. To this end, we start by considering the reduced correlation matrix $\bm{C}(t)$, which is in general a full matrix. This implies that the sensitivity coefficients are correlated and there exists a linear mapping from the correlated coefficients, $\bm{Y}(t)$, to the uncorrelated coefficients, $\hat{\bm{Y}}(t)\bs{\Sigma}(t)$, where $\hat{\bm{Y}}(t)$ are the orthonormal coefficients and $\bs{\Sigma}(t) = \mbox{diag}(\sigma_1(t),\sigma_2(t),\dots,\sigma_r(t))$ is a diagonal matrix of singular values. To find such a mapping, we consider the eigen-decomposition of $\bm{C}(t)$ as follows: \begin{equation} \bm{C}(t)\bm{R}(t)=\bm{R}(t)\boldsymbol{\Lambda}(t), \end{equation} where $\bm{R}(t) \in \mathbb{R}^{r \times r}$ is a matrix whose columns contain the eigenvectors of $\bm{C}(t)$ and $\boldsymbol{\Lambda}(t)$ = diag($\lambda_1(t),\lambda_2(t),\dots,\lambda_r(t)$) is a diagonal matrix containing the eigenvalues of $\bm{C}(t)$. Since $\bm{C}(t)$ is a symmetric positive matrix, the matrix $\bm{R}(t)$ is an orthonormal matrix, i.e. $\bm{R}(t)^T \bm{R}(t) = \bm{I}$, and the eigenvalues are all non-negative and can be sorted as: $\lambda_1(t) > \lambda_2(t) > \dots > \lambda_r(t) \geq 0$. It is also straightforward to show that the singular values of the f-OTD low-rank approximation are $\sigma_i(t) = \lambda_i(t)^{1/2}$, for $i=1,2, \dots, r$. The ranked f-OTD components can be defined as: \begin{equation*} \hat{\bm{Y}}(t) = \bm{Y}(t)\bm{R}(t)\bs{\Sigma}^{-1}(t), \quad \quad \hat{\bm{U}}(x,t) = \bm{U}(x,t)\bm{R}(t), \end{equation*} where the columns of $\hat{\bm{Y}}(t)$ and $\hat{\bm{U}}(x,t)$ are ranked by energy ($\sigma_i^2$) in descending order. 
We shall refer to \{$\hat{\bm{Y}}(t), \bs{\Sigma}(t),\hat{\bm{U}}(x,t)$\} as the bi-orthonormal form of the reduction. Since the above transformation is simply an in-subspace rotation, \{$\hat{\bm{Y}}(t)\bs{\Sigma}(t),\hat{\bm{U}}(x,t)$\} and \{$\bm{Y}(t),\bm{U}(x,t)$\} yield equivalent low-rank approximations of the full-dimensional dynamics. This is easily verified by considering the bi-orthonormal form of the low-rank approximation, $\hat{\bm{U}}(x,t)\bs{\Sigma}(t)\hat{\bm{Y}}(t)^T$ $= \bm{U}(x,t)\bm{Y}(t)^T$, where we have made use of the identity $\bm{R}(t) \bm{R}(t)^T = \bm{I}$. We refer to $\hat{\bm{Y}}$ as the \emph{hidden} parametric space, since each column of the matrix $\hat{\bm{Y}}$ can be taken as a new ranked parameter that represents the contribution of all parameters ($\bs{\alpha}$). In the following sections, all figures will be presented in bi-orthonormal form.
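The ranking step reduces to elementary linear algebra, and can be sketched numerically as follows (random toy data of ours; this only checks the identities $\sigma_i = \lambda_i^{1/2}$ and the invariance of the reconstruction under the in-subspace rotation):

```python
import numpy as np

# Mode ranking via the eigendecomposition of C = Y^T Y (toy data):
# the eigenvalues recover the singular values of U Y^T, and the rotated
# (bi-orthonormal) form reconstructs the same low-rank approximation.
rng = np.random.default_rng(2)
n, d, r = 50, 10, 4
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal modes
Y = rng.standard_normal((d, r))                    # coefficients

C = Y.T @ Y
lam, R = np.linalg.eigh(C)                         # ascending eigenvalues
lam, R = lam[::-1], R[:, ::-1]                     # sort by energy, descending
sigma = np.sqrt(lam)

# sigma_i = lambda_i^{1/2} are the singular values of U Y^T
sv = np.linalg.svd(U @ Y.T, compute_uv=False)
assert np.allclose(sigma, sv[:r])

# The bi-orthonormal form reconstructs the same approximation
U_hat = U @ R
Y_hat = Y @ R / sigma                              # Y R Sigma^{-1}
assert np.allclose((U_hat * sigma) @ Y_hat.T, U @ Y.T)
```

Note that since $\bm{U}$ is orthonormal, the singular values of $\bm{U}\bm{Y}^T$ coincide with those of $\bm{Y}^T$, which is why the small $r\times r$ eigenproblem suffices.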
{ "timestamp": "2020-12-29T02:22:21", "yymm": "2012", "arxiv_id": "2012.14028", "language": "en", "url": "https://arxiv.org/abs/2012.14028" }
\section{Introduction} The field of network neuroscience has made substantial advances in characterizing the human brain network (or connectome, which models the pairwise relationships between brain regions of interest (ROIs)) by means of large-scale connectomic datasets collected through various projects such as the Human Connectome Project (HCP) \cite{HCP} and Connectome Related to Human Disease (CRHD) \cite{CRHD}. These rich and multimodal brain datasets allow us to map brain connectivity and efficiently detect atypical deviations from the healthy brain connectome. In particular, learning how to \emph{normalize} a population of brain networks by estimating a \emph{well-centered} and \emph{representative} connectional brain template (CBT) is an essential step for group comparison studies as well as for discovering the integral signature of neurological disorders \cite{Dhifallah:2019}. Intuitively, a CBT can be defined as a `normalized connectome' of a population of brain networks, which can be produced simply by linear averaging. However, such a normalization technique is very sensitive to outliers and overlooks non-linear relationships between subjects. This normalization process can be regarded as an `integration' or `standardization' of brain networks. More broadly, estimating a CBT of a population of heterogeneous multi-view brain networks, where each view captures particular traits of the brain construct (e.g., cortical thickness, function, cognition), is an even more challenging task, since such connectomic data might lie on complex high-dimensional manifolds. To address this challenge, \cite{Dhifallah:2019} proposed a clustering-based approach built on similarity network fusion (SNF) \cite{SNF} to fuse multi-view brain networks in each cluster. Fused networks are then linearly averaged to produce a CBT for a population of multi-view brain networks (MVBN). Despite its promising results, \cite{Dhifallah:2019} heavily depends on the selection of the number of clusters.
In order to overcome this problem, \cite{Dhifallah:2020} introduced the netNorm framework, which, instead of clustering, constructs a high-order graph using cross-view connectional features as nodes and their Euclidean distance as a dissimilarity measure to select the most centered brain connections in a population of MVBN. Next, the selected connections are integrated into a single network using SNF \cite{SNF}. netNorm is currently the state-of-the-art method, outperforming SCA \cite{Dhifallah:2019} and other baseline methods in the CBT estimation task. However, netNorm \cite{Dhifallah:2020} has several limitations. \emph{First}, it uses the Euclidean distance as a \emph{pre-defined metric} for selecting the most representative brain connections, which might fail to capture complex non-linear patterns in the brain connectome across subjects. \emph{Second}, netNorm also uses SNF for fusing different views. Even though SNF is a powerful, generic unsupervised technique, it comes with built-in assumptions, such as emphasizing the top $k$ local connections for each node (i.e., brain region) and equally averaging the global topology of complementary networks at each iterative update to ultimately merge them. Instead of relying on such general assumptions, this MVBN normalization process can instead be \emph{learned}, deciding which information provided by the networks is important for the target CBT estimation. \emph{Third}, netNorm consists of \emph{independent} feature extraction, feature selection, and fusion steps. These fully independent steps cannot provide feedback to each other in order to globally optimize the CBT estimation process; therefore, errors might accumulate throughout the estimation pipeline.
To address all these limitations, we propose \textbf{Deep Graph Normalizer (DGN)}: an unprecedented approach capitalizing on geometric deep learning (GDL) to learn how to normalize a population of heterogeneous MVBNs and generate a well-representative and centered CBT in an \emph{end-to-end} fashion. Although GDL has achieved remarkable results in several recent biomedical data analysis works, such as disease classification \cite{classification} and protein interface prediction \cite{protein}, to the best of our knowledge, no previous work has used GDL to address the problem of integrating a population of multi-view networks \cite{survey1,survey2}. To fill this gap, we present several major contributions to the state-of-the-art as follows. \emph{First}, we design a GDL architecture that maps the MVBN of a training subject to a normalized, population-representative CBT. The brain networks of each training subject pass through several consecutively applied graph convolutional neural network layers that learn hidden embeddings for each node (i.e., brain ROI) by locally integrating the connectivities offered by the different heterogeneous views and blending the previous layer's embeddings using the integrated connectivities. Next, we compute the pairwise absolute difference of the final layer's node embeddings to derive the connectivity weights of the generated CBT. \emph{Second}, we introduce the Subject Normalization Loss (SNL), a randomized weighted loss function that evaluates the representativeness of a generated \emph{subject-biased} CBT (i.e., obtained by feeding a particular subject to the model) against a random subset of brain networks in the training set to achieve \emph{subject-to-population mapping}.
Besides forcing the model to learn how to generate population-representative CBTs by fusing the complementary information supplied by MVBNs, SNL also acts as a regularizer, owing to the randomization and the different weights assigned to each view according to its connectivity weight distribution. \emph{Third}, the finalized CBT can be obtained by feeding an arbitrary subject of the training population to the trained model, since the model learns how to map any subject to a population-representative CBT thanks to SNL optimization. However, the choice of the subject biases the output CBT and leads to non-optimal performance. We introduce a post-training step to overcome this bias and further refine the finalized CBT. \begin{figure}[ht!]
\centering {\includegraphics[width=12.5cm]{figure1}} \caption{\emph{Proposed Deep Graph Normalizer (DGN) architecture for estimating connectional brain templates for a given population of multi-view brain networks.} \textbf{(A) Tensor representation of multi-view brain networks.} Each subject $s$ is represented by $\mathcal{T}_{s} \in \mathbb{R}^{n_{r} \times n_{r} \times n_{v}}$, composed of a set of undirected, fully connected graphs, each capturing a single connectional feature. \textbf{(B) Geometric deep learning layers.} Our model includes a sequence of edge-conditioned \cite{edge-conv} graph convolutional neural network layers separated by ReLU non-linearities. Each layer learns deeper embeddings for the ROIs by utilizing the activations of the previous layer and the topological structure of the brain network. \textbf{(C) CBT generation and loss function.} The ROI embeddings output by the final layer are passed through a series of tensor operations to calculate the pairwise absolute difference of each pair of nodes for CBT construction. Next, the representativeness of the estimated CBT is evaluated against a random subset of training views for loss calculation. \textbf{(D) CBT refinement after training.} To select the most centered connections for final CBT generation, we first pass each training subject through the trained model to generate its corresponding CBT. Finally, we produce the final CBT by selecting the element-wise median of all training CBTs. } \label{fig:1} \end{figure} \section{Proposed Method} In this section, we detail the components of our DGN architecture for estimating CBTs (\textbf{Fig.}~\ref{fig:1}). First, each subject in a population of multi-view brain networks is represented by an undirected fully-connected graph where each node (i.e., brain ROI) is initialized with identity features (trivially set to 1) and each edge has $n_{v}$ attributes that correspond to the connectivity weights in the different network views.
We generate CBTs using a training set of MVBNs $T = \{ \mathbf{T}_1^1, \mathbf{T}_2^1, \dots, \mathbf{T}_i^v, \dots, \mathbf{T}_N^{n_v} \}$, where $\mathbf{T}_{i}^{v}$ denotes the $v^{th}$ brain view of subject $i$, and evaluate the representativeness on a testing set using 5-fold cross-validation. These training subjects pass through three GDL layers, each including an edge-conditioned filter learner (a shallow neural network). From layer to layer, deeper embeddings are learned for each ROI using edge-conditioned graph convolution \cite{edge-conv}, which aggregates the information passed by the ROI's neighbours while taking into consideration the multi-view attributes of its neighboring edges. We then use the ROI embeddings output by the final layer to produce the connectivity matrix of the generated CBT by calculating the pairwise absolute difference between the final embeddings of each ROI pair. This generated CBT is evaluated against a random subset of the training MVBNs for loss optimization. Once the DGN architecture is fully trained using this randomized training sample selection strategy, each training subject is fed to the trained model to produce a population-representative CBT that is \emph{biased by the given input subject}. Finally, we eliminate outlier connectivities due to subject bias by taking the element-wise median of all these CBTs. We detail these steps in what follows. \textbf{A- Tensor representation of multi-view brain networks}. Given a population of subjects, we represent each subject $s$ by a tensor $\mathcal{T}_{s} \in \mathbb{R}^{n_{r} \times n_{r} \times n_{v}}$ (\textbf{Fig.}~\ref{fig:1}--A) composed by stacking the connectivity matrices of $n_{v}$ brain networks, where each network has $n_{r}$ nodes (i.e., ROIs). Note that with this subject-specific brain tensor representation, an edge attribute vector $\mathbf{e}_{ij} \in \mathbb{R}^{n_v \times 1}$ connecting ROIs $i$ and $j$ encapsulates $n_{v}$ attributes.
We set the diagonal entries of the tensor to zero to eliminate self-loops (i.e., ROI self-connectivity). In addition to the multi-view edge attributes, our DGN architecture also takes a node attribute matrix $\mathbf{V}^{0} \in \mathbb{R}^{n_{r} \times d_{0}}$ as input, where $d_{0}$ denotes the number of initial attributes for each ROI. Since we have no predefined attributes for the brain ROIs, we set each entry of $\mathbf{V}^{0}$ to `$1$' (i.e., identity); by training our deep model to optimize the SNL function, we learn these node-specific implicit attribute representations purely from the input edge attributes. In each graph convolution layer and for each multi-view brain connection $\mathbf{e}_{ij}$ linking nodes $i$ and $j$, we generate an edge-specific weight matrix using an \emph{edge-conditioned filter learner} \cite{edge-conv}. These weight matrices are then multiplied by the ROI attributes to compute the next layer's ROI attributes; therefore, after the first convolution, each ROI has a distinct set of attributes even though they were identical at the start. \textbf{B- Geometric deep learning layers}. In this section, we detail the graph convolutional layers of our architecture, which map ROIs with identical attributes to high-dimensional distinctive representations by utilizing the edge features $\mathbf{e}_{ij}$ between ROIs. This is achieved through 3 graph convolutional network layers (\textbf{Fig.}~\ref{fig:1}--B) with the edge-conditioned convolution operation \cite{edge-conv}, separated by ReLU non-linearities. Each layer $l\in \{ 1, 2, ..., L \}$ includes an edge-conditioned filter learner neural network $F^{l} : \mathbb{R}^{n_v} \mapsto \mathbb{R}^{d_{l} \times d_{l - 1}}$ that dynamically generates edge-specific weights for filtering the message passing between ROIs $i$ and $j$ given the features of $\mathbf{e}_{ij}$.
This operation is defined as follows: \begin{gather*} \mathbf{v}_{i}^{l} = \mathbf{\Theta}^{l} \mathbf{v}_{i}^{l-1} + \frac{1}{\left |N(i)\right |} \left (\sum_{j \in N(i)} \ F^{l}(\mathbf{e}_{ij}; \mathbf{W}^{l}) \mathbf{v}^{l - 1}_{j} + \mathbf{b}^{l}\right ); \quad F^{l}(\mathbf{e}_{ij}; \mathbf{W}^{l}) = \mathbf{\Theta}_{ij} \end{gather*} where $\mathbf{v}_{i}^{l}$ is the embedding of ROI $i$ at layer $l$, $\mathbf{\Theta}^{l}$ is a learnable parameter, and $N(i)$ denotes the neighbours of ROI $i$. $\mathbf{b}^{l} \in \mathbb{R}^{d_{l}}$ denotes a network bias and $F^{l}$ is a neural network that maps $\mathbb{R}^{n_{v}}$ to $\mathbb{R}^{d_{l} \times d_{l - 1}}$ with weights $\mathbf{W}^{l}$. $\mathbf{\Theta}_{ij}$ represents the edge-specific weights dynamically generated by $F^{l}$. Note that $F^{l}$ can be any type of neural network and may vary in each layer depending on the characteristics and complexity of the edge weights. \textbf{C- CBT construction layer and subject normalization loss function}. After obtaining the output $\mathbf{V}^{L} = \left [ \mathbf{v}_1^L, \mathbf{v}_2^L, ..., \mathbf{v}_{n_{r} - 1}^L, \mathbf{v}_{n_{r}}^L \right ]^T$ of the final DGN layer, which consists of embeddings for each ROI, we compose the output CBT by computing the pairwise absolute difference of the learned embeddings. We formulate this process using several tensor operations (\textbf{Fig.}~\ref{fig:1}--C) for easy and efficient backpropagation. First, $\mathbf{V}^{L}$ is replicated horizontally $n_{r}$ times to obtain $\mathcal{R} \in \mathbb{R}^{n_{r} \times n_{r} \times d_{L}}$. Next, $\mathcal{R}$ is transposed (replacing all elements $\mathcal{R}_{xyz}$ with $\mathcal{R}_{yxz}$) to get $\mathcal{R}^{T}$. Last, we compute the element-wise absolute difference of $\mathcal{R}$ and $\mathcal{R}^{T}$. The resulting tensor is summed along the $z$-axis (i.e., the node embedding dimension) to estimate the final CBT $\mathbf{C} \in \mathbb{R}^{n_{r} \times n_{r}}$.
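The replicate/transpose/subtract/sum sequence above can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' implementation; broadcasting stands in for explicit replication):

```python
import numpy as np

def construct_cbt(V):
    """Build a CBT from final-layer ROI embeddings V of shape (n_r, d_L):
    entry (i, j) is the absolute difference |v_i - v_j| summed over the
    embedding dimension, mirroring the tensor operations described above."""
    R = V[:, None, :]                   # broadcast view of R (n_r, 1, d_L)
    Rt = V[None, :, :]                  # broadcast view of R^T (1, n_r, d_L)
    return np.abs(R - Rt).sum(axis=-1)  # (n_r, n_r), symmetric, zero diagonal
```

By construction the result is a symmetric non-negative matrix with a zero diagonal, i.e., a valid connectivity matrix.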
\emph{Subject Normalization Loss (SNL) optimization}. We evaluate the representativeness of the generated CBT using a random subset of the training subjects' views. This random selection procedure has two main advantages over evaluating against all training subjects. First, randomization has a regularization effect, since it is much easier for the model to overfit when the loss is calculated against the same set of subjects in each iteration. Second, the sample size can be fixed to a constant number so that the magnitude of the loss and the computation time are independent of the size of the training set. Note that since SNL compares the generated CBT with a subset of training subject views, the model weights are updated so that the generated CBT represents a population of MVBNs even though it is rooted in a single-subject input. Given the generated CBT $\mathbf{C}_{s}$ for subject $s$ and a random subset $S$ of training subject indices, we define the SNL for training subject $s$ and the optimization objective as follows: \begin{equation*} SNL_{s} = \sum_{v = 1}^{n_{v}} \sum_{i \in S} \lambda_{v} \left \| \mathbf{C}_{s} - \mathbf{T}_{i}^{v} \right \|_{F}; \quad \min\limits_{\mathbf{W}_1, \mathbf{b}_1 \dots \mathbf{W}_L, \mathbf{b}_L} \frac{1}{\left | T \right |} \sum_{s = 1}^{\left | T \right |} SNL_{s} \label{eq:1} \end{equation*} Here $\lambda_{v}$ is a view-specific normalization weight defined as: \begin{equation*} \lambda _{v} = \frac{\frac{1}{\mu_{v}}}{\max \left \{ \frac{1}{\mu_{j}} \right \}_{j = 1}^{n_{v}}} \end{equation*} where $\mu_{v}$ is the mean of the brain graph connectivity weights of view $v$ and $\max \left \{ \frac{1}{\mu_j} \right \}_{j = 1}^{n_{v}}$ is the maximum of the mean reciprocals $\frac{1}{\mu_{1}}, \dots, \frac{1}{\mu_{n_v}}$. We use this view-specific normalization weight because the brain network connectivity distribution and value range might vary largely across views.
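A minimal NumPy sketch of $\lambda_v$ and $SNL_s$ (ours, for illustration only; in the actual model the loss would be computed on autograd tensors for backpropagation):

```python
import numpy as np

def view_weights(tensors):
    """lambda_v = (1/mu_v) / max_j(1/mu_j), where mu_v is the mean
    connectivity weight of view v over the training population.
    tensors: (n_subjects, n_r, n_r, n_v)."""
    mu = tensors.mean(axis=(0, 1, 2))  # mean connectivity per view
    inv = 1.0 / mu
    return inv / inv.max()             # normalized so the max weight is 1

def snl(C, tensors, subset, lam):
    """Weighted Frobenius distance between the generated CBT C and every
    view of the randomly sampled training subjects in `subset`."""
    loss = 0.0
    for v in range(tensors.shape[-1]):
        for i in subset:
            loss += lam[v] * np.linalg.norm(C - tensors[i, :, :, v], ord="fro")
    return loss
```

The view with the smallest mean connectivity gets $\lambda_v = 1$, and views with larger value ranges are down-weighted proportionally.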
This helps avoid view-biased CBT estimation, where the trained model might overfit some views and overlook others. Related problems in the literature are addressed by normalizing the connectivity matrix. For example, SNF \cite{SNF} divides the connectivities in each row by the sum of the entries in that row to normalize measurements; however, this breaks the symmetry of the views and is therefore not applicable in our case. Other simple normalization approaches are also problematic: min-max scaling can saturate our inputs at 0 and 1, while standard z-score scaling generates negative connectivities, which are not suitable for representing fully positive brain connectomes (such as structural or morphological ones). Therefore, we introduce $\lambda _{v}$ to ensure that the model gives equal attention to each brain view regardless of its value range. \textbf{D- CBT refinement after the training}. Our model learns to map the multi-view brain networks of a particular subject $s$ to a population-representative CBT $\mathbf{C}_{s}$. Although all of the CBTs generated by the model are representative of the randomly sampled training population, they are biased towards the given input subject $s$. To eliminate this bias, we propose an additional step (\textbf{Fig.}~\ref{fig:1}--D) to obtain a more refined and representative CBT for the whole training set. First, each subject is fed through the trained model to obtain its corresponding CBT. Next, the most centered connections are selected from these CBTs by calculating the element-wise median. The median operation produces a valid view with a non-negative symmetric adjacency matrix. This operation could be replaced with other measures of central tendency; however, we used the representativeness score to verify that the median is the most suitable for our case.
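The post-training refinement step reduces to one tensor operation (a sketch under the assumption that the subject-biased CBTs have been stacked into one array):

```python
import numpy as np

def refine_cbt(subject_cbts):
    """subject_cbts: (n_subjects, n_r, n_r) stack of subject-biased CBTs,
    obtained by feeding every training subject through the trained model.
    The element-wise median keeps, for each ROI pair, the most centered
    connectivity and discards outliers caused by subject bias."""
    return np.median(subject_cbts, axis=0)
```

Since the median of symmetric non-negative matrices is itself symmetric and non-negative, the refined CBT remains a valid connectivity matrix.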
\section{Results and Discussion} \textbf{Connectomic datasets and model hyperparameter setting.} We benchmarked our DGN against the state-of-the-art CBT estimation method netNorm \cite{Dhifallah:2020} on a small-scale and a large-scale connectomic dataset using 5-fold cross-validation. The first dataset (AD/LMCI dataset) consists of 77 subjects (41 subjects diagnosed with Alzheimer's disease (AD) and 36 with Late Mild Cognitive Impairment (LMCI)) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) GO public database \cite{ADdataset}. Each subject is represented by 4 cortical morphological brain networks derived from the maximum principal curvature, mean cortical thickness, mean sulcal depth, and average curvature as in \cite{Raeper:2018,Lisowska:2019,Nebli:2019}. The second dataset (NC/ASD dataset) is collected from the Autism Brain Imaging Data Exchange (ABIDE I) public dataset \cite{ASDdataset} and includes 310 subjects (155 normal controls (NC) and 155 subjects with autism spectrum disorder (ASD)) with 6 cortical morphological brain networks extracted from the 4 aforementioned cortical measures in addition to the cortical surface area and minimum principal area. For each hemisphere, the cortical surface is reconstructed from T1-weighted MRI using the FreeSurfer pipeline \cite{FreeSurfer} and parcellated into 35 ROIs using the Desikan-Killiany atlas \cite{parc}, and the corresponding brain networks are derived by computing the pairwise absolute difference in cortical measurements between pairs of ROIs. We trained 8 different models to generate CBTs for both hemispheres of 4 populations, namely AD, LMCI, NC, and ASD. We empirically set all hyperparameters of the DGN models using grid search.
Each model includes 3 edge-conditioned convolutional neural network layers with an edge-conditioned filter learner neural network that maps the 4 (for the AD/LMCI dataset) or 6 (for the NC/ASD dataset) attributes obtained from the heterogeneous views to $\mathbb{R}^{d_{l} \times d_{l - 1}}$. These layers are separated by ReLU activation functions and output embeddings with 36, 24, and 5 (for the AD/LMCI dataset) or 8 (for the NC/ASD dataset) dimensions for each ROI in the MVBN, respectively. We trained all models by gradient descent using the Adam optimizer with a learning rate of $0.0005$. We fixed the number of random samples in our SNL function to $10$. \begin{figure}[htp!] \centering \includegraphics[width=12cm]{figure3} \caption{\emph{Representativeness comparison between CBTs generated by the proposed model and netNorm \cite{Dhifallah:2020}.} Charts illustrate the average Frobenius distance between the CBTs generated using the training set and the network views in the testing set. P-values obtained by a two-tailed t-test are also reported for each population. LH: left hemisphere. RH: right hemisphere.} \label{fig:3} \end{figure} \textbf{CBT representativeness test.} To evaluate the representativeness of the generated CBTs, we computed the mean Frobenius distance, $d_{F}(A,B) = \sqrt{\sum_{i} \sum_{j} \left | A_{ij} - B_{ij} \right |^{2} }$, between the estimated CBT and the different views in the testing set. We split both datasets into training and testing sets using 5-fold cross-validation for reproducibility and generalizability. \textbf{Fig.}~\ref{fig:3} depicts the average Frobenius distance between the CBTs generated by DGN and netNorm \cite{Dhifallah:2020} using the training set and the views in the left-out test population. Our proposed model significantly ($p < 0.001$) outperforms netNorm in terms of representativeness across all left-out folds and both evaluation datasets.
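The representativeness metric is a one-liner; a sketch (ours) makes the evaluation explicit:

```python
import numpy as np

def mean_frobenius_distance(cbt, test_views):
    """Average d_F(CBT, view) over the network views of the test fold,
    with d_F(A, B) = sqrt(sum_ij |A_ij - B_ij|^2)."""
    dists = [np.sqrt(np.sum(np.abs(cbt - v) ** 2)) for v in test_views]
    return float(np.mean(dists))
```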
\textbf{CBT discriminativeness reproducibility test.} We hypothesize that a well-representative CBT can capture the most discriminative traits of a population of MVBNs, acting as a connectional brain fingerprint. To test this hypothesis, we first identify the top $k$ most discriminative brain connectivities, where a class-A CBT differs the most from a class-B CBT. To do so, we compute the absolute difference between the CBTs estimated from populations A and B, respectively. Next, we sum the columns of this difference matrix to obtain a discriminability score for each brain ROI. We then pick the top $k$ ROIs with the highest scores. To evaluate the reproducibility of the CBT-based discriminative ROIs, for each brain view $v$, we independently train a support vector machine (SVM) with a supervised feature selection method. Specifically, we extract connectional features from each brain network view by vectorizing the upper-triangle entries. Next, for each network view, we use 5-fold randomized partitioning to divide each population $p^{A}$ and $p^{B}$ into 5 subpopulations. For each brain view $v$ and each combination of $p^{A}_{i}$ and $p^{B}_{j}$, an SVM is trained and a weight vector $\mathbf{w}_{ij}^{v}$ that scores the discriminativeness of each feature (i.e., ROI) is learned using Multiple Kernel Learning (MKL), a wrapper method assigning weights to features according to their distinctiveness for the given classification task. The final feature weight vector is computed by summing the weight vectors over all views and all possible A-B combinations of their 5 subpopulations as follows: $\mathbf{\omega} = \sum_{v = 1}^{n_{v}} \sum_{i,j = 1}^{5} \mathbf{w}_{i,j}^{v}$. Next, we anti-vectorize the $\mathbf{\omega}$ vector to obtain a matrix $ \mathbf{M} \in \mathbb{R}^{n_{r} \times n_{r}}$. By summing the columns of the resulting matrix, we get the ROI discriminability scores. Finally, we pick the top $k$ ROIs with the highest scores.
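The CBT-based side of this test (absolute difference, column sums, top-$k$ selection) can be sketched as follows; the MKL-based SVM branch is omitted here, and this sketch is ours rather than the authors' code:

```python
import numpy as np

def top_k_discriminative_rois(cbt_a, cbt_b, k):
    """Score each ROI by summing the columns of |CBT_A - CBT_B| and
    return the indices of the k highest-scoring ROIs."""
    scores = np.abs(cbt_a - cbt_b).sum(axis=0)  # one score per ROI
    return np.argsort(scores)[::-1][:k]         # descending order, top k
```

The overlap rate of Table 1 is then the fraction of these indices shared with the top-$k$ set selected by the MKL-based method.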
\textbf{Table}~\ref{tab:1} reports the overlap between the top $k=15$ ROIs identified using the CBT-based methods (netNorm and DGN) and the MKL-based SVM method. Remarkably, our proposed model not only generates more representative and centered CBTs but is also more reproducible in discriminability than netNorm \cite{Dhifallah:2020}. \textbf{Discovery of the most discriminative brain ROIs for each disordered population.} DGN also revealed the left insular cortex, left superior temporal sulcus (STS), and right frontal pole as the most discriminative regions of the ASD population, which resonates with existing findings on autism. \cite{insula} reports that the alteration of the left insula in the ASD population might be the cause of abnormalities in emotional and affective functions. Next, by comparing the activation of the STS in different social scenarios, \cite{STS} shows that the dysfunction of the STS is an essential factor in the social perception impairment in autism. For instance, in contrast to NC subjects, individuals with autism show hypoactivation in the STS when exposed to matched visual and auditory information. Furthermore, \cite{STS} demonstrates that healthy children have a greater response in the STS triggered by biological motion (e.g., human movement) than by non-biological motion (e.g., a clock), whereas the STS activation of children with autism does not differ significantly depending on the nature of the motion. Lastly, \cite{pole} shows that the faces of boys with ASD have an atypical right-dominant asymmetry and suggests that the asymmetric growth of the right frontal pole can explain this facial anomaly. As for the AD/LMCI dataset, DGN picked the left temporal pole (TP) and right entorhinal cortex (EC) as the most discriminative brain regions. \cite{TL} highlights that pathological changes in the TP are a common trait among all AD patients.
Moreover, \cite{EC} confirms that the alteration of the EC is a good biomarker of AD and LMCI and indicates that AD patients show greater atrophy in the right EC, which supports DGN's choice. \begin{table} \centering \begin{tabular}{c c c} \hline\noalign{\smallskip} Overlap Rate & netNorm \cite{Dhifallah:2020} & \bf{DGN} \\ \hline\noalign{\smallskip} AD-LMCI Left Hem. & 0.60 & \bf{0.73} \\ AD-LMCI Right Hem. & 0.33 & \bf{0.40} \\ \hline\noalign{\smallskip} \end{tabular} \begin{tabular}{c c c} \hline\noalign{\smallskip} Overlap Rate & netNorm \cite{Dhifallah:2020} & \bf{DGN} \\ \hline\noalign{\smallskip} NC-ASD Left Hem. & 0.53 & \bf{0.53} \\ NC-ASD Right Hem. & 0.33 & \bf{0.40} \\ \hline\noalign{\smallskip} \end{tabular} \caption{\emph{Overlap rate between the ROIs selected by the MKL-based and CBT-based methods.}} \label{tab:1} \end{table} \section{Conclusion} In this paper, we introduced Deep Graph Normalizer for estimating connectional brain templates for a given population of multi-view brain networks. Besides capturing non-linear patterns across subjects, the proposed method also learns how to fuse the complementary information offered by MVBNs in an end-to-end fashion. We showed that the proposed DGN outperforms the state-of-the-art method for estimating CBTs in terms of both representativeness and discriminability. In our future work, we will evaluate our architecture on multi-modal brain networks such as functional and structural brain networks while capitalizing on geometric deep learning for estimating \emph{holistic} CBTs. We will also introduce topological loss constraints, such as the Kullback-Leibler divergence between the node degree distributions of the generated CBTs and the population brain networks, to further ensure that the generated CBTs are topologically sound. \section{Acknowledgments} I.
Rekik is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Individual Fellowship grant agreement No 101003403 (\url{http://basira-lab.com/normnets/}).
\section{Introduction} Space-inversion symmetry breaking in a crystal structure gives rise to asymmetric spin-orbit coupling (ASOC), which often leads to unique superconducting properties beyond the conventional Bardeen-Cooper-Schrieffer (BCS) framework\cite{1,2}. For example, unconventional features have been experimentally reported in CePt$_3$Si\cite{3}, CeMSi$_3$ (M = Rh, Ir)\cite{4,5,6,7}, Li$_2$(Pd, Pt)$_3$B\cite{8,9,10,11}, and R$_2$C$_3$ (R = La, Y)\cite{12,13,14}. The transition-metal dichalcogenide-based layered compound PbTaSe$_2$ without inversion symmetry is a type-II superconductor with a superconducting transition temperature of $T_{\rm_c} \sim 3.8$ K\cite{15}. As shown in Fig. 1(a), the crystal structure of PbTaSe$_2$ is composed of 2H-TaSe$_2$ and intercalated Pb layers\cite{16}. The strong ASOC stemming from the heavy elements (Pb and Ta) lifts the spin degeneracy and can induce a parity-mixed superconducting state\cite{17,18}. Furthermore, PbTaSe$_2$ possesses topological electronic states such as nodal-line fermions and drumhead surface states\cite{19,20}, which have the potential to realize Dirac/Weyl superconducting states\cite{21} and topological superconductivity at the surface\cite{22}. Many experimental studies have been carried out to detect the unique superconducting state of PbTaSe$_2$. However, so far only signatures of conventional BCS-type $s$-wave superconductivity have been reported, including heat capacity\cite{23,24,25}, thermal conductivity\cite{26}, tunnel diode oscillator\cite{27}, STM\cite{28}, and $\mu$SR\cite{29} measurements. Microscopic evidence for conventional BCS-type $s$-wave superconductivity was also reported from $^{207}$Pb nuclear magnetic resonance (NMR) measurements\cite{30}, which were performed in the vortex state in an external magnetic field $\mu_{0}H = 0.19$ T close to $\mu_{0} H_{\rm_{c}2}^{c} (T=0) \approx 0.3$ T\cite{23,24,25,26}.
To elucidate the hidden electronic states and superconducting properties in zero magnetic field, it is also desirable to perform $^{181}$Ta NQR, which directly probes the local electronic states of the TaSe$_2$ layer. The band structure of PbTaSe$_2$ is characterized by multiple Fermi surfaces (FSs), as shown in Fig. 1(b): those around the $\Gamma$ point in $k$-space are dominated by Se-4$p$ and Ta-5$d$ orbitals, while those around the K and H points are dominated by Ta-5$d$ and Pb-4$p$ derived orbitals\cite{31}. A multiple-gap scenario related to multiband superconductivity in PbTaSe$_2$ has been proposed by several experiments and theory\cite{23,25,26,27,28,29,30,31}. Taking into account the possible difference in the hyperfine coupling of each FS with the nucleus, it is advantageous to select the Ta site for comparison with the Pb-site results to aid further understanding of the multiband superconductivity of PbTaSe$_2$. In this paper, we report a {\it pure} NQR study of the $^{181}$Ta nucleus in PbTaSe$_2$ at zero field, which is expected to be a sensitive probe of the local electronic bands of the multiple FSs of the TaSe$_2$ layer. We found two inequivalent Ta sites: the Ta(1) site is identified as the intrinsic site derived from the non-centrosymmetric structure with $P$\={6}$m2$ symmetry using $^{181}$Ta-NQR spectrum analysis combined with density functional theory (DFT) calculations, while the Ta(2) site is attributed to inevitable structural imperfections. The nuclear spin-lattice relaxation rate ($1/T_1$) at the intrinsic Ta(1) site exhibits an exponential decrease well below $T_{\rm_c}$, which indicates that the superconducting state of PbTaSe$_2$ is fully gapped. The temperature dependence of $1/T_1$ is reproduced by the superposition of {\it quadrupole} and {\it magnetic} relaxation mechanisms combined with a distribution of the gap size.
The gap parameter $2\Delta/k_{\rm_B}T_{\rm_c}$ obtained in this work is approximately 3.1, which is smaller than the values obtained in other experiments. This may suggest that the FSs dominantly composed of Ta-5$d$ orbitals are primarily probed by $^{181}$Ta-NQR. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth]{Fig1.eps} \caption[Structure]{\label{fig1}(Color online) (a) Crystal structure of PbTaSe$_2$ with space group $P$\={6}$m2$. (b) Fermi surfaces (FSs) obtained by the DFT calculation. The FSs at the $\Gamma$ point and the outer FS around the K and H points are mainly composed of Ta-5$d$ orbitals, while the inner cylindrical FS around the K and H points is mainly composed of Pb-4$p$ orbitals. (c) Powder X-ray diffraction spectrum of PbTaSe$_2$ at room temperature (upper panel), which is consistent with the diffraction pattern simulated for space group $P$\={6}$m2$ (lower panel). } \end{center} \end{figure} \section{Experiment and Calculation} Polycrystalline samples of PbTaSe$_2$ were synthesized via a solid-state reaction. Stoichiometric amounts of Pb (powder, 99.9\%), Ta (powder, 99.9\%), and Se (grain, 99.999\%) were placed in a silica tube, sealed under vacuum, and heated for $3 - 5$ days\cite{15,16}. The dc and ac susceptibilities were measured by a magnetic property measurement system (MPMS, Quantum Design) and an {\it in situ} NQR coil, respectively. The NQR measurements were performed using a conventional phase-coherent-type spectrometer. The $^{181}$Ta-NQR spectrum was obtained by sweeping the frequency and integrating the spin-echo intensity. The nuclear spin-lattice relaxation rate ($1/T_1$) was measured by the saturation-recovery method in a temperature range of $1.6-20$ K. The observed nuclear magnetization recovery curve was well fitted by $1 - M(t)/M(\infty) = 1/42\exp(-3 t/T_{1}) + 18/77\exp(-10t/T_{1}) + 49/66\exp(-21t/T_{1})$\cite{32}.
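The saturation-recovery fit above can be sketched numerically (an illustrative sketch with synthetic data, assuming SciPy is available; the coefficients $1/42 + 18/77 + 49/66 = 1$, so $M(t) \to M(\infty)$ at long times):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T1, M0):
    """Theoretical magnetization recovery for the +-1/2 <-> +-3/2 NQR
    transition of an I = 7/2 nucleus; the three coefficients sum to 1."""
    return M0 * (1.0 - ((1/42) * np.exp(-3 * t / T1)
                        + (18/77) * np.exp(-10 * t / T1)
                        + (49/66) * np.exp(-21 * t / T1)))

# sanity check: noiseless data generated with T1 = 0.5 s is refit exactly
t = np.linspace(1e-3, 3.0, 50)
m = recovery(t, 0.5, 1.0)
(T1_fit, M0_fit), _ = curve_fit(recovery, t, m, p0=[1.0, 1.0])
```

In practice each experimental recovery curve is fitted this way to extract $T_1$ at each temperature.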
To evaluate the NQR frequency theoretically, the electric field gradient at the Ta site was calculated using the all-electron full-potential linearized-augmented-plane-wave program package HiLAPW\cite{33}. Additionally, the FSs were calculated using the WIEN2k package\cite{34} and drawn with XCrySDen. These calculations were performed using the GGA-PBE exchange-correlation potential\cite{35} and the full-potential LAPW (FLAPW) basis set, taking the spin-orbit interaction into account. The lattice parameters and atomic positions were fixed at their experimental values to obtain the band structure. \section{Results and Discussion} Figure 1(c) shows the result of the powder X-ray diffraction measurement, which is consistent with the space group $P$\={6}$m2$. The lattice parameters were estimated to be $a = 3.4445(5)$ \AA\ and $c = 9.3798(18)$ \AA\ by Rietveld analysis. They are close to the values reported previously\cite{16,30,36}. Although impurity diffraction peaks corresponding to 2H-TaSe$_2$ were observed, the two-phase Rietveld analysis shows that the volume fraction of TaSe$_2$ is less than 5\%, meaning this small amount of impurity phase has no influence on the NQR results. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{./Fig2.eps} \caption[Structure]{\label{spectrum}(Color online) $^{181}$Ta-NQR spectrum of PbTaSe$_2$ at 4.2 K. The three resonance lines denoted as Ta(1) are reproduced by $\nu_{\rm_Q}(1) =58.7 \pm 0.5$ MHz and $\eta(1)=0.00\sim0.02$, corresponding to the intrinsic Ta site of PbTaSe$_2$ with $P$\={6}$m2$ symmetry. The other, broader lines denoted as Ta(2) are reproduced by $\nu_{\rm_Q} (2)=64.5 \pm 0.5$ MHz and $\eta(2)=0.03\sim0.04$, corresponding to an extrinsic Ta site slightly affected by in-plane disorder. } \end{center} \end{figure} Figure 2 shows the $^{181}$Ta-NQR spectrum of PbTaSe$_2$ at $T = 4.2$ K.
Six resonance peaks were observed at $f \approx 58.7, 64.8, 117.3, 128.9, 176.1$, and $193.5 \pm 0.2$ MHz. In general, the nuclear quadrupole interaction is described by the following Hamiltonian, \begin{equation} {\cal H}_{Q} = \frac{e^{2} q Q}{4I(2I-1)} \{3 I_{z}^{2} -I(I+1) +\eta(I_{x}^{2}-I_{y}^{2})\}, \end{equation} where $eQ$ is the nuclear quadrupole moment, $eq = V_{zz}$ is the electric field gradient (EFG) along the principal axis defined by the maximum EFG direction, and $\eta = |V_{\rm_{xx}}-V_{\rm_{yy}}|/V_{\rm_{zz}}$ is the asymmetry parameter of the EFG. Here the NQR frequency is defined as $\nu_{\rm_Q} = 3e^{2}qQ/[2I(2I-1)h]$. In the case of $^{181}$Ta (nuclear spin $I = 7/2$), the energy levels of the nuclear magnetic moment are split into four levels ($m = \pm1/2, \pm3/2, \pm5/2, \pm7/2$) by the nuclear quadrupole interaction, and thus three resonance peaks per Ta site should be observed in the $^{181}$Ta-NQR spectrum. Therefore, the observation of six NQR peaks indicates the presence of two inequivalent Ta sites in the sample, whereas a single Ta site per unit cell is expected for the ideal crystal structure of PbTaSe$_2$ with space group $P$\={6}$m2$. The three narrower peaks at $58.7, 117.3$, and $176.1 \pm 0.2$ MHz are reproduced for $\nu_{\rm_{Q}}(1) = 58.7\pm0.5$ MHz and $\eta(1) = 0.00\sim0.02$, denoted as the Ta(1) site, while the other three peaks with broad tails at $64.8, 128.9$, and $193.5 \pm 0.2$ MHz are reproduced for $\nu_{\rm_{Q}}(2)=64.5\pm0.5$ MHz and $\eta(2)=0.03\sim0.04$, denoted as the Ta(2) site. The $\eta(1)\sim0$ reflects the axial symmetry around the Ta atom, indicating that Ta(1) is the intrinsic Ta site of the $P$\={6}$m2$ structure, because the Ta site is located on the $C_3$ rotation axis along the $c$ axis in $P$\={6}$m2$.
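This peak assignment can be cross-checked numerically (our sketch, not part of the original analysis) by diagonalizing ${\cal H}_{Q}$ for $I = 7/2$: for $\eta = 0$ the three transition frequencies are exactly $\nu_{\rm Q}$, $2\nu_{\rm Q}$, and $3\nu_{\rm Q}$, i.e., $58.7$, $117.4$, and $176.1$ MHz for $\nu_{\rm Q} = 58.7$ MHz, matching the observed Ta(1) lines:

```python
import numpy as np

def spin_matrices(I):
    """Angular momentum matrices Ix, Iy, Iz for spin I (dimension 2I+1)."""
    dim = int(2 * I + 1)
    m = np.arange(I, -I - 1, -1)           # m = I, I-1, ..., -I
    Iz = np.diag(m)
    Ip = np.zeros((dim, dim))              # raising operator I+
    for k in range(1, dim):
        Ip[k - 1, k] = np.sqrt(I * (I + 1) - m[k] * (m[k] + 1))
    Im = Ip.T                              # lowering operator I-
    return (Ip + Im) / 2, (Ip - Im) / 2j, Iz

def nqr_frequencies(nu_Q, eta, I=3.5):
    """Transition frequencies between adjacent Kramers doublets of H_Q,
    using H_Q/h = (nu_Q/6)[3 Iz^2 - I(I+1) + eta (Ix^2 - Iy^2)]."""
    Ix, Iy, Iz = spin_matrices(I)
    dim = int(2 * I + 1)
    H = (nu_Q / 6) * (3 * Iz @ Iz - I * (I + 1) * np.eye(dim)
                      + eta * (Ix @ Ix - Iy @ Iy))
    levels = np.sort(np.linalg.eigvalsh(H))
    doublets = levels[::2]                 # each level is doubly degenerate
    return np.diff(doublets)
```

For small finite $\eta$ the same diagonalization yields the slightly shifted line positions used to model the Ta(2) site.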
In this context, the Ta(2) site with a finite value of $\eta(2)$ should be assigned to an impurity site, which arises from an impurity phase that was not observed in the X-ray spectrum. Note that the intensity of the Ta(2) signal, comparable to that of Ta(1), ensures that this impurity site does not arise from TaSe$_2$, whose amount is less than 5\%. \begin{table}[t] \begin{center} \caption[Structure]{Experimental and theoretical values of $\nu_{\rm_Q}$ and $\eta$. The experimental values of $\nu_{\rm_Q}$ for the two inequivalent Ta(1) and Ta(2) sites were determined by Lorentzian fitting, with their error bars defined as the FWHM. The values of $\eta$ were determined so as to reproduce the experimental resonance frequencies. \label{table2}} \begin{tabular}{ccc} &\qquad$\nu_{\rm_Q}$ (MHz) &\qquad$\eta$ \\ \hline\hline Experiment & &\\ Ta(1) site & \qquad$58.7\pm0.5$ & \qquad$0.00\sim0.02$\\ Ta(2) site & \qquad$64.5\pm0.5$ & \qquad$0.03\sim0.04$\\ \hline DFT calculation & &\\ $P$\={6}$m2$&\qquad57.78&\qquad0.00\\ $P6_{3}/mmc$&\qquad46.39&\qquad0.00\\ \end{tabular} \end{center} \end{table} For further verification, the values of $\nu_{\rm_Q}$ and $\eta$ for the Ta site with $P$\={6}$m2$ symmetry were simulated by DFT calculations. The calculated values, $\nu_{\rm_Q}=57.8$ MHz and $\eta = 0.0$ as shown in Table I, are very close to the experimental values assigned to the Ta(1) site. Consequently, we conclude that the Ta(1) site is derived from the non-centrosymmetric structure of PbTaSe$_2$ with $P$\={6}$m2$ symmetry. The calculation for $P6_{3}/mmc$, another possible structure in intercalated transition metal dichalcogenides\cite{16}, does not agree with the experimentally obtained $\nu_{\rm_Q}(2)$ for the Ta(2) site, as shown in Table I. The $1/T_1$ measured at Ta(2) shows a temperature dependence and absolute value similar to those at Ta(1), indicating that the electronic state and superconductivity in the impurity phase are close to those in the intrinsic one.
Pb defects were verified at the level of approximately 5\% in our sample by scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDS). Such defects can change the local electric field and break the in-plane symmetry, resulting in a finite $\eta$. Since one Pb defect may affect the six nearest-neighboring Ta sites, a defect concentration of $4 - 6\%$ can explain the comparable NQR intensities of the Ta(1) and Ta(2) sites. Thus, we attribute Ta(2) to Ta sites around Pb defects in the intrinsic $P$\={6}$m2$ structure. \begin{figure}[b] \begin{center} \includegraphics[width=0.9\linewidth]{./Fig3.eps} \caption[Structure]{\label{T1}(Color online) $T$ dependence of $1/T_1$ at the Ta(1) sites in PbTaSe$_2$. The upper panel shows the $T$ dependence of the AC and DC susceptibilities, which confirm $T_c \sim 3.8$ K. Above $T_c$, $1/T_1$ is proportional to $T$, indicating that PbTaSe$_2$ is a non-correlated metal. $1/T_1$ shows an exponential decrease below $T_c$, which is evidence for fully-gapped superconductivity. The superconducting gap size $2\Delta$ obtained from the Arrhenius plot of $T_1$ is $3.09 \pm 0.07 k_{\rm_B}T_{\rm_c}$ (inset).} \end{center} \end{figure} Next, we focus on the nuclear spin-lattice relaxation rate, $1/T_1$. Figure 3 shows the temperature dependence of $1/T_1$ for the Ta(1) site, which was measured at the NQR peak corresponding to the $m=\pm1/2 \leftrightarrow\pm3/2$ transition. The upper panel of this figure shows the $T$ dependence of the AC susceptibility measured using an {\it in situ} NQR coil (circles), together with that of the DC susceptibility measured by MPMS (solid line), indicating that bulk superconductivity takes place below $T_{\rm_{c}} \sim 3.8$ K, consistent with the values in previous reports\cite{15,19,20,23,24,25,26,27,28,29,30,36}. In the normal state ($T > T_{\rm_c}$), $1/T_1$ is proportional to temperature, as is generally seen in non-correlated normal metals.
Note that $^{181}$Ta-NQR is a good probe to reveal charge fluctuations through coupling with the local electric field gradient at the Ta site. In the case of Ta$_3$Pd$_4$Te$_{16}$, a $^{181}$Ta-NQR study revealed a large enhancement in $1/T_{1}T$ due to charge fluctuations derived from charge density wave (CDW) instability\cite{37}. In the case of PbTaSe$_2$, the $T$ dependence of $1/T_1$ without anomalies above $T_{\rm_c}$ suggests that charge fluctuations are negligible, although the parent layered compound TaSe$_2$ shows CDW order\cite{38,39}, and Pb$_x$TaSe$_2$ ($x = 0.25 - 0.75$) was expected to show charge fluctuations from Raman spectroscopy\cite{40}. \begin{figure}[t] \begin{center} \includegraphics[width=0.85\linewidth]{./Fig4.eps} \caption[Structure]{\label{iT1T}(Color online) $(T_{1}T)_{T=T_{\rm_c}}/(T_{1}T)$ {\it vs} $T/T_{\rm_c}$ probed by $^{181}$Ta-NQR for PbTaSe$_2$. Suppression of the coherence peak has been observed in $^{181}$Ta-NQR even in typical BCS superconductors like Ta$_3$Pd$_4$Te$_{16}$\cite{37} and Ta$_3$Sn\cite{41} due to the predominant nuclear quadrupole relaxation mechanism. The solid curve is a simulation for PbTaSe$_2$ using the BCS model, assuming the superposition of magnetic and quadrupole relaxation mechanisms in addition to a distribution of the superconducting gap (see text), which reproduces the experimental data well, in contrast to the simulation with only the magnetic relaxation mechanism (dashed curve). } \end{center} \end{figure} In the superconducting state ($T < T_{\rm_c}$), $1/T_1$ decreases exponentially with decreasing temperature, as seen in the Arrhenius plot of $T_1$ (see the inset of Fig. 3). We conclude that the superconducting state is dominated by the fully-gapped $s$-wave state in PbTaSe$_2$, which is consistent with previous results\cite{23,24,25,26,27,28,29,30,36}.
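The gap extraction from an Arrhenius plot can be sketched numerically: since $1/T_1 \propto \exp(-\Delta/k_{\rm B}T)$ well below $T_{\rm c}$, the slope of $\ln T_1$ versus $1/T$ yields $\Delta/k_{\rm B}$. The following is a hypothetical Python illustration with noise-free synthetic data; the prefactor and temperature range are our assumptions, not values from the paper.

```python
import numpy as np

# Below Tc, 1/T1 ~ exp(-Delta/(kB*T)), so ln(T1) = Delta/(kB*T) + const.
# Fitting ln(T1) against 1/T recovers Delta/kB from the slope.
Tc = 3.8                                 # K
delta_K = 3.09 * Tc / 2.0                # Delta/kB in kelvin, from 2*Delta = 3.09 kB*Tc

T = np.linspace(1.5, 3.0, 12)            # assumed temperatures well below Tc
T1 = 0.1 * np.exp(delta_K / T)           # synthetic T1 data (arbitrary prefactor)

slope, intercept = np.polyfit(1.0 / T, np.log(T1), 1)
print(2.0 * slope / Tc)                  # recovers 2*Delta/(kB*Tc) = 3.09
```

With real data the fit is restricted to the low-temperature region where the exponential form holds, and the scatter of the points sets the quoted uncertainty.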
The gap parameter $2\Delta$, estimated from the slope of the Arrhenius plot, is $\sim(3.09 \pm 0.07) k_{\rm_B}T_{\rm_c}$, which is slightly smaller than the value obtained in other experiments\cite{24,25,27,28,30}. Here we comment on the smaller superconducting gap size obtained by $^{181}$Ta-NQR at zero magnetic field compared to the $^{207}$Pb-NMR results\cite{30}, which cannot be attributed to vortex cores, as these are absent at zero field. It is worth mentioning that recent theoretical calculations suggested that the pairing interaction varies among different FSs\cite{31}. This leads to a distribution of superconducting gap sizes in PbTaSe$_2$, which was previously discussed in terms of a multiple- or two-gap structure in some experiments\cite{23,24,25,26,27,28,29,30}. According to this calculation, the gap size of the FSs dominated by Ta-5$d$ orbitals is relatively smaller than that of the FSs dominated by Pb-4$p$ orbitals\cite{31}. Thus, the smaller gap size in the present Ta-NQR may indicate that the superconducting gaps of the FSs composed of dominant Ta-5$d$ orbitals were primarily probed, assuming that the Ta-originated FSs are strongly coupled with the Ta nuclei. Further experiments are required to verify site-selective observation of specific superconducting gaps in multiband superconductors. Finally, we discuss the temperature dependence of $1/T_1$. It is well known that even in BCS $s$-wave superconductors like Ta$_3$Pd$_4$Te$_{16}$\cite{37} and Ta$_3$Sn\cite{41}, $1/T_1$ obtained by $^{181}$Ta-NQR drops abruptly without a coherence peak just below $T_{\rm_c}$, as shown in Fig. 4. This is because the quadrupole mechanism in the nuclear spin relaxation process is dominant due to the large nuclear quadrupole moment of the $^{181}$Ta nucleus. As shown in Fig.
4, the suppression of the coherence peak in PbTaSe$_2$ is less pronounced than in the other two Ta-based BCS superconductors, but prominent in comparison with the simulation curve of $s$-wave superconductivity with the gap size obtained from the Arrhenius plot (dashed curve in Fig. 4). This indicates that the quadrupole relaxation mechanism is minor in PbTaSe$_2$, although it affects the relaxation process to a certain degree. Furthermore, the temperature dependence of $1/T_1$ in PbTaSe$_2$ should be affected by the multiple-gap structure mentioned above, since a distribution of superconducting gap sizes also results in suppression of the coherence peak. Taking these points into account, we have fitted the temperature dependence of $1/T_1$ by varying the following parameters: the ratio of the quadrupole relaxation mechanism ($R_{\rm_Q}$) to the magnetic one ($R_{\rm_M}$), and the broadening of the quasiparticle density of states ($\delta$)\cite{42}. As shown in Figs. 3 and 4, the experimental results are reproduced well for $R_{\rm_Q}:R_{\rm_M}\sim2:8$ and $\delta/\Delta(0) \sim0.55$ (solid line in Figs. 3 and 4). The large broadening parameter $\delta/\Delta(0)$ is consistent with the intrinsic broadening of the gap energy proposed by the recent theoretical calculation, which reveals a distribution of pairing interactions depending on each FS\cite{31}. \section{Summary} In summary, we performed $^{181}$Ta-NQR measurements at zero magnetic field in the non-centrosymmetric superconductor PbTaSe$_2$. Two sets of resonance peaks were observed in the $^{181}$Ta-NQR spectrum, indicating two inequivalent Ta sites. The DFT calculation identified the intrinsic Ta site of the ideal $P$\={6}$m2$ structure, which enabled us to measure $T_1$ selectively at the intrinsic site. The exponential decrease in $1/T_1$ well below $T_{\rm_c}$ indicates a fully gapped superconducting state with superconducting gap size $2\Delta/(k_{\rm_B}T_{\rm_c}) = 3.09 \pm 0.07$.
The smaller gap size obtained by $^{181}$Ta-NQR compared to $^{207}$Pb-NMR may suggest that the superconducting gaps in the Fermi surfaces attributed to Ta-5$d$ orbitals are primarily probed, where the pairing interaction is calculated to be smaller than that for FSs attributed to Pb-4$p$ orbitals. The temperature dependence of $1/T_1$ can be well explained by taking into account the gap-size distribution proposed by theory\cite{31} in addition to the quadrupole relaxation mechanism. The {\it pure} $^{181}$Ta-NQR measurements in non-centrosymmetric PbTaSe$_2$ provide microscopic experimental evidence for the multiple superconducting gap properties inherent to multiband systems. \section*{Acknowledgements} We thank H. Usui for fruitful discussions. This work was partially supported by JSPS KAKENHI (Grant Nos. 16H04013, 18K18734, 16H06114, 18H04226, 16H06015 and 19H05173), JST PRESTO (No. JPMJPR16R2), the Murata Science Foundation, and the Mitsubishi Foundation. \section*{$*$Corresponding authors} yokoi@gmr.phys.sci.osaka-u.ac.jp \\ murakawa@phys.sci.osaka-u.ac.jp \\ mukuda@mp.es.osaka-u.ac.jp
\section{Introduction} \label{sec:intro} Despite the detection of more than 4000 exoplanets in the past 25 years, very little is known about most exoplanets aside from an orbital period and a radius or minimum mass. This is particularly frustrating for exoplanet populations such as warm and hot Jupiters, which have no analogue in our own solar system and therefore pose interesting questions for theories of planet formation and evolution. The formation of hot Jupiters has been an open topic since the first exoplanet was discovered around a Sun-like star \citep{mayor}. Recent theoretical work and analysis of multi-planet systems has raised the possibility that hot Jupiters may form in-situ \citep[e.g.][]{Batygin_2016}, contrasting with earlier work suggesting formation beyond the H$_2$O snow line followed by inward migration and tidal circularization \citep[e.g.][]{Lin1996}. Intermediate-temperature gaseous planets ($\rm T_{eq} \approx 1000$ K) are particularly interesting in this context, as the tidal circularization timescale for more widely-separated planets is often much longer than the age of the system \citep{correia_2010}. Furthermore, while nearly all known hot Jupiters (period $\rm P < 10$ days) lack close companions, half of warm Jupiters ($\rm 10 < P < 200$ days) have such companions, suggesting different evolutionary processes may be responsible for hot versus warm giant planets \citep{Huang_2016}. In recent years, a range of formation mechanisms have been proposed for warm Jupiters. Many of these theories involve planet-planet scattering interactions based on the high companion fraction and moderate eccentricities of known warm Jupiters \citep{Petrovich_2016, Masuda_2017, Anderson_2017, Anderson2020}, but it is not clear whether such interactions are responsible for delivering these planets to their observed location from beyond the snow line, or whether these planets initially form in-situ. 
It is also unclear how the hot and warm Jupiter populations are related. Detailed constraints on exoplanet atmospheres have the potential to resolve the ambiguous origins of both hot and warm Jupiters. The ratio of carbon to oxygen (C/O ratio) in gas versus solids in a static protoplanetary disk has been shown to vary consistently with orbital distance \citep{Oberg_2011}. This suggests the atmospheric C/O ratio of giant exoplanets may offer insight into where and how these planets form \citep{Madhusudhan_2014}, though accretion of solid material significantly complicates the relationship \citep[e.g.][]{espinoza_2017}. With sufficiently accurate models of accretion histories, atmospheric metallicity and C/O ratio could allow planets which formed beyond the water snow line and migrated to their present position to be distinguished from planets which initially formed in the inner disk. This would help answer the question of in-situ formation versus migration mechanisms and shed light on the possible difference in formation mechanisms for hot and warm Jupiters. Making these measurements requires the ability to simultaneously detect carbon and oxygen-bearing species in both stellar and exoplanetary atmospheres and estimate their relative abundances. Treating the star/planet system as a spectroscopic binary offers an avenue for planet characterization that is less dependent on orbital inclination compared with transit-based techniques. Hot Jupiters are sufficiently bright in the thermal infrared to be detected using high-resolution spectrographs and cross-correlation with a model planet spectrum, an approach referred to as high-resolution cross-correlation spectroscopy (HRCCS). 
In the past decade, this has been used to detect molecules including H$_2$O, CO, TiO, HCN, and CH$_4$ in both transiting and non-transiting hot Jupiter atmospheres \citep[e.g.][]{Snellen2010, Rodler2012, lockwood, Brogi2014, Birkby_2017, nugroho_2017, Hawker_2018, Guilluy_2019}, as well as provide rough constraints on the planetary C/O ratio \citep{Brogi_2017, Piskorz2018}. As these techniques yield a value for the target exoplanet's radial velocity semi-amplitude $\rm K_p$, these detections have the added benefit of breaking the mass/inclination degeneracy of RV-detected planets, giving the true planet mass. An important difference compared with other exoplanet characterization techniques is that the cross-correlation function uses all lines present in both the observed spectrum and a model spectrum to detect the planet. Placing constraints on the target atmosphere therefore requires assessing how the cross-correlation function varies as a result of changes in the model spectrum used to perform the correlation. For example, \citet{lockwood} uniquely identified the presence of H$_2$O in the atmosphere of $\tau$ Boo b by performing the cross-correlation with a set of models containing only a single molecule's spectrum, only making a detection when the H$_2$O model was used. \citet{Piskorz2018} extended this approach to attempt constraints on the C/O ratio, metallicity, and incident stellar flux in the atmosphere of KELT-2Ab by correlating with a series of planet models with different parameters. Two techniques for HRCCS have been developed. The ``1D" approach, described in \citet{Snellen2010}, is effective for very close-in planets whose projected orbital velocity changes significantly over a few (typically 5--7) hours of observation. Over short timescales, stellar lines remain fixed in wavelength, while planet lines shift due to orbital motion. 
A 1D cross-correlation with a planet spectral template can then be used to identify lines that shift over the course of the observation, effectively measuring the planet's radial acceleration, which yields the planet radial velocity semi-amplitude $\rm K_p$. As stellar lines are fixed in wavelength over the time series, the impact of such features on the planet detection is minimal, greatly simplifying the analysis procedures. An alternative approach, known as the ``2D" or ``multi-epoch" technique, was first described in \citet{lockwood}. Rather than taking a nearly continuous series of exposures over many hours, this approach takes shorter observations spread over several nights, during each of which the planet features are effectively fixed in wavelength. In this technique, the extreme contrast between stellar and planetary lines and fixed planetary velocity within an observation requires a 2D cross-correlation using both stellar and planetary model spectra to simultaneously identify the fixed stellar and planetary velocities. The cross-correlation surfaces for each night are then combined to determine a best-fit planetary velocity semi-amplitude. As a result, the 2D technique is sensitive to the total variation in the planetary radial velocity over the planetary orbital period, rather than only the change over a few hours of observation. This should enable the detection of slower-moving planets on longer orbital periods which are inaccessible to the 1D technique, as the planetary radial acceleration is a much stronger function of the semimajor axis than the planetary radial velocity. Although both HRCCS techniques have proven successful, the limitations of the 2D approach are not well understood. 
While the 1D technique is limited to close-in planets by the large radial acceleration required, the total orbital velocity change measured by the 2D technique is significantly larger than the resolution of existing instruments for circular orbits up to $\sim 4$ AU from a Sun-like star. This suggests that the detection limit for the 2D technique will be determined by the physical and chemical properties of the system, rather than instrument resolution. A detailed understanding of how various physical and chemical factors, such as photometric contrast, star/planet equilibrium temperatures, and planet atmospheric composition, affect detection could enable the characterization of a broad population of exoplanets which are too widely separated from their host stars for the 1D technique to be effective but too close to be detected with direct imaging. A key advantage over transit-based characterization techniques is that the much weaker inclination requirements allow cross-correlation techniques to be used on many more targets, particularly at larger orbital separations. In this paper, we present a series of simulations that begin to address the shortcomings in our knowledge of the limits to the 2D multi-epoch approach, based on observations taken with Keck-NIRSPEC2.0 \citep{McLean1998, martin2018}, focusing primarily on the photometric contrast, equilibrium temperature, and C/O ratio of the target exoplanet. We consider in detail the factors affecting exoplanet detectability in the $L$-band (2.9--3.7 $\mu$m) using the 2D multi-epoch approach from \citet{lockwood}, simulating observational, instrumental, and physical inputs as described in \citet{Buzard_2020}. We describe these simulations in detail in Section \ref{sec:sims}, and discuss the resulting limits on planet size/photometric contrast in Section \ref{sec:rsim} and impact of equilibrium temperature on detection in Section \ref{sec:tlim}.
Based on the findings in Section \ref{sec:rsim}, we assess the ability of the cross-correlation technique to constrain C/O ratio in Section \ref{sec:co}, focusing on planets cooler than those previously studied. Instrumental factors in detection are assessed through simulations in Section \ref{sec:inst}. Section \ref{sec:disc} discusses the results of these simulations and the implications for future multi-epoch HRCCS observations. Section \ref{sec:conc} summarizes our findings. \section{Methods}\label{sec:sims} \begin{figure*} \centering \noindent\includegraphics[width=39pc]{specplots.pdf} \caption{Model $L$-band planet spectra for $\rm T_{eq}$ of 1400 K (left) and 600 K (right), masking regions not observed with NIRSPEC. The top row plots the total spectrum including all species, while the middle and bottom rows plot the individual H$_2$O and CH$_4$ spectra respectively. While the spectrum is dominated by weak H$_2$O features at 1400 K, the 600 K spectrum is dominated by deep CH$_4$ features. The dramatic difference in spectra indicates the spectroscopic contrast evolves differently with $\rm T_{eq}$ than the photometric contrast, benefiting the cross-correlation detection of cooler planets.} \label{plmodspec} \end{figure*} \begin{figure} \centering \noindent\includegraphics[width=20pc]{brogilinecomparison.pdf} \caption{Comparison of the \citet{zucker2003} $\log L$ and \citet{brogi2019} techniques for combining cross-correlations with 10 and 25 epochs. Side peaks near the exoplanet velocity due to features in the stellar spectrum are stronger in the \citet{brogi2019} approach compared with the \citet{zucker2003} approach, causing us to focus on the \citet{zucker2003} method for this work.} \label{brogizuckercomp} \end{figure} High-fidelity simulations of 2D multi-epoch HRCCS were first presented in \citet{Buzard_2020}. The description of the simulated spectra is summarized here for completeness. 
These simulations make use of PHOENIX stellar models from \citet{phoenix}, interpolating over effective temperature $\rm T_{eff}$, metallicity $z$, and surface gravity $\log g$ in order to match the desired stellar properties. For all simulations, a model stellar spectrum based on the Sun-like star HD 187123 A from \citet{Buzard_2020} is used, with effective temperature $\rm T_{eff} = 5815$ K, surface gravity $\log g = 4.359$, and metallicity $\rm [Fe/H] = 0.121$. Planet models are created with SCARLET \citep{benneke, benneke_2019}, and can be generated with specified values for $z$, $\log g$, C/O ratio, and $\rm T_{eq}$. The planet models are similar to those used in \citet{Buzard_2020} for HD 187123 b, though with a noninverted atmospheric temperature-pressure profile. The lack of an inverted T--P profile at the temperatures simulated is supported by both theory and observations \citep[e.g.][]{Fortney_2008, Line_2016}. Unless otherwise noted, planet models have a solar C/O ratio and metallicity, Jupiter surface gravity, $\rm R = 1.0\ R_J$, and $\rm T_{eq} = 1400\ K$. Figure \ref{plmodspec} plots simulated planet spectra with equilibrium temperatures of 1400 K and 600 K over the wavelength ranges simulated, which are based on prior observations with Keck/NIRSPEC2. In general, 10 or 25 epochs were simulated, evenly spaced in orbital phase, with a per-epoch signal-to-noise ($\rm S/N_{epoch}$) of 1500 for 25 epochs and 2500 for 10 epochs. The 10--epoch, high $\rm S/N_{epoch}$ case is meant to approximate a large telescope such as Keck, while the 25--epoch case approximates a smaller, less oversubscribed telescope which can obtain more epochs but with lower S/N in a given integration time. These simulations represent a significantly larger data set than previous observations, and are intended to set limits for the many-epoch, lower signal-to-noise observing strategy suggested in \citet{Buzard_2020}.
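For comparing observing strategies, the per-epoch values above can be related to an effective total signal-to-noise by quadrature addition, assuming independent Gaussian noise in each epoch. A trivial sketch (the helper function is hypothetical, not from the paper's pipeline):

```python
import math

def total_sn(n_epochs, sn_epoch):
    # Independent Gaussian noise combines in quadrature across epochs,
    # so the effective total S/N grows as sqrt(N_epochs).
    return math.sqrt(n_epochs) * sn_epoch

print(round(total_sn(10, 2500)))   # ~7900 for the large-telescope case
print(round(total_sn(25, 1500)))   # 7500 for the smaller-telescope case
```

Under this scaling, 25 epochs at $\rm S/N_{epoch} = 1500$ and 10 epochs at $\rm S/N_{epoch} = 2500$ represent comparable total signal-to-noise, isolating the effect of phase sampling.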
To create the simulated observed spectra, stellar and planetary model spectra are first Doppler-shifted based on the systemic velocity of the target at the time of observation and the planet orbital velocity: \begin{equation} \begin{split} v_{pri} & = v_{rad} - v_{bary} \\ v_{sec} & = K_p \sin \frac{2 \pi t_{obs}}{P} + v_{rad} - v_{bary}\\ \end{split} \end{equation} where $v_{pri}$ is the velocity of the star, $v_{sec}$ is the velocity of the planet, $v_{rad}$ is the systemic radial velocity, $v_{bary}$ is the velocity of the observer at the time of observation with respect to the Earth-Sun barycenter in the direction of the target system, $\rm K_p$ is the planet radial velocity semi-amplitude, and $t_{obs}$ is the time of observation measured from inferior conjunction. The stellar reflex velocity is much smaller than the velocity precision of NIRSPEC, and is therefore not included. Barycentric velocities for the simulations were taken from a selection of values evenly spaced between $-15$ and $15$ km $\rm s^{-1}$, and were the same for all simulations with the same number of epochs. An orbital period of 3.1 days and $\rm K_p = 75$ km $\rm s^{-1}$\ were chosen based on typical properties for a hot Jupiter system around a Sun-like star. Observation times were selected to give even phase sampling of the planetary orbit. Changes in period do not impact the final cross-correlation surface, provided $\rm K_p$, $v_{bary}$, and the observed orbital phases are fixed, and these simulations should therefore apply to longer orbital periods as well. The spectra are then scaled to the desired radii and combined, interpolating onto the wavelength grid of the planet model. Stellar continuum is removed with a third-order polynomial fit over 2.8--4.0 $\mu$m in wavenumber space, the same procedure as is used for the stellar spectral template in the cross-correlation routine.
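The epoch velocities defined above can be evaluated directly. A short Python sketch using the stated parameters ($\rm K_p = 75$ km $\rm s^{-1}$, $P = 3.1$ days, barycentric velocities evenly spaced between $-15$ and $15$ km $\rm s^{-1}$); the systemic velocity of zero is our assumption:

```python
import numpy as np

Kp = 75.0       # planet RV semi-amplitude, km/s
P = 3.1         # orbital period, days
v_rad = 0.0     # systemic radial velocity, km/s (assumed for illustration)

t_obs = np.linspace(0.0, P, 10, endpoint=False)   # 10 epochs, even phase sampling
v_bary = np.linspace(-15.0, 15.0, 10)             # barycentric velocities, km/s

v_pri = v_rad - v_bary                            # stellar velocity at each epoch
v_sec = Kp * np.sin(2 * np.pi * t_obs / P) + v_rad - v_bary  # planet velocity
```

The planet velocity departs from the stellar velocity by up to $\pm \rm K_p$ over the orbit, which is the signal the 2D cross-correlation exploits.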
The combined model is convolved with an instrument profile determined from prior NIRSPEC2 observations and interpolated onto a NIRSPEC2 wavelength grid, at which point regions with observed telluric absorption of more than 25 percent are masked. Finally, Gaussian noise is added at the desired per-pixel signal-to-noise ratio of the observation. The combined simulated spectrum is passed into a 2D cross-correlation routine to identify the planet signal and compute a log-likelihood $\log L$. For this work, the \citet{zucker2003} $\log L$ approach as outlined in \citet{Buzard_2020} is used to convert 2D cross-correlation functions into log-likelihoods: \begin{equation} \log L = -\frac{N}{2}\log(1-R^2) \end{equation} where $N$ is the number of points in the spectra and $R$ is the 2D cross-correlation function, calculated as described in \citet{zuckertodcor}. An alternative technique for combining 1D cross-correlation functions was described in \citet{brogi2019}, and adapted for 2D cross-correlation functions in \citet{Buzard_2020}: \begin{equation} \log L = -\frac{N}{2} \left\{ \log(\sigma_f \sigma_g) + \log\left[\frac{\sigma_f}{\sigma_g} + \frac{\sigma_g}{\sigma_f} - 2R\right] \right\} \end{equation} where $\sigma_f$ is the variance of the observed spectrum and $\sigma_g$ is the combined variance of the two correlation templates. We compare these two approaches for 10 and 25 simulated epochs, as shown in Figure \ref{brogizuckercomp}. The peak corresponding to the planet is similar in both cases, but as discussed in \citet{Buzard_2020}, off-peak correlation features are stronger relative to the true peak using the \citet{brogi2019} technique, particularly the feature near 25 km $\rm s^{-1}$. We therefore focus on the \citet{zucker2003} technique as it gives a clearer detection of the true planet peak in the $\log L(\rm K_p)$ curve.
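The two log-likelihood mappings above can be written compactly as follows (a Python sketch; the function names are ours):

```python
import numpy as np

def zucker_logL(R, N):
    # Zucker (2003): log L = -(N/2) * log(1 - R^2)
    return -0.5 * N * np.log(1.0 - np.asarray(R) ** 2)

def brogi_logL(R, N, sigma_f, sigma_g):
    # Brogi & Line (2019) mapping, as adapted for 2D cross-correlation:
    # log L = -(N/2) * { log(sf*sg) + log(sf/sg + sg/sf - 2R) }
    return -0.5 * N * (np.log(sigma_f * sigma_g)
                       + np.log(sigma_f / sigma_g + sigma_g / sigma_f
                                - 2.0 * np.asarray(R)))
```

For example, `zucker_logL(0.0, N)` is zero, and the likelihood grows as the correlation strengthens; in the Brogi \& Line form, the template and data variances also enter explicitly.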
In Section \ref{sec:rsim}, we introduce a technique to correct for these off-peak correlation features, which leads to nearly identical performance between the two approaches. We continue to prefer the \citet{zucker2003} technique since it is unlikely star-only simulations will be able to correct off-peak structure with the same precision in real observations. In cases where there is significant non-planetary structure in the $\log L$ surface, it is useful to consider the relative likelihood surface, $\log RL$: \begin{equation}\label{rleqn} \log RL = \log L - \log \bar{L} \end{equation} where $\log \bar{L}$ is the log-likelihood function arising from features unrelated to the planet spectrum, such as stellar features or telluric residuals. The subtraction in log space is equivalent to a division in linear space, and effectively normalizes the planetary log-likelihood function based on prior knowledge of the structure of the correlation space. \citet{Buzard_2020} showed that simulations constructed in this way could closely replicate actual NIRSPEC observations for HD 187123, a system containing a hot Jupiter orbiting a Sun-like star. Notably, the guided PCA telluric removal is sufficiently complete that features included in the model telluric spectrum are almost entirely removed. This allows us to generate simulated observations without simulating the full telluric removal procedure. Unlike observations, these simulations will not include off-peak structure due to differences between the observed and model telluric spectra, but such features do not dominate the log-likelihood surface. A range of exoplanet systems can be simulated to assess the role of various observational and physical factors in planet detection through the 2D multi-epoch approach. Different stellar and planetary properties can be mimicked by changing the model spectra used to create the simulations, though we can only compare with observational results for hot Jupiters.
Observational properties can be modeled by changing the signal-to-noise ratio, the number of epochs, and the sampling of the orbital phase by the simulated epochs. Orbital properties are modeled by changing the period and amplitude of the planet radial velocity variation to match the desired inclination and semi-major axis. \section{Results}\label{sec:res} \subsection{Photometric Contrast Simulations}\label{sec:rsim} \begin{figure*} \centering \noindent\includegraphics[width=40pc]{rgrid_allmolecules_e25sn1500.pdf} \caption{Relative likelihood curves for 25 observed epochs with $\rm S/N_{epoch} = 1500$ of planets with equilibrium temperatures of 600 K, 1000 K, and 1400 K and photometric contrasts corresponding to radii ranging from 0.7 $\rm R_J$ to 1.3 $\rm R_J$. Structured correlation has been removed by subtracting the log-likelihood curve of a star-only simulation, and all curves have been shifted to have an off-peak median of zero for plotting. The top left panel plots the combined relative likelihood function of all molecules in the SCARLET planet model, while the remaining panels plot the relative likelihood curves from individual molecules. While H$_2$O dominates the full model detection at 1400 K, CH$_4$ dominates at 600 K, transitioning around 1000 K. 
The detection of cooler planets is stronger than expected based on the difference in photometric contrast compared with warmer planets.} \label{radgridall} \end{figure*} \begin{deluxetable*}{ccccccc}\centering \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{\colhead{Planet Radius} & \colhead{T = 600, 10 Epochs} & \colhead{T = 600, 25 Epochs} & \colhead{T= 600, 50 Epochs} & \colhead{T = 1400, 10 Epochs} & \colhead{T = 1400, 25 Epochs} & \colhead{T= 1400, 50 Epochs} \\ $\rm R_J$ & $\rm S/N_{epoch}$ = 2500 & $\rm S/N_{epoch}$ = 1500 & $\rm S/N_{epoch}$ = 1500 & $\rm S/N_{epoch}$ = 2500 & $\rm S/N_{epoch}$ = 1500 & $\rm S/N_{epoch}$ = 1500} \startdata 1.3 & 4.6 & 9.7 & 19.7 & 5.4 & 12.3 & 24.7 \\ 1.2 & 3.3 & 11.1 & 20.5 & 4.3 & 10.6 & 21.8 \\ 1.1 & 2.4 & 7.5 & 14.5 & 3.6 & 8.4 & 17.7 \\ 1.0 & 2.6 & 5.7 & 11.5 & 3.1 & 6.9 & 15.0 \\ 0.9 & 1.2 & 4.7 & 8.7 & 2.6 & 5.9 & 12.7 \\ 0.8 & 1.3 & 5.7 & 7.0 & 1.8 & 5.5 & 10.5 \\ 0.7 & 0.9 & 2.4 & 5.4 & 1.6 & 3.7 & 7.9 \\ \enddata \caption{Detection likelihood ratios with varying $\rm N_{epochs}$ and $\rm S/N_{epoch}$, structured correlation removed, using complete planet templates. This approach to the detection confidence is easy to compute and yields the correct relative detection strengths, but underestimates the absolute confidence compared with modeling-based approaches which account for the width of the planet feature in the likelihood space. Additional epochs significantly improve the detection strength, even when the total signal-to-noise across all epochs is similar.} \label{detstrengths} \end{deluxetable*} To set rough limits on the equilibrium temperature and photometric contrast required to make a detection, we simulate the thermal emission spectra of planets with temperature structures generated self-consistently for equilibrium temperatures 600 K, 1000 K, and 1400 K. We vary photometric contrast by scaling the planet models from 0.7 $\rm R_J$ to 1.3 $\rm R_J$ in steps of 0.1 $\rm R_J$. 
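As context for the contrast values used in this section, the scaling of the $L$-band photometric contrast with $\rm R_p$ and $\rm T_{eq}$ can be sketched with blackbody curves. This is a simplified Python illustration; the blackbody assumption, the 1 $R_\odot$ stellar radius, and the flat wavelength grid are ours, and real planet spectra deviate substantially from blackbodies, as Figure \ref{plmodspec} shows:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23     # SI constants (approximate)
R_SUN, R_JUP = 6.957e8, 7.149e7              # radii in metres

def planck(lam, T):
    # Planck spectral radiance up to a constant that cancels in the ratio
    return 1.0 / (lam**5 * np.expm1(H * C / (lam * KB * T)))

def lband_contrast(T_p, R_p_jup, T_s=5815.0, R_s_sun=1.0):
    """Blackbody estimate of F_p/F_s averaged over the 2.9--3.7 micron band."""
    lam = np.linspace(2.9e-6, 3.7e-6, 200)
    ratio = planck(lam, T_p).sum() / planck(lam, T_s).sum()
    return (R_p_jup * R_JUP / (R_s_sun * R_SUN)) ** 2 * ratio
```

This reproduces the qualitative trend, with contrast rising steeply with $\rm T_{eq}$ and as $\rm R_p^2$; the paper's quoted contrasts come from the full SCARLET spectra, which can differ from a blackbody estimate, particularly at 600 K where deep CH$_4$ features dominate.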
Note that we do not re-compute the planet model for each radius/photometric contrast to account for changes in gravity or temperature--pressure (T--P) profile. We simulate 10 epochs with $\rm S/N_{epoch}$ of 2500 and 25 and 50 epochs with $\rm S/N_{epoch} = 1500$, for total S/N of 7900, 7500, and 10600, respectively. The variation in radius corresponds to an $L$-band photometric contrast range of $2.2\times10^{-5}$ to $7.6\times10^{-5}$ for $\rm T_{eq} = 600$ K, $1.6\times10^{-4}$ to $5.4\times10^{-4}$ for $\rm T_{eq} = 1000$ K, and $3.7\times10^{-4}$ to $1.3\times10^{-3}$ for $\rm T_{eq} = 1400$ K. This allows the effect of variations in temperature given radius, variations in radius given temperature, and simultaneous variations in both radius and temperature to be distinguished and compared. \emph{A priori}, we expect that larger and hotter planets should be more easily detected for fixed stellar properties, as the more favorable photometric contrast with the host star gives a better signal-to-noise ratio on planet lines. Previous applications of the 2D multi-epoch cross-correlation technique have found significant non-planetary structure in the observed $\log L(\rm K_p)$ curves, well in excess of the errors estimated from jackknife tests \citep{piskorz88133, piskorzupsand, Piskorz2018, Buzard_2020}. These features are believed to arise from a combination of telluric residuals, features in the observed star/planet not represented in the template spectra, and the correlation between the planet template and observed stellar spectrum. In some cases \citep[e.g.][]{lockwood, Buzard_2020}, these effects can be comparable in strength to the cross-correlation features caused by the planet, requiring careful identification and modeling to correctly identify the planet signal and complicating estimates of the detection strength.
Proper echelle and cross-disperser settings can also help to minimize these effects by avoiding spectral regions prone to strong telluric residuals or windows where the stellar and planetary spectra are strongly correlated. As our simulation framework does not include tellurics and uses the same model framework for both cross-correlation templates and simulating observed spectra, the off-peak structure (visible in Figure \ref{brogizuckercomp}) should be dominated by the correlation between the planet template and the stellar spectrum. In order to remove structured off-peak correlation, a set of star-only observations containing no planet spectrum was simulated and the resulting $\log \bar{L}(\rm K_p)$ curve subtracted from each of the simulated planet $\log L(\rm K_p)$ curves to give a relative likelihood, $\log RL(\rm K_p)$ (see equation \ref{rleqn}). This has the effect of normalizing the likelihood surface by the correlation of the planet model with the observed stellar spectrum. The lack of significant structure in Figure \ref{radgridall} between -150 and 0 km $\rm s^{-1}$\ compared with Figure \ref{brogizuckercomp} shows that the subtraction of the star-only simulation almost entirely eliminates the structured off-peak correlation in our simulations. Simulations of the stellar spectrum were compared to observations of the combined star/planet spectrum in \citet{Buzard_2020} and successfully reproduced some, but not all, observed non-planetary features in the log-likelihood surface, indicating that the near-perfect correction of structured correlation is not yet achievable in practice. A discussion of how this correction may be improved in observations is presented in Section \ref{sec:disc}. 
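The star-only correction described above reduces, per $\rm K_p$ grid point, to a subtraction of log-likelihood curves. A minimal sketch, with illustrative stand-in curves rather than actual simulation outputs:

```python
import numpy as np

# Sketch of the relative-likelihood correction: subtract the star-only
# log-likelihood curve from each planet curve, per K_p grid point.
# The curves below are illustrative stand-ins, not simulation outputs.
def relative_log_likelihood(log_l_planet, log_l_star_only):
    """log RL(K_p) = log L(K_p) - log L_bar(K_p)."""
    return np.asarray(log_l_planet) - np.asarray(log_l_star_only)

kp = np.linspace(-150.0, 150.0, 301)            # K_p grid in km/s
star_structure = 0.1 * np.cos(kp / 20.0)        # stand-in off-peak structure
planet_peak = np.exp(-0.5 * ((kp - 70.0) / 5.0) ** 2)
log_rl = relative_log_likelihood(planet_peak + star_structure, star_structure)
print(kp[np.argmax(log_rl)])  # planet peak recovered at 70.0 km/s
```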
The inability to achieve good corrections for off-peak correlation in observations complicates the detection of smaller/cooler planets compared with simulations, and the results presented in Figure \ref{radgridall} and Table \ref{detstrengths} likely overestimate the detectability of such planets with current pipelines. However, correcting the off-peak correlation enables a more robust comparison between different simulations, and offers insight on the possibilities with future pipeline improvements. The $\log RL(\rm K_p)$ curves for each $\rm T_{eq}$ and $\rm R_p$ are plotted in Figure \ref{radgridall}, and detection likelihood ratios are listed in Table \ref{detstrengths}. Jackknife tests are used to estimate the shaded 1$\sigma$ error region. In addition to the total planet spectrum, we perform the cross-correlation with molecule-specific planet templates for H$_2$O, CH$_4$, CO, NH$_3$, H$_2$S, CO$_2$, and PH$_3$, in order to assess the impact of individual species on the detection. Only H$_2$O and CH$_4$ are robustly detected, and are plotted in Figure \ref{radgridall} along with the NH$_3$ and complete templates for comparison. The relative strength of the H$_2$O and CH$_4$ detections varies with simulated temperature, as expected from the equilibrium chemistry. The NH$_3$ template gives a featureless $\log RL(\rm K_p)$ curve in the 1400 K and 1000 K models, but shows weak features in the 600 K case, though not significant in comparison with H$_2$O or CH$_4$. The H$_2$S template shows some weak features near the expected planet peak in the 1000 K models, but the detection is similarly not significant compared with H$_2$O or CH$_4$. The CO, CO$_2$, and PH$_3$ templates all produce featureless $\log RL(\rm K_p)$ curves after correcting for the stellar contribution in all cases, consistent with the expected lack of $L$-band spectral features. 
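The shaded $1\sigma$ regions come from jackknife tests over epochs. A generic delete-one jackknife over per-epoch log-likelihood curves might look like the following sketch; the array shapes and the choice of statistic are our assumptions, not the paper's exact implementation:

```python
import numpy as np

# Generic delete-one jackknife error estimate over epochs (a sketch;
# the paper's pipeline details are assumptions here).
def jackknife_error(per_epoch_curves):
    """Jackknife standard error of the epoch-combined (mean) curve.

    per_epoch_curves: (n_epochs, n_kp) array of log-likelihood curves.
    """
    curves = np.asarray(per_epoch_curves, dtype=float)
    n = curves.shape[0]
    total = curves.sum(axis=0)
    # Leave-one-out means: combined curve with epoch i excluded.
    loo = (total[None, :] - curves) / (n - 1)
    # Standard jackknife variance formula applied per K_p grid point.
    return np.sqrt((n - 1) / n * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0))

rng = np.random.default_rng(0)
sigma = jackknife_error(rng.normal(size=(25, 301)))
print(sigma.shape)  # one error per K_p grid point
```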
In the 600 K simulations, correlating with all molecules in the planet model results in a much stronger detection than would be expected from the individual H$_2$O and CH$_4$ cross-correlation functions. This does not appear to be the result of other molecules contributing substantially to the total detection. Rather, we note that omitting a molecule from the cross-correlation template -- but not the simulated observation -- effectively decreases the signal-to-noise of the observation. The features present in the simulated data but omitted from the template will still correlate with the template spectrum, but at values other than the planetary radial velocity, increasing the off-peak correlation and reducing the relative size of the planet peak. The impact is more pronounced than an increase in the Gaussian noise because the non-random structure produces off-peak structure that will add consistently when epochs are combined, whereas structure arising from random noise will average to zero across epochs. Detection strength estimates for HRCCS detections have been made in a variety of ways. In NIRSPEC observations prior to the 2019 upgrade, \citet{lockwood} reported a 6$\sigma$ detection in 5 epochs by comparing observed and synthetic log-likelihood functions. \citet{piskorz88133} reported a strong detection based on 6 epochs by computing the Bayes factor, comparing a Gaussian planet detection with a linear nondetection in the log-likelihood space. \citet{piskorzupsand} reported a 3.7$\sigma$ detection in 7 epochs and \citet{Piskorz2018} reported a 3.8$\sigma$ detection in 6 epochs using a similar approach. \citet{Buzard_2020} reported a 6.5$\sigma$ detection in 7 epochs, two of which were taken with the upgraded NIRSPEC, by fitting the full observed log-likelihood function with simulations. 
Using the Bayes factor technique described in \citet{Piskorz2018} and the complete all-molecule planet template results in $>5\sigma$ detections for all 50-epoch simulations and all 25-epoch simulations except for two cases with $\rm T_{eq} = 600$ K, $\rm R \leq 1.1 R_J$. This is consistent with the higher resolution, increased spectral coverage, and increased number of epochs used in the simulations compared with prior NIRSPEC observations, as well as the correction for stellar correlation features clarifying the planet signal. The 10-epoch simulations range from $<2\sigma$ to $>5\sigma$, despite roughly the same total S/N as the 25-epoch simulations, indicating that many low-S/N observations yield more reliable detections than a smaller number of epochs with similar total S/N, consistent with the simulations from \citet{Buzard_2020}. The correction of non-planetary correlation features allows us to compare the Bayes factor approach with a likelihood ratio, which we compute as: \begin{equation} LR = 2\times[\log L(\hat K_p) - \log L(\Bar{K_p})] \end{equation} where $\log L(\hat K_p)$ is the value of the log-likelihood function at the planet peak and $\log L(\Bar{K_p})$ is the median value far from the planet peak. The resulting LR values are listed in Table \ref{detstrengths}. While these values can be used to compute a $\sigma$ detection confidence using Wilks' theorem, doing so fails to account for the width of the planet feature in the log-likelihood space, and underestimates the detection confidence as a result. However, the likelihood ratio approach is easier to calculate for a large number of simulations, particularly for very well-detected cases, and still yields the correct {\it relative} detection strengths. These factors make the likelihood ratio a more useful measure of how detection confidence will vary with planet properties and observation strategy. 
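A sketch of the likelihood-ratio statistic above, together with the one-parameter Wilks conversion $\sigma = \sqrt{LR}$ that, as noted, underestimates the true confidence (array contents and index choices are illustrative):

```python
import numpy as np
from math import sqrt

# Sketch of the likelihood-ratio detection statistic; the log L(K_p)
# curve below is an illustrative stand-in.
def likelihood_ratio(log_l, peak_idx, off_peak_mask):
    """LR = 2 * [log L at the peak - median off-peak log L]."""
    return 2.0 * (log_l[peak_idx] - np.median(log_l[off_peak_mask]))

def lr_to_sigma(lr):
    """One-parameter Wilks conversion: LR ~ chi^2_1, so sigma = sqrt(LR)."""
    return sqrt(max(lr, 0.0))

log_l = np.zeros(301)
log_l[220] = 12.5                 # stand-in planet peak
mask = np.arange(301) < 150       # off-peak region used for the median
lr = likelihood_ratio(log_l, 220, mask)
print(lr, lr_to_sigma(lr))  # 25.0 5.0
```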
\subsection{Planet Temperature Simulations}\label{sec:tlim} \begin{figure*} \centering \noindent\includegraphics[width=39pc]{temp_mlrl_3.pdf} \caption{Two-parameter $\log RL(\rm K_p, T_{eq})$ relative likelihood functions for simulations with equilibrium temperatures of 700 K, 1000 K, and 1300 K. True values are indicated in dashed red. Contours in the left column represent approximately $3\sigma$ changes in the 2D surface while contours in the right column are set at $2\sigma$. The right column subtracts a star-only set of simulated observations to remove structure arising from the stellar spectrum, clarifying the temperature sensitivity. The best-fit $\rm T_{eq}$ is consistently lower than the true value by several hundred K. The preferred value of $\rm K_p$ is nearly independent of temperature.} \label{tgridall} \end{figure*} We assess the impact of planet equilibrium temperature on the cross-correlation detection by correlating 10 and 25 epochs of simulated observations with $\rm R = 1.0\ R_J$, $\rm S/N_{epoch} = 1500$, and varying planet equilibrium temperatures with models ranging from 600 K to 1400 K in steps of 100 K, covering the ``warm/hot" range of exoplanet temperatures. This allows us to compute the two-parameter log-likelihood function $\log L(\rm K_p, T_{eq})$, plotted in the left column of Figure \ref{tgridall} for $\rm T_{eq} = $ 1300 K, 1000 K, and 700 K. As before, we also compute the log-likelihood surface from a simulation of the stellar spectrum alone, which is then subtracted in order to remove structure arising from the correlation between stellar and planetary spectra (see equation \ref{rleqn}). The resulting log relative likelihood surfaces, $\log RL(\rm K_p, T_{eq})$, are plotted in the right column of Figure \ref{tgridall}. \begin{figure*} \centering \noindent\includegraphics[width=40pc]{t800e25_model_grid_prob.pdf} \caption{Temperature constraint for 25 epochs of a 1 $\rm R_J$ planet with $\rm T_{eq} = 800$ K. 
On the left, the 1D marginalization of the 2D log relative likelihood surface near the best-fit $\rm K_p$ is plotted for the ``observed" spectrum and a grid of models with different temperatures. The right panel plots the probability density $\rm P(T_{eq})$ computed from the deviations between the ``observed" and model curves in the left panel, with the true value indicated in solid red, best fit in dashed red, and 1$\sigma$ errors in dotted red.} \label{tfitpeak} \end{figure*} We then marginalize the two-parameter $\log RL(\rm K_p, T_{eq})$ surface over the 10 km $\rm s^{-1}$\ region surrounding the best-fit value of $\rm K_p$, roughly corresponding to the resolution of NIRSPEC, to obtain $\log RL(\rm T_{eq})$, plotted in black in the left panel of Figure \ref{tfitpeak}. The size of the marginalization region has minimal impact on the final constraints obtained for simulations in which unwanted star/planet correlation features are effectively removed. We use a 10 km $\rm s^{-1}$\ region for subsequent analysis in order to reflect the uncertainty in the best-fit $\rm K_p$ value while minimizing contamination from residual non-planetary correlation features. As expected based on the right column of Figure \ref{tgridall}, the $\log RL(\rm T_{eq})$ curve is temperature-dependent but does not show a clear peak at the true value. We therefore use a modeling approach to attempt to constrain the temperature, analogous to the technique used by \citet{Buzard_2020} to determine $\rm K_p$ in the presence of significant non-planetary features in the $\log RL(\rm K_p)$ curve. In this case, $\rm K_p$ is already well-constrained (see Figure \ref{tgridall}, right column), and we instead attempt to model extraneous features in the $\log RL(\rm T_{eq})$ curve in order to better estimate $\rm T_{eq}$. A series of models is created with equilibrium temperatures from 600 K to 1400 K in steps of 100 K. 
The two-parameter log-likelihood surface is calculated for each model, and a simulated stellar spectrum is subtracted to remove off-peak correlation features. Finally, we marginalize the models over the same $\rm K_p$ range as was previously used for the ``observed" spectrum. Both the ``observed" slice we wish to constrain and these model slices are shifted to zero mean, accounting for the lack of Gaussian noise in the models, and plotted in the left panel of Figure \ref{tfitpeak}. While there is not a unique peak at the correct temperature, the $\log RL(\rm T_{eq})$ function still shows significant changes in its shape with model equilibrium temperature. Constraining the temperature therefore requires an additional step to compare the observed and expected $\log RL(\rm T_{eq})$. We compute the negative log-likelihood of the deviation between the ``observed" curve and each model, under the assumption that the model points are Gaussian-distributed: \begin{equation}\label{logldev} -\log L(T_{eq}) = \sum_i \frac{(O_i - M(T_{eq})_i)^2}{2\sigma^2(T_{eq})_i} \end{equation} where $O_i$ is the value of the observed log-likelihood function at equilibrium temperature $i$ in the parameter space and $M(\rm T_{eq})_i$ is the value of the model log-likelihood function with equilibrium temperature $\rm T_{eq}$ at temperature $i$. $\sigma(\rm T_{eq})_i$ is the error in the associated observed values determined from a jackknife test. To more easily compute best-fit values and confidence intervals, we convert the log-likelihood to a probability distribution $\rm P(\rm T_{eq})$ by exponentiating and normalizing the integral. 
This can be written as: \begin{equation}\label{loglpdf} P(T_{eq}) = C \prod_i \exp\left(-\frac{(O_i - M(T_{eq})_i)^2}{2\sigma^2(T_{eq})_i}\right) \end{equation} \begin{equation} = C \exp\left(-\sum_i\frac{(O_i - M(T_{eq})_i)^2}{2\sigma^2(T_{eq})_i}\right) \end{equation} where $C$ is a numerically-determined normalization constant so that $\rm P(T_{eq})$ integrates to unity. The value of $\rm T_{eq}$ which maximizes $\rm P(T_{eq})$ is the best-fit temperature of the ``observed" planet and the 1$\rm \sigma$ errors on $\rm T_{eq}$ are computed as the interval enclosing an area of 0.68 around the best-fit value. These are reported in Table \ref{teffcon}. Meaningful constraints are obtained for $\rm T_{eq} \leq 1200$ K in both the 10 and 25 epoch cases. At higher temperatures, the fitting becomes unreliable, with the 1300 K and 1400 K cases effectively indistinguishable even in the 25 epoch simulations. The measured values are generally smaller than the true values, and an increased number of epochs results in negligible improvements to accuracy. Additionally, the reported precision based on jackknife estimates of the error in $\log RL(\rm T_{eq})$ appears to underestimate the true uncertainty. Comparing the true and measured values suggests the precision is $\sim$50--100 K, though the 100 K resolution of the model grid limits our ability to make a robust estimate. At the same time, Figure \ref{tgridall} shows that the correct value of $\rm K_p$ is recovered even with significant mismatches in $\rm T_{eq}$ between observed and template spectra, so strong priors are not required in order to make an initial detection. While neglecting the day/night temperature mismatch will have a negative impact on the detection strength compared with models accounting for the full temperature structure, it does not produce an error in $\rm K_p$. 
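The deviation-based temperature fit can be sketched as follows: a chi-square-style misfit between the ``observed" $\log RL(\rm T_{eq})$ slice and each model slice is exponentiated and normalized into a probability over the model grid. The curves and grid below are illustrative assumptions, not the paper's simulation outputs:

```python
import numpy as np

# Sketch of the deviation-based fit: Gaussian misfit between the
# "observed" slice and each model slice, converted to a normalized
# probability over the model temperature grid.
def temperature_pdf(observed, models, sigma):
    """P(T_eq) on the model grid from per-point Gaussian deviations."""
    observed, sigma = np.asarray(observed), np.asarray(sigma)
    nll = np.array([((observed - m) ** 2 / (2.0 * sigma**2)).sum()
                    for m in models])
    p = np.exp(-(nll - nll.min()))   # subtract the min for numerical safety
    return p / p.sum()               # normalize to unit total probability

t_grid = np.arange(600, 1500, 100)   # 600 K ... 1400 K in 100 K steps
models = [np.sin(np.linspace(0, 3, 9)) * t / 1000.0 for t in t_grid]
observed = models[2] + 0.01          # "true" T_eq = 800 K plus a small offset
pdf = temperature_pdf(observed, models, np.full(9, 0.05))
print(t_grid[np.argmax(pdf)])  # 800
```

The best-fit value is the grid point maximizing the probability, and a 68 percent interval around it would give the quoted $1\sigma$ errors.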
Finally, we note that the detection strength is maximized when the cross-correlation template is cooler than the observed spectrum by $\sim200$ K. \begin{deluxetable}{ccc}\centering \tablewidth{0pt} \tabletypesize{\scriptsize} \vspace{0.5cm} \tablehead{\colhead{``Observed" $\rm T_{eq}$ [K]} & \colhead{Measured, 10 epoch} & \colhead{Measured, 25 epoch}} \startdata 700 & $660\pm20$ & $640\pm10$ \\ 800 & $720\pm30$ & $760\pm20$ \\ 900 & $850\pm20$ & $850\pm20$ \\ 1000 & $1010\pm20$ & $1020\pm20$ \\ 1100 & $1100\pm30$ & $1120\pm20$ \\ 1200 & $1160\pm90$ & $1180\pm90$ \\ \enddata \caption{$\rm T_{eq}$ constraints from 10 and 25 epochs of a 1 $\rm R_J$ planet with $\rm S/N_{epoch} = 1500$. Accurate constraints are obtained for planets with $\rm T_{eq} \leq 1200$ K, though the jackknife estimates appear to underestimate the errors.} \label{teffcon} \end{deluxetable} \subsection{C/O Simulations for Warm Planets}\label{sec:co} \begin{figure*} \centering \noindent\includegraphics[width=39pc]{co_mlrl_3.pdf} \caption{Two-parameter $\log L(\rm K_p,C/O)$ and $\log RL(\rm K_p,C/O)$ log-likelihood functions for simulations with equilibrium temperatures of 900 K and C/O of 0.8, 0.5, and 0.2. True values are indicated in dashed red. Contours in the left column represent approximately $3\sigma$ changes in the 2D surface while contours in the right column are set at $2\sigma$. Stellar correlation features have been subtracted in the right column to produce a relative likelihood. The changes with simulated C/O suggest a sensitivity to the C/O ratio. The recovery of the correct $\rm K_p$ despite significant mismatches between the observed and correlation model C/O ratios indicates strong priors are not required for detection. } \label{cogridall} \end{figure*} \citet{Piskorz2018} attempted to place a C/O ratio constraint on the hot Jupiter KELT-2Ab using 2D multi-epoch cross-correlation with data from Spitzer and pre-upgrade Keck-NIRSPEC. 
The large errors in this measurement are consistent with Figure \ref{plmodspec} and Figure \ref{radgridall}, which for a hot planet indicate the $L$-band spectrum and resulting cross-correlation function are dominated by H$_2$O features, with no measurable contribution from carbon-bearing species. The lack of carbon species lines at high temperature prevents measurement of the C/O ratio. However, the simultaneous detection of H$_2$O and CH$_4$ at equilibrium temperatures of approximately 1000 K in Figure \ref{radgridall} suggests a meaningful C/O constraint may be possible for cooler planets at the same wavelengths, assuming chemical equilibrium and known metallicity. To assess the ability to constrain C/O in cooler planets, we simulate 25 epochs of planets with an equilibrium temperature of 900 K, radius 1 $\rm R_J$, $\rm S/N_{epoch} = 1500$, and a range of C/O ratios, analogous to the simulations in Section \ref{sec:tlim} but varying C/O ratio instead of $\rm T_{eq}$. We perform a set of simulations with solar metallicity and another with $10\times$ solar metallicity to estimate the impact on the C/O constraint. The left column of Figure \ref{cogridall} plots the two-parameter log-likelihoods, $\log L(\rm K_p, C/O)$ for simulations with C/O of 0.8, 0.5, and 0.2 at solar metallicity. As with the two-parameter temperature likelihood functions, a star-only simulation was subtracted to reduce structured off-peak correlation, producing the $\log RL(\rm K_p, C/O)$ surfaces plotted in the right column of Figure \ref{cogridall} (see equation \ref{rleqn}). The correct value of $\rm K_p$ is recovered regardless of C/O ratio, indicating that even a large mismatch between template and observed C/O ratio will not result in a non-detection. In contrast with the $\log RL(\rm K_p, T_{eq})$ surfaces, the $\log RL(\rm K_p, C/O)$ surfaces appear to be uniquely peaked near the correct value of C/O, after correcting for features arising from the stellar spectrum. 
\begin{figure*} \centering \noindent\includegraphics[width=40pc]{CO05e25_model_grid_prob.pdf} \caption{C/O constraint for a simulation with C/O = 0.5 and $\rm T_{eq} = 900$ K. Zero-mean 1D slices of the likelihood surface along the best-fit value of $\rm K_p$ are plotted at left for the target simulation and a series of models. The corresponding probability density function of the ``observed" spectrum having a given C/O, $\rm P(C/O)$, is plotted at right, with the true value in solid red, best-fit in dashed red, and 1$\rm \sigma$ confidence interval in dotted red.} \label{copeaks} \end{figure*} \begin{deluxetable}{ccc}\centering \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{\colhead{``Observed" C/O} & \colhead{Measured C/O} & \colhead{Measured C/O} \\ & $\rm z = z_\odot$ & $\rm z = 10\times z_\odot$} \startdata 0.2 & $0.24\pm0.04$ & $0.26\pm0.04$ \\ 0.3 & $0.34\pm0.03$ & $0.39\pm0.04$ \\ 0.4 & $0.41\pm0.04$ & $0.45\pm0.04$ \\ 0.5 & $0.52\pm0.04$ & $0.53\pm0.03$ \\ 0.6 & $0.60\pm0.04$ & $0.62\pm0.03$ \\ 0.7 & $0.82\pm0.06$ & $0.72\pm0.03$ \\ 0.8 & $0.86\pm0.06$ & $0.82\pm0.02$ \\ 0.9 & $0.96\pm0.08$ & $0.92\pm0.02$ \\ 1.0 & $0.92\pm0.08$ & $0.96\pm0.03$ \enddata \caption{Measured C/O values for 25 epochs with $\rm T_{eq} = 900$ K, $\rm R = 1.0\ R_J$, and $\rm S/N_{epoch} = 1500$, at solar and $10\times$ solar metallicity. Accurate constraints are obtained for C/O $< 0.7$ at solar metallicity and C/O $< 0.9$ at $10\times$ solar metallicity. At high values of C/O, the ``observed" spectra become difficult to distinguish reliably. 
These results are sufficient to distinguish between sub-stellar, super-stellar, and approximately stellar values of the planetary C/O ratio.} \label{coconstraints} \end{deluxetable} As was the case with equilibrium temperature, the two-parameter likelihood surface changes with model C/O ratio, indicating the multi-epoch technique is sensitive to observed C/O, and the recovery of the correct $\rm K_p$ at all C/O values suggests strong priors on the C/O ratio are not required to make a detection. We use the same approach to characterize the C/O sensitivity as was previously used for equilibrium temperature. We begin by marginalizing the 2D $\log RL(\rm K_p, C/O)$ surface over the 10 km $\rm s^{-1}$\ region surrounding the best-fit $\rm K_p$, roughly corresponding to the resolution of NIRSPEC, to obtain $\log RL(C/O)$, which we plot in the left panel of Figure \ref{copeaks}. While the $\log RL(C/O)$ curves are generally peaked near the correct value, the modeling approach described in the previous section continues to provide more accurate constraints when the models closely replicate observations. We therefore correlate a set of noise-free models which we fit to the observed $\log RL(C/O)$ curve following equation \ref{logldev}, which we then convert to $\rm P(C/O)$ following equation \ref{loglpdf}. The right panel of Figure \ref{copeaks} plots the resulting $\rm P(C/O)$ function for a simulation with $\rm C/O = 0.5$, analogous to the right panel of Figure \ref{tfitpeak}. Table \ref{coconstraints} shows that a good constraint on C/O is achieved for input values below C/O = 0.7 from 25 epochs of $\rm S/N_{epoch} = 1500$ and solar metallicity, and below C/O = 0.9 for $10\times$ solar metallicity. At higher C/O ratios, the planet spectra become too similar to reliably distinguish between different values for C/O from the $L$-band spectrum alone. At lower C/O ratios, the measured values tend to overestimate the true C/O, particularly in the higher-metallicity models. 
Despite this, the measured C/O values are accurate to better than 0.1, which should enable differentiation between substellar, approximately stellar, and superstellar C/O ratios. Higher metallicity leads to somewhat better constraints, particularly at high C/O ratios, as the increased heavy element content leads to stronger, better-detected H$_2$O and CH$_4$ lines. Both accuracy and precision are substantially improved in the 25-epoch simulations compared with 10-epoch simulations of similar total signal-to-noise. \subsection{Instrument Properties}\label{sec:inst} \begin{figure} \centering \noindent\includegraphics[width=19pc]{instrument_double.pdf} \caption{Comparison of the relative likelihood curves for different instrument properties. The top panel plots a simulated planet with $\rm T_{eq} = 1400$ K, the middle $\rm T_{eq} = 1000$ K, and the bottom panel $\rm T_{eq} = 600$ K. Ten epochs are used in each simulation. The NIRSPEC2 case uses the wavelength range plotted in Figure \ref{plmodspec} and R $\sim$ 35000, while the double resolution case considers the same wavelength coverage, but with R $\sim$ 70000. The double grasp case uses R $\sim$ 35000, but with double the wavelength coverage of the NIRSPEC2 case.} \label{instplot} \end{figure} Our final set of simulations assesses the role of spectral grasp and spectral resolution in both detection and atmospheric characterization. In order to compare instrumental properties directly, without considering the impact of the significant telluric absorption features present in the $L$-band, we use ``space-like" simulations in which pixels affected by saturated tellurics are not masked, unlike the simulations presented previously. We consider three cases with ten evenly-spaced epochs each. First, we consider a space-based NIRSPEC2 analog. The $\rm S/N_{epoch}$ is 2500, spectral resolution is $R\sim35000$ and the spectral coverage is plotted in Figure \ref{plmodspec}. 
Second, we consider a space-based instrument with the same wavelength range as NIRSPEC2, but with double the resolution ($R\sim70000$). To account for the increased dispersion, $\rm S/N_{epoch}$ is reduced to 1768. Finally, we consider a space-based instrument with $R\sim35000$ and $\rm S/N_{epoch} = 2500$, but with the spectral grasp doubled to provide nearly continuous coverage from 2.9--3.8 $\mu$m. To maintain a consistent sampling of the line spread function, the latter two cases were run with 4096 pixels instead of 2048 for NIRSPEC2. For each set of instrument parameters, we simulate a set of planets with $\rm T_{eq} = 1400$ K and radii from 0.7 $\rm R_J$ to 1.3 $\rm R_J$, similar to Section \ref{sec:rsim}, as well as a set with $\rm T_{eq} = 900$ K, $\rm R = 1.0\ R_J$, solar metallicity, and a range of C/O ratios, similar to Section \ref{sec:co}. Following the same procedures as in those sections, we present detection strengths as likelihood ratios in Table \ref{detsinst} and C/O constraints in Table \ref{coconinst}. \begin{deluxetable}{cccc}\centering \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{\colhead{Planet Radius} & \colhead{NIRSPEC2} & \colhead{Double Resolution} & \colhead{Double Grasp}} \startdata 1.3 & 24.8 & 37.9 & 41.5 \\ 1.2 & 20.9 & 32.9 & 35.9 \\ 1.1 & 17.0 & 27.4 & 30.0 \\ 1.0 & 14.9 & 21.6 & 25.1 \\ 0.9 & 12.1 & 18.1 & 20.2 \\ 0.8 & 9.4 & 15.0 & 15.2 \\ 0.7 & 7.1 & 11.4 & 12.1 \enddata \caption{Relative likelihoods of detection for 10 epochs of a planet with $\rm T_{eq} = 1400$ K, observed with three hypothetical space instruments. The NIRSPEC2 case uses the wavelength range plotted in Figure \ref{plmodspec} and $\rm R\sim35000$ based on NIRSPEC2 observations. Double resolution considers the same grasp as NIRSPEC2, but $\rm R\sim70000$. Double grasp considers $\rm R\sim35000$, but with double the wavelength coverage compared with the NIRSPEC2 case. 
Both spectral resolution and spectral coverage have a significant impact on detection strength.} \label{detsinst} \end{deluxetable} Examining first the impact on detection strength, we find that both grasp and resolution lead to a significant improvement in detection. This can be seen in Figure \ref{instplot}, which plots the relative likelihood curves for each set of instrumental factors for 10 simulated epochs with $\rm T_{eq} = 1400$ K, $\rm T_{eq} = 1000$ K, and $\rm T_{eq} = 600$ K. All detections listed in Table \ref{detsinst} are significantly stronger than the corresponding $\rm T_{eq} = 1400$ K, 10-epoch entries in Table \ref{detstrengths}, indicating that telluric losses are significantly degrading ground-based observations. Both increased spectral resolution and increased wavelength coverage offer additional improvements in detection strength, with the double resolution case producing a $\sim60$ percent improvement in the likelihood ratio compared with the ``space-like" NIRSPEC2 and the double grasp case yielding a $\sim70$ percent improvement. The improvement in detection strength appears to be somewhat dependent on planet temperature. While for the $\rm T_{eq} = 1400$ K and $\rm T_{eq} = 1000$ K planet models both increased resolution and increased grasp produce similar improvements in detection strength, the $\rm T_{eq} = 600$ K models are much better detected in the increased grasp case. 
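The per-epoch S/N used for the double-resolution case follows from photon statistics: doubling the resolution halves the photons per pixel, so photon-limited per-pixel S/N drops by $\sqrt{2}$. A quick check of the quoted value:

```python
from math import sqrt

# Photon-limited per-pixel S/N scaling when the spectral resolution
# (and hence the number of pixels per fixed bandpass) is increased.
def rescaled_snr(snr, resolution_factor):
    """Per-pixel S/N after increasing resolution by resolution_factor."""
    return snr / sqrt(resolution_factor)

print(round(rescaled_snr(2500.0, 2.0)))  # 1768, as used in the text
```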
\begin{deluxetable}{cccc}\centering \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{\colhead{``Observed" C/O} & \colhead{NIRSPEC2} & \colhead{Double Resolution} & \colhead{Double Grasp}} \startdata 0.2 & $0.19\pm0.01$ & $0.19\pm0.02$ & $0.23\pm0.01$ \\ 0.3 & $0.34\pm0.02$ & $0.35\pm0.02$ & $0.34\pm0.01$ \\ 0.4 & $0.40\pm0.02$ & $0.36\pm0.02$ & $0.40\pm0.02$ \\ 0.5 & $0.53\pm0.02$ & $0.52\pm0.02$ & $0.52\pm0.01$ \\ 0.6 & $0.64\pm0.03$ & $0.61\pm0.03$ & $0.60\pm0.02$ \\ 0.7 & $0.68\pm0.03$ & $0.69\pm0.03$ & $0.70\pm0.02$ \\ 0.8 & $0.84\pm0.03$ & $0.87\pm0.04$ & $0.83\pm0.02$ \\ 0.9 & $0.98\pm0.06$ & $0.92\pm0.05$ & $0.92\pm0.02$ \\ 1.0 & $1.06\pm0.04$ & - & $0.99\pm0.04$ \enddata \caption{C/O constraints from 10 simulated epochs with varying space-based instrument properties and no tellurics. The NIRSPEC2 case uses the grasp and resolution of previous NIRSPEC2 observations. Double resolution considers the same grasp as NIRSPEC2, but double the spectral resolution. Double grasp considers the same spectral resolution as NIRSPEC2, but with nearly complete wavelength coverage from 2.9--3.8 $\mu$m. Increased wavelength coverage offers better performance improvements, particularly at high C/O.} \label{coconinst} \end{deluxetable} Table \ref{coconinst} shows the impact of instrument parameters on the C/O measurement. While increased spectral range appears to offer some improvements in accuracy, particularly at high C/O, increased resolution does not have a significant impact on the measurements obtained. We can also estimate the impact of telluric features on the achievable C/O constraint by comparing the space-like NIRSPEC2 case in Table \ref{coconinst} with the values in Table \ref{coconstraints}. Both accuracy and precision are substantially worse when saturated telluric absorption features are removed, particularly at higher C/O, consistent with the large loss in effective spectral grasp due to telluric features. 
\section{Discussion}\label{sec:disc} \subsection{Observation Strategy} Simulations of identical systems with varying number of epochs allow us to directly compare the results of different observation strategies. Table \ref{detstrengths} lists detection strengths for simulations with 10 epochs at $\rm S/N_{epoch} = 2500$ as well as simulations of 25 epochs with $\rm S/N_{epoch} = 1500$. The total signal-to-noise ratio in these simulations combining all epochs is $\sim$7900 and $\sim$7500, respectively. Despite the slightly better total signal-to-noise, the 10-epoch case gives a substantially worse detection strength in all cases compared with the otherwise-equivalent 25-epoch simulations, corresponding to a factor of $\sim 2$ reduction in photometric contrast. Comparing the 25-epoch simulations with 50-epoch simulations, which have 40 percent greater total signal-to-noise, the 50 epoch case performs substantially better than would be expected from the increase in signal-to-noise, again corresponding to a factor of $\sim2$ in photometric contrast. This suggests that confidence in the multi-epoch detection is much more dependent on the number of epochs combined than on either the per-epoch or total signal-to-noise. This is consistent with findings from \citet{Buzard_2020} that spreading the same total signal-to-noise over an increasing number of epochs results in a stronger detection. The improvement with increasing number of epochs suggests the optimal observing strategy for multi-epoch cross-correlation is to take a large number of relatively low signal-to-noise observations, similar to stellar radial velocity surveys. 
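The epoch-combined totals quoted above follow from adding per-epoch S/N in quadrature, $\rm S/N_{total} = S/N_{epoch}\sqrt{N_{epochs}}$; a quick check against the quoted values:

```python
from math import sqrt

# Quick check of the combined S/N values quoted in the text: S/N adds
# in quadrature across identical epochs.
def total_snr(snr_epoch, n_epochs):
    """Photon-limited S/N combined over n_epochs identical epochs."""
    return snr_epoch * sqrt(n_epochs)

for snr, n in [(2500, 10), (1500, 25), (1500, 50)]:
    print(n, round(total_snr(snr, n)))  # ~7900, 7500, and ~10600 as quoted
```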
While stellar optical spectra can be corrected to the required precision for RV measurements by dividing by the spectrum of an A0 telluric standard star, the extremely low planet flux measured in multi-epoch cross-correlation would require a prohibitive amount of time spent observing telluric standards in order to prevent the uncertainty in the standard measurement from dominating over the planet signal. Instead of telluric standards, multi-epoch cross-correlation observations since \citet{lockwood} have made use of line-by-line atmospheric models such as RFM \citep{DUDHIA2017243} or MOLECFIT \citep{Kausch2014} to fit and divide out a model atmosphere. Beginning with \citet{piskorz88133}, this is followed by a principal component analysis (PCA) to correct errors in the model line profile function, changes in molecular abundances over the observation, and instrument flexure \citep{piskorz88133, Piskorz2018}. The guided PCA requires a series of observations at varying airmass in order to identify residuals associated with tellurics, as these features should vary consistently with the airmass. Based on prior NIRSPEC observations, a time series covering at least $\sim30$ minutes per target is necessary for guided PCA to be effective. This in turn requires the use of the few-epoch, high per-epoch signal-to-noise observing strategy simulated in the 10 epoch case when observing with large telescopes, with $\rm S/N_{epoch} \sim 2500$. The results of the simulations presented here suggest that the development of alternative telluric correction procedures which do not impose a minimum integration time should be a high priority for future multi-epoch cross-correlation pipelines. We also note that more sensitive instruments may shorten the time series required for effective PCA, but will also further reduce the integration time required to achieve the desired $\rm S/N_{epoch}$ for the many-epoch observing strategy. 
\subsection{Photometric Contrast Limits}\label{sec:rlim} Table \ref{detstrengths} allows us to compare the impact of photometric contrast changes due to differences in equilibrium temperature with changes due to planet radius. Comparing otherwise-identical simulations with $\rm T_{eq}$ of 600 K and 1400 K, we find the likelihood ratio is only reduced by $\sim20$ percent in the 600 K case, despite a 94 percent reduction in $L$-band photometric contrast compared with the 1400 K case. At the same time, at fixed equilibrium temperature a $\sim20$ percent reduction in photometric contrast due to planet radius results in a $\sim20$ percent reduction in likelihood ratio, indicating that photometric contrast does significantly impact detection once the spectrum is held fixed. We note that these numbers assume cloud-free models for both temperatures, which is not necessarily appropriate for the 600 K case. The impact of clouds and hazes is discussed in more detail below. Additionally, we do not account for changes in the planet spectrum with varying radius. Changes in surface gravity and temperature--pressure profiles in smaller planets are likely to introduce changes in spectroscopic contrast in addition to the change in photometric contrast considered in these simulations. The difference between the radius-driven and temperature-driven cases emphasizes that multi-epoch cross-correlation is much more sensitive to spectroscopic contrast than to photometric contrast. On a per-photon basis, the $\rm T_{eq} = 600\ K$ planet models are significantly easier to detect via multi-epoch cross-correlation, due to the presence of deep CH$_4$ features at cooler temperatures. However, the presence of additional species in the spectra of cooler exoplanets increases the difficulty of accurately matching the correlation template to the observed spectrum, a challenge which is not present in the simulations.
This suggests that anticipating the detectability of exoplanet systems with cross-correlation spectroscopy requires an understanding of the target's thermochemical properties, in particular considering the potential presence and impact of clouds/hazes. Figures \ref{tgridall} and \ref{cogridall} indicate a detection can be made despite significant differences between the template and observed spectra, provided the template includes the species present in the observed spectrum. An initial detection with a poorly-matched model can be subsequently revised by varying the template to maximize detection strength. Section \ref{sec:rlim} found that the 25-epoch simulations detected nearly all simulated planets with $>5\sigma$ confidence using the Bayes' factor approach described in \citet{piskorz88133}. This suggests that using a many-epoch, low per-epoch signal-to-noise observing strategy, $L$-band multi-epoch cross-correlation using NIRSPEC is sensitive to planets with $\rm T_{eq} \geq 600$ K and $\rm R \geq 0.7\ R_J$ around a Sun-like host star. This corresponds to planets within $\sim0.2$ AU that are approximately Saturn-sized and larger, which should enable the detection and characterization of warm Jupiters. While we do not simulate the change in orbital period with semi-major axis, this does not affect the detection provided the sampling of the orbital phase is fixed. We caution that the exact limits are likely to be strongly dependent on the exact wavelength range observed, the planet chemical composition, the presence of clouds/hazes, and how well features arising from the stellar spectrum can be removed from the $\log L(\rm K_p)$ space. \subsection{Sensitivity to Temperature}\label{sec:teffsens} Figure \ref{tgridall} indicates that while mismatches between the true temperature and the correlation model negatively impact the detection strength, the best-fit value of $\rm K_p$ is independent of the model equilibrium temperature over an 800 K range.
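The conversion from a likelihood ratio to a significance in $\sigma$ is not reproduced here; as a rough illustration, a common approximation (via Wilks' theorem for a single extra model parameter, and not necessarily the exact Bayes' factor procedure of \citet{piskorz88133}) treats twice the log-likelihood ratio as $\chi^2_1$-distributed:

```python
import math

def sigma_from_likelihood_ratio(lam):
    """Approximate detection significance from a likelihood ratio Lambda,
    using Wilks' theorem with one extra model parameter (K_p):
    2 ln(Lambda) ~ chi^2_1, so sigma = sqrt(2 ln Lambda)."""
    return math.sqrt(2 * math.log(lam))

# Under this approximation, a 5-sigma detection requires ln(Lambda) >= 12.5:
print(sigma_from_likelihood_ratio(math.exp(12.5)))  # 5.0
```

This makes concrete why large likelihood ratios are needed: the significance grows only as the square root of the log-likelihood ratio.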
This is particularly relevant to planning observations for tidally-locked planets such as hot Jupiters, which are known to have day/night temperature differences of several hundred Kelvin \citep[e.g.][]{Knutson2007, Wong_2016, Komacek_2016}. Figure \ref{tgridall} indicates both dayside and nightside observations can be used effectively in the absence of longitudinally-varying clouds/hazes with a relatively minor impact on the detection strength, though the correlation model should use the lowest estimated temperature. Underestimating the planetary equilibrium temperature in the template spectrum consistently results in a better detection. While the weak dependence on mismatches between the observed and model temperature is useful observationally, it suggests $L$-band observations have a limited ability to constrain thermal properties of the system. While Figures \ref{tgridall} and \ref{tfitpeak} do show variations with equilibrium temperature, the values in Table \ref{teffcon} show significant systematic errors. Additionally, the temperature constraint in the $L$-band arises primarily from the relative strengths of H$_2$O and CH$_4$ features, which will also depend on metallicity and C/O ratio. Multi-dimensional model grids to assess these degeneracies are computationally impractical at present, and will require improvements in the cross-correlation routine. The $L$-band temperature sensitivity is therefore unlikely to be useful in practice beyond providing rough constraints on the day/night temperature contrast for $\rm T_{eq} \leq 1200$ K, when CH$_4$ features are present in the spectrum. However, other NIR spectral features with stronger temperature dependencies, such as CO bandheads in the $K$ and $M$-bands, may offer much better constraints on 3D thermal properties, as was demonstrated in \citet{beltz2020}. 
With a large number of epochs, variations in the best-fit temperature between epochs at different orbital phases could enable thermal mapping of non-transiting planets, similar to the phase-curve mapping technique in \citet{Knutson2007}. Future simulations will explore the wavelength-dependence of thermal constraints from multi-epoch cross-correlation and the impact of phase-dependent observed planet temperatures. We expect the temperature-dependence of the planetary spectrum will lead to constraints on planet properties varying strongly with equilibrium temperature, even for observations at fixed wavelength. \subsection{Sensitivity to C/O} Figure \ref{cogridall} indicates that the best-fit $\rm K_p$ is unaffected by a mismatched C/O ratio, while multi-epoch cross-correlation successfully recovers the true value for $\rm T_{eq} = 900$ K and C/O $\leq 0.8$, as shown in Figure \ref{copeaks} and Table \ref{coconstraints}. Higher metallicity offers marginally better performance at high C/O. The contrast with the weak C/O constraint obtained in \citet{Piskorz2018} is due to the presence of CH$_4$ in the $L$-band spectrum for $\rm T_{eq} \leq 1000$ K, which allows carbon and oxygen bearing species to be observed simultaneously. This again illustrates the dependence of multi-epoch cross-correlation detection and characterization on the wavelength range observed and the target's atmospheric composition. While $L$-band observations can provide useful C/O constraints for warm planets, hot Jupiters require observations in bands containing features of carbon-bearing species (e.g.\ CO) at high temperature. Useful C/O constraints are not obtained for simulated planets with $\rm T_{eq} \geq 1000$ K due to the lack of CH$_4$ lines at high temperatures, though for cooler planets differences in $\rm T_{eq}$ between the observed and template spectra do not significantly impact the C/O constraint beyond increasing the uncertainty to $\sim0.1$.
We note that $L$-band C/O constraints require an equilibrium chemistry assumption, as only H$_2$O and CH$_4$ features are observed. \citet{Wallack_2019} found that disequilibrium processes in cooler planets have a minor impact on molecular abundances compared with atmospheric metallicity and C/O ratio for planets with equilibrium temperatures around 1000 K, suggesting the equilibrium assumption is reasonable for the warm exoplanet population. In cases where disequilibrium processes are expected to play a significant role, a CO detection is likely to be required, and disequilibrium processes must be included in the model planet spectrum. Large-grasp instruments which can detect multiple carbon and oxygen-bearing species simultaneously would allow large-scale disequilibrium processes to be identified through cross-correlation without \emph{a priori} knowledge of the atmosphere. The ability to measure C/O from $L$-band observations has the potential to clarify the formation processes for warm Jupiters. \citet{Oberg_2011} found the C/O ratio of the gaseous and solid components of protoplanetary disks varies with distance. Gas beyond the H$_2$O snow line is carbon-enriched relative to the inner disk, while solids are carbon-depleted. If gas accretion determines the final C/O ratio, planets which form beyond the snow line and migrate inwards should have significantly superstellar C/O \citep[e.g.][]{Oberg_2011, oberg_2016, madhusudan_2017}. Other work has suggested solid accretion dominates the final atmospheric C/O \citep[e.g.][]{mordasini_2016, espinoza_2017}, resulting in substellar C/O values which drop further beyond the H$_2$O snow line. The relative importance of gas and solid accretion may also vary significantly with planet mass, leading to compositional differences between Jupiter and Neptune-size planets \citep{cridland_2019}.
The precision of the C/O constraints in Table \ref{coconinst} suggests cross-correlation spectroscopy can distinguish between migration and \emph{in-situ} formation scenarios, as well as between gas and solid-dominated accretion. The weak inclination dependence of cross-correlation techniques results in many more targets for which C/O measurements can be made compared with transit spectroscopy, increasing the potential sample size for studies of atmospheric C/O in intermediate-temperature ($\rm T_{eq}\sim900$ K) planets. \subsection{Applicability of Simulations to Observations} These simulations suggest multi-epoch cross-correlation is capable of significantly more than has been demonstrated observationally. In part, this is because these simulations are based on the capabilities of NIRSPEC2 \citep{martin2018}, which offers substantially improved wavelength coverage and resolution compared with the pre-upgrade instrument which was the basis for prior multi-epoch detections. As additional observations are taken with the upgraded instrument, future observational results should better match the simulations presented here. We also briefly note several factors that impact observations but were not included in the simulations. Stellar activity and atmospheric clouds/hazes are discussed in detail below. Figures \ref{tgridall} and \ref{cogridall} suggest minor inaccuracies in the planetary spectral template temperature or C/O ratio will have minimal impact on the ability to make a detection. While inaccuracies decrease the strength of the detection, the correct value of $\rm K_p$ is recovered for $\rm T_{eq}$ between 600 K and 1400 K and C/O between 0.1 and 1.1, regardless of the true values. Furthermore, we demonstrate that correlating with a grid of planet models can provide constraints on planet properties which are not known \emph{a priori}.
The one case where differences between observed and template spectra cause significant issues with detection is when simulated observations with $\rm T_{eq} < 1000$ K are correlated with $\rm T_{eq} \geq 1000$ K models. This appears to be due to the absence of CH$_4$ features in warmer models. Provided the species which dominate the observed spectrum are represented in the cross-correlation template, errors in the template appear to have a minimal and identifiable impact on the detection. Errors in line shape or position in the template will lead to a broader, lower-amplitude peak in the cross-correlation function, and may lead to shifts in the best-fit $\rm K_p$ \citep{brogi2019}. We note that errors in linelists represent a significant additional source of modelling uncertainty, particularly for CH$_4$. Using an updated linelist, \citet{gandhi_2020} was unable to reproduce the CH$_4$ detection reported in \citet{Guilluy_2019} for HD 102195 b, indicating the uncertainties in current linelists are large enough to impact both detection and C/O constraints with cross-correlation techniques. Future improvements in high-resolution linelists should reduce the impact of this model uncertainty. These simulations also lack any residuals from the telluric removal procedure, unlike observations. \citet{Buzard_2020} showed that such residuals are not necessary to reproduce major non-planetary features in the cross-correlation space, which arise from correlation between the stellar and planetary spectra. While the guided PCA approach should effectively remove all features present in a model telluric spectrum or which vary consistently over the observation, fixed differences between the observed line profiles and positions and the telluric model are likely to lead to residuals in observed spectra which are not included in this simulation framework, but which do contribute to off-peak structure in observations.
Developing simulations which accurately include telluric residuals is an ongoing challenge, as it requires quantifying the deviations between observed and model telluric spectra and how such differences vary with airmass and observing conditions. The near-perfect correction of off-peak structure achieved in these simulations therefore represents a best-case scenario for telluric removal, and we emphasize the importance of accurate telluric correction for successful multi-epoch cross-correlation. \subsubsection{Stellar Activity} Stellar activity and the associated spectrophotometric variations are likely to have a significant impact on the ability to detect and characterize planets with multi-epoch cross-correlation. The left columns of Figures \ref{tgridall} and \ref{cogridall} show that even a perfectly-modelled static stellar spectrum introduces significant non-random structure in the cross-correlation space. This structure can be entirely removed in simulations, as the stellar spectrum is perfectly characterized and is the only source of non-random structure included. In observational applications, intrinsic differences between the observed and model stellar spectra and spectrophotometric variability from stellar activity will lead to non-planetary cross-correlation features that cannot be easily removed by subtracting a set of planet-free simulated observations created from a single stellar spectral model. The impact of stellar variability can be reduced by taking high-cadence observations. Minimizing the time between observations also minimizes changes in the stellar spectrum, reducing the resulting variable structure in the cross-correlation function. Previous multi-epoch cross-correlation detections have used data taken over three to eight years \citep{piskorz88133, piskorzupsand, Buzard_2020}, covering a large fraction of the 11--year Solar magnetic cycle and longer than the magnetic cycles observed in other stars \citep[e.g.][]{donati_2008, morgenthaler_2011}.
Such long baselines are not necessary for warm/hot systems with periods of $<100$ days, but allow significant changes to occur in the stellar spectrum compared with observations taken over shorter periods. Stellar activity can also be addressed in the analysis pipeline. While currently the cross-correlation is performed using a single stellar template for all epochs, using different templates at each epoch could account for changes in the stellar spectrum caused by stellar activity. This would require an accurate estimate of the stellar activity at each epoch as well as high-resolution spectral models which incorporate varying activity levels. Such modeling would also benefit from high-cadence observations, which may allow a Gaussian process approach to fit spectral changes due to stellar activity, similar to the approach described in \citet{Rajpaul_2015} to reduce errors in RV observations arising from stellar activity. \subsubsection{Clouds and Hazes} All simulations used planetary spectral templates without clouds or hazes, which are likely to be present in observed systems with $\rm T_{eq} < 1000\ K$. While clouds are responsible for the flat transmission spectrum in GJ 1214 b \citep{kreidberg_2014}, cross-correlation techniques are sensitive to emission features, and are not necessarily impacted in the same way. However, 3D models of hot Jupiter atmospheres find clouds lead to muted high-resolution emission features \citep{harada_2019}, and models of super-Earth atmospheres find clouds may result in blackbody-like spectra with weak line features \citep{morley_2015}. These findings suggest the presence of significant cloud cover is likely to have a negative impact on the ability to make detections with high-resolution cross-correlation techniques compared with our simulations, similar to the impact on transiting planets. Hazy planets, in contrast, may be easier to detect in some cases.
Models of hazy super-Earths in \citet{morley_2015} found hazes can cause temperature inversions which produce stronger infrared emission features than similar planets without hazes, depending on irradiation, particle size, and haze coverage. Some exoplanets with featureless transmission spectra due to hazes may nevertheless be amenable to cross-correlation detection of the emission spectrum in the 1--5 $\mu$m spectral range. Determining the precise impact of clouds and hazes will require additional simulations with a range of cloud/haze properties and compositions. These simulations will also need to account for variations in cloud/haze coverage over the atmosphere and the viewing angle of the observer at each epoch, rather than using a single model planet spectrum for all epochs. We leave these simulations to future work. \subsection{Instrumental Factors} Table \ref{detsinst} and Figure \ref{instplot} indicate improvements to either spectral resolution or spectral grasp lead to significant improvements in detection strength. While the increased coverage offers a somewhat greater performance improvement compared with increased resolution for warmer equilibrium temperatures, improved spectral grasp offers substantially better performance than increased spectral resolution for the $\rm T_{eq} = 600$ K case. We believe the relative importance of resolution and grasp is likely to depend on both the observed wavelength and properties of the target planet, in particular the width of the observed planetary emission features. We leave a more detailed exploration of instrumental factors in detection strength, considering a broader portion of the near-infrared spectrum, to future work. Table \ref{coconinst} shows the atmospheric characterization from $L$-band features is much less dependent on instrument properties.
Improving the C/O constraint is likely to require the ability to simultaneously detect additional species beyond H$_2$O and CH$_4$, necessitating a significant increase in spectral grasp rather than additional improvements in spectral resolution. Several instruments have been proposed or are being developed to offer single-shot 1--5 $\mu$m coverage with spectral resolution greater than Keck-NIRSPEC, including GMTNIRS, CRIRES+, and IGNIS. In addition to improving the detectability of planets, large simultaneous wavelength coverage is likely to offer significant improvements in the ability to characterize planets by enabling the simultaneous detection of additional molecules. The large spectral ranges of these instruments will be especially valuable in cases of significant non-equilibrium chemistry, clouds or hazes, or \emph{a priori} uncertainty in the atmospheric composition. Finally, we note that the space-like NIRSPEC2 simulations in Table \ref{detsinst} result in a factor of $\sim$5 improvement in the likelihood ratio compared with the analogous 10-epoch simulations in Table \ref{detstrengths} which remove portions of the spectrum affected by telluric absorption. A space-based high-resolution spectrograph covering the 1--5 $\mu$m spectral range would offer significantly greater capability to detect and characterize exoplanets through cross-correlation spectroscopy compared with ground-based facilities. \subsection{Applications to Late-Type Stars} These simulations used a Sun-like star for the host stellar spectrum. For a given planet radius and equilibrium temperature, the relative brightness of the planet increases with decreasing stellar radius and temperature. From a purely photometric viewpoint, we would therefore expect to make stronger detections around later-type stars, as well as the ability to detect smaller and cooler planets.
However, the results from Section \ref{sec:rlim} indicate multi-epoch cross-correlation is much more sensitive to spectroscopic contrast than photometric. Increased stellar activity and stronger molecular lines in late-type stars may cause spectroscopic contrast to diverge significantly from photometric contrast, impacting planet detectability in ways that are difficult to predict without dedicated modeling. A rigorous assessment of cross-correlation techniques around K and M type primary stars is currently limited by the lack of sufficiently accurate high-resolution spectral models for late-type stars. The development of improved models for late-type stars would further expand the population accessible to characterization through multi-epoch cross-correlation with existing instrumentation. Of particular interest are planets falling in the radius valley identified by \citet{Fulton_2017} near $\rm 1.8\ R_\oplus$. While our simulations show such planets are not likely to be detectable with Keck-NIRSPEC around Sun-like stars, the increased relative brightness of exoplanets around M-dwarfs may enable detections, provided accurate stellar models are available and the atmospheric spectrum offers sufficient spectroscopic contrast. Cooler planets may pose additional challenges for telluric correction as the emission spectrum of the planet becomes more similar to the terrestrial spectrum. The ability to probe the atmospheric composition of these planets with multi-epoch cross-correlation could offer a new avenue to investigate the evolutionary processes affecting intermediate-mass, highly irradiated planets near the transition between rocky and gaseous compositions. \section{Conclusions}\label{sec:conc} Our simulations indicate that the multi-epoch cross-correlation approach can be used to detect and characterize a much larger population of non-transiting planets than has been previously studied.
In particular, we find: \begin{enumerate} \item Planets with $\rm R_{pl} \geq 0.7\ R_J$ and $\rm T_{eq} \geq 600$ K should be detectable around Sun-like stars with existing instruments in the $L$-band, provided the stellar contribution to the cross-correlation function can be effectively removed. Cooler planets are much more detectable than suggested by the photometric contrast. Detections are significantly improved through additional epochs, even if total signal-to-noise is held constant. \item $L$-band cross-correlation spectroscopy is weakly dependent on temperature. While day/night temperature differences have some negative impact on the detection strength using a single-temperature model, such differences are unlikely to prevent detection in the absence of clouds. Precise measurements of thermal properties will require observations at other bands, though $L$-band observations may be able to estimate day/night temperature contrast with future pipeline improvements. \item $L$-band observations can provide good constraints on the atmospheric C/O ratio for planets with $\rm T_{eq} \approx 900$ K. Such constraints require the simultaneous detection of carbon and oxygen bearing species. The lack of $L$-band CH$_4$ features in hot Jupiters requires observations at other wavelengths to make a C/O measurement for $\rm T_{eq} \geq 1000\ K$. \item Improvements in both spectral resolution and spectral grasp compared with NIRSPEC2 result in improved planet detection and atmospheric constraints. Future improvements in instrumentation will further expand the population of planets which can be detected and characterized with multi-epoch cross-correlation. The simultaneous detection of additional species should enable more robust constraints on atmospheric properties, particularly clouds/hazes and non-equilibrium chemistry. \end{enumerate} In these simulations, we used a portion of the $L$-band for which prior NIRSPEC2 observations were available. 
This allows the actual performance of the detector and losses to saturated tellurics to be replicated in simulation, increasing our confidence that the results presented here are obtainable in practice. As additional NIRSPEC2 observations are taken at different portions of the near-infrared, it will be possible to perform these simulations at other wavelengths, allowing us to explore how planet property constraints from multi-epoch cross-correlation depend on wavelength and anticipate the capabilities of future high-grasp instruments. \acknowledgments{ We thank the anonymous reviewer for their helpful suggestions to improve this paper. The simulations presented herein made use of data obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work was partially supported by funding from the NASA Exoplanet Research Program (grant NNX16AI14G, G.A. Blake P.I.). L.F. acknowledges the support of the Lynne Booth and Kent Kresa SURF fellowship. S.P. acknowledges funding from the Technologies for Exo-Planetary Science (TEPS) CREATE program. B.B. acknowledges financial support from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Fonds de Recherche Québécois-Nature et Technologie (FRQNT; Québec). } \facilities{Keck:II (NIRSPEC)} \software{astropy \citep{astropy2013, astropy2018}}
\section{Introduction} Solving systems of differential equations is in general a difficult task. Indeed, the results of \cite{DL84} imply that computing formal power series solutions to systems of algebraic partial differential equations with (complex) formal power series coefficients is algorithmically impossible. On the other hand, the rapidly-evolving area of {\it tropical} differential algebraic geometry, introduced in \cite{grigoriev2017tropical}, provides an algebraic framework for recording precise combinatorial information about the set of formal power series solutions to systems of differential equations. The foundation of this tropical theory is an analogue of the {\it tropical fundamental theorem} (itself a tropical analogue of the classical Nullstellensatz) adapted to the differential-algebraic context. A {\it tropical fundamental theorem} in its basic form refers to a correspondence between tropicalizations of geometric objects on one side and tropicalizations of algebraic objects on the other; and constructing such correspondences is the primary impetus for the application of tropical methods in commutative algebra. By {\it tropicalization} we mean schemes $(R,{\rm S},v)$ in the sense of \cite{GG,KM19}, in which $R$ is a commutative ring (possibly endowed with some extra structure), ${\rm S}$ is a commutative (additively idempotent) semiring, and $v:R\rightarrow {\rm S}$ is a map that satisfies the usual properties of a seminorm. We are typically interested in the set of algebraic solutions $\text{Sol}(\Sigma)$ of a system of algebraic equations $\Sigma\subset R^{\pr}$ for some $R$-algebra $R^{\pr}$. Whenever the {\it tropicalization} $v(\text{Sol}(\Sigma))$ of $\text{Sol}(\Sigma)$ coincides with the solution set $\text{Sol}(v(\Sigma))$ of the {\it tropicalization} $v(\Sigma)$ of the original system, we say that a tropical fundamental theorem holds in this context. 
In the differential-algebraic setting, the first tropical fundamental theorem proved was for systems of ordinary differential algebraic equations over rings of formal power series; this is the main result of \cite{AGT16}. In this case, the valued ring that underlies the tropicalization scheme is $R=K[\![t]\!]$, where $K$ is an uncountable, algebraically closed field of characteristic zero; the $R$-algebra $R^{\pr}$ is equal to the ring of ordinary differential polynomials $K[\![t]\!] \{x_1,\ldots,x_n\}$; and the $\text{Sol}(\Sigma)$ are power series solution sets to systems of ordinary differential equations $\Sigma$. This tropical fundamental theorem was subsequently generalized to systems of partial differential equations in \cite{FGH20}. There is a third characterization of tropical varieties in terms of initial ideals; see \cite[Thm 3.2.5]{MS}. In \cite[Lem 2.6]{HG19}, its analogue was stated for systems of ordinary differential equations; and in \cite[Thm 45]{FGH20+}, the latter result was generalized to the partial differential setting. This fundamental theorem gives three equivalent characterizations of the {\it tropical DA varieties}; here DA stands for {\it differential algebraic}. It is the most important result in the area to date, as it opens the door to applying tropical methods to a broad array of problems in differential algebraic geometry. In this note, we re-frame tropical differential algebraic geometry in general, and the tropical fundamental theorem in particular, within a theory of {\it non-classical} non-Archimedean seminorms, in which there is no total order on the target idempotent semiring. We single out a number of non-classical non-Archimedean seminorms that may be worthy of further study. 
Our work also represents an important first step in the systematic valuative study of support and tropicalization maps; and we discuss potential applications of the fundamental theorem to algebraic differential equations, in both the Archimedean and non-Archimedean settings. The main protagonists here are certain {\it vertex polynomials}, whose (finite) supports determine the Newton polyhedra of (infinite) power series, and the semiring $V\mb{B}[T]$ they determine. $V\mb{B}[T]$ is the target of our non-classical valuation on $K[\![T]\!]$, and it has many useful properties. Crucially, we show that it is {\it integral}, which enables us to extend our valuative theory from $K[\![T]\!]$ to its quotient field $K(\!(T)\!)$; to naturally distinguish a {\it ring of integers} $K[\![T]\!]\sub K(\!(T)\!)^{\circ} \sub K(\!(T)\!)$ analogous to the valuation ring of a classical valuation; and to formulate geometric translation maps that underlie a robust theory of {\it initial forms} associated to fixed choices of weight vectors. \subsection{Roadmap} The remainder of the paper following this introduction is organized as follows. Section~\ref{Section_DifEqs} is an introduction to differential algebraic geometry over (formal) power series semirings. The differential structures of semirings of differential polynomials are codified by natural $\mb{N}^m$-actions, where $m$ is the number of derivatives. These are slightly more elegant to formulate within the framework of {\it semirings of twisted polynomials} that we introduce in section~\ref{tp}. In section~\ref{Tropical}, we introduce tropical differential equations and their associated tropical power series {\it solutions}, which are ``corner loci" of vertex polynomials; see Definition~\ref{Def_trop_vanishing} for a formal statement. In Example~\ref{Ex_Solution} we illustrate explicitly how these two concepts work together.
In section~\ref{section_Idempotent_Semiring}, we study $V\mb{B}[T]$, which we construct as a quotient of boolean power series by a semiring congruence. Lemma~\ref{New_congruence} establishes that our construction exactly reproduces the vertex set semiring introduced in \cite{FGH20}. Proposition~\ref{prop_reformulation_solution} establishes that the set of tropical power series solutions to a single tropical differential polynomial is indeed its corner locus, provided we appropriately adapt the traditional definition of corner locus of tropical polynomials to the differential setting. In subsection~\ref{SubSection_Order}, we introduce a refinement of the order relation on vertex polynomials, based upon the concept of {\it relevancy}; see Definition~\ref{dfn_relevant}. This refinement allows us to recover some useful features of Krull valuations for this case. Subsection~\ref{alternative_characterizations}, on the other hand, is a more detailed study of vertex polynomials through the lens of (Newton) polyhedra. Propositions~\ref{Prop_HProperty}, \ref{finite_generation_A-tilde}, and \ref{char_of_prod} establish structural results for vertex sets, and their behavior under unions and Minkowski sums. We examine in subsection~\ref{Sub_Section_Relationship} how $V\mb{B}[T]$ relates to the semiring $\mathcal{P}_{\mathbb{Z}^m}$ of lattice polytopes in $\mathbb{Z}^m$ studied in \cite{KM19}. In section~\ref{Section_QVIF}, we introduce {\it non-Archimedean seminorms} for algebras defined over a field $K$ with targets in semirings, {\it support series} over the boolean semifield $\mb{B}$ for $K$-formal power series, and {\it support vectors} for solutions to systems of tropical algebraic differential equations. A key technical fact, established in Theorem~\ref{Longue_prop}, is that the support maps $\text{sp}: K[\![T]\!] \ra \mb{B}[\![T]\!]$ are norms that commute with the respective differentials of $K[\![T]\!]$ and $\mb{B}[\![T]\!]$. 
We further show in Theorem~\ref{VBT_theorem} that composing the support maps with the quotient projection $\mb{B}[\![T]\!] \ra V\mb{B}[T]$ yields {\it valuations} $\text{trop}: K[\![T]\!] \ra V\mb{B}[T]$; that is, the norms $\text{trop}$ are in fact multiplicative. It follows that our tropicalization scheme for differential algebraic geometry is a {\it differential enhancement} in the sense of J. Giansiracusa and S. Mereta of the classical tropicalization associated to the trop valuation (cf. \cite{GM}). In section \ref{Section_Initial_forms}, we extend the valuative theory from $K[\![T]\!]$ to $K[\![T]\!][x_{i,J}]$; and in Definition~\ref{initial_form}, we reconstruct the initial forms of \cite{FGH20+} in terms of these extensions and the relevancy relation introduced in section~\ref{section_Idempotent_Semiring}. In section~\ref{Section_TFT}, we introduce tropical DA varieties and we present our fundamental theorem \ref{EFT} of tropical differential algebraic geometry, which characterizes these varieties in three distinct ways. While Theorem~\ref{EFT} was previously proved in \cite{FGH20}, our approach is novel and emerges naturally from the theory developed in the preceding sections. Thinking of tropical DA varieties as the supports of power series solutions associated to differential ideals of $K[\![T]\!]\{x_1,\ldots,x_n\}$, it is likewise natural to ask for an analogue of Theorem~\ref{EFT} for differential ideals of $K(\!(T)\!)\{x_1,\ldots,x_n\}$. In section \ref{Section_Fromringtofield}, we extend the valuative theory of section \ref{Section_QVIF} from $K[\![T]\!]$ to $K(\!(T)\!)$. The key technical ingredient is Proposition~\ref{Prop_MC}, which establishes that the semiring $V\mb{B}[T]$ of vertex polynomials is integral, so canonically injects in its semifield of fractions $V\mb{B}(T)$. 
In Theorem~\ref{EEFT}, we partially extend the fundamental theorem to $K(\!(T)\!)$, by producing $K(\!(T)\!)$-analogues of two of the three characterizations of tropical DA varieties in Theorem \ref{EFT}; namely, as a set of weight vectors $w \in \mb{B}[\![T]\!]^n$ that define monomial-free initial differential ideals, and as a set of (coordinate-wise) supports of solutions associated to a given system of algebraic differential equations. It is worth noting that the proof presented here diverges from previous approaches, as it is based on generalized valuation theory and the relevancy relation. Theorem~\ref{prop_local_trop_basis}, meanwhile, generalizes a result of \cite{HG19final} that describes {\it tropical bases} of differential ideals with respect to prescribed support vectors when $m=1$ to the case of arbitrary $m$. Subsection~\ref{initial_degenerations} is devoted to {\it initial degenerations} defined by weight vectors $w \in \mb{B}[\![T]\!]^n$, which are the geometric counterparts of ideals of initial forms. The valuation $\text{trop}:K[\![T]\!] \ra V\mb{B}[T]$ introduced in section~\ref{Section_QVIF} extends naturally to a valuation $\text{trop}:K(\!(T)\!) \ra V\mb{B}(T)$, which in turn enables us to define its {\it ring of integers} $K(\!(T)\!)^{\circ} \sub K(\!(T)\!)$. For every weight vector $w \in \mb{B}[\![T]\!]^n$, we define a ``translation" by $w$ specified explicitly by \eqref{translation_map} that sends differential polynomials over $K(\!(T)\!)$ to differential polynomials over $K(\!(T)\!)^{\circ}$. Translating differential ideals by weight vectors, we obtain algebraic specialization maps that relate differential ideals to their initial forms, modulo a flatness hypothesis. In section~\ref{Section_CA} we discuss computational aspects of tropical DA varieties, and we highlight several outstanding problems of valuation-theoretic and polyhedral nature. 
In the final subsection~\ref{Example} we evaluate a particular tropical differential polynomial $P$, and we describe a general strategy for computing the solution set of {\it any} $P$. \subsection{Conventions} Outside of Section~\ref{tp}, every algebraic structure considered here will be commutative. We let $\mathbb{N}$ denote the semiring of natural numbers endowed with the usual operations. Given a nonzero natural number $m$, we let $\{e_1,\ldots,e_m\}$ denote the usual basis of the free $\mathbb{N}$-module $\mathbb{N}^m$. Given multi-indices $I=(i_1,\ldots,i_m)$ and $J=(j_1,\ldots,j_m) \in \mathbb{N}^m$, we set $\lVert I\rVert_\infty:=\text{max}_{}\{i_1,\ldots,i_m\}=\text{max}(I)$ and $J-I:=(j_1-i_1,\ldots,j_m-i_m)\in\mathbb{Z}^m$. Given any tuple $T=(t_1,\ldots,t_m)$ and any multi-index $I=(i_1,\ldots,i_m)\in \mathbb{N}^m$, we let $T^I$ denote the formal monomial $t_1^{i_1}\cdots t_m^{i_m}$. Finally, whenever $A$ is a subset of $\mathbb{N}^m$, we set $A-I:=\{J-I \::\:J\in A\}$. \section{Differential algebraic geometry over power series semirings} \label{Section_DifEqs} In this section, we develop the algebraic architecture that underlies the semiring approach to differential algebraic geometry. Recall that a semiring is an algebraic structure which satisfies all the axioms of rings, possibly with the exception of the existence of additive inverses, and throughout this section, ${\rm S}=({\rm S},+,\times,0,1)$ will denote a semiring with additive and multiplicative units 0 and 1, respectively. Semirings comprise a category, namely that of $\mathbb{N}$-algebras, that is a proper enlargement of the category of rings. The monograph \cite{JG} is a useful reference for the general theory of semirings. 
For tropical differential algebraic geometry, the central objects of study are differential polynomials with coefficients in a power series semiring, which generalize in a straightforward way the differential polynomials with coefficients in a differential field of characteristic zero studied in \cite{R50}. Algebraic differential equations in the traditional ring-theoretic sense are the zero loci of differential polynomials; unfortunately, as we shall see later, simply allowing all coefficients to belong to an arbitrary semiring ${\rm S}$ results in an unsatisfactory theory of tropical differential equations. \begin{dfn} {Fix an integer $m\geq1$. The semiring ${\rm S}[\![T]\!] =({\rm S}[\![T]\!] ,+,\times,0,1)$ of} formal power series {with coefficients in ${\rm S}$ in the variables $T=(t_1,\ldots,t_m)$ is the set of expressions $a=\sum_{I\in A}a_IT^I$ with $A\subseteq\mathbb{N}^m$ and $a_I\in S$, endowed with the usual operations of term-wise sum and convolution product.} \end{dfn} \noindent That is, given $a=\sum_{I\in A}a_IT^I$ and $b=\sum_{I\in B}b_IT^I$, we have $a+ b=\sum_{I\in A\cup B}(a_I+b_I)T^I$ and $a b=\sum_{I\in A+ B}(\sum_{J+K=I}a_Jb_K)T^I$. We let $\text{Supp}(a)$ denote the subset of the index set $A\subseteq\mathbb{N}^m$ in $a=\sum_{I\in A}a_I T^I$ whose associated coefficients $a_I$ are nonzero; this is the {\it support} of $a$. Given $A\subseteq\mathbb{N}^m$, we let $e_{\rm S}(A)$ denote the series $\sum_{I\in A}T^I$ in ${\rm S}[\![T]\!] $. Let $n$ and $m$ be nonzero natural numbers. We shall consider the polynomial ring ${\rm S}[\![T]\!][x_{i,J}\::\:i,J]$ where $i\in\{1,\ldots,n\}$ and $J\in \mathbb{N}^m$. The {\it order} of $x_{i,J}$ is $\lVert J\rVert_\infty$ and a monomial of order less than or equal to $r\in\mathbb{N}$ is any expression of the form \begin{equation} \label{differential_monomial} E:=\prod_{{1\leq i\leq n,\; ||J||_\infty\leq r}}x_{i,J}^{m_{i,J}}. 
\end{equation} We may specify the monomial \eqref{differential_monomial} using the array $M:=(m_{i,J}) \in \mathbb{N}^{n\times(r+1)^m}$, in which case we write $E=E_M$. With this notation, an element $P\in {\rm S}[\![T]\!][x_{i,J}\::\:i,J]$ is a finite sum $P=\sum_{i}a_{M_i}E_{M_i}$ of scalar multiples of differential monomials with nonzero coefficients $a_{M_i}\in {\rm S}[\![T]\!] $. \subsection{Differential semirings} Our next task is to equip these semirings with a differential structure. \begin{dfn} {A map $d:{\rm S}\rightarrow {\rm S}$ is a} \emph{derivation} {if it is linear and satisfies the Leibniz rule with respect to products of elements in ${\rm S}$. A} differential semiring {is a pair $({\rm S},D)$ consisting of a semiring ${\rm S}$ together with a finite collection $D$ of derivations on ${\rm S}$ that commute pairwise}. \end{dfn} The usual definition of partial derivations in a ring of formal power series with ring coefficients extends to the semiring coefficient case. Namely, given $a\in {\rm S}\setminus\{0\}$, $I=(i_1,\ldots,i_m)\in \mathbb{N} ^m$ and $j=1,\ldots,m$, we set \begin{equation}\label{rule_deriv} \tfrac{\partial}{\partial t_j}(aT^I):= \begin{cases} i_jaT^{I-e_j}&\text{if }i_j\neq0; \text{ and}\\ 0&\text{otherwise}. \end{cases} \end{equation} That \eqref{rule_deriv} is well-defined follows from the way in which $\mathbb{N}$ acts on ${\rm S}$. Moreover, we have $\tfrac{\partial}{\partial t_j}(aT^IbT^J)=aT^I\tfrac{\partial}{\partial t_j}(bT^J)+bT^J\tfrac{\partial}{\partial t_j}(aT^I)$; by linearity it follows that $\tfrac{\partial}{\partial t_j}(\sum_{I}a_IT^I)=\sum_{I}\tfrac{\partial}{\partial t_j}(a_IT^I)$ defines a derivation on ${\rm S}[\![T]\!]$. We now extend $D=\{\tfrac{\partial}{\partial t_1},\ldots,\tfrac{\partial}{\partial t_m}\}$ from ${\rm S}[\![T]\!] $ to ${\rm S}[\![T]\!][x_{i,J}:i,J]$ by setting $\tfrac{\partial}{\partial t_i} x_{k,J}= x_{k,J+e_i}$. Hereafter we use either ${\rm S}[\![T]\!]
\{x_1,\ldots,x_n\}$ or ${\rm S}_{m,n}$ as a shorthand for $({\rm S}[\![T]\!][x_{i,J}:i,J],D)$. We further set ${\rm S}_{0,0}:=({\rm S},D=\{0\})$, ${\rm S}_{m,0}:={\rm S}_m=({\rm S}[\![T]\!] ,D)$, and ${\rm S}_{0,n}:=({\rm S}\{x_1,\ldots,x_n\},D)$. The inclusion ${\rm S}_{a,b}\subset {\rm S}_{c,d}$ is an extension of differential semirings whenever $0\leq a\leq c$ and $0\leq b\leq d$. The following map is the key to interpreting elements of ${\rm S}_{m,n}$ as differential operators with coefficients in ${\rm S}$. \begin{dfn} {Given a differential semiring $({\rm S},D=\{d_1,\ldots,d_m\})$, let $\Theta_{({\rm S},D)}:\mathbb{N}^m\rightarrow {\rm End}({\rm S})$ denote the $\mathbb{N}$-module action in which $e_i$ acts as $d_i$. That is, the endomorphism of ${\rm S}$ corresponding to $(j_1,\ldots,j_m)=J \in \mathbb{N}^m$ is $\Theta_{({\rm S},D)}(J):= d_1^{j_1}\cdots d_m^{j_m}$. Whenever there is no risk of confusion, we will write $\Theta(J)$ in place of $\Theta_{({\rm S},D)}(J)$.} \end{dfn} Each differential monomial $E$ as in \eqref{differential_monomial} singles out an evaluation map $E: {\rm S}_m ^n\rightarrow {\rm S}_m $ that sends $a=(a_1,\ldots,a_n)\in {\rm S}_m ^n$ to \begin{equation}\label{evaluation_map} E(a):=\prod_{i,J}(\Theta_{{\rm S}_m}(J)a_i)^{m_{i,J}}=\prod_{i,J}\bigl(\tfrac{\partial^{j_1+\cdots+j_m}(a_i)}{\partial t_1^{j_1}\cdots \partial t_m^{j_m}}\bigr)^{m_{i,J}}. \end{equation} The assignment \eqref{evaluation_map} extends by linearity to yield an evaluation map $P: {\rm S}_m ^n\rightarrow {\rm S}_m $ for every $P \in {\rm S}_{m,n}$; and deciding when $a\in {\rm S}_m ^n$ should be deemed a {\it solution} of $P$ will depend on the value $P(a)$. We will see later that the precise definition of solution we adopt will depend on the type of base semiring ${\rm S}$ under consideration. \begin{rem} \label{commutation} {Let $\Theta_{{\rm S}_{m,n}}:\mathbb{N}^m\rightarrow {\rm End}({\rm S}_{m,n})$ denote the $\mathbb{N}$-module action of $\mathbb{N}^m$ on ${\rm S}_{m,n}$.
Because ${\rm S}_{m,0}\subset {\rm S}_{m,n}$ is an extension of differential semirings, it follows from \eqref{evaluation_map} that differentiation of differential polynomials commutes with evaluation, i.e., we have $(\Theta_{{\rm S}_{m,n}}(J) P)(a)=\Theta_{{\rm S}_m}(J) (P(a))$ for every $P\in {\rm S}_{m,n}$, $a\in {\rm S}_m ^n$ and $J \in \mathbb{N}^m$. } \end{rem} \subsection{Intermezzo: the semiring of twisted polynomials} \label{tp} In this subsection, we explain how to adapt differential polynomials to the semiring-theoretic context using differential modules over differential semirings. The resulting formalism is of a piece with contemporary presentations of differential algebra; see, e.g., \cite{K10}. To begin, recall from \cite[Ch.14]{JG} that modules over a (not-necessarily commutative) semiring ${\rm S}$ are pairs $({\rm M},\cdot)$ consisting of a commutative monoid ${\rm M}=({\rm M},+,0_M)$ together with a scalar multiplication $\cdot:{\rm S}\times {\rm M}\rightarrow {\rm M}$ that satisfies the usual axioms for modules over rings, along with the additional requirement that $s\cdot 0_{\rm M}=0_{\rm M}=0_R\cdot m$ for every $s \in {\rm S}$ and $m \in {\rm M}$. Similarly, the definition of differential modules over a differential semiring is an adaptation of the usual one for differential rings. We will focus on the case of ${\rm S}_m$. \begin{dfn} {A }\emph{differential module} {over ${\rm S}_m$ is a pair $({\rm M},D_{\rm M})$ in which ${\rm M}$ is a module over ${\rm S}[\![T]\!] $ equipped with additive maps $D_{\rm M}=\{\delta_1,\ldots,\delta_m\}$ for which }$$\delta_i(a\cdot m)=a\cdot\delta_i(m)+\tfrac{\partial a}{\partial t_i}\cdot m \quad \text{ for every } a\in {\rm S}[\![T]\!] ,\:m\in {\rm M}.$$ \end{dfn} \begin{dfn} {The }\emph{semiring of twisted polynomials} {${\rm S}[\![T]\!] \{d_1,\ldots,d_m\}$ is the additive semigroup $\bigoplus_{I\in\mathbb{N}^m}{\rm S}[\![T]\!] 
\cdot D^I$, where $D^I=d_1^{i_1}\cdots d_m^{i_m}$, and in which we impose the following rules for the product: \begin{enumerate} \item $d_ia=ad_i+\tfrac{\partial a}{\partial t_i}$ for every $a\in {\rm S}[\![T]\!] $; and \item $d_id_j=d_jd_i$ for every $i,j=1,\ldots,m$. \end{enumerate} } \end{dfn} Via the semiring of twisted polynomials ${\rm S}[\![T]\!] \{d_1,\ldots,d_m\}$, we may identify the category of differential modules over ${\rm S}_m$ with the category of left ${\rm S}[\![T]\!] \{d_1,\ldots,d_m\}$-modules. Indeed, we have an action of ${\rm S}[\![T]\!] \{d_1,\ldots,d_m\}$ on $({\rm M},D_{\rm M})$ for which $d_i$ acts as $\delta_i$; so ${\rm S}[\![T]\!] \{d_1,\ldots,d_m\}$ acts on ${\rm S}_{m,n}$ and its differential sub-semiring ${\rm S}_m$ with $d_i$ acting as $\tfrac{\partial }{\partial t_i}$. In this way, we recover the actions $\Theta_{{\rm S}_{m,n}}$ and $\Theta_{{\rm S}_{m}}$ described before. \section{Tropical differential equations and their solutions} \label{Tropical} We now introduce tropical differential polynomials and the formal power series solutions to the tropical differential equations they define. In practice, this means applying the constructions of section~\ref{Section_DifEqs} to the case in which ${\rm S}$ is the boolean semiring $\mathbb{B}:=(\{0,1\},+,\times)$. Here $\times$ denotes the usual product, while $a+b=1$ whenever $a$ or $b$ is nonzero. A semiring ${\rm S}$ is {\it additively idempotent} if $a+a=a$ for all $a\in {\rm S}$, and the category of idempotent semirings is precisely the category of $\mathbb{B}$-algebras. Thus the semiring structure map $\mathbb{N}\rightarrow {\rm S}$ factors through the idempotent semiring structure map $\mathbb{B}\rightarrow {\rm S}$ via the (unique) homomorphism $\mathbb{N}\rightarrow \mathbb{B}$. Hereafter, any reference to an idempotent semiring means one that is additively idempotent. 
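For readers who prefer executable arithmetic, the boolean semiring and the factorization of the structure map $\mathbb{N}\to{\rm S}$ through $\mathbb{B}$ can be sketched in a few lines of Python; the helper names here are ours, purely for illustration.

```python
# A minimal sketch of the boolean semiring B = ({0,1}, +, x); the helper
# names (b_add, b_mul, n_to_b) are illustrative, not from any reference.

def b_add(a, b):
    # addition: a + b = 1 whenever a or b is nonzero
    return 1 if (a or b) else 0

def b_mul(a, b):
    # multiplication: the usual product on {0, 1}
    return a * b

def n_to_b(n):
    # the unique semiring homomorphism N -> B
    return 0 if n == 0 else 1

# additive idempotency: a + a = a for every a in B
assert all(b_add(a, a) == a for a in (0, 1))
# n_to_b respects sums, so the structure map N -> S factors through B
assert all(n_to_b(p + q) == b_add(n_to_b(p), n_to_b(q))
           for p in range(4) for q in range(4))
```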
Most of the following concepts were introduced in \cite{FGH20} using subsets of $\mathbb{N}^m$; here we reframe them using the language of formal power series with coefficients in $\mathbb{B}$ in order to maximize their proximity to their classical counterparts; see Remark~\ref{rem_difference}. We refer the curious reader to Example~\ref{Ex_Solution}, or to the more involved worked example in section~\ref{Example}, to see how these concepts apply in practice. The semiring $\mathbb{B}[\![T]\!]$ consists of the set $\mathbb{B}[\![T]\!] =\bigl\{a=\sum_{I\in A}T^I\::\:\emptyset\subseteq A\subseteq\mathbb{N}^m\bigr\}$ endowed with the operations $a+ b=\sum_{I\in A\cup B}T^I$ and $a b=\sum_{I\in A+ B}T^I$ for $a=\sum_{I\in A}T^I$ and $b=\sum_{I\in B}T^I$. Recall that $\mathbb{B}_m=(\mathbb{B}[\![T]\!] ,D)$, where $D=\{\tfrac{\partial}{\partial t_1},\ldots,\tfrac{\partial}{\partial t_m}\}$; in particular, for $J \in \mathbb{N}^m$, the map $\Theta_{\mathbb{B}_m}(J): {\mathbb{B}}[\![T]\!] \to {\mathbb{B}}[\![T]\!] $ sends $a=\sum_{I\in A}T^I$ to the series $\Theta_{\mathbb{B}_m}(J)a=\sum_{I\in (A-J)_{\geq0}}T^I$, where \begin{equation} \label{differential_sets} (A-J)_{\geq0}:=\left\{ (i_1,\ldots,i_m)\in A-J \:|\:i_1,\ldots,i_m\geq0\right\}. \end{equation} Indeed, this follows from the fact that the action of $\mathbb{N}$ used to define \eqref{rule_deriv} factors through the structure map $\mathbb{B}\rightarrow \mathbb{B}[\![T]\!]$. \begin{rem} \label{rem_difference} {Let $\mathcal{P}(\mathbb{N}^m)$ denote the semiring whose elements are subsets of $\mathbb{N}^m$, in which the sum $A+B:=A\cup B$ is the union of sets and the product $AB:=\{I+J\::\:I\in A, J\in B\}$ is the Minkowski product. We use \eqref{differential_sets} to equip $\mathcal{P}(\mathbb{N}^m)$ with an $\mathbb{N}^m$-action that turns it into a differential semiring; in so doing, the support map $\text{Supp}:(\mathbb{B}[\![T]\!]
,D)\xrightarrow[]{}(\mathcal{P}(\mathbb{N}^m),D)$ becomes an isomorphism of differential semirings with inverse $A\mapsto e_{\mathbb{B}}(A)$. It is worth noting here that $a=\sum_{I\in A}T^I\in \mathbb{B}[\![T]\!] $ and $A\in \mathcal{P}(\mathbb{N}^m)$ are distinct representations of the same object. However, we will see that the representation given by the semiring of supports $\mathcal{P}(\mathbb{N}^m)$ is more suitable for computations; see, e.g., the concrete description of the expression \eqref{differential_sets} in \eqref{concrete_description_sup_der}.} \end{rem} Our aim is now to use the elements of the differential semiring $\mathbb{B}_{m,n}$ to define tropical differential equations. This requires a bit of care, however, inasmuch as directly copying the definition operative in the ring-theoretic setting produces inadequate results. Indeed, given $P=\sum_ia_{M_i}E_{M_i}$ in $\mathbb{B}_{m,n}$, the only elements $a\in \mathbb{B}_m ^n$ whose $P$-evaluations \eqref{evaluation_map} satisfy $P(a)=0$ are such that $E_{M_i}(a)=0$ for every $i$; see section~\ref{Example}. So we will have to work harder in order to produce a useful definition of solution. For this purpose, we will make use of the following basic concepts from convex geometry. A {\it polyhedron} $P$ is the intersection of finitely many affine halfspaces in $\mathbb{R}^m$, and it is a {\it polytope} whenever it is bounded. A (proper) {\it face} of the polyhedron $P$ is the intersection of $P$ with a hyperplane $H$ that contains $P$ in one of its two half-spaces. We let $\mathcal{V}(P)$ denote the set of zero-dimensional faces, or {\it vertices} of $P$; and we let $\text{Conv}(A)$ denote the convex hull of a set $A\subset \mathbb{N}^m$. A polyhedron $P$ is {\it integral} whenever $P=\text{Conv}(P\cap\mathbb{Z}^m)$. 
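As Remark~\ref{rem_difference} suggests, the support representation is well suited to machine computation. The following Python sketch (with assumed helper names) implements the sum, the Minkowski product, and the derivative action \eqref{differential_sets} on finite supports, using the series $a=t^2+tu+u^3$ that reappears in Example~\ref{Ex_Solution}.

```python
# Boolean power series modeled by their supports A in N^m (sets of tuples):
# sum is union, product is the Minkowski sum, and Theta(J) acts by (A - J)_{>=0}.
# The helper names (s_add, s_mul, theta) are illustrative.

def s_add(A, B):
    return A | B

def s_mul(A, B):
    return {tuple(i + j for i, j in zip(I, J)) for I in A for J in B}

def theta(J, A):
    """Derivative action: shift every exponent by -J, keep the nonnegative ones."""
    shifted = (tuple(i - j for i, j in zip(I, J)) for I in A)
    return {I for I in shifted if min(I) >= 0}

# a = t^2 + t*u + u^3 in B[[t,u]]
a = {(2, 0), (1, 1), (0, 3)}
assert theta((1, 0), a) == {(1, 0), (0, 1)}       # da/dt = t + u
assert theta((0, 1), a) == {(1, 0), (0, 2)}       # da/du = t + u^2
assert s_mul({(1, 0)}, theta((1, 0), a)) == {(2, 0), (1, 1)}   # t * (da/dt)
```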
\begin{dfn} \label{Def_New_poly} {Given $A\subset \mathbb{N}^m$, its} Newton polyhedron {$\text{New}(A)$ is the convex hull of the set $\{I+J:I\in A,J\in\mathbb{N}^m\}\subseteq\mathbb{R}_{\geq0}^m$. We say that $A^{\pr}\subseteq A$ is a} spanning set {whenever $\text{New}(A^{\pr})=\text{New}(A)$}. \end{dfn} \begin{thm}[See \cite{FGH20}]\label{vertex_poly} {Every $A\subset \mathbb{N}^m$ has $\text{Vert}(A):=\mathcal{V}(\text{New}(A))$ as its (unique) minimal spanning set. In particular, its minimal spanning set is finite}. \end{thm} Given $a=\sum_{I\in A}T^I$ in $\mathbb{B}[\![T]\!] $ we let $V(a):=e_{\mathbb{B}}(\text{Vert}(A))$ denote the {\it vertex polynomial} of $a$. We now have the ingredients necessary to construct solutions of tropical differential polynomials. The following definition differs slightly from the original in \cite{FGH20}, but it is equivalent and has the advantage of being less technical. \begin{dfn} \label{Def_trop_vanishing} {An element $a\in \mathbb{B}_m ^n$ is a} solution {of $\sum_{i}a_{M_i}E_{M_i}=P\in \mathbb{B}_{m,n}$ if for every monomial $T^{I}$ of $V(P(a))$, there are at least two distinct terms $a_{M_k}E_{M_k}$ and $a_{M_\ell}E_{M_\ell}$ of $P$ for which $T^{I}$ appears in both $V(a_{M_k}E_{M_k}(a))$ and $V(a_{M_\ell}E_{M_\ell}(a))$}. \end{dfn} It is worth emphasizing that to verify whether an $n$-tuple $a\in \mathbb{B}_m ^n$ of (a priori infinite) boolean power series satisfies the tropical differential equation determined by a differential polynomial, only a {\it finite} number of conditions need to be checked. We let $\text{Sol}(P)\subset\mathbb{B}_m ^n$ denote the set of solutions of $P$; more generally, given any collection $\Sigma\subset \mathbb{B}_{m,n}$ of tropical differential polynomials, we let $\text{Sol}(\Sigma)=\bigcap_{P\in \Sigma}\text{Sol}(P)$ denote the set of solutions common to all $P\in \Sigma$. 
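For $m=2$, the vertex set of Theorem~\ref{vertex_poly} is easy to compute: only Pareto-minimal points of $A$ can be vertices of $\text{New}(A)$, and among these the vertices are exactly the points on the lower convex hull. The sketch below uses a standard monotone-chain scan; it is an illustration under these assumptions, not the procedure of the cited references.

```python
# Vertex set Vert(A) of the Newton polyhedron New(A) = Conv(A) + R_{>=0}^2,
# for a finite subset A of N^2.  Illustrative helper names.

def pareto_min(A):
    """Points of A not componentwise dominated by another point of A."""
    return {I for I in A
            if not any(J != I and all(j <= i for j, i in zip(J, I)) for J in A)}

def vert2(A):
    pts = sorted(pareto_min(A))        # x increasing, hence y strictly decreasing
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point unless the chain turns strictly left
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0:
                break
            hull.pop()
        hull.append(p)
    return set(hull)

assert vert2({(2, 0), (1, 1), (0, 3)}) == {(2, 0), (1, 1), (0, 3)}
assert vert2({(2, 0), (1, 2), (0, 3)}) == {(2, 0), (0, 3)}   # (1,2) lies above the hull
assert vert2({(2, 0), (2, 1), (0, 3)}) == {(2, 0), (0, 3)}   # (2,1) is dominated
```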
A natural question that arises is how the tropical vanishing condition introduced in Definition \ref{Def_trop_vanishing} compares with the usual one operative in tropical geometry, in which $a\in \mathbb{B}_m ^n$ is a solution of $\sum_{i}a_{M_i}E_{M_i}=P\in \mathbb{B}_{m,n}$ provided there are at least two distinct terms $a_{M_k}E_{M_k}$ and $a_{M_\ell}E_{M_\ell}$ of $P$ for which $V(P(a))=V(a_{M_k}E_{M_k}(a))=V(a_{M_\ell}E_{M_\ell}(a))$. In the ordinary differential case, the vertex polynomial $V(\sum_{i\in A}t^i)$ is simply the monomial $t^{\min(A)}$, and our definition of tropical vanishing agrees with the classical one. However, when $m>1$, classical solutions are also solutions in our sense, while the converse fails to hold in general, as in the following example. \begin{figure}[!htb] \centering \includegraphics[scale=0.75]{example_solution.pdf} \vspace{-.3cm} \caption{(a) $V(a_{M_i}E_{M_i}(a))$ for $i=1,2,3$; and (b) $V(P(a))$.} \label{Figura_soluciones} \end{figure} \begin{ex} \label{Ex_Solution} { Set $P=tx_{(1,0)}+ux_{(0,1)}+(t^2+u^3) \in \mathbb{B}[[t,u]]\{x\}$, and $a=t^2+tu+u^3\in \mathbb{B}[[t,u]]$. Then $P(a)=t(t+u)+u(t+u^2)+(t^2+u^3)=t^2+tu+u^3$, and from Figure~\ref{Figura_soluciones}b we see that $V(P(a))=P(a)$.} { For every monomial $a_ME_M$ in the expansion of $P$, we now compute $a_ME_M(a)$ and $V(a_ME_M(a))$; here both series are equal in every instance, as is clear from Figure \ref{Figura_soluciones}a. We also see that $a$ is a solution of $P$ by directly checking against Definition \ref{Def_trop_vanishing}; however, $a$ is not a solution in the classical sense, since there is no single term $a_ME_M$ for which $V(a_ME_M(a))=V(P(a))$. } \end{ex} Vertex polynomials are the key to this theory; we analyze them closely in the next section.
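The computations of Example~\ref{Ex_Solution} can be replayed mechanically on supports. The following self-contained Python sketch (helper names are ours) checks that $a=t^2+tu+u^3$ satisfies Definition~\ref{Def_trop_vanishing} for $P=tx_{(1,0)}+ux_{(0,1)}+(t^2+u^3)$, while failing the classical vanishing condition.

```python
# Supports in N^2 model the boolean series of Example Ex_Solution.

def theta(J, A):
    S = (tuple(i - j for i, j in zip(I, J)) for I in A)
    return {I for I in S if min(I) >= 0}

def s_mul(A, B):
    return {tuple(i + j for i, j in zip(I, J)) for I in A for J in B}

def vert2(A):
    """Vertex set of New(A): Pareto-minimal points, then a lower-hull scan."""
    pts = sorted({I for I in A
                  if not any(J != I and all(j <= i for j, i in zip(J, I))
                             for J in A)})
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0:
                break
            hull.pop()
        hull.append(p)
    return set(hull)

a = {(2, 0), (1, 1), (0, 3)}                  # t^2 + t u + u^3
terms = [s_mul({(1, 0)}, theta((1, 0), a)),   # t * (da/dt)   = t^2 + t u
         s_mul({(0, 1)}, theta((0, 1), a)),   # u * (da/du)   = t u + u^3
         {(2, 0), (0, 3)}]                    # constant term t^2 + u^3
Pa = set().union(*terms)
V_terms = [vert2(t) for t in terms]
# each monomial of V(P(a)) occurs in at least two of the V(a_M E_M(a))
assert all(sum(I in Vt for Vt in V_terms) >= 2 for I in vert2(Pa))
# yet a is not a classical solution: no single term has V equal to V(P(a))
assert not any(Vt == vert2(Pa) for Vt in V_terms)
```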
\section{The semiring of vertex polynomials} \label{section_Idempotent_Semiring} In section~\ref{Tropical}, the concept of vertex polynomial served primarily to define solutions of tropical differential equations; however, they fit into a broader algebraic framework. In this section, we will start by giving a more algebraic presentation of the semiring of vertex polynomials. To this end, recall that a {\it congruence} on a semiring ${\rm S}$ is an equivalence relation $\sim$ on ${\rm S}$ for which $a+c\sim b+c$ and $ac\sim bc$ for every $c\in {\rm S}$ whenever $a\sim b$. The semiring structure on ${\rm S}$ then descends to the quotient $\faktor{{\rm S}}{\sim}$, and the quotient projection $\pi:{\rm S}\to \faktor{{\rm S}}{\sim}$ becomes a homomorphism of semirings. \begin{dfn} \label{Def_VS} The \emph{semiring of vertex polynomials} is the quotient $\faktor{\mathbb{B}[\![T]\!] }{\text{New}}$, where $\text{New}\subset \mathbb{B}[\![T]\!] \times \mathbb{B}[\![T]\!] $ denotes the semiring congruence comprised of pairs of boolean power series with equal Newton polyhedra. \end{dfn} As mentioned in the previous section, vertex polynomials were introduced in an equivalent form in \cite{FGH20} using subsets of $\mathbb{N}^m$; there, the map $\text{Vert}:\mathcal{P}(\mathbb{N}^m)\xrightarrow[]{}\mathcal{P}(\mathbb{N}^m)$ was shown to satisfy $\text{Vert}^2=\text{Id}$; and the semiring of vertex sets $\mathbb{T}_m=(\mathbb{T}_m,\oplus,\odot)$ was defined to be the image of $\mathrm{Vert}$, equipped with the operations $A\oplus B=\mathrm{Vert}(A\cup B)$ and $\ A\odot B=\mathrm{Vert}(AB)$. We will show below that $\mathrm{Vert}:\mathcal{P}(\mathbb{N}^m)\rightarrow \mathbb{T}_m$ is precisely the quotient projection $\pi:\mathbb{B}[\![T]\!] \rightarrow \faktor{\mathbb{B}[\![T]\!] }{\text{New}}$. This provides a more intrinsic explanation for the facts that $\mathbb{T}_m$ is a semiring, and that $\mathrm{Vert}$ is a semiring homomorphism. 
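The homomorphism property can also be sanity-checked numerically for $m=2$: applying $\mathrm{Vert}$ before or after the union (resp., the Minkowski sum) gives the same vertex set. The following self-contained Python sketch (illustrative names, arbitrary test data) does exactly that.

```python
# Sanity check (illustration only) that Vert respects the operations:
# Vert(A ∪ B) and Vert(AB) depend only on Vert(A) and Vert(B).  Here m = 2.

def vert2(A):
    """Vertex set of New(A): Pareto-minimal points, then a lower-hull scan."""
    pts = sorted({I for I in A
                  if not any(J != I and all(j <= i for j, i in zip(J, I))
                             for J in A)})
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0:
                break
            hull.pop()
        hull.append(p)
    return set(hull)

def minkowski(A, B):
    return {(i1 + j1, i2 + j2) for (i1, i2) in A for (j1, j2) in B}

A = {(2, 0), (1, 2), (0, 3), (3, 3)}
B = {(1, 1), (0, 2), (2, 2)}
assert vert2(A | B) == vert2(vert2(A) | vert2(B))
assert vert2(minkowski(A, B)) == vert2(minkowski(vert2(A), vert2(B)))
```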
\begin{lemma}\label{New_congruence}The relation $\text{New}$ from Definition \ref{Def_VS} is a semiring congruence, and the map $\text{Vert}:\mathcal{P}(\mathbb{N}^m)\xrightarrow[]{}\mathbb{T}_m$ is the quotient projection $\pi:\mathbb{B}[\![T]\!] \rightarrow \faktor{\mathbb{B}[\![T]\!] }{\text{New}}$. \end{lemma} \begin{proof} We use the isomorphism $\text{Supp}:\mathbb{B}[\![T]\!] \xrightarrow[]{}\mathcal{P}(\mathbb{N}^m)$ from Remark \ref{rem_difference}. Clearly $\text{New}$ is an equivalence relation. That $\text{New}$ is compatible with sum and product on $\mathbb{B}[\![T]\!] $ follows from the facts that $\text{New}(a+b)=\text{Conv}(\text{New}(a)\cup\text{New}(b))$ and $\text{New}(ab)=\text{New}(a)\text{New}(b)$. On the other hand, from \cite[Lem. 3.1]{FGH20} we have $\text{New}(a)=\text{New}(b)$ if and only if $\mathrm{Vert}(A)=\mathrm{Vert}(B)$, which means that $e_{\mathbb{B}}:\mathbb{T}_m\to \faktor{\mathbb{B}[\![T]\!] }{\text{New}}$ is a bijection of sets. Finally, the binary operations $\oplus, \odot$ on $\mathbb{T}_m$ satisfy $A\oplus B:=\mathrm{Vert}(A\cup B)=[e_{\mathbb{B}}(A)+e_{\mathbb{B}}(B)]$ and $A\odot B:=\mathrm{Vert}(A B)=[e_{\mathbb{B}}(A)e_{\mathbb{B}}(B)]$, which are precisely those of $\faktor{\mathbb{B}[\![T]\!] }{\text{New}}$. \end{proof} Hereafter $V\mathbb{B}[T]$ will denote the set of vertex polynomials, $i:V\mathbb{B}[T]\rightarrow \mathbb{B}[\![T]\!] $ the natural inclusion of sets induced by viewing polynomials as series, and $V:\mathbb{B}[\![T]\!] \rightarrow V\mathbb{B}[T]$ the quotient map. The operations on $V\mathbb{B}[T]$ are defined by $a\oplus b=V(i(a)+i(b))$ and $a\odot b=V(i(a)i(b))$. In particular, the support map $\text{Supp}:V\mathbb{B}[T]\xrightarrow[]{}\mathbb{T}_m$ is a semiring isomorphism. Our next goal is to recast Definition~\ref{Def_trop_vanishing} in terms of corner loci of sums in $V\mathbb{B}[T]$. 
Given any sum $s=a_1\oplus\cdots\oplus a_k$ in $V\mathbb{B}[T]$ involving $k \geq 2$ summands, we let $s_{\widehat{i}}:=a_1\oplus\cdots\oplus\widehat{a_i}\oplus\cdots\oplus a_k$ denote the corresponding sum obtained by omitting the $i$-th summand, for every index $i=1, \dots,k$. \begin{dfn} {Given a positive integer $k \geq 2$, we say that the sum $s=a_1\oplus\cdots\oplus a_k$ in $V\mathbb{B}[T]$} \emph{tropically vanishes} {in $V\mathbb{B}[T]$ if $s= s_{\widehat{i}}\text{ for every }i=1,\ldots,k.$} \end{dfn} \begin{prop} \label{prop_reformulation_solution} An element $a\in\mathbb{B}_m ^n$ is a \emph{solution} of $P=\sum_{i=1}^ka_{M_i}E_{M_i} \in \mb{B}_{m,n}$ if and only if $V(P(a))=\bigoplus_{i=1}^kV(a_{M_i}E_{M_i}(a))$ tropically vanishes in $V\mathbb{B}[T]$. \end{prop} \begin{proof} We have $P(a)=\sum_{i=1}^ka_{M_i}E_{M_i}(a)$, and as $V$ is a homomorphism, it follows that $V(P(a))=V(\sum_{i=1}^ka_{M_i}E_{M_i}(a))=\bigoplus_{i=1}^kV(a_{M_i}E_{M_i}(a)).$ Suppose now that $a\notin \text{Sol}(P)$; there is then some $t^I\in V(P(a))$ for which $t^I\in V(a_{M_i}E_{M_i}(a))$ for some unique index $i\in\{1,\ldots,k\}$, which means that $V(P(a))$ is not contained in $V(P(a))_{\widehat{i}}$. Conversely, if $V(P(a))$ does not tropically vanish, then $V(P(a))$ fails to be contained in $V(P(a))_{\widehat{i}}$ for some index $i\in\{1,\ldots,k\}$. This means there is some $t^I\in V(P(a))$ that appears as a term in the expansion of $V(a_{M_i}E_{M_i}(a))$ and in no other $V(a_{M_{\ell}}E_{M_{\ell}}(a))$, $\ell \neq i$; thus $a\notin \text{Sol}(P)$. \end{proof} That $V(P(a))$ tropically vanishes means precisely that $V(P(a))$ satisfies the {\it bend relations} of \cite[Sec. 5.1]{GG}; so it is precisely the classical ``corner locus" of a tropical polynomial. 
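Proposition~\ref{prop_reformulation_solution} is straightforward to test in code. The sketch below (illustrative names, $m=2$) implements the $s=s_{\widehat{i}}$ criterion and applies it to the vertex sets of the three evaluated terms of $P=tx_{(1,0)}+ux_{(0,1)}+(t^2+u^3)$ at $a=t^2+tu+u^3$ from Example~\ref{Ex_Solution}.

```python
# Tropical vanishing in VB[T] for m = 2: the sum is unchanged when any
# single summand is omitted.  Illustrative helper names.

def vert2(A):
    """Vertex set of New(A): Pareto-minimal points, then a lower-hull scan."""
    pts = sorted({I for I in A
                  if not any(J != I and all(j <= i for j, i in zip(J, I))
                             for J in A)})
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0:
                break
            hull.pop()
        hull.append(p)
    return set(hull)

def trop_vanishes(summands):
    """Check s = s_i-hat for every i, with ⊕ computed as Vert of the union."""
    s = vert2(set().union(*summands))
    return all(s == vert2(set().union(*(summands[:i] + summands[i + 1:])))
               for i in range(len(summands)))

# vertex sets of the three evaluated terms of the running example
V1, V2, V3 = {(2, 0), (1, 1)}, {(1, 1), (0, 3)}, {(2, 0), (0, 3)}
assert trop_vanishes([V1, V2, V3])        # a is a solution of P
assert not trop_vanishes([V1, {(0, 3)}])  # dropping a needed term breaks it
```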
Proposition~\ref{prop_reformulation_solution} establishes that $\text{Sol}(P)$ is the corner locus of the map $a\mapsto V(P(a))$, i.e., a ``tropical differential hypersurface"; correspondingly, we refer to $V(P(a))$ as a {\it tropical DA hypersurface}. Note that tropical vanishing depends strongly on the realization of $P(a)$ as a sum of monomials, but not on the {\it value} of $P(a)$; see Remark~\ref{rem_sum_not_value} for an illustrative example. \subsection{The order relation} \label{SubSection_Order} In any idempotent semiring ${\rm S}$, we use $a\leq b$ to mean that $a+b=b$, or equivalently, that $a+c=b$ for some $c$ in ${\rm S}$. Then $\leq$ is an algebraic order relation, as $a\leq b$ implies $a+c\leq b+c$ and $ac\leq bc$ for all $c\in S$. It is a total order if and only if ${\rm S}$ is {\it bipotent}, i.e., such that $a+b \in \{a,b\}$ for every $a$ and $b$. In this context, the {\it least upper bound} of any finite set of elements of $S$ is equal to their sum. We use $a< b$ to mean that $a\leq b$ and $a\neq b$. Given an idempotent semiring ${\rm S}$, we let ${\rm S}^{\text{op}}$ denote the same semiring endowed with the opposite order: $a\leq b$ if and only if $a+b=a$. The sum of any finite set of elements in ${\rm S}$ then becomes their {\it greatest lower bound}. Hereafter, we let $\ell:{\rm S}\to {\rm S}^{\text{op}}$ denote the identity map. We will put this convention to use later in constructing non-archimedean amoebae; see Remark~\ref{non_arch_amoeba}. We will now explicitly describe the order relation on $V\mathbb{B}[T]$. Interesting and as-yet unexplored combinatorial phenomena emerge whenever $m>1$. The following concept will be useful for our purposes. \begin{dfn} { Given $A\subset \mathbb{N}^m$, we let $\widetilde{A}:=\text{New}(A)\cap\mathbb{N}^m$ denote the set of integer points of the Newton polyhedron of $A$. 
We refer to $\mathbb{I}(A):=\widetilde{A}\setminus \text{Vert}(A)$ as the {\it irrelevant part} of $A$.} \end{dfn} \begin{lemma}\label{order_relation} Given $a,b\in V\mathbb{B}[T]$ with supports $A$ and $B$ respectively, we have $a\leq b$ if and only if $A\subseteq \widetilde{B}$. \end{lemma} \begin{proof} If $a\oplus b=b$, then $A\subset \text{Conv}(A\cup B+\mathbb{N}^m)=\text{New}(B)$, so $A=A\cap \mathbb{N}^m\subset \widetilde{B}$. Conversely, suppose that $A\subseteq \widetilde{B}$. As $B\subset \widetilde{B}$, we have $B\subset A\cup B\subset \widetilde{B}$, so $B+\mathbb{N}^m\subset A\cup B+\mathbb{N}^m\subset \widetilde{B}+\mathbb{N}^m=\widetilde{B}$. Since $\text{New}(B)$ is an integral polyhedron, we have $\text{New}(B)=\text{Conv}(\widetilde{B})$, and consequently $\text{Conv}(A\cup B+\mathbb{N}^m)=\text{New}(B)$. Passing to vertex sets now yields $a\oplus b=b$. \end{proof} \begin{rem} {In the terminology of \cite{Bl}, Lemma~\ref{order_relation} establishes that $V:\mathbb{B}[\![T]\!] \to V\mathbb{B}[T]$ is a {\it residuated mapping}, with residual $\wt{\cdot}:V\mathbb{B}[T]\xrightarrow{}\mb{B}[\![T]\!]$.} \end{rem} Note that $\leq$ defines a partial order on $V\mathbb{B}[T]$ with least (resp., largest) element $0$ (resp., $1$). Here $V\mb{B}[T]$ contains the set of monomials $\{T^I: I\in\mb{N}^m\}$, and the order induced by $\leq$ on monomials is the {\it opposite} of the usual product order on $\mb{N}^m$. We will use Lemma~\ref{order_relation} to refine this partial order. \begin{dfn}\label{dfn_relevant} { Given $0<a\leq b$ in $V\mathbb{B}[T]$ with supports $A$ and $B$, $a$ is \emph{irrelevant} for $b$ whenever $A\cap B=\emptyset$; in this case, we write $a\prec b$. } \end{dfn} Note that $a\prec b$ implies $a<b$, but the converse is not true unless $m=1$, or $m>1$ and $b=T^I$ is a monomial.
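Lemma~\ref{order_relation} and Definition~\ref{dfn_relevant} can likewise be tested computationally for $m=2$: membership of a lattice point in $\text{New}(B)$ reduces to finitely many half-plane checks along the lower hull of $B$. The following is a sketch with illustrative names, which assumes $B$ is already a vertex set.

```python
# Order and relevancy on VB[T] for m = 2:  a <= b iff Supp(a) lies in New(b),
# and a ≺ b when additionally the supports are disjoint.

def in_new(p, verts):
    """Is the lattice point p in Conv(verts) + R_{>=0}^2 ?  (verts: a vertex set)"""
    V = sorted(verts)                        # x increasing, y decreasing
    if p[0] < V[0][0] or p[1] < V[-1][1]:
        return False
    # p must lie on or above every edge of the lower hull
    return all((x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0
               for (x1, y1), (x2, y2) in zip(V, V[1:]))

def leq(A, B):
    """a <= b, with A, B the supports (vertex sets) of a and b."""
    return all(in_new(p, B) for p in A)

B = {(2, 0), (0, 3)}                         # b = t^2 + u^3
assert not leq({(2, 0), (1, 1), (0, 3)}, B)  # (1,1) lies below New(b)
assert leq({(2, 1)}, B) and {(2, 1)}.isdisjoint(B)   # so t^2 u ≺ b
```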
The dichotomy between $a\nprec b$ and $a\prec b$ whenever $a \leq b$ generalizes the usual dichotomy between $a=b$ and $a<b$ that exists in totally ordered idempotent semirings. In the following remark, we show that some properties of the order relation $<$ on a totally ordered idempotent semiring ${\rm S}$ also hold for the relation $\prec$; see also Remark~\ref{rem_one_more}. \begin{rem}\label{properties_of_irrelevant} { Assume that $a$, $b$, and $c$ in $V\mathbb{B}[T]\setminus\{0\}$ have supports $A$, $B$, and $C$, respectively. The relation $\prec$ satisfies the following properties: \begin{enumerate} \item {\it Transitivity:} $a\prec b, b\prec c \implies a\prec c$. This follows from the transitivity of $\leq$, and from $A\subset \mathbb{I}(B)\subset \widetilde{B}\subset \mathbb{I}(C)$. \item $c\prec a,b \implies c\prec a\oplus b$. Clearly $c\leq a\oplus b$, and Proposition \ref{char_of_prod} implies that the support of $a\oplus b$ is contained in $A\cup B$, so $C\cap(A\cup B)=\emptyset$. \item $a,b\prec c \implies a\oplus b\prec c$. Clearly $a\oplus b\leq c$, and the rest of the proof follows that of item (2). \item $a\prec b, c\neq 0 \implies a\odot c\prec b\odot c$. Clearly $a\odot c\leq b\odot c$, and the conclusion now follows from Proposition \ref{char_of_prod}. Indeed, we would otherwise have $I_a+I_c=I=I_b+I_d$ for uniquely-prescribed $I_a\in A$, $I_b\in B$ and $I_c,I_d\in C$; but this is precluded by the fact that $A\cap B=\emptyset$. \item {\it Cancellativity:} $a\odot c\prec b\odot c$, with $c\neq 0 \implies a\prec b$. Here $a\odot c\leq b\odot c$ implies $a\leq b$, since $V\mathbb{B}[T]$ is cancellative by Proposition~\ref{Prop_MC}. If $I\in A\cap B$, then no sum of the form $I+J$ with $J\in C$ can appear in $b\odot c$ by hypothesis, but this contradicts the proof of Proposition \ref{Prop_MC}, which establishes that $I+J$ appears in $b\odot c$ for some $J\in C$. Thus $A\cap B=\emptyset$.
\end{enumerate} } \end{rem} \subsection{Intermezzo: alternative characterizations of vertex polynomials}\label{alternative_characterizations} In this subsection we focus on the polyhedral aspects of vertex polynomials. The Minkowski--Weyl theorem for polyhedra implies that for any $a\in \mathbb{B}[\![T]\!]$ there exists a polytope $\Delta$ for which \begin{equation}\label{newton_decomposition} \text{New}(a)=\Delta+\mathbb{R}_{\geq0}^m. \end{equation} While $\Delta$ is not unique, there is a distinguished representative $\Delta=\Delta_a$ whose vertex set $\mathcal{V}(\Delta_a)$ is precisely $V(a)$; see \cite[Thm 4.1.3]{RW}. We call $\Delta_a$ the {\it Newton polytope} of $a$. We anticipate that the Newton decomposition \eqref{newton_decomposition} may be leveraged to streamline the proofs of the salient properties of vertex sets described in \cite{FGH20}. Crucial among these is that vertex sets are {\it hereditary} in the following sense. \begin{prop} \label{Prop_HProperty} Given $a\in V\mathbb{B}[T]$ with support $A$, we have $e_{\mathbb{B}}(B)\in V\mathbb{B}[T]$ for every subset $B \sub A$. \end{prop} \begin{proof} Without loss of generality we assume that $\emptyset \subsetneq B \subsetneq A$, as otherwise the conclusion holds trivially. We will show that $B=\mathcal{V}(\text{New}(B))$. To this end, first note that $B$ is the vertex set of $\Delta_B=\text{Conv}(B)$. Indeed, no $b \in B$ is realizable as a nontrivial convex combination $b=\sum_i\lambda_i b_i$ of distinct vertices $b_i \in B$, as no such nontrivial convex realization exists for vertices of $A$. Now $\text{New}(B)=\Delta_B+\mathbb{R}_{\geq0}^m$; so $\mathcal{V}(\text{New}(B))\subset B$. On the other hand, the fact that $\text{New}(B)\subset \text{New}(A)$ implies that $\text{New}(B)\cap A\subset \mathcal{V}(\text{New}(B))$ by \cite[Lem. 3.1]{FGH20+}. Given $v\in \text{New}(B)\cap A$, write $v=w+x$ with $w\in\Delta_B\subset \Delta_A$ and $x\in \mathbb{R}_{\geq0}^m$.
Since $v$ is a vertex of $\text{New}(a)$, there is a supporting hyperplane $H$ such that $H\cap \text{New}(a)=\{v\}$. But since $w+\mathbb{R}_{\geq0}^m\subset \text{New}(a)$, we have $H\cap(w+\mathbb{R}_{\geq0}^m)\subset \{v\}$, which implies that $x=0$. As a result, we have $v=w$ and this forces $v\in B$. \end{proof} \begin{coro} Let $a,b\in V\mathbb{B}[T]$ with supports $A$ and $B$. If $0<a< b$, then there exist $a',b'\in V\mathbb{B}[T]$ with $i(a)=a'+b'$ and supports $A^{\pr}$ and $B^{\pr}$ satisfying $A^{\pr}\subsetneq B$ and $B^{\pr}\subsetneq \mathbb{I}(B)$. \end{coro} \begin{proof} Applying Lemma~\ref{order_relation} and setting $\wt{B}:=\text{Vert}(B)\sqcup \mb{I}(B)$, we have $A=A\cap \wt{B}=[A\cap \text{Vert}(B)]\sqcup [A\cap \mb{I}(B)]$, which is itself a union of vertex sets. \end{proof} As further applications of the same circle of ideas, we will show that for $A\subset \mathbb{N}^m$, the set $(\widetilde{A}\sqcup\{0\},+)$ is a finitely generated $\mathbb{N}$-module; and we will give a useful refined description of the supports of the algebraic operations in $V\mathbb{B}[T]$ in terms of vertices of lattice polytopes. In particular, \cite[Lem. 6]{FGH20+}, which establishes that for every $a,b\in V\mathbb{B}[T]$, every point $I$ in the support of $a\odot b$ is associated with uniquely-prescribed elements $I_a\in A$ and $I_b\in B$, is a consequence of this refined description. \begin{prop}\label{finite_generation_A-tilde} Let $A\subset \mathbb{N}^m$ with $\Delta_A:=\text{Conv}(\text{Vert}(A))$. Then \begin{equation}\label{A-tilde_identity} \widetilde{A}=(\Delta_A+[0,1)^m)\cap\mathbb{N}^m+\mathbb{N}^m. \end{equation} \end{prop} \begin{proof} Given $v,w\in\widetilde{A}$, we have $v=p+x$ and $w=q+y$ for some $p,q\in \Delta_A$ and $x,y\in \mathbb{R}_{\geq0}^m$. So $v+w=(p+q)/2+[(p+q)/2+x+y]\in\widetilde{A}$. We now prove \eqref{A-tilde_identity}.
The inclusion $\supseteq$ is clear, since any element $I$ on the right-hand side of \eqref{A-tilde_identity} may be written as $I=(I_1+I_2)+I_3$, where $(I_1+I_2)\in \Delta_A+[0,1)^m$ and $I_3\in \mathbb{N}^m$; so in particular $I=I_1+(I_2+I_3)$ with $I_1\in \Delta_A$ and $(I_2+I_3)\in \mathbb{R}_{\geq0}^m$. Conversely, given $v\in \widetilde{A}$, we have $v=p+x$ as before, and moreover $x$ decomposes as $x=k+y$ with $k\in\mathbb{N}^m$ and $y\in[0,1)^m$. So $v=p+k+y$, and $v-k=p+y\in (\Delta_A+[0,1)^m)\cap\mathbb{N}^m$ as $v$ and $k$ are integral. \end{proof} \subsection{Comparison with the semiring of lattice polytopes} \label{Sub_Section_Relationship} Let $\mathcal{P}_{\mathbb{Z}^m}=(\mathcal{P}_{\mathbb{Z}^m},\oplus,\otimes)$ denote the semiring of lattice polytopes in $\mathbb{Z}^m$, in which $\Delta_1 \oplus \Delta_2:=\text{Conv}(\Delta_1 \cup \Delta_2)$, and $\Delta_1 \otimes \Delta_2$ denotes the Minkowski sum of $\Delta_1$ and $\Delta_2$. This is an idempotent semiring in which $\Delta_1 \leq \Delta_2 $ if and only if $\Delta_1 \subseteq \Delta_2$. \begin{prop} \label{char_of_prod} The map $\text{Conv}:V\mathbb{B}[T]\rightarrow\mathcal{P}_{\mathbb{Z}^m}$ that sends $a$ to $\text{Conv}(a)=\Delta_a$ is a non-archimedean norm in the sense of Definition~\ref{dfn:quasivaluation}. \end{prop} \begin{proof} The fact that a polytope is determined by its vertices implies that $\text{Conv}$ is injective. Now suppose that $a,b$ have supports $A$, $B$ and Newton polytopes $\Delta_a,\Delta_b$, respectively. By definition, we have $a\odot b=V(ab) =\mathcal{V}(\text{New}(A+B+\mathbb{N}^m))$. As $\text{New}(A+B+\mathbb{N}^m)=(\Delta_a\otimes\Delta_b)+\mathbb{R}_{\geq0}^m$, it follows that $\text{Conv}(a\odot b)\subset \Delta_a\otimes\Delta_b$.
Similarly, we have $a\oplus b=V(a+b)=\mathcal{V}(\text{New}(A\cup B+\mathbb{N}^m))$; and as $\text{New}(A\cup B+\mathbb{N}^m)=(\Delta_a \oplus \Delta_b)+\mathbb{R}_{\geq0}^m$, it follows that $\text{Conv}(a\oplus b)\subset \Delta_a \oplus \Delta_b$. In particular, we have chains of inclusions \begin{equation} \label{uniqueness} \begin{aligned} V(ab)\subseteq \mathcal{V}(\Delta_a\otimes \Delta_b)\subseteq A+B \text{ and }V(a+b)\subseteq \mathcal{V}(\Delta_a\oplus \Delta_b)\subseteq A\cup B. \end{aligned} \end{equation} On the other hand, according to \cite[\S 3.1]{DT}, we have \[ \mathcal{V}(\Delta_a\otimes\Delta_b)=\{I\in A+B: I \text{ decomposes {\it uniquely} as }I_a+I_b, I_a\in \Delta_a, I_b\in \Delta_b\}. \] \end{proof} \begin{figure}[!htb] \centering \includegraphics[scale=0.75]{polytopes.pdf} \caption{Polytopes from Example~\ref{polytope}: a) $\Delta_a$ and $\Delta_b$; and b) $\text{Conv}(a\oplus b)\subsetneq \Delta_a\oplus\Delta_b$ and $\text{Conv}(a\odot b)\subsetneq \Delta_a\otimes\Delta_b$, where the included polytopes are shaded.} \label{Figura_politopos} \end{figure} \begin{ex}\label{polytope} { Let $A=\{(2,3),(3,1),(5,0)\}$ and $B=\{(0,4),(1,3),(4,2)\}$; then $A\odot B=\{(2,7),(3,5),(4,4),(6,3),(9,2)\}$ and $A\oplus B=\{(0,4),(3,1),(5,0)\}$. So the inclusions \eqref{uniqueness} are proper in this case.} \noindent {Now let $\Delta_a=\text{Conv}(A)$ and $\Delta_b=\text{Conv}(B)$. As indicated in Figure~\ref{Figura_politopos}b, we have \[ \begin{aligned} \mathcal{V}(\Delta_a\otimes\Delta_b)&=A\odot B\cup\{(6,5)\}\text{ and } A+B=\mathcal{V}(\Delta_a\otimes\Delta_b)\cup\{(3,6),(5,4),(7,3)\}; \\ \mathcal{V}(\Delta_a\oplus\Delta_b)&=A\oplus B\cup\{(4,2)\}\text{ and } A\cup B=\mathcal{V}(\Delta_a\oplus\Delta_b)\cup\{(1,3),(2,3)\}.
\end{aligned} \] } \end{ex} Now let $\overline{\mathbb{Z}}$ denote the idempotent semifield given set-theoretically by $\mathbb{Z}\cup\{-\infty\}$ and equipped with the usual max-plus tropical addition and multiplication laws, and let $\mathcal{O}_{\mathbb{Z}^m}$ denote the idempotent semifield given set-theoretically by $\mathcal{O}_{\mathbb{Z}^m}= \{f:\mathbb{Z}^m\rightarrow\overline{\mathbb{Z}} \::\:f\text{ is piecewise linear} \}\cup\{-\infty\}$ and equipped with the operations of {\it point-wise} tropical addition and multiplication. The map $\phi:\mathcal{P}_{\mathbb{Z}^m}\ra \mathcal{O}_{\mathbb{Z}^m}$ sending the polytope $P$ to its support function $\phi_P:\mathbb{Z}^{m}\rightarrow\overline{\mathbb{Z}}$ defined by $\phi_P(x)=\text{max}\{\langle u,x\rangle\::\:u\in P\}$ (where $\langle u,x\rangle$ denotes the usual Euclidean inner product) is an embedding of semirings. See \cite{KM19} for a recapitulation of the well-known correspondence between piecewise linear functions and polytopes. Given $a\in V\mathbb{B}[T]$, let $\phi_{a}:\mathbb{Z}^m\xrightarrow[]{}\mathbb{Z}$ denote the support function associated to $\Delta_a$. If $X\subset \mathcal{O}_{\mathbb{Z}^m}$ denotes the image of $V\mathbb{B}[T]$ under the map $a\mapsto \phi_{a}$, it is natural to wonder whether $X$ and $\overline{\mathbb{Z}}$ may be endowed with an algebraic structure so that the resulting function $V\mathbb{B}[T]\rightarrow X$ sending $a$ to $\phi_{a}$ is a semiring homomorphism. \section{Non-Archimedean seminorms and initial forms} \label{Section_QVIF} Our tropicalization scheme for differential algebraic geometry is an enriched version of the classical tropicalization scheme based on valuations; the key technical notion we require is that of non-Archimedean seminorm, which is closely related to the quasivaluations of \cite{KM19} and the ring valuations of \cite{GG}. We also use non-Archimedean seminorms in order to single out new types of {\it initial forms} associated with (differential) polynomials.
Initial forms for differential polynomials $P\in K_{m,n}$ over a field of characteristic zero $K$ were first introduced in \cite[Sect. 2]{HG19} when $m=1$; they were subsequently generalized in \cite[Sect. 8]{FGH20+} for every $m$. Here we first reframe initial forms in terms of seminorms, and in Section~\ref{Section_Fromringtofield} we show how to extend the construction of initial forms to differential polynomials with coefficients in $\text{Frac}(K[\![T]\!])$. Throughout this section, ${K}$ denotes a field of characteristic zero. \begin{dfn}\label{dfn:quasivaluation} {A }\emph{non-Archimedean seminorm} {is a map $|\cdot|:R\rightarrow {\rm S}$ from a semiring $R$ to an idempotent semiring ${\rm S}$ for which \begin{enumerate} \item $|0|=0$ and $|1|=1$; \item $|a+b|\leq |a|+|b|$; and \item $|ab|\leq |a||b|$. \end{enumerate} Whenever $R$ is a ${K}$-algebra, we replace the condition $|1|=1$ by the requirement that $|a|=1$ for every nonzero element $a\in K$. The non-Archimedean seminorm $|\cdot|$ is } a norm {if furthermore no nonzero element $a$ satisfies $|a|=0$, and }multiplicative {if it satisfies $|ab|= |a||b|$ for every $a, b \in R$. A multiplicative norm is called a valuation. Whenever ${\rm S}$ arises from a totally ordered monoid $({\rm M},\times,1,\leq)$, the non-Archimedean seminorm $|\cdot|$ is of {\it Krull} (or classical) type.} \end{dfn} Hereafter we will simply speak of seminorms, with the implicit understanding that they are non-Archimedean. \begin{dfn} \label{defi_support} {The }\emph{support series} {of the element $a=\sum_{I\in A}a_IT^I\in {\rm S}[\![T]\!] $ is the boolean formal power series $\text{sp}(a):=\sum_{I\in A}T^I$.} \end{dfn} When ${\rm S}=K$, the resulting map $\text{sp}:{K}[\![T]\!] \rightarrow\mathbb{B}[\![T]\!]$ is a norm, and the map $\text{Supp}:{K}[\![T]\!] \rightarrow\mathcal{P}(\mathbb{N}^m)$ from \cite{FGH20} fits into the following commutative diagram: \begin{equation*} \xymatrix{K[\![T]\!] 
\ar[r]^{\text{sp}}\ar[dr]_{\text{Supp}}&\mathbb{B}[\![T]\!]\ar[d]^{\ell} \\ &\mathcal{P}(\mathbb{N}^m).} \end{equation*} Moreover, as we show in the proof of Proposition \ref{Longue_prop} below, the norm $\text{sp}$ commutes with the respective differentials of $K_{m}=({K}[\![T]\!] ,D)$ and $\mathbb{B}_{m}=(\mathbb{B}[\![T]\!],D)$. Putting everything together and applying Lemma~\ref{New_congruence}, we obtain a more intrinsic formulation of the following result from \cite[Sect. 4]{FGH20}. \begin{thm}\label{VBT_theorem} We have a commutative diagram \begin{equation} \label{Partial_diff_scheme} \xymatrix{ &\mathbb{B}_m\ar[r]^-{\text{for}}&\mathbb{B}[\![T]\!]\ar[d]^{\pi}\\ K_m\ar[r]_{\text{for}}\ar[ur]^{\text{sp}}&{K}[\![T]\!]\ar[ur]^{\text{sp}} \ar[r]_{\text{trop}}& \faktor{\mathbb{B}[\![T]\!] }{\text{New}}\\ } \end{equation} in which \emph{for} denotes the map that forgets the differential structures, while \begin{enumerate} \item sp is a ${K}$-algebra norm that commutes with $D$; \item $\pi:\mathbb{B}[\![T]\!] \rightarrow \faktor{\mathbb{B}[\![T]\!] }{\text{New}}$ is a quotient homomorphism; and \item $\text{trop}:K[\![T]\!] \rightarrow \faktor{\mathbb{B}[\![T]\!] }{\text{New}}$ is a ${K}$-algebra valuation. \end{enumerate} \end{thm} \begin{proof} Item 1 (resp., 2) follows from Proposition~\ref{Longue_prop} (resp., Lemma~\ref{New_congruence}). Item 3 is the content of \cite[Lem. 4.3]{FGH20}, but the argument given there for the multiplicativity of trop is unfortunately lacking, as it is predicated on reducing to a multiplicative property for (pairs of) Newton polytopes in several variables, which fails in general. To justify item 3, we argue as follows. Given $I\in \operatorname{trop}(\varphi\psi)$, let $c_IT^I$ denote the corresponding monomial.
According to Proposition~\ref{char_of_prod}, there are unique monomials $a_JT^{J}$ of $\overline{\varphi}$ and $b_KT^{K}$ of $\overline{\psi}$ for which $c_IT^I=a_JT^{J}b_KT^{K}$; so $I \in \text{sp}(\overline{\varphi})\text{sp}(\overline{\psi})$; that trop is multiplicative now follows from the fact that $\pi$ is a homomorphism. \end{proof} The algebro-geometric study of tropical geometry over the semifield of piecewise linear functions $\mathcal{O}_{\mathbb{Z}^m}$, and in particular of $\mathcal{O}_{\mathbb{Z}^m}$-valued valuations, was initiated in \cite{KM19}. Proposition~\ref{char_of_prod} implies that our valuations $\text{trop}:K[\![T]\!] \xrightarrow{}V\mathbb{B}[T]$ are closely related to theirs. It also establishes that the order structure of the idempotent semiring $V\mathbb{B}[T]$ is richer than that of $\mathcal{P}_{\mathbb{Z}^m}$. This will be relevant in section \ref{Section_Fromringtofield}. \begin{rem} \label{rem_diff_enh} The diagram \eqref{Partial_diff_scheme} from Theorem~\ref{VBT_theorem} establishes that sp is an instance of what J. Giansiracusa and S. Mereta call a \emph{differential enhancement} of the trop valuation. See \cite[Section 4.7]{GM}. \end{rem} It is important to note that although $\text{trop}:K[\![T]\!]\xrightarrow{}V\mathbb{B}[T]$ is a valuation, it is never of Krull type when $m>1$, which follows from Lemma \ref{order_relation}. Remarkably, though, there is still a meaningful theory of initial forms with respect to trop, as we show in the next subsection. \begin{rem} \label{rem_one_more} {Using the relation of irrelevancy from Definition~\ref{dfn_relevant}, we show that if $\operatorname{trop}(a)\prec \operatorname{trop}(b)$, then $\operatorname{trop}(a+b)=\operatorname{trop}(a)\oplus\operatorname{trop}(b)=\operatorname{trop}(b)$, because $b$ decomposes as $b=\overline{b}+(b-\overline{b})$ (cf.
\eqref{New_decomposition}), and in this instance no cancellations may occur because $\operatorname{trop}(a)\cap \operatorname{trop}(b)=\emptyset$.} \end{rem} \subsection{Initial forms} \label{Section_Initial_forms} The main result of this subsection is the following extension from $n=0$ to $n>0$ of Theorem~\ref{VBT_theorem}; it is the key to reformulating the results of \cite{FGH20+} in terms of valuation theory. \begin{thm}\label{VBT_theorem_extended} Given $w=(w_1,\ldots,w_n)\in \mathbb{B}_m ^n$, we have a commutative diagram \begin{equation} \label{Ext_Partial_diff_scheme} \xymatrix{ &\mathbb{B}_{m,n}\ar[r]^-{\text{for}}&\mathbb{B}[\![T]\!][x_{i,J}]\ar[r]^-{\text{ev}_w}&\mathbb{B}[\![T]\!]\ar[d]^{V}\\ K_{m,n}\ar[r]_{\text{for}}\ar[ur]^{\text{sp}}&{K}[\![T]\!][x_{i,J}]\ar[ur]^{\text{sp}} \ar[rr]_{\text{trop}_w}&& V\mathbb{B}[T]\\ } \end{equation} in which \emph{for} denotes the map that forgets the differential structures, while \begin{enumerate} \item the map $\text{sp}:{K}_{m,n}\rightarrow \mathbb{B}_{m,n}$ is a ${K}$-algebra norm satisfying $\text{sp}\circ\Theta_{{K}_{m,n}}(e_i)\leq \Theta_{\mathbb{B}_{m,n}}(e_i)\circ \text{sp}$ for $i=1,\dots,m$; and \item $\text{trop}_w:{K}[\![T]\!][x_{i,J}] \rightarrow V\mathbb{B}[T]$ is a multiplicative $K$-algebra seminorm. \end{enumerate} \end{thm} The crucial technical tool operative in the proof of Theorem~\ref{VBT_theorem_extended} is the following result, whose proof we defer to Appendix~\ref{Proof_of_Longue_prop}. \begin{prop} \label{Longue_prop} For $n>0$, the map $\text{sp}:{K}_{m,n}\rightarrow \mathbb{B}_{m,n}$ that sends $P=\sum_{i=1}^d\alpha_{M_i}E_{M_i}$ to $\text{sp}(P)=\sum_{i=1}^d\text{sp}(\alpha_{M_i})E_{M_i}$ is a ${K}$-algebra norm that satisfies $\text{sp}\circ\Theta_{{K}_{m,n}}(e_i)\leq \Theta_{\mathbb{B}_{m,n}}(e_i)\circ \text{sp}$ for $i=1,\dots,m$. 
\end{prop} We now construct the seminorm $\operatorname{trop}_w$ for $w\in \mathbb{B}_m ^n$ in order to define the initial form of a differential polynomial $P\in K_{m,n}$ ``at'' the support vector $w\in \mathbb{B}_m ^n$. First we use the valuation $\operatorname{trop}:K[\![T]\!] \to{}V\mathbb{B}[T]$ to define the concept of {\it initial term}. Note that any nonzero element $a=\sum_{I\in A}a_IT^I$ in $K[\![T]\!]$ with $\text{Supp}(a)=A$ decomposes as \begin{equation}\label{New_decomposition} a=\overline{a}+(a-\overline{a}), \text{ where } \overline{a}=\sum_{\{I\::\:\operatorname{trop}(a_IT^I)\nprec \operatorname{trop}(a)\}}a_IT^I=\sum_{I\in\text{Vert}(A)}a_IT^I \end{equation} and $\overline{a}$ is the initial term of $a$ with respect to the valuation trop. We then use the section $e_K:\mathbb{B}[\![T]\!]\to K[\![T]\!]$ to obtain a $K$-lift $e_K(w) \in K_m^n$ of the vector $w=(w_1,\ldots,w_n)\in \mathbb{B}_m ^n$. We interpret every monomial $aE$ in ${K}[\![T]\!][x_{i,J}]$ as a differential monomial and set \begin{equation}\label{trop_of_diff_monomial} \operatorname{trop}_w(aE):=\operatorname{trop}(aE(e_K(w)))=\operatorname{trop}(a)\odot V(E(w)). \end{equation} \begin{dfn}\label{tropical_quasivaluation} {Given $w\in \mathbb{B}_m ^n$, we define $\operatorname{trop}_w:{K}[\![T]\!][x_{i,J}]\rightarrow V\mathbb{B}[T]$ by extending the assignment \eqref{trop_of_diff_monomial} by linearity; explicitly, we have \begin{equation*} \label{New_valuation} \operatorname{trop}_w\biggl(\sum_Ma_ME_M\biggr):=\bigoplus_M\operatorname{trop}_w(a_ME_M)=\bigoplus_M\bigl(\operatorname{trop}(a_M)\odot V(E_M(w))\bigr). \end{equation*} } \end{dfn} Note that the definition of $\operatorname{trop}_w$ emulates the usual valuation specified by classical tropical algebraic geometry, and the outcome is a multiplicative $K$-algebra seminorm.
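For instance, suppose that $m=2$ and $a=t^2+tu+u^3+t^3u^3\in K[\![t,u]\!]$, so that $A=\{(2,0),(1,1),(0,3),(3,3)\}$ and $\text{Vert}(A)=\{(2,0),(1,1),(0,3)\}$, as $(3,3)=(1,1)+(2,2)$ lies in $\text{New}(A)\setminus\text{Vert}(A)$; the decomposition \eqref{New_decomposition} then reads $\overline{a}=t^2+tu+u^3$ and $a-\overline{a}=t^3u^3$. Similarly, for $n=1$, $w=t^2u+u^3\in\mathbb{B}_2$ and the monomial $tx_{1,(0,0)}$, the assignment \eqref{trop_of_diff_monomial} yields $\operatorname{trop}_w(tx_{1,(0,0)})=\operatorname{trop}(t)\odot V(w)=T^{(1,0)}\odot\bigl(T^{(2,1)}\oplus T^{(0,3)}\bigr)=T^{(3,1)}\oplus T^{(1,3)}$.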
We spell this out explicitly in Corollary~\ref{coro_Longue_prop} of Proposition \ref{Longue_prop}, whose proof we also defer to Appendix~\ref{Proof_of_Longue_prop}. We now use the valuation $\operatorname{trop}_w$ together with the decomposition \eqref{New_decomposition} to define the {\it initial form} $\text{in}^{\ast}_w(P)$ of $P\in {K}[\![T]\!][x_{i,J}]$ with respect to the support vector $w\in \mathbb{B}_m ^n$. \begin{dfn}\label{initial_form} { The }initial form {$\text{in}_w^{\ast}(P)$ of $P=\sum_Ma_ME_M$ at $w\in \mathbb{B}_m ^n$ is the differential polynomial} \begin{equation}\label{initial_modified} \text{in}_w^{\ast}(P)=\sum_{\{M:\:\operatorname{trop}_w(a_ME_M)\nprec \operatorname{trop}_w(P)\}}\overline{a_M}E_M. \end{equation} \end{dfn} The upshot of Remark~\ref{properties_of_irrelevant} is that every $P\in {K}[\![T]\!][x_{i,J}]$ decomposes as \[ P=\text{in}_w^{\ast}(P)+P^{\ast}, \text{ in which }\operatorname{trop}_w(P^{\ast})\prec\operatorname{trop}_w(P)=\operatorname{trop}_w(\text{in}_w^{\ast}(P)) \] where $P^{\ast}=P-\text{in}_w^{\ast}(P)$ is called the {\it $w$-irrelevant} part of $P$. \begin{ex}\label{initial_not_mult} {The map $\text{in}_w^{\ast}: {K}[\![T]\!][x_{i,J}]\xrightarrow[]{}{K}[\![T]\!][x_{i,J}]$ fails to be multiplicative when $m>1$, as it restricts to the map ${K}[\![T]\!]\xrightarrow[]{}{K}[\![T]\!]$ that sends $a$ to $\overline{a}$, and the support of the product of two vertex polynomials is in general properly contained in the Minkowski sum of their supports (see Proposition~\ref{char_of_prod}). } \end{ex} \begin{rem} Definition~\ref{initial_form} is a modified version of the initial form from the classical setting; it also generalizes the definition of initial form given in the preprint \cite{HG19} when $m=1$, in which case trop is a Krull valuation. For a related construction, see \cite{FT20}.
\end{rem} \section{A tropical fundamental theorem for differential algebraic geometry} \label{Section_TFT} In this section we present the extended fundamental theorem in the differential context. Here $R={K}_{m,n}$, where ${K}$ is a field of characteristic zero; ${\rm S}=\mathbb{B}_{m,n}$ as in section~\ref{Tropical}; and the seminorm (valuation) is a support map. The differential enhancement of trop given by the diagram \eqref{Partial_diff_scheme} is the key to proving the fundamental theorem, which characterizes what we call {\it tropical DA varieties} in three distinct ways. We'll discuss the usefulness of the tropical fundamental theorem in extracting combinatorial information about solutions to systems of differential equations. Given a system of differential polynomials $\Sigma$ in $K_{m,n}$, the best possible outcome would be to compute the associated DA variety $\text{Sol}(\Sigma)=\{\varphi\in K_m ^n:P(\varphi)=0 \text{ for all }P\in\Sigma\}$. This is often hard; an easier subsidiary task is to compute its set of support series $\text{sp}(\text{Sol}(\Sigma))\subset\mathbb{B}_m^n$, or its DA tropicalization. Explicitly, we will concentrate on understanding \[ \text{sp}(\text{Sol}(\Sigma)):=\{a=(a_1,\ldots,a_n)\in\mathbb{B}_m^n\::\:a=\text{sp}(\varphi)\text{ for some }\varphi\in\text{Sol}(\Sigma)\} \] where $\text{sp}:K_m ^n\rightarrow\mathbb{B}_m ^n$ extends $\text{sp}:K_m \rightarrow\mathbb{B}_m $ coordinate-wise. To define the DA tropicalization of $\text{Sol}(\Sigma)$, we'll make use of the following notion. \begin{dfn} {An ideal of the differential ring $(R,D)$ is} \emph{differential} {whenever it is closed under the action of $D$.} \end{dfn} Now say that $(R,D)$ is a differential ring, and that $X$ is a subset of $R$. Following the convention adopted in \cite{HG19}, we let $(X)$ and $[X]$ denote the algebraic and differential ideals generated by $X$, respectively. 
By definition, $[X]$ is the intersection of all differential ideals of $R$ that contain $X$; so {\it $[X]$ is the algebraic ideal of $R=K_{m,n}$ spanned by the set $\{\Theta_{R}(J)(P):\:P\in X,\:J\in\mathbb{N}^m\}$}. \begin{rem}\label{rem_sum_not_value} {Given a semiring ${\rm S}$ and an element $P\in {\rm S}_{m,n}$, Remark~\ref{commutation} establishes that $\tfrac{\partial }{\partial t_i}(P(a))=\tfrac{\partial P}{\partial t_i}(a)$ for every $a\in {\rm S}_m ^n$ and $i\in\{1,\ldots,m\}$. Now assume that ${\rm S}=K$; then $P(a)=0$ implies that $\tfrac{\partial P}{\partial t_i}(a)=0$; in particular, a system $\Sigma\subseteq K_{m,n}$ and its associated differential ideal $[\Sigma]$ define the same DA varieties.} {We saw in Proposition \ref{prop_reformulation_solution} that tropical vanishing depends on the realization of $P(a)$ as a sum of monomials and not on the {\it value} of $P(a)$. For example, when $P=x_{1,0}+x_{(0,1)}+(t^2+u^2)\in \mathbb{B}_{2,1}$, $a=t^2u+u^3\in\mathbb{B}_2$ is a solution of $P$, but not of $\tfrac{\partial P}{\partial t}$.} \end{rem} \begin{dfn} \label{Definition_TDAV} Given $\Sigma\subseteq {K}_{m,n}$, the tropicalization {$\text{Sol}(\text{sp}([\Sigma]))$ of the DA variety $\text{Sol}(\Sigma)$ is \[ \text{Sol}(\text{sp}([\Sigma])):= \bigcap_{p\in \text{sp}([\Sigma])}\text{Sol}(p) \]} where $\text{sp}:K_{m,n}\rightarrow \mathbb{B}_{m,n}$ is the norm of Proposition~\ref{Longue_prop}. \end{dfn} \begin{ex}[Tropical DA hypersurfaces]\label{ex_trop_hyp} {Given $P\in K_{m,n}$, to compute the tropicalization of the associated hypersurface $\text{Sol}(P)=\{\varphi\in K_m ^n:P(\varphi)=0\}$ we first compute the differential ideal $[P]\subset K_{m,n}$; the tropicalization of $\text{Sol}(P)$ is then $\bigcap_{p\in\text{sp}([P])}\text{Sol}(p)$. } \end{ex} The following construction from \cite{FGH20+} is operative in the fundamental theorem, and generalizes the corresponding construction in the $m=1$ case from \cite{HG19}. 
\begin{dfn} \label{initial_ideal_def} {Given $w\in\mathbb{B}_m ^n$, the }initial ideal {$\text{in}_w^*(G)$ of a differential ideal $G\subset {K}_{m,n}$ is the algebraic ideal generated by the initial forms $\{\text{in}_w^*(P):P\in G\}$ in ${K}_{m,n}$ defined in equation \eqref{initial_modified}.} \end{dfn} We now have all of the ingredients required to state the tropical fundamental theorem \cite[Thm. 9.6]{FGH20+} for differential algebraic geometry. \begin{thm}[Fundamental theorem of tropical differential algebraic geometry] \label{EFT} Fix $m,n\geq1$, let $K$ be an uncountable, algebraically closed field of characteristic zero, and let $[\Sigma]$ and $\text{Sol}(\Sigma)$ be the differential ideal and the DA variety associated to $\Sigma\subset K_{m,n}$, respectively. Then the following subsets of $\mathbb{B}_m^n$ coincide: \begin{enumerate} \item the tropicalization $\text{Sol}(\text{sp}([\Sigma])) = \bigcap_{p\in \text{sp}([\Sigma])}\text{Sol}(p)$ of $\text{Sol}(\Sigma)$ as in Definition \ref{Definition_TDAV}; \item the set $\{w\in \mathbb{B}_m ^n\::\:\text{in}_w^*([\Sigma])\text{ is monomial-free}\}$ as in Definition \ref{initial_ideal_def}; and \item the set $\text{sp}(\text{Sol}(\Sigma))=\{\text{sp}(\varphi)\::\:\varphi\in\text{Sol}(\Sigma)\}$ of coordinatewise supports of points in $\text{Sol}(\Sigma)$. \end{enumerate} \end{thm} The tropical fundamental theorem \ref{EFT} characterizes tropical DA varieties as (1) sets of formal $\mathbb{B}$-power series solutions of the tropical differential system $\text{sp}([\Sigma])$ associated to a differential ideal $[\Sigma]\subset K_{m,n}$; (2) weight vectors that single out monomial-free initial ideals of a fixed differential ideal $[\Sigma]$; and (3) support series of all formal ${K}$-power series solutions $\text{Sol}(\Sigma)$ of a system $\Sigma\subset K_{m,n}$.
It establishes, in particular, that any formal $\mathbb{B}$-power series solution of the tropical system $\text{sp}([\Sigma])$ lifts to a formal ${K}$-power series solution for $\Sigma$. \begin{ex} \label{ex_trop_hyp_2} {Applying Theorem \ref{EFT}, we see that the tropical DA hypersurface associated to $P\in K_{m,n}$ is $X= \{\text{sp}(\varphi):\:\varphi\in K_m ^n,\: P(\varphi)=0\}$; cf. Example~\ref{ex_trop_hyp}.} \end{ex} \begin{rem} {Important examples of base fields ${K}$ to which our theory applies are $\mathbb{C}$ and $\mathbb{Q}_p^{\text{alg}}$, the algebraic closure of the field of $p$-adic numbers. Each of these fields is equipped with a natural topology which comes from an absolute value, which is archimedean in the complex case ($\mb{C}_\infty$), and non-archimedean in the $p$-adic case; let $\mathbb{C}_p$ denote the metric completion of $\mathbb{Q}_p^{\text{alg}}$. It is natural to ask for conditions under which the solutions $\text{Sol}(\Sigma)$ are tuples of analytic functions in a neighborhood of the origin with respect to the canonical metrics inherited from $\mb{C}_\infty$ and $\mb{C}_p$, respectively.} \end{rem} \begin{rem} \label{rem_paradigms} {According to \cite{BH19}, the solution set of any system $\Sigma\subset K_{m,n}$ is equal to the solution set of a \emph{finite} system $\{P_1,\ldots,P_s\}\subset \Sigma$. Each polynomial $P_i$ imposes an infinite number of conditions on the coefficients of the solutions $\varphi\in\text{Sol}(\Sigma)$.} { \hspace{-3pt}On the other hand, any given polynomial $Q\in\mathbb{B}_{m,n}$ imposes only a finite number of conditions on the supports of the solutions $\varphi\in\text{Sol}(Q)$, and in view of \[ \text{sp}(\text{Sol}(P_1,\ldots,P_s))=\text{sp}(\text{Sol}(\Sigma))=\text{Sol}(\text{sp}([\Sigma])) \] we require an infinite number of elements $Q\in \text{sp}([\Sigma])$ in order to describe this common set.
The upshot is the following dichotomy between classical and tropical theories of algebraic differential equations: \begin{itemize} \item the classical theory requires solving a {\it finite number of infinite systems} of algebraic equations, while \item the tropical theory requires solving an {\it infinite number of finite systems} of boolean equations. \end{itemize}} \end{rem} In Theorem~\ref{EEFT} of the next section, we will extend items (2) and (3) of the fundamental theorem to the fraction field $K(\!(T)\!)$ of $K[\![T]\!] $. \section{From $K[\![T]\!] $ to $K(\!(T)\!)$} \label{Section_Fromringtofield} Paraphrasing \cite{HG19}, {\it it is more convenient to develop Groebner basis theory over fields than rings.} With this principle in mind, in this section we generalize the valuative results obtained for the differential ring $K_m=(K[\![T]\!] ,D)$ in section~\ref{Section_QVIF} to the differential field $F_m:=(K(\!(T)\!),D)$. The results in this section also extend those of \cite[Sect. 2.2]{HG19} to the $m>1$ case. The key ingredient in all of this is the following property of the semiring $V\mathbb{B}[T]$ of vertex polynomials. \begin{prop} \label{Prop_MC} The semiring $V\mathbb{B}[T]$ is {\it integral} (or {\it multiplicatively cancellative}); that is, if we have $a\odot c=b\odot c$, then $c=0$ or $a=b$. \end{prop} \begin{proof} Let $a,b\in V\mathbb{B}[T]$ be nonzero elements with supports $A$ and $B$ respectively. We will show first that for any $I\in A$, there exists $J\in B$ such that $I+J\in a\odot b$. Indeed, intersecting the normal fans of $\text{New}(A)$ and $\text{New}(B)$ results in the normal cone $C_I$ of each vertex $I$ of $\text{New}(A)$ being subdivided by cones of $\text{New}(B)$. Since the maximal cones of both normal fans are full-dimensional, there is necessarily some cone $C_J$ of $\text{New}(B)$ whose intersection with $C_I$ is full-dimensional, so $I+J$ is a vertex of $\text{New}(A)+\text{New}(B)$, i.e., $I+J\in a\odot b$.
Now suppose $a\odot c=b\odot c$, where $c\neq0$ has support $C$; we will show that $A=B$. To do so we argue by contradiction, assuming that $A \setminus B \neq \emptyset$. Say $K\in A\setminus B$, and that $C=\{J_1,\ldots,J_s\}$. Since $K\notin B$, no $K+J_i$, $i=1,\dots,s$, belongs to $b\odot c$; but as $a\odot c=b\odot c$, this contradicts the fact that $K+J_i\in a\odot c$ for some $J_i$. Thus $A \subseteq B$; by symmetry, we conclude that $A=B$. \end{proof} \begin{rem} Let $\sim$ denote the semiring congruence on $\mathbb{B}[\![T]\!]$ consisting of pairs $(a,b)$ of boolean power series for which $ac=bc$ for some $c \neq 0$. Suppose that $a\sim b$; as $V$ is a homomorphism, we then have $V(ac)=V(a)\odot V(c)=V(b)\odot V(c)=V(bc)$, and it follows from Proposition~\ref{Prop_MC} that $V(a)=V(b)$. The semiring $\faktor{\mathbb{B}[\![T]\!] }{\sim}$ is cancellative by definition, and we have a commutative diagram of surjective maps of semirings \begin{equation*} \xymatrix{\mathbb{B}[\![T]\!] \ar[d]_\pi\ar[dr]^V&\\ \faktor{\mathbb{B}[\![T]\!] }{\sim}\ar[r]_{\widetilde{V}}&V\mathbb{B}[T].} \end{equation*} \end{rem} As $V\mathbb{B}[T]$ is multiplicatively cancellative, there is a well-defined semifield $V\mathbb{B}(T)=\text{Frac}(V\mathbb{B}[T])$ of fractions $\frac{a}{b}$ with $a,b\in V\mathbb{B}[T]$, $b\neq0$. The product and sum of fractions in $V\mathbb{B}(T)$ are defined as usual by $\frac{a}{b}\odot\frac{c}{d}=\frac{a\odot c}{b\odot d}$, and $\frac{a}{b}\oplus\frac{c}{d}=\frac{a\odot d\oplus b\odot c}{b\odot d}$. The upshot of Proposition~\ref{Prop_MC} is that $\frac{a}{b}=\frac{c}{d}$ in $V\mathbb{B}(T)$ if and only if $a\odot d=b\odot c$; furthermore, the map $ V\mathbb{B}[T]\ra V\mathbb{B}(T)$ that sends $a$ to $\frac{a}{1}$ is an embedding. For a detailed account, see \cite[Sect. 11]{JG}. \begin{coro} \label{extension_trop} The map $\text{trop}:K(\!(T)\!)
\rightarrow V\mathbb{B}(T)$ defined by $\operatorname{trop}(\frac{f}{g}):=\frac{\operatorname{trop}(f)}{\operatorname{trop}(g)}$ is a $K$-algebra valuation. \end{coro} \begin{proof} Follows immediately from Theorem~\ref{VBT_theorem}, which establishes that $\text{trop}:K[\![T]\!] \rightarrow V\mathbb{B}[T]$ has the same property. \end{proof} Now let $F_{m,n}:=(K(\!(T)\!) [x_{i,J}: i,J],D)$ denote the differential ring of differential polynomials with coefficients in the differential field $K(\!(T)\!) $. By clearing denominators, every $P\in F_{m,n}$ decomposes as \begin{equation} \label{from_field_to_ring} \lambda P=Q, \text{ where }\lambda\in K[\![T]\!] \setminus\{0\}\text{ and }Q\in K_{m,n}. \end{equation} The decomposition \eqref{from_field_to_ring} is the key to extending our results from $K[\![T]\!]$ to $K(\!(T)\!)$. \begin{coro} Given $w\in\mathbb{B}_m ^n$, the map $\operatorname{trop}_w:K(\!(T)\!) [x_{i,J}: i,J] \ra V\mathbb{B}(T)$ that sends $P=\sum_Ma_ME_M$ to $\operatorname{trop}_w(P):=\bigoplus_M(\operatorname{trop}(a_M)\odot V(E_M(w)))$ is a multiplicative $K$-algebra seminorm. \end{coro} \begin{proof} Given $P\in F_{m,n}$, write $P=\frac{Q}{\lambda}$ as in \eqref{from_field_to_ring}, so that $\operatorname{trop}_w(P)=\frac{\operatorname{trop}_w(Q)}{\operatorname{trop}(\lambda)}$; the desired result follows from Corollary \ref{coro_Longue_prop}, which establishes that $\operatorname{trop}_w:K[\![T]\!] [x_{i,J}: i,J]\ra V\mathbb{B}[T]$ is a multiplicative $K$-algebra seminorm. \end{proof} We propose the following generalization of \eqref{initial_modified} from $K_{m,n}$ to $F_{m,n}$. \begin{dfn} \label{initial_field} Given $P\in F_{m,n}$, write $P=\frac{Q}{\lambda}$ as in \eqref{from_field_to_ring}. 
The {\it initial form} $\text{in}_w^*(P)$ of $P$ at $w \in \mb{B}_m^n$ is $\text{in}_w^*(P):=\frac{\text{in}_w^*(Q)}{\overline{\lambda}}.$ \end{dfn} \begin{rem} The notion of relevancy may be extended from $V\mathbb{B}[T]$ to $V\mathbb{B}(T)$ as follows: whenever $\frac{a}{b}\leq\frac{c}{d}$, we say that $\frac{a}{b}$ is relevant for $\frac{c}{d}$ provided that $a\odot d\nprec c\odot b$ in $V\mathbb{B}[T]$. Now say as before that $\lambda P=Q$, with $P=\sum_{M}\frac{a_M}{b_M}E_M$. Given a monomial $c_ME_M$ of $Q$, we have $$\text{trop}_w(\frac{a_M\la}{b_M}E_M)=\text{trop}_w(c_ME_M)\nprec \text{trop}_w(Q)=\text{trop}(\lambda)\odot \text{trop}_w(P).$$ It follows from Remark \ref{properties_of_irrelevant} that $\text{in}_w^*(Q)$ (and hence $\text{in}_w^*(P)$) is supported on the set of $M$ for which $\text{trop}_w(\frac{a_M}{b_M}E_M)\nprec \text{trop}_w(P)$. \end{rem} We thus have a decomposition \begin{equation}\label{decomposition_relevant} \lambda P=Q=\text{in}_w^{\ast}(Q)+Q^{\ast}=\overline{\lambda}\cdot \text{in}_w^*(P)+P^{\pr} \end{equation} in which $\operatorname{trop}_w(b_NE_N)\prec \operatorname{trop}_w(\lambda P)$ for every monomial $b_NE_N$ of $P^{\pr}$. In fact, more is true; namely, $\operatorname{trop}_w(P^{\pr})\prec \operatorname{trop}_w(\lambda P)=\operatorname{trop}_w(Q)=\operatorname{trop}_w(\text{in}_w^{\ast}(Q))$. So again $P^{\pr}=Q-\text{in}_w^{\ast}(Q)$ is the $w-$irrelevant part of $\lambda P$. We close this preliminary subsection with some significant extensions of previous results. It is worth noting that in the extended version of the fundamental theorem \ref{EEFT}, the supports of solutions in $K[\![T]\!]^n$ for systems defined over the field $K(\!(T)\!)$ are controlled by the theory over $K[\![T]\!]$. \begin{thm} \label{Previous_EEFT} A differential ideal $G \subset F_{m,n}$ has a solution in $K_m ^n$ with support vector $w\in \mathbb{B}_m^n$ if and only if $\{\text{in}_w^*(P):P\in G\}$ contains no differential monomials. 
\end{thm} \begin{proof} The proof imitates that of \cite[Cor. 1]{HG19}, as follows. Let $G_0=G\cap K_{m,n}$; then for any $P\in G$ equation \eqref{from_field_to_ring} yields $Q=\lambda P\in G_0$, and $\text{in}_w^*(Q)=\overline{\lambda} \cdot \text{in}_w^*(P)$ by definition. On the other hand, clearly $\varphi\in K_m ^n$ is a solution of $G$ if and only if it is a solution of $G_0$; the desired result now follows from part 2 of Theorem~\ref{EFT}. \end{proof} \begin{dfn}\label{initial_ideal_ext_def} {Given a differential ideal $G \subset F_{m,n}$ and $w\in \mathbb{B}_m^n$, the \emph{initial ideal} $\text{in}_w^*(G)$ of $G$ with respect to $w$ is the algebraic ideal $(\text{in}_w^*(P):P\in G)$ of $F_{m,n}$.} \end{dfn} \begin{thm}[Fundamental theorem of tropical differential algebraic geometry over $F_m$] \label{EEFT} Fix $m,n\geq1$, let $K$ be an uncountable, algebraically closed field of characteristic zero, and let $\Sigma\subset F_{m,n}$ be a family of differential polynomials with associated differential ideal $[\Sigma]$ and solution set $\text{Sol}(\Sigma)=\{\varphi\in K_m ^n:P(\varphi)=0 \text{ for all }P\in\Sigma\}$. The following subsets of $\mathbb{B}_m ^n$ coincide: \begin{enumerate} \item the set $\{w\in \mathbb{B}_m ^n:\text{in}_w^*([\Sigma])\text{ is monomial-free}\}$ as in Definition \ref{initial_ideal_ext_def}; and \item the set $\text{sp}(\text{Sol}(\Sigma))=\{\text{sp}(\varphi):\varphi\in\text{Sol}(\Sigma)\}$ of coordinatewise supports of points in $\text{Sol}(\Sigma)$. \end{enumerate} \end{thm} \begin{proof} The proof is similar to that of \cite[Lem. 3]{HG19}. Letting $G=[\Sigma]$, it suffices to show that if $\text{in}_w^*(G)$ contains a monomial, then $\{\text{in}_w^*(P):\:P\in G\}$ also contains a monomial (as the converse is clear). We then conclude by applying Theorem~\ref{Previous_EEFT}. 
Accordingly, say that $E=\sum_iZ_i\text{in}_w^*(P_i)$, where $P_i\in G$ and $Z_i\in F_{m,n}$ satisfy $\lambda_iP_i=R_i$ and $\mu_iZ_i=S_i$ for some $R_i,S_i\in K_{m,n}$ and $\lambda_i,\mu_i\in K[\![T]\!]$. According to Definition~\ref{initial_field}, we have $\text{in}_w^*(P_i)=\frac{\text{in}_w^*(R_i)}{\overline{\lambda_i}}$; and thus $cE=\sum_i(c_iS_i)\text{in}_w^*(R_i)$, where $c=\prod_j\mu_j\overline{\lambda_j}$ and $c_i=\prod_{j\neq i}\mu_j\overline{\lambda_j}$. Let $P=\sum_i(c_iS_i)R_i=\sum_i(c_iS_i)\lambda_iP_i\in G$; we will show that $\overline{c}E=\text{in}_w^*(P)$. Indeed, for each index $i$, write $c_iS_i=\sum_jb_{N(ij)}E_{N(ij)}$ and $R_i=\sum_ka_{M(ik)}E_{M(ik)}$, so that $P=\sum_i\biggl[\sum_{j,k}b_{N(ij)}a_{M(ik)}E_{N(ij)}E_{M(ik)}\biggr]$ and \[ \operatorname{trop}_w(P)=\bigoplus_i\biggl[\bigoplus_{j,k}\operatorname{trop}_w(b_{N(ij)}a_{M(ik)}E_{N(ij)}E_{M(ik)})\biggr]. \] Further write $a_{M(ik)}=\overline{a_{M(ik)}}+a_{M(ik)}^*$ if $M(ik)$ is in the support of $\text{in}_w^*(R_i)$, and $a_{M(ik)}=a_{M(ik)}^*$ otherwise. For every $i$ we have $\operatorname{trop}_w(R_i^*)\prec \operatorname{trop}_w(R_i)=\operatorname{trop}_w(\text{in}_w^*(R_i))$, so $\operatorname{trop}_w(c_iS_iR_i^*)\prec \operatorname{trop}_w(c_iS_iR_i)=\operatorname{trop}_w(c_iS_i\text{in}_w^*(R_i))$ by Remark~\ref{properties_of_irrelevant}. Hence $$\bigoplus_{j,k}\operatorname{trop}_w(b_{N(ij)}a_{M(ik)}E_{N(ij)}E_{M(ik)})=\bigoplus_{j,k}\operatorname{trop}_w(b_{N(ij)}\overline{a_{M(ik)}}E_{N(ij)}E_{M(ik)})$$ and summing over $i$ yields $$\operatorname{trop}_w(P)=\bigoplus_i\biggl[\bigoplus_{j,k}\operatorname{trop}_w(b_{N(ij)}\overline{a_{M(ik)}}E_{N(ij)}E_{M(ik)})\biggr].$$ Thus the only monomials of $P$ in (the support of) $\text{in}_w^*(P)$ are those in $cE=\sum_i(c_iS_i)\text{in}_w^*(R_i)$. Suppose $\text{trop}_w(b_{N(ij)}\overline{a_{M(ik)}}E_{N(ij)}E_{M(ik)})\nprec \operatorname{trop}_w(P)$ for one of these.
Then either i) $E_{N(ij)}E_{M(ik)}=E$ or ii) $E_{N(ij)}E_{M(ik)}\neq E$, where the sum over all the monomials of the second type is zero by hypothesis. In the second case, we conclude that $-b_{N(ij)}\overline{a_{M(ik)}}E_{N(ij)}E_{M(ik)}$ also appears in $cE$, and their initial forms cancel. Thus $\text{in}_w^*(P)=\overline{c}E$, and our proof is complete. \end{proof} Given a differential ideal $I\subset F_{m,n}$ and a weight vector $w\in\mathbb{B}_m^n$, a set $\mathcal{G}\subset I$ is a {\it tropical basis} for $I$ with respect to $w$ provided $\text{in}_w^*(I)=(\text{in}_w^*(\Theta(J)(g)):\:g\in\mathcal{G},J\in \mathbb{N}^m)$. The following result generalizes \cite[Prop. 1]{HG19}; our strategy of proof is slightly different. \begin{prop} \label{prop_local_trop_basis} Let $\omega_m\in \mathbb{B}_m^n$ denote the vector each of whose coordinates is the unique series with support $\mathbb{N}^m$. Given $G\subset K_{0,n}$, let $I=[G]$ denote the differential ideal of $F_{m,n}$ that it generates. Then $G$ is a tropical basis for $I$ with respect to $\omega_m$. \end{prop} \begin{proof} Set $\mathcal{G}:=\{\Theta(J)(g):\:g\in G,\: J\in\mathbb{N}^m\}$ and $w:=\omega_m$. As $\text{Supp}(w_i)=\mathbb{N}^m$ for every $i$ and $G\subset K_{0,n}$, we have $\text{in}_{w}^*(h)=h$ for every $h\in\mathcal{G}$. On the other hand, given any $f\in I$, there is some polynomial $f_1\in K_{m,n}$ and $\lambda\in K[\![T]\!]\setminus\{0\}$ for which $\lambda f=f_1=\sum_ia_ih_i$, where $a_i\in K_{m,n}$ and $h_i\in\mathcal{G}$. We will compute $\text{in}_w^*(f_1)$. To this end, we decompose $a_i$ as $a_i=\text{in}_w^*(a_i)+a^{\pr}_i$, so that $a_ih_i=\text{in}_w^*(a_i)h_i+a^{\pr}_ih_i$. For every monomial $b_ME_M$ of $h_i$, we have $v_w(h_i)=v_w(b_ME_M)=1$.
It follows that $\text{in}_w^*(a_ih_i)=\text{in}_w^*(a_i)h_i$, and consequently that \[ \text{in}_w^*(f_1)=\sum_i\text{in}_w^*(a_i)h_i \text{ and } \text{in}_w^*(f)=\frac{\text{in}_w^*(f_1)}{\overline{\lambda}} \] where the sum is over all relevant indices $i$; hence $\text{in}_w^*(f)$ lies in the ideal generated by $\mathcal{G}$, which means precisely that $G$ is a tropical basis for $I$ with respect to $w$. \end{proof} \subsection{Initial degenerations}\label{initial_degenerations} The geometric counterparts of initial forms are initial {\it degenerations}. Each of these is specified by a scheme-theoretic {\it model} that is flat over the spectrum of the {\it valuation ring} $K(\!(T)\!)^\circ=\{x\in K(\!(T)\!):\operatorname{trop}(x)\leq1\}$ corresponding to the valuation $\operatorname{trop}:K(\!(T)\!)\to V\mathbb{B}(T)$, as in \cite{gub}. Given an affine scheme $X$ defined by an ideal $I\subset K(\!(T)\!)[x_{i,J}:i,J]$, we now let $\mathcal{X}$ denote the spectrum of a quotient ring $K(\!(T)\!)^\circ[x_{i,J}:i,J]/I^{\pr}$, where $I^{\pr}$ is a translation of $I$ induced by a weight vector $w\in\mathbb{B}_m ^n$: this is our putative model. We will show that, for every $w\in\mathbb{B}_m^n$, an analogue of the translation map $K(\!(T)\!) [x_{i,J}:i,J] \ra K(\!(T)\!) ^\circ[x_{i,J}:i,J]$ defined when $m=1$ in \cite{FT20} also exists when $m>1$. On the other hand, we will also show that when $m>1$, there is a tower of ring extensions $K[\![T]\!] \subsetneq K(\!(T)\!)^\circ\subset K(\!(T)\!)$ which diverges from the $m=1$ case. In particular, $K(\!(T)\!)^\circ$ is not a local ring, and therefore is not a valuation ring in the traditional sense; nor is it closed under taking partial derivatives. \begin{coro} Let $\mathcal{C}:=\{x\in K(\!(T)\!):\frac{\partial x}{\partial t_j}=0 \text{ for every } j\}$. Then $\mc{C}=K$, while $K(\!(T)\!)^{\circ}$ is a subring of $K(\!(T)\!)$.
\end{coro} \begin{proof} The first claim is clear; the second follows from the fact that $V\mathbb{B}(T)^{\circ}:=\{a\in V\mathbb{B}(T): a\leq1\}$ is a subsemiring of $V\mathbb{B}(T)$, together with Corollary~\ref{extension_trop}. \end{proof} \begin{dfn} {We call $K(\!(T)\!) ^{\circ}$ (resp., $V\mathbb{B}(T)^{\circ}$) the {\it ring of integers} of $K(\!(T)\!) $ (resp., the {\it semiring of integers} of $V\mathbb{B}(T)$).} \end{dfn} \begin{prop} \label{ring_of_integers} The ring $K(\!(T)\!)^{\circ}$ is an extension of $K[\![T]\!]$, and it is local and differential if and only if $m=1$. \end{prop} \begin{proof} If $m=1$, then $K(\!(T)\!) ^\circ=K[\![T]\!]$ and there is nothing to prove, so we shall suppose that $m>1$. Since every $a\in V\mathbb{B}[T]$ satisfies $a\leq1$, we have $V\mathbb{B}[T]\subset V\mathbb{B}(T)^{\circ}$. In particular, since $\operatorname{trop}(K[\![T]\!] )=V\mathbb{B}[T]$, we deduce that $K[\![T]\!] $ is a subring of $K(\!(T)\!)^{\circ}$. In general this inclusion is proper; for example, we have $\frac{t_1 t_2}{t_1+t_2}\in K(\!(T)\!)^{\circ}\setminus K[\![T]\!]$. To prove the second point, we use the fact that a ring is local if and only if the sum of any two non-units is a non-unit, and we produce an explicit unit that decomposes as a sum of non-units when $m=2$. Appealing to Lemma \ref{order_relation} again, we have $\frac{a}{b}<1$ in $V\mathbb{B}(T)$ if and only if $A=A^{\pr} \sqcup B^{\pr}$, with $A^{\pr}\subsetneq B$ and $B^{\pr}\subset (\widetilde{B}\setminus B)$. Thus $x=\frac{t_1}{t_1+t_2}$ and $y=\frac{t_2}{t_1+t_2}$ are non-units in $K(\!(T)\!) ^{\circ}$ for which $x+y=1$. Finally, the ring $K(\!(T)\!)^\circ$ is not closed under differentiation; for example, when $m=2$, the element $\frac{t_1}{t_1+t_2}$ belongs to $K(\!(T)\!)^\circ$, while $\frac{\partial}{\partial t_1}(\frac{t_1}{t_1+t_2})=\frac{t_2}{(t_1+t_2)^2}$ does not. \end{proof} The following construction is inspired by one carried out in \cite{FT20} in the $m=1$ case.
For each weight vector $w=(w_1,\ldots,w_n)\in\mathbb{B}_m^n$, we define a map \begin{equation} \label{translation_map} \begin{aligned} K(\!(T)\!) [x_{i,J}:i,J]&\longrightarrow K(\!(T)\!) ^\circ[x_{i,J}:i,J]\\ P&\mapsto P_w. \end{aligned} \end{equation} For every $1\leq i\leq n$ and $J\in \mathbb{N}^m$, we let $T(w_i,J)\in K[\![T]\!] $ denote the series $V(\Theta_{\mathbb{B}_m}(J)w_i)$. In terms of the canonical section $e_K:\mathcal{P}(\mathbb{N}^m)\to K[\![T]\!]$, we have \[ T(w_i,J)=e_K(\text{Vert}((W_i-J)_{\geq0})) \] where $W_i=\text{Supp}(w_i)$ for $i=1,\ldots,n$. Given $P\in F_{m,n}$, if $\operatorname{trop}_w(P)=\frac{a}{b}\neq0$ with $A=\text{Supp}(a)$ and $B=\text{Supp}(b)$, we then set $T(\operatorname{trop}_w(P))^{-1}:=\frac{e_K(B)}{e_K(A)} \in K(\!(T)\!)$. We now specify a differential polynomial $P_w$ by substituting every instance of $x_{i,J}$ in $P$ by $T(w_i,J)x_{i,J}$ and then multiplying the result by $T(\operatorname{trop}_w(P))^{-1}$: \begin{equation*} P_w(x_{i,J}):=\begin{cases} T(\operatorname{trop}_w(P))^{-1}P(T(w_i,J)x_{i,J}),&\text{ if }\operatorname{trop}_w(P)\neq0\\ 0,&\text{ if }\operatorname{trop}_w(P)=0. \\ \end{cases} \end{equation*} \begin{lemma} \label{toric_degeneration} The polynomial $P_w$ has coefficients in $K(\!(T)\!)^{\circ}$. \end{lemma} \begin{proof} Let $P=\sum_Ma_ME_M$. We then have $P_w(x)= \sum_Mb_ME_M$, where $b_M=T(\operatorname{trop}_w(P))^{-1}a_M\prod_{i,J}(T(w_i,J))^{M_{i,J}}.$ We need to show that $\operatorname{trop}(b_M)\leq1$, i.e., that \[ \operatorname{trop}(a_M)\odot V(E_M(w))= \operatorname{trop} \bigg(a_M\prod_{i,J}(T(w_i,J))^{M_{i,J}}\bigg)\leq\operatorname{trop}_w(P). \] Here the equality is clear, while the inequality holds because $\operatorname{trop}_w(P)$ is the least upper bound of the summands $\operatorname{trop}_w(a_ME_M)$. \end{proof} Whenever $G\subset K(\!(T)\!) [x_{i,J}:i,J]$ is a differential ideal and the ring \begin{equation} \label{ring_for_model} R_w:=K(\!(T)\!)
^\circ[x_{i,J}:i,J]/(P_w:\:P\in G) \end{equation} is flat over $K(\!(T)\!)^\circ$, there is an associated {\it (algebraic) initial degeneration} of the spectrum of $R=K(\!(T)\!) [x_{i,J}:i,J]/G$ given by specializing the scheme $\mathcal{X}(w):=\text{Spec}(R_w)$ to the fiber over some closed point $p\in \text{Spec}(K(\!(T)\!)^\circ)$. \section{Computational aspects and open problems} \label{Section_CA} In this section we discuss issues at play in effectively computing tropical DA varieties, and we highlight several outstanding problems in need of resolution. The first issue concerns the finiteness of presentations of differential ideals, and tropical bases. As stated, the fundamental theorems \ref{EFT} and \ref{EEFT} are about differential ideals $G$, which are infinite sets of differential polynomials. The Ritt-Raudenbush basis theorem (see \cite{BH19}) establishes that there are always finitely many $P_1,\ldots,P_s\in G$ for which $\text{Sol}(G)=\text{Sol}(P_1)\cap \cdots\cap \text{Sol}(P_s)$. It is not true in general, however, that the DA tropicalization of $\text{Sol}(G)$ is described by the supports of these generators; indeed, as explained in Remark~\ref{rem_paradigms}, an infinite set $\{\text{sp}(P):P\in G\}$ is generally required to recover the DA tropicalization. A {\it tropical DA basis} for $G$ is a (preferably smaller) subsystem $\Phi\subset G$ from which we may compute both the DA variety $\text{Sol}(\Phi)=\text{Sol}(G)$ and its corresponding DA tropicalization $\text{Sol}(\text{sp}(\Phi))=\text{Sol}(\text{sp}(G))$. It is worth noting that Groebner bases exist in both the tropical and the differential settings, and that tropical differential Groebner bases for ordinary differential tropical systems were introduced in \cite{HG19}, but at present these are lacking in the more general partial differential case. 
These would simultaneously represent adaptations of tropical Groebner bases to the differential setting, and of differential Groebner bases to the tropical setting. \noindent {\it Problem 1.} Define a proper notion of tropical basis for the partial differential setting. The second issue is that of effectively evaluating vertex polynomials. Given $P= \sum_Ma_ME_M$ in $K_{m,n}$ and $a\in {K}[\![T]\!]^n$, we have $P(a)=\sum_Ma_ME_M(a)$, and consequently $V(P(a))=\bigoplus_MV(a_M E_M(a))=\bigoplus_MV(a_M)\odot V(E_M(a))$, as $V:\mathbb{B}[\![T]\!] \ra V\mathbb{B}[T]$ is a semiring homomorphism. As a result, solutions of $P$ may be formulated in terms of the vertex polynomials $V(a_M)\odot V(E_M(a))$ (as per the original definition given in \cite{FGH20}). Using the decomposition \eqref{New_decomposition}, it should be possible to compute the vertex polynomials $V(a)$ algorithmically by combining \begin{enumerate} \item a routine that computes the vertex set $V(P)$ of a polyhedron $P$; and \item a routine that computes the distinguished polytopal representative $P=P_a$ in the decomposition \eqref{newton_decomposition} of the integral polyhedron $\text{New}(a)$. \end{enumerate} This is significant, as implementations of each of these routines are available. A closely related problem is that of producing concrete formulas for the binary operations $\oplus$ and $\odot$ on $V\mathbb{B}[T]$; this, in turn, relates to a number of subsidiary problems involving operations on polytopes. Recall from \eqref{uniqueness} that, given vertex polynomials $a$ and $b$ with supports $A$ and $B$ respectively, we have \[ V(ab)\subseteq \mathcal{V}(\Delta_a\otimes \Delta_b)\subseteq A+B,\quad \text{ and }\quad V(a+b)\subseteq \mathcal{V}(\Delta_a\oplus \Delta_b)\subseteq A\cup B \] where $\Delta_{\al}=\text{Conv}(\al)$.
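The displayed inclusions can be checked computationally. The following Python sketch is a hedged illustration for $m=2$: the encoding of supports as sets of pairs, the function names, and the working assumption that the vertex set of $\text{New}(S)=\text{Conv}(S)+\mathbb{R}_{\geq0}^2$ consists of the extreme Pareto-minimal points of $S$ are all ours, not taken from the text.

```python
def minkowski(A, B):
    """Minkowski sum of two finite subsets of N^2."""
    return {(a0 + b0, a1 + b1) for (a0, a1) in A for (b0, b1) in B}

def vertices(S):
    """Vertex set of Conv(S) + R_{>=0}^2 (our assumed model of New(S)).

    Pass 1: keep Pareto-minimal points only (every vertex minimizes a
    strictly positive linear functional, hence is Pareto-minimal).
    Pass 2: a monotone-chain scan keeps only the strictly convex corners
    of the lower-left chain, discarding collinear and reflex points.
    """
    pts = [p for p in S
           if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in S)]
    pts.sort()  # x increasing; Pareto-minimality forces y decreasing

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    chain = []
    for p in pts:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
            chain.pop()
        chain.append(p)
    return set(chain)

A = {(0, 1), (1, 0)}        # supports of two vertex polynomials
B = {(0, 2), (2, 0)}
S = minkowski(A, B)         # {(0,3), (1,2), (2,1), (3,0)}
print(sorted(vertices(S)))  # [(0, 3), (3, 0)]
```

Here $V(ab)=\{(0,3),(3,0)\}$ is strictly contained in $A+B$: the middle points $(1,2)$ and $(2,1)$ lie on the edge joining the two true vertices and are discarded by the second pass.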
Thus, as pointed out in \cite{DT}, one strategy for effectively computing $a\odot b=V(ab)$ involves first computing $A+B$, and then applying a second algorithm to remove all points that are not in $\mathcal{V}(\Delta_a\otimes \Delta_b)$. On the other hand, computing $a \oplus b$ naturally leads to the problem of computing $\mathcal{V}(\Delta_a\oplus \Delta_b)$; the latter problem is referred to in the literature as {\it redundancy removal}. \noindent {\it Problem 2.} Implement algorithms for computing vertex polynomials and vertex sets. Another issue we would like to highlight is that of convergence. Thus far, we have only considered solutions as vectors of formal power series in $K_m ^n$, but for geometric applications it would be useful to know whether they represent analytic functions $\Omega\rightarrow \mathbb{A}_K^{1,\text{an}}$ with a common domain $0\in\Omega\subset \mathbb{A}_K^{m,\text{an}}$, whenever $K$ is a Banach field. Possibilities are stratified according to whether or not the field $K$ is Archimedean. \begin{enumerate} \item[(1)] {\it The Archimedean case}. Here the only possibility is that ${K}=\mathbb{C}$ with the Euclidean topology; thus $\mathbb{A}_{{K}}^{m,\text{an}}=(\mathbb{C}_\infty^m,\mathcal{H})$, where $\mathcal{H}$ is the sheaf of holomorphic functions. \item[(2)] {\it The non-Archimedean case}. As we mentioned in the introduction, an interesting choice of field is $K=\mathbb{Q}_p^{\text{alg}}$; and an interesting choice of topological space is the Berkovich space $\mathbb{A}_{\mathbb{C}_p}^{m,\text{an}}$ associated to the completion $\mathbb{C}_p$ of $\mathbb{Q}_p^{\text{alg}}$. \end{enumerate} \noindent {\it Problem 3.} Study the analytic properties of lifts of tropical DA varieties. \subsection{Seminorms, bis} We saw in Proposition~\ref{Longue_prop} that the support map $\text{sp}:{K}_{m,n}\rightarrow \mathbb{B}_{m,n}$ is a $K$-algebra norm and also interacts in an interesting way with differentials.
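The norm (rather than homomorphism) behaviour of $\text{sp}$ is easy to see in coordinates. The following minimal sketch, in our own encoding with $m=1$ and the differential variables suppressed, shows that the support of a product can be strictly smaller than the product (Minkowski sum) of the supports, because coefficients over $K$ may cancel:

```python
def mul(f, g):
    """Multiply two polynomials encoded as {exponent: coefficient} dicts."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return {i: c for i, c in h.items() if c != 0}  # drop cancelled terms

def sp(f):
    """Support map: a polynomial over K goes to its support, an element
    of the boolean semiring B[t]."""
    return frozenset(f)

f = {0: 1, 1: -1}   # 1 - t
g = {0: 1, 1: 1}    # 1 + t
print(sorted(sp(mul(f, g))))                          # [0, 2]  (fg = 1 - t^2)
print(sorted({i + j for i in sp(f) for j in sp(g)}))  # [0, 1, 2]
```

The strict inclusion $\{0,2\}\subsetneq\{0,1,2\}$ is exactly the seminorm inequality in this encoding: no cancellation is visible on the boolean side.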
In particular, it may be useful in the future study of tropical differential algebra from the perspective of differential (idempotent) semirings. \noindent {\it Question 1.} Is there a well-behaved notion of tropical differential ideal? The fundamental theorem characterizes the systems of tropical differential equations $\Sigma\subset \mathbb{B}_{m,n}$ whose solution sets equal the tropicalizations of differential algebraic sets $\text{Sol}(G)$ over an algebraically closed uncountable field of characteristic zero. \noindent {\it Question 2.} Is there an axiomatic characterization of such tropical systems? Classically, a tropical prevariety is a finite intersection of tropical hypersurfaces; see \cite[p.102]{MS}. On the other hand, the concept of tropical DA hypersurface is clearly well-defined (and is illustrated explicitly in Examples~\ref{ex_trop_hyp} and \ref{ex_trop_hyp_2}). \noindent {\it Question 3.} Is there a well-defined notion of tropical DA prevariety? The theory of initial forms with respect to the non-Krull valuation trop introduced in section~\ref{Section_Initial_forms} also opens the door to a number of natural questions. \noindent {\it Question 4.} Is there a theory of Groebner bases adapted to the valuations trop: $K[\![T]\!] \rightarrow V\mathbb{B}[T]$ and trop:$K(\!(T)\!) \rightarrow V\mathbb{B}(T)$ when $m>1$? In \cite{HG19} such a Groebner theory is developed when $m=1$, in which case trop is a Krull valuation. In section~\ref{Section_Fromringtofield}, we extended the trop valuation from the domain $K[\![T]\!] $ to its quotient field $K(\!(T)\!)$. In doing so, we established in Proposition \ref{ring_of_integers} that whenever $m>1$ the ring of integers $K(\!(T)\!)^{\circ}$ is not a local ring, and that it properly contains $K[\![T]\!]$. 
Further, according to Lemma \ref{toric_degeneration}, there is a translation map $P\mapsto P_w$ associated to every given weight vector $w\in\mathbb{B}_m^n$ that sends a differential polynomial with coefficients in $K(\!(T)\!)$ to a differential polynomial with coefficients in $K(\!(T)\!)^{\circ}$. We correspondingly considered the spectra $\mathcal{X}(w):=\text{Spec}(R_w)$ for $R_w$ as in \eqref{ring_for_model}. Each $R_w$ potentially represents an {\it integral model} of $R$; it remains, however, to check that it is flat. \noindent {\it Question 5.} Given $w$, is $\mathcal{X}(w)$ {\it flat} over the ring of integers $K(\!(T)\!)^\circ$? In forthcoming work \cite{FGH21} we will show that the answer is ``yes''. A subsidiary question with obvious geometric implications would be to study the scheme $\text{Spec}(K(\!(T)\!)^{\circ})$ that parameterizes our initial degenerations. Finally, in order to prove the missing item (1) of the fundamental theorem \ref{EFT} for $F_{m,n}$ when $m>1$, we require a differential enhancement in the sense of Remark \ref{rem_diff_enh} of the basic tropicalization scheme $\operatorname{trop}:K(\!(T)\!) \to V\mathbb{B}(T)$. See \cite[Ex. 7.4]{FGH20} for a related discussion when $m=1$. \noindent {\it Question 6.} For every $m \geq 1$, is there an analogue of item (1) of the fundamental theorem \ref{EFT} for the ring $F_{m,n}$? \subsection{An example} \label{Example} To illustrate how tropical differential algebra works, in this subsection we compute $P(a)$ for particular choices of $P\in \mathbb{B}[\![t,u]\!]\{x,y\}$ and $a\in \mathbb{B}_2=\mathbb{B}[\![t,u]\!]$; and we give a general strategy for computing the solution set of an arbitrary differential polynomial $P\in\mathbb{B}_{m,n}$. Accordingly, let $a=(a_1,a_2)=(t+t^2+tu+u^3,1+u+t^2u+t^3u^2)$ and let \begin{equation} \label{concrete_example} P=(t+u)x_{(0,0)}y_{(1,1)}^3+(1+t^2u^2)x_{(1,0)}x_{(0,1)}+ty_{(1,0)}^2.
\end{equation} Since there are comparatively few variables and the differential operators involved are of low order, we will use standard partial derivative notation. We then have \begin{itemize} \item $a_{M_1}E_{M_1}(a)=(t+u)a_1(\tfrac{\partial^2 a_2}{\partial t\partial u})^{3}=(t+u)(t+t^2+tu+u^3)(t+t^2u)^{3}$; \item $a_{M_2}E_{M_2}(a)=(1+t^2u^2)(\tfrac{\partial a_1}{\partial t })(\tfrac{\partial a_1}{\partial u})= (1+t^2u^2)(1+u+t)(t+u^2)$; and \item $a_{M_3}E_{M_3}(a)=t(\tfrac{\partial a_2}{\partial t})^2 =t(tu+t^2u^2)^2=t^3u^2(1+tu+t^2u^2)$. \end{itemize} The evaluation $P(a)$ is the union of three formal series $a_{M_i}E_{M_i}(a)$, $i=1,2,3$; they are displayed in Figure~\ref{Figura}. \begin{figure}[!htb] \centering \includegraphics[scale=0.65]{dibujo.pdf} \caption{Values of $a_{M_i}E_{M_i}(a)$ for $i=1,2,3$, and $P(a)$.} \label{Figura} \end{figure} \noindent Finally, note that $V(P(a))=V(a_{M_2})\odot V(E_{M_2}(a))=t+u^2$, and since $t$ and $u^2$ do not belong to the supports of either of the other two $a_{M_i}E_{M_i}(a)$, it follows that $a$ is not a solution of $P$. We close by showing how to work with non-archimedean differential amoebae. \begin{rem} \label{non_arch_amoeba} The isomorphism of differential semirings $\text{Supp}:(\mathbb{B}[\![T]\!] ,D)\xrightarrow[]{}(\mathcal{P}(\mathbb{N}^m),D)$ from Remark~\ref{rem_difference} corresponds to the map $\ell:{\rm S}\to {\rm S}^{\text{op}}$. As $\text{sp}:K_m \rightarrow\mathbb{B}_m$ is a seminorm, it follows that the amoeba of a DA variety $\text{Sol}(\Sigma)\subset K_m ^n$ is precisely its image under $\ell\circ\text{sp}:K_m ^n\rightarrow (\mathcal{P}(\mathbb{N}^m))^n$. \end{rem} Now equip $\mathbb{N}^m$ with the usual product order. Given $X\subset \mathbb{N}^m$ and $J\in \mathbb{N}^m$, let $X({\geq J}):=\{I\in X:\:I\geq J\}$. 
Note that \begin{equation*} \label{concrete_description_sup_der} (X-J)_{\geq0}=\begin{cases} X({\geq J})-J,&\text{ if }X({\geq J})\neq\emptyset; \text{ and}\\ \emptyset,&\text{ otherwise}.\\ \end{cases} \end{equation*} Any differential monomial $aE$, with $E=\prod_{1\leq i\leq n, J\in\mathbb{N}^m}x_{i,J}^{m_{i,J}}$ and $A=\text{Supp}(a)$, induces an operator $aE:\mathcal{P}(\mathbb{N}^m)^n\xrightarrow[]{}\mathcal{P}(\mathbb{N}^m)$ defined by \begin{equation*} aE(X_1,\ldots,X_n)= \begin{cases} \emptyset,\text{ if }X_i(\geq J)=\emptyset\text{ for some }(i,J)\text{ in }E;&\text{ and}\\ [\sum_{i,J}m_{i,J}X_{i}(\geq J)-\sum_{i,J}m_{i,J}J]+\text{Vert}(A)&\text{ otherwise.}\\ \end{cases} \end{equation*} We may rewrite the second expression as $\sum_{i,J}m_{i,J}X_{i}(\geq J)+[\text{Vert}(A)-\sum_{i,J}m_{i,J}J]$ provided we authorize vectors with entries in $\mathbb{Z}^m$. Executing this procedure using the polynomial $P$ from \eqref{concrete_example}, the maps $\mathcal{P}(\mathbb{N}^2)^2 \xrightarrow[]{}\mathcal{P}(\mathbb{N}^2)$ induced by its monomials are \begin{equation*} \begin{aligned} (t+u)x_{(0,0)}y_{(1,1)}^3(X,Y)&=X+3Y({\geq(1,1)})+\{(-2,-3),(-3,-2)\};\\ (1+t^2u^2)x_{(1,0)}x_{(0,1)}(X,Y)&=X({\geq(1,0)})+X({\geq(0,1)})+(-1,-1); \text{ and}\\ ty_{(1,0)}^2(X,Y)&=2Y({\geq(1,0)})+(-1,0). \end{aligned} \end{equation*} These equations impose restrictions on the support vector $(X,Y)\in\mathcal{P}(\mathbb{N}^2)^2$ of solutions of $P$. It remains to carry out a case-by-case analysis of the expression $V(P(a))=\bigoplus_MV(a_ME_M(a))$ based on the number of relevant summands, which must be at least two in order for $a$ to be a solution of $P$. The casewise inspection required here is very similar to that needed to describe explicitly a tropical hypersurface $V(f)$ as a union of semialgebraic sets in tropical affine space.
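The computations in this subsection are easily mechanized. The sketch below uses our own minimal encoding: a boolean series is identified with its support, sum is union, product is the Minkowski sum of supports, and $\Theta(J)$ acts by $X\mapsto(X-J)_{\geq0}$. It reproduces the support of $a_{M_3}E_{M_3}(a)=t\,(\partial a_2/\partial t)^2$ from the worked example above, first by direct evaluation and then via the induced operator $2\,Y(\geq(1,0))+(-1,0)$:

```python
def theta(J, X):
    """Support of Theta(J) applied to a boolean series: (X - J)_{>=0}."""
    return {(i0 - J[0], i1 - J[1]) for (i0, i1) in X
            if i0 >= J[0] and i1 >= J[1]}

def prod(*supports):
    """Boolean product of series: the support of a product of boolean
    series is the Minkowski sum of their supports."""
    out = {(0, 0)}
    for S in supports:
        out = {(x0 + s0, x1 + s1) for (x0, x1) in out for (s0, s1) in S}
    return out

def restrict(X, J):
    """X(>= J) = {I in X : I >= J}, componentwise."""
    return {I for I in X if all(i >= j for i, j in zip(I, J))}

a2 = {(0, 0), (0, 1), (2, 1), (3, 2)}       # 1 + u + t^2 u + t^3 u^2

# direct evaluation of the third monomial: t * (da2/dt)^2
d = theta((1, 0), a2)                       # tu + t^2 u^2
direct = prod({(1, 0)}, d, d)

# the induced operator 2 Y(>=(1,0)) + (-1, 0); empty if Y(>=(1,0)) is empty
YJ = restrict(a2, (1, 0))
induced = (set() if not YJ else
           {(p0 + q0 - 1, p1 + q1) for (p0, p1) in YJ for (q0, q1) in YJ})

print(sorted(direct))   # [(3, 2), (4, 3), (5, 4)], i.e. t^3u^2 + t^4u^3 + t^5u^4
```

Both routes give the series $t^3u^2(1+tu+t^2u^2)$ computed in the example, confirming that the operator formulation agrees with direct evaluation.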
\section{Introduction} The potential of convolutional deep learning models for image classification is by now well established. Research in this field has followed several directions, namely new architecture and framework proposals \cite{richards2019deep, khan2019novel}, training methods \cite{wu2017effective, xu2018semantic}, multi-tasking \cite{zhang2020deep, luvizon20182d}, attention mechanisms \cite{li2019understanding, jain2020deep}, and explainability and interpretability \cite{samek2017explainable, vellido2019importance}, among others. Techniques such as attention mechanisms make it possible to force the model to pay attention to certain features, whilst explainable artificial intelligence techniques allow the model to be interpreted, revealing what happens during the learning process. However, to the best of our knowledge, the combination of both approaches has not been explored. Motivated by this gap, we aim to improve the training procedure by interpreting the model and focusing it on certain regions of interest. To this end, our proposed approach modifies the classical training procedure to include online information and thus adapt the learning process based on the features on which the network is focused. More specifically, we propose a new training scheme that benefits from the saliency maps provided by visual explanation techniques. Our hypothesis is that, by the end of the training phase, the model should use as many features as possible to make a robust prediction. In this sense, we apply a visual explanation algorithm to identify the regions on which the model bases its decisions. After identifying those relevant areas, we partially occlude them, trying to \textit{distract} the model and forcing the detection of other regions that, a priori, are weak (i.e., less informative for discriminating the class).
Our intention is to highlight that the model should not forget what the occluded regions mean, but rather that it should learn to recognize other features on which to base its decision. This is ensured because the occluded images are combined with the original ones during the learning process. We believe fine-grained image classification problems can benefit the most from this approach, as they involve many classes that differ from each other only in small details, and our training approach forces the network to find them. For this reason, we evaluated the proposed training scheme on two well-known fine-grained recognition datasets, namely Stanford Cars \cite{stanfordCars} and FGVC-Aircraft \cite{aircraftDataset}, composed of 16,185 and 10,000 images respectively. In addition, we carried out experiments on top of different backbone architectures to demonstrate that our proposal improves performance regardless of the respective network. Furthermore, we evaluate the robustness of our model in a real-scenario case study: recognizing the food-related scene that an egocentric image depicts. The analysis of egocentric images is an emerging field within computer vision that has gained attention in recent years \cite{damen2018scaling}. Images captured by wearable cameras during daily life record information about the lifestyle of the users from a first-person perspective \cite{bolanos2016toward, talavera_dataset}. The analysis of this information can be used to improve people's health-related habits \cite{gelonch2020effects}. In particular, the analysis of food-related egocentric images can be a powerful tool for analyzing people's nutritional habits, and has been the focus of previous research \cite{macnet,talavera_dataset}. In this context, we carried out experiments on the EgoFoodPlaces dataset \cite{talavera_dataset}, which is composed of 33,801 images and describes food-related locations gathered by 11 camera wearers throughout their daily life activities.
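The occlusion step at the core of the scheme described above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name, the fixed occluded fraction, and the zero fill value are not taken from the paper); the saliency map is assumed to be supplied by a visual explanation technique:

```python
import numpy as np

def occlude_salient(image, saliency, frac=0.25, fill=0.0):
    """Occlude the `frac` most salient pixels of an image.

    image:    H x W x C float array.
    saliency: H x W array, e.g. produced by a visual explanation method.
    Returns a copy with the most salient pixels set to `fill`, hiding the
    regions the model currently relies on and pushing it toward weaker,
    secondary features.
    """
    thresh = np.quantile(saliency, 1.0 - frac)
    mask = saliency >= thresh          # True on the most salient region
    occluded = image.copy()
    occluded[mask] = fill
    return occluded

# During training, occluded images are interleaved with the originals in
# each batch, so the model keeps seeing the salient regions as well.
```

The mixing of occluded and original images is what prevents the model from simply forgetting the occluded features.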
The contributions of this research work are three-fold: \begin{enumerate} \item A novel training scheme for CNN image classification that makes use of visual explanation techniques, with the main aim of improving the robustness and the generalization ability of the trained models. \item The experiments carried out demonstrate the competitiveness of our training scheme, which outperforms the classical approach on two public datasets commonly used in fine-grained recognition tasks, regardless of the backbone architecture. \item Our proposed method achieves competitive results in a real-world scenario addressing the classification of egocentric photo-streams depicting food-related scenes. \end{enumerate} The rest of the paper is organized as follows. Section \ref{section:related_work} includes an overview of related works. Section \ref{section:proposed_approach} presents the proposed training approach. Section \ref{section:experimental_results} introduces the two datasets for fine-grained recognition, describes the experiments carried out and analyzes the obtained results. Section \ref{section:case_study} describes and evaluates the case study focused on egocentric vision. Finally, Section \ref{section:conclusion} closes with our conclusions and future lines of research. \section{Related Work}\label{section:related_work} While the very first machine learning systems were easily interpretable, recent years have been characterized by an upsurge of opaque decision systems, such as deep neural networks (DNNs) \cite{xai_herrera,xai_natalia}. DNNs are the state of the art on many machine learning tasks due to their great generalization and prediction skills. However, they are considered \textit{black-box} machine learning models. In this context, there has been a growing influx of work on explainable artificial intelligence.
Post-hoc local explanations, which refer to the use of interpretation methods after training a model, and feature relevance methods have become the most widely adopted approaches to explain DNNs \cite{xai_herrera}. In this section, we review some methods that produce \textit{visual explanations} for decisions of a large class of DNN-based models, making them more transparent and reliable. Most of these visual explanation techniques provide heat maps that identify the regions of the input images that networks look at when making predictions, allowing the data to be interpreted at a glance. Note that these heat maps are also referred to in the literature as sensitivity maps, saliency maps, or class activation maps. Class activation mapping (CAM) \cite{cam} is a well-known procedure for generating class activation maps using global average pooling in CNNs. Its authors expect each unit to be activated by some visual pattern within its receptive field. The class activation map is simply a weighted linear sum of the presence of these visual patterns at different spatial locations. By upsampling the class activation map to the size of the input image, the most relevant image regions for a particular category can be identified. However, CAM can only be used with a restricted set of layers and architectures. A class-discriminative localization technique called gradient-weighted class activation mapping (Grad-CAM) was proposed in \cite{gradcam}. It is a generalization of CAM that can be applied to a significantly broader range of CNN families. Grad-CAM uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map, highlighting the regions of the image that are relevant for the prediction.
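The core map-combination step of Grad-CAM can be made concrete with the following sketch. This is an illustrative pure-Python rendition, not the authors' implementation: it assumes the last-layer feature maps and the class-score gradients are already available, and it omits the upsampling to input resolution and the guided-backpropagation refinement.

```python
def grad_cam_map(feature_maps, gradients):
    """Coarse Grad-CAM localization map (pure-Python sketch).

    feature_maps: list of C channels, each an HxW grid, from the last
                  convolutional layer.
    gradients:    same shape; gradients of the target class score w.r.t.
                  the feature maps.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel importance weights: global-average-pool the gradients.
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted combination of the feature maps, then ReLU to keep only
    # features with a positive influence on the class score.
    cam = [[max(0.0, sum(a * fm[i][j] for a, fm in zip(alphas, feature_maps)))
            for j in range(w)] for i in range(h)]
    return cam

# Toy example with two 2x2 channels.
A = [[[1.0, 0.0], [0.0, 0.0]],
     [[0.0, 0.0], [0.0, 2.0]]]
G = [[[1.0, 1.0], [1.0, 1.0]],      # yields alpha_0 = +1
     [[-1.0, -1.0], [-1.0, -1.0]]]  # yields alpha_1 = -1
cam = grad_cam_map(A, G)            # elementwise ReLU(A[0] - A[1])
```

In a real network the same combination is applied to the rectified feature maps of the last convolutional layer, and the resulting coarse map is then upsampled to the input image size.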
Given an image and a class of interest (e.g., \textit{tiger cat}) as inputs, Grad-CAM forward propagates the image through the convolutional part of the model and then through task-specific computations to obtain a raw score for the category. The signal is set to 0 for all classes except for the desired class (\textit{tiger cat}), for which it is set to 1. This signal is then backpropagated to the rectified convolutional feature maps of interest, which are combined to compute the coarse Grad-CAM localization that represents where the model looks when making the corresponding decision. Finally, the heat map is point-wise multiplied with guided backpropagation, thus also obtaining guided Grad-CAM visualizations, which are both high-resolution and concept-specific. Another visual explanation method was presented in \cite{occlusion}, in which input images are perturbed by occluding all their patches, in an iterative process, and classifying the occluded images. This idea allows the authors to analyze how the top feature maps and the classifier output change, revealing structures within each patch that stimulate a particular feature map. However, this method requires generating multiple occluded samples and classifying each of them, making it computationally expensive. Ribeiro et al.~\cite{lime} proposed the local interpretable model-agnostic explanations (LIME) technique, which makes it possible to explain the predictions of any classifier in an interpretable and faithful manner. Given the original representation of the instance being explained, new samples are obtained by perturbing that representation, and are then used to approximate the classifier with an interpretable model. As with the method above, the use of multiple samples implies applying the classifier several times for a single instance. Some of these visual explanation techniques generate noisy sensitivity maps.
In this context, Smilkov et al.~\cite{smoothgrad} proposed SmoothGrad, a technique to reduce the noise in the sensitivity maps produced by gradient-based visual explanation techniques. Their idea was to sample images similar to the original ones by adding some noise, produce an intermediate sensitivity map for each sampled image, and take the average of these maps as the final sensitivity map. Finally, it is worth highlighting some applications of the saliency maps generated by visual explanation techniques. Sch\"ottl~\cite{schottl2020light} used Grad-CAM maps to improve the explainability of classification networks; more specifically, the idea was to introduce some measures obtained from the Grad-CAM maps into the loss function. Cancela et al.~\cite{cancela2020scalable} proposed a saliency-based feature selection method that selects the most discriminative features, providing robust and explainable predictions in both classification and regression problems. \subsection{Egocentric photo-streams} In the following, we review some recent works on egocentric photo-streams, mainly focused on the classification of food-related scenes, as in our case study. Egocentric image analysis is a field within computer vision related to the design and development of algorithms to analyze and understand photo-streams captured by wearable cameras \cite{talavera_dataset}. These cameras are capable of capturing images that record visual information of our daily life, known as \textit{visual lifelogging}, to create a visual diary with activities of first-person life. The analysis of these egocentric photo-streams can improve people's lifestyles by analyzing social patterns \cite{herruzo2017analyzing}, social interactions \cite{aghaei2016whom}, or food behavior \cite{talavera2020eating}. In recent years, there has been growing interest in egocentric photo-streams, given their potential for assisted living. For instance, Furnari et al.
\cite{furnari1} presented a benchmark dataset containing egocentric videos of eight personal locations and proposed a multi-class classifier to reject locations not belonging to any of the categories of interest for the end-user. As for food-related scene recognition, Sarker et al. \cite{macnet} addressed this task by proposing a multi-scale atrous CNN \cite{atrous} to analyze lifelogging images and determine people's recurrences in food places throughout their day. Later, Talavera et al. \cite{talavera_dataset} presented the EgoFoodPlaces dataset, composed of more than 33,000 images recorded by 11 users while spending time on the acquisition, preparation, or consumption of food. The dataset was manually labeled into a total of 15 different food-related scene classes, such as \textit{bakery shop}, \textit{bar}, or \textit{kitchen}. Taking into account the relations among the studied classes, a taxonomy for food-related scene recognition was introduced. Furthermore, the authors proposed a hierarchical classification model based on the aggregation of six VGG16 networks \cite{vgg} over different subgroups of classes, emulating the proposed taxonomy. This is, to the best of our knowledge, the state of the art in the recognition of food-related scenes in egocentric images. \section{Methodology} \label{section:proposed_approach} We propose a novel training approach to improve the robustness of CNNs in image classification. Figure \ref{fig:workflow} illustrates the different steps of the proposed scheme, which are subsequently explained in depth.
\begin{figure}[htb] \includegraphics[width=\textwidth]{workflow.pdf} \caption{Workflow of our alternative training scheme, which (1) gets a new mini-batch of input images, (2) applies a visual explanation technique to generate the heat maps, (3) occludes the regions highlighted in the previous step, and (4) trains the CNN classifier.} \label{fig:workflow} \end{figure} Let us consider the classical mini-batch gradient descent \cite{dekel2012optimal} training algorithm where, on each training step, the mini-batch is first fed into the neural network, then the gradient is computed, and finally, the calculated gradient is used to update the weights of the network. We propose to modify the training step so that the new scheme is applied over each mini-batch with a probability $p \in (0,1)$; i.e., with a probability $1-p$, the images in the mini-batch are kept unchanged and the classical training step is performed as usual. Note that the probability $p$ belongs to the open interval $(0,1)$: $p = 0$ would mean that our training scheme is not applied (i.e., the classical training procedure is used instead), whereas $p = 1$ would mean that only the modified images are used, making model convergence difficult. Preliminary experimentation suggests applying the method with values of $p \leq 0.5$ to guarantee that both occluded and original images are used in the learning process. Therefore, with a probability $p \in (0,1)$, our training scheme is applied as follows: \begin{enumerate} \item Using the current weights of the network, we do inference over the current mini-batch and apply a visual explanation method to get a heat map for each image in the mini-batch. These heat maps highlight the regions where the current model focuses its attention to classify the corresponding image. \item After that, we occlude the areas corresponding to those highlighted regions, forcing the model to look at other regions in the image.
For each image in the mini-batch, we normalize its heat map and get a weight $w \in [0,1]$ for each pixel. Next, we select all the pixels whose weight $w$ is over a threshold $th$. The selected pixels are erased by setting them to 0, calling this approach 0-occlusion. As a result, we obtain the occluded images of the mini-batch. \item Finally, we train our model making use of the occluded mini-batch. \end{enumerate} Algorithm \ref{alg:pseudoCodigo} shows the pseudo-code of our proposed training method according to the 0-occlusion approach. Note that once the mini-batch is modified, the training step continues as usual (i.e., the gradient is calculated and the weights are updated). We think it is important to highlight that the model should not forget what the occluded regions mean, but learn to recognize other parts of the image to make a decision. This is guaranteed as the occluded images are used only for some mini-batches, according to the $p$ hyper-parameter, while the original ones are used for the rest of them. \begin{algorithm}[htb] \KwData{trainingSet, model, $p$, $th$} \KwResult{the model trained using the proposed approach} \For{miniBatch $in$ trainingSet}{ $r$ = random(0,1)\; \If{$r \leq p$}{ \For{(image, label) $in$ miniBatch}{ heatMap = visualExplanation(image, label, model, lastConvLayer)\; heatMap = minMaxNorm(heatMap)\; selectedPixels = $[$heatMap $> th]$\; image$[$selectedPixels$]$ = 0\; } } train(model, miniBatch)\; } \caption{Pseudo-code of the proposed training scheme using 0-occlusion.} \label{alg:pseudoCodigo} \end{algorithm} The proposed approach is compatible with any of the visual explanation methods presented in Section \ref{section:related_work} and, in general, with any method that generates a heat map to explain the decision of a CNN for a given target image. 
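The occlusion step can be summarized with the following sketch, which mirrors Algorithm \ref{alg:pseudoCodigo}: draw the probability, min-max normalize each heat map, and zero out the pixels above the threshold. It is a pure-Python illustration with hypothetical shapes (single-channel images as nested lists), not the authors' TensorFlow implementation.

```python
import random

def occlude_minibatch(images, heat_maps, th=0.85, p=0.25, rng=random):
    """With probability p, apply 0-occlusion to a mini-batch (sketch).

    images:    list of images, each an HxW grid of pixel values.
    heat_maps: per-image explanation maps (e.g., from Grad-CAM), same HxW.
    """
    if rng.random() > p:              # classical step: batch left unchanged
        return images
    occluded = []
    for img, hm in zip(images, heat_maps):
        flat = [v for row in hm for v in row]
        lo, hi = min(flat), max(flat)
        span = (hi - lo) or 1.0       # min-max normalization to [0, 1]
        occluded.append([[0.0 if (v - lo) / span > th else px
                          for v, px in zip(hrow, irow)]
                         for hrow, irow in zip(hm, img)])
    return occluded

# Toy 2x2 image whose top-right pixel is the most salient one.
# p=1.0 is used here only to force the occlusion branch for illustration.
batch = occlude_minibatch([[[5.0, 5.0], [5.0, 5.0]]],
                          [[[0.0, 1.0], [0.2, 0.1]]],
                          th=0.85, p=1.0)
```

After this pre-processing, the gradient is computed on the (possibly occluded) mini-batch and the weights are updated exactly as in the classical procedure.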
Among all these techniques, we choose Grad-CAM \cite{gradcam} because it uses the flow of the gradients from the last convolutional layer to compute the heat maps, making it computationally less expensive than other methods like LIME \cite{lime} or SmoothGrad \cite{smoothgrad}. These other techniques compute the heat maps by applying inference several times on images generated by perturbing the target image. In other words, Grad-CAM does inference once per image while other techniques do inference several times per image, which makes the former more appropriate for the problem at hand. In summary, the heat maps provided by Grad-CAM highlight the relevant regions of the image for predicting the ground truth class. By occluding them, the model is forced to look at other regions to make the decision. The initial regions should not be forgotten by the model, but other parts of the images should also be taken into account in the learning process. In this manner, the model improves its robustness and generalization capabilities. \section{Experimental framework and results}\label{section:experimental_results} In this section, we present the two datasets used to evaluate the proposed method. Next, we describe the implementation details as well as the two experiments carried out, including the evaluation metrics considered. Finally, we report and analyze the results obtained in both experiments: (1) a comparison between the proposed method and some variants of it, and (2) a comparison with standardized baselines. \subsection{Datasets} \label{sec:datasets} We evaluated our proposed method on two well-known datasets: the Stanford cars dataset \cite{stanfordCars}, and the fine-grained visual classification of aircraft (FGVC-Aircraft) benchmark dataset \cite{aircraftDataset}. Both datasets were used as part of the fine-grained recognition challenge FGComp 2013, which ran jointly with the ImageNet Challenge 2013\footnote{\url{http://image-net.org/challenges/LSVRC/2013/}}.
The Stanford cars dataset contains 16,185 images of 196 car models covering sedans, SUVs, coupes, convertibles, pickups, hatchbacks, and station wagons; it is officially split into 8,144 training and 8,041 test images. The FGVC-Aircraft dataset contains 10,000 images of aircraft, with 100 images for each of 100 different aircraft model variants; it is officially split into 6,667 training and 3,333 test images. \subsection{Implementation details} \label{sec:exp_implementation} The techniques and parameters used for experimentation are explained in the following. We used the Adam optimization algorithm \cite{adam} with the following parameters: learning rate $\alpha=0.00005$, $\beta_1=0.9$, $\beta_2=0.999$, and $\epsilon=0.0000007$. Regarding the training step, we used a batch size of 16 and the images were resized to $224 \times 224$. The outputs were monitored using the validation accuracy to apply an early stopping strategy, based on which the training process finished after 30 epochs with no improvement. Additionally, we applied the following classical data augmentation techniques: horizontal flip, rotation $[-40^{\circ}, 40^{\circ}]$, random channel shift $[-30, 30]$, and image brightness change $[0.5, 1.5]$. The proposed method was implemented on TensorFlow \cite{tensorflow} and Keras \cite{keras}, and the code is available for download\footnote{\url{https://github.com/DavidMrd/Playing-to-distraction}}. Starting from the training algorithm provided in Keras, we modified the training step to apply our method over each mini-batch with a probability $p$, as described in Section \ref{section:proposed_approach}. According to some preliminary experiments, we applied the proposed method with a probability $p=0.25$, and the threshold for the occlusion step was set to $th=0.85$. \subsection{Experimental setup} \label{sec:exp_setup} This section describes the two experiments designed to evaluate our training scheme.
Both experiments were applied to each dataset individually and compared with other approaches. As for the experimentation itself, we kept the original split in training and test sets for the two considered datasets (see Section \ref{sec:datasets}). For validation purposes, we randomly divided the original training dataset into two parts: 75\% training and 25\% validation. Then, we trained the model and evaluated it on the isolated test set, using the performance metrics described in Section \ref{sec:exp_evaluation}. This validation procedure was repeated five times. We report the average performance and the standard deviation calculated across the five runs. \subsubsection{Experiment 1} The objective of this experiment is to test several setups of our training scheme and compare them with a baseline. In particular, we used a ResNet50 \cite{resnet}, a very popular network successfully applied to different image classification tasks. The different configurations are detailed as follows: \begin{enumerate} \item \textbf{Baseline.} In order to compare our method with a baseline, we trained a ResNet50 using the classical training approach (i.e., without applying the proposed method). We called this model fine-tuned ResNet50 (FT-ResNet50) because it is a model pre-trained on the ImageNet dataset \cite{imagenet}, whose parameters were fine-tuned with the corresponding dataset. \item \textbf{Our approach.} We trained a ResNet50 using the proposed training method, which is based on Grad-CAM visualizations and illustrated in Figure \ref{fig:workflow}. More specifically, we used the weights from the ResNet50 model pre-trained on ImageNet \cite{imagenet}, and then we fine-tuned them using the corresponding dataset and our training scheme. Note that, during the learning process, the Grad-CAM algorithm was applied to the last convolutional layer of the ResNet50, as indicated in \cite{gradcam}. 
\item \textbf{Other setups.} To demonstrate the adequacy of the 0-occlusion approach, we also conducted some experiments in which the occluded pixels were set to a random value (R-occlusion) or to 1 (1-occlusion). \end{enumerate} \subsubsection{Experiment 2} This experiment aims to demonstrate the adequacy of our training scheme regardless of the backbone architecture considered. In this sense, we applied it to two well-known backbone architectures, different from ResNet50, using the following configurations: \begin{enumerate} \item \textbf{Baselines.} We trained two architectures commonly used in the literature, InceptionV3 \cite{inceptionV3} and DenseNet \cite{huang2017densely}, using the classical approach. We called them FT-InceptionV3 and FT-DenseNet, respectively, because they were pre-trained on ImageNet and fine-tuned with the corresponding dataset. \item \textbf{Our approach.} We trained the two backbone architectures considered, InceptionV3 and DenseNet, using the proposed training scheme (see Figure \ref{fig:workflow}). As in the previous experiment, we used the weights from these two architectures pre-trained on ImageNet, and then we fine-tuned them with the corresponding dataset and our training scheme. Regarding the Grad-CAM algorithm, it was applied to the last convolutional layer of the networks as described in \cite{gradcam}. \end{enumerate} \subsection{Evaluation} \label{sec:exp_evaluation} In order to evaluate the performance of the proposed models and make a fair comparison with other approaches, we computed some popular metrics in image classification tasks: accuracy, precision, recall, and F-score (F1).
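All four reduce to simple ratios of confusion-matrix counts; as a reference, the sketch below covers the binary case (in our multi-class setting, the counts are computed per class and the resulting scores are averaged).

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    # Guard against empty denominators (no positive predictions/samples).
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Example counts: 8 TP, 2 FP, 85 TN, 5 FN.
acc, prec, rec, f1 = classification_metrics(8, 2, 85, 5)
```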
These metrics are defined as follows: \begin{equation} Accuracy = \frac{TP+TN}{TP+TN+FP+FN} \end{equation} \begin{equation} Precision = \frac{TP}{TP+FP} \end{equation} \begin{equation} Recall = \frac{TP}{TP+FN} \end{equation} \begin{equation} F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision+Recall} \end{equation} where $TP$, $FP$, $TN$, and $FN$ stand for true positives, false positives, true negatives, and false negatives, respectively. \subsection{Results} In this section, we report and analyze the results obtained in the two experiments described above. \subsubsection{Experiment 1} Table \ref{tab:resultados_resnet} shows the results obtained for the different configurations. As can be observed, our training scheme provides very competitive results regardless of the setup used for the occlusion. Analyzing the four metrics considered, the three setups outperform the baseline method (FT-ResNet50), which was trained with the classical learning procedure, on both datasets. Focusing on our proposal (0-occlusion), it achieves a gain of more than 2 percent on the Stanford cars dataset and about 2 percent on the FGVC-Aircraft dataset. In order to demonstrate the relevance of this improvement, we applied a statistical t-test that allows us to determine whether there is a significant difference between the baseline (FT-ResNet50) and our proposal (0-occlusion). Note that we used a paired-sample, two-tailed t-test. As a result, we can confirm that our proposal significantly outperforms the baseline in terms of accuracy, with a significance level of 0.05. If we analyze the behavior of the three different setups considered for the proposed training scheme, we can see that both 0-occlusion and 1-occlusion provide better results than R-occlusion, with a very slight difference in favor of the former (0-occlusion).
The experiments show that, when using random values for the occlusion procedure, the model does not benefit as much from the applied \textit{distraction}, which forces it to look at new regions of the input images. This behavior is discussed in detail below, with some qualitative results that aim to illustrate the impact of the proposed method. \begin{table*}[htb] \centering \begin{tabular}{l|c|c|c|c} \hline \multicolumn{5}{c}{Stanford cars} \\ \hline & FT-ResNet50 & 0-occlusion & R-occlusion & 1-occlusion \\ \hline Accuracy & $0.849 \pm 0.009$ & \boldmath$0.871 \pm 0.007$ & $0.860 \pm 0.009$ & $0.869 \pm 0.008$ \\ Precision & $0.855 \pm 0.007$ & \boldmath$0.876 \pm 0.007$ & $0.866 \pm 0.008$ & $0.873 \pm 0.008$ \\ Recall & $0.849 \pm 0.009$ & \boldmath$0.870 \pm 0.008$ & $0.860 \pm 0.009$ & $0.868 \pm 0.009$ \\ F1 & $0.848 \pm 0.009$ & \boldmath$0.870 \pm 0.008$ & $0.859 \pm 0.009$ & $0.867 \pm 0.009$ \\ \hline \hline \multicolumn{5}{c}{FGVC-Aircraft} \\ \hline & FT-ResNet50 & 0-occlusion & R-occlusion & 1-occlusion \\ \hline Accuracy & $0.731 \pm 0.013$ & \boldmath$0.749 \pm 0.005$ & $0.739 \pm 0.012$ & $0.743 \pm 0.005$ \\ Precision & $0.746 \pm 0.011$ & \boldmath$0.762 \pm 0.005$ & $0.755 \pm 0.010$ & $0.759 \pm 0.004$ \\ Recall & $0.731 \pm 0.013$ & \boldmath$0.749 \pm 0.005$ & $0.739 \pm 0.012$ & $0.743 \pm 0.005$ \\ F1 & $0.731 \pm 0.014$ & \boldmath$0.748 \pm 0.005$ & $0.739 \pm 0.012$ & $0.743 \pm 0.005$ \\ \hline \end{tabular} \caption{Classification performance, averaged across five runs, of the different approaches on the Stanford cars \cite{stanfordCars} and FGVC-Aircraft \cite{aircraftDataset} datasets.
Best results are in bold.} \label{tab:resultados_resnet} \end{table*} Figure \ref{fig:analysis} depicts two representative images of the two datasets used for experimentation, Stanford cars and FGVC-Aircraft, along with the heat maps generated by Grad-CAM for the different methods analyzed: the baseline FT-ResNet50 and the three setups for the proposed training approach. As can be observed, the models trained with the proposed approach, regardless of the setup, base their decisions on more features than the one trained using a classical approach (FT-ResNet50). While the baseline method seems to base its decisions just on a local area of the image, the models trained with the proposed approach seem to react to almost the whole object. Analyzing the different configurations, we can see that both 0-occlusion and 1-occlusion show a similar behavior, reacting to the whole object, which explains the achieved results in both cases. However, the R-occlusion version behaves differently since it reacts to many features but with a low level of confidence. That is, occluding the selected pixels with a fixed value (0 or 1) allows us to achieve better results than occluding the relevant regions with a random value. The reason for this behavior could be that, when using a fixed value, the model learns to ignore these areas and looks at other regions, whereas the model does not benefit as much from this idea when using a different value each time. It is worth noting that using 0-occlusion is somewhat similar to the well-known dropout \cite{dropout}, a regularization technique in which some connections are disabled during the learning phase. This would explain why this approach gets slightly better results than the 1-occlusion version. 
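The analogy with dropout can be made concrete: 0-occlusion multiplies the input by a data-dependent binary mask derived from the explanation map, whereas dropout multiplies values by a random binary mask. A minimal pure-Python sketch with illustrative values (not the authors' implementation):

```python
import random

rng = random.Random(0)
x = [[1.0, 1.0], [1.0, 1.0]]
heat = [[0.0, 1.0], [0.0, 0.0]]   # normalized heat map; (0, 1) is salient

# 0-occlusion: data-dependent mask -- zero out where the model looks.
occluded = [[px if h <= 0.85 else 0.0 for px, h in zip(xr, hr)]
            for xr, hr in zip(x, heat)]

# Dropout: random mask, rescaled at training time (inverted dropout).
keep = 0.75
dropped = [[px * (rng.random() < keep) / keep for px in xr] for xr in x]
```

The key difference is that the 0-occlusion mask deliberately targets the currently most informative pixels, while the dropout mask is independent of the data.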
\begin{figure} \centering \begin{tabular}{c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}} \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{coche9.jpg} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{grad_cam_rescoche9.jpg} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{grad_cam_disscoche9.jpg} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{grad_cam_Rcoche9.jpg} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{grad_cam_t1coche9.jpg} \end{subfigure} \\ \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{avion3.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{agrad_cam_resavion3.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{agrad_cam_dissavion3.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{agrad_cam_randomavion3.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=2.0cm, height=2.0cm]{agrad_cam_t1avion3.jpg} \caption{} \end{subfigure} \end{tabular} \caption{(a) Input images from the Stanford cars (top) and FGVC-Aircraft (bottom) datasets, (b) heat maps generated by Grad-CAM for the baseline FT-ResNet50, and heat maps generated by Grad-CAM for the model trained with the proposed training scheme using (c) 0-occlusion, (d) R-occlusion, and (e) 1-occlusion.} \label{fig:analysis} \end{figure} Finally, Table \ref{tab:epochs} shows the number of epochs and the seconds per epoch needed to train the baseline (FT-ResNet50) and our proposal (0-occlusion). 
As can be observed, our training scheme requires more computational time per epoch and more epochs to converge than the classical procedure. The increment in seconds per epoch is below $19\%$. Note that this time depends only on the image resolution and the hardware, so it is the same for both datasets. With respect to the increment in the number of epochs, it is $\approx32\%$ for the Stanford cars dataset and $\approx53\%$ for the FGVC-Aircraft dataset. Nevertheless, it is worth noting that, for application purposes, this computational time is not decisive since the training procedure is carried out only once, before the model is put into production, after defining its architecture and setting its hyper-parameters. As our method is applied during the learning process, the computation time in the test phase is not affected. \begin{table*}[htb] \centering \begin{tabular}{l|cc|cc} \cline{2-5} & \multicolumn{2}{c|}{FT-ResNet50} & \multicolumn{2}{c}{0-occlusion} \\ & Stanford cars & FGVC-Aircraft & Stanford cars & FGVC-Aircraft \\ \hline Number of epochs & $98.8 \pm 6.78$ & $113.4 \pm 10.97$ & $130 \pm 10.38$ & $174 \pm 13.87$\\ Seconds per epoch* & \multicolumn{2}{c|}{$153 \pm 0.00$} & \multicolumn{2}{c}{$175 \pm 0.00$} \\ \hline \multicolumn{5}{l}{* Network input size: $224 \times 224 \times 3$. Hardware: NVIDIA T4 Tensor Core GPU.} \end{tabular} \caption{Number of epochs and seconds per epoch, averaged across five runs, needed to train the two different approaches on the Stanford cars \cite{stanfordCars} and FGVC-Aircraft \cite{aircraftDataset} datasets.} \label{tab:epochs} \end{table*} \subsubsection{Experiment 2} Table \ref{tab:resultados_inception_dense} shows the results obtained when applying our training scheme to the other two backbone architectures selected: InceptionV3 and DenseNet. According to the figures, our approach outperforms the corresponding baseline on both datasets regardless of the backbone considered.
Analyzing the behavior of our training scheme when using InceptionV3, we observe that it achieves an improvement of more than 1 percent for the four performance measures. In terms of accuracy, this improvement over the baseline is 1.3 percent on the Stanford cars dataset and 1.5 percent on the FGVC-Aircraft dataset. Regarding the DenseNet backbone, the improvement with respect to the baseline is about 1.1 percent for all the metrics on both datasets. \begin{table*}[htb] \centering \begin{tabular}{l|c|c||c|c} \hline \multicolumn{5}{c}{Stanford cars} \\ \hline & FT-InceptionV3 & 0-occlusion-InceptionV3 & FT-DenseNet & 0-occl-DenseNet \\ \hline Accuracy & $0.778 \pm 0.023$ & \boldmath$0.791 \pm 0.020$ & $0.883 \pm 0.010$ & \boldmath$0.894 \pm 0.011$ \\ Precision & $0.788 \pm 0.021$ & \boldmath$0.798 \pm 0.020$ & $0.888 \pm 0.009$ & \boldmath$0.898 \pm 0.011$ \\ Recall & $0.777 \pm 0.023$ & \boldmath$0.791 \pm 0.020$ & $0.882 \pm 0.010$ & \boldmath$0.893 \pm 0.012$ \\ F1 & $0.776 \pm 0.023$ & \boldmath$0.790 \pm 0.021$ & $0.882 \pm 0.010$ & \boldmath$0.893 \pm 0.012$ \\ \hline \hline \multicolumn{5}{c}{FGVC-Aircraft} \\ \hline & FT-InceptionV3 & 0-occlusion-InceptionV3 & FT-DenseNet & 0-occl-DenseNet \\ \hline Accuracy & $0.618 \pm 0.029$ & \boldmath$0.633 \pm 0.026$ & $0.767 \pm 0.026$ & \boldmath$0.780 \pm 0.025$ \\ Precision & $0.630 \pm 0.030$ & \boldmath$0.641 \pm 0.029$ & $0.786 \pm 0.024$ & \boldmath$0.794 \pm 0.023$ \\ Recall & $0.618 \pm 0.028$ & \boldmath$0.633 \pm 0.026$ & $0.767 \pm 0.026$ & \boldmath$0.780 \pm 0.025$ \\ F1 & $0.616 \pm 0.029$ & \boldmath$0.630 \pm 0.026$ & $0.768 \pm 0.026$ & \boldmath$0.780 \pm 0.025$ \\ \hline \end{tabular} \caption{Classification performance, averaged across five runs, making use of different backbones on the Stanford cars \cite{stanfordCars} and FGVC-Aircraft \cite{aircraftDataset} datasets.
Best results are in bold.} \label{tab:resultados_inception_dense} \end{table*} \section{Case study}\label{section:case_study} This section describes an application of the proposed method to a real-world scenario. In particular, we consider the task of food-related scene classification in egocentric images, as detailed below. \subsection{Dataset} We evaluated our proposed method on the EgoFoodPlaces dataset \cite{talavera_dataset}, which is composed of 33,810 egocentric images gathered by 11 users and organized in 15 food-related scene classes. Using a wearable camera\footnote{\url{http://getnarrative.com/}}, the users regularly recorded approximately 1,000 images per day. The camera movements and the wide range of situations that the users experienced during their days lead to challenges such as background scene variation or changes in lighting conditions. The dataset was manually labeled into a total of 15 different food-related scene classes, namely \textit{bakery shop, bar, beer hall, cafeteria, coffee shop, dining room, food court, ice cream parlor, kitchen, market indoor, market outdoor, picnic area, pub indoor, restaurant, and supermarket}. Table \ref{tab:class_distribution} depicts the distribution of images among the collected classes, with a great imbalance between them.
\begin{table}[htb] \Large \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{l|ccccccccccccccc|c} Class & \rotatebox[origin=c]{90}{Bakery shop} & \rotatebox[origin=c]{90}{Bar} & \rotatebox[origin=c]{90}{Beer hall} & \rotatebox[origin=c]{90}{Cafeteria} & \rotatebox[origin=c]{90}{Coffee shop} & \rotatebox[origin=c]{90}{Dining room} & \rotatebox[origin=c]{90}{Food court} & \rotatebox[origin=c]{90}{Ice cream parlor} & \rotatebox[origin=c]{90}{Kitchen} & \rotatebox[origin=c]{90}{Market indoor} & \rotatebox[origin=c]{90}{Market outdoor} & \rotatebox[origin=c]{90}{Picnic area} & \rotatebox[origin=c]{90}{Pub indoor} & \rotatebox[origin=c]{90}{Restaurant} & \rotatebox[origin=c]{90}{Supermarket} & \rotatebox[origin=c]{90}{Total} \\ \hline \begin{tabular}[c]{@{}l@{}}\#Images\end{tabular} & 144 & 1632 & 672 & 1689 & 2313 & 3639 & 204 & 107 & 3837 & 1181 & 1388 & 921 & 511 & 10310 & 5262 & \textbf{33810}\\ \hline \end{tabular}} \caption{Distribution of images per class in the EgoFoodPlaces dataset \cite{talavera_dataset}.} \label{tab:class_distribution} \end{table} \subsection{Experimental results} This section describes the results obtained when addressing the task of food-related scene classification with our proposed training scheme. The implementation details are the ones described in Section \ref{sec:exp_implementation} with two exceptions: (1) the resolution of the input images, which in this case is $250 \times 250$ as in \cite{talavera_dataset}; and (2) the application of class oversampling to the fourth largest class (i.e., \textit{dining room}) in order to alleviate the imbalance problem. Concerning the experimentation, we used the split described in \cite{talavera_dataset}, which includes a division into events for the training and evaluation phases, to make sure that there are no images from the same scene/event in both phases. 
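The class oversampling applied in exception (2) can be sketched as follows (our minimal illustration; the random duplication policy, target count, and function name are assumptions, since the exact resampling procedure is not detailed here):

```python
import random

def oversample_class(samples_by_class, target_class, target_count, seed=0):
    """Duplicate randomly chosen samples of `target_class` until it
    reaches `target_count` samples; other classes are left untouched."""
    rng = random.Random(seed)
    out = {c: list(s) for c, s in samples_by_class.items()}
    while len(out[target_class]) < target_count:
        out[target_class].append(rng.choice(samples_by_class[target_class]))
    return out
```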
The validation procedure, in this case, consisted of three partitions with the following distribution: training set (70\%), validation set (10\%), and test set (20\%). The model was then trained on the training set and evaluated on the test set. This validation procedure was repeated five times, and we report the average performance and the standard deviation calculated across the five runs. Finally, we considered the four performance metrics detailed in Section \ref{sec:exp_evaluation}: accuracy, precision, recall, and F1 score. Note that, for the per-class metrics (precision, recall, and F1), we calculated the macro- and weighted-averages, as suggested in \cite{talavera_dataset}: \textit{macro} gives equal weight to all classes, while \textit{weighted} is sensitive to imbalances. Both averages are relevant here due to the unbalanced nature of the dataset. \subsubsection{Classification performance} For the evaluation of our proposal, we followed the experimental setup described in Section \ref{sec:exp_setup}, but using the EgoFoodPlaces dataset to train the ResNet50 architecture with the classical procedure (FT-ResNet50) and with our training scheme (0-occlusion). Additionally, we compared our results with the ones reported in \cite{talavera_dataset}, the state-of-the-art approach for this dataset. Table \ref{tab:resultados_egocentric} reports the results obtained for the different approaches. As can be seen, training a ResNet50 with our proposed scheme (0-occlusion) allows us to achieve a higher accuracy than the baseline (FT-ResNet50). Moreover, the proposed method also achieves higher weighted averages for the other three metrics considered (precision, recall, and F1). It is worth noting that, due to the high imbalance of the dataset, the weighted metrics are more informative than the macro values. 
Concerning the latter, the differences between both methods are minimal, with the same values for precision and F1, and a slightly higher macro recall in favor of the baseline. \begin{table*}[htb] \centering \begin{tabular}{l|c|c|c} & Hierarchical approach \cite{talavera_dataset} & FT-ResNet50 & 0-occlusion \\ \hline Macro Precision & 0.56 & \boldmath$0.59 \pm 0.03$ & \boldmath$0.59 \pm 0.05$ \\ Macro Recall & 0.53 & \boldmath$0.55 \pm 0.03$ & $0.54 \pm 0.06$ \\ Macro F1 & \textbf{0.53} & \boldmath$0.53 \pm 0.04$ & \boldmath$0.53 \pm 0.06$ \\ \hline Weighted Precision & 0.65 & $0.67 \pm 0.02$ & \boldmath$0.68 \pm 0.03$ \\ Weighted Recall & \textbf{0.68} & $0.67 \pm 0.03$ & \boldmath$0.68 \pm 0.04$ \\ Weighted F1 & 0.65 & $ 0.64 \pm 0.03 $& \boldmath$0.66 \pm 0.04 $ \\ \hline Accuracy & \textbf{0.68} & $0.67 \pm 0.03$ & \boldmath$0.68 \pm 0.04$ \\ \hline \end{tabular} \caption{Classification performance, averaged across five runs, of the different approaches on the EgoFoodPlaces dataset \cite{talavera_dataset}. Best results are in bold.} \label{tab:resultados_egocentric} \end{table*} If we analyze the results achieved by the state of the art \cite{talavera_dataset} and compare them with the proposed method, we can see that our approach achieves better results in four out of the seven performance measures, whereas the remaining three are equal. It is important to point out that our approach makes use of a single classifier (ResNet50), while the model presented in \cite{talavera_dataset} uses a hierarchical ensemble composed of six VGG16 networks. The complexity of our model is therefore significantly lower, not only because we rely on a single classifier but also because our ResNet50 backbone has fewer parameters than their VGG16 networks. We can thus conclude that our proposed method achieves similar performance results with a less complex and computationally less expensive architecture. 
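Since the macro and weighted averages play a central role in this comparison, the following sketch illustrates how they differ (a minimal illustration in Python; the function names are ours, and the computation mirrors scikit-learn's \texttt{average='macro'} and \texttt{average='weighted'} options):

```python
from collections import Counter

def per_class_prf(y_true, y_pred, classes):
    """Per-class precision, recall and F1 computed from two label lists."""
    stats = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        stats[c] = (prec, rec, f1)
    return stats

def macro_and_weighted(y_true, y_pred, classes):
    """Macro weights every class equally; weighted weights each class
    by its support (number of true samples of that class)."""
    stats = per_class_prf(y_true, y_pred, classes)
    support, n = Counter(y_true), len(y_true)
    macro = tuple(sum(stats[c][i] for c in classes) / len(classes) for i in range(3))
    weighted = tuple(sum(stats[c][i] * support[c] / n for c in classes) for i in range(3))
    return macro, weighted
```

Note that, for single-label classification over all classes, the weighted recall coincides with the overall accuracy.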
Finally, the impact of the different approaches on the individual classes is presented in Table \ref{tab:resultados_egocentric_class}. As can be seen, our method (0-occlusion) shows a behavior very similar to the baseline approach (FT-ResNet50), with slightly higher rates in seven classes and three ties. Compared with the hierarchical approach \cite{talavera_dataset}, our method achieves better results in eight classes. More specifically, these include the four most represented classes (\textit{restaurant}, \textit{supermarket}, \textit{kitchen}, and \textit{dining room}). Also noteworthy is the improvement achieved for the class \textit{food court}, which could not be classified by the hierarchical model (true positive rate of 0.00). However, there are five classes for which the hierarchical model gets a better performance, including \textit{beer hall}, \textit{cafeteria}, and \textit{coffee shop}. We deduce that this is due to the benefits of classifying them in a hierarchical fashion. \begin{table}[htb] \Large \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{l|ccccccccccccccc} Class & \rotatebox[origin=c]{90}{Bakery shop} & \rotatebox[origin=c]{90}{Bar} & \rotatebox[origin=c]{90}{Beer hall} & \rotatebox[origin=c]{90}{Cafeteria} & \rotatebox[origin=c]{90}{Coffee shop} & \rotatebox[origin=c]{90}{Dining room} & \rotatebox[origin=c]{90}{Food court} & \rotatebox[origin=c]{90}{Ice cream parlor} & \rotatebox[origin=c]{90}{Kitchen} & \rotatebox[origin=c]{90}{Market indoor} & \rotatebox[origin=c]{90}{Market outdoor} & \rotatebox[origin=c]{90}{Picnic area} & \rotatebox[origin=c]{90}{Pub indoor} & \rotatebox[origin=c]{90}{Restaurant} & \rotatebox[origin=c]{90}{Supermarket} \\ \hline \begin{tabular}[c]{@{}l@{}}Hierarchical app. 
\cite{talavera_dataset}\end{tabular} & 0.39 & 0.31 & \textbf{0.89} & \textbf{0.45} & \textbf{0.59} & 0.58 & 0.00 & \textbf{0.52} & 0.89 & \textbf{0.70} & 0.28 & 0.00 & \textbf{0.85} & 0.70 & 0.85\\ \begin{tabular}[c]{@{}l@{}}FT-ResNet50\end{tabular} & \textbf{0.63} & 0.31 & 0.24 & 0.38 & 0.49 & 0.66 & \textbf{0.53} & 0.50 & 0.89 & 0.60 & \textbf{0.57} & 0.00 & 0.78 & \textbf{0.73} & 0.90 \\ \begin{tabular}[c]{@{}l@{}}0-occlusion\end{tabular} & 0.60 & \textbf{0.32} & 0.26 & 0.35 & 0.48 & \textbf{0.72} & 0.43 & \textbf{0.52} & \textbf{0.90} & 0.60 & 0.53 & 0.00 & 0.80 & \textbf{0.73} & \textbf{0.92} \\ \hline \end{tabular}} \caption{True positive rate per class, averaged across five runs, of the different approaches on the EgoFoodPlaces dataset \cite{talavera_dataset}. Best results are in bold.} \label{tab:resultados_egocentric_class} \end{table} Going deeper into the results and given the characteristics of the EgoFoodPlaces dataset, we can draw some additional conclusions. Firstly, we can observe that the classification improves when using our approach for (1) classes where the scene to recognize is right in front of the camera wearer (e.g., \textit{restaurant}), and (2) classes that tend to share descriptors even if recorded at different locations (e.g., \textit{dining room} or \textit{supermarket}). These results indicate that the model is able to learn the relevant features of a scene when it is self-contained, which is closely related to the fine-grained datasets evaluated in Section \ref{section:experimental_results}. Analyzing the images, we can also see that, in some classes (e.g., \textit{food court}, \textit{cafeteria}, \textit{market outdoor}), the background dominates over the foreground information necessary for the identification of the scene; that is, the image is composed of characteristics that an observer would not find relevant for distinguishing the event. 
Therefore, the main difficulty in learning these scenes is that not only do the locations vary, but they are also composed of elements common to other scenes. In this case, forcing the model to attend to additional regions while only a limited number of samples per class is available may introduce noise and lead to a lower performance of our approach compared to the baseline. This issue could be addressed by extending the dataset. \subsubsection{Model inspection} We analyzed not only the classification performance of our training scheme but also the way the resulting models make their predictions. In particular, we aimed to find out whether the proposed approach is able to improve the robustness of a CNN classifier and make it sensitive to more features. For this reason, we carried out two additional experiments: (1) we analyzed the behavior of the models by means of a visual explanation algorithm, and (2) we randomly erased some areas of the test images before evaluating the models on them. In the first experiment, our goal was to demonstrate that the regions considered relevant by the trained models are more numerous and larger when applying our training scheme than when following the classical procedure. For this purpose, we applied the Grad-CAM algorithm to the last convolutional layer of the two ResNet50 models previously trained on the EgoFoodPlaces dataset: one trained using the classical procedure (FT-ResNet50), and the other one using our training scheme with 0-occlusion. As a result, we obtained heat maps that allow us to visualize the regions that are important to the models when making a prediction for a given image. Figure \ref{fig:analysis_ego} depicts some representative images along with their corresponding heat maps for each model. As can be observed, our model took into account larger regions than the baseline method (FT-ResNet50) when processing the same target images. 
Moreover, the model trained with our proposed method bases its decisions on more regions than the one trained with the classical procedure. Furthermore, the regions that the baseline model took into account when making a decision were also taken into account by the proposed model. This demonstrates that, when using the proposed training scheme, the model does not forget the features it has already learned, but additionally learns to recognize new ones. \begin{figure} \centering \begin{tabular}{ccc} \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{bar.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_bar.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_bar.jpg} \end{subfigure} \\ \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{beer_hall.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_beer_hall.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_beer_hall.jpg} \end{subfigure} \\ \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{dining_room.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_dining_room.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_dining_room.jpg} \end{subfigure} \\ \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{market_indoor.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_market_indoor.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_market_indoor.jpg} \end{subfigure} \\ 
\begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{pub_indoor.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_pub_indoor.jpg} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_pub_indoor.jpg} \end{subfigure} \\ \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{supermarket.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{original_supermarket.jpg} \caption{} \end{subfigure} & \begin{subfigure}[b]{0.275\linewidth} \includegraphics[width=2.75cm, height=2.75cm]{grad_cam_supermarket.jpg} \caption{} \end{subfigure} \end{tabular} \caption{(a) Input images, (b) heat maps generated by Grad-CAM for the baseline FT-ResNet50, and (c) heat maps generated by Grad-CAM for the model trained with the proposed training scheme (0-occlusion).} \label{fig:analysis_ego} \end{figure} Finally, we conducted the second experiment to test the robustness of our training scheme. For this purpose, we hid some regions of the test images by randomly erasing them, as proposed in \cite{randomErasing}. After that, we compared how the two approaches (FT-ResNet50 and 0-occlusion) performed on the modified test set. Table \ref{tab:resultados_ocultando_test} presents the results for this experiment. As can be observed, the proposed approach (0-occlusion) performs better than the baseline model (FT-ResNet50). This means that our model does not degrade as much when some areas of the image are erased or hidden, demonstrating its robustness. These results are consistent with the ones obtained in the previous experiment, and confirm that our model makes use of more numerous and larger regions than the baseline approach when making a prediction for a target image. 
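The perturbation used in this second experiment can be sketched as follows (a simplified, illustrative version of the Random Erasing idea of \cite{randomErasing}, applied here to an image represented as nested lists; the region size and fill value are our assumptions):

```python
import random

def random_erase(img, erase_frac=0.2, fill=0, seed=None):
    """Fill one randomly placed rectangular region of an H x W image
    (given as a list of rows) with a constant value."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    eh, ew = max(1, int(h * erase_frac)), max(1, int(w * erase_frac))
    top, left = rng.randrange(h - eh + 1), rng.randrange(w - ew + 1)
    out = [row[:] for row in img]  # leave the original image untouched
    for i in range(top, top + eh):
        for j in range(left, left + ew):
            out[i][j] = fill
    return out
```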
\begin{table}[htb] \centering \begin{tabular}{l|c|c} & FT-ResNet50 & 0-occlusion \\\hline Macro Precision & $0.53 \pm 0.01 $ & \boldmath$0.54 \pm 0.02 $ \\ Macro Recall & $0.47 \pm 0.02 $ & \boldmath$0.48 \pm 0.03$ \\ Macro F1 & $0.47 \pm 0.02 $& \boldmath$0.48 \pm 0.05$ \\ \hline Weighted Precision & \boldmath$0.63 \pm 0.02$ & \boldmath$0.63 \pm 0.03$ \\ Weighted Recall & $0.59 \pm 0.02 $& \boldmath$0.65 \pm 0.03$ \\ Weighted F1 & \boldmath$0.59 \pm 0.02 $ & \boldmath$0.59 \pm 0.02$ \\ \hline Accuracy & $0.59 \pm 0.02$ & \boldmath$0.60 \pm 0.02$ \\ \hline \end{tabular} \caption{Classification performance, averaged across five runs, of the baseline method and the proposed training scheme when we randomly hid some regions of the test images. Best results are in bold.} \label{tab:resultados_ocultando_test} \end{table} \section{Conclusion}\label{section:conclusion} This research work presents a novel training scheme that improves the robustness and generalization ability of CNNs applied to image classification. The idea is to force the model to learn as many features as possible when making a class prediction. For this purpose, we apply a visual explanation algorithm to identify the areas on which the model bases its decisions. After identifying those areas, we occlude them and train the model with a combination of the modified images and the original ones. In this manner, the model is not able to base its prediction on the occluded regions and is forced to use other areas. Consequently, the model also learns to pay attention to those regions of the target image that, \textit{a priori}, are not so informative for its classification. To evaluate the proposed method, we carried out different experiments on two popular datasets used for fine-grained recognition tasks: Stanford cars and FGVC-Aircraft. 
From the obtained results, we can confirm our initial hypothesis: our method forces the network to learn additional features that help it distinguish between very similar classes, showing its suitability for fine-grained classification problems. More specifically, among the different evaluated configurations, the 0-occlusion approach proved to be the most appropriate setting. Furthermore, we demonstrated the adequacy of our training scheme regardless of the backbone architecture considered. We further experimented with a real-world case study focused on the classification of food-related scenes. We analyzed the impact of our training scheme by comparing it with a baseline method and with, to the best of our knowledge, the state-of-the-art approach, which uses an ensemble composed of six CNNs \cite{talavera_dataset}. The results achieved with our method were comparable to or even better than the ones obtained with the state-of-the-art approach despite making use of just one network, thus reducing the level of complexity while maintaining a competitive performance. Furthermore, our method is computationally less expensive, as the chosen backbone (ResNet50) has fewer parameters than the VGG16 used in \cite{talavera_dataset}. Finally, we carried out several occlusion and visual explanation experiments, showing that our method improves the robustness of the classifier by forcing it to base its decisions on more features. As a future line of research, it would be interesting to apply the same methodology not only to input images but also at different convolutional levels, as is usually done with the regularization technique known as dropout. In other words, the feature maps obtained at different levels could be analyzed and occluded in the same way that we did with the input images. 
This idea would force the model to pay attention to different characteristics in the feature maps, thereby improving its robustness at different levels of the learning process. \begin{acknowledgements} We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. \end{acknowledgements} \section*{Conflict of interest} The authors declare that they have no conflict of interest. \bibliographystyle{spphys}
\section{Introduction} \subsection{The DPRM theorem} We write $\mathbb{N}$ for the semi-ring of non-negative integers. A set $X\subseteq \mathbb{N}^r$ is \emph{listable} (aka. computably or recursively enumerable) if its elements can be listed by a Turing machine. On the other hand, $X\subseteq \mathbb{N}^r$ is \emph{Diophantine} if there is a polynomial $P\in \mathbb{Z}[x_1,...,x_r,y_1,...,y_k]$ (depending on $X$) satisfying $X=\{{\bf a}\in \mathbb{N}^r : \exists {\bf b}\in \mathbb{N}^k, P({\bf a},{\bf b})=0\}$. An elementary fact is that $X\subseteq \mathbb{N}^r$ is Diophantine if and only if it is positive existentially definable over $\mathbb{N}$ in the language of arithmetic $\mathscr{L}_a=\{0,1,+,\times, =\}$. Also, it is a classical remark that every Diophantine set is listable. A celebrated result by Davis-Putnam-Robinson \cite{DPR} and Matiyasevich \cite{Matiyasevich} gives the converse. \begin{theorem}[DPRM theorem] A set $X\subseteq \mathbb{N}^r$ is Diophantine if and only if it is listable. \end{theorem} An immediate consequence of this result is a negative solution to Hilbert's tenth problem. However, the DPRM theorem goes far beyond undecidability; it gives a complete and satisfactory classification of the positive existentially definable sets of the structure $\mathbb{N}$ over $\mathscr{L}_a$. In this article we investigate extensions of the DPRM theorem in the setting of \emph{listable structures} (cf. Section \ref{SecIntroListable}). When such a structure satisfies an analogue of the DPRM theorem, we will say that it has the \emph{DPRM property} (cf. Section \ref{SecIntroDPRM}). We prove a number of results addressing foundational aspects of the relation between listable sets and positive existentially definable sets in listable structures; see Section \ref{SecIntroDPRM} for a brief summary of our results on this setting. Finally, we discuss applications of our results in two different directions. 
First, we will provide rigorous proofs for several folklore facts concerning transference of the DPRM property, in a strong form (cf. Section \ref{SecIntroAppFolklore}). And secondly, we will apply our results to show that several number-theoretical conjectures are in fact closely related to the question of whether global fields have the DPRM property or not (cf. Section \ref{SecIntroAppGlobal}). \subsection{Analogues of the DPRM theorem.} \label{SecIntroAnalogues} Given a first order language $\mathscr{L}$, an $\mathscr{L}$-structure $\mathfrak{M}=(M;\mathscr{L})$ is \emph{recursive} if there is a surjective map onto the domain $\theta:\mathbb{N}\to M$ satisfying that the pull-back of the interpretation of each element of the signature $s\in \mathscr{L}$ is decidable; cf. \cite{FroShe}. Starting with Denef's work \cite{DenefZT} on $\mathbb{Z}[T]$, analogues of the DPRM theorem over other structures have been investigated in the context of \emph{recursive rings}. We refer the reader to Section \ref{SecDPRMknown} for a summary of the known results on analogues of the DPRM theorem. However, the setting of recursive rings is too restrictive. For instance, the crucial result by Davis-Putnam-Robinson \cite{DPR} does not concern rings, as the signature contains an exponential function. Furthermore, the condition that the structure under consideration be recursive is unnatural in the study of extensions of the DPRM theorem. If $\mathfrak{M}$ is recursive and we expand its signature by a positive existentially definable relation, the new structure can fail to be recursive. For instance, consider a listable undecidable set $H\subseteq \mathbb{N}$ (which is Diophantine by the DPRM theorem) and note that the structure $(\mathbb{N};0,1,+,\times, H,=)$ is not recursive, although its positive existentially definable sets are the same as those of $(\mathbb{N};0,1,+,\times,=)$. This problem is avoided by considering \emph{listable structures}. 
\subsection{Listable structures} \label{SecIntroListable} Let $\mathscr{L}$ be a first order language and let $\mathfrak{M}=(M;\mathscr{L})$ be an $\mathscr{L}$-structure. A \emph{listable presentation} of $\mathfrak{M}$ is a surjective map $\rho:\mathbb{N}\to M$ such that for every $s\in \mathscr{L}$ the pull-back under $\rho$ of the interpretation of $s$ is a listable set. In this way, $\rho$ affords a notion of listable set on $\mathfrak{M}$ with respect to $\rho$: A set $X\subseteq M^r$ is \emph{$\rho$-listable} if its pull-back under $\rho$ is listable. If $\mathfrak{M}$ admits a listable presentation, we say that it is a \emph{listable structure}, aka. positive or recursively enumerable structure, see \cite{Selivanov1, Selivanov2}. In Section \ref{SecListable} we give a detailed study of listable presentations tailored to our intended applications on extensions of the DPRM property. Among other results, we prove a transference criterion (Proposition \ref{PropIntimplList}), a characterization of $\rho$-listable sets of $\mathfrak{M}$ (Lemma \ref{LemmaCharList}), and the existence of universal $\rho$-listable sets (cf. Lemma \ref{LemmaUnivList} together with Corollary \ref{CoroBij}). We give rigorous proofs of all these results ---our arguments do not rely on a naive notion of ``real-world algorithm''. These statements seem to be well-known to the experts, but we were unable to find proofs in the literature. In Section \ref{SecEquivListStr} we discuss a notion of equivalence of listable presentations. A central theme in our analysis is whether all the listable presentations of a given listable structure are equivalent to each other; i.e. the problem of \emph{unique listability} for a structure. Our Theorem \ref{ThmUL} provides a very general criterion for unique listability, which implies unique listability of finitely generated structures (Proposition \ref{PropFG}). 
Uniquely listable structures are convenient since they have an intrinsic notion of listable sets. We show that this last feature essentially characterizes unique listability (Theorem \ref{ThmEquivCompare}). We remark that listable structures are not our main goal, and we cover them because they are necessary in our study of the DPRM property. \subsection{The DPRM property} \label{SecIntroDPRM} All first order definitions are understood to be \emph{without parameters}, unless explicitly stated otherwise. Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. It is easy to show that if a set $X\subseteq M^r$ is positive existentially $\mathscr{L}$-definable, then it is $\rho$-listable for every listable presentation $\rho$ of $\mathfrak{M}$ (Corollary \ref{CoroTotList}). A listable $\mathscr{L}$-structure $\mathfrak{M}$ has the \emph{DPRM property} when the converse holds, namely, if the positive existentially $\mathscr{L}$-definable sets over $\mathfrak{M}$ are the same as those which are $\rho$-listable for every listable presentation $\rho$ of $\mathfrak{M}$ ---naturally, the definition simplifies when $\mathfrak{M}$ is uniquely listable. Our main results on the DPRM property concern the following aspects. \subsubsection{The number of existential quantifiers} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with domain $M$. Given a set $X\subseteq M^r$ we define $\mathrm{rank}^{p.e.}_\mathfrak{M}(X)$ as the least number of existential quantifiers needed to give a positive existential $\mathscr{L}$-definition of $X$ with parameters from $M$. For instance, if $X$ is a singleton then $\mathrm{rank}^{p.e.}_\mathfrak{M}(X)=0$. The quantity $\mathrm{rank}^{p.e.}_\mathfrak{M}(X)$ will be called the \emph{positive existential rank} of $X$. 
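As a simple illustration of this notion (our example, not taken from the results below): over the semi-ring $\mathbb{N}$, the set $X$ of composite numbers admits the positive existential definition
\[ X=\{a\in \mathbb{N} : \exists b\, \exists c,\ a=(b+2)(c+2)\}, \]
which uses two existential quantifiers, and hence $\mathrm{rank}^{p.e.}_{\mathbb{N}}(X)\le 2$.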
Theorem \ref{ThmCatDPRM} shows that if a uniquely listable structure $\mathfrak{M}=(M;\mathscr{L})$ satisfies the DPRM property, then (under some mild assumptions) for each $r\ge 1$ there is a uniform parametric positive existential definition of all the positive existentially definable sets of $\mathfrak{M}$ contained in $M^r$. As a consequence, in this setting we have uniform boundedness of the positive existential rank: There is a bound $B_\mathfrak{M}(r)$ depending only on $\mathfrak{M}$ and $r$ such that $\mathrm{rank}^{p.e.}_\mathfrak{M}(X)\le B_\mathfrak{M}(r)$ for every positive existentially $\mathscr{L}$-definable set $X\subseteq M^r$. Such a uniform boundedness property is remarkable and it can fail even in some very natural structures. In fact, building on work by Koll\'ar \cite{Kollar}, in Theorem \ref{ThmCt} we prove unboundedness of the positive existential rank for the field of complex rational functions $\mathbb{C}(t)$. With a similar argument, the same result can be obtained over any uncountable large field of characteristic zero (e.g. $\mathbb{R}$ or $\mathbb{Q}_p$) but we only state the results over $\mathbb{C}$ for the sake of simplicity. \subsubsection{Transference of the DPRM property} Denef's proof \cite{DenefZT} of the DPRM property for the ring $\mathbb{Z}[T]$ implicitly developed a transference principle that takes as input the DPRM property in the semi-ring $\mathbb{N}$ in order to deduce the DPRM property for a recursive integral domain. See for instance \cite{ZahidiOKt} where Denef's transference principle is made explicit, and see the detailed discussion in \cite{DemeyerThesis}. In Theorem \ref{ThmTransferDPRM} we prove a much more general transference result. Consider languages $\mathscr{L}_1,\mathscr{L}_2$ and uniquely listable structures $\mathfrak{M}_1, \mathfrak{M}_2$ over these languages respectively, such that $\mathfrak{M}_2$ has the DPRM property and each structure admits a positive existential interpretation in the other. 
Theorem \ref{ThmTransferDPRM} shows that $\mathfrak{M}_1$ inherits the DPRM property if and only if the graph of the self-interpretation of $\mathfrak{M}_1$ obtained by composing the two given interpretations is positive existentially definable over $\mathfrak{M}_1$. In the special case when $\mathfrak{M}_2$ is the semi-ring $\mathbb{N}$ and $\mathfrak{M}_1$ is a recursive integral domain, this specializes to Denef's transference principle (see Corollary \ref{CoroDPRM}). Furthermore, Theorem \ref{ThmTransferDPRM} shows that under the previous assumptions, $\mathfrak{M}_1$ has the DPRM property if and only if $\mathfrak{M}_1$ and $\mathfrak{M}_2$ are \emph{positive existentially bi-interpretable} in the sense introduced in Section \ref{SecHomotopy}. This aspect is not covered by the classical version of Denef's transference principle. Let us stress the fact that for two structures $\mathfrak{M}_1$ and $\mathfrak{M}_2$ to be positive existentially bi-interpretable it is not enough that each one admits a positive existential interpretation in the other. By definition, we moreover require that the self-interpretations of $\mathfrak{M}_1$ and $\mathfrak{M}_2$ obtained by composing the two interpretations be \emph{positive existentially homotopic} to the identity interpretation, see Section \ref{SecHomotopy} for details. If we drop the condition that all formulas involved in the discussion be positive existential, then we are back in the classical setting of homotopy of interpretations introduced in \cite{Ambos}. Several useful fundamental facts about bi-interpretability in this sense have been recently proved in \cite{AKNS}. The necessary material for the positive existential counterpart is developed in Section \ref{SecHomotopy}. \subsubsection{Model-theoretical characterization} Our transference results allow us to obtain, under some mild assumptions, a characterization of the uniquely listable structures having the DPRM property. \begin{theorem}[cf. 
Theorem \ref{ThmCharDPRM}] Let $\mathscr{L}$ be a first order language and let $\mathfrak{M}$ be a uniquely listable $\mathscr{L}$-structure with an infinite domain such that the relation $\ne$ is positive existentially $\mathscr{L}$-definable on $\mathfrak{M}$. Then the following are equivalent: \begin{itemize} \item[(i)] $\mathfrak{M}$ has the DPRM property. \item[(ii)] $\mathfrak{M}$ is positive existentially bi-interpretable with the semi-ring $\mathbb{N}$. \end{itemize} \end{theorem} Thus, uniquely listable structures having the DPRM property are essentially the same as those which are positive existentially bi-interpretable with $\mathbb{N}$. In particular, under the assumptions of the theorem, uniquely listable structures which are not bi-interpretable with $\mathbb{N}$ do not have the DPRM property. In view of Proposition \ref{PropFG}, unique listability holds for finitely generated structures. This leaves the problem of classifying which finitely generated structures are positive existentially bi-interpretable with $\mathbb{N}$. Let us remark that the analogous problem for \emph{first order} bi-interpretations with $\mathbb{N}$ is fully solved in the recent work \cite{AKNS} in the case of commutative finitely generated rings. \subsection{Application: Some folklore transference results} \label{SecIntroAppFolklore} In Section \ref{SecExamplesDPRM} we use our general results on the DPRM property to prove various folklore facts for which no complete proof seems to be available in the literature. These include, for instance, the fact that $\mathbb{Z}$ is Diophantine in $\mathbb{Q}$ if and only if $\mathbb{Q}$ has the DPRM property, as well as analogues for rings of integers and function fields. In fact, we give a more precise version of the aforementioned equivalence, which relates these conditions to positive existential bi-interpretability with $\mathbb{N}$. 
\subsection{Application: Diophantine conjectures over global fields}\label{SecIntroAppGlobal} \subsubsection{An algebraicity conjecture} A real number $x\in \mathbb{R}$ will be called \emph{left-Diophantine} if it is the supremum of a Diophantine subset of $\mathbb{Q}$. We will show that every real algebraic number is left-Diophantine, and that the class $\mathscr{D}$ of left-Diophantine numbers is contained in the class $\Lambda$ of real numbers that can be ``described'' by a Turing machine by producing rational approximations from below: the \emph{left-listable numbers} (also known as left recursively enumerable numbers, or left computably enumerable numbers). These lower and upper bounds for $\mathscr{D}$ are far apart, since the class $\Lambda$ contains rather exotic transcendental numbers. We conjecture that, in fact, the set $\mathscr{D}$ is as small as possible: it is just the field of real algebraic numbers. In particular, we conjecture that $\mathscr{D}$ is a field, which should be contrasted with the fact that $\Lambda$ is not a field \cite{Ambos}. As we will explain, the existence of at least \emph{one} left-listable real number which is not left-Diophantine would be enough to imply that $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. See Section \ref{SecLeft} for details. In particular, we show that the conjecture that $\mathscr{D}$ is exactly the field of real algebraic numbers follows from a version of Mazur's conjecture on the topology of rational points \cite{MazurConj1,MazurConj2,CTSSD} (the necessary material on Mazur's conjecture is recalled in Section \ref{SecMazur}). \subsubsection{The number of existential quantifiers for global fields} Our result on unboundedness of the positive existential (p.e.) rank for the field $\mathbb{C}(t)$ (Theorem \ref{ThmCt}) leads us to conjecture that the same failure of uniform boundedness might hold for global fields such as $\mathbb{Q}$ and $k(t)$ with $k$ a finite field. 
As we will show in Proposition \ref{PropBddnonDioph}, the latter would imply that $\mathbb{Z}$ and $k[t]$ are not Diophantine in $\mathbb{Q}$ and $k(t)$ respectively. Furthermore, Proposition \ref{PropBddnonDioph} shows that unboundedness of the p.e. rank over a global field $K$ would imply that $K$ is not positive existentially bi-interpretable with $\mathbb{N}$. After a first version of this work was released, Philip Dittmann informed us about his joint work with Nicolas Daans and Arno Fehm \cite{DDF} where they carry out an independent and very detailed study of the minimal number of existential quantifiers required to define Diophantine sets over fields. In particular, they relate this notion to other measures of complexity of Diophantine sets, and they also point out the link with the question of whether $\mathbb{Z}$ is Diophantine in $\mathbb{Q}$. However, our result on $\mathbb{C}(t)$ and the connection of the p.e. rank with the DPRM property are not covered in \cite{DDF}. \subsubsection{A Diophantine approximation conjecture} In Section \ref{SecKey} we introduce a conjecture in Diophantine approximation (Conjecture \ref{ConjKey}) which would imply that the analogue of the DPRM theorem fails over every global field. In simple terms, the main idea of the conjecture is the following: If $K$ is a global field, $v$ is a place of $K$, and $X$ is a variety over $K$ whose $K$-rational points are Zariski dense, then we expect that there is an effective divisor $D$ on $X$ defined over $K$ such that some sequence of $K$-rational points on $X$ $v$-adically approaches $D$. The precise statement of the conjecture is more technical because it allows one to choose $D$ in a family of divisors, and it allows one to discard families of $K$-rational points coming from lower dimensional varieties, as such points might have anomalous Diophantine approximation properties.
We provide some evidence for this conjecture in Section \ref{SecKey}; in particular, we show that in the number field case it would follow from general conjectures of Mazur on the topology of rational points. In Section \ref{SecnD} we show that our Diophantine approximation conjecture would imply that various natural subsets of global fields are not Diophantine, such as $\mathbb{Z}$ in $\mathbb{Q}$, as well as $\mathbb{F}_p[t]$ and $\{t^n: n\in \mathbb{N}\}$ in $\mathbb{F}_p(t)$ ---this last case requires results on functional transcendence. \section{Notation and basic facts}\label{SecPrelim} \subsection{Functions and sets} Given sets $X,Y$ and a function $f:X\to Y$, we define $$ \Gamma(f)=\{(y,x)\in Y\times X : y=f(x)\}\subseteq Y\times X. $$ For a positive integer $r$ we let $f^{(r)}:X^r\to Y^r$ be the map $(x_1,...,x_r)\mapsto (f(x_1),...,f(x_r))$. Given $S\subseteq Y^r$ we let $f^*(S)=(f^{(r)})^{-1}(S)\subseteq X^r$. Although the notation $f^*$ does not refer to $r$, the value of $r$ will always be clear from the context. We will use bold fonts to indicate a tuple whenever the number of coordinates is clear from the context, for instance, ${\bf a}=(a_1,...,a_r)$. \subsection{Recursive functions} A partial function over $\mathbb{N}$ is a function $f:A\to \mathbb{N}$ where $A\subseteq \mathbb{N}^r$ for some $r$, called the arity. The set of \emph{recursive functions} $\mathscr{R}$ is the smallest class of partial functions over $\mathbb{N}$ satisfying the next two conditions: \begin{itemize} \item It contains the successor function $S(x)=x+1$, all coordinate projections, and the constant function $0$ of each arity $r\ge 1$. \item It is closed under composition, recursion, and the minimalization operator $\mu$. \end{itemize} For details, see (for instance) \cite{CoriLascar} or any other textbook on the subject.
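To fix ideas, the minimalization operator $\mu$ is simply unbounded search. The following Python sketch is purely illustrative and not part of the formal development; the names \texttt{mu} and \texttt{isqrt} are ours:

```python
def mu(p):
    """Minimalization: return the least y in N with p(y) == 0.

    If no such y exists, the while-loop never terminates; this is
    exactly how the mu operator introduces partial functions.
    """
    y = 0
    while p(y) != 0:
        y += 1
    return y

# Example: a (total) integer square root obtained by minimalization,
#   isqrt(x) = mu y [ (y + 1)^2 > x ].
def isqrt(x):
    return mu(lambda y: 0 if (y + 1) ** 2 > x else 1)
```

Here \texttt{mu} applied to a predicate that never vanishes runs forever, mirroring a Turing machine that does not halt on a given input.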
It is a classical result that $\mathscr{R}$ is precisely the class of partial functions that can be computed by Turing machines ---the domain corresponds to those inputs where the corresponding machine halts. More generally, we say that a function $f:A\to \mathbb{N}^k$ for some $A\subseteq \mathbb{N}^r$ is \emph{recursive} if $f=(f_1,...,f_k)$ where each $f_j:A\to \mathbb{N}$ is recursive in the previous sense. For $k=1$ the definitions agree, of course. A recursive function of arity $r$ is \emph{total} if its domain is $\mathbb{N}^r$. \subsection{Decidable and listable sets} Let $X\subseteq \mathbb{N}^r$ be a set. We say that $X$ is \emph{decidable} if its characteristic function $\chi_X:\mathbb{N}^r\to \{0,1\}\subseteq \mathbb{N}$ is total recursive. We say that $X$ is \emph{listable} if it is the domain of a recursive function. We have the following standard characterizations: \begin{itemize} \item $X$ is decidable if and only if $X$ and its complement $X^c$ are both listable. \item $X$ is listable if and only if it is either empty or the image of a total recursive function $f:\mathbb{N}\to \mathbb{N}^r$. Furthermore, if $X$ is infinite, then one can require $f$ to be injective. \end{itemize} The next two lemmas are straightforward. \begin{lemma}\label{LemmaListImage} Let $f:\mathbb{N}^a\to \mathbb{N}^b$ be a total recursive function and let $B\subseteq \mathrm{im}(f)\subseteq \mathbb{N}^b$. We have that $B$ is listable if and only if $f^{-1}(B)$ is listable. \end{lemma} \begin{lemma}[Basic operations with listable sets]\label{LemmaBooleanN} The class of listable sets is closed under finite unions, finite intersections, permutation of coordinates, coordinate projections, and image and preimage under recursive functions. \end{lemma} We also recall the following important result together with two remarkable consequences. \begin{theorem}[Universal recursive function; the enumeration theorem, cf. \cite{CoriLascar}] Let $r\ge 1$.
There is a partial recursive function $\phi^{univ}_r:U_r\to \mathbb{N}$ with domain $U_r\subseteq \mathbb{N}^{r+1}$ such that for every partial recursive function $f$ of arity $r$, there is an integer $i(f)\in \mathbb{N}$ such that $\phi^{univ}_r(i(f),x_1,...,x_r)=f(x_1,...,x_r)$ as partial functions. \end{theorem} \begin{corollary}[A universal listable set]\label{CoroUr} Let $r\ge 1$. The set $U_r=\mathrm{dom}(\phi^{univ}_r)\subseteq \mathbb{N}^{r+1}$ is listable and it has the following property: For every listable set $X\subseteq \mathbb{N}^r$ there is $n_X\in \mathbb{N}$ such that $X= \{{\bf a}\in \mathbb{N}^r : (n_X,{\bf a})\in U_r\}$. \end{corollary} \begin{corollary}[An undecidable listable set: The halting problem] Define the partial recursive function $h(x)=\phi^{univ}_1(x,x)$. The set $H=\mathrm{dom}(h)\subseteq \mathbb{N}$ is listable but it is not decidable. \end{corollary} \subsection{Structures} Let $\mathscr{L}$ be a first order language consisting of constant symbols, relation symbols, and function symbols. For an $\mathscr{L}$-structure $\mathfrak{M}$, its domain is denoted by $M=|\mathfrak{M}|$ and for each $s\in \mathscr{L}$ we let $s^\mathfrak{M}$ be the interpretation of $s$. Thus: \begin{itemize} \item If $s$ is a constant symbol, then $s^\mathfrak{M}\in M$. \item If $s$ is an $n$-ary relation symbol, then $s^\mathfrak{M}\subseteq M^n$. \item If $s$ is an $n$-ary function symbol, then $s^\mathfrak{M}\subseteq M^{n+1}$ (the function is identified with its graph). \end{itemize} We make the important assumption that \emph{we only consider languages containing the binary relation symbol ``$=$'', and we only consider structures where this symbol is interpreted as equality}. We refer to this assumption as the \emph{equality hypothesis}. For an $\mathscr{L}$-formula $\phi$, the notation $\phi[x_1,...,x_n]$ means that the free variables of $\phi$ are among the variables $x_1,...,x_n$, and that each of these variables either occurs free in $\phi$ or does not occur in $\phi$ at all.
We do not allow parameters unless explicitly stated otherwise. Given an $\mathscr{L}$-formula $\phi[x_1,...,x_n]$, its interpretation over $\mathfrak{M}$ is $ \phi^\mathfrak{M} = \{{\bf a}\in M^n : \mathfrak{M}\models \phi[{\bf a}]\}\subseteq M^n $ where $\phi[{\bf a}]$ means that $\phi[x_1,...,x_n]$ is interpreted in $\mathfrak{M}$ by interpreting $(x_1,...,x_n)$ as ${\bf a}\in M^n$. An $\mathscr{L}$-formula $\phi$ is \emph{positive existential} (abbreviated p.e.) if the only quantifier it uses is $\exists$, it does not use negations, and the only connectives that it uses are $\vee$ and $\wedge$. \subsection{Interpretations} Let $\mathscr{L},\mathscr{K}$ be languages, and let $\mathfrak{M},\mathfrak{N}$ be structures over these languages with domains $M,N$ respectively. Let $r\ge 1$. An \emph{interpretation of $\mathfrak{N}$ in $\mathfrak{M}$ of rank $r$} is a map $\theta:X\to N$ where $X=\mathrm{dom}(\theta)\subseteq M^r$, satisfying the following properties: \begin{itemize} \item[(Int1)] $\theta:\mathrm{dom}(\theta)\to N$ is surjective onto $N$. \item[(Int2)] $\mathrm{dom}(\theta)$ is $\mathscr{L}$-definable over $\mathfrak{M}$. \item[(Int3)] For each $s\in \mathscr{K}$, the set $\theta^*(s^\mathfrak{N})$ is $\mathscr{L}$-definable over $\mathfrak{M}$. \end{itemize} We use the notation $\theta:\mathfrak{M}\dasharrow \mathfrak{N}$ to indicate that $\theta$ is an interpretation of $\mathfrak{N}$ in $\mathfrak{M}$, and the rank $r$ is denoted by $\mathrm{rank}(\theta)$. We say that the interpretation $\theta:\mathfrak{M}\dasharrow \mathfrak{N}$ is \emph{positive existential} (p.e.) if $\mathrm{dom}(\theta)$ and each $\theta^*(s^\mathfrak{N})$ for $s\in \mathscr{K}$ are p.e. $\mathscr{L}$-definable. \begin{lemma}[Pull-back of definable sets under interpretations] Let $\mathscr{L}$ and $\mathscr{K}$ be languages.
Consider an $\mathscr{L}$-structure $\mathfrak{M}$, a $\mathscr{K}$-structure $\mathfrak{N}$, and an interpretation $\theta:\mathfrak{M}\dasharrow \mathfrak{N}$ of rank $r$. Let $S\subseteq N^m$. If $S$ is $\mathscr{K}$-definable, then $\theta^*(S)\subseteq M^{rm}$ is $\mathscr{L}$-definable. Furthermore, if the interpretation is p.e. and the set $S$ is p.e. $\mathscr{K}$-definable, then $\theta^*(S)$ is p.e. $\mathscr{L}$-definable. \end{lemma} A useful standard fact is that interpretations can be composed. \begin{lemma}[Composition of interpretations]\label{LemmaCompInt} For $i=1,2,3$, let $\mathscr{L}_i$ be a language and $\mathfrak{M}_i$ an $\mathscr{L}_i$-structure with domain $M_i$. Let $\theta_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ and $\theta_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_3$ be interpretations. There is an interpretation $\zeta:\mathfrak{M}_1\dasharrow \mathfrak{M}_3$ with the following properties: \begin{itemize} \item[(i)] $\mathrm{rank}(\zeta)=\mathrm{rank}(\theta_1)\cdot \mathrm{rank}(\theta_2)$. \item[(ii)] $\mathrm{dom}(\zeta)=\theta_1^*(\mathrm{dom}(\theta_2))\subseteq M_1^{\mathrm{rank}(\zeta)}$. \item[(iii)] $\zeta: \mathrm{dom}(\zeta)\to M_3$ is defined by $\zeta=\theta_2\circ \theta_1^{(\mathrm{rank}(\theta_2))}$. \item[(iv)] If $\theta_1$ and $\theta_2$ are p.e., then $\zeta$ is p.e. \end{itemize} \end{lemma} In the situation of the previous lemma, the interpretation $\zeta$ is said to be the \emph{composition of the interpretations $\theta_1$ and $\theta_2$}, which we denote by $\zeta=\theta_2\bullet \theta_1$. One directly checks: \begin{lemma} \label{LemmaCat} Structures over languages together with p.e. interpretations form a category. \end{lemma} \subsection{Homotopy of interpretations}\label{SecHomotopy} For $i=1,2$ let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure with domain $M_i$. Let $\theta,\theta':\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ be interpretations of ranks $r$ and $r'$ respectively.
We define $$ K(\theta,\theta')=\{({\bf u},{\bf v})\in M_1^{r+r'} : {\bf u}\in \mathrm{dom}(\theta),{\bf v}\in \mathrm{dom}(\theta'), \mbox{ and }\theta({\bf u})=\theta'({\bf v})\}\subseteq M_1^{r+r'}. $$ Following \cite{AhlbrandtZiegel}, the interpretations $\theta$ and $\theta'$ are said to be \emph{homotopic} if $K(\theta,\theta')$ is $\mathscr{L}_1$-definable over $\mathfrak{M}_1$. See \cite{AhlbrandtZiegel, AKNS} for more details on homotopy of interpretations. In the special case when $\theta$ and $\theta'$ are p.e. we will need to introduce a refined notion of homotopy. In this case, we say that $\theta$ and $\theta'$ are \emph{positive existentially homotopic} if $K(\theta,\theta')$ is p.e. $\mathscr{L}_1$-definable. This will be denoted by $\theta\asymp \theta'$. The results we present below for $\asymp$ are analogous to some of the results given in \cite{AKNS} for homotopy of interpretations (not in the p.e. setting). \begin{lemma}[P.e. homotopy is an equivalence relation]\label{LemmaHomotopyEquiv} For $i=1,2$, let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure. Then $\asymp$ is an equivalence relation on the set of p.e. interpretations of $\mathfrak{M}_2$ in $\mathfrak{M}_1$. \end{lemma} \begin{proof} Symmetry is clear. Reflexivity follows from the fact that for any p.e. interpretation $\theta:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ the set $\theta^*(=)=K(\theta,\theta)$ is p.e. $\mathscr{L}_1$-definable. Let us check transitivity. For $i=1,2,3$ let $\theta_i:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ be a p.e. interpretation of rank $r_i$, such that $\theta_1\asymp\theta_2$ and $\theta_2\asymp \theta_3$. The set $ \Omega = \left(K(\theta_1,\theta_2)\times M_1^{r_3}\right)\cap \left(M_1^{r_1}\times K(\theta_2,\theta_3)\right)\subseteq M_1^{r_1+r_2+r_3} $ is p.e.
$\mathscr{L}_1$-definable, and it equals $$ \{({\bf x},{\bf y},{\bf z})\in M_1^{r_1+r_2+r_3} : {\bf x}\in \mathrm{dom}(\theta_1), {\bf y}\in \mathrm{dom}(\theta_2), {\bf z}\in \mathrm{dom}(\theta_3)\mbox{ and }\theta_1({\bf x})=\theta_2({\bf y})=\theta_3({\bf z})\}. $$ By surjectivity of $\theta_2$ onto the domain of $\mathfrak{M}_2$, we get that $K(\theta_1,\theta_3)$ is the projection of $\Omega$ onto its first $r_1$ and last $r_3$ coordinates. Hence, $K(\theta_1,\theta_3)$ is p.e. $\mathscr{L}_1$-definable. \end{proof} \begin{lemma}[Composition respects p.e. homotopy]\label{LemmaHomotopyComp} For $i=1,2,3$ let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure. Let $\theta_1,\kappa_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ and $\theta_2,\kappa_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_3$ be p.e. interpretations with $\theta_1\asymp\kappa_1$ and $\theta_2\asymp\kappa_2$. Then $\theta_2\bullet\theta_1\asymp\kappa_2\bullet\kappa_1$. \end{lemma} \begin{proof} By Lemma \ref{LemmaHomotopyEquiv}, it suffices to prove $\theta_2\bullet\theta_1\asymp\kappa_2\bullet\theta_1$ and $\kappa_2\bullet\theta_1\asymp\kappa_2\bullet\kappa_1$. This can be done as in Lemma 2.1 of \cite{AKNS} since that argument only introduces existential quantifiers (i.e. coordinate projections of definable sets). \end{proof} The (transposed) graph of an interpretation $\theta: \mathfrak{M}\dasharrow\mathfrak{N}$ is $$ \Gamma(\theta)=\{(x_0,{\bf x})\in N\times M^{\mathrm{rank}(\theta)}: {\bf x}\in \mathrm{dom}(\theta)\mbox{ and } x_0=\theta({\bf x})\}\subseteq N\times M^{\mathrm{rank}(\theta)}. $$ For $i=1,2$ let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure with domain $M_i$.
A \emph{bi-interpretation} is a pair of interpretations $\theta_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ and $\theta_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_1$ such that $\Gamma(\theta_1\bullet \theta_2)$ is $\mathscr{L}_2$-definable over $\mathfrak{M}_2$ and $\Gamma(\theta_2\bullet \theta_1)$ is $\mathscr{L}_1$-definable over $\mathfrak{M}_1$. We say that the bi-interpretation is \emph{positive existential} if $\theta_1$ and $\theta_2$ are p.e. interpretations and both $\Gamma(\theta_1\bullet \theta_2)$ and $\Gamma(\theta_2\bullet \theta_1)$ are p.e. definable. \begin{lemma}[Bi-interpretations in terms of homotopy in the p.e. case]\label{LemmaBiIntHomotopy} For $i=1,2$ let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure. Let $\theta_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ and $\theta_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_1$ be p.e. interpretations. Then $(\theta_1,\theta_2)$ is a p.e. bi-interpretation if and only if $\theta_2\bullet \theta_1\asymp \mathrm{Id}_{\mathfrak{M}_1}$ and $\theta_1\bullet \theta_2\asymp \mathrm{Id}_{\mathfrak{M}_2}$. \end{lemma} \begin{proof} This is because if $\zeta:\mathfrak{M}\dasharrow \mathfrak{M}$ is a p.e. interpretation, then $K(\mathrm{Id}_\mathfrak{M},\zeta)=\Gamma(\zeta)$. \end{proof} When a p.e. bi-interpretation between $\mathfrak{M}_1$ and $\mathfrak{M}_2$ exists, we say that $\mathfrak{M}_1$ and $\mathfrak{M}_2$ are \emph{p.e. bi-interpretable}. One has the following basic property: \begin{lemma}[Transitivity of p.e. bi-interpretability] \label{LemmaBiIntTransitive} For $i=1,2,3$ let $\mathfrak{M}_i$ be an $\mathscr{L}_i$-structure. Suppose that $\mathfrak{M}_1$ is p.e. bi-interpretable with $\mathfrak{M}_2$ and that $\mathfrak{M}_2$ is p.e. bi-interpretable with $\mathfrak{M}_3$. Then $\mathfrak{M}_1$ and $\mathfrak{M}_3$ are p.e. bi-interpretable. 
\end{lemma} \begin{proof} Let $\theta_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$, $\theta_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_3$, $\kappa_1:\mathfrak{M}_2\dasharrow \mathfrak{M}_1$, and $\kappa_2:\mathfrak{M}_3\dasharrow \mathfrak{M}_2$ be p.e. interpretations such that $(\theta_1,\kappa_1)$ and $(\theta_2,\kappa_2)$ are p.e. bi-interpretations. Using Lemmas \ref{LemmaCat}, \ref{LemmaHomotopyComp}, and \ref{LemmaBiIntHomotopy} we see that $(\kappa_1\bullet\kappa_2)\bullet(\theta_2\bullet\theta_1)=\kappa_1\bullet(\kappa_2\bullet\theta_2)\bullet\theta_1\asymp \kappa_1\bullet\mathrm{Id}_{\mathfrak{M}_2} \bullet\theta_1=\kappa_1 \bullet\theta_1\asymp \mathrm{Id}_{\mathfrak{M}_1}$ and similarly $(\theta_2\bullet\theta_1)\bullet(\kappa_1\bullet\kappa_2)\asymp \mathrm{Id}_{\mathfrak{M}_3}$. We conclude by Lemma \ref{LemmaBiIntHomotopy}. \end{proof} \subsection{Examples} The language of arithmetic is $\mathscr{L}_a=\{0,1,+,\times,=\}$. By interpreting its symbols in the obvious way, every semiring (such as $\mathbb{N}$) becomes an $\mathscr{L}_a$-structure. The following two simple lemmas are included just to serve as examples of the previous notions. The proofs are direct applications of Lagrange's $4$-squares theorem. \begin{lemma}\label{LemmaIntNZ} The $\mathscr{L}_a$-structures $\mathbb{N}$ and $\mathbb{Z}$ are p.e. bi-interpretable. \end{lemma} \begin{proof} By Lagrange's $4$-squares theorem the binary relation $\ge$ is p.e. $\mathscr{L}_a$-definable in $\mathbb{Z}$. Consider the maps $\theta_1: \mathbb{N}^2\to \mathbb{Z}$ and $\theta_2:\{n\in \mathbb{Z}: n\ge 0\}\to \mathbb{N}$ given by $\theta_1(a,b)=a-b$ and $\theta_2(n)=n$. One readily checks that they define p.e. interpretations $\theta_1: \mathbb{N}\dasharrow \mathbb{Z}$ and $\theta_2:\mathbb{Z}\dasharrow \mathbb{N}$, and that these interpretations give the desired p.e. bi-interpretation.
\end{proof} \begin{lemma}\label{LemmaQinNZ} Consider $\mathbb{N}$, $\mathbb{Z}$ and $\mathbb{Q}$ as $\mathscr{L}_a$-structures. Then $\mathbb{Q}$ is p.e. interpretable in $\mathbb{N}$ and in $\mathbb{Z}$. Furthermore, the map $\kappa:\mathbb{Z}\times (\mathbb{Z}-\{0\})\to \mathbb{Q}$ given by $\kappa(a,b)=a/b$ defines a p.e. interpretation of $\mathbb{Q}$ in $\mathbb{Z}$. \end{lemma} Bi-interpretation and p.e. bi-interpretation are in fact two very different conditions. For instance, it is worth pointing out that by a celebrated theorem of J. Robinson \cite{JRobinsonQ}, $\mathbb{Z}$ is $\mathscr{L}_a$-definable in $\mathbb{Q}$, and it easily follows that $\mathbb{Z}$ and $\mathbb{Q}$ are bi-interpretable as $\mathscr{L}_a$-structures. However, it is not known whether they are p.e. bi-interpretable ---see Section \ref{SecExamplesDPRM} below. \section{Listable structures}\label{SecListable} \subsection{Listable presentations} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with domain $M$. We recall from the introduction that a \emph{listable presentation} of $\mathfrak{M}$ is a surjective set-theoretical function $\rho:\mathbb{N}\to M$ such that for every $s\in \mathscr{L}$ we have that $\rho^*(s^\mathfrak{M})$ is a listable set. In this setting, we say that $\mathfrak{M}$ is a \emph{listable $\mathscr{L}$-structure}. Of course, if $\mathfrak{M}$ is a listable $\mathscr{L}$-structure, then $M$ is countable (possibly finite). We have the basic example: \begin{lemma}\label{LemmaListN} The $\mathscr{L}_a$-structure $(\mathbb{N};0,1,+,\times,=)$ is listable, and the identity map $\mathbb{N}\to \mathbb{N}$ is a listable presentation. \end{lemma} Let $\rho:\mathbb{N}\to M$ be a listable presentation for $\mathfrak{M}$ and let $X\subseteq M^r$. We say that $X$ is \emph{$\rho$-listable} (resp. \emph{$\rho$-decidable}) if $\rho^*(X)\subseteq \mathbb{N}^r$ is listable (resp. decidable).
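As a concrete illustration of these definitions, consider $\mathbb{Z}$ as an $\mathscr{L}_a$-structure. The zig-zag coding of $\mathbb{Z}$ gives a listable presentation; the Python sketch below is illustrative only (the helper names such as \texttt{in\_pullback\_add} are ours) and exhibits the surjection $\rho$ together with decidable membership tests for the pull-backs $\rho^*(=)$ and $\rho^*(+)$:

```python
# A listable presentation rho : N -> Z by zig-zag coding:
#   0, 1, 2, 3, 4, ...  |->  0, -1, 1, -2, 2, ...
# This rho is bijective, and the pull-back of each symbol of L_a is
# decidable (hence listable), as the definition requires.
def rho(n):
    return n // 2 if n % 2 == 0 else -(n // 2 + 1)

def in_pullback_eq(m, n):
    # Membership test for E_rho = rho^*(=) as a subset of N^2.
    return rho(m) == rho(n)

def in_pullback_add(a, b, c):
    # Membership test for rho^*(graph of +) as a subset of N^3.
    return rho(a) + rho(b) == rho(c)
```

Since this particular $\rho$ is bijective, $\rho^*(=)$ is simply the diagonal of $\mathbb{N}^2$.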
In particular, the equality hypothesis implies that the set $ E_\rho=\{(m,n)\in \mathbb{N}^2 : \rho(m)=\rho(n)\}=\rho^*(=)\subseteq \mathbb{N}^2 $ is listable for every listable presentation $\rho:\mathbb{N}\to M$. We observe that \begin{lemma}\label{LemmaBijDec} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure and let $\rho$ be a listable presentation for it. If $\rho$ is bijective, then $E_\rho$ is the diagonal in $\mathbb{N}^2$ and, in particular, $E_\rho$ is decidable. \end{lemma} We say that a set $X\subseteq M^r$ is \emph{totally listable} if for every listable presentation $\rho$ of $\mathfrak{M}$ the set $X$ is $\rho$-listable. Let us remark the following: \begin{lemma}\label{LemmaNE} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure and let $\rho$ be a listable presentation for it. The binary relation $\ne$ on $\mathfrak{M}$ is $\rho$-listable if and only if $E_\rho\subseteq \mathbb{N}^2$ is decidable. In particular, $\ne$ on $\mathfrak{M}$ is totally listable if and only if for every listable presentation $\gamma$ of $\mathfrak{M}$ we have that $E_\gamma$ is decidable. \end{lemma} \begin{proof} The set $E_\rho=\rho^*(=)\subseteq \mathbb{N}^2$ is listable. Thus, $E_\rho$ is decidable if and only if $E_\rho^c=\rho^*(\ne)\subseteq \mathbb{N}^2$ is listable. The latter precisely means that $\ne$ on $\mathfrak{M}$ is $\rho$-listable. \end{proof} However, it can very well happen that $E_\rho$ is undecidable for a listable presentation. For instance, one can take $R\subseteq \mathbb{N}^2$ a listable equivalence relation which is undecidable (see \cite{Ershov}) and consider the structure $(\mathbb{N}/R, =)$. The quotient map $\pi:\mathbb{N}\to \mathbb{N}/R$ is a listable presentation and $E_\pi =R$ is undecidable. We have an alternative characterization of $\rho$-listable sets over a listable structure. 
\begin{lemma}[Characterization of $\rho$-listable sets] \label{LemmaCharList} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$ and let $\rho$ be a listable presentation for $\mathfrak{M}$. Let $X\subseteq M^r$ be a subset. The following are equivalent: \begin{itemize} \item[(i)] $X$ is $\rho$-listable; that is, $\rho^*(X)\subseteq \mathbb{N}^r$ is listable. \item[(ii)] There is a listable set $Z\subseteq \mathbb{N}^r$ with $\rho^{(r)}(Z)=X$. \item[(iii)] Either $X$ is empty or there is $f:\mathbb{N}\to \mathbb{N}^r$ total recursive with $\rho^{(r)} (f(\mathbb{N}))=X$. \end{itemize} \end{lemma} \begin{proof} (ii) and (iii) are equivalent by the theory of listable sets over $\mathbb{N}$. If (i) holds, then (ii) holds with $Z=\rho^*(X)$. Assume that (iii) holds with $X$ non-empty. Write $f=(f_1,...,f_r)$. Let $\epsilon=(\epsilon_1,\epsilon_2):\mathbb{N}\to \mathbb{N}^2$ be a total recursive function with image $E_\rho$ (this set is listable). Let $g=(g_0,g_1,...,g_r):\mathbb{N}\to \mathbb{N}^{r+1}$ be a total recursive bijection. The function $$ \phi(x_1,...,x_r) :=\mu y [f_j(g_0(y))=\epsilon_1(g_j(y))\mbox{ and }\epsilon_2(g_j(y))=x_j\mbox{ for each }j=1,...,r] $$ is partial recursive. By (iii), the domain of $\phi$ is $$\{(x_1,...,x_r)\in \mathbb{N}^r : \mbox{there exists $n$ such that } \rho (f_j(n))=\rho(x_j)\mbox{ for each }j=1,...,r\}=\rho^*(X).$$ Hence, $\rho^*(X)$ is listable. \end{proof} \begin{corollary}[Finite sets are totally listable]\label{CoroFinSets} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$ and let $X\subseteq M^r$ be finite. Then $X$ is totally listable. \end{corollary} \begin{proof} Let $\rho$ be a listable presentation for $\mathfrak{M}$. Then $X$ is $\rho$-listable by surjectivity of $\rho:\mathbb{N}\to M$ and item (iii) in Lemma \ref{LemmaCharList}.
\end{proof} In particular, we get the following consequence which implies that the study of listable sets in structures is more interesting for structures with an infinite domain. \begin{corollary}\label{CoroFin} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure whose domain is finite. Then for every $r$, all subsets of $M^r$ are totally listable. \end{corollary} In the special case when a listable structure has infinite domain and admits a bijective listable presentation, the structure inherits universal listable sets from $\mathbb{N}$. \begin{lemma}[Universal $\rho$-listable sets]\label{LemmaUnivList} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with infinite domain $M$ and let $\rho:\mathbb{N}\to M$ be a bijective listable presentation for $\mathfrak{M}$. Given $r\ge 1$, there is a $\rho$-listable set $U_r(\rho)\subseteq M^{r+1}$ with the following property: For every $\rho$-listable set $X\subseteq M^r$ there is an element $m_X\in M$ such that $X=\{{\bf x}\in M^r : (m_X,{\bf x})\in U_r(\rho)\}$. \end{lemma} \begin{proof} Let $U_r\subseteq \mathbb{N}^{r+1}$ be the universal listable set provided by Corollary \ref{CoroUr} and let $U_r(\rho)=\rho^{(r+1)}(U_r)\subseteq M^{r+1}$. Let $X\subseteq M^r$ be a $\rho$-listable set. Then $\rho^*(X)\subseteq \mathbb{N}^r$ is listable and there is an integer $n_{\rho^*(X)}\in \mathbb{N}$ such that $\rho^*(X)=\{{\bf a}\in \mathbb{N}^r : (n_{\rho^*(X)},{\bf a})\in U_r\}$. Let $m_X=\rho(n_{\rho^*(X)})$. Since $\rho$ is injective, we have $(n_{\rho^*(X)},{\bf a})\in U_r$ if and only if $(m_X,\rho^{(r)}({\bf a}))\in U_r(\rho)$. By surjectivity of $\rho$ we conclude $X=\rho^{(r)}(\rho^*(X)) = \{{\bf x}\in M^r : (m_X,{\bf x})\in U_r(\rho) \}$. \end{proof} The condition that a listable presentation $\rho:\mathbb{N}\to M$ be bijective will be completely characterized in Corollary \ref{CoroBij} below. \subsection{Listability and p.e. definability} The next lemma is straightforward.
\begin{lemma}\label{LemmaBoolean} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure and let $\rho$ be a listable presentation for $\mathfrak{M}$. The class of $\rho$-listable subsets over $\mathfrak{M}$ is closed under cartesian products, coordinate projections, permutation of coordinates, finite unions, and finite intersections. \end{lemma} \begin{corollary}[All p.e. definable sets are totally listable] \label{CoroTotList} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. Every p.e. $\mathscr{L}$-definable set over $\mathfrak{M}$ is totally listable. \end{corollary} \begin{proof} Fix a listable presentation $\rho$. Using Lemma \ref{LemmaBoolean} and the fact that $s^\mathfrak{M}$ is $\rho$-listable for each $s\in \mathscr{L}$, one gets that for each atomic formula $\phi$, the set $\phi^\mathfrak{M}$ is $\rho$-listable. Applying Lemma \ref{LemmaBoolean} again we get the result. \end{proof} \begin{corollary}\label{CoroNEpeEdec} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. If the binary relation $\ne$ on $\mathfrak{M}$ is p.e. $\mathscr{L}$-definable, then for every listable presentation $\rho$ we have that $E_\rho$ is decidable. \end{corollary} \begin{proof} By Lemma \ref{LemmaNE} and Corollary \ref{CoroTotList}. \end{proof} Quite often it happens that the inequality $\ne$ is p.e. definable over interesting structures (see for instance \cite{MoretBailly}), so in practice it is not a restrictive assumption to require that $E_\rho$ be decidable for every listable presentation $\rho$ of a structure $\mathfrak{M}$. We will not make this assumption in general. \subsection{Listability and p.e. interpretations} \begin{proposition}[Transference of listability] \label{PropIntimplList} For $i=1,2$ let $\mathscr{L}_i$ be a language and $\mathfrak{M}_i$ an $\mathscr{L}_i$-structure with domain $M_i$. Let $\theta:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ be a p.e. 
interpretation of rank $r$ and let $\rho$ be a listable presentation for $\mathfrak{M}_1$. Let $X=\rho^*(\mathrm{dom}(\theta))\subseteq \mathbb{N}^r$. There exists a total recursive function $f:\mathbb{N}\to \mathbb{N}^r$ with image $X$, and for any $f$ of this type the function $\gamma=\theta\circ \rho^{(r)}\circ f:\mathbb{N}\to M_2$ is a listable presentation for $\mathfrak{M}_2$ having the following property: For every $S\subseteq M_2^k$ we have that $S$ is $\gamma$-listable if and only if $\theta^*(S)$ is $\rho$-listable. \end{proposition} \begin{proof} The set $\mathrm{dom}(\theta)$ is p.e. $\mathscr{L}_1$-definable, hence, it is $\rho$-listable by Corollary \ref{CoroTotList}. Thus $X$ is listable and $f$ exists. Take any total recursive $f:\mathbb{N}\to \mathbb{N}^r$ with image $X$. By construction, $\gamma:\mathbb{N}\to M_2$ is surjective. Let $s\in \mathscr{L}_2$ and let $n$ be such that $s^{\mathfrak{M}_2}\subseteq M_2^n$. Then $\theta^*(s^{\mathfrak{M}_2})\subseteq \mathrm{dom}(\theta)^n$ is p.e. $\mathscr{L}_1$-definable, hence, it is $\rho$-listable by Corollary \ref{CoroTotList}. This implies that $\rho^*\theta^*(s^{\mathfrak{M}_2})\subseteq \mathbb{N}^{rn}$ is listable. Hence, $\gamma^*(s^{\mathfrak{M}_2})=f^*\rho^*\theta^*(s^{\mathfrak{M}_2})\subseteq \mathbb{N}^n$ is listable (pre-image of a listable set under a total recursive function). Finally, let $S\subseteq M_2^k$. Since $f$ surjects onto $X\subseteq \mathbb{N}^r$ and $\rho^*\theta^*(S)\subseteq X^k\subseteq \mathbb{N}^{rk}$, we have that $\rho^*\theta^*(S)\subseteq \mathrm{im}(f^{(k)})$. We conclude by Lemma \ref{LemmaListImage}: $S$ is $\gamma$-listable if and only if $f^*\rho^*\theta^*(S)=\gamma^*(S)\subseteq \mathbb{N}^k$ is listable, which happens if and only if $\rho^*\theta^*(S)\subseteq \mathbb{N}^{rk}$ is listable. Thus, $S$ is $\gamma$-listable if and only if $\theta^*(S)$ is $\rho$-listable.
\end{proof} \begin{lemma}\label{LemmaListtoInt} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$ and let $\rho$ be a listable presentation for it. Then $\rho:\mathbb{N}\to M$ defines a p.e. interpretation of $\mathfrak{M}$ in the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{lemma} \begin{proof} The map $\rho:\mathbb{N} \to M$ is surjective. For each $s\in \mathscr{L}$ we have that $\rho^*(s^\mathfrak{M})$ is listable, and by the DPRM theorem $\rho^*(s^\mathfrak{M})$ is p.e. $\mathscr{L}_a$-definable over $\mathbb{N}$. \end{proof} \begin{theorem}\label{ThmListableN} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure. Then $\mathfrak{M}$ is p.e. interpretable in the $\mathscr{L}_a$-structure $\mathbb{N}$ if and only if $\mathfrak{M}$ is listable. \end{theorem} \begin{proof} The forward implication follows from Lemma \ref{LemmaListN} and Proposition \ref{PropIntimplList}. The converse follows from Lemma \ref{LemmaListtoInt}. \end{proof} \begin{corollary}\label{CoroListableZQ} The $\mathscr{L}_a$-structures $\mathbb{Z}$ and $\mathbb{Q}$ are listable. \end{corollary} \begin{proof} This follows from Lemmas \ref{LemmaIntNZ} and \ref{LemmaQinNZ} together with Theorem \ref{ThmListableN}. \end{proof} The previous corollary admits, of course, a direct proof. \subsection{Equivalence of listable presentations} \label{SecEquivListStr} \begin{lemma} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure and let $\rho$ and $\gamma$ be listable presentations for it. If there is a total recursive function $\phi:\mathbb{N}\to \mathbb{N}$ with $\gamma=\rho\circ \phi$, then there is a total recursive function $\psi:\mathbb{N}\to \mathbb{N}$ with $\rho=\gamma\circ \psi$. \end{lemma} \begin{proof} Let $g=(g_0,g_1,g_2):\mathbb{N}\to \mathbb{N}^3$ be a total recursive function with image $\mathbb{N}\times E_\rho$, which exists since $E_\rho$ is listable.
Let us define the function $\psi:\mathbb{N}\to \mathbb{N}$ by $$ \psi(x)= g_0\left(\mu y[\phi(g_0(y))=g_1(y)\mbox{ and }x=g_2(y)]\right). $$ Then $\psi:\mathbb{N}\to\mathbb{N}$ is recursive, and it is total: given $x$, surjectivity of $\gamma$ provides some $n$ with $\gamma(n)=\rho(x)$, so $(\phi(n),x)\in E_\rho$ and the triple $(n,\phi(n),x)$ lies in the image of $g$. Given $x$, the integer $n=\psi(x)$ satisfies $(\phi(n),x)\in E_\rho$, that is, $\gamma(n)=\rho(\phi(n))=\rho(x)$. This proves $\gamma\circ \psi = \rho$. \end{proof} If $\rho$ and $\gamma$ are listable presentations of an $\mathscr{L}$-structure $\mathfrak{M}$, we write $\rho\approx \gamma$ if there is a total recursive function $\phi:\mathbb{N}\to \mathbb{N}$ with $\gamma=\rho\circ \phi$. The relation $\approx$ is reflexive (take $\phi=\mathrm{Id}_\mathbb{N}$) and transitive (compose the witnessing functions), while the previous lemma gives symmetry. Hence: \begin{proposition} Given a listable $\mathscr{L}$-structure $\mathfrak{M}$, the relation $\approx$ is an equivalence relation on the set of listable presentations of $\mathfrak{M}$. \end{proposition} Accordingly, if $\rho\approx \gamma$ we say that $\rho$ and $\gamma$ are \emph{equivalent}. In the special case that all listable presentations of a structure are equivalent to each other, we say that the structure is \emph{uniquely listable}. \begin{proposition}\label{PropFinUL} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. If the domain of $\mathfrak{M}$ is finite, then $\mathfrak{M}$ is uniquely listable. \end{proposition} \begin{proof} Let $\rho$ and $\gamma$ be listable presentations of $\mathfrak{M}$. Let $M$ be the domain of $\mathfrak{M}$ and for each $m\in M$ let $A_m=\gamma^{-1}(m)$. Then $\{A_m : m\in M\}$ is a partition of $\mathbb{N}$ and each $A_m$ is listable, hence, decidable (it has listable complement). For each $m\in M$ choose $y_m\in \rho^{-1}(m)$. Define $\phi:\mathbb{N}\to \mathbb{N}$ by $\phi(x)=y_m$ if $x\in A_m$. Then $\phi$ is total recursive (defined by decidable cases) and $\gamma=\rho\circ \phi$.
\end{proof} From Lemma \ref{LemmaBooleanN} we deduce \begin{lemma}[Equivalent listable presentations have the same listable sets] \label{LemmaComparisonListSets} Let $\rho$ and $\gamma$ be listable presentations of an $\mathscr{L}$-structure $\mathfrak{M}$. If $\rho\approx \gamma$, then the class of $\rho$-listable sets is the same as the class of $\gamma$-listable sets over $\mathfrak{M}$. \end{lemma} We also have a partial converse to the previous result. First we need: \begin{lemma}\label{LemmaBij} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with infinite domain $M$. Let $\rho$ be a listable presentation for $\mathfrak{M}$ and assume that $E_\rho$ is decidable. There is an injective total recursive function $\iota_\rho: \mathbb{N} \to \mathbb{N}$ such that $\rho\circ \iota_\rho : \mathbb{N} \to M$ is bijective. \end{lemma} \begin{proof} Let $\chi_{E_\rho}$ be the characteristic function of $E_\rho$, which is total recursive by assumption. We define a function $h:\mathbb{N}\to \mathbb{N}$ as follows: We set $h(0)=0$ and for $x>0$ we let $$ h(x)=\mu y \left[\sum_{j<x}\chi_{E_\rho}(h(j),y)=0\right]. $$ The function $h$ is total because $M$ is infinite, and it is recursive because it is defined by minimalization and course-of-values recursion. Given $x>0$, we note that $h(x)$ is the least value of $y$ for which $\rho(y)\ne \rho(h(j))$ for each $j<x$, so $\rho\circ h$ is bijective and we can take $\iota_\rho=h$. \end{proof} \begin{corollary}[Making a listable presentation bijective]\label{CoroBij} Let $\mathfrak{M}$ be an infinite $\mathscr{L}$-structure with a listable presentation $\rho$. The following are equivalent: \begin{itemize} \item[(i)] $\ne$ is $\rho$-listable over $\mathfrak{M}$. \item[(ii)] $E_\rho$ is decidable. \item[(iii)] There is a bijective listable presentation $\tilde{\rho}$ for $\mathfrak{M}$ which satisfies $\tilde{\rho}\approx \rho$. 
\end{itemize} \end{corollary} \begin{proof} Items (i) and (ii) are equivalent by Lemma \ref{LemmaNE}. Assuming (ii), Lemma \ref{LemmaBij} allows us to take $\tilde{\rho}=\rho\circ \iota_\rho$. This proves (iii). Conversely, if (iii) holds, then $E_{\tilde{\rho}}$ is decidable by Lemma \ref{LemmaBijDec}. Thus, the diagonal $\Delta\subseteq M^2$ is $\tilde{\rho}$-decidable, which implies that $\Delta$ is $\rho$-decidable by Lemma \ref{LemmaComparisonListSets}. Hence, $E_\rho=\rho^*(\Delta)$ is decidable. \end{proof} With Corollary \ref{CoroBij} we can give a refinement of Lemma \ref{LemmaComparisonListSets}. \begin{theorem}[Equivalence and comparison of listable sets]\label{ThmEquivCompare} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure and let $\rho,\gamma$ be listable presentations. In the following (i) implies (ii), and (ii) implies (iii): \begin{itemize} \item[(i)] $\rho\approx \gamma$ \item[(ii)] The class of $\rho$-listable sets is the same as the class of $\gamma$-listable sets. \item[(iii)] The class of $\rho$-listable sets is contained in the class of $\gamma$-listable sets. \end{itemize} Furthermore, if $E_\rho$ is decidable, then the three properties are equivalent. \end{theorem} \begin{proof} (i) implies (ii) by Lemma \ref{LemmaComparisonListSets}, and it is clear that (ii) implies (iii). It only remains to show that (iii) implies (i) assuming that $E_\rho$ is decidable. By Proposition \ref{PropFinUL} it suffices to consider the case when $M$ is infinite. Assume (iii) and that $E_\rho$ is decidable. By Corollary \ref{CoroBij} we may assume that $\rho:\mathbb{N}\to M$ is bijective after replacing it by an equivalent listable presentation ---the class of $\rho$-listable sets in (iii) remains the same by Lemma \ref{LemmaComparisonListSets}. Let $T_\rho=\{(\rho(n), \rho(n+1)) : n\in \mathbb{N}\}\subseteq M^2$. Since $T_\rho$ is $\rho$-listable, we get that $T_\rho$ is $\gamma$-listable by (iii). 
Let $g=(g_1,g_2):\mathbb{N}\to \mathbb{N}^2$ be a total recursive function with image $\gamma^*(T_\rho)$. Define a function $\psi:\mathbb{N}\to \mathbb{N}$ by choosing any $\psi(0)\in \gamma^{-1}(\rho(0))$ and for $x>0$ we define $$ \psi(x)=g_2\left(\mu y [ g_1(y)=\psi(x-1) ] \right). $$ Then $\psi:\mathbb{N}\to\mathbb{N}$ is total recursive: every natural number occurs as a value of $g_1$ (as every element of $M$ is the first coordinate of some pair in $T_\rho$), so each search terminates. Note that for each $n\ge 0$ we have $(\gamma(\psi(n)),\gamma(\psi(n+1)))\in T_\rho$. Since $\gamma(\psi(0))=\rho(0)$ and, as $\rho$ is bijective, the only pair in $T_\rho$ with first coordinate $\rho(n)$ is $(\rho(n),\rho(n+1))$, induction on $n$ gives $\gamma(\psi(n))=\rho(n)$ for every $n$, that is, $\gamma\circ \psi=\rho$. This proves $\rho\approx \gamma$. \end{proof} \begin{corollary} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure for which the binary relation $\ne$ is totally listable. Given listable presentations $\rho$ and $\gamma$ for $\mathfrak{M}$, the following are equivalent: \begin{itemize} \item[(i)] $\rho\approx \gamma$ \item[(ii)] The class of $\rho$-listable sets is the same as the class of $\gamma$-listable sets. \item[(iii)] The class of $\rho$-listable sets is contained in the class of $\gamma$-listable sets. \end{itemize} \end{corollary} \begin{proof} By Lemma \ref{LemmaNE} and Theorem \ref{ThmEquivCompare}. \end{proof} Given listable presentations $\rho,\gamma:\mathbb{N}\to M$ of an $\mathscr{L}$-structure $\mathfrak{M}$, we define $$ \Delta(\gamma,\rho):=\{(m,n)\in \mathbb{N}^2 : \gamma(m)=\rho(n)\}\subseteq \mathbb{N}^2. $$ In particular, note that $E_\rho=\Delta(\rho,\rho)$. \begin{lemma}[Diagonal test for equivalence]\label{LemmaDTE} Let $\gamma, \rho:\mathbb{N}\to M$ be listable presentations for $\mathfrak{M}$. We have that $\gamma\approx\rho$ if and only if $\Delta(\gamma,\rho)\subseteq \mathbb{N}^2$ is listable. \end{lemma} \begin{proof} Assume $\rho\approx \gamma$ and let $\psi:\mathbb{N}\to \mathbb{N}$ be a total recursive function with $\gamma\circ \psi=\rho$. Let $\epsilon^\gamma=(\epsilon^\gamma_1,\epsilon^\gamma_2):\mathbb{N}\to \mathbb{N}^2$ be a total recursive map with image $E_\gamma$.
Define the partial function $$ \delta(m,n)=\mu y[\epsilon^\gamma_2(y)=m \mbox{ and } \epsilon^\gamma_1(y)=\psi(n) ]. $$ The function $\delta$ is partial recursive and its domain is $\Delta(\gamma,\rho)$. Thus, $\Delta(\gamma,\rho)$ is listable. Conversely, assume that $\Delta(\gamma,\rho)$ is listable. It is non-empty ($\gamma$ and $\rho$ are surjections onto the same set $M$), so there is a total recursive $f=(f_1,f_2):\mathbb{N}\to \mathbb{N}^2$ with image $\Delta(\gamma,\rho)$. Observe that both $f_1,f_2:\mathbb{N}\to \mathbb{N}$ are total recursive and surjective. Define $$ \phi(n)=f_2(\mu y [f_1(y)=n]). $$ Then $\phi:\mathbb{N}\to \mathbb{N}$ is total recursive and it satisfies $\rho(\phi(n))=\gamma(n)$ for all $n\ge 0$. Hence $\rho\approx \gamma$. \end{proof} In view of Lemma \ref{LemmaListtoInt} we get \begin{corollary} If $\rho,\gamma:\mathbb{N}\to M$ are listable presentations for an $\mathscr{L}$-structure $\mathfrak{M}$, we have $\rho\approx\gamma$ if and only if $\rho\asymp\gamma$ as p.e. interpretations. \end{corollary} \begin{proof} Note that in this case $K(\gamma, \rho)=\Delta(\gamma,\rho)$. By the DPRM theorem, we see that $K(\gamma, \rho)$ is p.e. $\mathscr{L}_a$-definable over $\mathbb{N}$ if and only if $\Delta(\gamma,\rho)$ is listable. We conclude by Lemma \ref{LemmaDTE}. \end{proof} \subsection{Uniquely listable structures} We have seen that listable structures with finite domain are uniquely listable (cf. Proposition \ref{PropFinUL}). However, not all listable structures are uniquely listable. For instance, let $H\subseteq \mathbb{N}$ be a listable undecidable set and consider the structure $(\mathbb{N};H,=)$. The identity map $\rho:\mathbb{N}\to \mathbb{N}$ is a listable presentation. Another listable presentation is given by the set-theoretical bijection $\gamma:\mathbb{N}\to \mathbb{N}$ mapping $2n$ to the $n$-th element of $H$ and $2n+1$ to the $n$-th element of $H^c$. Since $H^c$ is $\gamma$-listable but not $\rho$-listable, Lemma \ref{LemmaComparisonListSets} gives $\rho\not\approx \gamma$.
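The interleaving map $\gamma$ above is only set-theoretical: a listable presentation is not required to be a computable function, and for undecidable $H$ this $\gamma$ is not computable. Still, the construction can be sketched concretely. In the Python sketch below (ours, purely illustrative), $H$ is replaced by a decidable stand-in, the multiples of $3$, so that $\gamma$ becomes computable; one can then check that $\gamma^*(H)$ is the set of even numbers and $\gamma^*(H^c)$ the set of odd numbers, which is the feature exploited in the argument.

```python
# Sketch of the interleaving presentation gamma from the example above.
# Caveat: here H is a *decidable* stand-in (multiples of 3) chosen only to
# make the construction concrete; the actual argument needs H listable but
# undecidable (e.g. a halting set), so that H^c fails to be rho-listable.

def in_H(x):
    return x % 3 == 0  # decidable stand-in for the listable set H

def nth(pred, n):
    """The n-th natural number (0-indexed) satisfying pred."""
    count, x = 0, 0
    while True:
        if pred(x):
            if count == n:
                return x
            count += 1
        x += 1

def gamma(k):
    """gamma(2n) = n-th element of H, gamma(2n+1) = n-th element of H^c."""
    n, parity = divmod(k, 2)
    return nth(in_H, n) if parity == 0 else nth(lambda x: not in_H(x), n)
```

With a genuinely listable-but-undecidable $H$, the helper computing the $n$-th element of $H$ in increasing order is no longer recursive, which is why $\gamma$ is merely set-theoretical in the text.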
Let us discuss the problem of determining whether a listable structure is uniquely listable. First, we have the following basic transference property. \begin{lemma}[Transference of unique listability] \label{LemmaUnique} Let $\mathscr{L}$ and $\mathscr{K}$ be languages. Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$, and let $\mathfrak{N}$ be a uniquely listable $\mathscr{K}$-structure with domain $N$. Suppose that there is a bijective function $\theta: M\to N$ defining a p.e. interpretation $\theta:\mathfrak{M}\dasharrow \mathfrak{N}$. Then $\mathfrak{M}$ is uniquely listable. \end{lemma} \begin{proof} Given $\rho,\gamma:\mathbb{N}\to M$ listable presentations for $\mathfrak{M}$ we have that $\theta\circ\rho$ and $\theta\circ\gamma$ are listable presentations of $\mathfrak{N}$ (by Proposition \ref{PropIntimplList} with $f=\mathrm{Id}_\mathbb{N}$) and therefore they are equivalent. Let $\phi:\mathbb{N}\to\mathbb{N}$ be a total recursive function with $\theta\circ \rho\circ \phi=\theta\circ\gamma$, then $\rho\circ \phi=\gamma$ because $\theta$ is injective, and we get $\rho\approx \gamma$. \end{proof} Unfortunately, this transference property is rather restrictive and a more flexible criterion for unique listability is necessary. Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$. A \emph{universal listing} for $\mathfrak{M}$ is a surjective set-theoretical function $\tau:\mathbb{N}\to M$ satisfying that for every listable presentation $\rho:\mathbb{N}\to M$ there is a total recursive function $a_{\rho}^{\tau}:\mathbb{N}\to \mathbb{N}$ such that $\rho\circ a_{\rho}^\tau = \tau$. Universal listings are relevant for us due to the following relation with unique listability. \begin{lemma}[Universal listings and unique listability]\label{LemmaUL} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure which admits a universal listing $\tau$. 
Then $\mathfrak{M}$ is uniquely listable and $\tau$ is a listable presentation for it. \end{lemma} \begin{proof} Let $\rho$ be any listable presentation for $\mathfrak{M}$. By assumption, $\tau$ is surjective. Since $\tau=\rho\circ a^\tau_\rho$ and $a^\tau_\rho:\mathbb{N}\to \mathbb{N}$ is total recursive, we see that for every $s\in \mathscr{L}$ the set $\tau^*(s^\mathfrak{M})=(a^\tau_\rho)^*(\rho^*(s^{\mathfrak{M}}))$ is listable. Hence, $\tau$ is a listable presentation for $\mathfrak{M}$ and the relation $\tau=\rho\circ a^\tau_\rho$ implies $\rho\approx \tau$. \end{proof} Naturally, there is the problem of showing that a given structure actually has some universal listing. For this we have: \begin{theorem}[Criterion for unique listability]\label{ThmUL} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. Let $r,k$ be positive integers and let $c\in \mathbb{N}$. Let us choose the following: \begin{itemize} \item a total recursive function $h=(h_1,...,h_r):\mathbb{N}\to \mathbb{N}^r$ such that $h_j(n)<n$ for each $j=1,...,r$ and all $n>c$, \item a partition $A_1,...,A_k$ of $\mathbb{N}_{>c}$ with each $A_i$ decidable, \item for each $i=1,2,...,k$, a p.e. $\mathscr{L}$-definable function $F_i: V_i\to M$ with domain $V_i\subseteq M^r$. \end{itemize} Let $\tau:\mathbb{N}\to M$ be a set-theoretical function satisfying the following conditions: \begin{itemize} \item[(i)] $\tau$ is surjective. \item[(ii)] If $n\in A_i$ and $n>c$, then $h(n)\in \tau^* (V_i)$. \item[(iii)] For each $n>c$, we have $$ \tau(n)=\begin{cases} F_1(\tau(h_1(n)),...,\tau(h_r(n)))\mbox{ if }n\in A_1\\ \vdots\\ F_k(\tau(h_1(n)),...,\tau(h_r(n)))\mbox{ if }n\in A_k. \end{cases} $$ \end{itemize} Then $\tau$ is a universal listing. In particular, $\mathfrak{M}$ is uniquely listable and $\tau$ is a listable presentation. \end{theorem} \begin{proof} Let $\rho:\mathbb{N}\to M$ be a listable presentation. 
Let $$ \Gamma_i=\{(x_0,...,x_r): x_0=F_i(x_1,...,x_r)\}\subseteq M^{r+1} $$ and note that $\Gamma_i$ is p.e. $\mathscr{L}$-definable. By Corollary \ref{CoroTotList} the set $\rho^*(\Gamma_i)\subseteq \mathbb{N}^{r+1}$ is listable. For each $i=1,...,k$ let $f_i=(f_{i0},...,f_{ir}):\mathbb{N}\to \mathbb{N}^{r+1}$ be a total recursive map with image $\rho^*(\Gamma_i)$. We note that $(f_{i1},...,f_{ir}):\mathbb{N}\to \mathbb{N}^r$ has image $\rho^*(V_i)$ because $V_i$ is the domain of $F_i$. We claim that there is a total recursive function $\alpha:\mathbb{N}\to \mathbb{N}$ satisfying the following: \begin{itemize} \item[(a)] For each $n\in \mathbb{N}$ we have $\rho (\alpha(n)) = \tau(n)$. \item[(b)] For each $n\in A_i$ with $n>c$ we have $(\alpha(h_1(n)),...,\alpha(h_r(n)))\in \rho^*(V_i)$. \item[(c)] For each $n>c$, \begin{equation}\label{EqnDefalpha} \alpha(n)=\begin{cases} f_{10}(\mu y [ f_{1j}(y)=\alpha(h_j(n)) \mbox{ for each }j=1,...,r] )\mbox{ if }n\in A_1\\ \vdots\\ f_{k0}(\mu y [ f_{kj}(y)=\alpha(h_j(n)) \mbox{ for each }j=1,...,r] )\mbox{ if }n\in A_k. \end{cases} \end{equation} \end{itemize} Let us choose any values $\alpha(n)\in \rho^{-1}(\tau(n))$ for $n\le c$. Then $\rho(\alpha(m))=\tau(m)$ for $m=0,1,...,c$. We will recursively construct the values of the function $\alpha(n)$ for larger values of $n$ by using (c), and along the construction we will inductively prove that (a) and (b) hold. Let us fix an $n>c$ and let us assume that for each $m<n$ we have that $\alpha(m)\in \mathbb{N}$ is already defined and that $\rho(\alpha(m))=\tau(m)$ holds. Let $i$ be the index with $n\in A_i$. Note that $h_j(n)<n$ for each $j=1,...,r$, hence $(\alpha(h_1(n)),...,\alpha(h_r(n)))\in \mathbb{N}^r$ is already defined and \begin{equation}\label{Eqnu1} (\rho(\alpha(h_1(n))),...,\rho(\alpha(h_r(n)))) = (\tau(h_1(n)),...,\tau(h_r(n)))\in V_i \end{equation} by condition (ii).
That is, $$ (\alpha(h_1(n)), ..., \alpha(h_r(n)))\in \rho^*(V_i)\subseteq \mathbb{N}^r $$ as required by (b). Since $\rho^*(V_i)$ is the image of $(f_{i1},...,f_{ir}):\mathbb{N}\to \mathbb{N}^r$, there is some $y\in \mathbb{N}$ such that $f_{ij}(y)=\alpha(h_j(n))$ for each $j=1,...,r$. Therefore, \eqref{EqnDefalpha} uniquely defines $\alpha(n)\in \mathbb{N}$. Furthermore, if $y_0$ is the minimal such $y$ for our chosen $n$, then $$ \begin{aligned} \rho(\alpha(n))& =\rho(f_{i0}(y_0)) = F_i(\rho(f_{i1}(y_0)),...,\rho(f_{ir}(y_0)))\quad &\mbox{ by definition of }f_i:\mathbb{N}\to \mathbb{N}^{r+1}\\ & = F_i(\rho(\alpha(h_1(n))),...,\rho(\alpha(h_r(n)))) \quad &\mbox{ by choice of }y_0\\ &= F_i(\tau(h_1(n)),...,\tau(h_r(n))) \quad &\mbox{ by \eqref{Eqnu1}}\\ &= \tau(n) \quad &\mbox{ by condition (iii).} \end{aligned} $$ This proves that $\rho(\alpha(n))=\tau(n)$ holds, as required by (a). Finally, it only remains to observe that the function $\alpha:\mathbb{N}\to \mathbb{N}$ that we have constructed is total recursive. In fact, we already proved that $\alpha$ is a total function, and it is defined by the first chosen values $\alpha(0),\alpha(1),...,\alpha(c)$ together with the condition \eqref{EqnDefalpha} for $n>c$. The condition \eqref{EqnDefalpha} shows that $\alpha$ is recursive because it only involves the total recursive functions $f_{ij}$ and $h_j$, the schema of definition by decidable cases, the minimalization operator, and the schema of course-of-values recursion. In particular, we have constructed a total recursive function $\alpha:\mathbb{N}\to \mathbb{N}$ satisfying condition (a). The function $\tau:\mathbb{N}\to M$ is surjective by assumption (i), and the choice $a^\tau_\rho=\alpha$ shows that $\tau$ is a universal listing. We conclude by Lemma \ref{LemmaUL}.
\end{proof} \subsection{Examples}\label{SecULexamples} A first example to explain how to use the previous results on unique listability: \begin{proposition}\label{PropExN} Let $\mathfrak{N}$ be an $\mathscr{L}$-structure with domain $\mathbb{N}$ such that \begin{itemize} \item[(i)] For each $s\in \mathscr{L}$, the set $s^{\mathfrak{N}}$ is listable. \item[(ii)] The constant $0\in \mathbb{N}$ and the successor function $S:\mathbb{N}\to \mathbb{N}$, $S(x)=x+1$, are p.e. $\mathscr{L}$-definable in $\mathfrak{N}$. \end{itemize} Then $\mathfrak{N}$ is uniquely listable. In particular, this holds for $(\mathbb{N};0,S,=)$ and $(\mathbb{N};0,1,+,\times,=)$. \end{proposition} \begin{proof} The identity function gives a listable presentation by (i). It remains to prove uniqueness up to equivalence. We apply Theorem \ref{ThmUL} with $c=0$, $r=k=1$, $A_1=\mathbb{N}_{>0}$, $h(n)=\max\{0,n-1\}$, $F_1=S$, $V_1=\mathbb{N}$, and $\tau=\mathrm{Id}_\mathbb{N}$. The result follows. \end{proof} Next we consider the case of $\mathbb{Q}$ in detail. We begin with a folklore fact. \begin{lemma}\label{LemmaQnum} Let $q: \mathbb{Z}_{>0}\to \mathbb{Q}_{>0}$ be the function defined by $q(1)=1$ and for $n\ge 2$: $$ q(n)=\begin{cases} q(n/2) + 1&\mbox{ if $n\ge 2$ is even}\\ 1/q(n-1) &\mbox{ if $n\ge 2$ is odd.} \end{cases} $$ Then $q: \mathbb{Z}_{>0}\to \mathbb{Q}_{>0}$ is bijective. \end{lemma} \begin{proof} Take any rational number $r>0$ and recall that it admits a unique continued fraction expansion $[a_0;a_1,...,a_d]$ for some $d\ge 1$ under the requirements $a_0\ge 0$, $a_j\ge 1$ for $j\ge 1$, and $a_d=1$. We note that choosing the positive integer \begin{equation}\label{Eqnbinary} n=2^{a_0}(2^{a_1}(...(2^{a_{d-2}}(2^{a_{d-1}}+1)+1)...)+1) \end{equation} we get $q(n)=r$, so $q(\mathbb{Z}_{>0})=\mathbb{Q}_{> 0}$.
Furthermore, $q$ is injective because the expression \eqref{Eqnbinary} always exists and is unique for a given $n\ge 1$ under the requirements $a_0\ge 0$ and $a_j\ge 1$ for $j\ge 1$ ---expanding the product in \eqref{Eqnbinary} we get the binary expansion of $n$. \end{proof} \begin{corollary}\label{CoroQtau} Let $\tau:\mathbb{N}\to \mathbb{Q}$ be defined by $\tau(0)=0$, $\tau(1)=1$, $\tau(2)=-1$ and for $n\ge 3$ $$ \tau(n)=\begin{cases} \tau((n-1)/2)+1 &\mbox{ if }n\equiv 3\bmod 4\\ \tau(n/2)-1 &\mbox{ if }n\equiv 0\bmod 4\\ 1/\tau(n-2) &\mbox{ if }n\equiv 1,2\bmod 4. \end{cases} $$ Then $\tau:\mathbb{N}\to \mathbb{Q}$ is bijective. \end{corollary} \begin{proof} This follows from Lemma \ref{LemmaQnum}, since the sequence of values $\tau(n)$ for $n=0,1,2,...$ is precisely $0$, $q(1)$, $-q(1)$, $q(2)$, $-q(2)$, ... \end{proof} \begin{proposition} \label{PropULQ} Let $\mathfrak{Q}$ be an $\mathscr{L}$-structure with domain $\mathbb{Q}$ such that \begin{itemize} \item[(i)] $\mathfrak{Q}$ admits some listable presentation. \item[(ii)] The constant $0\in \mathbb{Q}$, the successor function $S:\mathbb{Q}\to \mathbb{Q}$, $S(x)=x+1$, and the multiplicative inverse function $R:\mathbb{Q}^\times \to \mathbb{Q}$, $R(x)=1/x$ are p.e. $\mathscr{L}$-definable in $\mathfrak{Q}$. \end{itemize} Then $\mathfrak{Q}$ is uniquely listable and the function $\tau:\mathbb{N}\to \mathbb{Q}$ of Corollary \ref{CoroQtau} is a listable presentation. \end{proposition} \begin{proof} First we note that the function $P:\mathbb{Q}\to\mathbb{Q}$ given by $y=P(x)=x-1$ is p.e. $\mathscr{L}$-defined by the formula $x=S(y)$. Let us apply Theorem \ref{ThmUL} to show that the function $\tau:\mathbb{N}\to \mathbb{Q}$ from Corollary \ref{CoroQtau} is a universal listing for $\mathfrak{Q}$. 
We choose $c=2$, $r=1$, $k=3$, $A_1=\{n\ge 3 : n\equiv 3\bmod 4\}$, $A_2=\{n\ge 3 : n\equiv 0\bmod 4\}$, $A_3=\{n\ge 3 : n\equiv 1,2\bmod 4\}$, and the total recursive function $h:\mathbb{N}\to \mathbb{N}$ given by $$ h(n)=\begin{cases} (n-1)/2 &\mbox{ if }n\equiv 3\bmod 4\\ n/2 &\mbox{ if } n\equiv 0\bmod 4\\ \max\{0, n-2\} &\mbox{ if }n\equiv 1,2\bmod 4. \end{cases} $$ With these choices, the function $\tau:\mathbb{N}\to \mathbb{Q}$ from Corollary \ref{CoroQtau} satisfies $\tau(0)= 0$, $\tau(1)= S(0)$, $\tau(2)=P(0)$ and for $n\ge 3$ we have $$ \tau(n)=\begin{cases} S(\tau(h(n)))\mbox{ if } n\in A_1\\ P(\tau(h(n)))\mbox{ if } n\in A_2\\ R(\tau(h(n)))\mbox{ if } n\in A_3. \end{cases} $$ Theorem \ref{ThmUL} implies that $\tau$ is a universal listing. Hence the result. \end{proof} \begin{corollary}\label{CoroQulist} The $\mathscr{L}_a$-structure $\mathbb{Q}$ is uniquely listable. \end{corollary} \begin{proof} By Corollary \ref{CoroListableZQ} and Proposition \ref{PropULQ}. \end{proof} In a similar fashion, one can get several other results. We include here the case of a certain class of finitely generated structures, which generalizes the examples we have presented in detail so far in this section. The proof goes along the same lines. \begin{proposition}\label{PropFG} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure with domain $M$. Suppose that there is a finite list of elements $g_0, g_1, ...,g_c\in M$ and functions $F_1,...,F_k:M^r\to M$ such that \begin{itemize} \item each $g_j$ and each $F_i$ is p.e. $\mathscr{L}$-definable over $\mathfrak{M}$, and \item $M$ is generated by the elements $g_j$ and the functions $F_i$, in the sense that each element of $M$ is obtained by applying a suitable composition of the $F_i$ to the elements $g_j$. \end{itemize} Then $\mathfrak{M}$ is uniquely listable. \end{proposition} One immediately gets the following generalization of Corollary \ref{CoroQulist}: \begin{corollary}\label{CoroULfg} Let $M$ be a finitely generated ring.
Let $\mathscr{L}=\mathscr{L}_a\cup\{g_0,g_1,...,g_c\}$ where the $g_i$ are constant symbols interpreted in $M$ as a list of ring generators. Then $M$, seen as an $\mathscr{L}$-structure, is uniquely listable. The same result holds if $M$ is a finitely generated field, in which case the $g_i$ should be taken to correspond to a list of field generators. \end{corollary} \begin{proof} It is a standard fact that finitely generated rings and fields are p.e. interpretable in the $\mathscr{L}_a$-structure $\mathbb{N}$; see for instance Corollary 2.14 in \cite{AKNS} and note that the constructed interpretation is p.e. (idea: it suffices to interpret rings of the form $\mathbb{Z}[x_1,...,x_n]$ and then take quotients and localizations, which leads to p.e. interpretations). The result follows from Theorem \ref{ThmListableN} and Proposition \ref{PropFG}. \end{proof} \section{The DPRM property for listable structures}\label{SecDPRM} \subsection{The DPRM property}\label{SecDPRM1} Let $\mathfrak{M}$ be a listable $\mathscr{L}$-structure. Recall that a set $X\subseteq M^r$ is totally listable if for every listable presentation $\rho:\mathbb{N}\to M$ we have that $X$ is $\rho$-listable. In the special case that $\mathfrak{M}$ is uniquely listable, the class of totally listable sets over $\mathfrak{M}$ is the same as the class of $\rho$-listable sets for any chosen listable presentation $\rho:\mathbb{N}\to M$ (cf. Lemma \ref{LemmaComparisonListSets}). So, if $\mathfrak{M}$ is uniquely listable, there is a well-defined notion of listable sets over $\mathfrak{M}$. By Corollary \ref{CoroTotList}, if $X\subseteq M^r$ is p.e. $\mathscr{L}$-definable, then it is totally listable. We are interested in listable structures $\mathfrak{M}$ for which the converse holds. We say that a listable $\mathscr{L}$-structure $\mathfrak{M}$ has the \emph{DPRM property} if for each $r\ge 1$, every totally listable set $X\subseteq M^r$ is p.e. $\mathscr{L}$-definable.
Thus, a listable structure $\mathfrak{M}$ has the DPRM property if and only if the class of totally listable sets over $\mathfrak{M}$ is the same as the class of p.e. $\mathscr{L}$-definable sets over $\mathfrak{M}$. Of course, $\mathbb{N}$ as an $\mathscr{L}_a$-structure has the DPRM property precisely by the DPRM theorem. It easily follows that $\mathbb{Z}$ as an $\mathscr{L}_a$-structure also has the DPRM property (see Lemma \ref{LemmaZDPRM} for details). A basic feature of listable structures having the DPRM property is the following: \begin{lemma} If $\mathfrak{M}$ is a listable $\mathscr{L}$-structure with the DPRM property, then all finite subsets of $M^r$ are p.e. $\mathscr{L}$-definable. In particular, each element of $M$ is p.e. $\mathscr{L}$-definable. \end{lemma} \begin{proof} By Corollary \ref{CoroFinSets}. \end{proof} In particular, the DPRM property has a simple characterization for finite structures. \begin{corollary} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with finite domain. Then $\mathfrak{M}$ has the DPRM property if and only if each element of $M$ is p.e. $\mathscr{L}$-definable. In this case, every subset of $M^r$ is p.e. $\mathscr{L}$-definable. \end{corollary} \subsection{Known results}\label{SecDPRMknown} Other authors have considered a slightly different variant of the DPRM property where only \emph{recursive presentations} are allowed. That is, surjective maps $\rho:\mathbb{N}\to M$ such that for every $s\in \mathscr{L}$ we have that $\rho^*(s^\mathfrak{M})$ is decidable. Let us momentarily call this variant the \emph{recursive DPRM property}. Note that if a set $X\subseteq M^r$ is $\rho$-listable for every listable presentation $\rho$, then the same holds for every recursive presentation. Therefore, the recursive DPRM property implies the DPRM property, and all available results in the literature establishing the recursive DPRM property for a certain structure are still valid if the DPRM property is considered.
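As a concrete illustration of a recursive presentation (our sketch, not taken from the literature): one may code each rational by a Cantor pair encoding a signed numerator and a denominator. Under this coding the pullback of equality is decidable, since one can simply decode and compare; the pullbacks of the graphs of addition and multiplication are decidable for the same reason.

```python
from fractions import Fraction
from math import isqrt

def unpair(z):
    """Inverse of the Cantor pairing (x, y) -> (x+y)(x+y+1)//2 + y."""
    w = (isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def rho(n):
    """Surjection N -> Q: decode a signed numerator and a positive denominator."""
    a, b = unpair(n)
    num = a // 2 if a % 2 == 0 else -((a + 1) // 2)
    return Fraction(num, b + 1)

def eq_code(m, n):
    """The pullback rho*(=) is decidable: decode both codes and compare."""
    return rho(m) == rho(n)
```

Distinct codes can represent the same rational (for instance, codes of $1/2$ and $2/4$), so this presentation is far from injective, yet equality of codes is decidable, which is what makes the presentation recursive rather than merely listable.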
Let us briefly survey the known cases of the DPRM property beyond $\mathbb{N}$ and $\mathbb{Z}$. Denef \cite{DenefZT} proved the DPRM property for $\mathbb{Z}[t]$. Zahidi \cite{ZahidiOKt} extended Denef's result to the polynomial ring $O_K[t]$ when $K$ is a totally real number field and $O_K$ is its ring of integers. Demeyer proved several other cases: $k[t]$ for $k$ a finite field or a recursive infinite algebraic extension of a finite field \cite{DemeyerInv}, and $A[t]$ where $A$ is a recursive subring of a number field \cite{DemeyerPoly}. Degroote and Demeyer \cite{DeDe} proved the case of $L[t]$ where $L$ is an automorphism-recursive algebraic extension of $\mathbb{Q}$ having an embedding into $\mathbb{R}$ or into a finite extension of $\mathbb{Q}_p$ for some prime $p$ (see \cite{DeDe} for the definition of automorphism-recursive extensions). Other than this, if $B$ is a uniquely listable ring and $A\subseteq B$ is a uniquely listable subring such that $A$ has the DPRM property and $A$ is Diophantine in $B$ (that is, p.e. $\mathscr{L}_a$-definable with parameters), then there are a number of cases where the DPRM property can be transferred from $A$ to $B$ by applying some version of Theorem \ref{ThmTransferDPRM} below, especially in the context of finite algebraic extensions of fields or Dedekind domains. See for instance Proposition \ref{PropOK} below. \subsection{The number of existential quantifiers}\label{SecEquant} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure. Given $r\ge 1$, a \emph{p.e. $r$-catalogue} for $\mathfrak{M}$ is a p.e. $\mathscr{L}$-formula $\Upsilon_r[x_0,x_1,...,x_r]$ with the following property: For every p.e. $\mathscr{L}$-definable set $X\subseteq M^r$ there is an element $m_X\in M$ such that $X=\{{\bf a}\in M^r : \mathfrak{M}\models \Upsilon_r[m_X,{\bf a}]\}$. (One may argue that ``universal p.e. formula'' would be a better terminology, but such an oxymoron might lead to confusion.) We record the following remark. 
\begin{lemma} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure. If $\mathfrak{M}$ has a p.e. $r$-catalogue for some $r\ge 1$, then it has a p.e. $n$-catalogue for every $1\le n\le r$. \end{lemma} Conversely, one has: \begin{lemma} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with domain $M$. If $\mathfrak{M}$ has a p.e. $1$-catalogue and there is a p.e. $\mathscr{L}$-definable injective function $M^2\to M$, then $\mathfrak{M}$ has a p.e. $r$-catalogue for every $r\ge 1$. \end{lemma} For an $\mathscr{L}$-structure $\mathfrak{M}$ and a subset $X\subseteq M^r$ which is p.e. $\mathscr{L}$-definable with parameters from $\mathfrak{M}$, we define the \emph{p.e. rank} of $X$ over $\mathfrak{M}$ as the minimal number of existential quantifiers required by a p.e. $\mathscr{L}$-formula to define $X$ \emph{with parameters from $\mathfrak{M}$}. The p.e. rank of $X$ over $\mathfrak{M}$ is denoted by $\mathrm{rank}^{p.e.}_{\mathfrak{M}}(X)$. We allow p.e. definitions with parameters because the p.e. rank is intended to be a rough measure of the complexity of a p.e. definable set, and it is desirable that all p.e. definable finite sets have the lowest possible complexity in this sense ---namely, $0$. The following observation follows from the definition of p.e. $r$-catalogues. \begin{lemma}[Boundedness of the p.e. rank]\label{LemmaCatBdd} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with domain $M$ and let $r\ge 1$. If $\mathfrak{M}$ has a p.e. $r$-catalogue, then there is a bound $B_\mathfrak{M}(r)$ depending only on $\mathfrak{M}$ and $r$, such that for every $n\le r$ and every p.e. $\mathscr{L}$-definable $X\subseteq M^n$ we have $\mathrm{rank}^{p.e.}_{\mathfrak{M}}(X)\le B_{\mathfrak{M}}(r)$. \end{lemma} \begin{theorem}[Existence of p.e. catalogues] \label{ThmCatDPRM} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with infinite domain $M$. Suppose that $\mathfrak{M}$ is uniquely listable, that it has the DPRM property, and that the binary relation $\ne$ is p.e.
$\mathscr{L}$-definable over $\mathfrak{M}$. Then $\mathfrak{M}$ has a p.e. $r$-catalogue $\Upsilon_r[x_0,x_1,...,x_r]$ for every $r\ge 1$. In particular, for every $r\ge 1$ there is a bound $B_\mathfrak{M}(r)$ (depending only on $\mathfrak{M}$ and $r$) such that for every p.e. $\mathscr{L}$-definable set $X\subseteq M^r$ we have $\mathrm{rank}^{p.e.}_{\mathfrak{M}}(X)\le B_\mathfrak{M}(r)$. \end{theorem} \begin{proof} Take any listable presentation $\rho:\mathbb{N}\to M$. Since $M$ is infinite and $\ne$ is p.e. $\mathscr{L}$-definable, Corollary \ref{CoroBij} yields a bijective listable presentation $\gamma:\mathbb{N}\to M$. Lemma \ref{LemmaUnivList} then gives a $\gamma$-listable set $U_r(\gamma)\subseteq M^{r+1}$ such that for every $\gamma$-listable set $X\subseteq M^r$ there is $m_X\in M$ such that $$ X = \{{\bf a}\in M^r : (m_X,{\bf a})\in U_r(\gamma)\}. $$ Since $\mathfrak{M}$ has the DPRM property and it is uniquely listable, $U_r(\gamma)$ is p.e. $\mathscr{L}$-definable. We can take $\Upsilon_r[x_0,x_1,...,x_r]$ to be any p.e. $\mathscr{L}$-formula defining $U_r(\gamma)$. \end{proof} \subsection{Transference of the DPRM property} Our next goal is to characterize when it is possible to transfer the DPRM property from one structure to another by means of a p.e. interpretation. For simplicity, we will work with uniquely listable structures. First we need: \begin{lemma}\label{LemmaSelf} Let $\mathfrak{M}$ be a uniquely listable $\mathscr{L}$-structure with domain $M$ and suppose that it has the DPRM property. Then for every p.e. self-interpretation $\lambda:\mathfrak{M}\dasharrow \mathfrak{M}$ of some rank $r$ we have that $$ \Gamma(\lambda) =\{(x_0,x_1,...,x_r)\in M^{r+1} : (x_1,...,x_r)\in \mathrm{dom}(\lambda)\mbox{ and } x_0=\lambda(x_1,...,x_r)\} $$ is p.e. $\mathscr{L}$-definable. \end{lemma} \begin{proof} Since $\mathrm{dom}(\lambda)$ is p.e. $\mathscr{L}$-definable, it is totally listable (cf. Corollary \ref{CoroTotList}).
Let $\rho$ be a listable presentation for $\mathfrak{M}$ and let $f=(f_1,...,f_r):\mathbb{N}\to \mathbb{N}^r$ be a total recursive function with image $\rho^*(\mathrm{dom}(\lambda))$. The map $\gamma=\lambda\circ \rho^{(r)}\circ f:\mathbb{N}\to M$ is a listable presentation for $\mathfrak{M}$, by Proposition \ref{PropIntimplList}. Since $\mathfrak{M}$ is uniquely listable, $\gamma\approx \rho$ and there is a total recursive function $\phi:\mathbb{N}\to \mathbb{N}$ with $\gamma=\rho\circ \phi$. Note that $\rho^{(r)}\circ f:\mathbb{N}\to M^r$ maps onto $\mathrm{dom}(\lambda)$, so $\Gamma(\lambda)$ consists precisely of elements of the form $$ (\gamma(n), \rho^{(r)}(f(n))) = (\rho(\phi(n)) , \rho^{(r)}(f(n))) = \rho^{(r+1)}\circ(\phi,f_1,...,f_r)(n) $$ for some $n\in \mathbb{N}$. We conclude by Lemma \ref{LemmaCharList} (iii) and the DPRM property. \end{proof} \begin{theorem}[Transference of DPRM]\label{ThmTransferDPRM} For $i=1,2$ let $\mathscr{L}_i$ be a language and let $\mathfrak{M}_i$ be a uniquely listable $\mathscr{L}_i$-structure with domain $M_i$. Suppose that $\mathfrak{M}_2$ has the DPRM property and that there are p.e. interpretations $\theta_1:\mathfrak{M}_1\dasharrow \mathfrak{M}_2$ and $\theta_2:\mathfrak{M}_2\dasharrow \mathfrak{M}_1$ of ranks $r_1,r_2$ respectively. Let $\zeta=\theta_2\bullet \theta_1:\mathfrak{M}_1\dasharrow\mathfrak{M}_1$ and let $r=r_1r_2$. Then the following are equivalent: \begin{itemize} \item[(i)] The pair $(\theta_1,\theta_2)$ defines a p.e. bi-interpretation between $\mathfrak{M}_1$ and $\mathfrak{M}_2$. \item[(ii)] $\Gamma(\zeta)\subseteq M_1^{r+1}$ is p.e. $\mathscr{L}_1$-definable over $\mathfrak{M}_1$. \item[(iii)] $\mathfrak{M}_1$ has the DPRM property. \end{itemize} \end{theorem} \begin{proof} (i) implies (ii) by definition of p.e. bi-interpretation. Let us show that (ii) implies (iii). Let $\rho$ be a listable presentation for $\mathfrak{M}_2$.
By Proposition \ref{PropIntimplList} applied to $\theta_2:\mathfrak{M}_2\dasharrow\mathfrak{M}_1$, there is a listable presentation $\gamma$ for $\mathfrak{M}_1$ such that for any $S\subseteq M_1^k$ we have that $S$ is $\gamma$-listable if and only if $\theta_2^*(S)$ is $\rho$-listable. Let $S\subseteq M_1^k$ be a $\gamma$-listable set. Then $\theta_2^*(S)$ is $\rho$-listable. Since $\mathfrak{M}_2$ is uniquely listable and it has the DPRM property, $\theta_2^*(S)$ is p.e. $\mathscr{L}_2$-definable over $\mathfrak{M}_2$. Hence, the set $\zeta^*(S)=\theta_1^*(\theta_2^*(S))\subseteq M_1^{rk}$ is p.e. $\mathscr{L}_1$-definable over $\mathfrak{M}_1$. Since $\zeta:\mathrm{dom}(\zeta)\to M_1$ is surjective, we have $S=\zeta^{(k)}(\zeta^*(S))\subseteq M_1^k$. By (ii), the function $\zeta^{(k)}:\mathrm{dom}(\zeta)^k\to M_1^k$ is p.e. $\mathscr{L}_1$-definable. Therefore, the set $$ S=\{{\bf y}\in M_1^k : \exists {\bf x}\in \zeta^*(S)\mbox{ such that } \zeta^{(k)}({\bf x})={\bf y}\} $$ is p.e. $\mathscr{L}_1$-definable over $\mathfrak{M}_1$. This proves (iii). Finally, (iii) implies (i) by Lemma \ref{LemmaSelf}. \end{proof} \begin{corollary}\label{CoroDPRM} Let $\mathfrak{M}$ be a uniquely listable $\mathscr{L}$-structure with domain $M$ and let $\rho$ be a listable presentation for it. Suppose that we have a p.e. interpretation $\theta:\mathfrak{M}\dasharrow \mathbb{N}$, where $\mathbb{N}$ is regarded as an $\mathscr{L}_a$-structure. The following are equivalent: \begin{itemize} \item[(i)] $\mathfrak{M}$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \item[(ii)] The function $\rho\circ \theta:\mathrm{dom}(\theta)\to M$ is p.e. $\mathscr{L}$-definable. \item[(iii)] $\mathfrak{M}$ has the DPRM property. \end{itemize} \end{corollary} \begin{proof} Recall that the $\mathscr{L}_a$-structure $\mathbb{N}$ is uniquely listable (cf. Proposition \ref{PropExN}) and it has the DPRM property.
Theorem \ref{ThmTransferDPRM}, applied to any p.e. bi-interpretation between $\mathfrak{M}$ and the $\mathscr{L}_a$-structure $\mathbb{N}$, shows that (i) implies (iii). By Lemma \ref{LemmaListtoInt}, $\rho$ defines a p.e. interpretation of $\mathfrak{M}$ in the $\mathscr{L}_a$-structure $\mathbb{N}$. Applying Theorem \ref{ThmTransferDPRM} with $\mathscr{L}_1=\mathscr{L}$, $\mathscr{L}_2=\mathscr{L}_a$, $\mathfrak{M}_1=\mathfrak{M}$, $\mathfrak{M}_2=\mathbb{N}$, $\theta_1=\theta$, and $\theta_2$ the p.e. interpretation defined by $\rho$, we see that (ii) is equivalent to (iii), and that they imply (i). \end{proof} The equivalence between (ii) and (iii) in Corollary \ref{CoroDPRM} has been previously used in the literature to transfer the DPRM property from the semi-ring $\mathbb{N}$ to recursive rings and fields; see for instance \cite{DenefZT,ZahidiOKt} and, more generally, the references in Section \ref{SecDPRMknown}. We also refer the reader to Demeyer's thesis \cite{DemeyerThesis} where the equivalence between (ii) and (iii) in Corollary \ref{CoroDPRM} is discussed in the context of recursively presented rings. In this special form, the strategy originated in Denef's work \cite{DenefZT} where he transferred the DPRM property from $\mathbb{N}$ to $\mathbb{Z}[T]$ by implicitly using the fact that (ii) implies (iii) in Corollary \ref{CoroDPRM}. A recursive presentation for a ring satisfying (ii) of Corollary \ref{CoroDPRM} is often referred to as a \emph{Diophantine enumeration}, but such presentations are difficult to obtain in general. We will use the more flexible criterion given by Theorem \ref{ThmTransferDPRM} in the examples of Section \ref{SecExamplesDPRM}. Nevertheless, Corollary \ref{CoroDPRM} allows us to give a characterization of uniquely listable structures with infinite domain that have the DPRM property, under the mild assumption that $\ne$ is totally listable.
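Before stating this characterization, it may help to see condition (ii) of Corollary \ref{CoroDPRM} in its simplest instance. The following sketch is a toy illustration of our own (the presentation $\rho$, the brute-force four-squares search, and all function names are choices made here, not constructions from the cited literature): for a standard bijection $\rho:\mathbb{N}\to\mathbb{Z}$, the graph of $\rho$ is cut out by an existential condition over $\mathbb{Z}$, with nonnegativity expressed through Lagrange's four-square theorem.

```python
from itertools import product

def rho(n):
    """A standard bijective presentation rho: N -> Z (0, -1, 1, -2, 2, ...)."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def four_squares(m):
    """Brute-force Lagrange witness m = a^2+b^2+c^2+d^2 for an integer m >= 0."""
    bound = int(m ** 0.5)  # each coordinate of a witness is at most sqrt(m)
    for a, b, c, d in product(range(bound + 1), repeat=4):
        if a * a + b * b + c * c + d * d == m:
            return (a, b, c, d)
    return None  # unreachable for m >= 0, by Lagrange's theorem

def graph_condition(n, z):
    """Existential description of z = rho(n) over Z:
    either z >= 0 (four-squares witness) and n = 2z,
    or     -z-1 >= 0 (four-squares witness) and n = -2z-1."""
    w_pos = four_squares(z) if z >= 0 else None
    w_neg = four_squares(-z - 1) if z <= -1 else None
    return (w_pos is not None and n == 2 * z) or \
           (w_neg is not None and n == -2 * z - 1)

# rho is a bijection on an initial segment, and its graph matches the
# existential condition exactly.
assert sorted(rho(n) for n in range(21)) == list(range(-10, 11))
assert all(graph_condition(n, rho(n)) for n in range(40))
assert not any(graph_condition(n, z)
               for n in range(10) for z in range(-6, 7) if z != rho(n))
```

Only the shape of the verification matters here: a recursive presentation whose graph admits a Diophantine description, that is, a Diophantine enumeration in the sense just discussed.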
\begin{theorem}[Characterization of the DPRM property for infinite uniquely listable structures] \label{ThmCharDPRM} Let $\mathfrak{M}$ be a uniquely listable $\mathscr{L}$-structure with infinite domain, and with the property that $\ne$ is totally listable over $\mathfrak{M}$ (this is the case, for instance, if $\ne$ is p.e. $\mathscr{L}$-definable over $\mathfrak{M}$). Then $\mathfrak{M}$ has the DPRM property if and only if it is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{theorem} \begin{proof} If there is a p.e. bi-interpretation, then in particular there is a p.e. interpretation $\theta:\mathfrak{M}\dasharrow \mathbb{N}$ and we can apply Corollary \ref{CoroDPRM} to conclude that $\mathfrak{M}$ has the DPRM property. Conversely, assume that $\mathfrak{M}$ has the DPRM property. By Corollary \ref{CoroDPRM}, it suffices to construct a p.e. interpretation $\theta:\mathfrak{M}\dasharrow \mathbb{N}$ with $\mathbb{N}$ seen as an $\mathscr{L}_a$-structure. Since $\mathfrak{M}$ has infinite domain $M$ and $\ne$ is totally listable, Corollary \ref{CoroBij} implies that there is a bijective listable presentation $\rho:\mathbb{N}\to M$. Let $\theta=\rho^{-1}:M\to \mathbb{N}$. For $s\in \mathscr{L}_a$ we have that $s^\mathbb{N}$ is listable, hence $\theta^*(s^\mathbb{N})$ is $\rho$-listable because $\rho^*(\theta^*(s^\mathbb{N}))=s^\mathbb{N}$. By the DPRM property on $\mathfrak{M}$ we get that $ \theta^*(s^\mathbb{N})$ is p.e. $\mathscr{L}$-definable over $\mathfrak{M}$, which implies that $\theta:\mathfrak{M}\dasharrow \mathbb{N}$ is the required p.e. interpretation. \end{proof} We remark that Theorem \ref{ThmCharDPRM} applies in the setting of Theorem \ref{ThmCatDPRM}. \subsection{Examples}\label{SecExamplesDPRM} All the structures considered in this section are uniquely listable, thanks to the results in Section \ref{SecULexamples}. 
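Several of the examples below hinge on exhibiting a set as $\gamma$-listable for some listable presentation $\gamma$. As a concrete and entirely generic illustration, the following sketch implements one total recursive surjection $\mathbb{N}\to\mathbb{Q}$ by a diagonal walk. This enumeration is our own choice for illustration only; in particular it is not the presentation $\tau$ of Proposition \ref{PropULQ}, so the preimage of $\mathbb{Z}$ it produces differs from the set $\{2^n : n\ge 0\}\cup\{2^n-1 : n\ge 0\}$ computed below for $\tau$.

```python
from fractions import Fraction

def gamma(n):
    """A total recursive surjection N -> Q: the parity of n gives the sign,
    and n // 2 is decoded by the inverse Cantor pairing into a numerator p
    and a denominator q = d - p + 1 on the diagonal d."""
    sign = 1 if n % 2 == 0 else -1
    m = n // 2
    d = 0
    while (d + 1) * (d + 2) // 2 <= m:   # find the diagonal containing m
        d += 1
    p = m - d * (d + 1) // 2             # position on that diagonal
    q = d - p + 1
    return Fraction(sign * p, q)

# Surjectivity check on a small box: every a/b with |a| <= 3, 1 <= b <= 3
# occurs among the first few hundred values (gamma is far from injective,
# which a listable presentation is allowed to be).
values = {gamma(n) for n in range(400)}
box = {Fraction(a, b) for a in range(-3, 4) for b in range(1, 4)}
assert box <= values

# The preimage of Z under gamma is a decidable, in particular listable,
# subset of N -- the shape of fact used in the proof of Proposition PropZQ.
preimage_of_Z = [n for n in range(30) if gamma(n).denominator == 1]
assert 0 in preimage_of_Z
```

By unique listability (Corollary \ref{CoroQulist}), whether a subset of $\mathbb{Q}$ is listable does not depend on which such presentation one fixes.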
If $\mathfrak{M}$ is an $\mathscr{L}$-structure with domain $M$ and $\mathscr{S}\subseteq M$, we let $\mathscr{L}\cup\mathscr{S}$ be the language obtained by expanding $\mathscr{L}$ with constant symbols corresponding to the elements of $\mathscr{S}$ and interpreted in $\mathfrak{M}$ accordingly. As a warm-up, we have: \begin{lemma}\label{LemmaZDPRM} The $\mathscr{L}_a$-structure $\mathbb{Z}$ has the DPRM property. \end{lemma} \begin{proof} By Lemma \ref{LemmaIntNZ} and the equivalence of (i) and (iii) in Corollary \ref{CoroDPRM}. \end{proof} The next three examples are considered folklore results (except for the assertions about p.e. bi-interpretability), although the author is not aware of any reference for their proofs. Let us recall that for a ring $A$, a subset $S\subseteq A^r$ is \emph{Diophantine} if it is p.e. $\mathscr{L}_a$-definable over $A$ with parameters. \begin{proposition}\label{PropZQ} The following are equivalent: \begin{itemize} \item[(i)] $\mathbb{Z}$ is Diophantine in $\mathbb{Q}$. \item[(ii)] The $\mathscr{L}_a$-structure $\mathbb{Q}$ has the DPRM property. \item[(iii)] The $\mathscr{L}_a$-structure $\mathbb{Q}$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{itemize} \end{proposition} \begin{proof} Let us consider $\mathbb{Q}$ as an $\mathscr{L}_a$-structure. It is infinite and it is uniquely listable by Corollary \ref{CoroQulist}. The relation $\ne$ is p.e. $\mathscr{L}_a$-definable over $\mathbb{Q}$, hence totally listable (cf. Corollary \ref{CoroTotList}). Thus, (ii) and (iii) are equivalent by Theorem \ref{ThmCharDPRM}. Assume (i). Then the identity map $\theta_1:\mathbb{Z}\to \mathbb{Z}$ defines a p.e. interpretation $\theta_1:\mathbb{Q}\dasharrow \mathbb{Z}$ as $\mathscr{L}_a$-structures. We take $\theta_2:\mathbb{Z}\times (\mathbb{Z}-\{0\})\to \mathbb{Q}$ given by $(a,b)\mapsto a/b$, which defines a p.e. interpretation $\theta_2 : \mathbb{Z}\dasharrow \mathbb{Q}$; in fact, this is the p.e.
interpretation $\kappa:\mathbb{Z}\dasharrow \mathbb{Q}$ from Lemma \ref{LemmaQinNZ}. For $\zeta=\theta_2\bullet\theta_1:\mathbb{Q}\dasharrow \mathbb{Q}$ we have $\mathrm{dom}(\zeta)=\mathbb{Z}\times (\mathbb{Z}-\{0\})\subseteq \mathbb{Q}^2$ and $$ \Gamma(\zeta)=\{(x_0,x_1,x_2)\in \mathbb{Q}^3 : x_1\in \mathbb{Z}, x_2\in \mathbb{Z}-\{0\}, x_0x_2=x_1\}\subseteq \mathbb{Q}^3 $$ which is p.e. $\mathscr{L}_a$-definable over $\mathbb{Q}$ by (i). Hence, by Theorem \ref{ThmTransferDPRM} and Lemma \ref{LemmaZDPRM} we get (ii). Finally, assume (ii). To obtain (i) it suffices to show that $\mathbb{Z}\subseteq \mathbb{Q}$ is $\gamma$-listable for some listable presentation $\gamma$ of $\mathbb{Q}$, because $\mathbb{Q}$ is uniquely listable. Directly doing this in detail without invoking Church's thesis can be rather messy if the listable presentation is not chosen carefully, but choosing the listable presentation $\tau:\mathbb{N}\to \mathbb{Q}$ provided by Proposition \ref{PropULQ} one can check that $\tau^{-1}(\mathbb{Z})=\{2^n: n\ge 0\}\cup \{2^n-1: n\ge 0\}$, which is listable in $\mathbb{N}$. Alternatively, consider again the p.e. interpretation $\kappa : \mathbb{Z}\dasharrow \mathbb{Q}$ from Lemma \ref{LemmaQinNZ}. Then $\kappa^*(\mathbb{Z})=\{(a,b)\in \mathbb{Z}^2 : b\ne 0\mbox{ and }b|a\}$ is p.e. $\mathscr{L}_a$-definable over $\mathbb{Z}$, hence totally listable (cf. Corollary \ref{CoroTotList}). Then $\mathbb{Z}$ is $\gamma$-listable in $\mathbb{Q}$ for some listable presentation $\gamma$ by Proposition \ref{PropIntimplList}. \end{proof} We stress the fact that, although $\mathbb{N}$ and $\mathbb{Q}$ are known to be bi-interpretable as $\mathscr{L}_a$-structures thanks to results of J. Robinson \cite{JRobinsonQ}, it is not known whether they are p.e. bi-interpretable. \begin{proposition}\label{PropOK} Let $K$ be a number field, let $O_K$ be its ring of integers, and let $\mathscr{G}\subseteq O_K$ be a finite set of ring generators for $O_K$. 
Let us consider $O_K$ as a structure over $\mathscr{L}=\mathscr{L}_a\cup\mathscr{G}$. Then $O_K$ is uniquely listable and the following are equivalent: \begin{itemize} \item[(i)] $\mathbb{Z}$ is Diophantine in $O_K$. \item[(ii)] The $\mathscr{L}$-structure $O_K$ has the DPRM property. \item[(iii)] The $\mathscr{L}$-structure $O_K$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{itemize} \end{proposition} \begin{proof} We consider $\mathbb{Z}$ as an $\mathscr{L}_a$-structure and $O_K$ as an $\mathscr{L}$-structure. The ring $O_K$ is infinite and it is uniquely listable by Corollary \ref{CoroULfg}. The relation $\ne$ is p.e. $\mathscr{L}$-definable in $O_K$ ---this is elementary; see for instance paragraph 1.2.1 in \cite{MoretBailly}. Hence, (ii) and (iii) are equivalent by Theorem \ref{ThmCharDPRM}. Let $r=[K:\mathbb{Q}]$ and let $\beta_1, ...,\beta_r$ be an integral basis for $O_K$ with $\beta_1=1$. The elements $\beta_j$ are p.e. $\mathscr{L}$-definable since $\mathscr{G}$ is a set of ring generators for $O_K$. Let $\kappa: \mathbb{Z}^r\to O_K$ be the map $\kappa(x_1,...,x_r)=x_1\beta_1+...+x_r\beta_r$. Thus, $\kappa$ defines a p.e. interpretation $\kappa: \mathbb{Z}\dasharrow O_K$. Assume (i). Then the identity map $\theta_1:\mathbb{Z}\to \mathbb{Z}$ defines a p.e. interpretation $\theta_1:O_K\dasharrow \mathbb{Z}$. Let us take $\theta_2=\kappa : \mathbb{Z}\dasharrow O_K$. The composed interpretation $\zeta=\theta_2\bullet \theta_1:O_K\dasharrow O_K$ has $\mathrm{dom}(\zeta)=\mathbb{Z}^r\subseteq O_K^r$ and $ \Gamma(\zeta)=\{(x_0,x_1,...,x_r)\in O_K^{r+1} : x_1,...,x_r\in \mathbb{Z}\mbox{ and }x_0=x_1\beta_1+...+x_r\beta_r\} $ which is p.e. $\mathscr{L}$-definable by (i). We obtain (ii) by Theorem \ref{ThmTransferDPRM} and Lemma \ref{LemmaZDPRM}. Finally, let us assume (ii). To get (i) it suffices to show that $\mathbb{Z}\subseteq O_K$ is $\gamma$-listable for some listable presentation $\gamma$ of $O_K$ (since $O_K$ is uniquely listable).
The p.e. interpretation $\kappa: \mathbb{Z}\dasharrow O_K$ constructed above satisfies $\kappa^{*}(\mathbb{Z})=\{(n,0,...,0): n\in \mathbb{Z}\}\subseteq \mathbb{Z}^r$ because $\beta_1=1$ and the $\beta_j$ form an integral basis. Thus, $\kappa^{*}(\mathbb{Z})$ is totally listable over $\mathbb{Z}$ (it is p.e. $\mathscr{L}_a$-definable), hence $\mathbb{Z}$ is $\gamma$-listable over $O_K$ for some listable presentation $\gamma$ afforded by Proposition \ref{PropIntimplList}. \end{proof} At this point we recall that it is a conjecture of Denef and Lipshitz \cite{DL78} that $\mathbb{Z}$ is Diophantine in $O_K$ for every number field $K$. The general case remains open, although it is known that this would follow from standard conjectures on elliptic curves \cite{MazurRubin, MurtyPasten}. The available unconditional results are proved in \cite{DL78,Denef80, Pheidas88, Shlapentokh89, Videla89, ShaShl89} and most recently in \cite{MRnew} and \cite{GFP}. All recent progress on this problem was possible thanks to the elliptic curve criteria from \cite{PoonenEll, CorPheZah, ShlapentokhEll}. Regarding function fields, we have: \begin{proposition}\label{Propkt} Let $k$ be a finite field, let $t$ be a transcendental element, and let $\mathscr{G}$ be a (finite) set of ring generators for $k$. Let us consider $k[t]$ and $k(t)$ as $\mathscr{L}$-structures, where $\mathscr{L}=\mathscr{L}_a\cup \mathscr{G} \cup \{t\}$. The following are equivalent: \begin{itemize} \item[(i)] $k[t]$ is Diophantine in $k(t)$. \item[(ii)] The $\mathscr{L}$-structure $k(t)$ has the DPRM property. \item[(iii)] The $\mathscr{L}$-structures $k[t]$ and $k(t)$ are p.e. bi-interpretable. \item[(iv)] The $\mathscr{L}$-structure $k(t)$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{itemize} \end{proposition} \begin{proof} The ring $k[t]$ is infinite, uniquely listable by Corollary \ref{CoroULfg}, and the relation $\ne$ is p.e.
$\mathscr{L}$-definable (standard fact using two primes of $k[t]$; see Lemme 3.2 in \cite{MoretBailly} for a generalization). Furthermore, $k[t]$ has the DPRM property by a result of Demeyer \cite{DemeyerInv}. Thus $k[t]$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$ by Theorem \ref{ThmCharDPRM}. It follows that (iii) and (iv) are equivalent by Lemma \ref{LemmaBiIntTransitive}. The field $k(t)$ is infinite, uniquely listable by Corollary \ref{CoroULfg}, and the relation $\ne$ is p.e. $\mathscr{L}$-definable in $k(t)$. Hence, (ii) and (iv) are equivalent by Theorem \ref{ThmCharDPRM}. The equivalence of (i) and (ii) is shown as in the case of $\mathbb{Z}$ and $\mathbb{Q}$ (cf. Proposition \ref{PropZQ}) using the p.e. interpretation $\kappa: k[t]\dasharrow k(t)$ given by $\kappa: k[t]\times (k[t]-\{0\})\to k(t)$ with $(f,g)\mapsto f/g$. Namely, assuming (i), the identity map $\theta_1: k[t]\to k[t]$ defines a p.e. interpretation $\theta_1:k(t)\dasharrow k[t]$ and we can take $\theta_2=\kappa$ in order to apply Theorem \ref{ThmTransferDPRM} together with Demeyer's theorem \cite{DemeyerInv}. This allows us to transfer the DPRM property from $k[t]$ to $k(t)$, obtaining (ii). Conversely, assume (ii). Then $\kappa^*(k[t])=\{(f,g)\in k[t]^2 : g\ne 0\mbox{ and }g|f\}$ is p.e. $\mathscr{L}$-definable over $k[t]$, hence totally listable. Proposition \ref{PropIntimplList} implies that $k[t]$ is $\gamma$-listable in $k(t)$ for some listable presentation $\gamma$, hence $k[t]$ is a totally listable subset of $k(t)$ because $k(t)$ is uniquely listable. Therefore, (ii) implies (i). \end{proof} Similarly, these results can be extended to the case of $S$-integers and global fields, not just $\mathbb{Q}$ and $k(t)$. We leave the details to the reader. In a similar fashion, other transference results can be obtained. 
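The description of $\kappa^*(k[t])$ by the condition ``$g\ne 0$ and $g\mid f$'' in the proof above can be tested directly. The sketch below is a minimal implementation, entirely our own, of arithmetic in $\mathbb{F}_p[t]$ (with $p=5$ chosen arbitrarily); it checks on sample pairs $(f,g)$ that the rational function $f/g$ lies in $k[t]$ exactly when $g$ is nonzero and divides $f$.

```python
P = 5  # an arbitrary prime; polynomials over F_P are stored as coefficient
       # lists in increasing degree, e.g. [2, 0, 1] represents 2 + t^2

def trim(f):
    """Normalize coefficients mod P and drop trailing zeros."""
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def mul(f, g):
    """Product in F_P[t]."""
    f, g = trim(f), trim(g)
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return trim(out)

def divmod_poly(f, g):
    """Long division f = q*g + r in F_P[t]; g must be nonzero."""
    f, g = trim(f), trim(g)
    assert g, "division by the zero polynomial"
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv_lead = pow(g[-1], P - 2, P)      # Fermat inverse of the leading coeff
    r = f[:]
    while len(r) >= len(g):
        shift = len(r) - len(g)
        c = (r[-1] * inv_lead) % P
        q[shift] = c
        for i, b in enumerate(g):
            r[shift + i] = (r[shift + i] - c * b) % P
        r = trim(r)
    return trim(q), r

def in_kt(f, g):
    """The condition cutting out kappa^*(k[t]): g nonzero and g | f,
    i.e. the rational function f/g is actually a polynomial."""
    g = trim(g)
    return bool(g) and divmod_poly(f, g)[1] == []

# (t + 1) divides (t + 1)(t^2 + 2) in F_5[t], while (1 + 2t) does not.
f = mul([1, 1], [2, 0, 1])
assert in_kt(f, [1, 1]) and not in_kt(f, [1, 2]) and not in_kt(f, [])
```

The code only confirms the elementary identity behind the argument; the point in the proof is that the divisibility condition is existential by definition, since $g\mid f$ means $\exists h\,(f=gh)$.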
For instance, Demeyer \cite{DemeyerPoly} proved that $\mathbb{Q}[t]$ has the DPRM property, seen as a structure over $\mathscr{L}_t=\mathscr{L}_a\cup\{t\}$. Using this and the same methods as in the previous three results, one can show the following: \begin{proposition} Consider $\mathbb{Q}[t]$ and $\mathbb{Q}(t)$ as structures over $\mathscr{L}_t$. The following are equivalent: \begin{itemize} \item[(i)] $\mathbb{Q}[t]$ is Diophantine in $\mathbb{Q}(t)$. \item[(ii)] The $\mathscr{L}_t$-structure $\mathbb{Q}(t)$ has the DPRM property. \item[(iii)] The $\mathscr{L}_t$-structures $\mathbb{Q}[t]$ and $\mathbb{Q}(t)$ are p.e. bi-interpretable. \item[(iv)] The $\mathscr{L}_t$-structure $\mathbb{Q}(t)$ is p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{itemize} \end{proposition} \section{Diophantine sets of global fields and related problems}\label{SecConjectures} \subsection{Varieties and Diophantine sets} In this article, a variety over a field $k$ is a reduced separated scheme of finite type over $k$. In particular, we do not require irreducibility. Recall that a set $S\subseteq k^n$ is Diophantine over $k$ if it is p.e. $\mathscr{L}_a$-definable over $k$ with parameters from $k$. It easily follows from the definitions that $S$ is Diophantine over $k$ if and only if there is an affine variety $X$ over $k$ and a morphism $f:X\to \mathbb{A}^n_k$ defined over $k$ such that $f(X(k))=S$. \subsection{Mazur's conjecture} \label{SecMazur} For later reference, let us recall some conjectures on the topology of rational points formulated by Mazur \cite{MazurConj1, MazurConj2}, as well as a variation proposed by Colliot-Th\'el\`ene, Skorobogatov, and Swinnerton-Dyer \cite{CTSSD}. The following intriguing conjecture is due to Mazur \cite{MazurConj1, MazurConj2}. \begin{conjecture}[Mazur's conjecture]\label{ConjMazur} Let $X$ be a variety over $\mathbb{Q}$. The topological closure of $X(\mathbb{Q})$ in $X(\mathbb{R})$ has finitely many connected components.
\end{conjecture} This conjecture was initially stated for smooth varieties, but the previous version is easily reduced to the smooth case by taking $X_1$ as the smooth locus of $X$, and then $X_2$ as the smooth locus of $X-X_1$, etc. which is a finite process because $X$ is of finite type over $\mathbb{Q}$. As remarked by Mazur, this conjecture implies at once that $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. By Proposition \ref{PropZQ}, it would also follow that $\mathbb{Q}$ does not have the DPRM property and that $\mathbb{Q}$ and $\mathbb{N}$ are not p.e. bi-interpretable as $\mathscr{L}_a$-structures. Actually, a first version of Mazur's conjecture proposed in \cite{MazurConj1} asserted that the topological closure of $X(\mathbb{Q})$ in $X(\mathbb{R})$ precisely consisted of some of the connected components of $X(\mathbb{R})$, but this was disproved in \cite{CTSSD}. Nevertheless, the following version of Mazur's conjecture proposed as Conjecture 4 in \cite{CTSSD} still seems plausible. \begin{conjecture}[Strong version of Mazur's conjecture]\label{ConjStrongMazur} Let $X$ be a smooth irreducible variety over $\mathbb{Q}$ and let $U$ be a connected component of $X(\mathbb{R})$. Let $W$ be the topological closure of $X(\mathbb{Q})\cap U$ in $U$. Then there is a Zariski closed set $Y\subseteq X$ defined over $\mathbb{Q}$ such that $W$ is a finite union of connected components of $Y(\mathbb{R})$. \end{conjecture} The following observation is implicit in \cite{CTSSD} and in fact it motivates the previous conjecture. \begin{lemma}\label{LemmaConjMazur} The strong version of Mazur's conjecture (Conjecture \ref{ConjStrongMazur}) implies Mazur's conjecture (Conjecture \ref{ConjMazur}). \end{lemma} \begin{proof} The set of real points of an affine variety defined over $\mathbb{Q}$ forms a semi-algebraic set, and it is a standard result that semi-algebraic sets over $\mathbb{R}$ have finitely many connected components. 
\end{proof} Let $L$ be a real-closed field and let $F\subseteq L$ be an ordered field. A semi-algebraic set $U\subseteq L^n$ is said to be \emph{defined over $F$} if there is a first order formula $\Phi[x_1,...,x_n]$ over the language $\mathscr{L}_{\le}=\{0,1,+,\cdot, \le, =\}$ with parameters from $F$ such that $U$ is the interpretation of $\Phi$ over $L$. With this terminology, here is yet another variant of Mazur's conjecture which is implicitly suggested in \cite{CTSSD}, and explicitly formulated at the end of Section 2 in \cite{CornelissenZahidi}. Here, for a variety $X$ over $\mathbb{R}$, a semi-algebraic set of $X(\mathbb{R})$ is defined as a set which is semi-algebraic on each affine chart of an affine open cover of $X$. \begin{conjecture}[Semi-algebraic version of Mazur's conjecture]\label{ConjSemiAlg} Let $X$ be a variety over $\mathbb{Q}$. The topological closure of $X(\mathbb{Q})$ in $X(\mathbb{R})$ is a semi-algebraic set defined over $\mathbb{Q}$. \end{conjecture} It turns out that this last conjecture follows from Conjecture \ref{ConjStrongMazur} by the same argument as in Lemma \ref{LemmaConjMazur}, just keeping track of the field of definition. \begin{lemma} The strong version of Mazur's conjecture (Conjecture \ref{ConjStrongMazur}) implies the semi-algebraic version of Mazur's conjecture (Conjecture \ref{ConjSemiAlg}). \end{lemma} \begin{proof} Let $Y$ be an affine algebraic variety defined over $\mathbb{Q}$ in the affine space $\mathbb{A}^n_\mathbb{Q}$. Let $C$ be a connected component of $Y(\mathbb{R})$. It is a standard result that $C$ is semi-algebraic over $\mathbb{R}$ (cf. Theorem 5.22 in \cite{AlgRA}). We claim that the semi-algebraic set $C$ is defined over $\mathbb{Q}$ (this is well-known but we were not able to find an explicit reference for this particular fact). Let $F\subseteq \mathbb{R}$ be the field of real algebraic numbers. 
Then $F$ is real-closed, and by Proposition 5.24 on p. 170 of \cite{AlgRA} there is a semi-algebraic connected component $C'$ of $Y(F)$ defined over $F$ such that for any $\mathscr{L}_{\le}$-formula $\Phi[x_1,...,x_n]$ with parameters from $F$ which defines $C'$ over $F$, the interpretation of $\Phi$ over $\mathbb{R}$ is $C$. Since $F$ is the real closure of $\mathbb{Q}$, Proposition 2.82 on p. 71 of \cite{AlgRA} shows that there is an $\mathscr{L}_{\le}$-formula $\Psi$ with parameters from $\mathbb{Q}$ (thus, $\Psi$ can be taken without parameters) such that the interpretation of $\Psi$ over $F$ is $C'$. Hence, the interpretation of $\Psi$ over $\mathbb{R}$ is $C$. This proves that the connected components of $Y(\mathbb{R})$ are semi-algebraic sets defined over $\mathbb{Q}$. Let $X$ be a smooth affine algebraic variety over $\mathbb{Q}$ in affine space $\mathbb{A}^n_\mathbb{Q}$; this case of Conjecture \ref{ConjSemiAlg} implies the general case by taking a suitable stratification of the variety under consideration (cf. the discussion after Conjecture \ref{ConjMazur}) and then taking affine coverings. Assuming Conjecture \ref{ConjStrongMazur} we get that the topological closure of $X(\mathbb{Q})$ in $X(\mathbb{R})\subseteq \mathbb{R}^n$ is the union of finitely many connected components of real algebraic sets $Y(\mathbb{R})$ for certain varieties $Y$ defined over $\mathbb{Q}$. Thus, by the previous claim, the closure of $X(\mathbb{Q})$ in $X(\mathbb{R})$ is a semi-algebraic set defined over $\mathbb{Q}$. \end{proof} Let us observe the following: \begin{proposition}\label{PropEndpoint} If the semi-algebraic version of Mazur's conjecture (Conjecture \ref{ConjSemiAlg}) holds, then for every Diophantine set $S\subseteq \mathbb{Q}$, we have that the topological closure of $S$ in $\mathbb{R}$ is a finite union of closed intervals whose endpoints are real algebraic or infinite. (Here, the singleton $\{x\}\subseteq \mathbb{R}$ is taken as the closed interval $[x,x]$.)
\end{proposition} \begin{proof} This is by the Tarski--Seidenberg theorem with coefficients in $\mathbb{Q}$. See Theorem 2.77 on p. 69 of \cite{AlgRA} for a general version with coefficients in an ordered field contained in a real closed field. \end{proof} One may ask about extensions of Mazur's conjecture to other global fields and other places, not just the archimedean one. In the case of number fields, Mazur (cf. Question I in \cite{MazurConj2}) asked the following (see also \cite{PoonenShlapentokh}): \begin{question}[Mazur]\label{QuestionMazur} Let $K$ be a number field and let $v$ be a place of $K$. Let $X$ be an irreducible projective variety over $K$. For each local point $x\in X(K_v)$, let $Z_x\subseteq X$ be the intersection of all Zariski closed sets $Y\subseteq X$ that contain $X(K)\cap U$ for some $v$-neighborhood $U$ of $x$. Is the collection $\{Z_x : x\in X(K_v) \}$ finite? \end{question} Let us remark the following: \begin{lemma}\label{LemmaApplyMazur} Let $K$ be a number field, let $v$ be a place of $K$, and let $S\subseteq K$ be Diophantine over $K$. Assume either of the following: \begin{itemize} \item[(i)] $K=\mathbb{Q}$, $v=\infty$ is the archimedean place, and Mazur's Conjecture \ref{ConjMazur} holds; or \item[(ii)] Mazur's Question \ref{QuestionMazur} has a positive answer for the number field $K$ and the place $v$. \end{itemize} Then $S$ can have at most finitely many $v$-adically isolated points. \end{lemma} \begin{proof} Suppose that $S$ has infinitely many $v$-adically isolated points. Let $X$ be an affine variety defined over $K$ and let $f:X\to \mathbb{A}^1_K$ be a morphism defined over $K$ such that $S=f(X(K))$. Passing to a Diophantine subset of $S$ with infinitely many $v$-adically isolated points, we may assume that $X$ is irreducible. Let $z_1,z_2,...$ be an infinite sequence of points in $S$ that are $v$-adically isolated.
Then the fibres $Y_j=f^{-1}(z_j)$ are infinitely many pairwise disjoint Zariski-closed subsets of $X$ defined over $K$ with $Y_j(K)$ non-empty. If $K=\mathbb{Q}$ and $v=\infty$, then the sets $Y_j(\mathbb{R})$ for $j=1,2,...$ contain infinitely many connected components of the topological closure of $X(\mathbb{Q})$ in $X(\mathbb{R})$ because $f$ maps them to isolated points of $f(X(\mathbb{Q}))$. Hence (i) cannot hold. In the general case, choose $x_j\in Y_j(K)$, let $V_j$ be a $v$-adic neighborhood of $z_j$ in $K_v$ that separates $z_j$ from $S-\{z_j\}$, and let $U_j=f^{-1}(V_j)$, which is a $v$-adic neighborhood of $x_j$. Then $Y_j$ contains $X(K)\cap U_j$, so $Z_{x_j}$ is contained in $Y_j$ in the notation of Question \ref{QuestionMazur}. As the varieties $Y_j$ are pairwise disjoint and each $Z_{x_j}$ is non-empty (it contains $x_j$), the sets $Z_{x_j}$ are pairwise distinct, so (ii) cannot hold. \end{proof} Question \ref{QuestionMazur} is specific to number fields, and the analogue for global function fields is known to be false. Namely, the following result, essentially due to Pheidas \cite{PheidasInv}, produces $v$-adically discrete infinite sets that are Diophantine in the function field setting (the connection with Mazur's conjecture was pointed out by Cornelissen and Zahidi \cite{CornelissenZahidi}). \begin{theorem}\label{ThmDiscrete} Let $p>2$ be a prime. The sets $S_1 = \{t^{p^n} : n\ge 0\}$ and $$ S_2=\{b+t+t^p+t^{p^2}+...+t^{p^n} : n\ge 0\mbox{ and } b\in \mathbb{F}_p\} $$ are Diophantine in $K=\mathbb{F}_p(t)$. More precisely, let $U\subseteq \mathbb{A}^3_K$ be the curve defined over $\mathbb{F}_p(t)$ by $$ \begin{cases} x-t=y^p-y\\ x^{-1} -t^{-1} = z^p-z. \end{cases} $$ Projecting $U(K)$ onto the $x$-coordinate gives $S_1$, and projecting onto the $y$-coordinate gives $S_2$. \end{theorem} \begin{proof} This is mostly contained in the proof of Lemma 1 of \cite{PheidasInv}. The only missing point is that from \emph{loc. cit.} one only gets $S_2\subseteq \pi(U(K))$ rather than equality, where $\pi:U\to \mathbb{A}^1_K$ is the projection onto the $y$-coordinate.
Let $A:\mathbb{F}_p(t)\to \mathbb{F}_p(t)$ be the map $A(f)=f^p-f$. Then $A$ is an additive group morphism with kernel $\mathbb{F}_p$. Let $(u,v,w)\in U(K)$ and note that $u=t^{p^n}$ for some $n\ge 0$ (cf. \cite{PheidasInv}). Then $A(v)=t^{p^n}-t$ and we easily check that $f=t+t^p+t^{p^2}+...+t^{p^{n-1}}$ satisfies $$ A(f)=f^p-f=t^{p^n}-t. $$ Hence, all the possibilities for $v$ are $A^{-1}(t^{p^n}-t)=\{b+f : b\in \mathbb{F}_p\}$. \end{proof} \subsection{Left-Diophantine numbers} \label{SecLeft} For a real number $\alpha\in \mathbb{R}$, let $L(\alpha)=\{q\in \mathbb{Q} : q<\alpha\}$. We say that $\alpha\in \mathbb{R}$ is \emph{left-Diophantine} if $L(\alpha)$ is a Diophantine subset of $\mathbb{Q}$. (Naturally, there is a notion of right-Diophantine number but that leads to a similar analysis.) By Lagrange's $4$-squares theorem, we have the elementary fact that the relation $\le$ is Diophantine over $\mathbb{Q}$. This will be frequently used in the discussion below. For instance, we deduce: \begin{lemma}\label{LemmaSup} $\alpha\in \mathbb{R}$ is left-Diophantine if and only if there is a Diophantine subset $S\subseteq \mathbb{Q}$ such that $\alpha=\sup S$. \end{lemma} \begin{proof} If $L(\alpha)$ is Diophantine, then note that $\alpha=\sup S$ with $S=L(\alpha)$. For the converse, if $S$ is Diophantine and $\alpha=\sup S$, we observe that $L(\alpha)=\{q\in \mathbb{Q} : \exists a\in S, q<a\}$, so $L(\alpha)$ is Diophantine. \end{proof} Let $\mathscr{D}\subseteq \mathbb{R}$ be the collection of all left-Diophantine numbers and let $\mathscr{A}\subseteq \mathbb{R}$ be the set of real algebraic numbers. We have the following lower bound for $\mathscr{D}$. \begin{lemma} $\mathscr{A}\subseteq \mathscr{D}$. \end{lemma} \begin{proof} Let $\alpha\in \mathbb{R}$ be algebraic with minimal polynomial $p(x)\in \mathbb{Q}[x]$. The roots of $p(x)$ are simple, so the function $p:\mathbb{R}\to \mathbb{R}$ changes sign at $\alpha$. 
Up to multiplying $p(x)$ by $-1$, we may assume that there are $q_1,q_2\in \mathbb{Q}$ such that $q_1<\alpha<q_2$ and for all $q_1\le u\le q_2$ we have $p(u)>0$ if $u<\alpha$ and $p(u)<0$ if $u>\alpha$. Then the set $$ X=\{u\in \mathbb{Q} : p(u)>0 \mbox{ and } u<q_2\} $$ is Diophantine over $\mathbb{Q}$ and $\alpha=\sup X$. We conclude by Lemma \ref{LemmaSup}. \end{proof} Recall from Corollary \ref{CoroQulist} that the $\mathscr{L}_a$-structure $\mathbb{Q}$ is uniquely listable. Thus, there is a well-defined notion of listable subsets of $\mathbb{Q}^r$ for every $r\ge 1$. A real number $\alpha\in \mathbb{R}$ is called \emph{left-listable} if $L(\alpha)\subseteq \mathbb{Q}$ is listable. This notion is standard and it appears under various names in the literature, such as left-r.e. and left-c.e. (see for instance Chapter 5 in \cite{DowneyHirschfeld}), although the discussion on unique listability is omitted and replaced by the assumption that a \emph{recursive} presentation or ``effective coding'' for $\mathbb{Q}$ is fixed (see for instance Assumption 5.1.2 in \cite{DowneyHirschfeld}). The class $\Lambda$ of left-listable numbers is vast; for instance, $\Lambda$ contains all computable real numbers (those for which a Turing machine outputs the decimal expansion to any required precision) as well as more exotic real numbers such as Chaitin's constant. From Corollary \ref{CoroTotList} applied to the sets $L(\alpha)\subseteq \mathbb{Q}$ we deduce \begin{lemma} $\mathscr{D}\subseteq \Lambda$. \end{lemma} Thus, we know that $\mathscr{A}\subseteq \mathscr{D}\subseteq \Lambda$. The set $\Lambda$ is much larger than $\mathscr{A}$ and one can ask for a better description of $\mathscr{D}$. We expect the following: \begin{conjecture}[Algebraicity]\label{ConjDA} All left-Diophantine numbers are algebraic. Thus, $\mathscr{D}=\mathscr{A}$. \end{conjecture} In particular, this would imply \begin{conjecture}\label{ConjDA2} $\mathscr{D}$ is a field.
\end{conjecture} In the direction of Conjecture \ref{ConjDA}, we have: \begin{proposition} The algebraicity conjecture (Conjecture \ref{ConjDA}) follows from the semi-algebraic version of Mazur's conjecture (Conjecture \ref{ConjSemiAlg}). In particular, it follows from the strong version of Mazur's conjecture (Conjecture \ref{ConjStrongMazur}). \end{proposition} \begin{proof} This is by Proposition \ref{PropEndpoint}. \end{proof} In any case, the following much weaker conjecture seems plausible. \begin{conjecture}\label{ConjwDA} Not every left-listable number is left-Diophantine. That is, $\mathscr{D}\ne\Lambda$. \end{conjecture} The previous conjecture follows from the algebraicity conjecture (Conjecture \ref{ConjDA}). Furthermore: \begin{proposition} Conjecture \ref{ConjDA2} implies Conjecture \ref{ConjwDA}. That is, if $\mathscr{D}$ is a field, then $\mathscr{D}\ne \Lambda$. \end{proposition} \begin{proof} This is because $\Lambda$ is not even a ring; see \cite{Ambos} or Section 5.5 in \cite{DowneyHirschfeld}. \end{proof} It turns out that Conjecture \ref{ConjwDA} would be enough to show that $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. \begin{proposition} If $\mathscr{D}\ne \Lambda$, then we have the following: \begin{itemize} \item[(i)] $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. \item[(ii)] The field $\mathbb{Q}$ does not have the DPRM property. \item[(iii)] $\mathbb{Q}$ and $\mathbb{N}$ are not p.e. bi-interpretable as $\mathscr{L}_a$-structures. \end{itemize} \end{proposition} \begin{proof} If $\mathscr{D} \ne \Lambda$ then there is some $\alpha\in \mathbb{R}$ such that $L(\alpha)\subseteq \mathbb{Q}$ is listable but it is not Diophantine, which implies that $\mathbb{Q}$ does not have the DPRM property. We conclude by Proposition \ref{PropZQ}. \end{proof} \subsection{Unboundedness of the positive existential rank} \label{SecKollar} Let $\mathfrak{M}$ be an $\mathscr{L}$-structure with domain $M$. We say that $\mathfrak{M}$ has \emph{bounded p.e. 
rank} if there is a constant $B$ depending only on $\mathfrak{M}$ such that for every p.e. $\mathscr{L}$-definable set $S\subseteq M$ we have $\mathrm{rank}^{p.e.}_\mathfrak{M}(S)\le B$. Otherwise, we say that $\mathfrak{M}$ has \emph{unbounded p.e. rank}. (One can extend this notion to subsets of $M^r$, but the case $r=1$ is enough for our purposes.) For instance, Lemma \ref{LemmaCatBdd} shows that if $\mathfrak{M}$ has a p.e. $r$-catalogue for some $r\ge 1$, then $\mathfrak{M}$ has bounded p.e. rank. A well-known example: \begin{lemma} $\mathbb{N}$ as an $\mathscr{L}_a$-structure has bounded p.e. rank. In fact, it has p.e. $r$-catalogues for every $r\ge 1$. The same holds for the $\mathscr{L}_a$-structure $\mathbb{Z}$. \end{lemma} \begin{proof} In the case of $\mathbb{N}$ this follows from the DPRM theorem, Theorem \ref{ThmCatDPRM}, and Lemma \ref{LemmaCatBdd}. For $\mathbb{Z}$ it is the same argument, using Lemma \ref{LemmaZDPRM}. \end{proof} For global fields, the author expects the following: \begin{conjecture}\label{ConjBdd} Let $K$ be a global field and let $\mathscr{G}$ be a finite set of field generators. Consider $K$ as a structure over $\mathscr{L}=\mathscr{L}_a\cup\mathscr{G}$. Then $K$ has unbounded p.e. rank. In particular, it does not have p.e. $r$-catalogues for any $r\ge 1$. \end{conjecture} In support of this conjecture, we have the following: \begin{theorem}\label{ThmCt} Let us consider the field of rational functions $\mathbb{C}(t)$ as a structure over a language $\mathscr{L}$ expanding $\mathscr{L}_a\cup\{t\}$ by some constant symbols. Then $\mathbb{C}(t)$ has unbounded p.e. rank. In particular, it does not have p.e. $r$-catalogues for any $r\ge 1$. \end{theorem} For the proof, we need a consequence of Koll\'ar's work \cite{Kollar} (Theorem \ref{ThmKollar} stated below), which we state only over $\mathbb{C}(t)$ for simplicity, although an analogous result holds over function fields of complex projective curves. 
Given $n\ge 0$ let $\mathrm{Rat}_n$ be the set of all rational functions $\phi\in \mathbb{C}(t)$ of (topological) degree $n$ and note that $\mathbb{C}(t)=\cup_{n\ge 0} \mathrm{Rat}_n$. Writing such a $\phi$ as a fraction of polynomials and considering the coefficients of these polynomials (up to scaling, with the necessary non-vanishing conditions to ensure $\deg \phi=n$) we see that each $\mathrm{Rat}_n$ has a natural structure of quasi-projective variety. In particular, for a set $S\subseteq \mathbb{C}(t)$, it makes sense to consider varieties over $\mathbb{C}$ contained in $S$. \begin{theorem}[Koll\'ar]\label{ThmKollar} Let $K=\mathbb{C}(t)$. Let $X$ be a variety over $K$ and let $f$ be a regular function on $X$ defined over $K$. Let $S=f(X(K))\subseteq K$. Let $d\ge 0$ be an integer such that $S$ contains a constructible set of dimension $d$ over $\mathbb{C}$. Then at least one of the following holds: \begin{itemize} \item[(i)] $\dim_K(X)\ge d$ \item[(ii)] For all but finitely many $\alpha\in \mathbb{C}$, the set $S$ contains rational functions with poles at $\alpha$. \end{itemize} \end{theorem} \begin{proof} This follows from Theorem 4 in \cite{Kollar}. Indeed, (i) is implied by Theorem 4 (1) in \cite{Kollar} because $d$ is a lower bound for the Diophantine dimension of $S$ over $\mathbb{C}$ as defined there (as $S$ contains a constructible set of dimension $d$ over $\mathbb{C}$, it cannot be contained in a countable union of varieties of dimension smaller than $d$). On the other hand, each $\phi\in \mathbb{C}(t)$ of degree $n$ has an (effective) divisor of poles which in turn gives a point $\mathrm{Pole}(\phi)\in \mathrm{Symm}^n\mathbb{P}^1_\mathbb{C}(\mathbb{C})$ where $\mathrm{Symm}^n\mathbb{P}^1_\mathbb{C}(\mathbb{C})$ is the quotient of $\mathbb{P}^1_\mathbb{C}(\mathbb{C})^n$ by the action of the symmetric group in $n$ letters. 
Let $\mathrm{Pole}_n(S)=\{\mathrm{Pole}(\phi) : \phi\in S\cap \mathrm{Rat}_n\}\subseteq \mathrm{Symm}^n\mathbb{P}^1_\mathbb{C}(\mathbb{C})$. Theorem 4 (2) (b) in \cite{Kollar} asserts that for certain $n>0$ (in fact, infinitely many values of $n$) one has that $\mathrm{Pole}_n(S)$ contains a positive dimensional constructible set (namely, certain $\rho_m(D_m(\mathbb{C}))$ in the notation of \emph{loc. cit.}). Thus, the pre-image of $\mathrm{Pole}_n(S)\subseteq \mathrm{Symm}^n\mathbb{P}^1_\mathbb{C}(\mathbb{C})$ in $\mathbb{P}^1_\mathbb{C}(\mathbb{C})^n$ contains a positive dimensional constructible set, and so does some (hence, each) coordinate projection. The constructible sets of $\mathbb{P}^1_\mathbb{C}(\mathbb{C})$ are either finite or cofinite, hence item (ii) holds. \end{proof} With this at hand, we can prove Theorem \ref{ThmCt}. \begin{proof}[Proof of Theorem \ref{ThmCt}] This argument builds on the same construction appearing in Example 6 (1) of \cite{Kollar}. Let $S_n\subseteq \mathbb{C}(t)$ be the set of polynomials of degree $n$. Note that $\mathbb{C}$ is p.e. $\mathscr{L}$-definable over $\mathbb{C}(t)$ thanks to the Riemann-Hurwitz formula; e.g. using the $\mathscr{L}_a$-formula $\exists y, y^2=x^3+1$. Hence, $S_n=\{c_0+c_1t+...+c_nt^n: c_0,...,c_n\in \mathbb{C}\mbox{ and }c_n\ne 0\}\subseteq \mathrm{Rat}_n$ is p.e. $\mathscr{L}$-definable over $\mathbb{C}(t)$. In particular, taking $r(n)=\mathrm{rank}^{p.e.}_{\mathbb{C}(t)}(S_n)$ we see that there is an affine variety $X_n\subseteq \mathbb{A}^{r(n)+1}_{\mathbb{C}(t)}$ defined over $\mathbb{C}(t)$ such that $$ S_n = \{u_0\in \mathbb{C}(t) : \exists u_1...\exists u_{r(n)}, (u_0,u_1,...,u_{r(n)})\in X_n(\mathbb{C}(t))\}. $$ Taking $f:X_n\to \mathbb{A}^1_{\mathbb{C}(t)}$ as the projection onto the first coordinate (a morphism defined over $\mathbb{C}(t)$) we see that $S_n=f(X_n(\mathbb{C}(t)))$. 
Note that $S_n$ is isomorphic to $\mathbb{C}^{n}\times \mathbb{C}^\times$ as varieties over $\mathbb{C}$, so we can apply Theorem \ref{ThmKollar} with $d=n+1$. As (ii) does not hold, we must have $\dim_{\mathbb{C}(t)} X_n\ge d=n+1$. Since $X_n\subseteq \mathbb{A}^{r(n)+1}_{\mathbb{C}(t)}$, we conclude that $\mathrm{rank}^{p.e.}_{\mathbb{C}(t)}(S_n)=r(n)\ge n$. \end{proof} For our discussion, the relevant consequence of Conjecture \ref{ConjBdd} is the following. \begin{proposition}\label{PropBddnonDioph} Let $K$ be a global field and let $\mathscr{G}$ be a finite set of field generators for it. Consider $K$ as a structure over $\mathscr{L}=\mathscr{L}_a\cup\mathscr{G}$. If Conjecture \ref{ConjBdd} holds for $K$, then $K$ does not have the DPRM property and it is not p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. Thus: \begin{itemize} \item[(i)] In the case $K=\mathbb{Q}$ this implies that $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. \item[(ii)] In the case $K=k(t)$ for a finite field $k$, this implies that $k[t]$ is not Diophantine in $k(t)$ and that the field $k(t)$ is not p.e. bi-interpretable with $k[t]$. \end{itemize} \end{proposition} \begin{proof} The $\mathscr{L}$-structure $K$ is uniquely listable by Corollary \ref{CoroULfg}. Hence, the first part follows from Theorem \ref{ThmCatDPRM}, Lemma \ref{LemmaCatBdd}, and Theorem \ref{ThmCharDPRM}. In addition, (i) follows from Proposition \ref{PropZQ} while (ii) follows from Proposition \ref{Propkt}. \end{proof} We remark that the analogue of items (i) and (ii) in Proposition \ref{PropBddnonDioph} for a number field $K$ is conditional on $\mathbb{Z}$ being Diophantine in $O_K$, which is not known in general. See Proposition \ref{PropOK} and the discussion after it. For more information about the p.e. rank in the case of fields, we refer the reader to \cite{DDF} by Daans, Dittmann, and Fehm. In particular, they independently arrived at the observation that if the p.e. 
rank of Diophantine subsets of $\mathbb{Q}$ is unbounded, then $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. \subsection{A Diophantine approximation conjecture}\label{SecKey} Let $K$ be a global field and let $v$ be a place of $K$ with normalized absolute value $|-|_v$; that is, if $v$ corresponds to a prime ideal $\mathfrak{p}$ then $|x|_v=[O_K:\mathfrak{p}]^{-\mathrm{ord}_\mathfrak{p}(x)}$ while if $v$ corresponds to a (possibly real) embedding $\sigma: K\to \mathbb{C}$ then $|x|_v=|\sigma(x)|$ where $|-|$ is the usual absolute value on $\mathbb{C}$. Let $K_v$ be the completion of $K$ at $v$. For every projective variety $X$ over $K$ and every Cartier divisor $D$ on $X$ defined over $K$, there is a local Weil function $\lambda_{X,D,v}:X(K_v)-\mathrm{supp}\,(D)\to \mathbb{R}$, where $\mathrm{supp}\,(D)$ denotes the support of $D$; see Section 6.2 in \cite{Serre} for a precise definition and construction. Roughly speaking, if $D$ is represented by $\{(U_j, f_j)\}_j$ for some finite open cover $\{U_j\}_j$ of $X$ and $f_j$ are the corresponding local equations of $D$, then $\lambda_{X,D,v}(P)= -\log |f_{j}(P)|_v + \alpha_j(P)$ for all $P\in U_j-\mathrm{supp}\,(D)$, where $\alpha_j$ is a certain nice bounded function. In particular, if $D$ is effective and $P_1,P_2,...$ is a sequence in $X(K_v)-\mathrm{supp}\,(D)$, then $\lambda_{X,D,v}(P_j)\to \infty$ if and only if the sequence $P_j$ approaches $\mathrm{supp}\,(D)(K_v)$ $v$-adically. Since we will be concerned with points in the support of divisors, it is worth pointing out the following clarification. If $X$ is a variety over a field $k$ and $D$ is an effective Cartier divisor on $X$ defined over $k$, then $D$ is locally given by equations over $k$ on an open covering of $X$, and there is no need for the variety $\mathrm{supp}\,(D)$ to have $k$-rational points. 
For instance, if $k=\mathbb{Q}$ and $X=\mathbb{P}^1_\mathbb{Q} = {\rm Proj}\, \mathbb{Q}[x_0,x_1]$, we have the open covering defined over $\mathbb{Q}$ given by $U_0=\{[x_0:x_1]:x_0\ne 0\}$ and $U_1=\{[x_0:x_1]:2x_0^2\ne x_1^2\}$. Let us define the divisor $D$ represented by $\{(U_0, 2x_0^2-x_1^2), (U_1,1) \}$. Then $D$ is an effective divisor of degree $2$ defined over $\mathbb{Q}$ with $\mathrm{supp}\,(D)(\mathbb{Q})=\emptyset$ and $\mathrm{supp}\,(D)(\mathbb{R})=\{[1:-\sqrt{2}],[1:\sqrt{2}]\}$. We would like to propose the following: \begin{conjecture}\label{ConjKey} Let $K$ be a global field and let $v$ be a place of $K$. Let $X$ and $Y$ be positive dimensional irreducible projective varieties over $K$, let $f : X\dasharrow Y$ be a dominant rational map defined over $K$, and let $U$ be a non-empty Zariski open set of $X$ defined over $K$ and contained in the domain of $f$. Suppose that $X(K)$ is Zariski dense in $X$. Then there is an effective Cartier divisor $D$ on $Y$ defined over $K$, such that $\lambda_{Y, D, v}$ is unbounded on $f(U(K))-\mathrm{supp}\,(D)$. That is, there is a sequence of $K$-rational points in $U-f^{-1}(\mathrm{supp}\, (D))$ whose images under $f$ approach $\mathrm{supp}\,(D)(K_v)$ $v$-adically. \end{conjecture} Applications of this conjecture in the study of Diophantine subsets of global fields will be discussed in Section \ref{SecnD}. A slightly different version of Conjecture \ref{ConjKey} using morphisms instead of rational maps was proposed by the author in \cite{OWR}, but the current form of the conjecture is simpler to use in applications and it gives a wider range of search for potential counterexamples (if any). Even the simplest case when $U=X=Y$ and $f=\mathrm{Id}_X$ is open in general. In this case, Conjecture \ref{ConjKey} says that there is a sequence of $K$-rational points of $X$ that $v$-adically accumulates towards the support of some effective Cartier divisor on $X$ defined over $K$. 
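For a numerical illustration of how a local Weil function detects $v$-adic accumulation, consider again the divisor $D$ on $\mathbb{P}^1_\mathbb{Q}$ cut out by $2x_0^2-x_1^2$ and the archimedean place. The following minimal Python sketch (the function names and the choice of rational points are ours) evaluates $-\log|2-u^2|$, which represents $\lambda_{\mathbb{P}^1,D,\infty}$ up to a bounded term on the chart $U_0$ with affine coordinate $u=x_1/x_0$, along the continued-fraction convergents of $\sqrt{2}$:

```python
from fractions import Fraction
import math

def pell_convergents(n):
    """First n continued-fraction convergents p/q of sqrt(2)."""
    p, q, out = 1, 1, []
    for _ in range(n):
        p, q = p + 2 * q, p + q  # Pell-type recurrence
        out.append(Fraction(p, q))
    return out

# lambda(u) = -log|2 - u^2| is a local Weil function for D (up to a bounded term).
lambdas = [-math.log(abs(2 - float(u) ** 2)) for u in pell_convergents(10)]

# The values increase along the sequence: these rational points of P^1_Q
# accumulate archimedeanly on supp(D)(R) = {[1:-sqrt(2)], [1:sqrt(2)]}.
assert all(b > a for a, b in zip(lambdas, lambdas[1:]))
```

Of course this only illustrates the phenomenon for one explicit divisor; the content of Conjecture \ref{ConjKey} is the existence of such a divisor for every dominant rational map with Zariski dense source.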
Conjecture \ref{ConjKey} can be reduced to a special case. \begin{proposition}\label{PropP1} Let $K$ be a global field and $v$ a place of $K$. If Conjecture \ref{ConjKey} holds for $K$ and $v$ in the special case when $Y=\mathbb{P}^1_K$, then it holds in general for this choice of $K$ and $v$. \end{proposition} \begin{proof} In the setup of Conjecture \ref{ConjKey}, consider a non-constant rational function $g:Y\dasharrow \mathbb{P}^1_K$ and let $Z\subseteq Y$ be the locus where $g$ is not defined. Let $U\subseteq X$ be a non-empty Zariski open set defined over $K$ and contained in $\mathrm{dom}(f)$. Let $U'=U-f^{-1}(Z)$ and note that $U'$ is contained in $\mathrm{dom}(g\circ f)$. Let us apply the conjecture to $g\circ f:X\dasharrow \mathbb{P}^1_K$ and $U'$. This gives a divisor $D$ on $\mathbb{P}^1_K$. Let $E$ be any effective Cartier divisor on $Y$ defined over $K$ whose support contains $g^{-1}(\mathrm{supp}\,(D))\cup Z$. Let $x_1,x_2,...$ be a sequence in $U'(K)$ (hence, in $U(K)$) such that $(g(f(x_j)))_{j\ge 1}$ $v$-adically approaches $\mathrm{supp}\,(D)(K_v)$. Since $Y$ is projective and $K_v$ is locally compact, $Y(K_v)$ is compact and the sequence $(f(x_j))_{j\ge 1}$ has a $v$-adic accumulation point $y\in Y(K_v)$. Note that either $y\in Z(K_v)$ or $g$ is defined at $y$, in which case $y \in g^{-1}(\mathrm{supp}\,(D))(K_v)$. In either case, $y\in \mathrm{supp}\,(E)(K_v)$. Therefore, $(f(x_j))_{j\ge 1}$ has a subsequence that $v$-adically approaches $\mathrm{supp}\,(E)(K_v)$. \end{proof} The case $K=\mathbb{Q}$ is particularly relevant for us since, as we will see, Conjecture \ref{ConjKey} for $K=\mathbb{Q}$ implies that $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$ (cf. Proposition \ref{PropKeyQ} below). In this case we have \begin{proposition} Mazur's Conjecture \ref{ConjMazur} implies Conjecture \ref{ConjKey} for $K=\mathbb{Q}$ and $v=\infty$ the archimedean place. 
Furthermore, a positive answer to Mazur's question \ref{QuestionMazur} for a given number field $K$ and a place $v$ of it implies Conjecture \ref{ConjKey} for this choice of $K$ and $v$. \end{proposition} \begin{proof} By Proposition \ref{PropP1} it suffices to consider the case of an irreducible projective variety $X$ over $K$ with $X(K)$ Zariski dense and a dominant rational function $f:X\dasharrow \mathbb{P}^1_K$ defined over $K$. Let $U\subseteq X$ be a non-empty Zariski open set defined over $K$. Shrinking $U$ if necessary we may assume that $U$ is affine and that $f:U\to \mathbb{A}^1_K$ is a regular function defined over $K$. Then $f(U(K))\subseteq K$ is an infinite Diophantine set and in either case (assuming Conjecture \ref{ConjMazur} or a positive answer to Question \ref{QuestionMazur}) Lemma \ref{LemmaApplyMazur} would give that $S=f(U(K))$ has at most finitely many $v$-adically isolated points. Let $y\in S$ be a point which is not $v$-adically isolated in $S$. Then we can take $D$ as the divisor on $\mathbb{P}^1_K$ determined by $y\in f(U(K))$, which is as required by Conjecture \ref{ConjKey} because there is a sequence in $S-\{y\}$ that $v$-adically converges to $y$. \end{proof} In particular, the available evidence for Mazur's Conjecture \ref{ConjMazur} and for a positive answer to Question \ref{QuestionMazur} provides evidence for Conjecture \ref{ConjKey} in the number field setting. Since the analogue of Mazur's conjecture fails over function fields due to the example provided by Theorem \ref{ThmDiscrete} and since the previous result shows a close connection between Mazur's conjecture and Conjecture \ref{ConjKey}, one may ask whether Theorem \ref{ThmDiscrete} can also be used to give a counterexample to the function field version of Conjecture \ref{ConjKey}. On the contrary, it gives a rather non-trivial example where Conjecture \ref{ConjKey} has essentially one chance to work, and it does. 
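Before turning to the example, we note that the Artin--Schreier computations involved can be verified mechanically. The following Python sketch (the helper functions are ours) represents polynomials over $\mathbb{F}_p$ as dense coefficient lists and checks the identity $f^p-f=t^{p^n}-t$ for the truncated series $f=t+t^p+\cdots+t^{p^{n-1}}$ used in the proof at the beginning of this section:

```python
# Polynomials over F_p as dense coefficient lists (index = degree).
def pmul(a, b, p):
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % p
    return c

def psub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def monomial(d):
    return [0] * d + [1]

for p in (2, 3, 5):
    for n in (1, 2, 3):
        # f = t + t^p + ... + t^(p^(n-1))
        f = [0] * (p ** (n - 1) + 1)
        for i in range(n):
            f[p ** i] = 1
        fp = [1]
        for _ in range(p):
            fp = pmul(fp, f, p)  # fp = f^p
        # Artin-Schreier identity: f^p - f = t^(p^n) - t
        lhs = psub(fp, f, p)
        rhs = psub(monomial(p ** n), monomial(1), p)
        assert all(c == 0 for c in psub(lhs, rhs, p))
```

The limit case of this identity is what makes the power series $b+t+t^p+t^{p^2}+\cdots$ appearing in the example below roots of $T^p-T+t$.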
\begin{example} For $p>2$ let $K=\mathbb{F}_p(t)$ and let $v$ be the $t$-adic valuation on $K$. Let $U$ be the affine curve in Theorem \ref{ThmDiscrete} and let $f, g:U\to \mathbb{A}^1_K\subseteq \mathbb{P}^1_K$ be the projection maps from $U$ onto the $x$ and $y$ coordinates respectively. Let $X$ be a projective closure of $U$ and extend $f,g$ to rational functions $f,g:X\dasharrow \mathbb{P}^1_K$. Let $T$ be the affine coordinate on $\mathbb{A}^1_K$. For the map $f$ we note that the only $t$-adic limit point of $f(U(K))=S_1=\{t^{p^n}: n\ge 0\}$ in $K_v=\mathbb{F}_p((t))$ is $0$. So, we can take the divisor $D=\{T=0\}$. On the other hand, the elements of $g(U(K))=S_2$ $t$-adically accumulate towards the formal power series $f_b=b+t+t^p+t^{p^2}+...\in \mathbb{F}_p[[t]]\subseteq K_v$ for $b\in \mathbb{F}_p$, and these are the only limit points. The power series $f_b$ for $b\in \mathbb{F}_p$ are algebraic over $\mathbb{F}_p(t)$ and, in fact, they are the roots of $T^p-T+t$. We can take the divisor $D=\{T^p-T+t=0\}$ on $\mathbb{A}^1_K\subseteq \mathbb{P}^1_K$. In both cases, Conjecture \ref{ConjKey} holds with the given choices of the effective divisor $D$ on $\mathbb{A}^1_K\subseteq \mathbb{P}^1_K$. We note that since the Diophantine sets $S_1$ and $S_2$ have only finitely many $t$-adic limit points, the choice of $D$ is essentially unique (up to multiplicity and up to adding more components). \end{example} \subsection{Consequences of the Diophantine approximation conjecture} \label{SecnD} Applications of Conjecture \ref{ConjKey} are simplified by the following observation. \begin{lemma}\label{LemmaKey} Let $K$ be a global field and let $v$ be a place of $K$. Let $S\subseteq K$ be an infinite Diophantine subset which is $v$-adically bounded in $K$ and has exactly one $v$-adic limit point $\alpha\in K_v$. If Conjecture \ref{ConjKey} holds for $K$ and $v$, then $\alpha$ is algebraic over $K$. 
\end{lemma} \begin{proof} Since $S$ is Diophantine, there is an affine variety $U$ over $K$ and a morphism $f:U\to \mathbb{A}^1_K$ such that $f(U(K))=S$. Possibly passing to a subsequence in $S$ and shrinking $U$, we may assume that $U$ is irreducible and that $U(K)$ is Zariski dense in $U$. Let $X$ be a projective closure of $U$ and extend $f$ to a rational function $f: X\dasharrow \mathbb{P}^1_K$. Since $S$ is $v$-adically bounded in $K$, the point $\alpha\in K_v\subseteq \mathbb{P}^1_K(K_v)$ is the only $v$-adic limit point of $f(U(K))=S$ in $\mathbb{P}^1_K(K_v)$. If $\alpha$ were transcendental over $K$, it would not belong to the support of any Cartier divisor on $\mathbb{P}^1_K$ defined over $K$. This would contradict Conjecture \ref{ConjKey}. \end{proof} \begin{proposition} If Conjecture \ref{ConjKey} holds for a global field $K$ and any place $v$, then $K$ does not have the DPRM property over the language $\mathscr{L}=\mathscr{L}_a\cup\mathscr{G}$, where $\mathscr{G}$ is a finite set of field generators for $K$. In particular, $K$ is not p.e. bi-interpretable with the $\mathscr{L}_a$-structure $\mathbb{N}$. \end{proposition} \begin{proof} One can take a listable set $S$ consisting of the terms of a sequence that $v$-adically converges to a transcendental $\alpha\in K_v$ ---such examples are easy to produce using, for instance, the same idea as in Liouville's explicit construction of transcendental numbers. The result now follows from Lemma \ref{LemmaKey}. The final part is due to Theorem \ref{ThmCharDPRM}. \end{proof} From Propositions \ref{PropZQ} and \ref{Propkt} we immediately deduce: \begin{proposition}\label{PropKeyQ} If Conjecture \ref{ConjKey} holds for $\mathbb{Q}$ and any $v$, then $\mathbb{Z}$ is not Diophantine in $\mathbb{Q}$. \end{proposition} \begin{proposition} Let $k$ be a finite field. If Conjecture \ref{ConjKey} holds for $K=k(t)$ and any place $v$, then $k[t]$ is not Diophantine in $k(t)$. 
\end{proposition} The non-Diophantineness of $\mathbb{Z}$ in $\mathbb{Q}$ and of $k[t]$ in $k(t)$ is also implied by different conjectures, as discussed in previous sections. However, Conjecture \ref{ConjKey} has other consequences: \begin{proposition}\label{Proptn} Let $k$ be a finite field. If Conjecture \ref{ConjKey} holds for $K=k(t)$ and the place $v$ given by the $t$-adic valuation, then $\{t^n: n\ge 0\}$ is not Diophantine in $k(t)$. \end{proposition} This should be compared to the following result of Pheidas (see Lemmas 1 and 3 in \cite{PheidasInv} for characteristic $p\ge 3$, and see \cite{Videla} for characteristic $2$; see also \cite{PastenFrob} for a generalization to function fields of bounded genus uniformly in the characteristic): \begin{theorem}[Pheidas]\label{ThmPheidasps} Let $k$ be a finite field of characteristic $p>0$. The binary relation $x\le_p y$ on $k(t)$ defined by $\exists s\ge 0, y=x^{p^s}$ is Diophantine over $k(t)$. In particular, the set $\{t^{p^n} : n\ge 0\}$ is Diophantine in $k(t)$. \end{theorem} Let us recall some preliminaries for the proof of Proposition \ref{Proptn}. Let $m\ge 2$ be an integer. By \emph{$m$-automaton}, we mean a finite deterministic automaton with input alphabet $\{0,1,...,m-1\}$ and output states $0$ (``reject'') and $1$ (``accept''). We say that a set $A\subseteq \mathbb{N}$ is \emph{$m$-recognizable} if there is an $m$-automaton $\mathscr{M}_A$ that computes the characteristic function of $A$. More precisely, for an integer $r\in \mathbb{N}$, the $m$-automaton $\mathscr{M}_A$ takes as input the string formed by the base $m$ digits of $r$ (from least to most significant) and it reaches the final state $0$ or $1$ according to $r\not\in A$ or $r\in A$ respectively. For a set $A\subseteq \mathbb{N}$ we define the counting function $N(A,x)=\#\{n\in A : n\le x \}$. Let us recall the following classical estimate. \begin{lemma}\label{LemmaRec} Let $m\ge 2$ and let $A\subseteq \mathbb{N}$ be $m$-recognizable. 
If $A$ is infinite, then there are constants $x_0\ge 1$ and $c>0$ such that for all $x\ge x_0$ we have $N(A,x)>c\cdot \log x$. \end{lemma} \begin{proof} See Proposition 5 in \cite{MinskyPapert} for the case $m=2$; the general case is proved in the same way. Alternatively, there is the stronger estimate provided by Theorem 12 in \cite{Cobham}. \end{proof} Let $A\subseteq \mathbb{N}$ and let $p$ be a prime. The generating series of $A$ over $\mathbb{F}_p$ is the formal power series $f_A=\sum_{a\in A} t^a\in \mathbb{F}_p[[t]]$. The following result is due to Christol \cite{Christol}. \begin{theorem}[Christol]\label{ThmChristol} Let $A\subseteq \mathbb{N}$ and let $p$ be a prime. The set $A$ is $p$-recognizable if and only if $f_A\in \mathbb{F}_p[[t]]$ is algebraic over $\mathbb{F}_p(t)$. \end{theorem} For a prime $p$, the binary relation $|_p$ on $\mathbb{N}$ is defined as follows: $x|_py$ if and only if there is $s\ge 0$ with $y=xp^s$. The following definability result is due to Pheidas \cite{Pheidasdivp}. \begin{theorem}[Pheidas]\label{ThmPheidasdivp} The multiplication function $\cdot: \mathbb{N}\times \mathbb{N}\to \mathbb{N}$ is p.e. definable over the structure $(\mathbb{N}; 0,1,+,|_p,=)$. \end{theorem} With these results at hand, we can prove Proposition \ref{Proptn}. \begin{proof}[Proof of Proposition \ref{Proptn}] Assume Conjecture \ref{ConjKey} for $k(t)$ and $v$ the $t$-adic valuation. Let $p$ be the characteristic of $k$. The rule $t\mapsto t+1$ defines a $k$-linear field automorphism of $k(t)$, so it suffices to show that $P=\{(1+t)^n : n\ge 0\}$ is not Diophantine over $k(t)$. Let $A\subseteq \mathbb{N}$ be the set of integers which can be written as the sum of distinct integers of the form $p^{j^j}$ for $j\ge 1$. 
That is, $A$ consists of all integers whose base $p$ expansion only has digits $0$ and $1$, and the digit $1$ can occur only in the position of $p^n$ for $n=1,4,27,...,j^j,...$ Explicitly, $$ A=\{0, p, p^4, p+p^4, p^{27},p+p^{27}, p^4+p^{27}, p+p^4+p^{27},...\} $$ We note that for $x=p^{j^j}$ we have $N(A,x)\le 2^{j}$, as every $n\in A$ with $n\le x$ is uniquely determined by some subset of $\{p^{i^i}: 1\le i\le j\}$ ---in fact, $N(A,x)=1+2^{j-1}$. Thus, for every $c>0$ there is some $x_0$ such that for all $x=p^{j^j}>x_0$ we have $N(A,x)< j^j\cdot c\log p=c\log x$. By Lemma \ref{LemmaRec} and Theorem \ref{ThmChristol}, the power series $f_A\in \mathbb{F}_p[[t]]\subseteq k[[t]]$ is transcendental over $k(t)$. For $r\ge 1$ let $n_r=\sum_{j=1}^r p^{j^j}$ and define $S=\{n_r : r\ge 1\}=\{p, p+p^4, p+p^4+p^{27},...\}\subseteq \mathbb{N}$. Then $$ (1+t)^{n_r} = \prod_{j=1}^r (1+t)^{p^{j^j}}=\prod_{j=1}^r \left(1+t^{p^{j^j}}\right). $$ From the last product, the sequence of polynomials $(1+t)^{n_r}$ converges $t$-adically to $f_A$ as $r\to \infty$. For the sake of contradiction, suppose that $P=\{(1+t)^{n} : n\ge 0\}$ is Diophantine over $k(t)$. Then, by Theorem \ref{ThmPheidasps}, the map $\theta: P\to \mathbb{N}$ given by $\theta((1+t)^n)=n$ determines a p.e. interpretation of the structure $(\mathbb{N};0,1,+,|_p,=)$ in $k(t)$, where $k(t)$ is seen as a structure over $\mathscr{L}=\mathscr{L}_a\cup\mathscr{G}$ for a finite set $\mathscr{G}$ of field generators for $k(t)$. By Theorem \ref{ThmPheidasdivp}, $\theta$ also determines a p.e. interpretation of $\mathbb{N}$ seen as an $\mathscr{L}_a$-structure. Note that the set $S=\{n_r : r\ge 1\}\subseteq \mathbb{N}$ defined above is listable because it is defined from elementary arithmetic functions. By the DPRM theorem (on $\mathbb{N}$) we get that $S$ is p.e. $\mathscr{L}_a$-definable over $\mathbb{N}$. Hence, $\theta^*(S)=\{(1+t)^{n_r} : r\ge 1\}$ is p.e. $\mathscr{L}$-definable over $k(t)$ because $\theta$ is a p.e. 
interpretation. In particular, $T:=\{(1+t)^{n_r} : r\ge 1\}$ is Diophantine in $k(t)$. The Diophantine set $T$ is $t$-adically bounded and we proved that its only $t$-adic limit point is $f_A\in k[[t]]$, which is transcendental over $k(t)$. By Lemma \ref{LemmaKey}, this contradicts Conjecture \ref{ConjKey}. \end{proof} We remark that, alternatively, the use of Christol's theorem in the previous argument can be replaced by an ad hoc transcendence lemma for lacunary power series, after some modification of the construction. However, we feel that Christol's theorem might be relevant in approaching Conjecture \ref{ConjKey} in the function field setting, which is why we decided to highlight this connection. \subsection{Final questions} \begin{question} Does every finitely generated infinite domain have the DPRM property? In other words, is every finitely generated infinite domain p.e. bi-interpretable with the semi-ring $\mathbb{N}$? \end{question} \begin{question} Is there some finitely generated field which has the DPRM property? Namely, is there some finitely generated field which is p.e. bi-interpretable with the semi-ring $\mathbb{N}$? \end{question} \section{Acknowledgments} I would like to thank Barry Mazur and Bjorn Poonen for several discussions on the conjectures presented in Section \ref{SecConjectures}. I am also grateful to Philip Dittmann for his comments on a first version of this manuscript and for informing me about his joint work with Nicolas Daans and Arno Fehm \cite{DDF}. The extremely valuable comments by the referee are gratefully acknowledged. This research was supported by ANID (ex CONICYT) FONDECYT Regular grant 1190442 from Chile.
\section{Introduction}\label{sec:intro} A \emph{cut} of a graph is a partition of the vertices of the graph into two disjoint subsets. Graph cuts play an important role in combinatorial optimization and graph theory. The classical \emph{minimum cut problem} is well known to be polynomially solvable~\cite{FordFulkerson:flow}. Due to the rich application realm of this problem, many variants and extensions have been investigated. Some problems ask to partition the graph into more than two parts in order to disconnect certain vertices, such as the \emph{$k$-way cut problem} (the \emph{$k$-cut problem})~\cite{polynominalKCut,KT11}, the \emph{multiterminal cut problem}~\cite{ComplexityMultitermial,X2010} and the \emph{multicut problem}~\cite{Multicut:bounded,MR11}. Other problems still partition the graph into two parts, but impose additional requirements beyond disconnectivity. One of the most extensively studied additional requirements is a constraint on the number of vertices or edges in each of the two parts. For example, the \emph{balanced cut problem}~\cite{ARV04,FM06,LR99} and the \emph{minimum bisection problem}~\cite{CLPPS14,FK02,FKN00} require the numbers of vertices in the two parts of the cut to be as close as possible. The (\emph{balanced}) \emph{judicious bipartition problem}~\cite{Judicious} has conditions on the numbers of edges in the two parts. Some other well-studied additional requirements include conditions on the connectivity of the two parts, such as the \emph{2-disjoint connected subgraphs problem}~\cite{cppw14}, and conditions on the degree of the two parts, such as the series of bipartition problems with degree constraints~\cite{BB:partition,decompose:treewidth,decompose:alg,Sdco,XN17}. In this paper, we study the \emph{bounded-degree cut} problem, which belongs to the latter kind of extension: partition a given graph into two parts subject to degree constraints on the induced subgraphs of the two parts. 
We mainly consider the upper bounds of the degree. The problem is defined as follows. \noindent\rule{\linewidth}{0.2mm} \textsc{bounded-degree cut} (with parameter: $k$)\\ \textbf{Instance:} A multigraph $G=(V,E)$, two disjoint nonempty vertex subsets $A,B\subseteq V$, two functions $\mathrm{u}_A$ and $\mathrm{u}_B$ from $V$ to $\{0,1,\ldots,|E|\}$ and an integer $k\geq 0$.\\ \textbf{Question:} Does there exist a minimal $(A,B)$-cut $(V_A,V_B)$ such that\\ the number of edges with one endpoint in $V_A$ and one endpoint in $V_B$ is at most $k$,\\ for each vertex $v\in V_A$, its degree in the induced graph $G[V_A]$ is at most $\mathrm{u}_A(v)$, and\\ for each vertex $v\in V_B$, its degree in the induced graph $G[V_B]$ is at most $\mathrm{u}_B(v)$?\\ \rule{\linewidth}{0.2mm} During the last decade, cut-related problems have been extensively studied from the viewpoint of parameterized algorithms, and the parameterized complexity of many variants and extensions of the minimum cut problem has been determined. In this paper, we will study \textsc{bounded-degree cut} from the viewpoint of parameterized algorithms. The naive brute-force algorithm that enumerates all partitions solves \textsc{bounded-degree cut} in $2^{|V|}\cdot|G|^{O(1)}$ time, where the exponential part of the running time depends on the input size $|V|$. We show that this problem admits a \emph{fixed-parameter tractable} (FPT) algorithm with parameter $k$, that is, an algorithm with running time $f(k)\cdot|G|^{O(1)}$ for a computable function $f(\cdot)$. Our main result is the first single-exponential FPT algorithm for \textsc{bounded-degree cut}. \begin{theorem}\label{th:mainresult} \textsc{bounded-degree cut} admits an FPT algorithm that runs in $2^{18k} \cdot |G|^{O(1)}$ time. \end{theorem} This theorem also implies that \textsc{bounded-degree cut} can be solved in polynomial time for $k=O(\log |G|)$. 
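For concreteness, the naive $2^{|V|}\cdot|G|^{O(1)}$-time enumeration mentioned above can be sketched as follows. This is an illustrative Python sketch (all names are ours) that tries every bipartition extending $(A,B)$ and checks the cut-size and degree constraints; for simplicity it omits the minimality requirement on the $(A,B)$-cut:

```python
from itertools import combinations

def bounded_degree_cut_bruteforce(V, E, A, B, uA, uB, k):
    """Try every bipartition (V_A, V_B) with A contained in V_A and B in V_B.
    E is an edge list, so parallel edges of a multigraph are allowed.
    Checks only the cut-size and degree constraints (minimality omitted)."""
    free = [v for v in V if v not in A and v not in B]
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            VA = set(A) | set(extra)
            VB = set(V) - VA
            # Number of edges crossing the bipartition.
            cut = sum(1 for (x, y) in E if (x in VA) != (y in VA))
            if cut > k:
                continue
            degA = {v: 0 for v in VA}
            degB = {v: 0 for v in VB}
            for (x, y) in E:
                if x in VA and y in VA:
                    degA[x] += 1; degA[y] += 1
                if x in VB and y in VB:
                    degB[x] += 1; degB[y] += 1
            if all(degA[v] <= uA[v] for v in VA) and \
               all(degB[v] <= uB[v] for v in VB):
                return True
    return False

# A path a-c-b: cutting the single edge {c,b} puts c on the side of a.
V = ['a', 'c', 'b']
E = [('a', 'c'), ('c', 'b')]
print(bounded_degree_cut_bruteforce(V, E, {'a'}, {'b'},
      {v: 2 for v in V}, {v: 2 for v in V}, 1))  # prints True
```

Theorem \ref{th:mainresult} replaces this exponential dependence on $|V|$ by a $2^{18k}$ dependence on the parameter $k$ alone.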
\subsection{Related work} There are several interesting contributions on finding a cut or partition of a graph with additional requirements. It is known that the minimum ($s,t$)-cut problem is polynomially solvable. However, the \emph{balanced minimum ($s,t$)-cut problem} is NP-hard~\cite{FM06}; this problem asks to decide whether there is a minimum ($s,t$)-cut such that the number of vertices in each part is at most $\alpha$ times the total number of vertices, for some $0<\alpha< 1$. By adding trivial vertices to the input graph, one may assume that $\alpha$ is always $0.5$. Note that in this problem, the cut is required to be a minimum ($s,t$)-cut. Let $k$ denote the size of a minimum ($s,t$)-cut. By developing a dynamic programming algorithm, Feige and Mahdian~\cite{FM06} showed that the vertex-deletion variant of the balanced minimum ($s,t$)-cut problem is FPT with parameter $k$. This algorithm also works for the edge-deletion version. Another related problem is the (\emph{vertex}) \emph{minimum bisection problem}, which is to find a (vertex) cut of size at most $k$ such that the two parts of the cut have the same number of vertices. Marx's result in~\cite{M06} implies that the vertex minimum bisection problem is W[1]-hard with parameter $k$. Cygan et~al.~\cite{CLPPS14} showed that the edge version of the minimum bisection problem is FPT. The above problems have requirements on the numbers of vertices in the two parts of the cut or partition. The \emph{judicious bipartition problem} requires that the numbers of edges in the two parts be bounded by $k_1$ and $k_2$, respectively. Lokshtanov et~al.~\cite{Judicious} proved that the judicious bipartition problem is FPT with parameter $k_1+k_2$. In this paper, we consider \textsc{bounded-degree cut}, which is a cut problem with additional requirements on the upper bound of the degree of each vertex in the two parts, and take the cut size as the parameter. 
\subsection{Our methods} The main idea of the algorithm is to construct from a given instance on a graph $G$ a set of at most $2^{18k}$ new ``easy'' instances on the same graph with a special structure such that (i) the feasibility of each easy instance can be tested in $|G|^{O(1)}$ time; and (ii) the original instance is feasible if and only if at least one of the easy instances is feasible. Constructing such easy instances and testing the feasibility of all of them gives an FPT algorithm for the original instance. The idea of converting a general instance to a set of ``easy'' instances has been used to design parameterized algorithms for several hard and important problems~\cite{topological,KT11,randomized,CLPPS14}. The construction of easy instances is the most important step. Some of the crucial techniques used in our construction are based on the concept of \emph{important cut} (or \emph{important separator}) introduced by Marx~\cite{M06}. Important cuts and separators play an important role in designing FPT algorithms for cut problems. The fixed-parameter tractability of the (directed) multiterminal cut problem~\cite{CLL2009,CHM2013}, the multicut problem~\cite{MR11}, the directed feedback vertex set problem~\cite{CLLOR2008,subsetFVS} and many other important problems was proved by using important cuts and separators together with other techniques. We will apply important cuts in a nontrivial way to obtain some general lemmas on bounded sets related to cuts. These lemmas are crucial for designing our FPT algorithm. The framework of our algorithm is as follows. For a given instance $(G,A,B, \mathrm{u}_A, \mathrm{u}_B,k)$ with a feasible $(A,B)$-cut $(V_1,V_2)$, we try to guess some subsets $V'_1\subseteq V_1\setminus A$ and $V'_2\subseteq V_2\setminus B$ so that the new instance $(G,A^*=A\cup V'_1, B^*=B\cup V'_2, \mathrm{u}_A, \mathrm{u}_B,k)$ remains feasible and is an ``easy'' instance in the sense that its feasibility can be tested in $n^{O(1)}$ time. 
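For intuition about important cuts, on which our construction relies, here is a brute-force enumeration that checks the defining domination condition directly on small graphs. It is a sketch under our own naming, exponential in $|V|$, and it does not re-check the minimality condition from the formal definition (which makes no difference on the toy example in the usage note below).

```python
from itertools import combinations

def important_cuts(V, E, S, T):
    # Enumerate all candidate sides X with S <= X <= V - T, and keep
    # those not dominated: no strictly larger side X' may have a
    # cut-size at most that of X.
    mid = [v for v in V if v not in S and v not in T]

    def cutsize(X):
        return sum(1 for (u, v) in E if (u in X) != (v in X))

    cands = []
    for r in range(len(mid) + 1):
        for extra in combinations(mid, r):
            X = set(S) | set(extra)
            cands.append((X, cutsize(X)))
    out = []
    for X, c in cands:
        if not any(Xp > X and cp <= c for Xp, cp in cands):
            out.append((frozenset(X), c))
    return out
```

On the path $1$-$2$-$3$-$4$-$5$ with $S=\{1\}$ and $T=\{5\}$, only the side $\{1,2,3,4\}$ survives: every other side is dominated by it, matching the intuition that an important cut is pushed as far toward $T$ as possible.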
We call a vertex $v$ in $G$ \emph{$A$-unsatisfied} (resp., \emph{$B$-unsatisfied}) if its degree in $G$ is greater than $\mathrm{u}_A(v)$ (resp., $\mathrm{u}_B(v)$), and call an $A$- or $B$-unsatisfied vertex \emph{unsatisfied}. We first guess whether each unsatisfied vertex belongs to $V_1$ or $V_2$. Although the number of unsatisfied vertices may not be bounded by a function of $k$, the set $Z_{A1}$ of $A$-unsatisfied vertices in $V_1$ can contain at most $k$ vertices, because each vertex in $Z_{A1}$ must be adjacent to a vertex in $V_2$ to satisfy its degree constraint. Symmetrically, the set $Z_{B2}$ of $B$-unsatisfied vertices in $V_2$ can contain at most $k$ vertices. By applying the result on important cuts and our new lemmas, we construct at most $2^{12k}$ pairs $(X_1,X_2)$ of vertex subsets, one of which is equal to $(Z_{A1},Z_{B2})$. For the pair $(X_1,X_2)=(Z_{A1},Z_{B2})$ and the set $Z_{A2}$ (resp., $Z_{B1}$) of $A$-unsatisfied vertices in $V_2$ (resp., $B$-unsatisfied vertices in $V_1$), we see that the new instance $(G, A'=A\cup Z_{A1}\cup Z_{B1}, B'=B\cup Z_{A2}\cup Z_{B2}, \mathrm{u}_A, \mathrm{u}_B,k)$ remains feasible. However, this instance may not be ``easy'' yet in our sense, because whether the degree constraint on a vertex in $Z_{A1}$ or $Z_{B2}$ holds depends on the choice of an $(A',B')$-cut in the new instance. We next guess whether each neighbor of a vertex in $Z_{A1}\cup Z_{B2}$ belongs to $V_1$ or $V_2$. We see that the set $W_{B1}$ of neighbors of $Z_{B2}$ belonging to $V_1$ can contain at most $k$ vertices, because the number of such neighbors is bounded from above by the cut-size of $(V_1,V_2)$. Symmetrically, the set $W_{A2}$ of neighbors of $Z_{A1}$ belonging to $V_2$ contains at most $k$ vertices. By applying the result on important cuts and our new lemmas again, we construct at most $2^{6k}$ pairs $(Y_1,Y_2)$ of vertex subsets, one of which is equal to $(W_{B1},W_{A2})$. 
Let $W_{B2}$ (resp., $W_{A1}$) denote the set of neighbors of $Z_{B2}$ belonging to $V_2$ (resp., of $Z_{A1}$ belonging to $V_1$). Then for the right choice $(X_1,X_2)=(Z_{A1},Z_{B2})$ and $(Y_1,Y_2)=(W_{B1},W_{A2})$, the resulting instance $(G, A^*= A\cup Z_{A1}\cup Z_{B1}\cup W_{A1}\cup W_{B1}, B^*=B\cup Z_{A2}\cup Z_{B2}\cup W_{A2}\cup W_{B2}, \mathrm{u}_A, \mathrm{u}_B,k)$ remains feasible and is an easy instance, in which whether the degree constraint on a vertex in $Z_{A1}$ or $Z_{B2}$ holds does not depend on the choice of an $(A^*,B^*)$-cut in the new instance. The remaining part of the paper is organized as follows. Section~\ref{sec:prelimi} reviews basic notations on graphs and cuts and properties of minimum cuts and important cuts. Section~\ref{sec:candidate} introduces some technical lemmas, which will be the building blocks of our algorithm. We believe that these lemmas can be used to design FPT algorithms for more problems. Section~\ref{sec:easy_case} defines ``easy'' instances and proves their polynomial-time solvability. Based on the results in Section~\ref{sec:candidate}, Section~\ref{sec:branching} describes how to generate candidate set pairs $(X_1,X_2)$ for the pair $(Z_{A1},Z_{B2})$ and $(Y_1,Y_2)$ for the pair $(W_{B1},W_{A2})$. Finally, Section~\ref{sec:conclusion} makes some concluding remarks. \section{Preliminaries}\label{sec:prelimi} In this paper, a graph $G=(V,E)$ stands for an undirected multigraph with a vertex set $V$ and an edge set $E$. We will use $n$ and $m$ to denote the sizes of $V$ and $E$, respectively. Let $X$ be a subset of $V$. We use $G-X$ to denote the graph obtained from $G$ by removing the vertices in $X$ together with all edges incident to vertices in $X$. Let $G[X]$ denote the graph induced by $X$, i.e., $G[X]= G -(V\setminus X)$. Let $N_G(v)$ denote the set of neighbors of a vertex $v$ in $G$, and let $N_G(v;X)\triangleq N_G(v)\cap X$. 
Let $N_G(X)$ denote the set of neighbors $u\in V\setminus X$ of the vertices $v\in X$, i.e., $N_G(X)= \bigcup_{v\in X}N_G(v;V\setminus X)$. For two disjoint vertex subsets $X$ and $Y$, the set of edges with one endpoint in $X$ and one endpoint in $Y$ is denoted by $E_G(X,Y)$, and $E_G(X,V\setminus X)$ may be simply written as $E_G(X)$. Define $\mathrm{deg}_G(v)\triangleq |E_G(\{v\})|$ and $\mathrm{deg}_G(v;X)\triangleq |E_{G}(\{v\},X\setminus \{v\})|$. \begin{definition} \emph{\textbf{\emph{($(S,T)$-cuts)}} For two disjoint vertex subsets $S$ and $T$, an ordered pair $(V_1,V_2=V\setminus V_1)$ is called an {\em $(S,T)$-cut} if $S\subseteq V_1$ and $T\subseteq V_2$, and its {\em cut-size} is defined to be $|E_G(V_1)|$.} \end{definition} \begin{definition} \emph{\textbf{\emph{(minimal $(S,T)$-cuts, minimum $(S,T)$-cuts and MM $(S,T)$-cuts)}} An $(S,T)$-cut $(V_1,V_2)$ is minimal if $E_G(V_1)$ does not contain $E_G(V'_1)$ or $E_G(V'_2)$ as a subset for any $S\subseteq V'_1 \subsetneqq V_1$ and $T\subseteq V'_2 \subsetneqq V_2$. An $(S,T)$-cut $(V_1,V_2)$ is minimum if its cut-size $|E_G(V_1,V_2)|$ is minimum over all $(S,T)$-cuts. An $(S,T)$-cut $(V_1,V_2)$ is called an MM $(S,T)$-cut if it is a minimum $(S,T)$-cut such that $|V_1|$ is maximum over all minimum $(S,T)$-cuts.} \end{definition} \begin{lemma}\label{lm:min_cut} \emph{(\cite{FordFulkerson:flow,ComplexityMultitermial})} For two disjoint vertex subsets $S, T\subseteq V$, an MM $(S,T)$-cut is unique and it can be found in $O(\min\{n^{2/3},m^{1/2}\}m)$ time. \end{lemma} \begin{definition}\emph{ \textbf{\emph{(important cuts)}} A minimal $(S,T)$-cut $(X,V\setminus X)$ is called an important $(S,T)$-cut if there is no $(S,T)$-cut $(X',V\setminus X')$ such that $X'\supsetneqq X$ and $|E_G(X')| \leq |E_G(X)|$.} \end{definition} The following result is known \cite{CLL2009,MR11}. 
\begin{lemma} \label{lem_4kimp} Let $S, T\subseteq V$ be non-empty subsets in a graph $G=(V,E)$.\\ {\rm (i)} For any subset $X$ with $S\subseteq X\subseteq V\setminus T$, the MM $(X,T)$-cut is an important $(S,T)$-cut;\\ {\rm (ii)} There are at most $4^k$ important $(S,T)$-cuts of size at most $k$ and one can list all of them in $4^k (n+m)^{O(1)}$ time. \end{lemma} \section{Candidate Sets}\label{sec:candidate} We introduce the following technical lemmas, which will be the building blocks of our algorithm. These lemmas are crucial for designing our FPT algorithms. \begin{lemma} \label{lem_candidate1} Let $A,B,C\subseteq V$ be non-empty subsets in a graph $G=(V,E)$ and $k$ and $\ell$ be nonnegative integers. Then one can find in $2^{3(k+\ell)}(n+m)^{O(1)}$ time a family $\mathcal{X}$ of at most $2^{3(k+\ell)}$ subsets of $C$ with the property that $C\cap V_1\in \mathcal{X}$ for any minimal $(A, B)$-cut $(V_1,V_2)$ with size at most $k$ such that $| C\cap V_1|\leq \ell$. \end{lemma} \pf{ Let $\mathrm{Cut}(A,B,C,k,\ell;G)$ denote the set of minimal $(A, B)$-cuts $(V_1,V_2)$ in $G$ with size at most $k$ such that $| C\cap V_1|\leq \ell$. Construct a multigraph $H_b$ from $G$ by choosing a vertex $b\in B$ and adding a new edge between $b$ and each vertex $u\in C$, and let $\mathrm{ICut}(A,B,k+\ell;H_b)$ denote the set of important $(A,B)$-cuts in $H_b$ of size at most $k+\ell$. By Lemma~\ref{lem_4kimp}(ii), $|\mathrm{ICut}(A,B,k+\ell;H_b)|\leq 4^{k+\ell}$ holds, and $\mathrm{ICut}(A,B,k+\ell;H_b)$ can be found in time $4^{k+\ell} (n+m)^{O(1)}$. For any minimal $(A,B)$-cut $(V_1,V_2)\in \mathrm{Cut}(A,B,C,k,\ell;G)$, we see by Lemma~\ref{lem_4kimp}(i) that the MM $(A\cup (C\cap V_1), B)$-cut $(S,T)$ in $H_b$ is an important $(A,B)$-cut in $H_b$ of size at most $k+| C\cap V_1|\leq k+\ell$, where $ C\cap V_1 \subseteq N_{H_b}(b)\cap S$ holds. 
Construct the family $\mathcal{X}$ of subsets \[\mbox{ $X\subseteq N_{H_b}(b)\cap S$ with $|X|\leq \ell$ for each $(A,B)$-cut $(S,T)\in \mathrm{ICut}(A,B,k+\ell;H_b)$. }\] Then $\mathcal{X}$ contains the set $C\cap V_1$ for each $(A, B)$-cut $(V_1, V_2)\in \mathrm{Cut}(A,B,C,k,\ell;G)$. For each important $(A,B)$-cut $(S,T)\in \mathrm{ICut}(A,B,k+\ell;H_b)$, the family $\mathcal{X}$ contains at most $$\sum_{i=0}^\ell{|N_{H_b}(b)\cap S|\choose i} \leq \sum_{i=0}^\ell{k+\ell \choose i}<2^{k+\ell}$$ subsets $X$. Since $|\mathrm{ICut}(A,B,k+\ell;H_b)|\leq 4^{k+\ell}$, it holds that $|\mathcal{X}|\leq 4^{k+\ell}\cdot 2^{k+\ell}=2^{3(k+\ell)}$ and the family $\mathcal{X}$ can be constructed in $4^{k+\ell} (n+m)^{O(1)}+ 2^{3(k+\ell)}(n+m)^{O(1)}$ time. This proves the lemma. }\bigskip \begin{lemma} \label{lem_candidate2} Let $A,B,B'\subseteq V$ be non-empty subsets in a graph $G=(V,E)$, where $B'\subseteq B$, and $k$ be a nonnegative integer. Then one can find in $2^{3k}(n+m)^{O(1)}$ time a family $\mathcal{Y}$ of at most $2^{3k}$ subsets of $N_G(B')$ with the property that $N_G(B')\cap V_1\in \mathcal{Y}$ for any minimal $(A, B)$-cut $(V_1,V_2)$ with size at most $k$. \end{lemma} \pf{Let $\mathrm{Cut}(A,B,k)$ denote the set of minimal $(A, B)$-cuts in $G$ with size at most $k$. Let $\mathrm{ICut}(A,B,k)$ denote the set of important $(A,B)$-cuts in $G$ with size at most $k$. By Lemma~\ref{lem_4kimp}(ii), $|\mathrm{ICut}(A,B,k)|\leq 4^{k}$ holds, and $\mathrm{ICut}(A,B,k)$ can be found in $4^{k} (n+m)^{O(1)}$ time. For any minimal $(A,B)$-cut $(V_1,V_2)\in \mathrm{Cut}(A,B,k)$, we see by Lemma~\ref{lem_4kimp}(i) that the MM $(A \cup (N_G(B')\cap V_1), B)$-cut $(S,T)$ is an important $(A,B)$-cut of size at most $k$, where $ N_G(B')\cap V_1 \subseteq N_G(B')\cap S$ holds. 
Construct the family $\mathcal{Y}$ of subsets \[\mbox{ $Y\subseteq N_G(B')\cap S$ for each $(A,B)$-cut $(S,T)\in \mathrm{ICut}(A,B,k)$.}\] Then $\mathcal{Y}$ contains the set $N_G(B')\cap V_1$ for each $(A, B)$-cut $(V_1, V_2)\in \mathrm{Cut}(A,B,k)$. Note that the size of $N_G(B')\cap S$ is at most the size of the cut $(S,T)$. For each important $(A,B)$-cut $(S,T)\in \mathrm{ICut}(A,B,k)$, the family $\mathcal{Y}$ contains at most $$2^{|N_G(B')\cap S|}\leq 2^k$$ subsets $Y$. Since $|\mathrm{ICut}(A,B,k )|\leq 4^{k}$, it holds that $|\mathcal{Y}|\leq 4^{k }\cdot 2^{k }=2^{3k}$ and the family $\mathcal{Y}$ can be constructed in $4^{k } (n+m)^{O(1)}+ 2^{3k}(n+m)^{O(1)}$ time. This proves the lemma. }\bigskip \section{Restriction to an Easy Case}\label{sec:easy_case} Recall that an instance of \textsc{bounded-degree cut} is defined by a tuple $(G=(V,E),A,B , \mathrm{u}_A, \mathrm{u}_B, k)$ such that $G$ is a multigraph, $A, B\subseteq V$ are two disjoint vertex subsets, $\mathrm{u}_A$ and $\mathrm{u}_B$ are two functions from $V$ to $\{0,1,\ldots,|E|\}$, and $k$ is a nonnegative integer. We will use $I=(G=(V,E),A,B)$ to denote an instance of the problem, where $\mathrm{u}_A$, $\mathrm{u}_B$ and $k$ are omitted since they remain unchanged throughout our argument. We call a minimal $(A,B)$-cut $(V_A,V_B=V\setminus V_A)$ {\em feasible} to an instance $I$ if \\ ~-~ $|E_G(V_A)|\leq k$; \\ ~-~ $\mathrm{deg}_G(v;V_A)\leq \mathrm{u}_A(v)$ for all vertices $v\in V_A$; and \\ ~-~ $\mathrm{deg}_G(v;V_B)\leq \mathrm{u}_B(v)$ for all vertices $v\in V_B$,\\ where the last two conditions are also called the \emph{degree constraint}. A feasible $(A,B)$-cut in $I$ is called a {\em solution} to $I$, and an instance $I$ is called {\em feasible} if it admits a solution. \medskip \textsc{bounded-degree cut} is to decide whether a given instance $I$ is feasible or not. We observe the following. 
\begin{lemma} \label{lem_restriction} For an instance $I=(G=(V,E),A,B)$ and disjoint nonempty subsets $X,Y\subseteq V$, let $I_{X,Y}$ denote the instance $(G, A\cup X ,B\cup Y)$. \\ {\rm (i)} If $I$ is infeasible, then $I_{X,Y}$ is infeasible for any $X,Y\subseteq V$; \\ {\rm (ii)} If $I$ is feasible and $X\subseteq V_A$ and $Y\subseteq V_B$ hold for a feasible $(A,B)$-cut $(V_A,V_B)$ to $I$, then $I_{X,Y}$ admits a feasible $(A\cup X, B\cup Y)$-cut, which is also feasible to $I$. \end{lemma} \pf{{\rm (i)} Assume to the contrary that $I_{X,Y}$ is feasible and $(V_A,V_B)$ is a feasible $(A\cup X ,B\cup Y)$-cut. Then it holds that $A\subseteq V_A$ and $B\subseteq V_B$ and the cut $(V_A,V_B)$ satisfies the conditions in the definition of feasible cuts. Thus, $(V_A,V_B)$ is also a feasible $(A,B)$-cut, a contradiction to the fact that $I$ is infeasible. {\rm (ii)} First of all, it is clear that $(V_A,V_B)$ is still a feasible $(A\cup X, B\cup Y)$-cut. Then $I_{X,Y}$ admits feasible $(A\cup X, B\cup Y)$-cuts. Let $(V'_A,V'_B)$ be an arbitrary feasible $(A\cup X, B\cup Y)$-cut. The above proof for {\rm (i)} shows that $(V'_A,V'_B)$ is also a feasible $(A, B)$-cut. } \medskip Let $I=(G,A,B)$ be an instance. We use $Z_A$ and $Z_B$ to denote the sets of $A$-unsatisfied vertices and $B$-unsatisfied vertices, respectively, i.e., \[\mbox{ $Z_A\triangleq \{v\in V\mid \mathrm{deg}_G(v)> \mathrm{u}_A(v)\}$ and $Z_B\triangleq \{v\in V\mid \mathrm{deg}_G(v)> \mathrm{u}_B(v)\}$, }\] where $Z_A\cap Z_B$ may not be empty. We call $I$ an \emph{easy instance} if it holds that \\ 1. $Z_A\cup Z_B \subseteq A\cup B$, \\ 2. $N_G(Z_A\cap A) \subseteq A\cup B$, and\\ 3. $N_G(Z_B\cap B) \subseteq A\cup B$. \begin{lemma} \label{lem_easy} The feasibility of an easy instance of \textsc{bounded-degree cut} can be tested in $(n+m)^{O(1)}$ time. \end{lemma} \pf{Let $I= (G,A,B)$ be an easy instance. 
Note that the degree constraint on each vertex in $V\setminus (A\cup B) \subseteq V\setminus (Z_A\cup Z_B)$ is satisfied for any $(A,B)$-cut in $I$. First we test in $n^{O(1)}$ time whether there is a vertex $v\in Z_A\cap A$ with $\mathrm{deg}_G(v;A)> \mathrm{u}_A(v)$ (resp., $v\in Z_B\cap B$ with $\mathrm{deg}_G(v;B)> \mathrm{u}_B(v)$) or not. If so, then clearly $I$ admits no solution. Assume that no such vertices exist in $I$. Then each $v\in Z_A$ satisfies $v\not \in A$ or $v\in Z_A\cap A$. In the former case, it holds that $v\in B$ since $Z_A\subseteq A\cup B$, and we do not need to consider the degree constraint by $\mathrm{u}_A(v)$. In the latter case, it holds that $N_G(v)\subseteq A\cup B$ since $N_G(Z_A\cap A) \subseteq A\cup B$, whence $\mathrm{deg}_G(v;V_1)=\mathrm{deg}_G(v;A)\leq \mathrm{u}_A(v)$ for any $(A,B)$-cut $(V_1,V_2)$ in $I$, satisfying the degree constraint by $\mathrm{u}_A(v)$. Analogously no vertex in $Z_B$ violates the degree constraint by $\mathrm{u}_B(v)$ for any choice of $(A,B)$-cuts $(V_1,V_2)$ in $I$. Now $I$ admits a solution if and only if it has an $(A,B)$-cut with size at most $k$, which can be checked in $(n+m)^{O(1)}$ time by Lemma~\ref{lm:min_cut}. This proves the lemma. }\bigskip We will construct from a given instance at most $2^{18k}$ easy instances so that the original instance is feasible if and only if at least one of the easy instances is feasible. 
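The feasibility test from the proof above can be sketched as follows. The two degree pretests are as in the lemma; for brevity, the final minimum-cut computation is done by brute force rather than by the max-flow algorithm of Lemma~\ref{lm:min_cut}, so this sketch (with identifiers of our own choosing) only suits small examples and assumes its input is an easy instance.

```python
from itertools import combinations

def easy_feasible(V, E, A, B, uA, uB, k):
    # Degree of v, restricted to edges whose other endpoint lies in X.
    def deg(v, X):
        return sum(1 for (x, y) in E
                   if (x == v and y in X) or (y == v and x in X))

    ZA = {v for v in V if deg(v, V) > uA[v]}   # A-unsatisfied vertices
    ZB = {v for v in V if deg(v, V) > uB[v]}   # B-unsatisfied vertices
    # Pretests from the proof: an unsatisfied vertex fixed in A (resp. B)
    # must already satisfy its bound counting neighbors inside A (resp. B).
    if any(deg(v, A) > uA[v] for v in ZA & set(A)):
        return False
    if any(deg(v, B) > uB[v] for v in ZB & set(B)):
        return False
    # Remaining question: is there an (A,B)-cut of size at most k?
    free = [v for v in V if v not in A and v not in B]
    best = len(E) + 1
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            VA = set(A) | set(extra)
            best = min(best, sum(1 for (x, y) in E
                                 if (x in VA) != (y in VA)))
    return best <= k
```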
\section{Constructing Easy Instances}\label{sec:branching} For a minimal $(A,B)$-cut $\pi=(V_1,V_2)$ (not necessarily feasible) in a given instance $I=(G=(V,E),A,B)$, we define the following notation on vertex subsets: \\ ~~~ $Z_{Ai}\triangleq Z_A \cap V_i$ and $Z_{Bi}\triangleq Z_B \cap V_i$, $i=1,2$; \\ ~~~ $W_A\triangleq N_G(Z_{A1})$ and $W_B\triangleq N_G(Z_{B2})$; $W_{Ai}\triangleq W_A \cap V_i$, and $W_{Bi}\triangleq W_B \cap V_i$ , $i=1,2$;\\ ~~~ $A_{\pi}\triangleq A\cup Z_{A1} \cup Z_{B1} \cup W_{A1} \cup W_{B1}$ and $B_{\pi}\triangleq B\cup Z_{A2} \cup Z_{B2} \cup W_{A2} \cup W_{B2}$. \\ See Fig.~\ref{fi:degree-cut} for an illustration of these subsets. Observe that the resulting instance $(G,A_{\pi},B_{\pi})$ is an easy instance. By Lemma~\ref{lem_restriction}, the $(A,B)$-cut $\pi=(V_1,V_2)$ is feasible if and only if the corresponding instance $(G,A_{\pi},B_{\pi})$ is feasible. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.25]{degree-cut.pdf} \end{center} \caption{(a) An $(A,B)$-cut $\pi=(V_A,V_B)$ to $I$ and the partitions $\{ Z_{A1},Z_{A2}\}$ of $Z_A$ and $\{ Z_{B1},Z_{B2}\}$ of $Z_B$ by $\pi$, where possibly $Z_A\cap Z_B\neq\emptyset$, (b) The partitions $\{W_{B1},W_{B2}\}$ of $W_B=N_G(Z_{B2})$ and $\{W_{A1},W_{A2}\}$ of $W_A=N_G(Z_{A1})$ by $\pi$, where possibly $W_A\cap W_B\neq\emptyset$. } \label{fi:degree-cut} \end{figure} \subsection{Partitioning Unsatisfied Vertices} For a minimal $(A,B)$-cut $(V_1,V_2)$ to an instance $I$, let $Z_{A1}$ and $Z_{B2}$ be the subsets defined in the above. We observe that if the cut is feasible, then $$|Z_{A1}|,|Z_{B2}| \leq k$$ since each vertex in $Z_{A1}\cup Z_{B2}$ has at least one incident edge included in $E_G(V_1,V_2)$ so that the degree constraint on the vertex holds. 
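The set definitions above can be made concrete with a small sketch that, given one side $V_1$ of a cut $\pi$, computes $(A_\pi, B_\pi)$; all names are ours, the graph is a plain edge list, and self-loops are ignored.

```python
def easy_sets(V, E, A, B, uA, uB, V1):
    # Compute Z_{Ai}, Z_{Bi}, W_{Ai}, W_{Bi} and (A_pi, B_pi) for the
    # cut pi = (V1, V2), following the definitions in the text.
    V1, V2 = set(V1), set(V) - set(V1)

    def deg(v):                       # degree in G
        return sum(1 for (x, y) in E if v in (x, y))

    def nbrs(X):                      # N_G(X): neighbors outside X
        out = set()
        for (x, y) in E:
            if x in X and y not in X:
                out.add(y)
            if y in X and x not in X:
                out.add(x)
        return out

    ZA = {v for v in V if deg(v) > uA[v]}     # A-unsatisfied vertices
    ZB = {v for v in V if deg(v) > uB[v]}     # B-unsatisfied vertices
    ZA1, ZA2 = ZA & V1, ZA & V2
    ZB1, ZB2 = ZB & V1, ZB & V2
    WA, WB = nbrs(ZA1), nbrs(ZB2)
    A_pi = set(A) | ZA1 | ZB1 | (WA & V1) | (WB & V1)
    B_pi = set(B) | ZA2 | ZB2 | (WA & V2) | (WB & V2)
    return A_pi, B_pi
```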
By applying Lemma~\ref{lem_candidate1} to $(A,B,C=Z_A,k,\ell=k)$, we can construct in $2^{6k} (n+m)^{O(1)}$ time a family $\mathcal{X}_1$ of at most $2^{6k}$ subsets of $Z_A$ such that $\mathcal{X}_1$ contains the set $Z_{A1} $ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in the instance $I=(G,A,B)$. Symmetrically it takes $2^{6k} (n+m)^{O(1)}$ time to find a family $\mathcal{X}_2$ of at most $2^{6k}$ subsets of $Z_B$ such that $\mathcal{X}_2$ contains the set $Z_{B2}$ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in the instance $I=(G,A,B)$. Then the set $\mathcal{X}_{1,2}$ of all pairs $(X_1,X_2)$ of disjoint sets $X_i\in \mathcal{X}_i$, $i=1,2$, contains the pair $(Z_{A1},Z_{B2})$ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in $I$. By noting that $|\mathcal{X}_{1,2}|\leq 2^{6k}2^{6k}= 2^{12k}$, we obtain the following. \begin{lemma} \label{lem_step1} Given an instance $I=(G, A,B)$, one can construct in $ 2^{12k}(n+m)^{O(1)}$ time at most $2^{12k}$ new instances $I'=(G, A',B')$ with $Z_A\cup Z_B \subseteq A' \cup B'$, one of which is equal to $(G, A\cup Z_{A1}\cup Z_{B1}, B \cup Z_{A2} \cup Z_{B2})$ for each feasible $(A,B)$-cut $(V_1,V_2)$ to $I$. \end{lemma} \subsection{Partitioning Neighbors of Unsatisfied Vertices} For a minimal $(A,B)$-cut $(V_1,V_2)$ to an instance $I$, let $W_{A2}$ and $W_{B1}$ be the subsets defined in the above. We observe that if the cut is feasible, then $$ |W_{B1}| , |W_{A2}| \leq k $$ since each of $|N_G(Z_{B2})\cap V_1|$ and $|N_G(Z_{A1})\cap V_2|$ is at most $|E_G(V_1,V_2)|\leq k$ to the feasible $(A,B)$-cut $(V_1,V_2)$. By applying Lemma~\ref{lem_candidate2} to $(A\cup Z_{A1}\cup Z_{B1}, B\cup Z_{A2}\cup Z_{B2}, B'=Z_{B2}, k)$, we can construct in $2^{3k} (n+m)^{O(1)}$ time a family $\mathcal{Y}_1$ of at most $2^{3k}$ subsets of $N_G(Z_{B2})$ such that $\mathcal{Y}_1$ contains the set $W_{B1}=N_G(Z_{B2})\cap V_1$ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in the instance $I=(G,A,B)$. 
Symmetrically it takes $2^{3k} (n+m)^{O(1)}$ time to find a family $\mathcal{Y}_2$ of at most $2^{3k}$ subsets of $N_G(Z_{A1})$ such that $\mathcal{Y}_2$ contains the set $W_{A2}=N_G(Z_{A1})\cap V_2$ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in $I$. Then the set $\mathcal{Y}_{1,2}$ of all pairs $(Y_1,Y_2)$ of disjoint sets $Y_i\in \mathcal{Y}_i$, $i=1,2$, contains the pair $(W_{B1},W_{A2})$ defined to each feasible $(A,B)$-cut $(V_1,V_2)$ in the instance $I=(G,A,B)$. By noting that $|\mathcal{Y}_{1,2}|\leq 2^{6k}$, we obtain the following. \begin{lemma} \label{lem_step2} Given an instance $I=(G, A,B)$ and the subsets $Z_{A1}$ and $Z_{B2}$ defined to a feasible $(A,B)$-cut $(V_1,V_2)$ in $I$, one can construct in $ 2^{6k} (n+m)^{O(1)}$ time at most $2^{6k}$ new easy instances $I'=(G, A',B')$, one of which is equal to $(G, A_{\pi}, B_{\pi})$ defined to the feasible $(A,B)$-cut $\pi=(V_1,V_2)$. \end{lemma} By Lemmas~\ref{lem_step1} and \ref{lem_step2}, we obtain the following. \begin{lemma} \label{lem_all_steps} Given an instance $I=(G, A,B)$, one can construct in $ 2^{18k} (n+m)^{O(1)}$ time at most $2^{18k}$ new easy instances $I'=(G, A',B')$, one of which is equal to $(G, A_{\pi}, B_{\pi})$ for each feasible $(A,B)$-cut $\pi=(V_1,V_2)$ to $I$. \end{lemma} This and Lemma~\ref{lem_easy} imply Theorem~\ref{th:mainresult}. \section{Concluding Remarks}\label{sec:conclusion} Cut and partition problems are important problems that have been extensively studied from the viewpoint of FPT algorithms. In this paper, we study a cut problem with additional constraints on the vertex degrees of the two parts of the cut and design the first FPT algorithm for this problem. To obtain the FPT algorithm, we develop two new lemmas that are based on important cuts. Important cuts show some properties of bounded-size cuts, while the new lemmas further reveal some properties of vertex subsets of one part of a bounded-size cut. 
We believe these lemmas can be used to design FPT algorithms for more problems. In \textsc{bounded-degree cut}, we check whether there is a minimal $(A,B)$-cut satisfying both the degree constraint and the size constraint of at most $k$. One may also consider the \textsc{bounded-degree bipartition} problem, which asks whether there is an $(A,B)$-cut of size at most $k$ satisfying the degree constraint, without the requirement of minimality. Note that some $(A,B)$-cuts of size at most $k$ satisfying the degree constraint may not be minimal. Such cuts are not solutions to \textsc{bounded-degree cut}, but are solutions to \textsc{bounded-degree bipartition}. Solving \textsc{bounded-degree bipartition} requires additional techniques, which will be introduced in our future work.
\section{Introduction} Vision is crucial for everyday activities such as driving, watching television, reading and social interaction, and visual impairment can be a real handicap. Even a slight visual loss can considerably affect our quality of life, cause depression and, in older adults, lead to accidents, injuries and falls \cite{1,2}. Blindness is the final stage of many eye diseases; according to previous studies, the most common causes of visual impairment are cataract, macular degeneration, glaucoma and diabetic retinopathy \cite{3,4}. \par Color fundus photography is a 2D imaging modality for the diagnosis of retinal diseases. The 3D structure of the eye provides a considerable amount of crucial information (such as the elevation of different regions) for ophthalmologists, which is unavailable in 2D fundus images. Therefore, being able to infer this information from a single 2D image can be helpful. Furthermore, the reconstructed heightmap offers clinicians another means to view the eye structure, which may support better and more accurate diagnosis \cite{59,60}. Optical Coherence Tomography (OCT) \cite{61} is an expensive but vital tool for evaluating the retinal structure, providing ophthalmologists with valuable information that enables them to diagnose most macular diseases. Nevertheless, owing to the cost of this system, OCT devices are not ubiquitous, and fundus images remain the most common screening modality. \par \begin{figure}[h] \centering \includegraphics[clip,trim=5.5cm 19.0cm 3cm 1cm, width=1.0\linewidth,scale=0.8]{fig1/fig1.pdf} \caption{The left and right images represent the correspondence between a fundus image and its heightmap. As can be seen, each pixel's color value in the image on the right indicates a height according to the colorbar, which ranges from 0 $\mu m$ to 500 $\mu m$. 
Note that all numbers are in micrometers.} \label{fig1} \end{figure} Shape from shading (SFS) \cite{70} is the only method previously applied to the reconstruction of the height of a single fundus image \cite{60}. However, this method suffers from drawbacks that limit its usage in this particular task. In fact, the success of SFS depends entirely on predicting the position of the light source and on assumptions about the surface that do not hold for the retina. Furthermore, disparity map estimation, one of the common methods for 3D reconstruction, has also been applied to this problem \cite{62}. Nonetheless, it depends entirely on the availability of stereo image pairs, which is not practical. Hence, devising a method to automatically generate an accurate heightmap image from a given fundus image is crucial. \par In recent years, with the advent of Conditional Generative Adversarial Networks (cGANs) \cite{22,23}, many researchers have used this methodology for image generation and translation tasks \cite{24,39,40,76,77,80}. Medical image analysis has also benefited greatly from these powerful models: researchers have used them for translating between medical imaging modalities, such as between CT and PET images \cite{74,36}, for denoising and motion correction, such as denoising CT images \cite{35} or correcting motion in MR images \cite{36}, and for segmenting medical images \cite{37,78,79}. Most of these methods benefit from the U-Net architecture \cite{25}, which was first proposed for image segmentation, and its extension U-Net++ \cite{21}. Perceptual loss \cite{39,40} is another major component of successful methods; it measures the difference between two images using high-level features extracted from different layers of a Deep Neural Network (DNN). 
\par Motivated by the promising results of cGANs on medical image analysis tasks, in this work we apply this methodology to generate a heightmap image of the macula area from a given color fundus image. Height information is among the crucial data that OCT devices provide to ophthalmologists and is missing in color fundus images due to their 2D nature. Hence, by extracting such information from a fundus image alone, we can ease the diagnosis and management of retinal diseases involving macular thickness changes. \par Considering Figure \ref{fig1}, our problem can be seen as an image translation task in which we want to predict a color image encoding heights from a fundus image of the macula area, so cGANs can be utilized. That is to say, each pixel in the right image of Figure \ref{fig1} has a color value representing a height from 0 $\mu m$ to 500 $\mu m$, and by predicting the red, green and blue color values for each pixel of the left (fundus) image, we can predict its heightmap. The color bar below Figure \ref{fig1} shows the assignment of color values to heights. \par In this paper, we use a stack of three U-Nets as our generator network, averaging their outputs for deep supervision. Furthermore, to avoid the problems of traditional GANs, we use the Least Squares adversarial loss \cite{65} along with perceptual loss \cite{39} and an L2 pixel reconstruction loss. For the discriminator network, we use an image-level discriminator that classifies the whole image as real or fake. To the best of our knowledge, this is the first research paper on predicting the heightmap of the macula area from fundus images using DNNs. We evaluate our approach qualitatively and quantitatively on our dataset and compare the results with state-of-the-art methods in image translation and medical image translation. 
Furthermore, we studied the applicability of our method to real diagnostic cases, which showed that reconstructed heightmaps can provide additional information to ophthalmologists and can be used for the analysis of the macula region. \par Our main contributions can be listed as follows: \begin{itemize} \item Motivated by deeply supervised networks \cite{69}, we propose a novel deep architecture for the generator network based on U-Net \cite{25} and CasNet \cite{36}, consisting of a number of connected U-Nets whose individual outputs are utilized for deep supervision. \item We propose the first method for the reconstruction of the heightmap of the macula from a fundus image based on DNNs. \item Finally, the subjective quality of our reconstructed heightmaps was assessed from a medical perspective by two experienced ophthalmologists. \end{itemize} \section{Methods} \subsection{Preprocessing} A DNN can learn from unpreprocessed image data, but it learns more easily and efficiently with appropriate preprocessing of the input. Hence, in this paper, we first apply Contrast Limited Adaptive Histogram Equalization (CLAHE) \cite{64}, which enhances the contrast between foreground and background. Afterward, we normalize the input images (division by 255) to prepare them for feeding into the network. The impact of preprocessing on the input fundus images is depicted in Figure \ref{fig2}. 
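A minimal sketch of this preprocessing step follows, with plain global histogram equalization standing in for CLAHE (CLAHE additionally works on image tiles with a clip limit and would in practice come from a library such as OpenCV), followed by the division by 255; the function name and list-of-rows interface are ours.

```python
def equalize_and_normalize(img, levels=256):
    # img: one 8-bit channel as a list of rows of ints in [0, levels).
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0                  # cumulative histogram
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                    # constant image: nothing to equalize
        lut = list(range(levels))
    else:                               # stretch the CDF over [0, levels - 1]
        lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
               for c in cdf]
    # equalize, then normalize to [0, 1] by dividing by 255
    return [[lut[p] / 255.0 for p in row] for row in img]
```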
\begin{figure}[h] \captionsetup[subfigure]{labelformat=empty} \subfloat[CLAHE]{ \begin{minipage}{ 0.23\textwidth} \includegraphics[width=1\textwidth]{fig2/1_CLAHE.jpg} \end{minipage}}\ \subfloat[Normal]{\begin{minipage}{ 0.23\textwidth} \includegraphics[width=1\textwidth]{fig2/1_NORMAL.jpg} \end{minipage}}\hspace{0.095cm} \setcounter{subfigure}{0} \subfloat[CLAHE]{ \begin{minipage}{ 0.23\textwidth} \includegraphics[width=1\textwidth]{fig2/2_CLAHE.jpg} \end{minipage}}\ \subfloat[Normal]{\begin{minipage}{ 0.23\textwidth} \includegraphics[width=1\textwidth]{fig2/2_NORMAL.jpg} \end{minipage}} \caption{The effect of CLAHE preprocessing on the quality of fundus images. As can be seen, the details in the preprocessed images are clearer, which positively affects the learning procedure.} \label{fig2} \end{figure} \subsection{Network structure} In our proposed cGAN setting, the input to our generator is a 128$\times$128$\times$3 image of the macula area on a fundus image, and the generator generates an image of the same size and depth such that each pixel's color indicates a height, as depicted in Figure \ref{fig1}. The discriminator takes this image and outputs a probability between 0 and 1 indicating the similarity of this image to a real heightmap image. \par \begin{figure*} \centering \captionsetup[subfigure]{justification=justified,singlelinecheck=false} \includegraphics[clip,trim=0.06cm 8cm 35.0cm 0.1cm, scale=0.6]{fig3/fig3.pdf} \caption{The architecture of the generator and discriminator in our proposed method. As can be seen, the generator consists of a series of connected U-Nets whose outputs we use for deep supervision (red arrows). Moreover, the discriminator network consists of Convolution-BatchNorm-LeakyReLU layers with a final fully connected layer. We utilized four convolutional layers to compute the perceptual loss. The output of the network is a probability used to distinguish between real and fake images.
} \label{fig3} \end{figure*} Our proposed generator architecture consists of three stacked U-Nets. Motivated by deeply supervised nets \cite{69}, which state that discriminative features in deep layers contribute to better performance, we used the outputs of the first two U-Nets for deep supervision. In fact, similar to the U-Net++ architecture \cite{21}, which uses the outputs of the upsampling layers for deep supervision, we averaged the outputs of all U-Nets and used the result as the final output of the generator network. By doing so, our network tries to learn a meaningful representation in these deep layers, which directly contributes to the final outcome. Moreover, since the amount of detail is of significance in this work, we decided to use the average operator instead of the max operator for the final output \cite{21}. Another advantage of this architecture is that, by using a stack of U-Nets, each stage can add its own level of detail to the final outcome. Furthermore, although our network is deeper than a normal U-Net architecture, owing to the skip-connections and deep supervision involved in the architecture, the vanishing gradient problem does not occur and the loss flows easily to earlier layers through backpropagation. The generator architecture is depicted in the first row of Figure \ref{fig3}. \par Regarding the discriminator, the judgment can be made at the image level as well as the patch level. That is to say, we can judge the quality of the entire image with our discriminator (ImageGAN) or consider its patches (PatchGAN). Since a powerful discriminator is the key to successful training with GANs and strongly influences their output \cite{22,27}, we explored both of these methods and opted for the image-level discriminator due to its better quality images.
Furthermore, as can be seen in the second row of Figure \ref{fig3}, we used the $1^{st}$, $4^{th}$, $6^{th}$ and $8^{th}$ layers of the discriminator network to compute the perceptual loss \cite{40} between the generated image and the ground-truth image as a supervisory signal, with the aim of better output. \subsection{Objective functions} Our final loss function is composed of three parts, which are discussed in this section. \subsubsection{Least-squares adversarial loss} cGANs are generative models that learn a mapping from an observed image $x$ and a random noise vector $z$ to $\hat{y}$, using a generator $G: \{x,z\} \rightarrow \hat{y}$. The discriminator $D$ then aims to classify the concatenation of the source image $x$ and its corresponding ground-truth image $y$ as real, $D(x,y) = 1$, while classifying $x$ and the translated image $\hat{y}$ as fake, $D(x,\hat{y}) = 0$. \par Despite performing well in many tasks, GANs suffer from problems such as mode collapse and an unstable training procedure \cite{22}. Therefore, in this work, to avoid such problems, we adopted Least Squares Generative Adversarial Networks (LSGANs) \cite{65}. The idea of LSGAN is that even samples on the correct side of the decision boundary can provide signals for training. Hence, we adopted the least-squares loss function instead of the traditional cross-entropy loss used in a normal GAN, to penalize data that lie on the correct side of the decision boundary but very far from it. Using this simple yet effective idea, we can provide gradients even for samples that are correctly classified by the discriminator.
The loss function of LSGAN for both the discriminator and the generator can be written as follows: \begin{flalign} &\underset{D}{\min}\ L_{LSGAN}(D) = \frac{1}{2} \mathbb{E}_{x,y} \Big[\big(D(x,y) - b\big) ^2\Big] \ + \nonumber \\ &\qquad \qquad \qquad \nonumber \frac{1}{2} \mathbb{E}_{x,z}\Big[\Big(D\big(x,G(x,z) \big)-a\Big)^2\Big] \nonumber \\ &\underset{G}{\min}\ L_{LSGAN}(G) = \frac{1}{2} \mathbb{E}_{x,z}\Big[\Big(D\big(x,G(x,z) \big)-c\Big)^2\Big]&& \end{flalign} This loss function operates directly on the output logits, where $a = 0$ and $b = c = 1$. \subsubsection{Pixel reconstruction loss} Image-to-image translation tasks that rely solely on the adversarial loss do not produce consistent results \cite{36}. Therefore, we also used a pixel reconstruction loss, but we opted for L2-loss rather than the widely used L1-loss, since it performed better at reconstructing details in this specific task. The equation for L2-loss is as follows: \begin{equation} \begin{split} L_{L2}(G) = \mathbb{E}_{x,y,z}\big[\parallel y-G(x,z)\parallel^{2}_{2}\big] \label{eq3} \end{split} \end{equation} \subsubsection{Perceptual loss} Despite producing plausible results using only the two aforementioned loss functions, the generated images are blurry \cite{36}; since small details are of great significance, especially in medical diagnosis, we used a perceptual loss \cite{40} to improve the final result. As a matter of fact, using only L2-loss or L1-loss results in outputs that maintain the global structure but show blurriness and distortions \cite{40}. Furthermore, per-pixel losses fail to capture perceptual differences between the generated and ground-truth images. For instance, for two identical images shifted by some small offset from each other, the per-pixel loss value may vary considerably, even though the images are quite similar \cite{39}.
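As a minimal numerical sketch of the adversarial and pixel reconstruction objectives above (plain numpy rather than a deep-learning framework, with $a = 0$ and $b = c = 1$ as in the text; the discriminator outputs are taken as precomputed scores):

```python
import numpy as np

A, B, C = 0.0, 1.0, 1.0  # labels a (fake), b (real) and generator target c

def lsgan_d_loss(d_real, d_fake):
    # Discriminator: push D(x, y) toward b and D(x, G(x, z)) toward a.
    return 0.5 * np.mean((d_real - B) ** 2) + 0.5 * np.mean((d_fake - A) ** 2)

def lsgan_g_loss(d_fake):
    # Generator: push D(x, G(x, z)) toward c; unlike cross-entropy, this
    # still yields gradients for samples D already classifies correctly.
    return 0.5 * np.mean((d_fake - C) ** 2)

def l2_loss(y_true, y_gen):
    # Pixel reconstruction loss: mean squared L2 norm over the batch.
    diff = (y_true - y_gen).reshape(y_true.shape[0], -1)
    return np.mean(np.sum(diff ** 2, axis=1))
```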
However, by using high-level features extracted from the layers of a discriminator, we can capture those discrepancies and measure image similarity more robustly \cite{39}. In our work, since the discriminator network also has this capability of perceiving the content of images and the differences between them, and since networks pre-trained on other tasks may perform poorly on unrelated tasks, we used the hidden layers of the discriminator network \cite{40,36} to extract features, as illustrated in the second row of Figure \ref{fig3}. The mean absolute error at the $i^{th}$ hidden layer between the generated image and the ground-truth image is then calculated as: \begin{equation} \begin{split} P_i\big(G(x,z),y\big) = \frac{1}{w_i h_i d_i} \parallel D_i\big(x,y\big) - D_i\big(x,G(x,z)\big) \parallel _{1} \label{eq4} \end{split} \end{equation} where $w_i$, $h_i$ and $d_i$ denote the width, height and depth of the $i^{th}$ hidden layer, respectively, and $D_i$ denotes the output of the $i^{th}$ layer of the discriminator network. Finally, the perceptual loss can be formulated as: \begin{equation} \begin{split} L_{perceptual} = \sum_{i=0}^{L}\lambda_i P_i\big(G(x,z),y\big) \label{eq5} \end{split} \end{equation} where $\lambda_i$ in Equation \ref{eq5} tunes the contribution of the $i^{th}$ utilized hidden layer to the final loss. \par Finally, our complete loss function for the generator is as follows: \begin{equation} \begin{split} L = \alpha_{1}L_{perceptual} + \alpha_{2}L_{L2} + \alpha_{3}L_{LSGAN} \label{eq6} \end{split} \end{equation} where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are hyperparameters that balance the contribution of each of the different losses. Note that we also used the perceptual loss, with equal contribution, alongside the traditional cGAN loss when training the discriminator. \section{Experiments} \subsection{Dataset} The dataset was gathered from a TopCon DRI OCT Triton device at Negah Eye Hospital.
We cropped the macula part of the fundus and heightmap images from the 3D macula report generated by the device to create image pairs. Our dataset contains 3407 color fundus-heightmap image pairs. Since the number of images in our dataset was insufficient, we used data augmentation for better generalization. Nevertheless, because we are dealing with medical images, we could not rotate images: rotating an image by, for example, 90$^\circ$ would place the vessels in a vertical position, which does not occur in fundus imaging. Moreover, changing the brightness is also not permissible, since it would change the standard brightness of a fundus image. Hence, the only augmentation that we could apply was to flip images in 3 different ways, generating 4 different samples from every image. Consequently, we had 13,628 images, of which we used 80$\%$ for training, 10$\%$ for validation and 10$\%$ for testing. Some examples from the utilized dataset are illustrated in Figure \ref{fig4}. \begin{figure}[h] \centering \subfloat{\includegraphics[width=0.24\linewidth]{fig4/1.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/1_GT.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/2.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/2_GT.png}}\ \vfill \subfloat{\includegraphics[width=0.24\linewidth]{fig4/3.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/3_GT.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/4.png}}\ \subfloat{\includegraphics[width=0.24\linewidth]{fig4/4_GT.png}}\ \caption{Macula areas on fundus images and their corresponding heightmaps in our dataset.} \label{fig4} \end{figure} \subsection{Experimental setup} We used TensorFlow 2.0 \cite{66} to implement our network. We used the Adam optimizer \cite{73} with an initial learning rate of $10^{-3}$ and a step decay of $0.9$ per $30$ steps. Moreover, we used a batch size of 8 and trained for 250 epochs to converge.
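The learning-rate schedule above (initial rate $10^{-3}$, multiplied by $0.9$ every $30$ steps) can be sketched as follows; reading the decay as a staircase schedule is our assumption:

```python
def step_decay_lr(step, initial_lr=1e-3, drop=0.9, steps_per_drop=30):
    # Staircase decay: multiply the rate by `drop` once per `steps_per_drop`
    # optimizer steps, matching the setup used with Adam above.
    return initial_lr * drop ** (step // steps_per_drop)
```

In TensorFlow 2 the same schedule is available as `tf.keras.optimizers.schedules.ExponentialDecay(1e-3, decay_steps=30, decay_rate=0.9, staircase=True)`.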
Additionally, we set $\lambda_1= 5.0$, $\lambda_2 = 1.0$, $\lambda_3 = 5.0$ and $\lambda_4 = 5.0$ in Equation \ref{eq5} by trial and error, considering the contribution of each of them as discussed in \cite{40,39}. Since we are working in a very high-dimensional parameter space, convergence to the optimal weights is difficult and starting from a random point does not work well \cite{58}. As a result, in order to ease the training procedure and convergence, we employed a step-by-step training schema. That is to say, we first trained the first U-Net completely, then added the next U-Net and trained the stack of 2 U-Nets with deep supervision, and finally trained the entire network. Note that we loaded the weights of the previous network when training the new one. \subsection{Evaluation metrics} In this work, we utilized a variety of metrics to evaluate our final outcomes quantitatively. We measured the quality of the final image using the Structural Similarity Index (SSIM) \cite{67}, Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). Nevertheless, these measures are insufficient for assessing structured outputs such as images, as they assume pixel-wise independence \cite{68}. Consequently, we also used the Learned Perceptual Image Patch Similarity (LPIPS) \cite{68}, which outperforms other measures in comparing the perceptual quality of images.
We used the features extracted from the $1^{st}$, $4^{th}$, $6^{th}$ and $8^{th}$ layers of the discriminator network and calculated the difference between $y$, the generated heightmap, and $ \hat{y}$, the ground-truth heightmap, for a given input $x$ using the equation below, where all parameters are as in Equation \ref{eq4}: \begin{equation} \begin{split} d(y,\hat{y},x) = \sum_{l = 0} ^{n}\frac{1}{w_l h_l d_l} \parallel D_l(x,y) - D_l(x,\hat{y})\parallel ^{2}_{2} \label{eq7} \end{split} \end{equation} \subsection{Analysis of generator architecture} In this part, we explore different numbers of stacked U-Nets in the generator architecture to find the optimal one. We set $\alpha_1 = 100$, $\alpha_2 = 1$ and $\alpha_3 = 50$ in Equation \ref{eq6} and used ImageGAN for our discriminator. The quantitative comparison is given in Figure \ref{fig5}. As can be seen, stacking three U-Nets resulted in higher values of SSIM and PSNR and lower values of MSE and LPIPS. Furthermore, the qualitative comparison depicted in Figure \ref{fig6} also supports our claim that a stack of three U-Nets is the best choice. As a matter of fact, the generator with three U-Nets in this figure did well at predicting the full shape of the red region as well as the correct position and full shape of the intense red spots. Additionally, it appears that by adding more U-Nets to the structure, the results become blurrier and details begin to vanish. Therefore, three U-Nets is the optimal number that preserves fine details and produces plausible outcomes.
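The stacking-and-averaging scheme analyzed here can be sketched as follows (the elements of `unets` are placeholders for full U-Net subnetworks; only the chaining and the deep-supervision averaging are illustrated):

```python
import numpy as np

def stacked_generator(x, unets):
    # Chain the U-Nets so each stage refines the previous one, keep every
    # intermediate output, and average them; the average is the generator
    # output, so every stage is supervised directly (deep supervision).
    outputs, current = [], x
    for unet in unets:
        current = unet(current)
        outputs.append(current)
    return np.mean(outputs, axis=0)
```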
\begin{figure*} \subfloat[SSIM and PSNR]{ \begin{minipage}{ 0.48\textwidth} \includegraphics[clip,trim=1.4cm 8.0cm 1cm 1.5cm ,width=1\textwidth]{fig5/Fig5-1.pdf} \end{minipage}}\ \subfloat[MSE and LPIPS]{\begin{minipage}{ 0.48\textwidth} \includegraphics[clip,trim=1.4cm 8.0cm 1cm 1.5cm ,width=1\textwidth]{fig5/Fig5-2.pdf} \end{minipage}} \caption{Quantitative comparison between different numbers of stacked U-Nets in the generator architecture.} \label{fig5} \end{figure*} \begin{figure*} \subfloat{ \begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/1-1UNET.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/1-2UNET.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/1-3UNET.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/1-4UNET.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/1-5UNET.png} \end{minipage}}\hspace{0.095cm} \subfloat{\begin{minipage}{ 0.16\textwidth}\hspace{0.095cm} \includegraphics[width=1\textwidth]{fig6/1-GT.png} \end{minipage}} \setcounter{subfigure}{0}% \subfloat[1 U-Net]{ \begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/2-1UNET.png} \end{minipage}}\ \subfloat[2 U-Net]{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/2-2UNET.png} \end{minipage}}\ \subfloat[3 U-Net]{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/2-3UNET.png} \end{minipage}}\ \subfloat[4 U-Net]{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/2-4UNET.png} \end{minipage}}\ \subfloat[5 U-Net]{\begin{minipage}{ 0.16\textwidth} \includegraphics[width=1\textwidth]{fig6/2-5UNET.png} \end{minipage}}\hspace{0.095cm} \subfloat[Ground-truth]{\begin{minipage}{ 0.16\textwidth}\hspace{0.095cm} \includegraphics[width=1\textwidth]{fig6/2-GT.png}
\end{minipage}} \caption{Qualitative comparison in terms of different numbers of stacked U-Nets in the generator structure.} \label{fig6} \end{figure*} \subsection{Effectiveness of deep supervision} In this section, we explore the effectiveness of using deep supervision. For this experiment, we trained our network with and without deep supervision. As can be seen in Figure \ref{fig7}, the outputs of the deep layers are meaningful when deep supervision is used. In fact, by averaging the outputs of the individual stages, we force the network to output plausible images from these stages, which contributes to the higher quality of the network with deep supervision. \par As depicted in Figure \ref{fig7}, the output of the first U-Net is responsible for the overall brightness of the output image, besides vaguely representing the blue and red parts. The output of the second U-Net is mostly used for abnormal regions that are overlaid on the green parts; in fact, it is responsible for detecting higher or lower elevated parts in the macula region. Finally, the third stage is responsible for adding fine details to the output. If the given image does not contain any abnormalities, the outputs of the second and third deep stages are mostly black (e.g. Figure \ref{fig7}, third example). However, considering the model without deep supervision, there is clearly no meaningful interpretation of the images output by the first and second stages, which contributed to the lower overall quality of this model. The quantitative comparison in Table \ref{tb1} also supports our point that deep supervision contributes to finer output. As can be seen, our model with deep supervision achieved a better score on all evaluation metrics.
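The pixel-wise metrics in Table \ref{tb1} can be computed as below (a numpy sketch for images scaled to $[0,1]$; SSIM and LPIPS require structural and learned feature comparisons, respectively, and are omitted here):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error over all pixels; lower is better.
    return float(np.mean((y_true - y_pred) ** 2))

def psnr(y_true, y_pred, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher is better.
    err = mse(y_true, y_pred)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```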
\begin{figure*} \captionsetup[sub]{labelformat=empty,justification=raggedleft,singlelinecheck=false} \subfloat{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w$}}}}% \includegraphics[width=1\textwidth]{fig7/498_0.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/498_1.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/498_2.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/498_out.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \vfill \vspace{-0.3cm} \subfloat{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w/o$}}}}% \includegraphics[width=1\textwidth]{fig7/498_0_2_.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/498_1_2_.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth}\ \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/498_out_2_.png} \end{minipage}}\hspace{0.02cm} \subfloat{\begin{minipage}{ 0.19\textwidth} \vspace*{-2.6cm}\includegraphics[width=1\textwidth]{fig7/498_GT.png} \end{minipage}} \vfill \vspace{-0.2cm} \subfloat{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w$}}}}% \includegraphics[width=1\textwidth]{fig7/900_0.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/900_1.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/900_2.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} 
\includegraphics[width=1\textwidth]{fig7/900_out.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \vfill \vspace{-0.3cm} \subfloat{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w/o$}}}}% \includegraphics[width=1\textwidth]{fig7/900_0_2_.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/900_1_2_.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth}\ \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/900_out_2_.png} \end{minipage}}\hspace{0.02cm} \subfloat{\begin{minipage}{ 0.19\textwidth} \vspace*{-2.6cm}\includegraphics[width=1\textwidth]{fig7/900_GT.png} \end{minipage}} \vfill \vspace{-0.2cm} \subfloat{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w$}}}}% \includegraphics[width=1\textwidth]{fig7/10334_0.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/10334_1.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/10334_2.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/10334_out.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \vfill \vspace{-0.3cm} \setcounter{subfigure}{0}% \subfloat[First layer]{ \begin{minipage}{ 0.19\textwidth} \makebox[0pt][r]{\makebox[20pt]{\raisebox{31pt}{\rotatebox[origin=c]{90}{$w/o$}}}}% \includegraphics[width=1\textwidth]{fig7/10334_0_2_.png} \end{minipage}}\ \subfloat[Second layer]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/10334_1_2_.png} \end{minipage}}\ \subfloat[Third 
layer]{\begin{minipage}{ 0.19\textwidth}\ \includegraphics[width=1\textwidth]{fig7/blank.png} \end{minipage}}\ \subfloat[Output]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth]{fig7/10334_out_2_.png} \end{minipage}}\hspace{0.02cm} \captionsetup[subfigure]{captionskip=1.06cm,margin={-0.7cm,-0.1cm}} \subfloat[Ground-truth]{\begin{minipage}{ 0.19\textwidth} \vspace*{-2.6cm}\includegraphics[width=1\textwidth]{fig7/10334_GT.png} \end{minipage}} \caption{The effect of deep supervision on the final output. As can be seen, the outputs from the first and second layers of the model with deep supervision (w) are meaningful and represent the parts on which each layer focused. On the other hand, without deep supervision (w/o), the images generated from the deep layers do not contain useful information, causing lower output quality.} \label{fig7} \end{figure*} \begin{table}[!ht] \centering \begin{tabular}{l|l|l|l|l} & SSIM & LPIPS & MSE & PSNR(dB) \\ \hline w supervision & \textbf{0.8823} & \textbf{1.81e-05} & \textbf{0.0033} & \textbf{34.6733} \\ \hline w/o supervision & 0.7570 & 2.54e-04 & 0.0059 & 27.2784 \end{tabular} \caption{Quantitative comparison between the models trained with and without deep supervision.} \label{tb1} \end{table} \subsection{L1-Loss vs L2-Loss} Even though in most papers L1-Loss is more common than L2-Loss as the pixel reconstruction loss \cite{75,24,36}, in this work we chose L2-Loss owing to the emphasis that it puts on large differences between the generated image and the ground truth. As a matter of fact, since the difference in L2-Loss is raised to the power of two, small differences become minuscule and negligible and the focus falls on large differences. This behavior is perfectly suitable for this problem, since our main goal is to predict the regions that have a red or blue color, and it is acceptable to have inaccurate or blurry green areas or missed vessels.
This is because those red or blue regions contain significant information for diagnosis, as they correspond to regions with elevation changes, which are important in the diagnosis of many retinal diseases such as PED that cause different parts of the macula to swell. Our claim is supported by our experiment in which we compared the results of L1-Loss and L2-Loss. Note that in this experiment L1-Loss and L2-Loss were given equal contributions, alongside the LSGAN and perceptual losses. As can be seen in Table \ref{tb2}, L2-Loss performed better on all metrics, and the difference is considerable. \begin{table}[!ht] \centering \begin{tabular}{l|l|l|l|l} & SSIM & LPIPS & MSE & PSNR(dB) \\ \hline L1-Loss & 0.8721 & 3.53e-04 & 0.0072 & 33.8351 \\ \hline L2-Loss & \textbf{0.8823} & \textbf{1.81e-05} & \textbf{0.0033} & \textbf{34.6733} \end{tabular} \caption{Quantitative comparison between L2-Loss and L1-Loss.} \label{tb2} \end{table} Furthermore, regarding the qualitative comparison in Figure \ref{fig8}, even though the global structure of the images considering the green areas is roughly the same, L2-Loss performed better at predicting the blue regions, which is crucial for diagnosis.
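The squaring argument above can be illustrated numerically: with many small residuals (a slightly blurry green background) and one large residual (a missed red or blue spot), the large residual dominates the L2 objective far more than the L1 objective. The numbers below are a toy illustration, not taken from our data:

```python
import numpy as np

# 100 small per-pixel errors and a single large one.
residuals = np.array([0.01] * 100 + [1.0])

l1_total = np.sum(np.abs(residuals))   # 2.0
l2_total = np.sum(residuals ** 2)      # 1.01

# Share of the total loss contributed by the single large residual.
l1_share = 1.0 / l1_total              # 0.50
l2_share = 1.0 / l2_total              # ~0.99: L2 focuses on the big error
```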
\begin{figure}[h] \subfloat{ \begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/1-GT.png} \end{minipage}}\hspace{0.05cm} \subfloat{\begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/1-L1.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/1-L2.png} \end{minipage}} \setcounter{subfigure}{0}% \subfloat[Ground-truth]{ \begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/2-GT.png} \end{minipage}}\hspace{0.05cm} \subfloat[L1]{\begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/2-L1.png} \end{minipage}}\ \subfloat[L2]{\begin{minipage}{ 0.31\linewidth} \includegraphics[width=1\textwidth]{fig8/2-L2.png} \end{minipage}} \caption{Qualitative comparison between the outputs of L1-Loss and L2-Loss.} \label{fig8} \end{figure} \subsection{Comparison with other techniques} \label{sec5.8} Since this is the first method for the reconstruction of the heightmap of color fundus images using DNNs, there was no other method to directly compare our proposed method with. Therefore, we compared our results with popular methods that utilize cGANs: pix2pix \cite{24}, PAN \cite{40} and MedGAN \cite{36}. The results are given in Figure \ref{fig9} and Table \ref{tb3} for qualitative and quantitative comparison, respectively. Pix2pix achieved the worst results, since it does not use deep supervision and is based on L1-loss. PAN did slightly better on the SSIM and MSE metrics. Moreover, there is a huge difference between PAN and pix2pix in terms of LPIPS, since PAN uses a perceptual loss in the training procedure; this difference demonstrates the importance and impact of using a perceptual loss for training cGANs. MedGAN was designed specifically for medical image translation, such as translating between CT and PET images.
As a result, it performs better than the previous general-purpose methods, but its results are inferior to our proposed method. In fact, since we use deep supervision and carefully tuned the parameters for this particular problem, we achieved higher values on all metrics. Another justification for the higher values of the proposed method is that all the previous methods used L1-loss as the pixel reconstruction loss for training, while we used L2-loss, which, as stated in the previous section, is the superior choice for this particular problem. \begin{table*}[!ht] \centering \begin{tabular}{l|c|c|c|c} Method & \multicolumn{1}{l|}{SSIM} & \multicolumn{1}{l|}{PSNR(dB)} & \multicolumn{1}{l|}{LPIPS} & \multicolumn{1}{l}{MSE} \\ \hline pix2pix \cite{24} & 0.8596 & 33.9523 & 2.25e-03 & 0.0068 \\ \cline{1-1} PAN \cite{40} & 0.8612 & 33.8512 & 2.37e-04 & 0.0053 \\ \cline{1-1} MedGAN \cite{36} & 0.8659 & 33.2958 & 5.61e-05 & 0.0048 \\ \cline{1-1} Proposed Method & \textbf{0.8823} &\textbf{ 34.6733} & \textbf{1.81e-05} &\textbf{0.0033} \end{tabular} \caption{Quantitative comparison between the proposed method and other methods.} \label{tb3} \end{table*} \par Considering the qualitative comparison in Figure \ref{fig9}, our proposed method outperformed the others in terms of the reconstruction of details. As can be seen, pix2pix missed some important details in the first and second examples, such as the bright red spots. PAN performed better at reconstructing the highly elevated parts in the second row; however, it failed to reconstruct the correct shape in the third example. Finally, since MedGAN is specially designed for medical tasks, it outperformed the aforementioned methods in terms of output quality, but it was in turn outperformed by our method, which generated the best quality images.
\begin{figure*} \centering \subfloat{ \begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/1-GT.png} \end{minipage}}\hspace{0.25cm} \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/1-pix2pix.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/1-PAN.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/1-MedGAN.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/1-PM.png} \end{minipage}} \subfloat{ \begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/2-GT.png} \end{minipage}}\hspace{0.25cm} \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/2-pix2pix.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/2-PAN.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/2-MedGAN.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/2-PM.png} \end{minipage}} \setcounter{subfigure}{0}% \subfloat[Ground-truth]{ \begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/3-GT.png} \end{minipage}}\hspace{0.25cm} \subfloat[pix2pix]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/3-pix2pix.png} \end{minipage}}\ \subfloat[PAN]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/3-PAN.png} \end{minipage}}\ \subfloat[MedGAN]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 
1\textwidth]{Fig9/3-MedGAN.png} \end{minipage}}\ \subfloat[Proposed method]{\begin{minipage}{ 0.19\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig9/3-PM.png} \end{minipage}} \setcounter{subfigure}{0}% \caption{Qualitative comparison between the proposed method and other methods.} \label{fig9} \end{figure*} \subsection{Perceptual studies} \label{sectionPerceptualStudies} As stated before, the main purpose of reconstructing the heightmap of fundus images is to infer part of the information that OCT devices provide to ophthalmologists, namely the height information. Hence, to judge the fidelity of the reconstructed heightmaps, in this section we conducted an experiment in which two experienced ophthalmologists were presented with a series of trials, each containing a reconstructed heightmap, the corresponding fundus image and the ground-truth heightmap from the test set. The main purpose of this study is to investigate whether the pair of reconstructed heightmap and fundus image gives more information for the diagnosis of retinal disease than the fundus image alone. \par For this experiment, the two ophthalmologists first classified all images into two classes, positive and negative, where positive means that the image provides more information for diagnosis and negative means that it does not. Additionally, the ophthalmologists rated each image from zero to three according to the level of information that it provides, such that zero means no added information and three represents the highest amount of information for diagnosis. \par As can be seen in Table \ref{tb4}, ophthalmologist 1 classified all images as useful for diagnosis, with a mean score of 1.94 over all images. Additionally, ophthalmologist 2 classified 92$\%$ of the outputs as positive, with a mean score of 1.84.
This study shows that even though the outputs of our method may appear blurrier than the ground truth, they can be used for diagnosis and can provide valuable additional information to ophthalmologists, especially about the heights of different regions. For instance, diseases such as age-related macular degeneration depend on the swelling of different regions of the macula, and the reconstructed heightmap contains this information. \par We also considered the positive samples in isolation; the results are shown in Table \ref{tb5}. As can be seen, both ophthalmologists assigned most of the images a score of 2, and the average score is close to 2 for both. This experiment also indicates that in most cases the reconstructed heightmap can provide useful diagnostic information from a single fundus image. \begin{table}[h] \centering \begin{tabular}{l|c|c|c} \multirow{2}{*}{} & \multicolumn{2}{c|}{Score} & Classification \\ \cline{2-4} & Mean & SD & Positive \% \\ \hline Ophthalmologist 1 & 1.94 & 0.7669 & 100.00 \\ \cline{1-1} Ophthalmologist 2 & 1.84 & 0.9553 & 92.00 \end{tabular} \caption{Results of the perceptual study.} \label{tb4} \end{table} \begin{table*}[t] \begin{tabular}{c|c|c|c|c|cccc} & \multicolumn{4}{c|}{Ophthalmologist 1} & \multicolumn{4}{c}{Ophthalmologist 2} \\ \hline Score & Frequency & \% & Mean & SD & \multicolumn{1}{c|}{Frequency} & \multicolumn{1}{c|}{\%} & \multicolumn{1}{c|}{Mean} & SD \\ \hline 1 & 16 & 32.00 & \multirow{3}{*}{1.94} & & \multicolumn{1}{c|}{15} & \multicolumn{1}{c|}{30.00} & \multicolumn{1}{c|}{\multirow{3}{*}{2.00}} & \multirow{3}{*}{0.8164} \\ 2 & 21 & 42.00 & & 0.7669 & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32.00} & \multicolumn{1}{c|}{} & \\ 3 & 13 & 26.00 & & & \multicolumn{1}{c|}{15} & \multicolumn{1}{c|}{30.00} & \multicolumn{1}{c|}{} & \end{tabular} \caption{Score distribution of the positive samples in the perceptual study.} \label{tb5} \end{table*} Considering examples
which were classified as positive in Figure \ref{fig10}, the reconstructed heightmap can indicate the absence of elevation change (top-right example) as well as serious elevation changes in different regions (the other examples), depending on the condition of the fundus image. In fact, both the absence of elevation change and the presence of regions with high or low elevation are types of information that cannot be inferred from a single fundus image alone. However, by utilizing our proposed method, ophthalmologists can be exposed to this additional information, which can aid them in a better and easier diagnosis with a single color fundus image. Finally, this figure also supports the point made above: even though the reconstructed heightmaps of positive samples may appear blurrier than the ground-truth heightmaps, they were classified as positive owing to the information that they provide for diagnosis. \begin{figure*} \captionsetup[subfigure]{labelformat=empty,position=top} \centering \subfloat[Fundus]{ \begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/2_Fundus.png} \end{minipage}}\ \subfloat[Proposed method]{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/2.png} \end{minipage}}\ \subfloat[Ground-truth]{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/2_GT.png} \end{minipage}}\hspace{0.25cm} \subfloat[Fundus]{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/7_Fundus.png} \end{minipage}}\ \subfloat[Proposed method]{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/7.png} \end{minipage}}\ \subfloat[Ground-truth]{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/7_GT.png} \end{minipage}} \captionsetup[subfigure]{labelformat=parens,position=bottom} \setcounter{subfigure}{0}% \subfloat{ \begin{minipage}{ 
0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/29_Fundus.png} \end{minipage}}\ \captionsetup[subfigure]{captionskip=0.5cm,margin={-0.1cm,0.0cm}} \setcounter{subfigure}{0}% \subfloat{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/29.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/29_GT.png} \end{minipage}}\hspace{0.25cm} \subfloat{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/14_Fundus.png} \end{minipage}}\ \setcounter{subfigure}{1}% \subfloat{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/14.png} \end{minipage}}\ \subfloat{\begin{minipage}{ 0.15\textwidth} \includegraphics[width=1\textwidth,height = 1\textwidth]{Fig10/14_GT.png} \end{minipage}} \caption{Some examples from images classified as positive by the two ophthalmologists.} \label{fig10} \end{figure*} \section{Conclusion and Discussion} In this paper, we proposed a novel framework to automatically generate a heightmap image of the macula from a color fundus image. For the generator network, we used a stack of three U-Nets and, motivated by deeply supervised networks, averaged the outputs of these U-Nets for deep supervision. We also utilized LSGAN instead of the traditional GAN loss for a stable training procedure and better results, along with an L2 loss and a perceptual loss, to generate the final outcome. \par The experimental results indicate that our proposed method outperformed other image and medical image translation methods in terms of the SSIM, PSNR, MSE and LPIPS metrics, as can be seen in Table \ref{tb3}. Furthermore, as depicted in Figure \ref{fig7}, deep supervision contributed greatly to the quality of the final outcome by producing meaningful outputs from deep layers. 
This suggests that when dealing with very deep neural networks, it is beneficial to constrain deep layers to generate features directed toward the final goal of the network. Finally, considering the applications of our proposed method in real diagnosis, the perceptual studies in Tables \ref{tb4} and \ref{tb5} indicate that more information can be inferred from the reconstructed heightmap, especially when there are elevation changes in different regions of the macula. By utilizing this height information, we can support the diagnosis of diseases that depend on elevation data using only a color fundus image, without the need for OCT images. \par As stated in Section \ref{sectionPerceptualStudies}, despite slight blurriness in the output of our proposed network, it can still be used for diagnosis. However, this work is not free from limitations, and further improvements are essential for practical applicability in real diagnostic settings. In fact, in some cases, owing to poor image quality, the system cannot extract useful and meaningful information, especially when the fundus image is blurred. This suggests that in future work, a pre-processing step should be employed to de-blur fundus images before feeding them into the network, and its effectiveness should be studied. Furthermore, in future work, we will try to utilize other features of the fundus image, besides the features automatically extracted by CNNs, to improve the overall performance and quality of the proposed method. \par In future work, we will also utilize images of other regions of the fundus, such as the Optic Nerve Head (ONH), to reconstruct their heightmaps, as this part of the fundus has many practical applications, and thereby develop our method into a general solution for heightmap reconstruction. 
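As a side note on the quantitative comparison discussed above, the two pixel-wise metrics we report (MSE and PSNR) can be sketched in a few lines. This is a minimal illustration on toy data, not our evaluation code; SSIM and LPIPS require their dedicated reference implementations.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized grayscale images,
    given as 2-D lists of pixel intensities."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)

ground_truth  = [[10.0, 20.0], [30.0, 40.0]]  # toy "ground-truth heightmap"
reconstructed = [[12.0, 18.0], [29.0, 43.0]]  # toy "network output"

print(mse(ground_truth, reconstructed))             # 4.5
print(round(psnr(ground_truth, reconstructed), 1))  # 41.6
```

Note that PSNR is a monotone transformation of MSE, so the two metrics rank methods identically; SSIM and LPIPS capture structural and perceptual similarity, which is why all four are reported.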
Finally, considering the perceptual studies, which show that the reconstructed heightmap can provide information for diagnosis, in future research we can utilize the results generated by our network to detect retinal diseases automatically, which was previously impossible using only a single fundus image. \section{Acknowledgment} The authors are grateful to Dr. Ahmadieh (ophthalmologist) for grading and classifying the images for our experiment. \bibliographystyle{cas-model2-names}
\section{Introduction} Let $X$ be a metric space with metric $d$, $R$ a subset of $X\times X$, and $T\colon X \to X$ a mapping. We say that $T$ is a \emph{Meir-Keeler type} mapping on $R$ if for any $\epsilon > 0$ there exists $\delta >0$ such that \[ (x,y)\in R \text{ and } \epsilon \leq d(x,y) < \epsilon + \delta \text{ imply } d(Tx,Ty) < \epsilon. \] This notion is based on a mapping introduced by Meir and Keeler~\cite{MR0250291}. Indeed, a Meir-Keeler type mapping $T$ on $X\times X$ is a \emph{weakly uniformly strict contraction} in the sense of \cite{MR0250291}, which is often called a \emph{Meir-Keeler contraction}. In Section~\ref{s:characterization}, we provide some characterizations of a Meir-Keeler type mapping (Theorem~\ref{t:MKC-char-R}). The result includes characterizations of a Meir-Keeler contraction by Wong~\cite{MR644645}, Lim~\cite{MR1845580}, and Gavruta et al.~\cite{gavruta2014two}. In Section~\ref{s:fpt}, we establish a fixed point theorem for a Meir-Keeler type mapping (Theorem~\ref{t:fpt}) in a metric space endowed with a transitive relation. The result is related to the study of Ben-El-Mechaiekh~\cite{MR3346760} and fixed point theorems in a metric space with a partial order proved in Ran and Reurings~\cite{MR2053350}, Nieto and Rodr\'{\i}guez-L\'{o}pez~\cite{MR2212687}, and Reich and Zaslavski~\cite{reich2017monotone}. \section{Preliminaries} Throughout the present paper, $\mathbb N$ denotes the set of positive integers, $\mathbb R$ the set of real numbers, and $\mathbb R_+$ the set of nonnegative real numbers. A function $l \colon \mathbb R_+ \to \mathbb R_+$ is said to be of \emph{type (L)} if for any $s>0$ there exists $\delta >0$ such that $l(t) \leq s$ for all $t \in [s, s+\delta]$. It is clear that if a function $l\colon \mathbb R_+ \to \mathbb R_+$ is of type (L), then $l(t) \leq t$ for all $t > 0$. \begin{remark} A function of type (L) above is based on an \textit{L}-function introduced in \cite{MR1845580}. 
We say that a function $l \colon \mathbb R_+ \to \mathbb R_+$ is an \emph{\textit{L}-function} \cite{MR1845580} if $l(0)=0$, $l(s)>0$ for all $s>0$, and $l$ is of type (L). \end{remark} We say that a function $w\colon \mathbb R_+ \to \mathbb R$ is \emph{right lower semicontinuous} at $t_0\in \mathbb R_+$ if for any $\epsilon > 0$ there exists $\delta > 0$ such that $w(t_0) - \epsilon < w(s)$ for all $s \in [t_0, t_0 + \delta)$; a function $\psi\colon \mathbb R_+ \to \mathbb R$ is \emph{right upper semicontinuous} at $t_0 \in \mathbb R_+$ if $-\psi$ is right lower semicontinuous at $t_0$. It is clear that if $w\colon \mathbb R_+ \to \mathbb R$ is a nondecreasing function, then $w$ is right lower semicontinuous at any $t \in \mathbb R_+$. It is known that a function $w\colon \mathbb R_+ \to \mathbb R$ is right lower semicontinuous at $t_0\in \mathbb R_+$ if and only if $w(t_0) \leq \liminf_n w(s_n)$ whenever $\{s_n\}$ is a sequence in $[t_0,\infty)$ such that $s_n \to t_0$. \section{Characterizations of a Meir-Keeler type mapping} \label{s:characterization} The aim of this section is to prove the following theorem, which provides characterizations of a Meir-Keeler type mapping defined on a metric space endowed with a transitive relation. \begin{theorem}\label{t:MKC-char-R} Let $X$ be a metric space with metric $d$, $T\colon X \to X$ a mapping, and $R$ a nonempty subset of $X \times X$. 
Then the following are equivalent: \begin{enumerate} \item $T$ is a Meir-Keeler type mapping on $R$, that is, for any $\epsilon > 0$ there exists $\delta >0$ such that $(x,y)\in R$ and $\epsilon \leq d(x,y) < \epsilon + \delta$ imply $d(Tx,Ty) < \epsilon$; \item for any $\epsilon > 0$ there exists $\delta >0$ such that $(x,y)\in R$ and $d(x,y) < \epsilon + \delta$ imply $d(Tx,Ty) < \epsilon$; \item \label{Gamma:R} there exists a nondecreasing function $\gamma \colon \mathbb R_+ \to [0,\infty]$ such that $\gamma(s) > s$ for all $s>0$ and $\gamma\bigl( d(Tx,Ty) \bigr) \leq d(x,y)$ for all $(x,y) \in R$; \item \label{Wong:R} there exists a function $w\colon \mathbb R_+ \to \mathbb R_+$ such that $w(s) > s$ for all $s>0$, $w$ is right lower semicontinuous on $(0,\infty)$, and $w\bigl( d(Tx,Ty) \bigr) \leq d(x,y)$ for all $(x,y) \in R$; \item \label{Lim:R} there exists a function $l\colon (0,\infty) \to \mathbb R_+$ of type~(L) such that $d(Tx,Ty) < l \bigl( d(x,y) \bigr)$ for all $(x,y) \in R$ with $x \ne y$; \item \label{Phi-Psi:R} there exist a nondecreasing function $\phi \colon \mathbb R_+ \to [0,\infty]$ and a function $\psi \colon \mathbb R_+ \to \mathbb R_+$ such that $\psi$ is right upper semicontinuous on $(0,\infty)$, $\phi(t) > \psi(t)$ for all $t >0$, and $\phi \bigl( d(Tx,Ty) \bigr) \leq \psi \bigl( d(x,y) \bigr)$ for all $(x,y) \in R$. \end{enumerate} Moreover, in \eqref{Lim:R}, one can choose $l$ to be a right continuous and nondecreasing function such that $l(s) > 0$ for all $s>0$. \end{theorem} Obviously, Theorem~\ref{t:MKC-char-R} is valid in the case $R=X\times X$. Therefore Theorem~\ref{t:MKC-char-R} provides characterizations of a Meir-Keeler contraction \cite{MR0250291} on a metric space. \begin{remark} The condition~(\ref{Gamma:R}) is related to the modulus of uniform continuity of $T$; see Lim~\cite{MR1845580}. 
The conditions~(\ref{Wong:R}) and~(\ref{Lim:R}) are based on \cite{MR1845580}*{Theorem~1}; see also Wong~\cite{MR644645} for~(\ref{Wong:R}). The condition~(\ref{Phi-Psi:R}) comes from a \emph{weak type contraction} introduced in \cite{gavruta2014two}. \end{remark} Theorem~\ref{t:MKC-char-R} above is a direct consequence of Theorem~\ref{t:fg} below. We first prove Theorem~\ref{t:fg} by using the lemmas in Section~\ref{s:lemmas}. \begin{theorem}\label{t:fg} Let $K$ be a nonempty set and let $f\colon K \to \mathbb R_+$ and $g\colon K \to \mathbb R_+$ be functions. Suppose that $g^{-1}(0) \subset f^{-1}(0)$. Then the following are equivalent: \begin{enumerate} \item \label{MK:fg} For any $\epsilon >0$ there exists $\delta > 0$ such that $x \in K$ and $\epsilon \leq g(x) < \epsilon + \delta$ imply $f(x) < \epsilon$; \item \label{MKs:fg} for any $\epsilon >0$ there exists $\delta > 0$ such that $x\in K$ and $g(x) < \epsilon + \delta$ imply $f(x) < \epsilon$; \item \label{Gamma:fg} there exists a nondecreasing function $\gamma \colon \mathbb R_+ \to [0,\infty]$ such that $\gamma(s) > s$ for all $s>0$ and $\gamma\bigl( f(x) \bigr) \leq g(x)$ for all $x \in K$; \item \label{Wong-finite:fg} there exists a function $w\colon \mathbb R_+ \to \mathbb R_+$ such that $w(s) > s$ for all $s>0$, $w$ is right lower semicontinuous on $(0,\infty)$, and $w\bigl( f(x) \bigr) \leq g(x)$ for all $x \in K$; \item \label{Lim:fg} there exists a function $l\colon (0,\infty) \to \mathbb R_+$ of type (L) such that $f(x) < l \bigl( g(x) \bigr)$ for all $x \in K$ with $g(x) \ne 0$; \item \label{Phi-Psi:fg} there exist a nondecreasing function $\phi \colon \mathbb R_+ \to [0,\infty]$ and a function $\psi \colon \mathbb R_+ \to \mathbb R_+$ such that $\phi(t) > \psi(t)$ for all $t >0$, $\psi$ is right upper semicontinuous on $(0,\infty)$, and $\phi \bigl( f(x) \bigr) \leq \psi \bigl( g(x) \bigr)$ for all $x \in K$. 
\end{enumerate} Moreover, in \eqref{Lim:fg}, one can choose $l$ to be a right continuous and nondecreasing function such that $l(s) > 0$ for all $s>0$. \end{theorem} \begin{proof} The implications \eqref{MKs:fg} $\Rightarrow$ \eqref{MK:fg} and \eqref{Gamma:fg} $\Rightarrow$ \eqref{Phi-Psi:fg} are clear. Lemma~\ref{l:MK=Lim} shows that \eqref{MK:fg} and \eqref{Lim:fg} are equivalent, and that $l$ in \eqref{Lim:fg} can be chosen to be a right continuous and nondecreasing function such that $l(s) > 0$ for all $s>0$. Lemmas~\ref{l:mks2gamma}, \ref{l:gamma2w-finite}, and~\ref{l:w-finite2MKs} show the implications \eqref{MKs:fg} $\Rightarrow$ \eqref{Gamma:fg}, \eqref{Gamma:fg} $\Rightarrow$ \eqref{Wong-finite:fg}, and \eqref{Wong-finite:fg} $\Rightarrow$ \eqref{MKs:fg}, respectively. Moreover, the implication \eqref{MK:fg} $\Rightarrow$ \eqref{MKs:fg} and \eqref{Phi-Psi:fg} $\Rightarrow$ \eqref{MK:fg} follow from Lemmas~\ref{l:MK2MKs} and~\ref{l:Phi-Psi2MK}, respectively. This completes the proof. \end{proof} The following example shows that the implication \eqref{MK:fg} $\Rightarrow$ \eqref{MKs:fg} in Theorem~\ref{t:fg} does not hold without the assumption $g^{-1}(0) \subset f^{-1}(0)$. \begin{example}\label{e:MK/=MKs} Let $K = \{x\}$ be a singleton and let $f\colon K \to \mathbb R_+$ and $g\colon K \to \mathbb R_+$ be functions defined by $f(x) = 1$ and $g(x)=0$. Then \eqref{MK:fg} in Theorem~\ref{t:fg} holds, but \eqref{MKs:fg} in Theorem~\ref{t:fg} does not hold. \end{example} \begin{proof} Let $\epsilon=1$. Then $0 = g(x) < \epsilon + \delta$ and $f(x) \geq \epsilon$ for all $\delta > 0$. Thus \eqref{MKs:fg} does not hold. On the other hand, let $\epsilon > 0$ and $\delta =1$. Then $\{y \in K\colon \epsilon \leq g(y) < \epsilon + \delta \} = \emptyset$. Therefore, \eqref{MK:fg} does hold. 
\end{proof} \begin{remark} Let $K$, $f$, and $g$ be the same as in Example~\ref{e:MK/=MKs} and let $\phi \colon \mathbb R_+ \to [0,\infty]$ and $\psi \colon \mathbb R_+ \to \mathbb R_+$ be functions defined by $\phi (t) \equiv 1/2$ and \[ \psi (t) = \begin{cases} 1 & \text{ if } t=0; \\ 1/4 & \text{otherwise}. \end{cases} \] Then $\phi$ is nondecreasing, $\psi$ is right upper semicontinuous on $(0,\infty)$, and $\phi(t)> \psi(t)$ for all $t>0$. Since \[ \phi \bigl( f(x) \bigr) = \phi(1) = 1/2 \leq 1 = \psi(0) = \psi \bigl( g(x) \bigr), \] it follows that $\phi \bigl( f(y) \bigr) \leq \psi \bigl( g(y) \bigr)$ for all $y \in K$. Therefore Example~\ref{e:MK/=MKs} also shows that the implication \eqref{Phi-Psi:fg} $\Rightarrow$ \eqref{MKs:fg} in Theorem~\ref{t:fg} does not hold without the assumption $g^{-1}(0) \subset f^{-1}(0)$. \end{remark} Using Theorem~\ref{t:fg}, we can easily obtain Theorem~\ref{t:MKC-char-R}. \begin{proof}[Proof of Theorem~\ref{t:MKC-char-R}] Let $f\colon R \to \mathbb R_+$ and $g\colon R \to \mathbb R_+$ be functions defined by $f(x,y) = d(Tx, Ty)$ and $g(x,y)=d(x,y)$ for $(x,y) \in R$. Then it is clear that $g^{-1}(0) \subset f^{-1}(0)$. Therefore Theorem~\ref{t:fg} implies the conclusion. \end{proof} \section{Fixed point theorems} \label{s:fpt} The aim of this section is to establish fixed point theorems for a Meir-Keeler type mapping defined on a complete metric space endowed with a transitive relation or a partial order. \begin{theorem}\label{t:fpt} Let $X$ be a complete metric space with metric $d$, $T\colon X \to X$ a mapping, and $R$ a nonempty subset of $X \times X$. 
Suppose that \begin{enumerate} \item \label{i:transitive} $(u,v) \in R$ and $(v,w) \in R$ imply $(u,w) \in R$; \item \label{i:x} there exists $x \in X$ such that $(x,Tx) \in R$; \item \label{i:R2R} $(Tu,Tv) \in R$ for all $(u,v)\in R$; \item \label{i:MKC} for any $\epsilon >0$ there exists $\delta >0$ such that $(u,v) \in R$ and $\epsilon \leq d (u, v) < \epsilon + \delta$ imply $d(Tu, Tv) < \epsilon$; \item \label{i:pseudoc} if $\{x_n \}$ is a sequence in $X$ such that $x_n \to y$ and $(x_n, x_{n+1}) \in R$ for all $n \in \mathbb N$, then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $T x_{n_k} \to Ty$ as $k \to \infty$. \end{enumerate} Then $\{T^n x\}$ converges to a fixed point of $T$, that is, $T$ has a fixed point. Moreover, suppose that \begin{enumerate} \setcounter{enumi}{5} \item \label{i:forally} $(x,y) \in R$ for all $y \in X$; \item \label{i:Rclosed} $R$ is closed in $X\times X$. \end{enumerate} Then $T$ has a unique fixed point. \end{theorem} \begin{remark} The assumptions \eqref{i:forally} and \eqref{i:Rclosed} in Theorem~\ref{t:fpt} can be replaced by the following condition: \begin{quote} If $y$ is a fixed point of $T$, and $\{x_n\}$ is a sequence in $X$ such that $x_n \to z \in X$ and $(x_n,y)\in R$ for all $n \in \mathbb N$, then $(z,y) \in R$. \end{quote} \end{remark} To prove Theorem~\ref{t:fpt}, we need lemmas below, which are based on the results in \cite{MR0250291}*{\S2}. \begin{lemma}\label{l:nonincreasing} Let $X$ be a metric space with metric $d$, $T\colon X \to X$ a mapping, $x \in X$, and $\{x_n\}$ a sequence in $X$ defined by $x_n = T^n x$ for $n \in \mathbb N$. Suppose that for any $\epsilon >0$ there exists $\delta >0$ such that \begin{equation}\label{e:MKseq} n \in \mathbb N, \, \epsilon \leq d (x_n, x_{n+1}) < \epsilon + \delta \Rightarrow d(x_{n+1}, x_{n+2}) < \epsilon. \end{equation} Then $\left\{ d ( x_n, x_{n+1} ) \right\}$ is nonincreasing and $\lim_n d ( x_n, x_{n+1} ) = 0$. 
\end{lemma} \begin{proof} Suppose that $d(x_m, x_{m+1}) = 0$. Then $x_m = x_{m+1}$. Thus we have $x_{m+1} = T^{m+1}x = T x_m =Tx_{m+1} =x_{m+2}$, and hence $d(x_{m+1}, x_{m+2}) = 0$. On the other hand, suppose that $\epsilon = d(x_m, x_{m+1}) > 0$. Then there exists $\delta > 0$ such that \eqref{e:MKseq} holds. Thus we have $d(x_{m+1},x_{m+2}) < \epsilon = d(x_m, x_{m+1})$. Consequently, we know that $\left\{ d ( x_n, x_{n+1} ) \right\}$ is nonincreasing, and hence $\lim_n d ( x_n, x_{n+1} )$ exists. Suppose that $\epsilon = \lim_n d(x_n, x_{n+1}) > 0$. Then there exists $\delta > 0$ such that \eqref{e:MKseq} holds. Since $d(x_n, x_{n+1}) \searrow \epsilon$, there exists $k \in \mathbb N$ such that $\epsilon \leq d(x_k,x_{k+1}) < \epsilon + \delta$. Thus we have $\epsilon \leq d(x_{k+1}, x_{k+2}) < \epsilon$, which is a contradiction. Therefore, $\lim_n d(x_n,x_{n+1}) = \epsilon = 0$. \end{proof} \begin{lemma}\label{l:pre-cauchy} Let $X$ be a metric space with metric $d$, $\{x_n\}$ a sequence in $X$, $l,m$ positive integers, and $\epsilon,\eta$ positive real numbers. Suppose that $l<m$, $\eta \leq \epsilon$, $d(x_l, x_m) \geq 2 \epsilon$, and $d(x_i, x_{i+1}) < \eta/3$ for all $i\in \mathbb N$ with $l \leq i \leq m$. Then there exists $j \in \mathbb N$ such that $l< j <m$ and $\epsilon + 2\eta/3 \leq d(x_l,x_j) < \epsilon + \eta$. \end{lemma} \begin{proof} Set $A= \{ i \in \mathbb N\colon l < i <m,\, \epsilon + 2\eta/3 \leq d (x_l, x_i)\}$. We first show that $m -1 \in A$. Suppose that $m-1 \leq l$. Then $m = l+1$, and we have \[ 2 \epsilon \leq d(x_l, x_m) = d(x_l, x_{l+1}) < \eta/3 \leq \epsilon/3, \] which is a contradiction. Thus $l < m-1$. Moreover, we have \[ d(x_l, x_{m-1}) \geq d(x_l,x_m) - d(x_m, x_{m-1}) \geq 2\epsilon - \eta/3 \geq \epsilon + 2\eta/3. \] Therefore, $m-1 \in A$, and hence $A$ is nonempty. Set $j = \min A$. Suppose that $l \geq j-1$. Then $j=l+1$. 
Thus we have $\epsilon + 2\eta/3 \leq d(x_l, x_j) = d(x_l,x_{l+1}) < \eta/3$, which is a contradiction. Therefore, $l < j-1 < j < m$. Since $j-1 \notin A$, we have $d(x_l, x_{j-1}) < \epsilon +2\eta/3$, and hence \[ d(x_l,x_j) \leq d(x_l,x_{j-1}) + d(x_{j-1},x_j) < \epsilon + 2\eta/3 + \eta/3 = \epsilon + \eta. \] As a result, we conclude that $l< j <m$ and $\epsilon + 2\eta/3 \leq d(x_l,x_j) < \epsilon + \eta$. \end{proof} \begin{lemma} \label{l:cauchy} Let $X$, $T$, $x$, and $\{x_n\}$ be the same as in Lemma~\ref{l:nonincreasing}. Suppose that for any $\epsilon >0$ there exists $\delta >0$ such that \begin{equation}\label{e:MKseq2} i, j \in \mathbb N, \, \epsilon \leq d (x_i , x_j ) < \epsilon + \delta \Rightarrow d(x_{i+1}, x_{j+1}) < \epsilon. \end{equation} Then $\{x_n\}$ is a Cauchy sequence. \end{lemma} \begin{proof} Suppose that $\{x_n\}$ is not a Cauchy sequence. Then there exists $\epsilon> 0$ such that for each $i \in \mathbb N$ there exist $m_i,n_i \in \mathbb N$ such that \begin{equation}\label{e:not-cauchy} i \leq m_i < n_i \text{ and } d(x_{m_i}, x_{n_i}) \geq 2 \epsilon. \end{equation} By assumption, we know that there exists $\delta>0$ such that \eqref{e:MKseq2} holds. Set $\eta = \min \{ \delta, \epsilon\}$. Since $d(x_n,x_{n+1}) \searrow 0$ by Lemma~\ref{l:nonincreasing}, it follows from~\eqref{e:not-cauchy} that there exist $m, n \in \mathbb N$ with $m<n$ such that $d(x_m, x_n) \geq 2 \epsilon$ and \begin{equation}\label{e:eta3} d(x_i, x_{i+1}) < \eta/3 \end{equation} for all $i \in \mathbb N$ with $i\geq m$. Thus Lemma~\ref{l:pre-cauchy} shows that there exists $j \in \mathbb N$ such that $m<j<n$ and \[ \epsilon + 2\eta/3 \leq d (x_m, x_j) < \epsilon + \eta. \] As a result, we see that $\epsilon \leq d (x_m, x_j) < \epsilon + \delta$. 
Taking into account \eqref{e:eta3} and \eqref{e:MKseq2}, we have \begin{align*} \epsilon + 2\eta/3 \leq d(x_m, x_j) &\leq d(x_m, x_{m+1}) + d(x_{m+1}, x_{j+1}) + d(x_{j+1},x_j)\\ &< \eta/3 + \epsilon + \eta/3 = \epsilon + 2\eta/3, \end{align*} which is a contradiction. Therefore, $\{x_n\}$ is a Cauchy sequence. \end{proof} Now we prove Theorem~\ref{t:fpt}. \begin{proof}[Proof of Theorem~\ref{t:fpt}] Let $\{x_n\}$ be a sequence in $X$ defined by $x_n = T^n x$ for $n \in \mathbb N$. Then, by the assumptions~(\ref{i:transitive}), (\ref{i:x}), and (\ref{i:R2R}), we see that $(x_m, x_n) \in R$ for all $m,n \in \mathbb N$ with $m < n$. Thus it follows from the assumption~(\ref{i:MKC}) that for any $\epsilon >0$ there exists $\delta >0$ such that \eqref{e:MKseq2} holds. Since $X$ is complete, Lemma~\ref{l:cauchy} shows that $\{x_n\}$ converges to some point $z \in X$. We show that $z$ is a fixed point of $T$. By virtue of the assumption \eqref{i:pseudoc}, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $Tx_{n_k} \to Tz$ as $k \to \infty$. Taking into account $x_{n_k + 1}\to z$, we conclude that \[ d(Tz,z) \leq d(Tz, x_{n_k + 1}) + d(x_{n_k + 1}, z) = d(Tz, Tx_{n_k}) + d(x_{n_k + 1}, z) \to 0 \] as $k\to \infty$. Therefore, $Tz=z$, and hence $z$ is a fixed point of $T$. We next show that $z$ is the unique fixed point of $T$ under the assumptions \eqref{i:forally} and \eqref{i:Rclosed}. Let $y$ be a fixed point of $T$. Since $(x,y) \in R$ by~\eqref{i:forally}, it follows from~\eqref{i:R2R} that $(Tx,y) = (Tx, Ty) \in R$. Therefore, $(T^n x,y) \in R$ for all $n \in \mathbb N$. Since $T^n x \to z$ and $R$ is closed by~\eqref{i:Rclosed}, we conclude that $(z,y) \in R$. Using Theorem~\ref{t:MKC-char-R} and the function $\gamma$ from Theorem~\ref{t:MKC-char-R}~\eqref{Gamma:R}, we have \[ \gamma \bigl( d(z,y) \bigr) = \gamma \bigl( d(Tz,Ty) \bigr) \leq d(z,y), \] and hence $z=y$. 
\end{proof} Using Theorem~\ref{t:fpt}, we obtain the following: \begin{corollary}[Nieto \& Rodr\'{\i}guez-L\'{o}pez \cite{MR2212687}*{Theorem 2.2}] Let $X$ be a complete metric space with metric $d$, $T\colon X \to X$ a mapping, and $\preceq$ a partial order in $X$. Suppose that \begin{enumerate} \item[(NR1)] there exists $x \in X$ such that $x \preceq Tx$; \item[(NR2)] $Tu \preceq Tv$ for all $u,v\in X$ with $u \preceq v$; \item[(NR3)] there exists $\theta \in [0,1)$ such that $d(Tu, Tv) \leq \theta d(u,v)$ for all $u,v\in X$ with $u \preceq v$; \item[(NR4)] if $\{x_n \}$ is a sequence in $X$ such that $x_n \to y$ and $x_n \preceq x_{n+1}$ for all $n \in \mathbb N$, then $x_n \preceq y$ for all $n \in \mathbb N$. \end{enumerate} Then $T$ has a fixed point. \end{corollary} \begin{proof} Set $R = \{ (u,v) \in X \times X\colon u \preceq v\}$. Since $(x,Tx) \in R$ by~(NR1), we know that $R$ is a nonempty subset of $X\times X$ and the assumption~\eqref{i:x} in Theorem~\ref{t:fpt} holds. The assumption \eqref{i:transitive} in Theorem~\ref{t:fpt} clearly holds. The assumptions~\eqref{i:R2R} and~\eqref{i:MKC} in Theorem~\ref{t:fpt} follow from (NR2) and (NR3), respectively. It remains to check the assumption~\eqref{i:pseudoc} in Theorem~\ref{t:fpt}. Let $\{x_n \}$ be a sequence in $X$ such that $x_n \to y$ and $(x_n, x_{n+1}) \in R$ for all $n \in \mathbb N$. Taking into account (NR3) and (NR4), we see that \[ d(Tx_n, Ty) \leq \theta d(x_n,y) \to 0 \] as $n \to \infty$. Therefore Theorem~\ref{t:fpt} implies the conclusion. \end{proof} Using Theorem~\ref{t:fpt}, we also deduce the following fixed point theorem, which is similar to \cite{reich2017monotone}*{Theorem 1.2}. \begin{theorem} Let $Y$ be a complete metric space with metric $d$, $\preceq$ a partial order in $Y$, $X$ a nonempty closed subset of $Y$, and $T\colon X \to X$ a mapping. 
Suppose that \begin{itemize} \item[(RZ0)] $\{(u,v) \in Y\times Y \colon u \preceq v\}$ is closed in $Y\times Y$; \item[(RZ1)] the graph of $T$ is closed in $Y\times Y$; \item[(RZ2)] $Tu \preceq Tv$ for all $u,v \in X$ with $u \preceq v$; \item[(RZ3)] there exists a right upper semicontinuous function $\psi\colon \mathbb R_+ \to \mathbb R_+$ such that $t > \psi(t)$ for all $t > 0$ and $d(Tu,Tv) \leq \psi \bigl( d(u,v) \bigr)$ for all $u,v \in X$ with $u \preceq v$; \item[(RZ4)] there exists $x \in X$ such that $x \preceq y$ for all $y \in X$. \end{itemize} Then $\{T^n x\}$ converges to a unique fixed point of $T$. \end{theorem} \begin{proof} By assumption, it is clear that $X$ is complete. Set $R = \{(u,v) \in X\times X \colon u \preceq v\}$. By virtue of (RZ4), $(x, x) \in R$, and hence $R$ is nonempty. Moreover, since $\preceq$ is a partial order, the assumption \eqref{i:transitive} in Theorem~\ref{t:fpt} holds. The assumptions \eqref{i:x} and \eqref{i:forally} in Theorem~\ref{t:fpt} follow from (RZ4); the assumption \eqref{i:R2R} in Theorem~\ref{t:fpt} follows from (RZ2). Since $X$ is closed, the assumption \eqref{i:Rclosed} in Theorem~\ref{t:fpt} is deduced from (RZ0). Using Theorem~\ref{t:MKC-char-R}, we know that (RZ3) implies the assumption \eqref{i:MKC} in Theorem~\ref{t:fpt}. Therefore it is enough to verify the assumption \eqref{i:pseudoc} in Theorem~\ref{t:fpt}. Let $\{x_n \}$ be a sequence in $X$ such that $x_n \to y$ and $(x_n, x_{n+1}) \in R$ for all $n \in \mathbb N$. Since $X$ is closed, it follows that $y \in X$. Let $m \in \mathbb N$ be fixed. Then it is easy to check that $(x_m, x_n) \in R$ for all $n \in \mathbb N$ with $m \leq n$. Since $\{(x_m, x_n) \}_{n\geq m}$ converges to $(x_m,y)$ in $X\times X$ and $R$ is closed in $X\times X$, we see that $(x_m, y) \in R$. Hence $(x_m, y) \in R$ for all $m \in \mathbb N$. Set $A = \{n \in \mathbb N \colon x_n = y \}$. Suppose that $A$ is an infinite set. 
Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} = y$ for all $k \in \mathbb N$, and hence $Tx_{n_k} \to Ty$ as $k\to \infty$. On the other hand, suppose that $A$ is a finite set. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \ne y$ for all $k \in \mathbb N$. Since $(x_{n_k}, y) \in R$ and $d(x_{n_k}, y) > 0$ for all $k \in \mathbb N$, it follows from~(RZ3) that \[ d(Tx_{n_k}, Ty) \le \psi \bigl( d(x_{n_k},y) \bigr) < d(x_{n_k},y) \to 0 \] as $k \to \infty$. Therefore the assumption \eqref{i:pseudoc} in Theorem~\ref{t:fpt} holds. Consequently, Theorem~\ref{t:fpt} implies the conclusion. \end{proof} \section{Lemmas}\label{s:lemmas} In this section, we prove the lemmas used in the proof of Theorem~\ref{t:fg}. In what follows, let $K$ be a nonempty set and let $f\colon K \to \mathbb R_+$ and $g\colon K \to \mathbb R_+$ be functions. \begin{lemma}\label{l:MK=Lim} The conditions \eqref{MK:fg} and \eqref{Lim:fg} in Theorem~\ref{t:fg} are equivalent. Moreover, in \eqref{Lim:fg}, one can choose $l$ to be a right continuous and nondecreasing function such that $l(s) > 0$ for all $s>0$. \end{lemma} \begin{proof} We first prove \eqref{Lim:fg} $\Rightarrow$ \eqref{MK:fg}. Let $\epsilon> 0$. Since $l$ is of type (L), there exists $\delta >0$ such that $l(t) \leq \epsilon$ for all $t \in [\epsilon, \epsilon + \delta]$. Let $x \in K$ with $\epsilon \leq g(x) < \epsilon + \delta$. Then $g(x) \ne 0$. Thus it follows from~\eqref{Lim:fg} that $f(x) < l \bigl( g(x) \bigr) \leq \epsilon$. We next prove \eqref{MK:fg} $\Rightarrow$ \eqref{Lim:fg} and the ``Moreover'' part. We follow the proof of \cite{MR2196804}*{Proposition 1}. By assumption, for any $\epsilon>0$ there exists $\alpha(\epsilon) >0$ such that \begin{equation}\label{183605} x\in K,\, \epsilon \leq g(x) < \epsilon + 2 \alpha(\epsilon) \Rightarrow f(x) < \epsilon. 
\end{equation} Since $\{ \epsilon > 0\colon t \leq \epsilon + \alpha(\epsilon)\}\ne \emptyset$ for all $t>0$, we can define a function $\beta \colon (0,\infty) \to [0,\infty)$ by \[ \beta(t) = \inf \{ \epsilon > 0\colon t \leq \epsilon + \alpha(\epsilon)\} \] for $t > 0$. Then it is clear that $\beta$ is nondecreasing, $\beta(t) \leq t$ for all $t >0$, and moreover, $\min \{ \epsilon > 0\colon t \leq \epsilon + \alpha(\epsilon)\}$ exists for all $t>0$ with $\beta(t) = t$. Let $\phi_1 \colon (0,\infty) \to [0,\infty)$ be a function defined by \[ \phi_1(t) = \begin{cases} \beta(t) & \text{if } \min \{ \epsilon >0 \colon t \leq \epsilon + \alpha(\epsilon)\} \text{ exists}; \\ \dfrac{\beta(t) + t}2 & \text{otherwise} \end{cases} \] for $t > 0$. Then we verify the following: \begin{itemize} \item[(i)] $\phi_1(t) > 0$ for all $t>0$; \item[(ii)] $\phi_1$ is of type~(L); \item[(iii)] $f(x) < \phi_1 \bigl( g(x) \bigr)$ for all $x \in K$ with $g(x) \ne 0$. \end{itemize} By the definition of $\phi_1$, (i) is clear. We show (ii). Let $s > 0$ be fixed. Suppose that $\phi_1 (t) \leq s$ for all $t \in (s,s+\alpha(s)]$. Then setting $\delta = \alpha(s)$, we conclude that \begin{equation}\label{152553} t \in [s, s+ \delta] \Rightarrow \phi_1(t) \leq s. \end{equation} On the other hand, suppose that there exists $\sigma \in (s, s+ \alpha(s)]$ such that $\phi_1(\sigma) > s$. Then $s \in \{\epsilon>0 \colon \sigma \leq \epsilon + \alpha(\epsilon)\}$, and hence $\beta(\sigma) \leq s$. If $\beta(\sigma) = s$, then we have $\beta(\sigma) = \min \{ \epsilon> 0\colon \sigma \leq \epsilon + \alpha (\epsilon)\}$, and thus \[ \phi_1 (\sigma) = \beta(\sigma) = s < \phi_1 (\sigma), \] which is a contradiction. Consequently, we know that \[ \beta (\sigma) < s < \phi_1(\sigma) = \dfrac{\beta(\sigma) + \sigma}2. \] Taking into account the definition of $\beta(\sigma)$, we can choose $u \in [\beta(\sigma), s)$ with $\sigma \leq u + \alpha(u)$. Then set $\delta = s-u$ and let $t \in [s, s+\delta]$. 
Since \[ t \leq s + \delta = 2s - u < 2\cdot \dfrac{\beta(\sigma) + \sigma}2 - \beta(\sigma) = \sigma \leq u+ \alpha(u), \] it follows that $\beta(t)\leq u$. Therefore we have \[ \phi_1 (t) \leq \dfrac{\beta(t) + t}2 \leq \dfrac{u+s+\delta}2 = s. \] Thus \eqref{152553} holds, and hence $\phi_1$ is of type (L). We next show (iii). Let $x \in K$ with $g(x) \ne 0$. Taking into account the definition of $\phi_1$, we know that for any $t>0$ there exists $\epsilon \in (0, \phi_1(t)]$ such that $\epsilon \leq t \leq \epsilon + \alpha (\epsilon)$, and thus there exists $\epsilon \in \left(0, \phi_1 \bigl( g(x) \bigr) \right]$ such that $\epsilon \leq g(x) \leq \epsilon + \alpha (\epsilon)$. Hence we deduce from \eqref{183605} that $f(x) < \epsilon \leq \phi_1 \bigl( g(x) \bigr)$. Consequently, (iii) holds. Now let us define functions $\phi_2\colon (0,\infty) \to \mathbb R_+$ and $l \colon (0,\infty) \to \mathbb R_+$ by \[ \phi_2(t)= \sup\{\phi_1(s) \colon s \leq t\} \text{ and } l(t)= \inf\{\phi_2(s) \colon s > t\} \] for $t\in (0,\infty)$. Then it is not hard to check that $\phi_2$ and $l$ are well-defined and nondecreasing, and moreover, \[ 0 < \phi_1(t) \leq \phi_2(t) \leq l(t) \leq t \] for all $t>0$. Thus it follows from~(ii) and~(iii) that $l$ is of type~(L) and $f(x) < l \bigl( g(x) \bigr)$ for all $x \in K$ with $g(x) \ne 0$. We can also verify that $l$ is right continuous. This completes the proof. \end{proof} \begin{lemma}\label{l:mks2gamma} The condition \eqref{MKs:fg} in Theorem~\ref{t:fg} implies the condition \eqref{Gamma:fg} in Theorem~\ref{t:fg}. \end{lemma} \begin{proof} Define a function $\gamma \colon \mathbb R_+ \to [0,\infty]$ by \[ \gamma(t) = \inf \{g(x)\colon x\in K,\, f(x)\geq t\} \] for $t \in \mathbb R_+$, where $\inf \emptyset = \infty$. Then the function $\gamma$ is well-defined and nondecreasing, and moreover, $\gamma \bigl( f(x) \bigr) \leq g(x)$ for all $x \in K$. Hence it is enough to show that $\gamma(t) > t$ for all $t > 0$. 
Suppose that $\gamma(t) \leq t$ for some $t >0$. Then, by assumption, there exists $\delta>0$ such that $x \in K$ and $g(x) < t + \delta$ imply $f(x) < t$. Since $\gamma(t) < t + \delta$, there exists $y \in K$ such that $f(y)\geq t$ and $g(y)< t + \delta$. Therefore we have $t \leq f(y) < t$, which is a contradiction. \end{proof} \begin{lemma}\label{l:gamma2w-finite} The condition \eqref{Gamma:fg} in Theorem~\ref{t:fg} implies the condition \eqref{Wong-finite:fg} in Theorem~\ref{t:fg}. \end{lemma} \begin{proof} We follow the idea of the proof of \cite{MR1845580}*{Theorem 1}. If $\{ t \in \mathbb R_+ \colon \gamma(t) = \infty\}$ is empty, then we easily obtain the conclusion. Thus we may assume that $\{ t \in \mathbb R_+ \colon \gamma(t) = \infty\}$ is nonempty. Set $t_0 = \inf \{ t \in \mathbb R_+ \colon \gamma(t) = \infty\}$. In the case of $\gamma(t_0) < \infty$, let $w_1 \colon \mathbb R_+ \to \mathbb R_{+}$ be a function defined by \[ w_1 (t) = \begin{cases} \gamma(t) & \text{if }t\in [0,t_0]; \\ \gamma(t_0) + t - t_0 & \text{otherwise}. \end{cases} \] Then it is clear that $w_1(s) > s$ for all $s>0$. Since $w_1$ is nondecreasing, we know that $w_1$ is right lower semicontinuous on $(0,\infty)$. We can also check that $w_1 \bigl( f(x)\bigr) \leq g(x)$ for all $x \in K$. On the other hand, in the case of $\gamma(t_0) = \infty$, let $w_2 \colon \mathbb R_+ \to \mathbb R_{+}$ be a function defined by \[ w_2(t) = \begin{cases} \gamma(t) & \text{if }t\in [0,t_0), \\ 2t & \text{otherwise}. \end{cases} \] Then it is clear that $w_2(s) > s$ for all $s>0$. Since $w_2$ is nondecreasing on $(0,t_0)$ and continuous on $[t_0,\infty)$, we know that $w_2$ is right lower semicontinuous on $(0,\infty)$. We can also check that $w_2 \bigl( f(x)\bigr) \leq g(x)$ for all $x \in K$. \end{proof} \begin{lemma}\label{l:w-finite2MKs} The condition \eqref{Wong-finite:fg} in Theorem~\ref{t:fg} implies the condition \eqref{MKs:fg} in Theorem~\ref{t:fg}. 
\end{lemma} \begin{proof} Suppose that \eqref{MKs:fg} does not hold. Then there exist $\epsilon>0$ and a sequence $\{x_n\}$ in $K$ such that $g(x_n)< \epsilon + 1/n$ and $f(x_n) \geq \epsilon$ for all $n \in \mathbb N$. Since $f(x_n) > 0$, it follows from the properties of $w$ that \[ \epsilon \leq f(x_n) < w\bigl( f(x_n)\bigr) \leq g(x_n) < \epsilon + 1/n \] for all $n \in \mathbb N$. Hence $f(x_n) \to \epsilon$ and $w\bigl( f(x_n) \bigr) \to \epsilon$. Since $w$ is right lower semicontinuous at $\epsilon$ and $\epsilon < w(\epsilon)$, we have $\epsilon < w(\epsilon) \leq \liminf_n w\bigl( f(x_n) \bigr) = \epsilon$, which is a contradiction. \end{proof} \begin{lemma}\label{l:MK2MKs} Suppose $g^{-1}(0) \subset f^{-1} (0)$. Then the condition \eqref{MK:fg} in Theorem~\ref{t:fg} implies the condition \eqref{MKs:fg} in Theorem~\ref{t:fg}. \end{lemma} \begin{proof} Let $\epsilon > 0$ be given. Then, by~\eqref{MK:fg}, there exists $\delta > 0$ such that $x \in K$ and $\epsilon \leq g(x) < \epsilon + \delta$ imply $f(x) < \epsilon$. Let $x \in K$ such that $g(x) < \epsilon$. It is enough to show that $f(x) < \epsilon$. Suppose that $g(x)=0$. Then, by assumption, $f(x) = 0 < \epsilon$. On the other hand, suppose that $0< g(x) < \epsilon$. Set $\epsilon' = g(x)$. Then, by \eqref{MK:fg}, there exists $\delta' > 0$ such that $y \in K$ and $\epsilon' \leq g(y) < \epsilon' + \delta'$ imply $f(y) < \epsilon'$. Since $\epsilon' = g(x) < \epsilon' + \delta'$, we have $f(x) < \epsilon' = g(x) < \epsilon$. \end{proof} \begin{lemma}\label{l:Phi-Psi2MK} The condition \eqref{Phi-Psi:fg} in Theorem~\ref{t:fg} implies the condition \eqref{MK:fg} in Theorem~\ref{t:fg}. \end{lemma} \begin{proof} Suppose that \eqref{MK:fg} does not hold. Then there exist $\epsilon >0$ and a sequence $\{x_n\}$ in $K$ such that $\epsilon \leq g(x_n) < \epsilon + 1/n$ and $f(x_n) \geq \epsilon$ for all $n \in \mathbb N$. 
Thus $g(x_n) \to \epsilon$ and, by assumption, \[ \psi(\epsilon) < \phi(\epsilon) \leq \phi \bigl( f(x_n) \bigr) \leq \psi \bigl( g(x_n) \bigr) \] for all $n \in \mathbb N$. Since $\psi$ is right upper semicontinuous at $\epsilon$, we conclude that $\psi(\epsilon) < \phi(\epsilon) \leq \limsup_n \psi \bigl( g(x_n) \bigr) \leq \psi(\epsilon)$, which is a contradiction. \end{proof} \section*{Acknowledgment} The first author would like to acknowledge the financial support from Professor Kaoru Shimizu of Chiba University. \begin{bibdiv} \begin{biblist} \bib{MR3346760}{article}{ author={Ben-El-Mechaiekh, Hichem}, title={The {R}an-{R}eurings fixed point theorem without partial order: a simple proof}, date={2014}, ISSN={1661-7738}, journal={J. Fixed Point Theory Appl.}, volume={16}, pages={373\ndash 383}, url={https://doi.org/10.1007/s11784-015-0218-3}, } \bib{gavruta2014two}{article}{ author={Gavruta, L.}, author={Gavruta, P.}, author={Khojasteh, F.}, title={Two classes of {M}eir-{K}eeler contractions}, date={2014}, journal={arXiv preprint arXiv:1405.5034}, } \bib{MR1845580}{article}{ author={Lim, Teck-Cheong}, title={On characterizations of {M}eir-{K}eeler contractive maps}, date={2001}, ISSN={0362-546X}, journal={Nonlinear Anal.}, volume={46}, pages={113\ndash 120}, url={https://doi.org/10.1016/S0362-546X(99)00448-4}, } \bib{MR0250291}{article}{ author={Meir, A.}, author={Keeler, Emmett}, title={A theorem on contraction mappings}, date={1969}, ISSN={0022-247x}, journal={J. Math. Anal.
Appl.}, volume={28}, pages={326\ndash 329}, url={https://doi.org/10.1016/0022-247X(69)90031-6}, } \bib{MR2212687}{article}{ author={Nieto, Juan~J.}, author={Rodr\'{\i}guez-L\'{o}pez, Rosana}, title={Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations}, date={2005}, ISSN={0167-8094}, journal={Order}, volume={22}, pages={223\ndash 239 (2006)}, url={https://doi.org/10.1007/s11083-005-9018-5}, } \bib{MR2053350}{article}{ author={Ran, Andr\'{e} C.~M.}, author={Reurings, Martine C.~B.}, title={A fixed point theorem in partially ordered sets and some applications to matrix equations}, date={2004}, ISSN={0002-9939}, journal={Proc. Amer. Math. Soc.}, volume={132}, pages={1435\ndash 1443}, url={https://doi.org/10.1090/S0002-9939-03-07220-4}, } \bib{reich2017monotone}{article}{ author={Reich, S}, author={Zaslavski, AJ}, title={Monotone contractive mappings}, date={2017}, journal={J. Nonlinear Var. Anal}, volume={1}, pages={391\ndash 401}, } \bib{MR2196804}{article}{ author={Suzuki, Tomonari}, title={Fixed-point theorem for asymptotic contractions of {M}eir-{K}eeler type in complete metric spaces}, date={2006}, ISSN={0362-546X}, journal={Nonlinear Anal.}, volume={64}, pages={971\ndash 978}, url={https://doi.org/10.1016/j.na.2005.04.054}, } \bib{MR644645}{article}{ author={Wong, Chi~Song}, title={Characterizations of certain maps of contractive type}, date={1977}, ISSN={0030-8730}, journal={Pacific J. Math.}, volume={68}, pages={293\ndash 296}, url={http://projecteuclid.org/euclid.pjm/1102817386}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{Sec:Intro} Benders decomposition has seen many successful applications to two-stage stochastic optimization, where it typically takes the form of the {\em L-shaped method} \citep{benders1962partitioning,van1969shaped}. It offers the advantage that the second-stage problem decouples into a separate problem for each possible scenario, allowing much faster computation of the recourse decision. A limitation of classical Benders decomposition, however, is that the subproblem must be a linear programming problem, or a continuous nonlinear programming problem in the case of ``generalized'' Benders decomposition \citep{geoffrion1972generalized}. This is necessary because the Benders cuts are derived from dual multipliers (or Lagrange multipliers) in the subproblem. Yet in many problems, the recourse decision is a combinatorial optimization problem that does not yield dual multipliers. This issue has been addressed by the {\em integer \mbox{L-shaped} method} \citep{laporte1993integer}, which formulates the subproblem as a mixed integer/linear programming (MILP) problem and obtains dual multipliers from its linear programming (LP) relaxation. To ensure finite convergence, classical Benders cuts from the LP relaxation are augmented with ``integer cuts'' that simply exclude the master problem solutions enumerated so far. Unfortunately, a combinatorial subproblem may be difficult to model as an MILP, in the sense that many variables are required, or the LP relaxation is weak. This is particularly the case when the recourse decision poses a scheduling problem. We therefore investigate the option of applying {\em logic-based Benders decomposition} (LBBD) to problems with a second-stage scheduling decision \citep{Hoo00,hooker2003logic}, because it does not require dual multipliers to obtain Benders cuts.
Rather, the cuts are obtained from an ``inference dual'' that is based on a structural analysis of the subproblem. This allows the subproblem to be solved by a method that is best suited to compute optimal schedules, without having to reformulate it as an MILP. We investigate the LBBD option by observing its behavior on a generic planning and scheduling problem in which scheduling takes place after the random events have been observed. The planning element is an assignment of tasks to facilities that occurs in the first stage. Tasks assigned to each facility are then scheduled in the second stage, subject to time windows. We assume that the task processing time is a random variable, but the LBBD approach is easily modified to accommodate other random elements, such as the release time. The subproblem decouples into a separate scheduling problem for each facility and each scenario. For greater generality, we suppose the recourse decision is a {\em cumulative} scheduling problem in which multiple tasks can run simultaneously on a single facility, subject to a limit on total resource consumption at any one time. We solve the first-stage problem by MILP, which is well suited for assignment problems. More relevant to the present study is our choice to solve the scheduling subproblem by constraint programming (CP), which has proved to be effective on a variety of scheduling problems, and is perhaps the state of the art in many cases. We therefore formulate the subproblem in a CP modeling language rather than as an MILP. In view of the past success of LBBD on a number of deterministic planning and scheduling problems, we test the hypothesis that it can obtain similar success on stochastic problems with many scenarios. Our computational study focuses on the minimum makespan problem as a proof of concept, but we show how LBBD is readily modified to accommodate other objectives, such as minimizing total tardiness or total assignment cost.
We also derive new logic-based Benders cuts for the minimum makespan problem that have not been used in previous work. In addition to standard LBBD, we experiment with {\em branch and check}, a variation of LBBD that solves the master problem only once and generates Benders cuts on the fly during the MILP branching process \citep{Hoo00,Tho01}. We find that both versions of LBBD are superior to the integer L-shaped method. In particular, branch and check is faster by several orders of magnitude, allowing significantly larger instances to be solved. We also conduct a variety of tests to identify factors that explain the superior performance of LBBD, the relative effectiveness of various Benders cuts, and the impact of modifying the integer L-shaped method in various ways. To our knowledge, this is the first computational comparison between LBBD and the integer L-shaped method on any kind of stochastic optimization problem. It also appears to be the first application of LBBD to two-stage stochastic optimization with a scheduling second-stage problem. The remainder of this paper is organized as follows. We introduce the stochastic planning and scheduling problem in Section \ref{Sec:Problem_definition}. This is followed by Section \ref{Sec:Algorithm}, where we propose LBBD-based solution methods for three variants of the stochastic planning and scheduling problem. We present the computational results in Section \ref{Sec:CompStudy} and give our concluding remarks in Section \ref{Sec:Conclusion}. \section{Previous Work} A wide range of problems can be formulated as two-stage stochastic programs. For theory and various applications, we refer the reader to \cite{birge2011introduction}, \cite{shapiro2009lectures}, \cite{prekopa2013stochastic}, and the references therein.
Allowing discrete decisions in the second-stage problem significantly expands the applicability of the two-stage stochastic framework, as for example to last-mile relief network design \citep{noyan2015stochastic} and vehicle routing with stochastic travel times \citep{laporte1992vehicle}. Benders decomposition \citep{benders1962partitioning} has long been applied to large-scale optimization problems \citep{geoffrion1974multicommodity,cordeau2001benders,binato2001new,contreras2011benders}. \cite{rahmaniani2017benders} provide an excellent survey of enhancements to the classical method. In particular, it has been applied to two-stage stochastic programs with linear recourse by means of the L-shaped method \citep{van1969shaped}. Its applicability was extended to integer recourse by the integer \mbox{L-shaped} method of \cite{laporte1993integer}, which was recently revisited and improved by \cite{angulo2016improving} and \cite{li2018improved}. Other Benders-type algorithms that have been proposed for integer recourse include disjunctive decomposition \citep{sen2005c3} and decomposition with parametric Gomory cuts \citep{gade2014decomposition}. The essence of these two methods is to convexify the integer second-stage problem using disjunctive cuts and Gomory cuts, respectively. Still other decomposition-based methods in the literature include progressive hedging for multi-stage stochastic convex programs \citep{rockafellar1991scenarios} and a dual decomposition method for multi-stage stochastic programs with mixed-integer variables \citep{caroe1999dual}. We refer the reader to \cite{kuccukyavuz2017introduction} for a review of two-stage stochastic mixed-integer programming. Logic-based Benders decomposition was introduced by \cite{Hoo00} and further developed in \cite{hooker2003logic}. 
Branch and check, a variant of LBBD, was also introduced by \cite{Hoo00} and first tested computationally by \cite{Tho01}, who coined the term ``branch and check.'' A general exposition of both standard LBBD and branch and check, with an extensive survey of applications, can be found in \cite{Hoo19a}. A number of these applications have basically the same mathematical structure as the planning and scheduling problem studied here, albeit generally without a stochastic element. In more recent work, \cite{atakan2017minimizing} focus on a one-stage stochastic model for single-machine scheduling in which they minimize the value-at-risk of several random performance measures. \cite{Bulbul2014} consider a two-stage chance-constrained mean-risk stochastic programming model for a single-machine scheduling problem, but the scheduling decisions do not occur in the second stage. Rather, the second-stage problem is a simple optimal timing problem that can be solved very rapidly. The deterministic version of the planning and scheduling problem we consider here is solved by LBBD in \cite{hooker2007planning} and \cite{CirCobHoo16}. We rely on some techniques from these studies. We are aware of three prior applications of LBBD to stochastic optimization. \cite{LomMilRugBen10} use LBBD to assign computational tasks to chips and to schedule the tasks assigned to each chip. However, this is not an instance of two-stage stochastic optimization, because the stochastic element appears in the first-stage assignment problem and is replaced with its deterministic equivalent. \cite{fazel2013solving} solve a stochastic location-routing problem with LBBD, but there is no actual recourse decision in the second stage, which only penalizes vehicles if the route determined by first-stage decisions exceeds their threshold capacity. \cite{GuoBodAleUrb19} use LBBD to schedule patients in operating rooms, where the random element is the surgery duration.
Here the scheduling takes place in the master problem, where patients are assigned operating rooms and surgery dates. The subproblem checks whether there is time during the day to perform all the surgeries assigned to a given operating room, and if not, finds a cost-minimizing selection of surgeries to cancel on that day. Unstrengthened nogood cuts are used as LBBD cuts, along with classical Benders cuts obtained from a network flow model of the subproblem that is derived from a binary decision diagram. The present study therefore appears to be the first application of LBBD to two-stage stochastic optimization with scheduling in the second stage. It is also the first to compare any application of stochastic LBBD with the integer L-shaped method. \begin{comment} \textbf{Assumptions.} In this study, we make the following assumptions about the structure of the two-stage stochastic programs \eqref{TwoStage:General} that we consider: \begin{enumerate}[label=(A{{\arabic*}})] \item The parameters of the second-stage problem come from a known finite probability distribution. \label{A:ProbDist} \vspace{-0.7cm} \item The domain of the first-stage decisions $Y$ is finite under all possible random events. \label{A:DomainY} \end{enumerate} Assumption \ref{A:ProbDist} is well studied in the stochastic programming literature \citep[see, e.g.,][]{kleywegt2002sample}. Even though this assumption is an important restriction, having a finite discrete distribution can be justified under the goal of solving stochastic optimization problems using sample average approximations. Assumption \ref{A:DomainY} is also restrictive; however, many discrete optimization problems do have finite domains. In fact, we make no assumptions on the structure of the second-stage problem; it can be a nonlinear, non-convex optimization problem as long as its optimal objective function value can be computed in a reasonable amount of time.
Lastly, we emphasize that in our setting all second-stage parameters can be random. \end{comment} \section{The Problem} \label{Sec:Problem_definition} We study a two-stage stochastic programming problem that, in general, has the following form: \begin{equation} \min_{\vx\in X} \big\{f(\vx) + \mathbb{E}_{\omega}[Q(\vx,\omega)] \big\} \label{TwoStage:General} \end{equation} where $Q(\vx,\omega)$ is the optimal value of the second-stage problem: \begin{equation} Q(\vx,\omega) = \min_{\vy\in Y(\omega)} \big\{g(\vy)\big\} \end{equation} Variable $\vx$ represents the first-stage decisions, while $\vy$ represents second-stage decisions that are made after the random variable $\omega$ is realized. We suppose that $\omega$ ranges over a finite set $\Omega$ of possible scenarios, where each scenario $\omega$ has probability $\pi_{\omega}$. The first-stage problem (\ref{TwoStage:General}) may therefore be written \[ \min_{\vx\in X} \Big\{f(\vx) + \sum_{\omega\in \Omega} \pi_{\omega} Q(\vx,\omega) \Big\} \] We consider a generic planning and scheduling problem in which the first stage assigns tasks to facilities, and the second stage schedules the tasks assigned to each facility. The objective is to minimize makespan or total tardiness. We assume that only the processing times are random in the second stage, but a slight modification of the model allows for random release times and/or deadlines as well. We therefore suppose that each job $j$ has a processing time $p^{\omega}_{ij}$ on facility $i$ in scenario $\omega$ and must be processed during the interval $[r_j,d_j]$. For greater generality, we allow for cumulative scheduling, where each job $j$ consumes resources $c_{ij}$ on facility $i$, and the total resource consumption must not exceed $K_i$. To formulate the problem, we let variable $x_j$ be the facility to which job $j\in J$ is assigned.
The first-stage problem is \begin{equation} \min_{\vx} \Big\{g(\vx) + \sum_{\omega\in \Omega} \pi_{\omega}Q(\vx,\omega) \; \Big| \; x_j\in I, \;\mbox{all} \; j\in J \Big\} \label{eq:P&S} \end{equation} where $I$ indexes the facilities. In the second-stage problem, we let $s_j$ be the time at which job $j$ starts processing. We also let $J_i(\vx)$ be the set of jobs assigned to facility $i$, so that $J_i(\vx)=\{j\in J \;|\; x_j=i\}$. Thus \[ Q(\vx,\omega) = \min_{\bm{s}} \Big\{ h(\bm{s},\vx,\omega) \;\Big|\; s_j\in [r_j,d_j-p^{\omega}_{x_jj}], \;\mbox{all}\;j\in J; \;\; \hspace{-3ex} \sum_{\substack{j\in J_i(\vx)\\s_j\leq t\leq s_j+p^{\omega}_{ij}}} \hspace{-3ex} c_{ij} \leq K_i, \; \mbox{all}\; i\in I,\;\mbox{all}\; t\Big\} \] The two-stage problem (\ref{TwoStage:General}) is risk-neutral in the sense that it is concerned only with minimizing expected value. However, the LBBD approach presented here can be adapted to a more general class of problems that incorporate a dispersion statistic $\mathbb{D}_{\omega}$ that measures risk, such as variance, as in the classical Markowitz model \cite{}. Then the problem (\ref{TwoStage:General}) becomes \begin{equation} \min_{\vx\in X} \big\{f(\vx) + (1-\lambda)\mathbb{E}_{\omega}[Q(\vx,\omega)] + \lambda \mathbb{D}_{\omega}[Q(\vx,\omega)]\big\} \label{TwoStage:Risk} \end{equation} and the first-stage planning and scheduling problem (\ref{eq:P&S}) becomes \begin{equation} \min_{\vx} \Big\{g(\vx) + (1-\lambda)\sum_{\omega\in \Omega} \pi_{\omega}Q(\vx,\omega) + \lambda\mathbb{D}_{\omega}\big[Q(\vx,\omega)\big] \; \Big| \; x_j\in I, \;\mbox{all} \; j\in J \Big\} \label{eq:P&Srisk} \end{equation} Formulations (\ref{TwoStage:Risk}) and (\ref{eq:P&Srisk}) also accommodate robust optimization, as for example when $\lambda=1$ and \[ \mathbb{D}_{\omega}(Q(\vx,\omega)) = \max_{\omega\in\Omega}\{Q(\vx,\omega)\} \] and $\Omega$ is an uncertainty set. See \cite{ahmed2006convexity} for a discussion of various tractable and intractable risk measures.
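As a toy numerical illustration of the mean-risk objective $(1-\lambda)\mathbb{E}_{\omega}[Q] + \lambda \mathbb{D}_{\omega}[Q]$ with the worst-case dispersion measure discussed above, the following sketch evaluates the objective for a fixed first-stage decision over a small scenario set. The scenario names, probabilities, and second-stage values are invented for illustration; they are not data from the paper.

```python
# Evaluate (1 - lambda) * E[Q(x, omega)] + lambda * max_omega Q(x, omega)
# for a fixed first-stage decision x, over three hypothetical scenarios.
pi = {"low": 0.5, "med": 0.3, "high": 0.2}    # scenario probabilities pi_omega
Q = {"low": 10.0, "med": 14.0, "high": 25.0}  # second-stage values Q(x, omega)

def mean_risk(Q, pi, lam):
    # Expectation term E[Q] and worst-case dispersion term max_omega Q.
    expectation = sum(pi[w] * Q[w] for w in pi)
    worst_case = max(Q.values())
    return (1 - lam) * expectation + lam * worst_case

print(mean_risk(Q, pi, 0.0))  # risk-neutral case: E[Q] = 14.2
print(mean_risk(Q, pi, 1.0))  # robust case: max Q = 25.0
```

Sweeping $\lambda$ from 0 to 1 interpolates between the risk-neutral objective of (\ref{TwoStage:General}) and the robust worst-case objective.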
\begin{comment} We next elaborate on our decision framework under the stochastic version of the planning and scheduling problem. In our setting, each task $j \in J=\{1,\dots,n\}$ is assigned to a facility $i \in I=\{1,\dots,m\}$ before the values of the uncertain parameters are known. Then, after the values of the uncertain parameters are observed, we schedule the tasks assigned to each facility by taking the realizations of the uncertain parameters into account. In the stochastic programming literature, such decisions are known as recourse decisions. Note that in our setting we consider cumulative scheduling problems where facilities can serve more than one task at a time as long as the total consumption rate of tasks does not exceed the capacity of the facilities at any given time. We assume that only the time it takes to process a task on a facility depends on the scenario $s \in S$; hence we denote the processing times by $p_{ij}^s$. The remaining parameters, the rate of consumption of each task on each facility ($c_{ij}$), the release time $(r_j)$ and deadline $(d_j)$ of each task, as well as the capacity $(K_i)$ of each facility, are deterministic and assumed to be the same under all scenarios. Therefore, we have that $\bm{\xi}^s = \bm{p}^s,~s \in S$. Note, however, that the release times and deadlines can be assumed to be stochastic with a very slight modification of our model. Formally speaking, our stochastic planning and scheduling problem is a non-trivial extension of the deterministic model proposed in \cite{hooker2007planning}. In particular, in this study we employ a two-stage stochastic programming approach for the stochastic planning and scheduling problem where task-facility assignment decisions are made in the first stage and scheduling decisions on each facility are made in the second stage.
We define the decision variables of the proposed stochastic planning and scheduling problem as follows: \begin{align*} y_{ij}=& \begin{cases} 1, & \text{if task $j$ is assigned to facility $i$} \\ 0, & \text{otherwise} \end{cases} \\ x_{ijt}^s=& \begin{cases} 1, & \text{if task $j$ is scheduled to start at time $t$ on facility $i$ under scenario $s$} \\ 0, & \text{otherwise} \end{cases} \end{align*} For any given first-stage decision vector $(\vy)$ and the random input vector $(\bm{\xi}(\omega))$, we denote the random second-stage objective function value by $Q(\vy, \bm{\xi}(\omega))$. Furthermore, given a realization of the random input vector $\bm{\xi}^s$ under scenario $s \in S$, we formulate the second-stage problem as follows: \begin{subequations} \label{SP:generalForm} \begin{align} Q(\vy, \bm{\xi}^s)= \quad \text{minimize} &&& h(\vx, \bm{\xi}^s) & \label{SP:generalForm:objective}\\ \text{subject to} &&& x_{ijt}^s \leq y_{ij}, \quad i \in I,~ j \in J,~ t \in T, \label{SP:generalForm:facility}\\ &&& \sum_{i \in I} \sum_{t \in T} x_{ijt}^s = 1, \quad j \in J, \label{SP:generalForm:scheduling}\\ &&&\sum_{j \in J} \sum_{t^\prime \in T_{ijt}^s} c_{ij} x_{ijt^\prime}^s \leq K_i, \quad i \in I,~ t \in T, \label{SP:generalForm:capacity}\\ &&& x_{ijt}^s = 0, \quad i \in I, ~ j \in J, ~ t \in T\\ &&& \qquad \qquad \qquad \text{with}~ t \leq r_j ~\text{or}~ t > d_j - p_{ij}^s, \label{SP:generalForm:timeWindow}\\ &&& x_{ijt}^s \in \{0,1\}, \quad i \in I,~ j \in J,~ t \in T. \label{SP:generalForm:nonnegativity} \end{align} \end{subequations} Here, \eqref{SP:generalForm:objective} denotes an objective function that minimizes the desired performance measure, e.g. maximum makespan or total tardiness. We defer the discussion on the modification of the second-stage objective function with respect to different scheduling performance measures to Section \ref{Sec:Algorithm}. 
Constraints \eqref{SP:generalForm:facility} make sure that the tasks are scheduled only on the facilities that they are assigned in the first-stage. Constraints \eqref{SP:generalForm:scheduling} guarantee that each task is scheduled to start processing only once. Capacity restrictions on each facility are taken into consideration in constraint set \eqref{SP:generalForm:capacity}. Time-window constraints \eqref{SP:generalForm:timeWindow} play a dual role in ensuring that no task starts before its release time and all tasks finish before their deadlines. Given the description of the second-stage problem, now we can formulate our two-stage stochastic planning and scheduling problem $\ensuremath{\left(\mathbf{SPSP}\right)}{}$. \begin{subequations} \label{twoStageFormulation} \begin{align} \ensuremath{\left(\mathbf{SPSP}\right)}{}= \quad \text{minimize} &&& g(\vy) + \mathbb{E} \Big[Q(\vy, \bm{\xi}(\omega)) \Big] &&& \label{twoStage:objective}\\ \text{subject to} &&& \sum_{i \in I} y_{ij} = 1, \quad j \in J, \label{twoStage:assignment}\\ &&& y_{ij} \in \{0,1\}, \quad i \in I,~ j \in J. \label{twoStage::nonnegativity} \end{align} \end{subequations} The objective function \eqref{twoStage:objective} minimizes a linear first-stage cost function plus the expected second-stage cost. Constraints \eqref{twoStage:assignment} assign each task to a single facility and binary restrictions on the assignment decisions are given in \eqref{twoStage::nonnegativity}. It is important to note this model is risk-neutral since we minimize the expected total cost. However, our two-stage framework allows us to employ a risk measure in the objective function. We discuss this issue in Section \ref{Sec:Algorithm:MeanRisk}. We conclude this section by elaborating on the formulation of the second-stage problem. As discussed in Section \ref{Sec:Intro}, the second-stage problem need not be a mixed-integer program. 
In fact, we present CP the formulations of \eqref{SP:generalForm} for different objective functions in the next section. We make use of the CP formulations in our proposed solution method since they are superior to MILP formulations in solving the cumulative scheduling problems. \end{comment} \section{Logic-based Benders Decomposition} \label{Sec:Algorithm} \begin{comment} In this section we outline the LBBD algorithm that we propose to solve $\ensuremath{\left(\mathbf{SPSP}\right)}{}$. We first summarize the general framework of the LBBD algorithm. Then, we demonstrate how this general framework is applied to the three variants of $\ensuremath{\left(\mathbf{SPSP}\right)}{}$: Minimum cost problem, minimum makespan problem, and minimum tardiness problem. We cover some enhancement strategies that have proved to be useful in related works prior to this study. We discuss the integer L-Shaped method and its relation to LBBD. Lastly, we illustrate how LBBD algorithm can be modified to solve mean-risk two-stage optimization problems. We begin by revisiting the convergence properties of LBBD. We refer readers to \cite{hooker2003logic} for a more detailed discussion on this issue. \end{comment} Logic-based Benders decomposition (LBBD) is designed for problems of the form \begin{equation} \label{LBBD:general} \min_{\vx,\vy} \big\{ f(\vx,\vy) \; \big| \; C(\vx,\vy), \; \vx\in D_x, \; \vy\in D_y \big\} \end{equation} where $C(\vx,\vy)$ denotes a set of constraints that contain variables $\vx$ and $\vy$, and $D_y$ and $D_x$ represent variable domains. The rationale behind dividing the variables into two groups is that once some of the decisions are fixed by setting $\vx = \bar{\vx}$, the remaining {\em subproblem} becomes much easier to solve, perhaps by decoupling into smaller problems. In our study, the smaller problems will correspond to scenarios and facilities. 
The subproblem has the form \begin{equation} \label{LBBD:subproblem} \mathrm{SP}(\bar{\vx}) = \min_{\vy} \big\{ f(\bar{\vx},\vy) \;\big|\; C(\bar{\vx},\vy), \; \vy\in D_y \big\} \end{equation} The key to LBBD is analyzing the subproblem solution so as to find a function $B_{\bar{\vx}}(\vx)$ that provides a lower bound on $f(\vx,\vy)$ for any given $\vx\in D_{x}$. The bound must be sharp for $\vx=\bar{\vx}$; that is, $B_{\bar{\vx}}(\bar{\vx})=\mathrm{SP}(\bar{\vx})$. The bounding function is derived from the {\em inference dual} of the subproblem in a manner discussed below. In classical Benders decomposition, the subproblem is an LP problem, and the inference dual is the LP dual. Each iteration of the LBBD algorithm begins by solving a {\em master problem}: \begin{equation} \label{eq:relaxMaster} \mathrm{MP}(\bar{\bm{X}}) = \min_{\vx,\beta} \big\{ \beta \;\big|\; \beta\geq B_{\bar{\vx}}(\vx), \; \mbox{all}\; \bar{\vx}\in \bar{\bm{X}}; \;\; \vx\in D_x \big\} \end{equation} where the inequalities $\beta\geq B_{\bar{\vx}}(\vx)$ are {\em Benders cuts} obtained from previous solutions $\bar{\vx}$ of the subproblem. There may be several cuts for a given $\bar{\vx}$, but for simplicity we assume in this section there is only one. Initially, the set $\bar{\bm{X}}$ can be empty, or it can contain a few solutions obtained heuristically to implement a ``warm start.'' The optimal value MP$(\bar{\bm{X}})$ of the master problem is a lower bound on the optimal value of the original problem (\ref{LBBD:general}). If $\bar{\vx}$ is an optimal solution of the master problem, the corresponding subproblem is then solved to obtain SP$(\bar{\vx})$, which is an upper bound on the optimal value of (\ref{LBBD:general}). A new Benders cut $\beta\geq B_{\bar{\vx}}(\vx)$ is generated for the master problem and $\bar{\vx}$ added to $\bar{\bm{X}}$ in (\ref{eq:relaxMaster}). 
The process repeats until the lower and upper bounds provided by the master problem and subproblem converge; that is, until MP$(\bar{\bm{X}})=\min_{\bar{\vx}\in\bar{\bm{X}}} \{\mathrm{SP}(\bar{\vx})\}$. The following is proved in \cite{Hoo00}: \begin{theorem} If $D_x$ is finite, the LBBD algorithm converges to an optimal solution of (\ref{LBBD:general}) after a finite number of iterations. \end{theorem} The inference dual of the subproblem seeks the tightest bound on the objective function that can be inferred from the constraints. Thus the inference dual is \begin{equation} \mathrm{DSP}(\bar{\vx}) = \max_{P\in\mathcal{P}} \Big\{ \gamma \; \Big| \; \big(C(\bar{\vx},\vy),\; \vy\in D_y\big) \stackrel{P}{\Rightarrow} \big( f(\bar{\vx},\vy)\geq\gamma \big) \Big\} \label{eq:dual} \end{equation} where $A\stackrel{P}{\Rightarrow}B$ indicates that proof $P$ deduces $B$ from $A$. The inference dual is always defined with respect to a set $\mathcal{P}$ of valid proofs. In classical linear programming duality, valid proofs consist of nonnegative linear combinations of the inequality constraints in the problem. We assume a strong dual, meaning that SP$(\bar{\vx})=\mathrm{DSP}(\bar{\vx})$. The dual is strong when the inference method is complete. For example, the classical Farkas Lemma implies that nonnegative linear combination is a complete inference method for linear inequalities. Indeed, any exact optimization method is associated with a complete inference method that it uses to prove optimality, perhaps one that involves branching, cutting planes, constraint propagation, and so forth. In the context of LBBD, the proof $P$ that solves the dual (\ref{eq:dual}) is the proof of optimality the solver obtains for the subproblem (\ref{LBBD:subproblem}). 
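The master--subproblem iteration just described can be sketched in a few lines of Python. The solver callbacks (\texttt{solve\_master}, \texttt{solve\_subproblem}, \texttt{make\_cut}) and the toy instance below are our own illustrative stand-ins, not part of the formulation:

```python
def lbbd(solve_master, solve_subproblem, make_cut, tol=1e-6, max_iters=100):
    """Generic LBBD loop: the master problem yields a lower bound,
    the subproblem a sharp upper bound at x_bar; Benders cuts
    accumulate until the two bounds meet."""
    cuts, ub, best = [], float("inf"), None
    for _ in range(max_iters):
        x_bar, lb = solve_master(cuts)        # MP(X_bar): candidate and lower bound
        sp_val = solve_subproblem(x_bar)      # SP(x_bar): upper bound
        if sp_val < ub:
            ub, best = sp_val, x_bar
        if ub - lb <= tol:                    # bounds have converged
            break
        cuts.append(make_cut(x_bar, sp_val))  # beta >= B_{x_bar}(x)
    return best, ub

# Toy instance: x in {0, 1}, subproblem cost 3 at x=0 and 1 at x=1,
# with simple nogood cuts that are sharp at x_bar and vacuous elsewhere.
def solve_master(cuts):
    candidates = ((max((c(x) for c in cuts), default=0.0), x) for x in (0, 1))
    beta, x = min(candidates)
    return x, beta

solve_subproblem = lambda x: 3.0 if x == 0 else 1.0
make_cut = lambda x_bar, val: (lambda x: val if x == x_bar else 0.0)
```

On this toy instance the loop visits $x=0$, adds a cut, moves to $x=1$, and terminates with optimal value 1.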
The bounding function $B_{\bar{\vx}}(\vx)$ is derived by observing what bound on the optimal value {\em this same proof} $P$ can logically deduce for a given $\vx$, whence the description ``logic-based.'' In practice, the solver may not reveal how it proved optimality, or the proof may be too complicated to build a useful cut. One option in such cases is to tease out the structure of the proof by re-solving the subproblem for several values of $\vx$ and observing the optimal value that results. This information can be used to design {\em strengthened nogood cuts} that provide useful bounds for many values of $\vx$ other than $\bar{\vx}$. Another approach is to use {\em analytical Benders cuts}, which deduce bounds on the optimal value when $\bar{\vx}$ is changed in certain ways, based on structural characteristics of the subproblem and its current solution. We will employ both of these options. {\em Branch and check} is a variation of LBBD that solves the master problem only once and generates Benders cuts on the fly. It is most naturally applied when the master problem is solved by branching. Whenever the branching process discovers a solution $\bar{\vx}$ that is feasible in the current master problem, the corresponding subproblem is solved to obtain one or more Benders cuts, which are added to the master problem. Branching then continues and terminates in the normal fashion, all the while satisfying Benders cuts as they accumulate. Branch and check can be superior to standard LBBD when the master problem is much harder to solve than the subproblems. \begin{comment} Let $B_{\bar{y}}(y)$ be the bounding function for $y = \bar{y}$. In principle, a bounding function that satisfies the following two properties is necessary to obtain a convergent LBBD algorithm. \vspace{2mm} \begin{property} \label{Property:1} $B_{\bar{y}}(y)$ provides a valid lower bound on $f(y,x)$ for any given $y \in D_y$. 
That is, $f(y,x) \geq B_{\bar{y}}(y)$ for any feasible $(y,x)$ in \eqref{LBBD:general}. \end{property} \vspace{2mm} \begin{property} \label{Property:2} $B_{\bar{y}}(\bar{y}) = SP(\bar{y})$. \end{property} Let $\beta$ denote the objective function of \eqref{LBBD:general}. Then the inequality $\beta \geq B_{\bar{y}}(y)$ is called a Benders cut. Now suppose that $D_y$ is finite. Then, according to Property \ref{Property:2}, this implies that we have finitely many Benders cuts that we can generate, one for each $y \in D_y$. The following problem that contains all possible Benders cuts is called \emph{the master problem} in the literature: \begin{equation} \label{LBBD:masterProblem} \begin{alignedat}{4} \ensuremath{\left(\mathbf{MP}\right)} \qquad \text{minimize} \qquad &&& \beta &&& \\ \text{subject to} \qquad &&& \beta \geq B_{\bar{y}}(y), \quad y \in D_y,\\\ &&& y \in D_y, \quad \beta \in \mathbb{R}. \end{alignedat} \end{equation} The master problem \eqref{LBBD:masterProblem} is a reformulation of the original problem \eqref{LBBD:general}. That is, one can solve \eqref{LBBD:masterProblem} and obtain an optimal solution to \eqref{LBBD:general}. However, in practice there are exponentially many Benders cuts even for very small sized problems and a delayed constraint generation scheme is necessary to make use of this reformulation. Typically, the master problem is relaxed by removing all Benders cuts to obtain \emph{the relaxed master problem}. In the course of the LBBD algorithm, the relaxed master problem is solved to obtain a feasible $y$ decision at the beginning of each iteration. Then the subproblem \eqref{LBBD:subproblem} is modified and solved to optimality. The optimal value of the relaxed master problem provides a lower bound on the optimal value of \eqref{LBBD:general}. This is true, since the bounding function $B_{\bar{y}}(y)$ satisfies Property \ref{Property:1} and \ref{Property:2}. 
Furthermore, the solution of the master problem and the subproblem together provides a feasible solution to \eqref{LBBD:general}, hence this solution gives an upper bound on the optimal value of \eqref{LBBD:general}. The algorithm terminates when the lower bound and the upper bound are close enough to each other. Otherwise, a Benders cut is added to the relaxed master problem and the algorithm moves to a new iteration by re-solving the modified relaxed master problem. The following theorem is due to \cite{hooker2007planning}. \begin{theorem} \label{theorem} If the bounding function $B_{\bar{y}}(y)$ satisfies Properties \ref{Property:1} and \ref{Property:2} in each iteration of the Benders algorithm, and $D_y$ is finite, then the Benders algorithm converges to the optimal value of \eqref{LBBD:general} after finitely many steps. \end{theorem} An alternative interpretation of the delayed constraint generation procedure - the Benders type method - is the following: For any given solution of the relaxed master problem, we solve the subproblem to check if there is any violated Benders cut. If we identify a violated Benders cut, we add it to the relaxed master problem and resolve it. This procedure, depicted in Algorithm \ref{Alg:Classical}, continues until we cannot find any violated Benders cuts. 
\begin{algorithm}[http] \caption{Standard implementation of Benders type methods.} \label{Alg:Classical} Initialization\; CutFound $\leftarrow{}$ True\; UB $\leftarrow{} \infty$, LB $\leftarrow{} -\infty$\; \While{\normalfont{CutFound} \textit{is true}} { Solve the relaxed master problem\; Obtain optimal values of $\bar{y}$ and $\bar{\beta}$\; LB $\leftarrow \max(\text{LB},~ \bar{\beta})$\; Update the subproblem and solve it to obtain $SP(\bar{y})$\; UB $\leftarrow \min(\text{UB}, ~SP(\bar{y}))$\; \eIf{\normalfont{UB} $-$ \normalfont{LB} $> \epsilon$} { CutFound $\leftarrow{}$ True\; Add a Benders cut\; } { CutFound $\leftarrow{}$ False\; The current solution is optimal\; } } \end{algorithm} We conclude this section by giving a few remarks on the comparison between the classical Benders decomposition and the logic-based Benders decomposition. As discussed in Section \ref{Sec:Intro}, LBBD generalizes the classical Benders decomposition approach. In particular, in order to implement the classical Benders decomposition algorithm, the subproblem has to be a linear program, because the classical Benders cuts rely on the linear programming duality. For this reason, adapting the classical Benders decomposition to different problems is in general a straightforward task. On the other hand, LBBD does not rely on linear programming duality. This widens the range of applications one can wish to solve using LBBD. Furthermore, it is important to highlight that LBBD cuts are problem specific. A typical adaptation of LBBD requires identification of a valid bounding function on the objective function of the original problem, and this is not a trivial task in practice. However, by the same reason, LBBD allows us to make use of the structure of the optimization problem on hand. One of the promises of this work is to empower practitioners by showcasing LBBD as a solid solution method for solving two-stage stochastic programs with integer recourse. 
In particular, when the second-stage problem is a tough combinatorial optimization problem, it is crucial to benefit from other disciplines such as constraint programming and exploit the structure of the problem as much as possible. LBBD is a valuable tool to fulfil this task. \subsection{Algorithmic Enhancements} \label{Sec:Algorithm:Enhancements} \textbf{Single search tree.} In many cases, the relaxed master problem is solved from scratch at every iteration of Benders type methods, see Algorithm \ref{Alg:Classical}. Typically, solving the relaxed master problem becomes the bottleneck of the algorithm, since the relaxed master problem becomes much harder to solve as we add Benders cuts. This problem is especially amplified when the relaxed master problem includes integer variables, because in this case we solve an integer program at every iteration, thus a new branch-and-bound tree is built and solved to obtain the optimal solution of the relaxed master problem. As an alternative to this approach, one can implement a similar algorithm in which the relaxed master problem is only solved once and the subproblem is solved to look for violated Benders cuts whenever an integer feasible solution is identified, see Algorithm \ref{Alg:BranchAndCheck}. This approach is called branch-and-check algorithm in LBBD literature \citep{ottosson2002mixed}. However, alternative names exist for the single search tree approach, see for example branch-and-cut decompostion algorithm \citep{luedtke2014branch}, Benders decomposition-based branch-and-cut algorithm \citep{elcci2018chance} or simply branch-and-cut algorithm \citep{fortz2009improved}. 
\begin{algorithm}[http] \caption{Single search tree implementation of Benders type methods.} \label{Alg:BranchAndCheck} Initialization\; Invoke Solver to solve the relaxed master problem\; \While{Solver determines that optimality gap is greater than the threshold} { Identify a new candidate incumbent solution $\bar{y}$ and $\bar{\beta}$\; Update the subproblem and solve it to obtain $SP(\bar{y})$\; \eIf{$\bar{\beta} < SP(\bar{y})$} { Add a Benders cut\; } { Set the current solution as the incumbent solution \; } } \end{algorithm} The implementation of the single search tree approach is not so much different than the standard approach nowadays, since commercial softwares such as \texttt{CPLEX} and \texttt{GUROBI} allow users to interact with the branch and bound tree. To this end, Lazy constraint callback feature of the MILP solvers is used to interact with the branch-and-bound algorithm whenever an integer feasible solution is identifed. Typically, the single search tree approach will solve more subproblem and identify more violated Benders cuts. However, the algorithm might still take less time to find the optimal solution, because we solve the relaxed master problem only one time. Using a commercial solver to implement the single search tree approach also allows us to benefit from many other advancements from the integer programming literature which are readily implemented in commercial MILP solvers, such as advanced branching rules, preprocessing, addition of general purpose cutting planes, etc. We present results on the comparison of the single search tree approach and the standard approach in Section \ref{Sec:CompStudy:Performance}. \end{comment} A common enhancement of LBBD and other Benders methods is a {\em warm start}, which includes initial Benders cuts in the master problem. Recent studies that benefit from this technique include \cite{angulo2016improving}, \cite{elcci2018chance}, and \cite{HecHooKim19}. 
Benders cuts can also be aggregated before being added to the master problem, a technique first explored in \cite{birge1988multicut}. A particularly useful enhancement for LBBD is to include a relaxation of the subproblem in the master problem, where the relaxation is written in terms of the master problem variables \citep{ hooker2007planning,fazel2012using}. We employ this technique in the present study. \begin{comment} To this end, there are two extreme ways to add Benders cuts. One is called the single-cut approach where Benders cuts for all scenarios are aggregated. The other extreme is called the multi-cut approach where none of the Benders cuts are aggregated. The relaxed master problems for each case is presented below. \begin{multicols}{2} \begin{equation} \label{TwoStage:SingleCut} \begin{alignedat}{4} & {\text{minimize}} \qquad&& cy + \theta &\\ & \text{subject to} && y \in Y,&\\ &&&\theta \geq \sum_{s \in S} p^s B^s_{\bar{y}}(y) - c\bar{y}. & \end{alignedat} \end{equation} \break \begin{equation} \label{TwoStage:MultiCut} \begin{alignedat}{4} & {\text{minimize}} \qquad&& cy + \sum_{s \in S} p^s \theta^s &\\ & \text{subject to} && y \in Y,&\\ &&&\theta^s \geq B^s_{\bar{y}}(y) - c\bar{y}, \quad s \in S. & \end{alignedat} \end{equation} \end{multicols} The first-stage objective and constraints are exactly the same in two relaxed master problems. The only difference is that in single-cut approach \eqref{TwoStage:SingleCut}, we have a single auxiliary variable that captures the entire subproblem cost, whereas we have $|S|$ many auxiliary variables in the multi-cut approach \eqref{TwoStage:MultiCut} in which each variable captures the cost under a particular scenario. However, we can implement a partial aggregation scheme where we partition the scenarios so that there is an auxilary variable for each partition that captures the second-stage cost. 
\textbf{Obtaining lower bounds.} We solve the following lifted version of the second-stage problem to obtain a valid lower bound on the optimal objective function value of the second-stage problem under scenario $s \in S$: \begin{subequations} \label{lowerBounding} \begin{align} \Delta^s= \quad \text{minimize} &&& \eqref{SP:generalForm:objective}\\ \text{subject to} &&& \eqref{SP:generalForm:facility} - \eqref{SP:generalForm:timeWindow}\\ &&& \eqref{twoStage:assignment}\\ &&& y_{ij} \in \{0,1\}, \quad i \in I,~ j \in J, \\ &&& x_{ijt}^s \in \{0,1\}, \quad i \in I,~ j \in J,~ t \in T. \end{align} \end{subequations} Having solved \eqref{lowerBounding} and obtained the values of $\Delta^s, ~ s \in S$, we add the following lower bounding cuts to the relaxed master problem: $$ \theta^s \geq \Delta^s, \quad s \in S.$$ Note that computing these lower bounds is crucial in implementing the integer L-Shaped method. We elaborate on this issue in Section \ref{Sec:intLShaped}. In the following sections, we will focus on three variants of $\ensuremath{\left(\mathbf{SPSP}\right)}{}$. In particular, we will formulate the deterministic equivalent formulations, give constraint programming formulations of the second-stage problems, derive Benders cuts, and provide a linear relaxation of the second-stage problems for each of the variants. \end{comment} \section{Benders Formulation of Planning \& Scheduling} We apply LBBD to the generic planning and scheduling problem by placing the assignment decision in the master problem and the scheduling decision in the subproblem. The master problem is therefore \[ \min_{\vx} \Big\{ g(\vx) + \sum_{\omega\in\Omega} \pi_{\omega}\beta_{\omega} \; \Big| \; \mbox{Benders cuts}; \; \mbox{subproblem relaxation}; \; x_j\in I, \;\mbox{all}\; j\in J \Big\} \] The Benders cuts provide lower bounds on each $\beta_{\omega}$. The cuts and subproblem relaxation are somewhat different for each variant of the problem we consider below. The scheduling subproblem decouples into a separate problem for each facility and scenario. If $\bar{\vx}$ is an optimal solution of the master problem, the scheduling problem for facility $i$ and scenario $\omega$ is \[ \mathrm{SP}_{i\omega}(\bar{\vx}) = \min_{\bm{s}} \Big\{ h_i(\bm{s},\bar{\vx},\omega) \;\Big| \; s_j\in [r_j, d_j-p^{\omega}_{ij}], \; \mbox{all}\; j\in J_i(\bar{\vx}); \hspace{-2ex} \sum_{\substack{j\in J_i(\bar{\vx})\\ s_j\leq t< s_j+p^{\omega}_{ij}}} \hspace{-3ex} c_{ij} \leq K_i, \;\mbox{all}\; t \Big\} \] We solve the master problem and subproblem by formulating the former as an MILP problem and the latter as a CP problem. In the master problem, we let variable $x_{ij}=1$ when task $j$ is assigned to facility $i$. 
The master problem becomes \begin{equation} \begin{array}{ll} \mbox{minimize} & {\displaystyle \hat{g}(\vx) + \sum_{\omega\in\Omega} \pi_{\omega}\beta_{\omega} } \vspace{0.5ex} \\ \mbox{subject to} & {\displaystyle \sum_{i\in I} x_{ij} = 1, \;\; j\in J } \vspace{0.5ex} \\ & \mbox{Benders cuts} \vspace{0.5ex} \\ & \mbox{subproblem relaxation} \vspace{0.5ex} \\ & x_{ij}\in \{0,1\}, \; i\in I, \; j\in J \end{array} \label{eq:MILP} \end{equation} where $\vx$ now denotes the matrix of variables $x_{ij}$. If $\bar{\vx}$ is an optimal solution of the master problem, the subproblem for each facility $i$ and scenario $\omega$ becomes \begin{equation} \begin{array}{ll} \text{minimize} & \hat{h}_i(\bm{s},\bar{\vx},\omega) \vspace{0.5ex} \\ \text{subject to} & \text{cumulative}\Big(\big(s_j\,\big|\,j \in J_i(\bar{\vx})\big), \, \big(p^{\omega}_{ij}\,\big|\,j \in J_i(\bar{\vx})\big), \,\big(c_{ij}\,\big|\,j \in J_i(\bar{\vx})\big), \, K_i \Big) \vspace{0.5ex} \\ & s_j \in [r_j, d_j-p^{\omega}_{ij} ], \; j \in J_i(\bar{\vx}) \end{array} \label{eq:CP} \end{equation} The optimal value of (\ref{eq:CP}) is again SP$_{i\omega}(\bar{\vx})$. The cumulative global constraint in (\ref{eq:CP}) is a standard feature of CP models and requires that the total resource consumption at any time on facility $i$ be at most $K_i$. To solve a problem (\ref{eq:P&Srisk}) that incorporates risk, one need only replace the objective function of (\ref{eq:MILP}) with \[ \hat{g}(\vx) + (1-\lambda)\sum_{\omega\in\Omega} \pi_{\omega}\beta_{\omega} + \lambda\mathbb{D}_{\omega} [\beta_{\omega}] \] and otherwise proceed as in the risk-neutral case. \subsection{Minimum Makespan Problem} \label{Sec:Algorithm:Makespan} We begin by considering a minimum makespan problem in which the tasks have release times and no deadlines. The first-stage objective function is $g(\vx)=0$, and so we have $\hat{g}(\vx)=0$ in the MILP model (\ref{eq:MILP}). 
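For intuition, the single-facility scheduling subproblem with the cumulative capacity condition can be brute-forced on tiny instances. The enumerator below is our own illustrative stand-in for the CP model, not the solver used in this study:

```python
from itertools import product

def min_makespan(jobs, capacity, horizon):
    """Brute-force SP for one facility and scenario: jobs is a list of
    (release, processing, consumption) triples; returns the minimum
    makespan over all start vectors, or None if none is feasible."""
    latest = horizon + max(p for _, p, _ in jobs)
    best = None
    for starts in product(*[range(r, horizon + 1) for r, _, _ in jobs]):
        # resource used at time t: jobs with s_j <= t < s_j + p_j
        if all(
            sum(c for s, (_, p, c) in zip(starts, jobs) if s <= t < s + p)
            <= capacity
            for t in range(latest)
        ):
            makespan = max(s + p for s, (_, p, _) in zip(starts, jobs))
            best = makespan if best is None else min(best, makespan)
    return best

# Two unit-weight tasks can share the facility; the heavy task must run alone.
print(min_makespan([(0, 2, 1), (0, 2, 1), (1, 2, 2)], capacity=2, horizon=6))  # prints 4
```

The enumeration is exponential in the number of tasks, which is precisely why the paper delegates this subproblem to a CP solver with the cumulative global constraint.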
The second-stage objective function is the finish time of the last task to finish: \[ h(\bm{s},\vx,\omega) = \max_{j\in J} \Big\{ s_j + p^{\omega}_{x_j j} \Big\} \] This objective function is incorporated into the CP problem (\ref{eq:CP}) by setting $\hat{h}_i(\bm{s},\bar{\vx},\omega)=M$ and adding to (\ref{eq:CP}) the constraints $M\geq s_j + p^{\omega}_{ij}$ for all $j\in J_i(\bar{\vx})$. Since there are no deadlines, we assume $d_j=\infty$ for all $j\in J$. \begin{comment} The objective of the second-stage problem is to minimize the longest makespan. Let $M^s$ denote the longest makespan under scenario $s \in S$. Then the second-stage objective takes the following form: \begin{equation} h(\vx, \bm{\xi}^s) = \max_{j \in J} \left\{ \sum_{i \in I} \sum_{t \in T} (t + p_{ij}^s) x_{ijt}^s \right\}. \end{equation} \emph{Deterministic equivalent formulation.} This function can easily be linearized and we obtain the following deterministic equivalent formulation of the variant of $\ensuremath{\left(\mathbf{SPSP}\right)}{}$ which minimizes the expected maximum makespan. \begin{subequations} \label{DEF:Makespan} \begin{align} \ensuremath{\left(\mathbf{SPSP-M}\right)}{}= \quad \text{minimize} &&& \sum_{s \in S} p^s M^s\\ \text{subject to} &&& \eqref{twoStage:assignment}, ~ \eqref{twoStage::nonnegativity},\\ &&& M^s \geq \sum_{i \in I} \sum_{t \in T} (t + p_{ij}^s) x_{ijt}^s, \quad j \in J,~ s \in S,\\ &&& x_{ijt}^s \leq y_{ij}, \quad i \in I,~ j \in J,~ t \in T,~ s \in S, \label{DEF:Makespan:facility}\\ &&& \sum_{i \in I} \sum_{t \in T} x_{ijt}^s = 1, \quad j \in J, ~ s \in S, \label{DEF:Makespan:scheduling}\\ &&&\sum_{j \in J} \sum_{t^\prime \in T_{ijt}^s} c_{ij} x_{ijt^\prime}^s \leq K_i, \quad i \in I,~ t \in T,~ s \in S, \label{DEF:Makespan:capacity}\\ &&& x_{ijt}^s = 0, \quad i \in I,~ j \in J,~ t \in T, ~ s \in S \quad \text{with}~ t \leq r_j, \label{DEF:Makespan:timeWindow}\\ &&& x_{ijt}^s \in \{0,1\}, \quad i \in i,~ j \in J,~ t \in T, ~ s \in S. 
\label{DEF:Makespan:nonnegativity} \end{align} \end{subequations} Here, the constraints \eqref{DEF:Makespan:facility} - \eqref{DEF:Makespan:nonnegativity} are the same constraints given in \eqref{SP:generalForm:facility} - \eqref{SP:generalForm:nonnegativity}, except that they are appended (augmented) to include all scenarios. \emph{CP formulation of the subproblem.} For any given first-stage decision vector, we can formulate the subproblem as in \eqref{SP:generalForm}, and solve the resulting integer program for every scenario. However, once we know the task-facility assignments, the subproblem further decomposes into separate scheduling problems on each facility. Let $J_{ik}$ denote the set of tasks assigned to facility $i$ at iteration $k$. Then we solve the following CP model to calculate the minimum makespan on facility $i$ under scenario $s$. \begin{subequations} \label{CP:makespan} \begin{align} \quad \text{minimize} &&& M\\ \text{subject to} &&& M \geq s_j + p^s_{ij}, \quad j \in J_{ik},\\ &&& \text{cummulative}\Big((s_j\,|\,j \in J_{ik}), \, (p^s_{ij}\,|\,j \in J_{ik}), \,(c_{ij}\,|\,j \in J_{ik}), \, K_i \Big),\\ &&& s_j \in [r_j, \infty), \quad j \in J_{ik}. \end{align} \end{subequations} \end{comment} Both strengthened nogood cuts and analytic Benders cuts can be developed for this problem. A simple nogood cut for scenario $\omega$ can take the form of a set of inequalities \begin{equation} \beta_{\omega} \geq \beta_{i\omega}, \; i\in I \label{eq:nogood0} \end{equation} where each $\beta_{i\omega}$ is bounded by \begin{equation} \beta_{i\omega} \geq \mathrm{SP}_{i\omega}(\bar{\vx}) \Big( \sum_{j\in J_i(\bar{\vx})} \hspace{-1.5ex} x_{ij} - |J_i(\bar{\vx})| + 1 \Big) \label{eq:nogood1} \end{equation} and where $\bar{\vx}$ is the solution of the current master problem. 
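The right-hand side of (\ref{eq:nogood1}) is easy to evaluate procedurally. A small helper (hypothetical names, for illustration only) shows how the bound behaves as assignments change:

```python
def nogood_bound(sp_value, assigned_jobs, x):
    """RHS of the simple nogood cut (one facility, one scenario):
    sp_value is SP_iw(x_bar), assigned_jobs is J_i(x_bar), and
    x[j] is the 0/1 indicator that job j stays on facility i."""
    slack = sum(x[j] for j in assigned_jobs) - len(assigned_jobs) + 1
    return sp_value * slack  # = sp_value when slack == 1, <= 0 otherwise

jobs = [1, 2, 3]
print(nogood_bound(10.0, jobs, {1: 1, 2: 1, 3: 1}))  # sharp: prints 10.0
print(nogood_bound(10.0, jobs, {1: 1, 2: 0, 3: 1}))  # one job gone: prints 0.0
```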
The cut says that if all the jobs in $J_i(\bar{\vx})$ are assigned to facility $i$, possibly among other jobs, then the makespan of facility $i$ in scenario $\omega$ is at least the current makespan SP$_{i\omega}(\bar{\vx})$. The cut is weak, however, because if even one job in $J_i(\bar{\vx})$ is not assigned to $i$, the bound in (\ref{eq:nogood1}) becomes useless. The cut can be strengthened by heuristically assigning proper subsets of the jobs in $J_i(\bar{\vx})$ to facility $i$, and re-computing the minimum makespan for each subset, to discover a smaller set of jobs that yields the same makespan. This partially reveals which task assignments serve as premises of the optimality proof. Then $J_i(\bar{\vx})$ in (\ref{eq:nogood1}) is replaced with this smaller set to strengthen the cut. This simple scheme, and variations of it, can be effective when the makespan problem solves quickly \cite{hooker2007planning}. A stronger cut can be obtained without re-solving the makespan problem by using an analytical Benders cut. We introduce a cut based on the following lemma: \begin{lemma} \label{lemma1} Consider a minimum makespan problem $P$ in which each task $j\in J$ has release time $r_j$ and processing time $p_j$, with no deadlines. Let $M^*$ denote the minimum makespan for $P$, and $\hat{M}$ the minimum makespan for the problem $\hat{P}$ that is identical to $P$ except that the tasks in a nonempty set $\hat{J}\subset J$ are removed. Then \begin{equation} M^* - \hat{M} \leq \Delta + r^+ - r^- \label{lemma_eq} \end{equation} where $\Delta = \sum_{j \in \hat{J}} p_j$, $r^+ = \max_{j \in J} \{ r_j\}$ is the latest release time, and $r^- = \min_{j \in J} \{ r_j\}$ is the earliest release time. \end{lemma} \begin{proof} Consider any solution of $\hat{P}$ with makespan $\hat{M}$. We will construct a feasible solution for $P$ by extending this solution. 
If $\hat{M} \geq r^+$, we schedule all the tasks in $\hat{J}$ sequentially starting from time $\hat{M}$, resulting in makespan $\hat{M}+\Delta$. This is a feasible solution for $P$, and we have $M^* \leq \hat{M} + \Delta$. The lemma follows because $r^+ - r^-$ is nonnegative. If $\hat{M} < r^+$, we schedule all the tasks in $\hat{J}$ sequentially starting from time $r^+$ to obtain a solution with makespan $r^+ + \Delta$. Again this is a feasible solution for $P$, and we have $M^* \leq r^+ + \Delta$. This implies \begin{equation} M^* - \hat{M} \leq r^+ - \hat{M} + \Delta \label{proof_eq} \end{equation} Because $\hat{M}$ is at least $r^-$, \eqref{proof_eq} implies \eqref{lemma_eq}, and the lemma follows. \qed \end{proof} We can now derive a valid analytical cut: \begin{theorem} Inequalities (\ref{eq:nogood0}) and the following comprise a valid Benders cut for scenario $\omega$: \begin{equation} \label{eq:makespanCutA} \beta_{i\omega} \geq \left\{ \begin{array}{ll} {\displaystyle \mathrm{SP}_{i\omega}(\bar{\vx}) - \Big( \hspace{-0.5ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}) p^{\omega}_{ij} + r^+ - r^- \Big), } & \text{if $x_{ij}=0$ for some $j\in J_i(\bar{\vx})$} \vspace{0.5ex} \\ \mathrm{SP}_{i\omega} (\bar{\vx}), & \text{otherwise} \end{array} \right\}, \; i\in I \end{equation} \end{theorem} \begin{proof} The cut clearly provides a sharp bound $\max_{i\in I} \{\mathrm{SP}_{i\omega} (\bar{\vx})\}$ when $\vx=\bar{\vx}$, because the second line of (\ref{eq:makespanCutA}) applies in this case. The validity of the cut follows immediately from Lemma~\ref{lemma1}. 
\qed \end{proof} We linearize the cut (\ref{eq:makespanCutA}) as follows: \begin{equation} \label{Cut:Makespan:Strong} \begin{array}{l} {\displaystyle \beta_{i\omega} \geq \mathrm{SP}_{i\omega}(\bar{\vx}) - \hspace{-1ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}) p^{\omega}_{ij} - z_{i\omega} } \vspace{0.5ex} \\ {\displaystyle z_{i\omega} \leq \left( r^+ - r^- \right) \hspace{-1ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}) } \vspace{0.5ex} \\ z_{i\omega} \leq r^+ - r^- \end{array} \end{equation} The Benders cut is inserted into the master problem by including inequalities (\ref{Cut:Makespan:Strong}) for each $i\in I$ and $\omega\in\Omega$, along with the inequalities (\ref{eq:nogood0}). The inequalities (\ref{Cut:Makespan:Strong}) incur the expense of introducing a new continuous variable $z_{i\omega}$ for each $i$ and $\omega$, which may not be desirable when solving large-scale problems. As an alternative, a slightly weaker Benders cut can be used. \begin{corollary} Inequalities (\ref{eq:nogood0}) and the following comprise a valid Benders cut for scenario $\omega$: \begin{equation} \label{Cut:Makespan:Strong:ver2} \beta_{i\omega} \geq \mathrm{SP}_{i\omega}(\bar{\vx}) - \hspace{-1ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}) p^{\omega}_{ij} - \left( r^+ - r^- \right) \hspace{-1ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}), \;\; i\in I \end{equation} \end{corollary} \begin{proof} We first note that the inequalities provide a sharp bound if $\vx=\bar{\vx}$, because in this case $x_{ij}=1$ for all $j\in J_i(\bar{\vx})$, and (\ref{Cut:Makespan:Strong:ver2}) reduces to the second line of (\ref{eq:makespanCutA}). If $x_{ij}=0$ for some $j\in J_i(\bar{\vx})$, we have \[ \left( r^+ - r^- \right) \hspace{-1ex} \sum_{j \in J_i(\bar{\vx})} \hspace{-1ex} (1 - x_{ij}) \geq r^+ - r^- \] because $\sum_{j\in J_i(\bar{\vx})} (1-x_{ij})\geq 1$. 
Thus (\ref{Cut:Makespan:Strong:ver2}) is implied by the first line of (\ref{Cut:Makespan:Strong}) and is therefore valid. \qed \end{proof} Finally, we add a subproblem relaxation to the master problem. We use a relaxation from \cite{hooker2007planning}, modified to be scenario-specific: \begin{equation} \beta_{i\omega} \geq \frac{1}{K_i} \sum_{j \in J} c_{ij} p^{\omega}_{ij} x_{ij}, \quad i \in I, \; \omega\in \Omega \label{eq:relaxMakespan} \end{equation} This relaxation is valid for arbitrary release times and deadlines. \begin{comment} \emph{Relaxed master problem.} Using the Benders cuts we derived and the relaxation of the subproblem, we obtain the following relaxed master problem for \ensuremath{\left(\mathbf{SPSP-M}\right)}{}: \begin{subequations} \label{RMP:makespan} \begin{align} \text{minimize} &&& \sum_{s \in S} p^s M^s\\ \text{subject to} &&& \eqref{twoStage:assignment}, ~ \eqref{twoStage::nonnegativity},\\ &&& \text{Benders cuts } \eqref{Cut:Makespan:Strong} \text{ added so far},\\ &&& \text{Relaxation of the subproblem } \eqref{Relaxation:Makespan}. \end{align} \end{subequations} At each iteration of the LBBD algorithm, we solve \eqref{RMP:makespan} and obtain a first-stage decision. Then, we update the subproblems \eqref{CP:makespan} and solve them. We create a Benders cut of the form \eqref{Cut:Makespan:Strong}, one for each scenario $s \in S$, and add it to the relaxed master problem \eqref{RMP:makespan}. We continue to iterate until we achieve convergence. \end{comment} \subsection{Minimum cost problem} \label{Sec:Algorithm:MinCost} In the minimum cost problem, there is only a fixed cost $\phi_{ij}$ associated with assigning task $j$ to facility $i$. So we have \[ \hat{g}(\vx) = \sum_{i\in I} \sum_{j\in J} \phi_{ij}x_{ij} \] in the MILP master problem (\ref{eq:MILP}), and we set $\beta_{\omega}=0$ for $\omega\in\Omega$. The subproblem decouples into a feasibility problem for each $i$ and $\omega$, because $\hat{h}_i(\bm{s},\bar{\vx},\omega) = 0$. 
A Benders cut is generated for each $i$ and $\omega$ when the corresponding scheduling problem (\ref{eq:CP}) is infeasible. A simple nogood cut is \begin{equation} \sum_{j\in J_i(\bar{\vx})} (1-x_{ij}) \geq 1 \end{equation} The cut can be strengthened in a manner similar to that used for the makespan problem. To create a subproblem relaxation for the master problem, one can exploit the fact that we now have two-sided time windows $[r_j,d_j]$. Let $J(t_1,t_2)$ be the set of tasks $j$ for which $[r_j,d_j]\subseteq [t_1,t_2]$. Adapting an approach from \citep{hooker2007planning}, one can add the following inequalities to the master problem for each $i\in I$: \begin{equation} \label{Relaxation:Cost} \frac{1}{K_i} \sum_{j \in J(t_1,t_2)} \hspace{-2ex} p^{\min}_{ij} c_{ij} x_{ij} \leq t_2 - t_1, \;\; t_1\in \{\bar{r}_1,\ldots,\bar{r}_{n'}\}, \; t_2\in \{\bar{d}_1,\ldots,\bar{d}_{n''}\} \end{equation} where $\bar{r}_1,\ldots,\bar{r}_{n'}$ are the distinct release times among $r_1,\ldots, r_n$, and $\bar{d}_1,\ldots,\bar{d}_{n''}$ the distinct deadlines among $d_1,\ldots,d_n$. Some of these inequalities may be redundant, and a method for detecting them is presented in \citep{hooker2007planning}. Because the relaxation must be valid across all scenarios, the processing time is set to $p^{\min}_{ij}=\min_{\omega\in\Omega} \{p^{\omega}_{ij}\}$. \subsection{Minimum tardiness problem} \label{Sec:Algorithm:Tardiness} In this section, we consider a minimum tardiness problem in which tasks are all released at time zero but have different due dates $\bar{d}_j$. There are no hard deadlines, and so we let $d_j=\infty$ for all $j\in J$. As in the minimum makespan problem, there is no first-stage cost, so that $\hat{g}(\vx)=0$ in the MILP model (\ref{eq:MILP}). 
The second-stage objective function is expected total tardiness, and we have \[ \hat{h}_i(\bm{s},\vx,\omega) = \hspace{-1ex} \sum_{j\in J_i(\vx)} \hspace{-1ex} \big( s_j+p^{\omega}_{ij} - \bar{d}_j \big)^+ \] in the CP scheduling problem (\ref{eq:CP}). Here $\alpha^+ = \max\{0,\alpha\}$. The following analytic Benders cut can be adapted from \cite{Hoo12}: \[ \beta_{\omega} \geq \sum_{i\in I} \Big( \mathrm{SP}_{i\omega}(\bar{\vx}) - \hspace{-1ex} \sum_{j\in J_i(\bar{\vx})} \hspace{-1ex} \Big( \hspace{-0.5ex} \sum_{j'\in J_i(\bar{\vx})} \hspace{-1.5ex} p^{\omega}_{ij'} - \bar{d}_j\Big)^+ (1-x_{ij}) \Big) \] The cut is added to (\ref{eq:MILP}) for each $\omega\in\Omega$. Strengthened nogood cuts similar to those developed for the makespan problem can also be used. Two subproblem relaxations can be adapted from \cite{hooker2007planning}. The simpler one is analogous to (\ref{Relaxation:Cost}) and adds the following inequalities to (\ref{eq:MILP}) for each $i$ and $\omega$ \[ \beta_{i\omega} \geq \frac{1}{K_i} \hspace{-0.5ex} \sum_{j'\in J(0,\bar{d}_j)} \hspace{-2ex} p^{\min}_{ij'}c_{ij'}x_{ij'} - \bar{d}_j, \;\; j\in J \] along with the bounds $\beta_{i\omega}\geq 0$. A second relaxation more deeply exploits the structure of the subproblem. For each facility $i$ and scenario $\omega$, let $\tau^{\omega}_i$ be a permutation of $\{1,\ldots,n\}$ such that $p^{\omega}_{i\tau^{\omega}_i(1)}c_{i\tau^{\omega}_i(1)} \leq \cdots \leq p^{\omega}_{i\tau^{\omega}_i(n)}c_{i\tau^{\omega}_i(n)}$. We also assume that tasks are indexed so that $\bar{d}_1\leq \cdots\leq \bar{d}_n$. 
Then we add the following inequalities to the master problem (\ref{eq:MILP}) for each $i$ and $\omega$: \[ \beta_{i\omega} \geq \frac{1}{K_i} \sum_{j'\in J} p^{\omega}_{i\tau^{\omega}_i(j')}c_{i\tau^{\omega}_i(j')}x_{i\tau^{\omega}_i(j')} - \bar{d}_j - (1-x_{ij})U_{ij\omega}, \;\; j\in J \] where \[ U_{ij\omega} = \frac{1}{K_i} \sum_{j'\in J} p^{\omega}_{i\tau^{\omega}_i(j')}c_{i\tau^{\omega}_i(j')} - \bar{d}_j \] \section{The Integer L-Shaped Method} \label{Sec:intLShaped} The integer L-Shaped method is a Benders-based algorithm proposed by \cite{laporte1993integer} to solve two-stage stochastic integer programs. It terminates in finitely many iterations when the problem has complete recourse and binary first-stage variables. It is similar to branch and check in that Benders cuts are generated while solving the first-stage problem by branching. It differs in that it uses subgradient cuts derived from a linear programming relaxation of the subproblem rather than combinatorial cuts derived from the original subproblem. It also uses a simple integer nogood cut to ensure convergence, but the cut is quite weak and does not exploit the structure of the subproblem as does branch and check. We describe the integer \mbox{L-shaped} method here as it applies to minimizing makespan in the planning and scheduling problem. We first state an MILP model of the deterministic equivalent problem, as it will play a benchmarking role in computational testing. We index discrete times by $t\in T$ and introduce a 0--1 variable $z^{\omega}_{ijt}$ that is 1 if task $j$ starts at time $t$ on facility $i$ in scenario $\omega$. 
The model is \begin{equation} \begin{array}{lll} \text{minimize} & {\displaystyle \sum_{\omega \in \Omega} \pi_{\omega} \beta_{\omega} } & (a) \vspace{0.5ex} \\ \text{subject to} & {\displaystyle \sum_{i\in I} x_{ij} = 1, \;\; j\in J } & (b) \vspace{0.5ex} \\ & \beta_{\omega} \geq \beta_{i\omega}, \;\; i\in I, \; \omega\in\Omega & (c) \vspace{0.5ex} \\ & x_{ij}\in \{0,1\}, \;\; i\in I, \; j\in J & (d) \vspace{0.5ex} \\ & {\displaystyle \beta_{\omega} \geq \sum_{t\in T} (t + p^{\omega}_{ij}) z^{\omega}_{ijt}, \;\; i\in I, \; j\in J, \; \omega\in\Omega } & (e) \vspace{0.5ex} \\ & z^{\omega}_{ijt} \leq x_{ij}, \;\; i\in I, \; j\in J, \; t\in T, \; \omega\in\Omega & (f) \vspace{0.5ex} \\ & {\displaystyle \sum_{i\in I} \sum_{t\in T} z^{\omega}_{ijt} = 1, \;\; j\in J, \; \omega\in\Omega } & (g) \vspace{0.5ex} \\ & {\displaystyle \sum_{j\in J} \sum_{t'\in T^{\omega}_{tij}} \hspace{-1ex} c_{ij} z^{\omega}_{ijt'} \leq K_i, \;\; i\in I, \; t\in T, \; \omega\in\Omega } & (h) \vspace{0.5ex} \\ & z^{\omega}_{ijt}=0, \;\; i\in I, \; \omega\in \Omega, \; j\in J, \; \mbox{all}\; t\in T\; \mbox{with} \; t<r_j & (i) \vspace{0.5ex} \\ & z^{\omega}_{ijt} \in \{0,1\}, \;\; i\in I, \; j\in J, \; t\in T, \; \omega\in\Omega & (j) \end{array} \label{eq:DEQ} \end{equation}
where $T^{\omega}_{tij}= \{t'\;|\; t-p^{\omega}_{ij} < t' \leq t\}$ is the set of start times for which task $j$ would be in process on facility $i$ at time $t$. In the integer L-shaped method, the first stage minimizes (\ref{eq:DEQ}a) subject to (\ref{eq:DEQ}b)--(\ref{eq:DEQ}d) and Benders cuts that provide bounds on $\beta_{i\omega}$. The Benders cuts consist of classical Benders cuts derived from the linear relaxation of the second-stage scheduling problem for each $i$ and $\omega$, as well as integer cuts.
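As a sanity check on the semantics of the time-indexed variables, the following toy routine (our own helper, restricted to a single facility and scenario; not part of the authors' implementation) verifies that a vector of start times respects the cumulative capacity limit that constraints (h) encode, and returns the makespan bounded in (e):

```python
def check_schedule(starts, proc, cap_req, K, horizon):
    """Toy semantic check of the time-indexed encoding for one facility:
    a task started at s runs during [s, s + p) and consumes its capacity
    requirement while running.  Returns the schedule's makespan."""
    for t in range(horizon):
        # total capacity of tasks in process at time t
        load = sum(cap_req[j] for j, s in enumerate(starts) if s <= t < s + proc[j])
        assert load <= K, f"capacity exceeded at time {t}"
    return max(s + proc[j] for j, s in enumerate(starts))

# three tasks on a facility with capacity limit 10 (numbers are illustrative)
M = check_schedule(starts=[0, 0, 3], proc=[2, 3, 2], cap_req=[4, 5, 6], K=10, horizon=8)
```

Starting the third task at time 2 instead would put a load of 11 on the facility at that time and trip the assertion, which is exactly the situation constraint (h) excludes.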
If $\bar{\vx}$ is an optimal solution of the first-stage problem, the second-stage problem for facility $i$ and scenario $\omega$ is
\begin{equation} \begin{array}{ll} \text{minimize} & M \vspace{0.5ex} \\ \text{subject to} & {\displaystyle M \geq \sum_{t\in T} (t + p^{\omega}_{ij}) z^{\omega}_{ijt}, \;\; j\in J_i(\bar{\vx}) } \vspace{0.5ex} \\ & {\displaystyle \sum_{t\in T} z^{\omega}_{ijt} = 1, \;\; j\in J_i(\bar{\vx}) } \vspace{0.5ex} \\ & {\displaystyle \sum_{j\in J_i(\bar{\vx})} \sum_{t'\in T^{\omega}_{tij}} \hspace{-1ex} c_{ij} z^{\omega}_{ijt'} \leq K_i, \;\; t\in T } \vspace{0.5ex} \\ & z^{\omega}_{ijt}=0, \;\; j\in J_i(\bar{\vx}), \; \mbox{all}\; t\in T\; \mbox{with} \; t<r_j \vspace{0.5ex} \\ & z^{\omega}_{ijt} \in \{0,1\}, \;\; j\in J_i(\bar{\vx}), \; t\in T \end{array} \label{eq:LshapedSub} \end{equation}
The following integer cut is used for each $i$ and $\omega$ to ensure convergence:
\begin{equation} \beta_{i\omega} \geq \big( \mathrm{SP}_{i\omega}(\bar{\vx}) - \mathrm{LB}\big) \Big( \hspace{-0.5ex} \sum_{j\in J_i(\bar{\vx})}\hspace{-1.5ex} x_{ij} - \hspace{-1ex} \sum_{j\not\in J_i(\bar{\vx})} \hspace{-1.5ex} x_{ij} - |J_i(\bar{\vx})| + 1 \Big) + \mathrm{LB} \label{eq:integerCut} \end{equation}
where LB is a global lower bound on makespan. Note that if $\mathrm{LB}=0$, (\ref{eq:integerCut}) is weaker than the unstrengthened nogood cut (\ref{eq:nogood1}). This is because (\ref{eq:integerCut}) becomes useless if $x_{ij}\neq \bar{x}_{ij}$ for even one $j\in J$, while (\ref{eq:nogood1}) becomes useless only if $x_{ij}\neq \bar{x}_{ij}$ for some $j\in J_i(\bar{\vx})$.
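This dominance for $\mathrm{LB}=0$ is easy to verify numerically. The sketch below (instance numbers and helper names are ours) evaluates the right-hand sides of both cuts over all binary assignments and confirms that the integer cut never exceeds the nogood cut:

```python
import itertools

def nogood_rhs(sp, x, assigned):
    # unstrengthened nogood cut: relaxed only by de-assigning a task in J_i(x_bar)
    return sp * (sum(x[j] for j in assigned) - len(assigned) + 1)

def integer_cut_rhs(sp, lb, x, assigned, tasks):
    # integer L-shaped cut: relaxed by ANY deviation from x_bar
    others = [j for j in tasks if j not in assigned]
    slack = sum(x[j] for j in assigned) - sum(x[j] for j in others) - len(assigned) + 1
    return (sp - lb) * slack + lb

tasks = range(4)
assigned = [0, 1]  # hypothetical J_i(x_bar): tasks assigned to facility i
weaker = all(
    integer_cut_rhs(10, 0, list(x), assigned, tasks) <= nogood_rhs(10, list(x), assigned)
    for x in itertools.product([0, 1], repeat=4)
)
```

At $\vx=\bar{\vx}$ both right-hand sides equal the subproblem value (10 here); any $x_{ij}=1$ with $j\notin J_i(\bar{\vx})$ lowers only the integer cut, which is the source of its weakness.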
If a bound $\mathrm{LB}>0$ is available, (\ref{eq:integerCut}) is still weaker than (\ref{eq:nogood1}) if the same bound is added to (\ref{eq:nogood1}) by writing \[ \beta_{i\omega} \geq \big( \mathrm{SP}_{i\omega}(\bar{\vx}) - \mathrm{LB}\big) \Big( \sum_{j\in J_i(\bar{\vx})} \hspace{-1.5ex} x_{ij} - |J_i(\bar{\vx})| + 1 \Big) + \mathrm{LB} \] Finally, we strengthen the initial master problem by adding bounds of the form \begin{equation} \beta_{\omega} \geq \beta^{\mathrm{LB}}_{\omega}, \;\; \omega\in\Omega. \label{eq:initialBounds} \end{equation} Here $\beta^{\mathrm{LB}}_{\omega}$ is a lower bound on makespan obtained by solving the LP relaxation of (\ref{eq:DEQ}) for fixed scenario $\omega$. We use the same bound in LBBD and branch-and-check methods.
\begin{comment} \subsection{Modifying LBBD for a mean-risk objective} \label{Sec:Algorithm:MeanRisk} The two-stage problem \eqref{TwoStage:General} we have considered so far is a risk-neutral problem since we minimize the expectation of the random second-stage cost. However, some decision makers might seek risk-averse decisions. One way to accomplish such a task is to consider an objective function that features a risk measure. Let $\rho \colon \mathcal{L} \to \mathbb{R}$ be a convexity preserving risk measure, where $\rho$ is a functional and $\mathcal{L}$ is a linear space of $\mathcal{F}$-measurable functions on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We refer the reader to \cite{ahmed2006convexity} for a detailed discussion on convexity and decomposition of mean-risk stochastic programs.
Then we define the two-stage mean-risk model as follows: \begin{equation} \label{TwoStage:MeanRisk} \begin{alignedat}{4} & {\text{minimize}} \qquad&& cy + (1-\lambda) \mathbb{E}_{\omega} \Big[ q(\omega)\, x(\omega) \Big] + \lambda \rho_\omega \Big[ q(\omega)\, x(\omega) \Big] &\\ & \text{subject to} && y \in Y,&\\ &&&W(\omega) \, x(\omega) = h(\omega) - T(\omega)\, y, \quad \omega \in \Omega,&\\ &&&x(\omega) \in X(\omega), \quad \omega \in \Omega.& \end{alignedat} \end{equation} As shown in \cite{ahmed2006convexity}, the existing decomposition algorithms for two-stage stochastic programming such as the L-Shaped method can be modified to solve problems of the form \eqref{TwoStage:MeanRisk}. Now, suppose that we use conditional value-at-risk ($\operatorname{CVaR}$), a very popular convexity preserving risk measure. Let $W$ be a random variable coming from a finite probability space. Then, $\operatorname{CVaR}$ of $W$ at the confidence interval $\alpha$ is calculated as follows: \begin{align} \operatorname{CVaR}_\alpha (W) &= \min \left\{ \eta + \frac{1}{1-\alpha} \mathbb{E}[W-\eta] ~:~ \eta \in \mathbb{R} \right\}, \label{CVaR:general}\\ &=\min \left\{ \eta + \frac{1}{1-\alpha} \sum_{s \in S} p^s v^s ~:~ \eta \in \mathbb{R}, \quad v^s \geq 0, \quad v^s \in \mathbb{R} ~ \forall s \in S\right\}. \label{CVaR:LP} \end{align} The equivalence of \eqref{CVaR:general} and \eqref{CVaR:LP} allows us to write the following master problem which is a valid reformulation of \eqref{TwoStage:MeanRisk}. 
\begin{equation} \label{TwoStage:MultiCut:MeanRisk} \begin{alignedat}{4} & {\text{minimize}} \qquad&& cy + (1-\lambda) \left( \sum_{s \in S} p^s \theta^s \right) + \lambda \left( \eta + \frac{1}{1 - \alpha} \sum_{s \in S} p^s \nu^s \right)&\\ & \text{subject to} && y \in Y,&\\ &&&\theta^s \geq B^s_{\bar{y}}(y), \quad s \in S, ~x \in D^s_x, &\\ &&&\nu^s \geq \theta^s - \eta, \quad s \in S&\\ &&&\nu^s \in \mathbb{R}_+, \quad s \in S, &\\ &&& \eta \in \mathbb{R}.& \end{alignedat} \end{equation} As discussed earlier, the Benders cuts are typically generated by using a delayed constraint generation approach. Note that $\lambda$ is a parameter that captures the risk preference of the decision makers. For $\lambda = 0$ we obtain the usual risk-neutral model, and for $\lambda = 1$ we get the pure risk model. It is important to highlight the fact that a slight modification of the master problem allows us to incorporate a risk measure in the objective function. \end{comment}

\section{Computational Study} \label{Sec:CompStudy}

In this section, we describe computational experiments we conducted for the stochastic planning and scheduling problem, with the objective of minimizing makespan. All experiments are conducted on a personal computer with a 2.80 GHz Intel\textsuperscript{\textregistered} Core\textsuperscript{\texttrademark} i7-7600 processor and 24 GB of memory running Microsoft Windows 10 Pro. All MILP and CP formulations are solved in \texttt{C++} using the \texttt{CPLEX} and \texttt{CP Optimizer} engines of \texttt{IBM}\textsuperscript{\textregistered} \texttt{ILOG}\textsuperscript{\textregistered} \texttt{CPLEX}\textsuperscript{\textregistered} \texttt{12.7 Optimization Studio}, respectively. We use only a single thread in all computational experiments. We modify \texttt{CP Optimizer} parameters to use extended filtering and depth-first search. The rest of the parameters are set to their default values for both the \texttt{CPLEX} and \texttt{CP Optimizer} engines.
Lastly, we use the \texttt{Lazy Constraint Callback} function of \texttt{CPLEX} to implement branch and check.
\subsection{Problem Instances} \label{Sec:CompStudy:InstanceGen}
We generate problem instances by combining ideas from \cite{hooker2007planning} and \cite{atakan2017minimizing}. We first generate the deterministic problem as in \cite{hooker2007planning}. Let $|I| = m$ and $|J| = n$. The capacity limits of the facilities are set to $K_i = 10$ for all $i\in I$, and integer capacity requirements of tasks are drawn from a uniform distribution on $[1,10]$. Integer release times are drawn from a uniform distribution on $[0,~2.5 n(m+1)/m]$. For each facility $i\in I$, integer mean processing times $\bar{p}_{ij}$ are drawn from a uniform distribution on $[2, ~25 - 10(i-1)/(m-1)]$. This causes facilities with a higher index to process tasks more rapidly. We then follow \cite{atakan2017minimizing} by perturbing the mean processing times to obtain a set of scenarios. In particular, we first divide the tasks into two groups, one group containing tasks $j$ for which $0<\bar{p}_{ij} \leq 16$, and the other group containing the remainder of the tasks. We then generate a perturbation parameter $\epsilon^{\omega}$ for each scenario $\omega \in \Omega$ from a mixture of uniform distributions. Specifically, for tasks in the first group, $\epsilon^{\omega}$ is distributed uniformly on the interval $[-0.1,~ 0.5]$ with probability 0.9 and on the interval $[2.0, ~3.0]$ with probability 0.1. For tasks in the second group, $\epsilon^{\omega}$ is distributed uniformly on the interval $[-0.1,~ 0.5]$ with probability 0.99 and on the interval $[1.0, ~1.5]$ with probability 0.01. Finally, we generate the processing times under scenario $\omega \in \Omega$ by letting $p^{\omega}_{ij}= \lceil \bar{p}_{ij}(1+\epsilon^{\omega}) \rceil$.
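For concreteness, the generation scheme can be sketched as follows. This is our own sketch, not the authors' code: function and variable names are ours, groups are formed per facility--task pair using the facility-specific mean, integer division approximates the upper bound of the processing-time interval, and $m \geq 2$ is assumed.

```python
import math
import random

def generate_instance(m, n, num_scenarios, seed=0):
    """Sketch of the instance generator described above (assumes m >= 2)."""
    rng = random.Random(seed)
    K = [10] * m                                   # facility capacity limits
    c = [rng.randint(1, 10) for _ in range(n)]     # task capacity requirements
    r = [rng.randint(0, int(2.5 * n * (m + 1) / m)) for _ in range(n)]
    # mean processing times: higher-indexed facilities are faster
    p_bar = [[rng.randint(2, 25 - 10 * (i - 1) // (m - 1)) for _ in range(n)]
             for i in range(1, m + 1)]
    p = []                                         # p[w][i][j]: scenario data
    for _ in range(num_scenarios):
        # one perturbation per scenario and group (mixture of uniforms)
        if rng.random() < 0.9:
            eps_small = rng.uniform(-0.1, 0.5)
        else:
            eps_small = rng.uniform(2.0, 3.0)
        if rng.random() < 0.99:
            eps_large = rng.uniform(-0.1, 0.5)
        else:
            eps_large = rng.uniform(1.0, 1.5)
        p.append([[math.ceil(p_bar[i][j] * (1 + (eps_small if p_bar[i][j] <= 16
                                                 else eps_large)))
                   for j in range(n)] for i in range(m)])
    return K, c, r, p_bar, p

K, c, r, p_bar, p = generate_instance(m=2, n=10, num_scenarios=5, seed=1)
```

Since $\epsilon^{\omega} \geq -0.1$ and $\bar{p}_{ij} \geq 2$, every perturbed processing time remains at least 2, and the rare large perturbations model occasional severe slowdowns.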
\subsection{Computational performance} \label{Sec:CompStudy:Performance}
In this section, we report comparisons of LBBD and branch and check with the integer \mbox{L-shaped} method. The experiments are designed to investigate how various algorithmic features affect performance. All results presented in the tables to follow are averages over 3 random instances. Table~\ref{table:CompPerf1} compares computation times and optimality gaps for seven solution methods. Each method solves the first-stage problem using the MILP engine in \texttt{CPLEX}.
\begin{itemize}
\item {\em Deterministic equivalent MILP.} We solve the deterministic equivalent model (\ref{eq:DEQ}) using the MILP engine in \texttt{CPLEX}.
\item {\em Standard integer L-shaped method.} We decouple the second-stage problem by facility and scenario and solve the resulting problems and their LP relaxations using the MILP engine of \texttt{CPLEX} whenever a candidate incumbent solution is identified. We then add the integer cut \eqref{eq:integerCut} and the classical Benders cut from the LP relaxation for each facility and scenario. The initial bounds (\ref{eq:initialBounds}) are included in the master problem, even though they are not standard, because previous experience indicates that they significantly enhance performance. The subproblem relaxation (\ref{eq:relaxMakespan}) is likewise included in the master problem for fair comparison with LBBD and branch and check, where it is standard.
\item {\em Integer L-shaped method with CP.} We modify the standard method by solving the second-stage subproblems with CP rather than MILP. Integer cuts are as before, and classical Benders cuts are derived from the LP relaxation of the MILP model as before. The initial bounds (\ref{eq:initialBounds}) and subproblem relaxation (\ref{eq:relaxMakespan}) are again included in the master problem.
\item {\em Standard LBBD with nogood cuts.} We use (\ref{eq:nogood0}) and unstrengthened nogood cuts (\ref{eq:nogood1}).
We solve the decoupled subproblems by \texttt{CP Optimizer}. The initial bounds (\ref{eq:initialBounds}) are included in the master problem for comparability with the integer \mbox{L-shaped} method.
\item {\em Standard LBBD with analytical cuts.} We use (\ref{eq:nogood0}) and analytical cuts (\ref{Cut:Makespan:Strong:ver2}) rather than nogood cuts. The decoupled subproblems are solved by \texttt{CP Optimizer}. The initial bounds (\ref{eq:initialBounds}) are again included in the master problem.
\item {\em Branch and check with nogood cuts.} We use (\ref{eq:nogood0}) and unstrengthened nogood cuts (\ref{eq:nogood1}). We solve the decoupled subproblems by \texttt{CP Optimizer}. The initial bounds (\ref{eq:initialBounds}) are included in the master problem.
\item {\em Branch and check with analytical cuts.} We use (\ref{eq:nogood0}) and analytical cuts (\ref{Cut:Makespan:Strong:ver2}) rather than nogood cuts. The decoupled subproblems are solved by \texttt{CP Optimizer}. The initial bounds (\ref{eq:initialBounds}) are again included in the master problem.
\end{itemize}
In addition to average computation time (in seconds), Table~\ref{table:CompPerf1} reports the optimality gap obtained for each solution method, defined as $(\mathrm{UB}-\mathrm{LB})/\mathrm{UB}$. For the deterministic equivalent and branch-and-check methods, UB and LB are, respectively, the upper and lower bounds obtained from \texttt{CPLEX} upon solution of the master problem. For standard LBBD, UB and LB are, respectively, the smallest subproblem optimal value and the largest master problem optimal value obtained during the Benders algorithm.
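The gap figures in the tables follow directly from this definition; a minimal helper (the bound values below are illustrative, not taken from our experiments) makes the convention explicit:

```python
def optimality_gap(ub, lb):
    """Relative optimality gap (UB - LB)/UB, expressed as a percentage
    as in the tables; it is 0 exactly when the bounds meet."""
    return 100.0 * (ub - lb) / ub

gap_open = optimality_gap(ub=120.0, lb=90.0)     # run stopped at the time limit
gap_closed = optimality_gap(ub=100.0, lb=100.0)  # optimality proved
```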
\begin{comment} \begin{table}[ht] \small \centering \begin{tabular}{c | c | c | c | c | c | c | c} \multicolumn{1}{c}{} & \multicolumn{7}{c}{Time[GAP]}\\ \cmidrule{2-8} $|S|$ & DEF & LBBD-W & LBBD-S& Int-L & Int-L* & B\&C-W & B\&C-S \\ \cmidrule{1-8} 1 &2.4 &2.7 &0.6 &78.1 &8.7 &1.2 &0.3 \\ 5 &2558.7\color{blue} $^\dagger$\tilim [7.8] &16.1 &3.0 &906.5 &110.2 &3.0 &1.2 \\ 10 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$[12.4] &40.4 &7.3 &2213.0\color{blue} $^\dagger$ [4.6] &504.8 &6.0 &2.3 \\ 50 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$[17.4] &391.7 &42.8 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [21.0] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [14.0] &27.4 &10.7 \\ 100 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$[25.4] &1156.3 &118.8 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [15.0] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [19.8] &50.0 &22.0 \\ 500 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$[44.5] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$[13.8] &900.9 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [23.0] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [24.5] &329.3 &111.0 \\ \cmidrule{1-8} Average &2827.0 [17.9] &876.6 [2.3] &178.9 &2554.0 [10.6] &2208.5 [9.7] &69.5 &24.6 \\ \bottomrule \multicolumn{1}{c}{} & \multicolumn{7}{l}{Each dagger sign indicates one instance hitting the time limit with an integer feasible solution.} \\ \multicolumn{1}{c}{} & \multicolumn{7}{l}{The time information associated with a cell is skipped if all three instances hit the time limit.} \\ \multicolumn{1}{c}{} & \multicolumn{7}{l}{All instances are with $|J|=10$ and $|I|=2$.} \\ \end{tabular} \caption{Computational performances of various solution methods.} \label{table:CompPerf1} \end{table} \end{comment} \begin{table}[!t] \small \centering \caption{Average computation time in seconds over 3 instances (upper half of table) and average relative optimality gap (lower half) 
for various solution methods, based on 10 tasks and 2 facilities.} \label{table:CompPerf1} \vspace{3ex} \begin{tabular}{r|r@{}r@{}r@{}r@{}r@{}r@{}rrrr} &Determ.&&Integer&&Integer&& LBBD & LBBD & B\&Ch & B\&Ch \\ Scenarios &equiv. &&L-shaped &&L-shaped &&Nogood &Analytic &Nogood &Analytic \\ & MILP &&method &&with CP&&cuts &cuts &cuts &cuts \\ \cmidrule{1-11} 1& 2.4 & & 127.3&& 27.9&& 2.0& 0.6& 1.8& 1.6 \\ 5& 475.8 &$^{\dagger\dagger}$& 839.2&& 149.3&& 12.1& 3.0& 3.3& 1.7 \\ 10& *& & 2316.9&$^{\dagger}$& 437.8&& 27.4& 7.3& 5.1& 2.8 \\ 50& *& &*&& 2517.8&$^{\dagger\dagger}$& 243.1& 42.8& 33.3& 18.0 \\ 100& *&& *&& *&& 952.8& 118.8& 80.1& 30.2 \\ 500& *&& *&& *&& *& 900.9& 416.2& 166.2 \\ \cmidrule{1-11} 1& 0.0&& 0.0&& 0.0&& 0.0& 0.0& 0.0& 0.0 \\ 5& 7.8&& 0.0&& 0.0&& 0.0& 0.0& 0.0& 0.0 \\ 10& 12.4&& 3.8&& 0.0&& 0.0& 0.0& 0.0& 0.0 \\ 50& 17.4&& 21.7&& 13.9&& 0.0& 0.0& 0.0& 0.0 \\ 100& 25.4&& 25.4&& 21.7&& 0.0& 0.0& 0.0& 0.0 \\ 500& 44.5&& 25.8&& 25.4&& 13.5& 0.0& 0.0& 0.0 \\ \bottomrule \multicolumn{1}{r}{\vspace{-2ex}}\\ \multicolumn{10}{l}{\footnotesize$^\dagger$Average excludes one instance that exceeded an hour in computation time.} \\ \multicolumn{10}{l}{\footnotesize$^{\dagger\dagger}$Average excludes two instances that exceeded an hour.} \\ \multicolumn{10}{l}{\footnotesize$^*$All three instances exceeded an hour.} \end{tabular} \end{table} \begin{comment} THIS IS THE PREVIOUS TABLE \begin{table}[!t] \small \centering \caption{Average computation time in seconds over 3 instances (upper half of table) and average relative optimality gap (lower half) for various solution methods, based on 10 tasks and 2 facilities.} \label{table:CompPerf1} \vspace{3ex} \begin{tabular}{r|rrrrrrr} &Determ.&Integer&Integer& LBBD & LBBD & B\&Ch & B\&Ch \\ Scenarios &equiv. 
&L-shaped &L-shaped &Nogood &Analytic &Nogood &Analytic \\ & MILP &method &with CP&cuts &cuts &cuts &cuts \\ \cmidrule{1-8} 1 &2.4 &78.1 &8.7 &2.7 &0.6 &1.2 &0.3 \\ 5 &2558.7$^{\dagger\dagger}$ \hspace{-2.75ex} &906.5 &110.2 &16.1 &3.0 &3.0 &1.2 \\ 10 &* &2213.0$^\dagger$ \hspace{-1.75ex} &504.8 &40.4 &7.3 &6.0 &2.3 \\ 50 &* &* &* &391.7 &42.8 &27.4 &10.7 \\ 100 &* &* &* &1156.3 &118.8 &50.0 &22.0 \\ 500 &* &* &* &* &900.9 &329.3 &111.0 \\ \cmidrule{1-8} 1 &0 &0 &0 &0 &0 &0 &0 \\ 5 &7.8 &0 &0 &0 &0 &0 &0 \\ 10 &12.4 &4.6 &0 &0 &0 &0 &0 \\ 50 &17.4 &21.0 &14.0 &0 &0 &0 &0 \\ 100 &25.4 &15.0 &19.8 &0 &0 &0 &0 \\ 500 &44.5 &23.0 &24.5 &13.8 &0 &0 &0 \\ \bottomrule \multicolumn{1}{r}{\vspace{-2ex}}\\ \multicolumn{7}{l}{\footnotesize$^\dagger$Average excludes one instance that exceeded an hour in computation time.} \\ \multicolumn{7}{l}{\footnotesize$^{\dagger\dagger}$Average excludes two instances that exceeded an hour.} \\ \multicolumn{7}{l}{\footnotesize$^*$All three instances exceeded an hour.} \end{tabular} \end{table} \end{comment} As one might expect, the integer L-shaped implementations are faster than solving the deterministic equivalent MILP, because they exploit the scenario-based block structure of two-stage stochastic programs. We also see that the integer L-shaped method can be significantly accelerated by solving the exact subproblem with CP rather than MILP (to obtain upper bounds and generate the integer cut), since CP is more effective for this type of scheduling problem. It is clear from Table~\ref{table:CompPerf1} that all four implementations of LBBD substantially outperform the integer L-shaped method, even when the latter uses CP. Furthermore, the two branch-and-check implementations scale much better than standard LBBD, due mainly to time spent in solving the master problem in standard LBBD. This confirms the rule of thumb that branch and check is superior when solving the master problem takes significantly longer than solving the subproblems. 
The results also indicate that analytical Benders cuts are more effective than unstrengthened nogood cuts in both standard LBBD and branch and check. In summary, branch and check with analytical Benders cuts is the best of the seven methods for these test instances. In particular, it is far superior to the integer L-shaped method, as it easily solves instances with 500 scenarios, while the integer L-shaped method cannot deal with more than 10 scenarios within an hour of computation time. Table~\ref{table:AlgAnalysis} provides a more detailed comparison of the integer \mbox{L-shaped} method with the branch-and-check implementations. The L-shaped method with CP is shown, as we have seen that it is faster than solving the subproblem with MILP. Interestingly, solving a CP formulation of the subproblem is much faster than solving the LP relaxation of an MILP formulation. This illustrates the computational cost of using the larger MILP formulation. We also see that the stronger analytical cuts substantially reduce the number of times the subproblem must be solved, and therefore the number of cuts generated and the resulting size of the master problem. Furthermore, the number of subproblem calls is roughly constant as the number of scenarios increases. Finally, the subproblem solutions consume about half of the total computation time in the branch-and-check algorithms. Previous experience suggests that for best results, the computation time should, in fact, be about equally split between the master problem and subproblem (\citeauthor{CirCobHoo16} \citeyear{CirCobHoo16}). \begin{table}[!t] \small \setlength{\tabcolsep}{4pt} \centering \caption{Analysis of the integer L-shaped method with CP subproblems and two branch-and-check algorithms.
Each number is an average over 3 problem instances.} \label{table:AlgAnalysis} \vspace{3ex} \begin{tabular}{r | r@{}r@{}@{\hspace{2.2ex}}r@{}r@{}@{\hspace{1.5ex}}r@{}r@{}@{\hspace{2ex}}r@{}r@{}@{\hspace{0ex}}r@{}r | rrr@{\hspace{1ex}}r | rr@{\hspace{4ex}}r@{\hspace{0.5ex}}r } \multicolumn{1}{c}{} & \multicolumn{10}{c}{Integer L-shaped with CP} & \multicolumn{4}{c}{B\&C with nogood cuts} & \multicolumn{4}{c}{B\&C with analytical cuts}\\ \cmidrule{2-19} \multicolumn{1}{c}{} & \multicolumn{6}{c}{Time (sec)} & \multicolumn{4}{c}{Statistics} & \multicolumn{2}{|c}{Time (sec)} & \multicolumn{2}{c}{Statistics} & \multicolumn{2}{|c}{Time (sec)} & \multicolumn{2}{c}{Statistics}\\ \cmidrule{2-19} Scenarios & Total && \multicolumn{1}{@{\hspace{-3ex}}c@{\hspace{-1ex}}}{CPsub} && \multicolumn{1}{@{\hspace{-2ex}}c@{\hspace{-1ex}}}{LPsub} && Cuts && Calls && Total & \multicolumn{1}{@{\hspace{-1ex}}c@{\hspace{-1ex}}}{CPsub} & Cuts & Calls & Total & \multicolumn{1}{@{\hspace{0ex}}c@{\hspace{-1ex}}}{CPsub} & Cuts & Calls \\ \toprule 1& 27.9&& 0.9&& 2.1&& 450&& 452&& 1.8& 0.5& 282& 150& 1.6& 0.2& 82& 47 \\ 5& 149.3&& 5.6&& 16.4&& 2692&& 541&& 3.3& 1.7& 1289& 144& 1.7& 0.5& 311& 40 \\ 10& 437.8&& 15.8&& 73.4&& 5114&& 515&& 5.1& 2.6& 2390& 134& 2.8& 1.0& 702& 46 \\ 50& 2517.8&$^{\dagger}$ & 97.3&$^{\dagger}$\hspace{-2.75ex}& 500.2&$^{\dagger}$\hspace{-2.75ex}& 20002&$^{\dagger}$\hspace{-2.75ex}& 401&$^{\dagger}$\hspace{-2.75ex}& 33.3& 16.9& 12616& 148& 18.0& 7.6& 3906& 51 \\ 100& *&& *&& *&& *&& *&& 80.1& 42.0& 25880& 152& 30.2& 11.3& 7607& 50 \\ 500& *&& *&& *&& *&& *&& 416.2& 187.6& 127404& 150& 166.2& 55.7& 35409& 47 \\ \bottomrule \multicolumn{1}{r}{\vspace{-2ex}}\\ \multicolumn{19}{l}{\footnotesize$^{\dagger}$Average excludes two instances that exceeded an hour.} \\ \multicolumn{19}{l}{\footnotesize$^*$Computation terminated for all 3 instances after one hour.} \end{tabular} \end{table} \begin{comment} THIS IS THE PREVIOUS TABLE \begin{table}[!t] \small \centering \caption{Analysis 
of the integer L-shaped method with CP subproblems and two branch-and-check algorithms. Each number is an average over 3 problem instances.} \label{table:AlgAnalysis} \vspace{3ex} \begin{tabular}{r | r@{\hspace{3ex}}r@{\hspace{2ex}}r@{\hspace{4ex}}r@{\hspace{1ex}}r | rrr@{\hspace{1ex}}r | rrr@{\hspace{1ex}}r } \multicolumn{1}{c}{} & \multicolumn{5}{c}{Integer L-shaped with CP} & \multicolumn{4}{c}{B\&C with nogood cuts} & \multicolumn{4}{c}{B\&C with analtyical cuts}\\ \cmidrule{2-14} \multicolumn{1}{c}{} & \multicolumn{3}{c}{Time (sec)} & \multicolumn{2}{c}{Statistics} & \multicolumn{2}{|c}{Time (sec)} & \multicolumn{2}{c}{Statistics} & \multicolumn{2}{|c}{Time (sec)} & \multicolumn{2}{c}{Statistics}\\ \cmidrule{2-14} Scenarios & Total & \multicolumn{1}{@{\hspace{-3ex}}c@{\hspace{-1ex}}}{CPsub} & \multicolumn{1}{@{\hspace{-2ex}}c@{\hspace{-1ex}}}{LPsub} & Cuts & Calls & Total & \multicolumn{1}{@{\hspace{-1ex}}c@{\hspace{-1ex}}}{CPsub} & Cuts & Calls & Total & \multicolumn{1}{@{\hspace{0ex}}c@{\hspace{-1ex}}}{CPsub} & Cuts & Calls \\ \toprule 1 &8.7 &0.2 &6.4 &154 &156 &1.2 &0.6 &282 &150 &0.3 &0.1 &82 &47\\ 5 &110.2 &1.8 &72.9 &1316 &271 &3.0 &1.8 &1289 &144 &1.2 &0.5 &311 &40\\ 10 &504.8 &8.3 &182.5 &2469 &252 &6.0 &3.7 &2380 &133 &2.3 &1.2 &704 &46\\ 50 &* &* &* &* &* &27.4 &16.6 &12682 &149 &10.7 &5.4 &3935 &52\\ 100 &* &* &* &* &* &50.0 &29.4 &25296 &149 &22.0 &11.0 &7465 &49\\ 500 &* &* &* &* &* &329.3 &156.7 &124468 &145 &111.0 &45.8 &35379 &48\\ \bottomrule \multicolumn{1}{r}{\vspace{-2ex}}\\ \multicolumn{14}{l}{\footnotesize$^*$Computation terminated for all 3 instances after one hour.} \end{tabular} \end{table} \end{comment} Given the computational burden of solving the LP relaxation of the MILP subproblem, we experimented with running the integer \mbox{L-shaped} method with only integer cuts. This obviates the necessity of solving the LP relaxation of an MILP model. 
We also solved instances with 14 and 18 as well as 10 tasks and with 4 facilities as well as 2. The results appear in Table~\ref{table:CompPerf2}. The three implementations shown in the table are exactly the same except for the cuts used and therefore permit a direct comparison of the effectiveness of the cuts. Comparison with Tables~\ref{table:CompPerf1} and~\ref{table:AlgAnalysis} reveals that the integer \mbox{L-shaped} method actually runs faster using only integer cuts, without any classical Benders cuts obtained from the LP relaxation. We also see that the analytical cuts are more effective than unstrengthened nogood cuts in nearly every instance, and much more effective than integer cuts, which are quite weak. Finally, branch and cut is increasingly superior to even this accelerated version of the integer L-shaped method as the instances scale up, and therefore far superior to the standard method. \begin{comment} \begin{table}[ht] \small \centering \begin{tabular}{c | ccccc | cccc | cccc } \multicolumn{1}{c}{} & \multicolumn{5}{c}{Int-L*} & \multicolumn{4}{c}{B\&C-W} & \multicolumn{4}{c}{B\&C-S}\\ \cmidrule{2-14} \multicolumn{1}{c}{} & \multicolumn{3}{c}{Time} & \multicolumn{2}{c}{Stats} & \multicolumn{2}{c}{Time} & \multicolumn{2}{c}{Stats} & \multicolumn{2}{c}{Time} & \multicolumn{2}{c}{Stats}\\ \cmidrule{2-14} $|S|$ & Total & Sub & Sub-L & Cut & Call & Total & Sub & Cut & Call & Total & Sub & Cut & Call \\ \toprule 1 &8.7 &0.2 &6.4 &154 &156 &1.2 &0.6 &282 &150 &0.3 &0.1 &82 &47\\ 5 &110.2 &1.8 &72.9 &1316 &271 &3.0 &1.8 &1289 &144 &1.2 &0.5 &311 &40\\ 10 &504.8 &8.3 &182.5 &2469 &252 &6.0 &3.7 &2380 &133 &2.3 &1.2 &704 &46\\ 50 &3600.0 &53.4 &771.7 &5414 &111 &27.4 &16.6 &12682 &149 &10.7 &5.4 &3935 &52\\ 100 &3600.0 &64.3 &778.6 &3532 &37 &50.0 &29.4 &25296 &149 &22.0 &11.0 &7465 &49\\ 500 &3600.0 &74.6 &778.0 &3114 &7 &329.3 &156.7 &124468 &145 &111.0 &45.8 &35379 &48\\ \midrule Average &2208.5 &33.8 &431.7 &2666 &139 &69.5 &34.8 &27733 &145 &24.6 
&10.7 &7979 &47\\ \bottomrule \multicolumn{1}{r}{}& \multicolumn{13}{l}{All instances are with $|J|=10$ and $|I|=2$.} \\ \end{tabular} \caption{Analysis of the single search tree algorithms.} \label{table:AlgAnalysis} \end{table} \end{comment} \begin{comment} \begin{table}[ht] \small \centering \begin{tabular}{cc | c | c | c | c | c | c } \multicolumn{2}{c}{} & \multicolumn{6}{c}{Time[GAP]}\\ \cmidrule{3-8} \multicolumn{2}{c}{} & \multicolumn{3}{c}{$|I|=2$} & \multicolumn{3}{c}{$|I|=4$} \\ \cmidrule{3-5} \cmidrule{6-8} $|J|$ & $|S|$ & Int-L** & B\&C-W & B\&C-S & Int-L** & B\&C-W & B\&C-S \\ \toprule 10 &1 &1.8 &1.2 &0.3 &78.1 &0.5 &0.4 \\ &5 &8.7 &3.0 &1.2 &906.5 &2.5 &2.3 \\ &10 &15.3 &6.0 &2.3 &2213.0\color{blue} $^\dagger$ [4.6] &3.9 &4.6 \\ &50 &90.9 &27.4 &10.7 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [21.0] &24.4 &19.6 \\ &100 &217.9 &50.0 &22.0 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [7.4] &41.9 &33.5 \\ &500 &1318.3 &329.3 &111.0 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [19.0] &268.3 &205.2 \\ \midrule &Average&275.5 &69.5 &24.6 &2409.0 [8.7] &56.9 &44.3 \\ \midrule 14 &1 &48.9 &4.3 &1.5 &2403.9\color{blue} $^\dagger$\tilim[4.7] &0.9 &0.9 \\ &5 &229.5 &16.3 &5.7 &2402.8\color{blue} $^\dagger$\tilim [2.6] &5.6 &3.1 \\ &10 &284.7 &37.7 &9.0 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [7.6] &12.4 &5.9 \\ &50 &1850.6 &186.0 &31.6 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [8.2] &88.0 &41.9 \\ &100 &2810.4\color{blue} $^\dagger$\tilim [6.1] &411.0 &70.2 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [10.3] &189.5 &75.2 \\ &500 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [11.5] &2431.9\color{blue} $^\dagger$ [5.6] &494.8 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [16.4] &854.7 &504.9 \\ \midrule &Average&1471.0 [2.9] &514.5 [0.9] &102.1 &3201.8 [8.3] &191.8 &105.3 \\ \midrule 18 &1 &1358.6\color{blue} $^\dagger$ [2.3] &208.3 &15.2 &1346.8\color{blue} $^\dagger$ [6.5] &1.5 
&1.3 \\ &5 &229.5 &16.3 &5.7 &2402.8\color{blue} $^\dagger$\tilim [2.6] &5.6 &3.1 \\ &10 &3477.2\color{blue} $^\dagger$ [4.3] &2184.0 &113.4 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [8.7] &116.4 &35.2 \\ &50 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [8.4] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [8.7] &1138.1 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [9.9] &458.2 &148.5 \\ &100 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [9.3] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [9.8] &2298.7 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [11.9] &943.7 &285.6 \\ &500 &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [10.4] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [10.6] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [7.7] &\color{blue} $^\dagger$\tilim\color{blue} $^\dagger$ [13.3] &2804.5\color{blue} $^\dagger$\tilim [2.9] &1318.9 \\ \midrule &Average&2663.0 [5.8] &3066.1 [4.8] &1195.3 [1.3] &3025.8 [8.8] &721.6 [0.5] &298.8 \\ \bottomrule \multicolumn{1}{c}{} & \multicolumn{7}{l}{Each dagger sign indicates one instance hitting the time limit with an integer feasible solution.} \\ \multicolumn{1}{c}{} & \multicolumn{7}{l}{The time information associated with a cell is skipped if all three instances hit the time limit.} \\ \end{tabular} \caption{Empirical analysis of different Benders cuts.} \label{table:CompPerf2} \end{table} \end{comment} \begin{table}[!t] \small \centering \caption{Average computation time in seconds over 3 instances for various types of cuts.} \label{table:CompPerf2} \vspace{3ex} \begin{tabular}{rr | rrr | rrr } \multicolumn{2}{c}{} & \multicolumn{3}{c}{2 facilities} & \multicolumn{3}{c}{4 facilities} \\ \cmidrule{3-5} \cmidrule{6-8} & &L-shaped &B\&C &B\&C &L-shaped &B\&C &B\&C \\ Tasks &Scenarios &integer&nogood &analytic &integer&nogood &analytic \\ & &cuts only&cuts &cuts &cuts only &cuts &cuts \\ \toprule 10 &1 &1.8 &1.2 &0.3 &78.1 &0.5 &0.4 \\ &5 
&8.7 &3.0 &1.2 &906.5 &2.5 &2.3 \\ &10 &15.3 &6.0 &2.3 &2213.0$^\dagger$ \hspace{-1.75ex} &3.9 &4.6 \\ &50 &90.9 &27.4 &10.7 &* &24.4 &19.6 \\ &100 &217.9 &50.0 &22.0 &* &41.9 &33.5 \\ &500 &1318.3 &329.3 &111.0 &* &268.3 &205.2 \\ \midrule 14 &1 &48.9 &4.3 &1.5 &2403.9$^{\dagger\dagger}$ \hspace{-2.75ex} &0.9 &0.9 \\ &5 &229.5 &16.3 &5.7 &2402.8$^{\dagger\dagger}$ \hspace{-2.75ex} &5.6 &3.1 \\ &10 &284.7 &37.7 &9.0 &* &12.4 &5.9 \\ &50 &1850.6 &186.0 &31.6 &* &88.0 &41.9 \\ &100 &2810.4$^{\dagger\dagger}$ \hspace{-2.75ex} &411.0 &70.2 &* &189.5 &75.2 \\ &500 &* &2431.9$^\dagger$ \hspace{-1.75ex} &494.8 &* &854.7 &504.9 \\ \midrule 18 &1 &1358.6$^\dagger$ \hspace{-1.75ex} &208.3 &15.2 &1346.8$^\dagger$ \hspace{-1.75ex} &1.5 &1.3 \\ &5 &229.5 &16.3 &5.7 &2402.8$^{\dagger\dagger}$ \hspace{-2.75ex} &5.6 &3.1 \\ &10 &3477.2$^\dagger$ \hspace{-1.75ex} &2184.0 &113.4 &* &116.4 &35.2 \\ &50 &* &* &1138.1 &* &458.2 &148.5 \\ &100 &* &* &2298.7 &* &943.7 &285.6 \\ &500 &* &* &* &* &2804.5$^{\dagger\dagger}$ \hspace{-2.75ex} &1318.9 \\ \bottomrule \multicolumn{1}{r}{\vspace{-2ex}}\\ \multicolumn{8}{l}{\footnotesize$^\dagger$Average excludes one instance that exceeded an hour in computation time.} \\ \multicolumn{8}{l}{\footnotesize$^{\dagger\dagger}$Average excludes two instances that exceeded an hour.} \\ \multicolumn{8}{l}{\footnotesize$^*$All three instances exceeded an hour.} \end{tabular} \end{table} \section{Conclusion} \label{Sec:Conclusion} In this study, we applied logic-based Benders decomposition (LBBD) to two-stage stochastic optimization with a scheduling task in the second stage. While Benders decomposition is often applied to such problems, notably in the integer L-shaped method, the necessity of generating classical Benders cuts requires that the subproblem be formulated as a mixed integer/linear programming problem and cuts generated from its continuous relaxation. 
We observed that this process incurs substantial computational overhead that LBBD avoids by generating logic-based cuts directly from a constraint programming model of the scheduling subproblem. Although the integer cuts used with the L-shaped method can be regarded as a special case of logic-based Benders cuts, they are extremely weak, even weaker than simple nogood cuts often used in an LBBD context. Furthermore, the type of subproblem analysis that has been used for past applications of LBBD permits much stronger logic-based cuts to be derived, again without the overhead of obtaining a continuous relaxation. Computational experiments found that, due to these factors, LBBD solves a generic stochastic planning and scheduling problem much more rapidly than the integer L-shaped method. The speedup is several orders of magnitude when a branch-and-check variant of LBBD is used. This outcome suggests that LBBD could be a promising approach to other two-stage stochastic and robust optimization problems with integer or combinatorial recourse, particularly when the subproblem is relatively difficult to model as an integer programming problem. \begin{comment} In this study, we introduce stochastic planning and scheduling problem \ensuremath{\left(\mathbf{SPSP}\right)}{} which is an extension of its deterministic counterpart proposed by \cite{hooker2007planning}. To the best of our knowledge, our study is the first in the stochasic programming literature in which the scheduling decisions are treated as recourse decisions. We consider three variants of \ensuremath{\left(\mathbf{SPSP}\right)}{}, derive strong cuts for the newly introduced makespan problem and propose logic-based Benders decomposition algorithms to solve these problems efficiently. Our computational study shows the effectiveness of our LBBD algorithms. 
Overall, we believe that our study underlines the importance of exploiting problem structure with the use of decomposition-based hybrid methods in solving two-stage stochastic programs with integer recourse. We conclude this paper by listing the possible directions that we can follow before its submission to a journal. First, the proposed solution method can be improved in several ways. We can make use of some of the techniques mentioned in Section \ref{Sec:Algorithm:Enhancements}, such as including initial Benders cuts within the master problem, cut aggregation, and cut strengthening through resolving the subproblem. Second, we can implement improved integer L-Shaped method of \cite{angulo2016improving} as a more recent benchmark against our LBBD method. Lastly, we can generalize Lemma \ref{lemma1} so that it is valid for problems with hard deadlines. \end{comment} \clearpage
\section{Introduction} Sequential decision making under uncertainty lies at the heart of many real-world problems, ranging from portfolio management to robotic control. Many models and methods have been proposed to study the dynamic decision-making process, among which Markov decision processes (MDPs) \cite{puterman_mdp} and online optimization \cite{shai_online}, together with their variants, are the most popular and well-studied ones. Based on the Bellman principle \cite{Bellman:1954uq}, solving an MDP relies on backward induction using dynamic programming, where the future is taken into account when making decisions. For online settings, where the transition kernel and/or the reward function are unknown, reinforcement learning (RL) algorithms \cite{Sutton:2018wc} come into play, most of which are, in one way or another, based on the dynamic programming idea. Combined with linear or nonlinear function approximators \cite{Tsitsiklis97TD,silver16go,tao_multiRL}, dynamic-programming-based RL methods such as $Q$-learning \cite{Watkins:1992jx} and actor-critic \cite{konda99two} have brought about many empirical successes. On the other hand, online optimization operates in a forward fashion, where decisions are made based on history and no information regarding the future is revealed during the process. In online optimization, the solution concept rests on optimality in hindsight, widely referred to as the no-regret property, as we shall introduce in the background. Closely related to the no-regret idea, Blackwell approachability \cite{blackwell56} gives a geometric interpretation of how regret vanishes over time in online decision-making. As pointed out in \cite{perchet14blackwell,abernethy11approach}, approachability and no-regret are equivalent, and we can develop no-regret algorithms based on the geometric intuition of approachability. As we will show in this paper, this connection can be made explicit by considering a Blackwell game with vector-valued payoffs measuring the regret.
Instead of studying MDPs from an online optimization perspective, most prior works focus on online versions of MDPs, where the transition and/or reward are time-varying, referred to as online MDPs \cite{shimkin09onlinemdp,mansour09onlinemdp} or non-stationary RL \cite{NEURIPS2019_859b00ae,silva06}. In this setting, the no-regret idea plays an important role, and we see no difficulty in extending our approachability framework to these problems, since we adopt the online learning viewpoint. Related to our work, \cite{kash20no_regretQ} also applies the no-regret idea to MDP problems and provides theoretical guarantees for offline settings. As shown in that paper, the convergence of the proposed method relies on a no-absolute-regret algorithm, such as follow-the-regularized-leader (FTRL) with a linear cost. We argue that such a no-regret method is a special case of our Blackwell-approachability-based framework. In this paper, we take a step toward understanding MDPs from the perspective of online optimization. We construct an auxiliary Blackwell game for the MDP, so that we can leverage online optimization methods based on regret minimization. Our main contributions include: 1) we give a no-regret value iteration algorithm based on Blackwell approachability, which we term Blackwell value iteration, and show that it provides the same asymptotic convergence guarantee as classical value iteration in discounted MDPs; 2) we extend this idea to the RL domain with unknown transitions and rewards, which accounts for online learning problems. Like $Q$-learning, our proposed method, Blackwell $Q$-learning, does not require any prior information nor any access to a state sampling distribution. Hence, rather than an asynchronous version of value iteration \cite{Bertsekas1996NeurodynamicP}, our Blackwell $Q$-learning is indeed an RL algorithm based on the approachability idea.
To the best of our knowledge, this is the first work that interprets an MDP as a Blackwell game, which leads to provably convergent learning algorithms. The rest of the paper is organized as follows. We first introduce some preliminaries, including Blackwell approachability and no-regret learning, in \cref{sec:back}. We then move to our proposed methods based on Blackwell approachability in \cref{sec:black}, where we give both value-iteration-like and $Q$-learning-like algorithms for offline planning and online learning problems. Our theoretical analysis is supported by numerical examples presented in \cref{sec:num}. Finally, we conclude the paper in \cref{sec:conclusion}. Due to space limitations, all proofs are deferred to the supplementary material\footnote{Supplementary materials and code can be found at the repository site: \url{https://github.com/TaoLi-NYU/Blackwell-Online-Learning}.}. \section{Background}\label{sec:back} \subsection{Markov Decision Process} An infinite-horizon discounted MDP can be characterized by a tuple $\left\langle \mathcal{S},\mathcal{A}, \mathbb{P}, r, \gamma \right\rangle$, where $\mathcal{S}$ is the finite state set; $\mathcal{A}$ is the finite action set; $\mathbb{P}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})$ is the transition probability, where $\Delta(\mathcal{S})\subset \mathbb{R}^{|\mathcal{S}|}$ denotes the simplex over $\mathcal{S}$; $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is the reward function; and $\gamma\in(0,1)$ is the discount factor. For a given policy $\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})$, the total expected reward starting from an initial state $s\in \mathcal{S}$ is defined as $V^\pi(s)=\mathbb{E}_{\mathbb{P},\pi}[\sum_{k=1}^\infty \gamma^k r(s_k,a_k)]$.
If we denote by $\pi(s,a)$ the probability of choosing $a$ at state $s$, then by the Bellman principle \cite{Bellman:1954uq}, $V^\pi$ can also be written as \begin{align*} V^\pi(s)=\sum_{a\in \mathcal{A}}\pi(s,a)\bigg[r(s,a)+\gamma\mathbb{E}_{s'\sim \mathbb{P}(\cdot|s,a)}[V^\pi(s')]\bigg], \end{align*} where we denote $Q^\pi(s,a)=r(s,a)+\gamma\mathbb{E}_{s'\sim \mathbb{P}(\cdot|s,a)}[V^\pi(s')]$, known as the $Q$ function or $Q$ table. The goal is to find an optimal policy $\pi^*$ such that $V^{\pi^*}(s)\geq V^\pi(s)$ for all $s\in \mathcal{S}$ and all policies $\pi$. Since we focus on finite cases throughout this paper, all functions introduced above are of finite dimension. To better present our work, we use the following notations. For $Q\in\mathbb{R}^{|S||A|}$, $Q(s):=[Q(s,a)]_{a\in \mathcal{A}}$ denotes the corresponding vector in $\mathbb{R}^{|A|}$. Similarly, for $\pi\in \mathbb{R}^{|S||A|}$, $\pi(s)$ denotes a vector in $\Delta(\mathcal{A})$. Finally, we assume that the Markov chain induced by any stationary policy is aperiodic and irreducible, which is a common assumption in reinforcement learning \cite{Sutton:2018wc}. \subsection{Blackwell Approachability} Blackwell approachability theory \cite{blackwell56} was developed for studying repeated games between two players with vector-valued payoffs. In such a game, which we refer to as a \textit{Blackwell game}, at the $k$-th round, Player 1 and Player 2 select actions $x_k\in \mathcal{X}$ and $y_k\in \mathcal{Y}$, respectively, and then Player 1 incurs the vector-valued payoff $u(x_k,y_k)\in \mathbb{R}^m$, where $u:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^m$ is a bi-affine function. We assume that the action sets $\mathcal{X}, \mathcal{Y}$ are compact and convex. The objective of Player 1 is to guarantee that the average payoff converges to a desired closed convex target set $\mathcal{D}\subset \mathbb{R}^m$.
We let $d(x,\mathcal{D}):=\inf_{z\in \mathcal{D}}\|x-z\|$ denote the distance between a point $x\in\mathbb{R}^m$ and the set $\mathcal{D}$ under the norm $\|\cdot\|$. For the Blackwell game $\left\langle\mathcal{X},\mathcal{Y}, u, \mathcal{D}\right\rangle$, an approachable set for Player 1 is defined as follows. \begin{definition}[Approachable Set \cite{blackwell56}] A set $\mathcal{D}$ is said to be approachable for Player 1 if there exists an algorithm $\sigma_k(\cdot):\mathcal{X}^{k}\times\mathcal{Y}^{k}\rightarrow\mathcal{X}$ which chooses an action at each round based on the history of play, $x_k=\sigma_{k}(x_{0:k-1},y_{0:k-1})$, such that for any sequence $\{y_k\}_{k=1}^K$, $\lim_{K\rightarrow\infty}d(\frac{1}{K}\sum_{k=1}^Ku(x_k,y_k),\mathcal{D})=0$. \end{definition} A key concept in Blackwell approachability theory is the approachable halfspace, defined below. \begin{definition}[Approachable Halfspace \cite{blackwell56}] A halfspace $\mathcal{H}:=\{z \in \mathbb{R}^m \mid a^\mathsf{T} z\leq b\}$ for some $a\in \mathbb{R}^m, b\in \mathbb{R}$ is approachable for Player 1 if there exists $x^*\in \mathcal{X}$ such that for all $y\in \mathcal{Y}$, $u(x^*,y)\in \mathcal{H}$. \end{definition} Blackwell's approachability theorem states that $\mathcal{D}$ is approachable if and only if every halfspace $\mathcal{H}$ containing $\mathcal{D}$ is approachable. Based on this theorem, we can construct a Blackwell strategy that guarantees approachability, as shown in \cite{blackwell56}. We denote the average payoff up to time $k$ by $\bar{u}_k:= \frac{1}{k}\sum_{i=1}^k u(x_i, y_i)$ and the projection operator onto the set $\mathcal{D}$ by $P_\mathcal{D}(x):=\{z\in \mathcal{D}:\|z-x\|=d(x,\mathcal{D})\}$. Since we deal with closed convex sets, $P_\mathcal{D}(x)$ is a singleton.
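For the target set used later, the nonpositive orthant $\mathbb{R}^m_-$, both the projection and the distance have simple closed forms; the following sketch (the test vectors are arbitrary and purely illustrative) verifies them together with the supporting-halfspace inequality:

```python
import numpy as np

def proj_nonpos(x):
    """Euclidean projection onto the nonpositive orthant: clip positive entries to 0."""
    return np.minimum(x, 0.0)

def dist_nonpos(x):
    """Distance to the nonpositive orthant: the norm of the positive part [x]^+."""
    return np.linalg.norm(np.maximum(x, 0.0))

x = np.array([1.5, -2.0, 0.5])           # a point outside the orthant
p = proj_nonpos(x)

# The projection residual x - p is exactly the positive part [x]^+.
assert np.allclose(x - p, np.maximum(x, 0.0))
assert np.isclose(dist_nonpos(x), np.linalg.norm(x - p))

# Supporting halfspace: <z - p, x - p> <= 0 for any z in the orthant.
z = np.array([-0.5, -1.0, -0.2])
assert np.dot(z - p, x - p) <= 1e-12
```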
If $\mathcal{D}$ is approachable, then the halfspace $\mathcal{H}:=\{z:\left\langle z, \bar{u}_k-P_\mathcal{D}(\bar{u}_k)\right\rangle\leq 0\}$ is approachable. Therefore, there exists $x^*\in\mathcal{X}$ such that for all $y$, $u(x^*,y)\in \mathcal{H}$; hence, if we let $x_{k+1}=x^*$, then $u(x_{k+1},y_{k+1})$ falls into the same halfspace as the set $\mathcal{D}$. By doing so, we bring $\bar{u}_{k+1}$ closer to the set, as shown in Fig.~\ref{fig:approach}; repeating the same procedure at each round drives the average payoff to $\mathcal{D}$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{approach.pdf} \caption{The Blackwell strategy ensures that the next iterate $u(x_{k+1},y_{k+1})$ always falls within the halfspace $\mathcal{H}$, no matter what $y_{k+1}$ is. } \label{fig:approach} \end{figure} To better present our idea of leveraging Blackwell approachability and the no-regret idea to solve RL problems, we first consider an example of online learning, which is a repeated game between the player and nature. At each time $k$, the player chooses an action $x_k\in\Delta^m\subset\mathbb{R}^m$, a simplex in $\mathbb{R}^m$, while nature chooses a payoff vector $y_k\in \mathbb{R}^m$, which evaluates the action through the revealed payoff $\left\langle x_k,y_k\right\rangle$. Here, $m$ is a positive integer. Then, the regret for not having played action $e_i\in \Delta^m$ at time $k$ is given by $y_k(i)-\left\langle x_k,y_k\right\rangle$, measuring the difference between the counterfactual outcome of $e_i$ and the received payoff, where $y_k(i)$ is the $i$-th element of the vector $y_k$.
Naturally, one would like to have a sequence of $\{x_k\}_{k}$ that achieves the best possible result: \begin{align}\label{eq:noreg} \lim_{K\rightarrow\infty}\frac{1}{K}\max_{x\in \Delta^m}\sum_{k=1}^K (\langle x,y_k\rangle-\langle x_k, y_k\rangle) =0, \end{align} showing that the sequence yields the same average performance as the best action in hindsight and a sequence is said to achieve no regret if it satisfies \eqref{eq:noreg}. One way to construct such no-regret sequences is to leverage Blackwell approachability as we show in the following. We consider the Blackwell game $\left\langle\Delta^m, \mathbb{R}^m, u, \mathbb{R}^m_{-} \right\rangle $, where $u:\Delta^m\times\mathbb{R}^m\rightarrow\mathbb{R}^m$ is \begin{equation*} \begin{aligned} &\left({x}_{k}, {y}_{k}\right) \mapsto y_k-\left\langle {x}_{k},{y}_{k}\right\rangle \mathbf{1}_m\\ &\qquad\qquad\qquad=\left(y_{k}(1)-\left\langle{x}_{k}, {y}_{k}\right\rangle, \ldots,y_{k}(m)-\left\langle{x}_{k}, {y}_{k}\right\rangle\right), \end{aligned} \end{equation*} where $\mathbf{1}_m\in\mathbb{R}^m$ is an all-ones vector. We note that such $u$ measures the change in regret incurred at time $k$. 
If we adopt the Blackwell strategy, we aim to find $x\in\Delta^m$ such that for all $y$, $$\left\langle u(x,y), \bar{u}_k-P_{\mathbb{R}^m_{-}}(\bar{u}_k)\right\rangle=\left\langle u(x,y), [\bar{u}_k]^+\right\rangle\leq 0,$$ where $([u]^{+})_i:=\max\{u_i,0\}.$ If we let \begin{align}\label{eq:rm} \mathcal{RM}(x_{1:k},y_{1:k})=\left\{ \begin{aligned} &[\bar{u}_k]^+/\|[\bar{u}_k]^+\|_1, &&\text{if } [\bar{u}_k]^+\neq0, \\ &\text{any point in } \Delta^m, &&\text{otherwise}, \end{aligned}\right.\tag{\text{RM}} \end{align} then we obtain that, for $x_{k+1}=\mathcal{RM}(x_{1:k},y_{1:k})$, \begin{align*} \left\langle u(x_{k+1},y), [\bar{u}_k]^+\right\rangle &=\left\langle y-\left\langle x_{k+1},y\right\rangle\mathbf{1},[\bar{u}_k]^+ \right\rangle\\ &=\left\langle y , [\bar{u}_k]^+\right\rangle-\left\langle[\bar{u}_k]^+ ,y\right\rangle=0, \end{align*} where the last step uses $\left\langle \mathbf{1},[\bar{u}_k]^+\right\rangle=\|[\bar{u}_k]^+\|_1$, showing that \eqref{eq:rm} is indeed a Blackwell strategy. Intuitively, this strategy outputs the next action $x_{k+1}$ in proportion to the current cumulative regret $\bar{u}_k$: actions with larger regret should be played more frequently, as they would have brought better payoffs. Hence, it is also referred to as regret matching (RM) and has been studied in various contexts, including game theory \cite{hart00regret_match, hart03regret_cont_time} and online optimization \cite{perchet14blackwell}. \section{Blackwell $Q$-learning}\label{sec:black} In this section, we present how to incorporate the Blackwell approachability framework into the MDP problem through the Blackwell game introduced above. \subsection{Blackwell Value Iteration: Offline Planning} We first address the planning problem for an MDP, where the transition kernel and reward function are known. From the dynamic programming perspective, to solve such an MDP, we resort to either value iteration or policy iteration, both of which rely on the stationarity of the environment.
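As a quick numerical check of the regret-matching strategy \eqref{eq:rm}, the sketch below plays RM against payoff vectors drawn i.i.d.\ uniformly from $[-1,1]^m$ (an arbitrary choice for illustration); the average regret against the best fixed action indeed vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, K = 4, 20000
cum_regret = np.zeros(m)          # cumulative regret vector, sum_k u(x_k, y_k)

for k in range(K):
    pos = np.maximum(cum_regret, 0.0)
    # Regret matching: play in proportion to the positive cumulative regret,
    # and play uniformly when no action has positive regret.
    x = pos / pos.sum() if pos.sum() > 0 else np.full(m, 1.0 / m)
    y = rng.uniform(-1.0, 1.0, size=m)            # payoff vector chosen by nature
    cum_regret += y - np.dot(x, y)                # instantaneous regret u(x_k, y_k)

avg_regret = cum_regret.max() / K
assert avg_regret < 0.1   # no-regret: average regret is (near) nonpositive
```

Here the $O(1/\sqrt{K})$ decay of the distance to $\mathbb{R}^m_-$ makes the final assertion safe at $K=20000$.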
However, as we have mentioned before, online learning (optimization) methods operate in a forward fashion for non-stationary or time-varying systems. In this subsection, we show that under stationary environments, online learning methods also guarantee the optimality of the solutions. As for implementation, we first initialize a $Q$-value table, $Q_0$, and an initial policy $\pi_0(s)$ for each $s\in\mathcal{S}$. We run $|\mathcal{S}|$ copies of the algorithm, i.e., one for each state $s\in\mathcal{S}$, and we iteratively reveal the rewards $r(s,a)$ for all $a\in \mathcal{A}$, which further translate into the payoffs in the online learning problem induced by the MDP. In this algorithm, for each state $s\in\mathcal{S}$, we view the policy $\pi_k(s)\in \Delta(\mathcal{A})$ as the decision variable and $Q_k(s)=(Q_k(s,a))_{a\in \mathcal{A}}$ as the payoff vector, which is obtained by expected SARSA \cite{seijen09SARSA} \begin{align}\label{eq:expectedSARSA} Q_{k}(s,a)=r(s,a)+\gamma\mathbb{E}_{s'\sim \mathbb{P}(\cdot|s,a), a'\sim \pi_{k-1}}[Q_{k-1}(s',a')] . \end{align} In this case, the payoff of the decision $\pi_k(s)$ is given by $\left\langle \pi_k(s),Q_k(s) \right\rangle$. Similar to the Blackwell game in \cref{sec:back}, we can also construct an approachability game for the MDP. For the decision $\pi_k(s)$ and the feedback $Q_k(s)$, we define the cost $\mathcal{R}:\Delta(\mathcal{A})\times \mathbb{R}^{|A|}\rightarrow\mathbb{R}^{|A|}$ as \begin{align*} (\pi_k(s),Q_k(s))\mapsto Q_k(s)-\left\langle \pi_k(s),Q_k(s)\right\rangle\mathbf{1}_{|A|}. \end{align*} We note that for given $\pi$ and $Q$, the $i$-th entry of $\mathcal{R}(\pi(s),Q(s))$ measures the quality of the policy $\delta(a_i)$, i.e., simply choosing $a_i$, compared with the current policy $\pi$. Intuitively, a larger $\mathcal{R}_i$ implies that $\delta(a_i)$ could have given a better payoff, had it been implemented.
It is not surprising that when using a Blackwell strategy such as \eqref{eq:rm}, $\pi_{k+1}=\mathcal{RM}(\pi_{0:k}, Q_{0:k})$, we drive the averaged regret $\bar{\mathcal{R}}_n=\frac{1}{n}\sum_{k=0}^{n-1}\mathcal{R}(\pi_k(s),Q_k(s))$ to the nonpositive orthant $\mathbb{R}^{|A|}_{-}$ in the limit; i.e., no action can produce a positive regret, showing that the limiting point achieves optimality. Since we consider the average regret under the Blackwell framework, our convergence result in \cref{prop:offline} is also about the average $Q$ tables. \begin{proposition} \label{prop:offline} Let $\bar{Q}_n=\frac{1}{n} \sum_{k=1}^n Q_k$ and let $Q^*$ be the $Q$ table under the optimal policy; then $\lim_{n\rightarrow\infty}\bar{Q}_n=Q^*.$ \end{proposition} A similar result has been shown in \cite{kash20no_regretQ} when applying FTRL. Compared with their approach, our Blackwell-approachability-based method is in fact more generic. We argue that for the linear cost considered in that paper, i.e., $\left\langle \pi(s), Q(s) \right\rangle$, it can be shown that FTRL is equivalent to the RM strategy proposed here. The details are included in the supplementary material and are mainly based on the connection between Blackwell approachability and online linear optimization studied in \cite{abernethy11approach}. On the other hand, though RM is probably the most natural Blackwell strategy, it is by no means the only one. In the supplementary material, we show that various online linear optimizers, including online gradient descent and mirror descent, can all be leveraged to construct Blackwell strategies, offering considerable freedom in algorithm design. \subsection{Blackwell Q-learning: Online Learning} Though intuitive and provably convergent, Blackwell value iteration only applies to the offline setting where full information about the MDP is known.
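A minimal sketch of the synchronous scheme on an invented two-state, two-action MDP (all numbers below are illustrative, not from the paper): expected SARSA \eqref{eq:expectedSARSA} supplies the payoff vectors, regret matching updates the per-state policies, and the running average $\bar{Q}_n$ approaches $Q^*$ obtained from standard value iteration, consistent with \cref{prop:offline}:

```python
import numpy as np

gamma = 0.9
# Toy MDP: P[s, a, s'] and r[s, a] are invented for illustration.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.9, 0.1], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])

# Reference Q* from standard value iteration.
Qstar = np.zeros((2, 2))
for _ in range(2000):
    Qstar = r + gamma * P @ Qstar.max(axis=1)

# Blackwell value iteration: expected SARSA + per-state regret matching.
pi = np.full((2, 2), 0.5)
Q = np.zeros((2, 2))
cum_reg = np.zeros((2, 2))
Qbar = np.zeros((2, 2))
n = 50000
for k in range(1, n + 1):
    V = (pi * Q).sum(axis=1)                     # V(s) = <pi(s), Q(s)>
    Q = r + gamma * P @ V                        # expected SARSA update
    cum_reg += Q - (pi * Q).sum(axis=1, keepdims=True)
    pos = np.maximum(cum_reg, 0.0)
    rows = pos.sum(axis=1, keepdims=True)
    pi = np.where(rows > 0, pos / np.maximum(rows, 1e-12), 0.5)
    Qbar += (Q - Qbar) / k                       # running average of Q_k

assert np.allclose(Qbar, Qstar, atol=0.2)        # average Q approaches Q*
```

The transient iterates are averaged out at rate $1/n$, which is why only a loose tolerance is asserted.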
However, in standard RL problems, the agent is required to find the optimal policy without any access to the transition probabilities and reward function. Hence, we need an online version of such approachability-based algorithms. As pointed out in \cite{kash20no_regretQ}, developing such online learning schemes is not straightforward, and there are two major challenges. The first concerns reward revelation: in Blackwell value iteration, we require that at each iteration, the rewards $r(s,a)$ for all actions $a\in \mathcal{A}$ at a given state $s\in \mathcal{S}$ be revealed for updating the $Q$-table according to \eqref{eq:expectedSARSA}. However, in RL, since the reward function is unknown, we only have access to the feedback corresponding to the action actually executed at each iteration. Clearly, such bandit feedback cannot produce the regret vector $\bar{\mathcal{R}}_k$. One workaround proposed in \cite{kash20no_regretQ} is to use the importance sampling technique from multi-armed bandit problems \cite{slivkins19bandit}. The update rule becomes $Q_{k+1}(s,a)=\left(r(s,a)+\gamma\mathbb{E}_{\pi_k}[Q_k(s',a')]\right)/\pi_{k}(s,a)$ if $a$ is the action sampled from $\pi_k(s)$ at state $s$, and $Q_{k+1}(s,a)=0$ otherwise. Unfortunately, as in bandit problems, the incorporation of importance sampling can only ensure convergence in expectation, a guarantee weaker than the almost sure convergence of $Q$-learning and less desirable in practice. The other challenge is that the asynchronous update in online learning is more involved than the synchronous one. In Blackwell value iteration, we update every state at every iteration, whereas in an online setting such a synchronous update is impossible, which introduces additional complexity into the convergence analysis. Unlike in the synchronous case, during the first $k$ steps the entries $Q_k(s)$ may have been updated at different time instants for different states.
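The importance-sampling estimator behind this workaround can be checked for unbiasedness in isolation; in the Monte Carlo sketch below (the policy and per-action targets are arbitrary illustrative values), dividing the observed feedback by $\pi_k(s,a)$ recovers the full vector in expectation:

```python
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.2, 0.3, 0.5])     # behavior policy over 3 actions (arbitrary)
q = np.array([1.0, -2.0, 0.5])     # true per-action targets (arbitrary)

# Estimator: q(a)/pi(a) for the sampled action a, and 0 for all other actions.
n = 200000
a = rng.choice(3, size=n, p=pi)
vals = q[a] / pi[a]
est = np.array([vals[a == i].sum() for i in range(3)]) / n

# Unbiasedness: E[ q(a)/pi(a) * 1{a = i} ] = q(i) for every action i.
assert np.allclose(est, q, atol=0.05)
```

As the text notes, the price of this unbiasedness is variance: each estimate scales with $1/\pi_k(s,a)$, which is why only convergence in expectation is obtained.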
Moreover, it is highly likely that some states are visited more frequently than others. One straightforward remedy is to require all states to be visited with the same frequency, i.e., the state to be updated at each iteration is chosen uniformly from $\mathcal{S}$. With this additional condition, the asynchronous version of value iteration still guarantees convergence, as shown in \cite{kash20no_regretQ}, though it still falls within the realm of offline planning, since in online settings the state transition is not fixed but is influenced by the chosen actions. In this subsection, we propose an online learning scheme based on Blackwell approachability. We address the issues above via two-timescale asynchronous stochastic approximation \cite{Borkar:2009ts}. By leveraging the Lyapunov stability theory of differential inclusions developed in \cite{benaim05SADI,benaim06SADI}, we show that adopting the Blackwell strategy in the asynchronous update gives a provably convergent online learning scheme for tackling RL problems. As discussed above, in the online setting the state transition is influenced by the executed actions, sampled from the policy, which in turn influences the update of the $Q$-table. These coupled dynamics of the policy update and the $Q$-table update make it difficult to directly extend value iteration to online learning. One way to decouple the two dynamics is to adjust their timescales. Simply put, we update the $Q$-table on the faster timescale and the policy on the slower one, where the faster timescale sees the slower one as quasi-static while the slower timescale sees the faster one as equilibrated. Specifically, we consider the following learning scheme based on the regret matching introduced earlier. Let $s_{k+1}\in \mathcal{S}$ and $a_{k+1}\in\mathcal{A}$ be the state and action visited at time $k+1$, upon which the agent receives a noisy reward $R_{k+1}$ from the environment.
We assume that $R_{k+1}$ is unbiased in the sense that, for the $\sigma$-fields $\mathcal{F}_{k+1}=\sigma\{a_{0:k+1},s_{0:k+1}, R_{0:k}\}$, $\mathbb{E}[R_{k+1}|\mathcal{F}_{k+1}]=r(s_{k+1},a_{k+1})$. Since states and actions are visited asynchronously, the agent needs the asynchronous counters $\phi_k(s,a):=\sum_{i=1}^k\mathbbm{1}_{\{(s_i,a_i)=(s,a)\}}, \psi_k(s):=\sum_{i=1}^k\mathbbm{1}_{\{s_i=s\}}$, together with the step sizes $\{\alpha(k)\}_{k\in \mathbb{N}}, \{\beta(k)\}_{k\in \mathbb{N}}$, to determine the learning rates. Based on the above, the agent estimates the $Q$ function by \eqref{onpolicy} and then updates its policy by \eqref{emp_freq}. Finally, the agent chooses an action based on the regret matching idea in \eqref{eq:rm}, i.e., sampling an action from the probability distribution proportional to $[ \mathcal{R}(\pi_{k}(s), Q_{k}(s))]^{+}$, which, with a slight abuse of notation, we denote by $\mathcal{RM}(\pi_{k}(s),Q_{k}(s))$. We summarize the scheme in the following: for every $s\in\mathcal{S}$ and $a\in\mathcal{A}$, \begin{align} &Q_{k+1}(s,a)=Q_k(s,a)+\alpha(\phi_{k+1}(s,a))\nonumber\\ &\qquad\cdot\mathbbm{1}_{\{(s,a)=(s_{k+1},a_{k+1})\}}[R_{k+1}+\gamma V_k(s_{k+2})-Q_k(s,a)],\label{onpolicy}\\ &\pi_{k+1}(s)=\pi_{k}(s)+\beta(\psi_{k+1}(s))\mathbbm{1}_{\{s=s_{k+1}\}}[e_{a_{k+1}}-\pi_k(s)],\label{emp_freq}\\ & a_{k+1}\sim \mathcal{RM}(\pi_{k}(s),Q_{k}(s)),\label{sampling} \end{align} where $V_k(s)=\sum_{a\in \mathcal{A}}\pi_k(s,a)Q_k(s,a)$, and $e_{a_{k+1}}$ is the unit vector in $\mathbb{R}^{|A|}$ corresponding to action $a_{k+1}$. We note that, unlike Blackwell value iteration, we do not rely on the entire history of $\pi_k$ and $Q_k$, as the stochastic approximation schemes \eqref{onpolicy} and \eqref{emp_freq} already return averaged results. It is clear that \eqref{onpolicy} and \eqref{emp_freq} are coupled, as $V_k$ involves both $\pi_k$ and $Q_k$.
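As an illustration, a minimal Python sketch of one iteration of \eqref{onpolicy}--\eqref{sampling} follows. The concrete step-size schedules and the regret vector $Q(s,\cdot)-\langle\pi(s),Q(s,\cdot)\rangle$ used inside $\mathcal{RM}$ are our assumptions (chosen in the spirit of Hart--Mas-Colell regret matching and of the timescale condition imposed below); they are not prescribed by the text.

```python
import numpy as np

def rm_policy(pi_s, q_s):
    """Regret-matching distribution over actions at one state:
    probabilities proportional to [R(pi(s), Q(s))]^+.  The regret of
    action a is assumed to be Q(s,a) - <pi(s), Q(s,.)>."""
    regret = np.maximum(q_s - pi_s @ q_s, 0.0)
    total = regret.sum()
    if total <= 0.0:  # no positive regret: fall back to the uniform policy
        return np.full(len(pi_s), 1.0 / len(pi_s))
    return regret / total

def two_timescale_step(Q, pi, phi, psi, s, a, r, s_next, gamma=0.9):
    """One asynchronous update of (onpolicy) and (emp_freq) after visiting
    (s, a), receiving reward r and moving to s_next.  The schedules
    alpha(k) = k^{-0.6} and beta(k) = 1/k are one assumed choice with
    beta(k) = o(alpha(k)), so the Q-table runs on the faster timescale."""
    phi[s, a] += 1
    psi[s] += 1
    alpha = phi[s, a] ** -0.6            # fast step size for the Q-table
    beta = 1.0 / psi[s]                  # slow step size for the policy
    v_next = pi[s_next] @ Q[s_next]      # V_k(s') = sum_a pi(s',a) Q(s',a)
    Q[s, a] += alpha * (r + gamma * v_next - Q[s, a])
    e_a = np.zeros(pi.shape[1])
    e_a[a] = 1.0                         # unit vector for the taken action
    pi[s] += beta * (e_a - pi[s])        # empirical-frequency policy update
    return Q, pi
```

The next action would then be drawn as `np.random.default_rng().choice(pi.shape[1], p=rm_policy(pi[s_next], Q[s_next]))`, matching \eqref{sampling}.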
Technically speaking, in order to analyze the limiting behavior of the coupled dynamics, we must ``decouple'' them, and one possible approach, as proposed in \cite{Borkar:2009ts,konda99two}, is to adjust the timescales. Specifically, in our case, we require that $\beta(k)=o(\alpha(k))$, meaning that the $Q$ update \eqref{onpolicy} operates on a faster timescale than the policy update \eqref{emp_freq}. Intuitively, since the synchronous update \eqref{eq:expectedSARSA} is impossible in an online setting, in order to produce feedback $Q_k$ that approximately evaluates the current policy $\pi_k$, we must wait until $Q_k$ stabilizes before updating the policy. By running the policy update on a slow timescale, the $Q$ updates see $\pi_k$ as quasi-static, hence \eqref{onpolicy} can be viewed as expected SARSA \cite{seijen09SARSA}, while the policy update sees $Q_k$ as stabilized, serving as an approximation of $Q^{\pi_k}$. In what follows, we show that the two-timescale stochastic approximation indeed converges to the optimal $Q$ function and policy. \subsubsection{Convergence of the fast timescale} In order to solve the $Q$-learning problem defined in \eqref{onpolicy}, we resort to the stochastic approximation theory introduced in \cite{benaim05SADI,benaim06SADI}, and we first rewrite the update in a more concise form. We define an operator $\mathcal{T}:\mathbb{R}^{|S||A|}\times\mathbb{R}^{|S||A|}\rightarrow\mathbb{R}^{|S||A|}$ whose $(s,a)$ entry is given by $\mathcal{T}_{(s,a)}(\pi,Q):=r(s,a)+\gamma\,\mathbb{E}_{s'}\big[\sum_{a'\in \mathcal{A}}\pi(s',a')Q(s',a')\big]$, where the expectation is over the next state $s'$ reached from $(s,a)$, and then we define a vector $\Gamma_{k+1}\in \mathbb{R}^{|S|}$, whose $s$ entry is \begin{equation*} \begin{aligned} &\Gamma_{k+1}{(s)}\\ &\quad=\mathbbm{1}_{\{s=s_{k+1}\}}[R_{k+1}+\gamma V_k(s_{k+2})-\mathcal{T}_{(s_{k+1},a_{k+1})}(\pi_k,Q_{k})].
\end{aligned} \end{equation*} Note that \eqref{onpolicy} is equivalent to \begin{equation*} \begin{aligned} &Q_{k+1}(s,a)-Q_k(s,a)=\alpha(\phi_{k+1}(s,a))\mathbbm{1}_{\{(s,a)=(s_{k+1},a_{k+1})\}}\\ &\qquad\qquad\qquad\qquad\cdot[\mathcal{T}_{(s,a)}(\pi_k,Q_k)-Q_k(s,a)+\Gamma_{k+1}{(s)}]. \end{aligned} \end{equation*} We further define the asynchronous step sizes $\bar{\alpha}_k$ and the relative step sizes $\mu_k(s,a)$ as $$\bar{\alpha}_k:=\max_{(s,a)}\alpha(\phi_k(s,a)), \qquad \mu_k(s,a):={\alpha(\phi_k(s,a))}/{\bar{\alpha}_k}.$$ By letting $M_k$ be the $|S||A|\times |S||A|$ diagonal matrix whose $(s,a)$ diagonal entry is given by $\mu_k(s,a)$, we can rewrite the asynchronous update \eqref{onpolicy} as \begin{equation*} Q_{k+1}-Q_k=\bar{\alpha}_{k+1}M_{k+1}[\mathcal{T}(\pi_k,Q_k)-Q_k+\Gamma_{k+1}\otimes\mathbbm{1}_{|A|}], \end{equation*} where $\otimes$ denotes the Kronecker product. Denote the interpolated version of the stochastic approximation update above by $\{\bar{Q}_t(s)\}_{s\in\mathcal{S}}$ (see Definition 2.2 in \cite{Perkins12asy_SA}), where $t$ is the continuous time index. \begin{proposition} \cite{Perkins12asy_SA} There exists $0<\eta<1$ such that almost surely, the interpolated stochastic approximation $\{\bar{Q}_t(s)\}_{s\in\mathcal{S}}$ is an asymptotic pseudo-trajectory of the differential inclusion \begin{equation}\label{onpolicy_diff} {d}{Q}_t(s)/{dt}\in \Omega^{\eta}[\mathcal{T}_{s}(\pi_t, Q_t)-Q_t], \end{equation} where \begin{equation}\label{omega_eta} \Omega^{\eta}:=\left\{\operatorname{diag}(\{\omega(s)\}_{s\in\mathcal{S}}):\omega(s)\in[\eta,1],\forall\ s\in\mathcal{S}\right\}. \end{equation} \end{proposition} \begin{proposition} When $\pi$ is fixed, $Q^\pi$ is the unique global attractor of \eqref{onpolicy_diff}.
\end{proposition} \subsubsection{Convergence of the slow timescale} From the discussion above, the sequence of $Q$-tables $\{Q_k\}_k$ converges to $Q^{\pi}$ when $\pi$ is fixed, and $Q^{\pi}$ is Lipschitz continuous in $\pi$ \cite{Perkins12asy_SA}. Therefore, we can study the limiting behavior of \eqref{emp_freq} by analyzing its continuous counterpart \eqref{emp_freq_diff}, in which the $Q$-table in \eqref{emp_freq} is replaced with the attractor $Q^{\pi_t}$ of the current policy $\pi_t$, since the slow timescale views the $Q$ updates as stabilized: \begin{equation}\label{emp_freq_diff} {d}{\pi}_t(s)/{dt}\in \Omega^{\eta'}[\mathcal{RM}(\pi_t(s),Q^{\pi_t}(s))-\pi_t(s)], \end{equation} where $\Omega^{\eta^\prime}$ is defined analogously to \eqref{omega_eta}. It remains to show that the differential inclusion \eqref{emp_freq_diff} has a global attractor, which we prove by a standard Lyapunov argument; moreover, we show that this global attractor is indeed the set of optimal policies. In the following lemma, we identify the Lyapunov function associated with \eqref{emp_freq_diff}. \begin{lemma}\label{le:vec_field} For every $s\in \mathcal{S}$ and any fixed $\omega(s)$ in $\Omega^{\eta'}$, let $d{\pi}_t(s)/dt=\omega(s)[\hat{\pi}_t(s)-\pi_t(s)], \hat{\pi}_t(s)=\mathcal{RM}(\pi_t(s),Q^{\pi_t}(s))$; then $\left\langle\nabla_{\pi_t} V^{\pi_t}(s), {d}{\pi_t}(s)/{dt} \right\rangle \geq 0.$ \end{lemma} With this lemma, we now construct the Lyapunov function for \eqref{emp_freq_diff}, which further leads to the global convergence of the algorithm. First, given an optimal policy $\pi^*$, we define $L(\pi)=\sum_{s \in \mathcal{S}}\big[V^{\pi^*}(s)-V^{\pi}(s)\big].$ Clearly $L(\pi)$ is a positive semi-definite function, since optimality gives $V^{\pi^*}(s)-V^\pi(s)\geq 0$ for all $s\in \mathcal{S}$, and $L(\pi)=0$ only if $\pi$ is an optimal policy.
Then with \cref{le:vec_field}, for any $t>0$, we have \begin{align*} \left\langle \nabla_{\pi_t}L(\pi_t), {d}{\pi_t}/{dt}\right\rangle=-\sum_{s\in\mathcal{S}}\left\langle\nabla_{\pi_t}V^{\pi_t}(s),{d}{\pi_t}(s)/{dt} \right\rangle\leq 0. \end{align*} This implies that $L(\pi)$ is a Lyapunov function for the differential inclusion \eqref{emp_freq_diff}, with global attractor $\Pi=\{\pi: \pi \text{ is an optimal strategy}\}$, showing that $\pi_t$ given by \eqref{emp_freq_diff} converges to the attractor. Therefore, from the convergence result for the continuous dynamics, we obtain the convergence of the coupled dynamics \eqref{onpolicy}, \eqref{emp_freq}. \begin{proposition} The sequence $\{Q_k,\pi_k\}_{k}$ given by the coupled recursive scheme \eqref{onpolicy} and \eqref{emp_freq} converges almost surely to $(Q^{\pi^*}, \pi^*)$, where $\pi^*$ is an optimal policy and $Q^{\pi^*}$ is the associated optimal $Q$ function. \end{proposition} \section{Numerical Experiments}\label{sec:num} In this section, we present experimental results obtained by applying our Blackwell $Q$-learning to MDP problems. Since our proposed method resembles expected SARSA \cite{seijen09SARSA}, we consider the cliff-walking task from that paper, in which the agent has to find its way from the start to the goal in a grid world. The agent can take any of four movement actions: up, down, left, and right, each of which moves the agent one square in the corresponding direction. Each step results in a reward of -1, except when the agent steps into the cliff area, which results in a reward of -100 and an immediate return to the start state. The episode ends upon reaching the goal state. We evaluate the performance of $Q$-learning, SARSA, expected SARSA, and our Blackwell $Q$-learning. Note that our Blackwell $Q$-learning does not need any hyperparameter to encourage exploration, as \eqref{eq:rm} always retains some probability for actions that yield positive regret.
Hence, our method is less aggressive in terms of exploitation than the others. In our experiments, we adopt an $\epsilon$-greedy policy for the first three methods, with $\epsilon=0.1$ for $Q$-learning and SARSA; for expected SARSA, we run the algorithm with two different exploration rates, $\epsilon=0.1$ and $\epsilon=0.5$. We run each algorithm for 2000 episodes and average the results over 200 independent runs. The numerical results are shown in Fig.~\ref{fig:compare}. \begin{figure}[htp] \centering \includegraphics[width=0.42\textwidth]{compare.png} \caption{Comparison between different learning methods in cliff walking experiments: RM, Expected SARSA, SARSA and $Q$-learning.} \label{fig:compare} \end{figure} At first glance, both expected SARSA with $\epsilon=0.1$ and our Blackwell $Q$-learning give the best final performance, though expected SARSA converges faster owing to its greedy policy. However, we note that the success of expected SARSA relies on a carefully crafted exploration rate: if we set $\epsilon=0.5$, its performance is even worse than that of SARSA. This observation highlights one merit of Blackwell $Q$-learning: it is hyperparameter-free with respect to exploration. Although in our experiments Blackwell $Q$-learning does not appear to outperform expected SARSA in terms of convergence rate, because of the difference in action selection, we argue that such conservative action selection is actually more desirable for online learning problems, where the environment is non-stationary. One prominent example is learning in games \cite{fudenberg_learning}, where the payoff is jointly determined by all players' actions. In this case, if one player only seeks the best response based on his own $Q$ function, he may not achieve any equilibrium in the end, as observed in \cite{nips2005_2834}. Due to space limitations, we fully develop these arguments in the supplementary material.
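The cliff-walking dynamics described above can be sketched in a few lines of Python; the $4\times 12$ grid with the start at the bottom-left and the goal at the bottom-right is an assumption (it is the standard layout of this benchmark and is not stated explicitly in the text).

```python
class CliffWalk:
    """Minimal cliff-walking environment as described in the text:
    reward -1 per step, reward -100 and a reset to the start on entering
    the cliff, episode ends at the goal.  The 4 x 12 grid, with the cliff
    along the bottom row between start and goal, is an assumed layout."""
    H, W = 4, 12
    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up,down,left,right

    def __init__(self):
        self.start = (self.H - 1, 0)
        self.goal = (self.H - 1, self.W - 1)
        self.pos = self.start

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.H - 1)  # clip to the grid
        c = min(max(self.pos[1] + dc, 0), self.W - 1)
        if r == self.H - 1 and 0 < c < self.W - 1:     # stepped into the cliff
            self.pos = self.start
            return self.pos, -100.0, False
        self.pos = (r, c)
        return self.pos, -1.0, self.pos == self.goal
```

Any of the four agents compared above can be run against this interface by repeatedly calling `step` with the selected action.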
\section{Conclusion}\label{sec:conclusion} We have introduced a novel approach for tackling MDP problems based on Blackwell approachability theory. By constructing an auxiliary Blackwell game, we use its geometric interpretation to solve MDP problems by deriving no-regret learning from the Blackwell strategy, which provides an alternative to dynamic programming for MDPs. Specifically, we have discussed one simple Blackwell strategy, regret matching, and shown how it can be incorporated into both offline planning methods (e.g., Blackwell value iteration) and online learning schemes (e.g., Blackwell $Q$-learning). Both are provably convergent. Numerical experiments corroborate these results. As future work, we would like to extend our Blackwell approachability-based idea to online (adversarial) MDPs \cite{mansour09onlinemdp} and multi-agent systems, where the environment is not stationary from any player's perspective, hence imposing difficulties on applying dynamic programming. \bibliographystyle{ieeetr}
\subsubsection{Steady Simulation} \label{sssec:steady} To validate our formulation for these settings, we perform -- in a first step -- a flow simulation through a straight circular duct. The results are collected in Figure~\ref{fig:arteryPoiseuille}. The velocity distribution is visualised with glyphs showing a paraboloid in qualitative agreement with the Poiseuille paraboloid obtained for an incompressible Hagen-Poiseuille flow. Note that the velocity component in the normal direction is prescribed neither on the inflow nor on the outflow boundary. For the pressure-driven incompressible flow through a straight circular pipe, there is a well-known analytical solution for the velocity component in the axial direction $u_a$~\cite{white2006viscous}, which reads \begin{equation} u_a = -\frac{1}{4\mu} \ddx{p}{x} \left(r_0^2 - r^2 \right) = \SI{138.9}{\meter \per \second} \left(1 - \left(\frac{y}{\SI{5}{\milli \meter}}\right)^2 \right) \end{equation} for the considered configuration. Figure~\ref{fig:poiseuilleProfile} shows that the compressible flow solutions are flatter than $u_a$. The simulation is performed on a coarse, medium, and fine mesh, with \num{111744}, \num{206568}, and \num{652090} tetrahedral elements, respectively. The slight variation of the centerline axial velocity---\SI{122.03}{\meter\per\second} on the coarse mesh (3.1\% smaller than fine), \SI{124.91}{\meter\per\second} on the medium mesh (0.8\% smaller than fine), and \SI{125.98}{\meter\per\second} on the fine mesh---indicates that the coarse mesh resolution is sufficient to obtain a solution within the range of engineering accuracy.
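As a quick sanity check (our addition, not part of the original workflow), the analytical profile and the quoted mesh deviations can be reproduced in a few lines of Python, using the peak value of \SI{138.9}{\meter\per\second} and the \SI{5}{\milli\meter} radius from the equation above:

```python
def u_axial(y_mm, u_max=138.9, r0_mm=5.0):
    """Analytical Hagen-Poiseuille axial velocity (m/s) at radial
    position y (in mm) for the configuration given in the text."""
    return u_max * (1.0 - (y_mm / r0_mm) ** 2)

# centerline axial velocities reported for the three meshes (m/s)
u_fine = 125.98
for name, u in [("coarse", 122.03), ("medium", 124.91)]:
    deviation = 100.0 * (u_fine - u) / u_fine
    print(f"{name} mesh: {deviation:.1f}% below the fine-mesh value")
```

This reproduces the 3.1\% and 0.8\% deviations quoted above and shows that the compressible solutions stay well below the incompressible peak value.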
\begin{figure} \subfloat[Velocity distribution in circular pipe\label{fig:poiseuilleVel}]{ \includegraphics[width=0.53\textwidth,trim={0cm 5cm 10cm 10cm},clip]{./Pics/PoiseuilleVel} } \subfloat[Velocity in axial direction at $x=\SI{2}{\centi \meter}$, $z = \SI{0}{\centi \meter}$ \label{fig:poiseuilleProfile}]{ \includegraphics{Drawings/ArteryVelo} } \caption{Compressible Poiseuille flow in circular pipe.} \label{fig:arteryPoiseuille} \end{figure} \subsubsection{Transient Simulation} \label{sssec:transient} As a second step, we perform a transient simulation on a straight duct with approximately circular cross-section. The starting point for the SST mesh generation is an unstructured tetrahedral $x$-$y$-$t$-mesh. The mesh has a temporal resolution comparable to the one shown in Figure~\ref{fig:arteryMesh}, but the influence of the clamp is excluded for now. Next, the tetrahedral mesh is extruded in $z$-direction, such that the cross-section in the $y$-$z$-plane forms a square. In $z$-direction, 14 nodes are added during the extrusion, such that the spatial resolution of the resulting mesh is comparable to the coarse pipe mesh discussed above. To obtain the approximately circular cross-section, we perform the 4DEMUM with the space-time coordinates $(x,y,z,t)$ identified with $(x_1, x_2, x_3, x_4)$. Further, the extruded mesh is shifted and scaled, such that $x_1 \in [-6, 6], \, x_2 \in [-1,1], \, x_3 \in [-1,1], \, x_4 \in [-1,1]$. On the pentatope mesh boundary $\partial Q_\#^D$, we prescribe the displacements \begin{equation} \label{eq:artDisp} \mathbf{d}_D = 0.9 \left[ \begin{array}{c} 0 \\ \left( \sqrt{1-\frac{x_3^2}{2}} -1\right) x_2\\ \left( \sqrt{1-\frac{x_2^2}{2}} -1\right) x_3\\ 0 \end{array} \right], \end{equation} which map the square cross-section in the $x_2$-$x_3$-plane onto an approximately circular shape (see Figure~\ref{fig:perturbedVel}).
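The displacement \eqref{eq:artDisp} can be read as a damped version of the elliptical square-to-disk mapping (this interpretation is ours, not stated in the text); a short Python sketch shows its effect on points of the square cross-section:

```python
import math

def displaced(x2, x3, prefactor=0.9):
    """Apply the boundary displacement of Eq. (artDisp) to a point
    (x2, x3) of the square cross-section [-1, 1]^2 and return the
    displaced coordinates."""
    d2 = prefactor * (math.sqrt(1.0 - x3 ** 2 / 2.0) - 1.0) * x2
    d3 = prefactor * (math.sqrt(1.0 - x2 ** 2 / 2.0) - 1.0) * x3
    return x2 + d2, x3 + d3

# edge midpoints stay on the unit circle; corners land slightly outside it
for point in [(1.0, 0.0), (1.0, 1.0)]:
    y2, y3 = displaced(*point)
    print(point, "-> radius", math.hypot(y2, y3))
```

With prefactor 1 the corners would land exactly on the unit circle; with the factor 0.9 the corner radius is about 1.04, i.e., the cross-section is circular to within roughly 4\%.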
In a final step before the transient simulation, the mesh is shifted and rescaled such that $x \in [-\SI{3}{\centi\meter}, \SI{3}{\centi\meter}], \, y \in [-\SI{0.5}{\centi\meter},\SI{0.5}{\centi\meter}], \, z \in [-\SI{0.5}{\centi\meter},\SI{0.5}{\centi\meter}], \, t \in [ \SI{0}{\second} ,\SI{1}{\second}]$. The shifts are performed to allow for relatively simple expressions in the boundary conditions of 4DEMUM, and at the same time allow the fluid simulation to start at $t=0$. In this second simulation, a transient feature is introduced by ramping up the inflow pressure from the initial value $p_{out}$ to $p_{in}$ over the first \SI{0.5}{\second}. For the second \SI{0.5}{\second}, the pressure value is kept constant as shown in Figure~\ref{fig:presEvo}. The flow velocity at the center of the inflow (Figure~\ref{fig:veloEvo}) closely follows the temporal evolution of the pressure value. Note that the computed flow velocity is slightly larger than zero at $t=0$. This is in line with our numerical formulation, which weakly enforces the initial condition (compare Equation~\eqref{eq:weak}). However, the transient nature of the problem is properly captured in the computed flow field. \begin{figure} \centering \subfloat[Prescribed inflow pressure\label{fig:presEvo}]{\includegraphics{Drawings/InflowPressure}} \subfloat[Velocity evolution at $x=\SI{-3}{\centi \meter}$, $y = \SI{0}{\centi \meter}$, $z = \SI{0}{\centi \meter}$ \label{fig:veloEvo}]{\includegraphics{Drawings/InflowVelo}} \caption{Transient pressure-driven compressible flow in approximately circular pipe.} \label{fig:arteryTrans} \end{figure} With this test case, we also want to explain our choice of an approximately circular cross-section. A perfectly circular cross-section introduces an expected complication in 4DEMUM, i.e., the elements formerly in the corners of the square cross-section attain very large dihedral angles and eventually lead to a tangled mesh. 
We circumvent this problem by introducing the prefactor in Equation~\eqref{eq:artDisp} to obtain a mesh with an approximately circular cross-section in the $y$-$z$-plane. The ``missing 10\%'' towards the perfectly circular cross-section has hardly any influence on the flow field, as shown in Figure~\ref{fig:arteryPoiseuilleTrans}. A more quantitative comparison is presented in Figure~\ref{fig:arteryComp}. The parabolic velocity profile in the radial direction (Figure~\ref{fig:arteryCompProfile}) as well as the linear pressure decay along the pipe axis (Figure~\ref{fig:arteryCompPressure}) are obtained independently of the approximation of the circular cross-section. Therefore, we consider an approximately circular cross-section of the artery in the following. \begin{figure} \centering \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0cm},clip]{Pics/ArteryVelocityLegend}\\ \subfloat[Circular pipe\label{fig:circularVel}]{ \includegraphics[width=0.3\textwidth,trim={2cm 2cm 2cm 2cm},clip]{Pics/ArteryVeloOutflow} } \subfloat[Approximately circular pipe\label{fig:perturbedVel}]{ \includegraphics[width=0.3\textwidth,trim={2cm 2cm 2cm 2cm},clip]{Pics/ArteryNoClampVeloOutflow} } \caption{Comparison of velocity distribution at $x = \SI{3}{cm}$ between approximated and circular pipe.} \label{fig:arteryPoiseuilleTrans} \end{figure} \begin{figure} \centering \includegraphics{Drawings/ArteryCompLegend}\\ \subfloat[Velocity $u_1$ at $x=\SI{2}{\centi \meter}$, $z = \SI{0}{\centi \meter}$, $t = \SI{0.9}{\second}$ \label{fig:arteryCompProfile}]{ \includegraphics{Drawings/ArteryCompProfile} } \quad \subfloat[Pressure at centre line $y=\SI{0}{\centi \meter}$, $z = \SI{0}{\centi \meter}$, $t = \SI{0.9}{\second}$ \label{fig:arteryCompPressure}]{ \includegraphics{Drawings/ArteryCompPressure} } \caption{Comparison between circular pipe, approximately circular pipe, and artery geometry.} \label{fig:arteryComp} \end{figure} \subsubsection{Transient Simulation with Topology Change}
\label{sssec:topology} \begin{figure} \centering \includegraphics{Drawings/Artery3DMesh} \caption{Clamped artery test case. $x$-$y$-$t$-mesh.} \label{fig:arteryMesh} \end{figure} As a third and final step, we consider the transient simulation with a topology change of the spatial computational domain. We now include the topology change caused by the clamp in the $x$-$y$-$t$-mesh shown in Figure~\ref{fig:arteryMesh}. The further pentatope mesh generation steps of extrusion, connectivity generation, and elastic deformation are performed as in the previous example; the boundary displacements are given in Equation~\eqref{eq:artDisp}. The subsequent finite element flow simulation was performed on 240 cores using a distributed memory parallelisation based on MPI. It took 16 minutes of wall clock time. Figure~\ref{fig:artery} presents the simulation results at $t =$ \SI{0.5}{\second}, \SI{0.7}{\second}, and \SI{0.9}{\second}. For the fully clamped case, we obtain two separate domains and negligible flow velocities (Figure~\ref{fig:art3P5}). In the absence of flow, the temperature distribution in the fluid is an interpolation of the temperature prescribed on the domain boundaries (Figure~\ref{fig:art3T5}). Note that on the rightmost boundary ($x=\SI{3}{\centi \meter}$) no temperature boundary condition is prescribed because this part is an outflow boundary during most of the simulation. However, from $t=\SI{0.5}{\second}$ until $t=\SI{0.7}{\second}$, back-flow across this boundary introduces a small disturbance in the temperature field. When the artery is reopened to roughly half of the total diameter, a strong pressure gradient across the clamp region accelerates the flow in this area (Figure~\ref{fig:art3P7}). Furthermore, at $t=\SI{0.7}{\second}$, the temperature distribution is strongly influenced by the flow field as well as by the cooler clamp (Figure~\ref{fig:art3T7}).
In the open configuration (Figures~\ref{fig:art3P9} and~\ref{fig:art3T9}), we observe a linear pressure decrease from the inflow to the outflow and a parabolic velocity profile everywhere except for the clamp region. The comparison to the velocity profile of the approximately circular pipe shows that the flow speed computed for the clamped artery is lower at $x=\SI{2}{\centi \meter}$, $z=\SI{0}{\centi \meter}$, $t=\SI{0.9}{\second}$ (Figure~\ref{fig:arteryCompProfile}). This is to be expected, as the same pressure gradient has to overcome the additional obstacle of the clamp region (Figure~\ref{fig:arteryCompPressure}). \renewcommand{\myW}{0.47\textwidth} \begin{figure} \centering \includegraphics[width=0.35\textwidth,trim={23.5cm 26cm 22.5cm 10cm},clip]{Pics/ArteryPressureLegend}\qquad \includegraphics[width=0.35\textwidth,trim={23.5cm 26cm 22.5cm 10cm},clip]{Pics/ArteryTemperatureLegend} \subfloat[Pressure distribution on velocity glyphs at $t=0.5\,$s \label{fig:art3P5}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryVeloT05}}\quad \subfloat[Temperature at $z=0$, $t=0.5\,$s \label{fig:art3T5}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryTempT05}} \subfloat[Pressure distribution on velocity glyphs at $t=0.7\,$s \label{fig:art3P7}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryVeloT07}}\quad \subfloat[Temperature at $z=0$, $t=0.7\,$s \label{fig:art3T7}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryTempT07}} \subfloat[Pressure distribution on velocity glyphs at $t=0.9\,$s \label{fig:art3P9}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryVeloT09}}\quad \subfloat[Temperature at $z=0$, $t=0.9\,$s \label{fig:art3T9}]{\includegraphics[width=\myW,trim={0cm 3cm 0cm 0cm},clip]{Pics/ArteryTempT09}} \caption{Pressure, velocity, and temperature in the fully clamped, partially, and fully opened artery.} \label{fig:artery} \end{figure} \section{Introduction and Problem
Definition} \label{sec:intro} \input{intro.tex} \section{Four-Dimensional Elastic Mesh Update Method} \label{sec:method} \input{method.tex} \section{Simulation Workflow} \label{sec:workflow} \input{workflow.tex} \section{Application Examples} \label{sec:examples} \input{examples.tex} \subsection{Transient Gas Flow through Valve} \label{ssec:valve} \input{valve.tex} \subsection{Flow Inspired by Clamped Artery} \label{ssec:artery} \input{artery.tex} \section{Conclusion and Outlook} \label{sec:conclusion} \input{conclusion.tex} \input{main.bbl} \end{document}
\section{Introduction} \label{section1} Given a sequence of harmonic maps with uniformly bounded energy, the Uhlenbeck Compactness Theorem reveals the ``bubble phenomenon'', in which bubbles occur at points of energy concentration. Later, Parker \cite{P} gave a bubble tree extension describing how a given sequence of harmonic maps converges over bubbles. For a fixed domain, this result is regarded as a complete answer to the problem because \begin{enumerate} \item it shows the energy identity and zero-distance bubbling, and \item it specifies where each bubble is located, including bubbles on the bubbles. \end{enumerate} However, when the domain varies, the above properties no longer hold in general. The difficulty arises when the complex structure (or metric) degenerates, and there have been several studies in this direction. For example, Chen-Tian \cite{CT} focused on energy minimizing harmonic maps and showed a compactness result, together with connecting geodesics. Zhu \cite{Z} studied conditions under which the energy identity and zero-distance bubbling fail, and Chen-Li-Wang \cite{CLW} showed a length formula for the neck, but their conditions contain a non-geometric quantity. There are other ways to account for the change of complex structure, for example, working in Teichm{\"u}ller space. This space is restricted to constant curvature metrics and hence seems less appropriate for describing bubbles and necks than the Deligne-Mumford moduli space. In this paper we first define convergence of maps (Definition \ref{conv fam}) in terms of families of complex curves using the Deligne-Mumford moduli space. For each energy concentration point, we add marked points to build a bigger family in which the convergence becomes better in terms of {\it residual energy} (Definition \ref{RE}). Details of this procedure will be explained in Section \ref{section4}. Note that energy concentration may occur at a regular point or at a nodal point, where the latter corresponds to a degeneration of the complex structure.
Since the energy identity does not always hold when the complex structure degenerates, we need the more refined notion of a {\it regular node} (Definition \ref{reg node def}). Our main theorem can now be stated in terms of regular nodes. \begin{theorem}\label{main theorem} Let $f_{k} : C_{k} \rightarrow X$ be a sequence of harmonic maps with uniformly bounded energy defined on smooth $(g,n)$ curves. Then there is a subsequence $n_{k}$ and a way of marking points $P_{k}$ on $C_{k}$ such that the corresponding sequence $f_{n_{k}} : C'_{n_{k}} = (C_{n_{k}},P_{n_{k}}) \rightarrow X$ converges in $C^{1}$ to some $f_{0} : C_{0} \rightarrow X$ off the singular set $S$ (possibly empty), where all points in $S$ are non-regular nodal points. Furthermore, $f_{0}$ is harmonic on the closure of each component of $C_{0} \setminus S$ separately. \end{theorem} With an additional assumption, we can make the singular set $S$ empty. \begin{cor} \label{main2} Under the assumptions of Theorem \ref{main theorem}, assume additionally that $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$ and that all nodes in $C_{0}$ are regular. Then the singular set $S$ in the convergence of Theorem \ref{main theorem} is empty. Furthermore, the energy identity holds and the image of $f_{0}$ is connected. \end{cor} Parker's theorem is now a corollary of the main theorem. \begin{cor}\label{fixed domain} Let $\Sigma$ be a smooth Riemann surface of genus $g$ and let $f_{k} : \Sigma \rightarrow X$ be a sequence of harmonic maps with uniformly bounded energy. Then Corollary \ref{main2} applies with $C_k = C_0 = \Sigma$. \end{cor} Another corollary concerns the case $g=0$, that is, when all domains are the $n$-marked sphere $S^2$. \begin{cor}\label{sphere} Let $C_{k}$ be 2-spheres with $n$ marked points and let $f_{k} : C_{k} \rightarrow X$ be a sequence of harmonic maps with uniformly bounded energy. Then Corollary \ref{main2} applies. \end{cor} Proofs will be given in Section \ref{section7}.
The remainder of this paper is organized as follows. Section \ref{section2} deals with basic properties of harmonic maps and the Deligne-Mumford moduli space. In Section \ref{section3}, we develop the necessary convergence terminology. Section \ref{section4} focuses on neck analysis, and Section \ref{section5} introduces regular nodes. In Section \ref{section6} we explain the procedure of building a bigger family by adding appropriate marked points. Finally, the last section contains the proof of the main theorem. The author thanks his advisor, Thomas H. Parker, for valuable advice, comments, and inspiration regarding this paper. \section{Background} \label{section2} \subsection{Harmonic Maps} Let $(\Sigma, g)$ and $(X,h)$ be compact Riemannian manifolds with Riemannian metrics $g$ and $h$, where $\dim(\Sigma) = 2$. We use the same letter $g$ to denote the Riemannian metric of $\Sigma$ and the genus of $\Sigma$ when there is no risk of confusion. A map $f : (\Sigma,g) \rightarrow (X,h)$ is harmonic if it is a critical point of the energy functional \begin{equation}\label{energy func} E(f) = \mathcal{L}(f)= \frac{1}{2}\int_{\Sigma}\lvert df \rvert ^{2} dvol_{g}. \end{equation} For a map $f_{k}$ and the metric $g$, we define the corresponding energy density measure $e(f_{k})$ on $\Sigma$ by \begin{equation}\label{e(f)} e(f_{k}) = \frac{1}{2}\lvert df_{k}\rvert^{2} dvol_{g}. \end{equation} We summarize some important lemmas regarding harmonic maps. Here we follow \cite{SU} and \cite{P}. For more results, see \cite{EL} or \cite{EL2}. \begin{theorem} Let $f: (\Sigma,g) \rightarrow (X,h)$ be harmonic.
Then the following hold: \begin{enumerate} \item ($\varepsilon$-regularity) There is a constant $\varepsilon_{0}>0$, depending only on the second fundamental form of the embedding $X \hookrightarrow \mathbb{R}^{N}$, such that if $f$ is a harmonic map on a disk $D$ and $E_{D}(f) = \frac{1}{2}\int_{D}\lvert df \rvert ^{2} dvol_{g} < \varepsilon_{0}$, then for any $D' \subset \subset D$, \begin{equation} \lVert df \rVert_{W^{1,p}(D')} \leq C \lVert df \rVert_{L^{2}(D)}, \end{equation} where $1<p<\infty$ and $C$ is a constant depending only on $p$, $D'$ and the geometry of $X$. \item (Energy gap) There is a constant $\varepsilon'_{0}>0$, depending only on $(X,h)$, such that if $f$ is a smooth harmonic map on a compact domain $\Sigma$ satisfying $E(f) = \frac{1}{2}\int_{\Sigma}\lvert df\rvert ^{2} dvol_{g} < \varepsilon'_{0}$, then $f$ is constant. \item (Removable singularity) If $f : D \setminus \{0\} \rightarrow X$ is a $C^{1}$ harmonic map with $E(f) < \varepsilon_{0}$ on a punctured disk $D \setminus \{0\}$, then $f$ extends to $D$ in $C^{1}$. \item ($C^{1}$-convergence) There is a constant $\varepsilon_{0}>0$ such that if $\{f_{k}\}$ is a family of harmonic maps on $D$ satisfying $E_{D}(f_{k}) < \varepsilon_{0}$ for all $k$, then there is a subsequence of $\{f_{k}\}$ that converges in $C^{1}$. \end{enumerate} \end{theorem} Using these, Uhlenbeck proved the following compactness theorem. \begin{theorem}\label{cpt} (Uhlenbeck Compactness Theorem, \cite[Theorem 4.4]{SU} or \cite[Lemma 1.2]{P}) Let $\{f_{k}\}$ be a sequence of harmonic maps with uniformly bounded energy.
Then there are at most finitely many points $\{p_{1}, \ldots, p_{l}\}$, called bubble points, a subsequence of $\{f_{k}\}$, and a limit map $f_{\infty} : (\Sigma,g) \rightarrow (X,h)$ such that $f_{k} \rightarrow f_{\infty}$ in $C^{1}$ on any compact set away from $\{p_{1}, \ldots, p_{l}\}$, and \begin{equation}\label{measure conv1} e(f_{k}) \rightarrow e(f_{\infty}) + \sum_{i=1}^{l} m_{i} \delta_{p_{i}} \end{equation} as measures, where $m_{i} \geq \varepsilon'_{0}$. \end{theorem} Parker \cite{P} used iterated renormalizations to construct a so-called ``bubble tree'' to analyze the energy completely, i.e., all the energy comes from either the limit map or the bubbles. \begin{theorem}\label{bubble tree} (Bubble Tree Convergence, \cite[Theorem 2.2]{P}). Under the same assumptions as Theorem \ref{cpt}, there is a subsequence $\{f_{n}\}$ and a bubble tower domain $T = \Sigma \cup \bigcup S_{I}$ such that the renormalized maps \begin{equation*} \{f_{n,I}\} : T \rightarrow X \end{equation*} converge in $W^{1,2} \cap C^{0}$ to a smooth harmonic bubble tree map $\{f_{I}\} : T \rightarrow X$. Moreover, \begin{enumerate} \item (No energy loss) $E(f_{n})$ converges to $\sum E(f_{I})$, and \item (Zero distance bubbling) At each bubble point $x_{J}$ (at any level in the tree), the images of the base map $f_{I}$ and the bubble map $f_{J}$ meet at $f_{I}(x_{J}) = f_{J}(p^{-})$. \end{enumerate} \end{theorem} \subsection{Families of Curves} In this subsection we define a notion of convergence of complex curves $C_{k}$. As in algebraic geometry, it is useful and important to consider not just single curves but {\it families} of curves. There are standard definitions of such families used by algebraic geometers; many details can be found in \cite{ACG}. We will use an equivalent definition that is more in the spirit of differential geometry. For details, see \cite{RS}.
\begin{definition} A $(g,n)$ curve is defined to be a connected $n$-marked nodal curve of genus $g$, that is, a complex curve $C$ of genus $g$ with at most nodal singularities together with a sequence $\{x_{1}, \ldots, x_{n}\}$ of distinct points of $C$. A $(g,n)$ curve is said to be stable if $2g-2+n>0$. \end{definition} \begin{definition}\label{nodal family} (\cite{RS} 4.2,4.7) An $n$-marked nodal family is a surjective proper holomorphic map $\pi : \mathcal{C} \rightarrow B$ between connected complex manifolds, together with disjoint submanifolds $\mathcal{N},S_{1}, \ldots, S_{n}$ of $\mathcal{C}$, such that \begin{enumerate} \item $\dim_{\mathbb{C}}(\mathcal{C}) = \dim_{\mathbb{C}}(B) + 1$, \item $\pi|_{S_{i}}$ maps $S_{i}$ diffeomorphically onto $B$, \item $\mathcal{N} = \{p \in \mathcal{C} : d\pi(p) \textrm{ not surjective}\}$, and \item \sloppy Each critical point $p \in \mathcal{N}$ has a local holomorphic coordinate chart $(x,y,z_{2},\ldots , z_{n})$, called a nodal chart, such that \begin{equation} \pi(x,y,z_{2},\ldots,z_{n}) = (xy,z_{2},\ldots,z_{n}) \end{equation} with respect to a local holomorphic coordinate chart in a neighborhood of $\pi(p)$. \end{enumerate} We call $\mathcal{N}$ the nodal set, and $S_{i}$ the marked sections. \end{definition} \sloppy By the holomorphic Implicit Function Theorem, $p \in \mathcal{C} \setminus \mathcal{N}$ has a local holomorphic coordinate chart $(x,z_{1},z_{2},\ldots,z_{n})$, called a regular chart, such that $\pi(x,z_{1},z_{2},\ldots,z_{n}) = (z_{1},z_{2},\ldots,z_{n})$ with respect to a local holomorphic coordinate chart in a neighborhood of $\pi(p)$. Note that $\mathcal{N}$ intersects each fiber $C_{b} := \pi^{-1}(b)$ in a finite set. For each regular value $b \in B$ of $\pi$ the fiber $C_{b}$ is a compact Riemann surface. For each critical value $b \in B$ of $\pi$ the fiber $C_{b}$ is a nodal curve. Together with an isomorphism between a curve $C$ and a fiber, one can consider a deformation of $C$ as follows.
\begin{definition}\label{deform} (\cite{ACG} 11.2.1, 11.4.2) Let $C$ be a (marked) complex curve (or a nodal curve). A deformation of $C$ is a nodal family $\pi : \mathcal{C} \rightarrow (B,b_{0})$ together with a given isomorphism between $C$ and the central fiber $\pi^{-1}(b_{0})$ that maps marked points to marked sections. \end{definition} We understand a deformation in terms of its germ; hence a restriction of a deformation containing the central fiber is regarded as equivalent to the original one. Now we describe the Deligne-Mumford moduli space. The Deligne-Mumford moduli space of stable $n$-marked nodal curves of genus $g$ is defined by \begin{equation}\label{DM} \overline{\mathcal{M}}_{g,n} := \{ \textrm{ isomorphism classes } [C] \textrm{ of } (g,n) \textrm{ curves }\}. \end{equation} The structure of \eqref{DM} is well-known. Here we point out the properties of \eqref{DM} that will be used in this paper. \begin{itemize} \item Let $\mathcal{M}_{g,n}$ be the moduli space of stable $n$-marked smooth curves of genus $g$. Then $\overline{\mathcal{M}}_{g,n}$ is its compactification. \item $\overline{\mathcal{M}}_{g,n}$ is a complex projective variety, and has an orbifold structure. \item There is a projection map, called the forgetful map, given by \begin{equation} \Phi : \overline{\mathcal{M}}_{g,n+1} \rightarrow \overline{\mathcal{M}}_{g,n} \end{equation} which forgets the last marked point and collapses unstable components to points. \end{itemize} There are several different ways to construct $\overline{\mathcal{M}}_{g,n}$. Here we view a neighborhood of $[C_{0}]$ in $\overline{\mathcal{M}}_{g,n}$ as a quotient of the Kuranishi family of $C_{0}$ by the finite group $Aut(C_{0})$ of automorphisms of the central fiber $C_{0}$. If $[C_{k}] \rightarrow [C_{0}]$ in $\overline{\mathcal{M}}_{g,n}$ with $Aut(C_{0})$ nontrivial, then the quotient has a singularity at $C_{0}$.
Instead, we use the Kuranishi family $\pi : \mathcal{C} \rightarrow (B,0)$ of a representative $C_{0}$ of $[C_{0}]$ without taking the quotient. Then the total space $\mathcal{C}$ is a smooth manifold, but $Aut(C_{0})$ acts by transformations of the fibers, which destroys the uniqueness of the embedding of $C_{k}$ into the family. \begin{definition} \label{conv of curve} For a sequence of $(g,n)$ curves $C_{k}$, we say $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$ if $\pi : \mathcal{C} \rightarrow (B,0)$ is a deformation of $C_{0}$ with an isomorphism $\varphi : C_{0} \rightarrow \pi^{-1}(0)$ and isomorphisms $\varphi_{k} : C_{k} \rightarrow \pi^{-1}(b_{k})$ with $b_{k} \rightarrow 0$. Since $\varphi$ and $\varphi_{k}$ are isomorphisms, we identify $C_{k}$ with $\pi^{-1}(b_{k})$ and $C_{0}$ with $\pi^{-1}(0)$. \end{definition} \begin{remark} There is no uniqueness statement for the choice of the sequence $b_{k}$ and the isomorphisms $\varphi$ and $\varphi_{k}$. However, all of the convergence statements below hold for any choice. \end{remark} \section{Convergence of Maps} \label{section3} We now consider a sequence of $C^{l}$ (or $W^{l,p}$) maps $f_{k} : C_{k} \rightarrow X$ with $C_{k} \rightarrow C_{0}$ in a family $\pi : \mathcal{C} \rightarrow (B,0)$. In a regular chart of the family, we can give local coordinates on $C_{k}$ by $(x,b_{k})$ and on $C_{0}$ by $(x,0)$. Also, assume that $E(f_{k}) \leq E_{0} < +\infty$ for all $k$. \begin{definition}\label{znp} Let $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$ and let $p$ be a node of $C_{0}$.
We say a sequence of maps $f_{k} : C_{k} \rightarrow X$ satisfies the zero neck property at $p$ if the following holds: For any $\varepsilon>0$, there is $\delta_{1}>0$ such that for any $0<\delta<\delta_{1}$ and for all $k$ sufficiently large, \begin{align*} E(f_{k}, B(p,\delta) \cap C_{k}) &\leq \varepsilon\\ \mathrm{diam}(f_{k}( B(p,\delta) \cap C_{k})) &\leq \varepsilon \end{align*} where $B(p,\delta)$ is a ball of radius $\delta$ centered at $p$ in the family $\mathcal{C}$. \end{definition} \begin{definition} \label{conv fam} We say $f_{k}$ converges to $f_{0} : C_{0} \rightarrow X$ off the set $S \subset C_{0}$ in $C^{l}$ (or $W^{l,p}$) if \begin{enumerate} \item $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$, \item for any node $p \notin S$, $f_k$ satisfies the zero neck property at $p$, and \item in every regular chart away from $S$, the projected map \begin{equation} \label{tilde f} \tilde{f}_{k}(x) := f_{k} (x,b_{k}) \end{equation} converges to $\tilde{f}_{0}(x) := f_{0}(x,0)$ in $C^{l}$ (or $W^{l,p}$). \end{enumerate} The set $S$ is called the singular set. \end{definition} \begin{lemma} (Energy Identity and Connected Image) Suppose $f_{k} : C_{k} \rightarrow X$ converges to $f_{0} : C_{0} \rightarrow X$ in $C^{1}$ with empty singular set. Then \begin{equation} \lim_{k \rightarrow \infty}E(f_{k}) = E(f_{0}). \end{equation} Furthermore, if each $C_{k}$ is connected, then the image of $f_{0}$ is connected. \end{lemma} \begin{proof} Pick any regular point $p \in C_{0}$ and consider a regular chart $\pi : U \times B \rightarrow B$ with $p=(0,0)$. The projection map $\pi_{k} : U \times \{b_{k}\} \rightarrow U$ given by $\pi_{k}(x,b_{k}) = x$ is holomorphic, so the energy of $f_{k}$ over $U \times \{b_{k}\}$ is the same as the energy of $\tilde{f}_{k}$ over $U$. Since $f_{k}$ converges to $f_{0}$ in $C^{1}$, $\lim_{k \rightarrow \infty}E(f_{k},U \times \{b_{k}\}) = E(f_{0},U)$.
Also, because of the zero neck property, we have \begin{equation*} \lim_{\delta \rightarrow 0} \lim_{k \rightarrow \infty} E(f_{k}, B(q,\delta) \cap C_{k}) = 0 \end{equation*} for any node $q$. Cover $C_{0}$ by finitely many regular charts $\{U_{i} \times B\}$ and we have \begin{equation*} \lim_{k \rightarrow \infty}E(f_{k}) = E(f_{0}). \end{equation*} Connectedness comes from the diameter condition of the zero neck property. \end{proof} Using this new definition of convergence, the Uhlenbeck Compactness Theorem \ref{cpt} can be rewritten as follows. Note that in a regular chart $\tilde{f}_{k}(x) := f_{k}(x,b_{k})$ as in \eqref{tilde f}. \begin{lemma}\label{loc conv} (Local convergence) Suppose a sequence of smooth $(g,n)$ curves $C_{k}$ converges to $C_{0}$ in a family $\mathcal{C}$ and let $N$ be the nodal set of $C_{0}$. Let $f_{k} : C_{k} \rightarrow X$ be a sequence of harmonic maps with uniformly bounded energy. Then there is a subsequence $n_{k}$, a finite set $Q = \{p_{1}, \ldots, p_{l}\} \subset C_{0} \setminus N$, and a harmonic map $f_{0} : C_{0} \rightarrow X$ such that $f_{n_{k}}$ converges to $f_{0}$ off the set $S = Q \cup N$ in $C^{1}$. Also, near $p \in Q \subset C_{0}$, we have \begin{equation}\label{meas conv} e(\tilde{f}_{n_{k}}) \rightarrow e(f_{0}) + m_{p}\delta_{p}, \end{equation} where $\delta_{p}$ is a Dirac-delta measure at $p$ and $m_{p} \geq \varepsilon'_{0}$. \end{lemma} \begin{proof} Cover $C_{0} \setminus N$ by finitely many regular charts $\{U_{i} \times B\}$. Since each regular chart is a holomorphic coordinate chart, $\tilde{f}_{k}$ is also a harmonic map over $U_{i}$ for each $i$. By the Uhlenbeck compactness theorem \ref{cpt}, there is a subsequence $n_{k,1}$ such that $\tilde{f}_{n_{k,1}}$ converges on $U_{1} \setminus Q_{1}$ where $Q_{1}$ is a finite set. Apply Theorem \ref{cpt} again to find a subsequence $n_{k,2}$ of $n_{k,1}$ such that $\tilde{f}_{n_{k,2}}$ converges on $U_{2} \setminus Q_{2}$ where $Q_{2}$ is a finite set.
Repeat this process to obtain $n_{k,i}$ for $U_{i}$, and choose the diagonal subsequence $n_{k} = n_{k,k}$. Then $\tilde{f}_{n_{k}}$ converges on $U_{i} \setminus Q_{i}$ for all $i$, where $Q_{i}$ is a finite set. Note that at each point $p \in Q_{i}$, by \eqref{measure conv1}, the energy concentration at $p$ is at least $\varepsilon'_{0}$. Since the total energy is finite, $Q = \cup Q_{i}$ is finite. The measure convergence \eqref{meas conv} comes from \eqref{measure conv1}. \end{proof} The above lemma says that energy loss may occur only at points in $S$. \begin{definition}\label{bubble pt} In Lemma \ref{loc conv}, we call $p \in Q$ a smooth bubble point and $p \in N$ a nodal bubble point. \end{definition} The above lemma addresses only smooth bubble points and says nothing about nodes. So the question is: does $f_k$ satisfy the zero neck property at each node? We will see a sufficient condition for this in Section \ref{section5}. \section{Neck analysis} \label{section4} In this section we deal with sequences of harmonic maps over the neck region. Throughout this section, we will use both nodal and cylindrical charts, described below. Many of the arguments here can be found in \cite{P}, \cite{Z}, or \cite{LW2}. First we set up the terminology. Consider a sequence of harmonic maps $f_{k} : C_{k} \rightarrow X$ defined on a family of stable curves $C_{k}$ with $C_{k} \rightarrow C_{0}$ where $C_{0}$ is a limit curve with a node $p$. For any $\delta>0$, using a nodal chart, we can write $B(p,\delta) \cap C_{k}= \{(x,y) \in \mathbb{C}^{2} : xy = t_{k}, \lvert x \rvert, \lvert y \rvert \leq \delta\}$ where $t_{k} \rightarrow 0$ as $k \rightarrow \infty$. Consider the polar coordinate $x = (r,\theta)$ and take the logarithm $(r,\theta) \rightarrow (t,\theta)$, where $t = \ln(r/\sqrt{t_{k}})$.
This gives a cylindrical coordinate over the neck which is conformal to the original nodal chart, given by \begin{equation} \label{cyl coord} \phi : [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1} \rightarrow B(p,\delta) \cap C_{k} \end{equation} where $\phi(t,\theta) = (x,y) = (\sqrt{t_{k}} e^{t+i \theta}, \sqrt{t_{k}} e^{-t-i \theta})$ and $T_{k}^{\delta} = \ln(\delta/\sqrt{t_{k}})$. Note that the pullback of the metric to the cylinder is conformally equivalent to $dt^{2}+d\theta^{2}$, so we can consider the flat metric over the cylinder. For simplicity, we often denote $f_{k} \circ \phi$ by $f_k$, or simply by $f$. By \cite[Lemma 3.3]{Z}\label{alpha}, the quantity \begin{equation*} \alpha(t) := \frac{1}{2} \int_{\{t\} \times S^{1}} \lvert f_{t} \rvert^{2} - \lvert f_{\theta} \rvert^{2} d\theta \end{equation*} is a constant $\alpha$, independent of $t$. Hence, the energy can be written as \begin{equation} \label{energy theta} E(f) = \frac{1}{2} \iint_{[-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1}} \lvert f_{t} \rvert^{2} + \lvert f_{\theta} \rvert^{2} dt\,d\theta = 2T_{k}^{\delta} \alpha + \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \Theta dt \end{equation} where \begin{equation*} \Theta(t) := \int_{\{t\} \times S^{1}} \lvert f_{\theta} \rvert^{2} d\theta. \end{equation*} Moreover, by \cite[Lemma 3.1]{Z}, there exists $\varepsilon_{0}''>0$ such that if $f : [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1} \rightarrow X$ is a harmonic map and $E(f) \leq \varepsilon_{0}''$, then \begin{equation*} \Theta'' \geq \Theta > 0 \end{equation*} and for any $-T_{k}^{\delta} \leq T_{1} < T_{2} \leq T_{k}^{\delta}$, we have \begin{align*} \int_{T_{1}}^{T_{2}} \Theta dt &\leq 2(\Theta(T_{1}) + \Theta(T_{2})),\\ \int_{T_{1}}^{T_{2}} \sqrt{\Theta} dt &\leq 4(\sqrt{\Theta(T_{1})} + \sqrt{\Theta(T_{2})}). \end{align*} From the above, the energy is controlled by $\alpha$ and the boundary values of $\Theta$. We need one more quantity.
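To illustrate the constancy of $\alpha$ and the convexity of $\Theta$, consider the following explicit harmonic map into a flat target (this toy example is ours and is only illustrative; it need not satisfy the small-energy hypothesis of \cite[Lemma 3.1]{Z}):

```latex
Let $f : [-T,T] \times S^{1} \rightarrow \mathbb{R}^{3}$ be given by
\begin{equation*}
f(t,\theta) = (e^{t}\cos\theta,\, e^{t}\sin\theta,\, ct), \qquad c \in \mathbb{R},
\end{equation*}
which is harmonic for the flat metric since each component satisfies
$f_{tt} + f_{\theta\theta} = 0$. Here
\begin{equation*}
\lvert f_{t} \rvert^{2} = e^{2t} + c^{2}, \qquad
\lvert f_{\theta} \rvert^{2} = e^{2t},
\end{equation*}
so
\begin{equation*}
\alpha(t) = \frac{1}{2}\int_{\{t\} \times S^{1}}
\lvert f_{t} \rvert^{2} - \lvert f_{\theta} \rvert^{2} \, d\theta = \pi c^{2}
\end{equation*}
is indeed independent of $t$, while $\Theta(t) = 2\pi e^{2t}$ satisfies
$\Theta'' = 4\Theta \geq \Theta > 0$.
```

In this example $\alpha$ vanishes exactly when $c = 0$, i.e., when the map has no net stretching along the cylinder axis.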
Let the average length of the neck be given by: \begin{equation*} \overline{L} := \frac{1}{2\pi}\iint_{[-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1}} \lvert f_{t} \rvert dtd\theta. \end{equation*} Then, by the Cauchy-Schwarz inequality, \begin{align*} \overline{L} &\leq \frac{1}{2\pi} \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \sqrt{2\pi} \left( \int_{S^{1}}\lvert f_{t} \rvert^{2} d\theta \right)^{1/2} dt \\ &\leq \frac{1}{2\pi} \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \sqrt{2\pi} \left[ \left( \int_{S^{1}}\lvert f_{t} \rvert^{2} - \lvert f_{\theta} \rvert^{2} d\theta \right)^{1/2} + \left( \int_{S^{1}} \lvert f_{\theta} \rvert^{2} d\theta \right)^{1/2} \right] dt \\ &= \frac{2}{\sqrt{\pi}} \sqrt{\alpha}T_{k}^{\delta} + \frac{1}{\sqrt{2\pi}} \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \sqrt{\Theta}dt, \end{align*} where in the last step we used $\int_{S^{1}}\lvert f_{t} \rvert^{2} - \lvert f_{\theta} \rvert^{2} d\theta = 2\alpha$. Now we state and prove the main proposition of the section. \begin{prop} \label{gap2} Suppose $f_{k} : [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1} \rightarrow X$ is a sequence of harmonic maps and $E(f_{k}) \leq \varepsilon_{0}''$ for all $k$. For any $\varepsilon>0$, there is $\delta_{1}>0$ such that for any $0<\delta<\delta_{1}$ and for all $k$ sufficiently large, \begin{align*} E(f_{k}, [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1}) &\leq 2T_{k}^{\delta} \alpha + \varepsilon,\\ \mathrm{diam}(f_{k}( [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1})) &\leq C \sqrt{\alpha} T_{k}^{\delta} + \varepsilon. \end{align*} \end{prop} \begin{proof} Consider the nodal chart of the neck $B(p,\delta) \cap C_{k}$ and $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$. Away from the node $p$ and after choosing a subsequence, $f_{k} \rightarrow f_{0}$ in $C^{1}$ for some $f_{0} : C_{0} \setminus \{p\} \rightarrow X$. For simplicity, denote $B_{\delta} = B(p,\delta)$.
We first claim that, for any $\varepsilon>0$, there is $\delta_1>0$ such that for any $0<\delta<\delta_1$ and for all $k$ sufficiently large, \begin{equation} \label{theta eq3} \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \Theta dt \leq \varepsilon, \qquad \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \sqrt{\Theta} dt \leq \varepsilon, \qquad \textrm{ and } \qquad \int_{\{t\} \times S^{1}} \lvert f_{\theta} \rvert d\theta \leq \varepsilon. \end{equation} Fix $\varepsilon>0$. Choose $\delta_{1}>0$ such that for all $\delta < \delta_1$, \begin{enumerate} \item $E(f_0, (B_{\delta} \setminus B_{\delta/2}) \cap C_0) \le \varepsilon/2$, and \item the lengths of $f_{0}(\lvert x \rvert = \delta)$ and $f_{0}(\lvert y \rvert = \delta)$ are less than $\varepsilon/2$. \end{enumerate} Fix such a $\delta$; then for all $k$ sufficiently large, \begin{enumerate} \item $E(f_{k},(B_{\delta} \setminus B_{\delta/2}) \cap C_{k}) \le \varepsilon$, and \item $\displaystyle \int_{\{\pm T_{k}^{\delta}\} \times S^{1}} \lvert f_{\theta} \rvert d\theta \leq \varepsilon$. \end{enumerate} In the cylindrical domain, the first energy bound means that \begin{equation*} E(f_{k},(B_{\delta} \setminus B_{\delta/2}) \cap C_{k}) = \int_{-T_{k}^{\delta}}^{-T_{k}^{\delta}+\ln2} \int_{S^{1}} \lvert f_{t} \rvert^{2} + \lvert f_{\theta} \rvert^{2} d\theta\, dt + \int_{T_{k}^{\delta} - \ln2}^{T_{k}^{\delta}} \int_{S^{1}}\lvert f_{t} \rvert^{2} + \lvert f_{\theta} \rvert^{2} d\theta\, dt \leq \varepsilon. \end{equation*} Now fix $\delta$ and $k$. By the mean value theorem for integrals, choose $a \in (-T_{k}^{\delta},-T_{k}^{\delta}+\ln2), b \in (T_{k}^{\delta}-\ln2, T_{k}^{\delta})$ such that \begin{equation*} \Theta(a) = \frac{1}{\ln2}\int_{-T_{k}^{\delta}}^{-T_{k}^{\delta}+\ln2} \Theta dt, \qquad \Theta(b) = \frac{1}{\ln2}\int_{T_{k}^{\delta}-\ln2}^{T_{k}^{\delta}} \Theta dt.
\end{equation*} Then, \begin{align*} \int_{-T_{k}^{\delta}}^{T_{k}^{\delta}} \Theta dt &= \int_{-T_{k}^{\delta}}^{a} \Theta dt + \int_{b}^{T_{k}^{\delta}} \Theta dt + \int_{a}^{b} \Theta dt\\ &\leq \int_{-T_{k}^{\delta}}^{-T_{k}^{\delta}+\ln2} \Theta dt+ \int_{T_{k}^{\delta}-\ln2}^{T_{k}^{\delta}} \Theta dt + 2 ( \Theta(a) + \Theta(b))\\ &= \left(1 + \frac{2}{\ln2}\right) \left( \int_{-T_{k}^{\delta}}^{-T_{k}^{\delta}+\ln2} \Theta dt+ \int_{T_{k}^{\delta}-\ln2}^{T_{k}^{\delta}} \Theta dt \right) \leq \left(1 + \frac{2}{\ln2}\right) \varepsilon. \end{align*} The second inequality in \eqref{theta eq3} can be obtained in a similar manner. For the last inequality in \eqref{theta eq3}, using $\varepsilon$-regularity, \begin{align*} \int_{\{t\} \times S^{1}} \lvert f_{\theta} \rvert d\theta &\leq \sqrt{2\pi} \sqrt{\Theta(t)} \leq \sqrt{2 \pi} \sqrt{\Theta(\pm T_{k}^{\delta})} = \sqrt{2 \pi} \left(\int_{\{\pm T_{k}^{\delta}\} \times S^{1}} \lvert f_{\theta} \rvert^{2} d\theta \right)^{1/2}\\ &\leq \sqrt{2 \pi} \sqrt{\sup \lvert df \rvert} \left(\int_{\{\pm T_{k}^{\delta}\} \times S^{1}} \lvert f_{\theta} \rvert d\theta \right)^{1/2}\\ &\leq C \sqrt{\varepsilon''_{0}} \sqrt{\varepsilon}. \end{align*} Here the second inequality holds because the convexity $\Theta'' \geq \Theta > 0$ forces $\Theta$ to attain its maximum at the boundary. Now $E \leq 2T_{k}^{\delta} \alpha + \varepsilon$ is clear from \eqref{energy theta}. To see the diameter bound, choose $\theta_{0}$ such that $\overline{L} = L_{\theta_{0}}$, where $L_{\theta}$ denotes the length of the curve $t \mapsto f(t,\theta)$. Let $(t_{1},\theta_{1})$ and $(t_{2},\theta_{2})$ be such that \begin{equation*} \max_{x,y \in [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1}} (\lvert f(x) - f(y) \rvert) = \lvert f(t_{1},\theta_{1}) - f(t_{2},\theta_{2}) \rvert.
\end{equation*} Then, \begin{align*} \mathrm{diam} f &\leq \lvert f(t_{1},\theta_{1}) - f(t_{1},\theta_{0}) \rvert + \lvert f(t_{1},\theta_{0})-f(t_{2},\theta_{0}) \rvert + \lvert f(t_{2},\theta_{0}) - f(t_{2},\theta_{2}) \rvert\\ &\leq \int_{\{t_{1}\} \times S^{1}} \lvert f_{\theta} \rvert d\theta + L_{\theta_{0}}+ \int_{\{t_{2}\} \times S^{1}} \lvert f_{\theta} \rvert d\theta \leq 2\varepsilon + \overline{L}. \end{align*} This completes the proof. \end{proof} \section{Continuity at Regular nodes} \label{section5} In general, $\alpha$ in the previous section is not small enough to obtain the zero neck property at each node. However, if the node is {\em regular}, $\alpha$ vanishes and hence the zero neck property holds. Here the existence of a forgetful map is crucial. We first define the notion of a regular node. \begin{definition}\label{reg node def} Let $\pi : \mathcal{C} \rightarrow B$ be a family of $(g,n)$ curves and $p$ be a nodal point. We say $p$ is a regular node if there exists a family of $(g,l)$ curves $\overline{\pi} : \overline{\mathcal{C}} \rightarrow \overline{B}$ with $l<n$ and a forgetful map $\Phi : \mathcal{C} \rightarrow \overline{\mathcal{C}}$ such that $\overline{p} := \Phi(p)$ is a regular point. \end{definition} \begin{lemma}\label{gap neck} (Energy gap in the neck) Suppose $p$ is a regular node and $E(f_{k}, B(p,\delta_{0}) \cap C_{k}) \leq \varepsilon'_{0}$ for some $\delta_{0}>0$ and for all $k$. Then $f_{k}$ satisfies the zero neck property at $p$. \end{lemma} Lemma \ref{gap neck} provides another energy quantization $\varepsilon''_{0}$ over regular nodes. Combined with the energy quantization $\varepsilon'_{0}$ at smooth bubble points, these quantizations are the key to the definition of residual energy. The proof of this lemma will be given at the end of the section. \begin{lemma}\label{reg node} Suppose $f_{k} : C_{k} \rightarrow X$ is a sequence of harmonic maps, $C_{k} \rightarrow C_{0}$ and $p \in C_{0}$ is a regular node.
Then in the cylindrical coordinate, \begin{equation*} \int_{0}^{2\pi} \lvert f_{t} \rvert^{2} d\theta = \int_{0}^{2\pi} \lvert f_{\theta} \rvert^{2} d\theta, \end{equation*} which means that $\alpha = 0$. \end{lemma} \begin{proof} Let $\Phi : \mathcal{C} \rightarrow \overline{\mathcal{C}}$ be a forgetful map such that $\overline{p} = \Phi(p)$ is a regular point. Choose a regular chart $(z,\overline{b})$ of $\overline{\mathcal{C}}$ centered at $\overline{p} = (0,0)$ and let $B_{\delta} \subset \overline{C_0}$ denote the ball of radius $\delta$ centered at $\overline{p}$ in this chart. Then $\Phi^{-1}(B_{\delta} \setminus \{\overline{p}\})$ is biholomorphic to a punctured disk $\mathring{B} \subset C_0$ while $\Phi^{-1}(\overline{p})$ is a union of components with marked points on them. Without loss of generality, we can assume that $p$ is the puncture of $\mathring{B}$ at which the other component meets. Pick a nodal chart near $p$ of $C_{k} \cap B(p,\delta)$ given by $(x,y,b)$ with $x \in \mathring{B}$ such that $xy=t_{k}$ for some $t_{k}$ and $|x|,|y| \leq \delta$. If necessary, by modifying the charts, we can arrange that $\Phi(x,y,b) = (z,\overline{b})$ with $z=x$. Denote $F_{k} := f_{k} \circ \Phi^{-1} : \overline{C_{k}} \rightarrow X$, which is also harmonic. By the Pohozaev identity, \begin{equation} \label{Poho} \int_{0}^{2\pi} \big\lvert F_{\phi} \big\rvert^{2} d\phi = r^{2} \int_{0}^{2\pi} \big\lvert F_{r} \big\rvert^{2} d\phi \end{equation} where $(r,\phi)$ is the polar coordinate given by $z = re^{i\phi}$. For the proof of this identity, see \cite[Lemma 3.5]{SU} or \cite[Lemma 6.1.5]{LW}. Recall the cylindrical coordinate $\phi : [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1} \rightarrow C_{k}$ given by \eqref{cyl coord}.
Then $\Phi \circ \phi : [-T_{k}^{\delta},T_{k}^{\delta}] \times S^{1} \rightarrow \overline{C_{k}}$ gives a coordinate change $\Phi \circ \phi (t,\theta) = x = \sqrt{t_{k}}e^{t+i\theta}$, so \begin{equation*} (F \circ \Phi \circ \phi)_{t} = \sqrt{t_{k}}e^{t}F_{r} = r F_{r}, \quad (F \circ \Phi \circ \phi)_{\theta} = F_{\phi}. \end{equation*} Then \eqref{Poho} becomes \begin{equation*} \int_{0}^{2\pi} \big\lvert (F \circ \Phi \circ \phi)_{\theta} \big\rvert^{2} d\theta = \int_{0}^{2\pi} \big\lvert (F \circ \Phi \circ \phi)_{t} \big\rvert^{2} d\theta. \end{equation*} Since $F \circ \Phi \circ \phi = f \circ \phi$, this proves the lemma. \end{proof} \begin{proof} (Proof of Lemma \ref{gap neck}) By Lemma \ref{reg node}, $\alpha = 0$ at the regular node $p$, so the conclusion follows from Proposition \ref{gap2}. \end{proof} The assumption that $p$ is a regular node cannot be removed. Parker \cite{P} showed that there can be non-regular nodes by providing an example in which the zero neck property fails. In the language of the Deligne-Mumford moduli space, his example can be seen as a family of tori whose singular central fiber is a pinched torus. Note that this node is not regular, because there is no forgetful map from this singular family to a family with fewer marked points. However, by modifying the example, one can get zero energy and zero (average) length of the neck, so that the zero neck property is satisfied. (See \cite{Z} for details.) This modification suggests the possibility that the zero neck property may hold even over non-regular nodes, but not much research has been done in this direction. \section{Adding marked points to build new family} \label{section6} In this section we describe the procedure of adding marked points to build a bigger family. Let $f_{k} : C_{k} \rightarrow X$ be a sequence of harmonic maps with uniformly bounded energy where $C_{k}$ are smooth $(g,n)$ curves.
By Lemma \ref{loc conv}, $f_{k}$ converges to $f_{0} : C_{0} \rightarrow X$ off the singular set $S = Q \cup N$ where $Q$ is a set of smooth bubble points and $N$ is a set of nodal bubble points. Let $\overline{N} \subset N$ be the set of regular nodes. Note that at a smooth bubble point the energy concentration is at least $\varepsilon'_{0}$, and at a regular nodal bubble point it is at least $\varepsilon''_{0}$. Together with $\varepsilon_{0}$ which comes from $\varepsilon$-regularity, we set \begin{equation} \bar{\varepsilon} = \frac{1}{2}\min{(\varepsilon_{0}, \varepsilon'_{0}, \varepsilon''_{0})}. \end{equation} Now we develop a framework for building a bigger family. \begin{lemma}\label{new family} Fix a deformation $\pi : \mathcal{C} \rightarrow B$ and suppose $C_{k} = \pi^{-1}(b_{k})$, $C_{0} = \pi^{-1}(0)$ with $b_{k} \rightarrow 0$. Suppose for each $k$ that \begin{enumerate} \item[{\sc {case 1}}] For $p \in C_{0}$ a smooth unmarked point, there are two unmarked distinct points $q_{k},r_{k} \in C_{k}$ such that \begin{equation*} p = \lim_{k \rightarrow \infty}q_{k} = \lim_{k \rightarrow \infty}r_{k}. \end{equation*} \item[{\sc {case 2}}] For $p \in C_{0}$ a nodal point, there is an unmarked point $r_{k} \in C_{k}$ such that \begin{equation*} p = \lim_{k \rightarrow \infty}r_{k}. \end{equation*} \end{enumerate} Denote $C'_{k} = (C_{k},q_{k},r_{k})$ for Case 1, or $C'_{k} = (C_{k},r_{k})$ for Case 2. Then there is a subsequence $n_{k}$, a nodal $(g,n+i)$ marked curve $C'_{0}$ where $i=2$ for Case 1 and $i=1$ for Case 2, a deformation $\pi' : \mathcal{C}' \rightarrow B'$ of $C'_{0}$ such that $C'_{n_k} \rightarrow C'_{0}$ in the family $\mathcal{C}'$, and a forgetful map \begin{equation} \label{forgetful map} \begin{tikzcd} \mathcal{C}' \arrow[r,"\Phi"] \arrow[d,"\pi'"] & \mathcal{C} \arrow[d,"\pi"]\\ B' \arrow[r] & B \end{tikzcd} \end{equation} such that $\Phi(C'_{n_{k}}) = C_{n_{k}}$ and $\Phi(C'_{0}) = C_{0}$.
Here $\mathcal{C}'$ comes with two additional sections $\sigma$ and $\tau$ corresponding to $q_{k}$ and $r_{k}$ for Case 1, and with one additional section $\tau$ corresponding to $r_{k}$ for Case 2. Furthermore, the restriction of $\Phi$ to $C'_{0}$ is the map collapsing a rational curve $E$, and is biholomorphic on $C'_{0} \setminus E$. In Case 1, $E$ has two marked points $\sigma(0),\tau(0) \in E$ and one node, and in Case 2, $E$ has one marked point $\tau(0) \in E$ and two nodes. \end{lemma} \begin{proof} Since the Deligne-Mumford moduli space $\overline{\mathcal{M}}_{g,n+i}$ is compact, there is a subsequence $n_{k}$ such that $[C'_{n_{k}}] \rightarrow [C'_{0}]$ for some $(g,n+i)$ curve $C'_{0}$. Let \begin{equation}\label{new family eq} \begin{tikzcd} \mathcal{C}' \arrow[d,"\pi'"]\\ B' \end{tikzcd} \end{equation} be a Kuranishi family of $C'_{0} = (\pi')^{-1}(0)$. For Case 1, this family comes with two sections $\sigma$ and $\tau$ corresponding to the last two marked points, and also with a forgetful map $\Phi$ as in \eqref{forgetful map}, which forgets $\sigma$ and $\tau$ and collapses unstable components. For Case 2, this family comes with one section $\tau$ corresponding to the last marked point, and also with a forgetful map $\Phi$ as in \eqref{forgetful map}, which forgets $\tau$ and collapses unstable components. Then we can choose $b'_{k} \rightarrow 0$ in $B'$ and for each $k$, an identification of $C'_{n_{k}}$ with a fiber $(\pi')^{-1}(b'_{k})$ of $\mathcal{C}'$ such that $q_{k} = \sigma(b'_{k})$ and $r_{k} = \tau(b'_{k})$ for Case 1, and $r_{k} = \tau(b'_{k})$ for Case 2. In either case, the restriction of $\Phi$ to $C'_{0}$ is the map collapsing a rational curve $E$, and is biholomorphic on $C'_{0} \setminus E$. Note that in Case 1, $E$ has two marked points $\sigma(0),\tau(0) \in E$ and one node, and in Case 2, $E$ has one marked point $\tau(0) \in E$ and two nodes.
(For details, see \cite{ACG}, Sections 10.6 and 10.8.) \end{proof} \begin{lemma}\label{new node regular} Fix a family $\pi : \mathcal{C} \rightarrow B$ and suppose $C_{k} \rightarrow C_{0}$ in $\mathcal{C}$ and $p \in C_{0}$ is a regular node. Also suppose that there is a bigger family $\mathcal{C}'$ of $(g,n')$ curves with $n<n'$ and a forgetful map $\Phi : \mathcal{C}' \rightarrow \mathcal{C}$. Let $C'_{0}$ be a fiber of $\mathcal{C}'$ such that $\Phi(C'_{0}) = C_{0}$. Then any node $q \in C'_{0}$ with $\Phi(q) = p$ is regular. \end{lemma} \begin{proof} Since $p$ is regular, there exists a family of $(g,l)$ curves $\overline{\pi} : \overline{\mathcal{C}} \rightarrow \overline{B}$ with $l<n$ and a forgetful map $\overline{\Phi} : \mathcal{C} \rightarrow \overline{\mathcal{C}}$ such that $\overline{p} := \overline{\Phi}(p)$ is a regular point. Then the composition of forgetful maps $\overline{\Phi} \circ \Phi : \mathcal{C}' \rightarrow \overline{\mathcal{C}}$ is also a forgetful map from a family of $(g,n')$ curves to a family of $(g,l)$ curves. Moreover, $\overline{\Phi} \circ \Phi (q) = \overline{p}$ is a regular point. So $q$ is regular. \end{proof} Next, we specify where to put marked points near bubble points. First let $p \in Q$ be a smooth bubble point with energy concentration $m$. Choose a neighborhood $U$ of $p$ and, given two distinct points $q,r \in U$, define the (simplified) cross ratio $CR_{q,r} : U \rightarrow \mathbb{C}$ by \begin{equation} CR_{q,r}(x) = \frac{x-q}{r-q}. \end{equation} Note that $CR_{q,r}(q) = 0$ and $CR_{q,r}(r) = 1$. Given $q_{k},r_{k}$, denote $R_{k} = CR_{q_{k},r_{k}}$. Let $\mu_{k}$ be the energy density measure on $U$ and $\nu_{k} = (R_{k})_{*}\mu_{k}$ be the push-forward measure on $R_{k}(U) \subset \mathbb{C}$. \begin{lemma}\label{mark} (Marking points near smooth bubble point) Let $p \in Q$ be a smooth bubble point with energy concentration $m$.
Then after passing to a subsequence, there exist unmarked points $q_{k},r_{k} \in C_{k}$ both converging to $p$ such that \begin{enumerate} \item \label{cond1} $C'_{k} = (C_{k},q_{k},r_{k}) \rightarrow C'_{0}$ in a bigger family $\mathcal{C}'$ as in Lemma \ref{new family}. \item \label{cond2} Denote $f'_{k} = f_{k} \circ \Phi : C'_{k} \rightarrow X$. Identify $E$ with $\mathbb{C}P^{1}$ by mapping $\sigma(0)$ to $[0:1]$, $\tau(0)$ to $[1:1]$, and the node $p'$ of $E$ to $[1:0]$. Under the chart $[z:1] \mapsto z$, \begin{align} \lim_{k \rightarrow \infty}\int_{E \setminus D} \nu_{k} &= \bar{\varepsilon}, \label{mark-eq1}\\ \int_{D} z \,\nu_{k} &= 0 \label{mark-eq2} \end{align} where $D = \{z : \lvert z \rvert < 1\}$ and $\nu_{k} = e(\tilde{f}'_{k})$ are energy density measures on $E$. \item \label{cond3} On $E$, $\tilde{f}'_{k}$ converges to $f'_{0}$ in $C^{1}$ away from $\{p'\} \cup \{q_{j}\}_{j=1, \ldots, l'}$ where $q_{j} \in D \subset E$ with energy concentration $m_{j} \geq \varepsilon'_{0}$. In addition, the new node $p'$ is regular and \begin{equation} E(f'_{0}\vert_{E}) + \sum_{j=1, \ldots, l'}m_{j} = m. \end{equation} \end{enumerate} \end{lemma} Here the map $R_{k}$ acts as a coordinate change from $U$ to $E$. From the choice of $q_k,r_k$, the push-forward measure $\nu_{k}$ is such that \begin{enumerate} \item essentially all of the mass $m$ is captured by $\nu_{k}$ over $E$, \item all but $\bar{\varepsilon}$ of that mass lies inside the unit disk $D$, and \item the center of mass of $\nu_{k}$ over $D$ is at the origin. \end{enumerate} The proof of this lemma is technical and will be given in the appendix. \bigskip Now let $p \in \overline{N}$ be a regular nodal bubble point with energy concentration $m$. A nodal chart near $p$ can be written as $ B(p,\delta) \cap C_{k} = \{(x,y) \in \mathbb{C}^{2} : xy = t_{k}, \lvert x \rvert, \lvert y \rvert \leq \delta\}$.
Let $\pi_1$ be the projection to the first factor, given by $\pi_{1} (x,y) = x$, and denote $A_{k,\delta} := \pi_{1}(B(p,\delta) \cap C_{k}) = B_{\delta} \setminus B_{t_{k}/\delta} \subset \mathbb{C}$ where $B_{\delta}$ is the ball of radius $\delta$ centered at the origin in the complex plane. Define the extended push-forward energy density measure $\mu_{k}$ over $U := B_{\delta}$ by $\mu_{k} = (\pi_{1})_{*}e(f_{k})$ on $A_{k,\delta}$ and $\mu_{k} = 0$ on $B_{t_{k}/\delta}$. Then $\mu_{k} \rightarrow \mu_{\infty} + (m + E_{\delta}) \delta_{p}$ where $\mu_{\infty} = e(f_{0})$ on $B_{\delta}$, $E_{\delta} = E(f_{0},\left(B(p,\delta) \cap C_0 \right)|_{\{x=0\}})$, $\delta_{p}$ is the Dirac-delta measure centered at $p$, and $m \geq 2\bar{\varepsilon}$. By choosing $\delta$ small, we can make $E_{\delta}$ as small as we want. Therefore, without loss of generality, we simply write $m$ instead of $m+E_{\delta}$ and consider \begin{equation*} \mu_{k} \rightarrow \mu_{\infty} + m \delta_{p}. \end{equation*} Given $r_{k}$, denote $R_{k} = CR_{p,r_{k}}$ and $\nu_{k} = (R_{k})_{*}\mu_{k}$ as above. \begin{lemma}\label{mark2} (Marking points near regular nodal bubble point) Let $p \in \overline{N}$ be a regular nodal bubble point with energy concentration $m$. Then after passing to a subsequence, there exist unmarked points $r_{k} \in C_{k}$ converging to $p$ such that \begin{enumerate} \item \label{cond1'} $C'_{k} = (C_{k},r_{k}) \rightarrow C'_{0}$ in a bigger family $\mathcal{C}'$ as in Lemma \ref{new family}. \item \label{cond2'} Denote $f'_{k} = f_{k} \circ \Phi : C'_{k} \rightarrow X$. Identify $E$ with $\mathbb{C}P^{1}$ by mapping one node $p_{2}$ of $E$ to $[0:1]$, $\tau(0)$ to $[1:1]$, and the other node $p_{1}$ of $E$ to $[1:0]$.
Under the chart $[z:1] \mapsto z$, \begin{equation} \label{mark2-eq} \lim_{k \rightarrow \infty}\int_{E \setminus D} d\nu_{k} = \bar{\varepsilon} \end{equation} where $D = \{z : \lvert z \rvert < 1\}$ and $\nu_{k} = e(\tilde{f}'_{k})$ are energy density measures on $E$. \item \label{cond3'} On $E$, $\tilde{f}'_{k}$ converges to $f'_{0}$ in $C^{1}$ away from $\{p_{1}\} \cup \{p_{2}\} \cup \{q_{j}\}_{j=1, \ldots, l'}$, where $q_{j} \in D \subset E$ with $q_{j} \neq p_{2}$ and with energy concentration $m_{j} \geq \varepsilon'_{0}$. In addition, the new nodes $p_{1},p_{2}$ are regular and \begin{equation} E(f'_{0}\vert_{E}) + \sum_{j=1, \ldots, l'}m_{j} + m_{0} = m \end{equation} where $m_{0}$ is the energy concentration at $p_{2} \in E$ and $m_{0} \geq \varepsilon''_{0}$. \end{enumerate} \end{lemma} The proof of this lemma is also given in the appendix. \section{Completion of the proof - Induction} \label{section7} In this section we prove Main Theorem \ref{main theorem} and its special cases, Corollaries \ref{main2}, \ref{fixed domain} and \ref{sphere}. First we define the residual energy. \begin{definition} \label{RE} Suppose $f_{k} : C_{k} \rightarrow X$ is a sequence of maps with uniformly bounded energy that converges to $f_{0} : C_{0} \rightarrow X$ off the set $S = Q \cup N$, where $Q = \{p_{1}, \ldots, p_{l}\}$ is a set of smooth bubble points and $N$ is a set of nodal bubble points. Let $\overline{N} = \{q_{1}, \ldots, q_{n}\}$ be the set of regular nodes at which energy concentrates. Define the residual energy, denoted by $RE$, by \begin{equation} RE = \lim_{k \rightarrow \infty}E(f_{k}) - E(f_{0}) - l \bar{\varepsilon} - n \bar{\varepsilon}/2. 
\end{equation} If we denote by $m_{i}$ the energy concentration at $p_{i}$ and by $m'_{j}$ the energy concentration at $q_{j}$, then since $m_{i}, m'_{j} \geq 2\bar{\varepsilon}$, we have \begin{equation*} RE = \sum_{i=1}^{l}{(m_{i} - \bar{\varepsilon})} + \sum_{j=1}^{n} {(m'_{j} - \bar{\varepsilon}/2)} \geq l \bar{\varepsilon} + 3 n \bar{\varepsilon}/2. \end{equation*} \end{definition} \begin{proof} (Proof of Theorem \ref{main theorem}) If there is a subsequence $n_{k}$ and a finite set of points $P_{n_{k}}$ on $C_{n_{k}}$ such that $C'_{n_{k}} = (C_{n_{k}},P_{n_{k}}) \rightarrow C'_{0}$ for some $C'_{0}$ and the corresponding residual energy satisfies $RE = 0$, then there are no energy concentration points except non-regular nodes and we are done. Now suppose $RE > 0$ for any subsequence $n_{k}$ and any set of marking points $P_{n_{k}}$ such that $C'_{n_{k}}$ converges. That means energy concentration occurs either at $p \in Q$ or at $p \in \overline{N}$. {\bf Case 1:} There is energy concentration $m$ at $p \in Q$. By Lemma \ref{mark}, after passing to a subsequence, we can add two marked points $q_{k},r_{k} \in C_{k}$ such that $E(f'_{0}\vert_{E}) + \sum_{j=1}^{l'}m_{j} = m$, where $m_{j} \geq 2\bar{\varepsilon}$. In the new family, \begin{equation*} RE' = \lim_{k \rightarrow \infty}E(f'_{k}) - E(f'_{0}) - (l-1+l') \bar{\varepsilon} - n \bar{\varepsilon}/2. \end{equation*} Note that $n$ does not change because the new node is a regular node with no energy concentration. Then the difference between the new and the old residual energy is \begin{equation*} RE' - RE = - (l'-1) \bar{\varepsilon} - E(f'_{0}\vert_{E}). \end{equation*} If $E(f'_{0}\vert_{E}) > 0$, then since $E(f'_{0}\vert_{E}) \geq 2\bar{\varepsilon}$, $RE' \leq RE - \bar{\varepsilon}$. If $E(f'_{0}\vert_{E}) = 0$ and $l' \geq 2$, then $RE' \leq RE - \bar{\varepsilon}$. Finally, if $E(f'_{0}\vert_{E}) = 0$ and $l' \leq 1$, we know $l'=1$ because of the energy identity $E(f'_{0}\vert_{E}) + \sum_{j=1}^{l'}m_{j} = m$. 
Note that from \eqref{mark-eq2}, the location of the bubble on $E$ is $[0:1]$. But then \eqref{mark-eq1} implies that an amount $\bar{\varepsilon}$ of energy on $E \setminus D$ cannot be absorbed by the bubble, hence $E(f'_{0}\vert_{E}) \geq \bar{\varepsilon}$. This contradicts the assumption $E(f'_{0}\vert_{E}) = 0$, so this case is impossible. Hence, in any case, $RE' \leq RE - \bar{\varepsilon}$. \smallskip {\bf Case 2:} There is energy concentration $m$ at a regular node $p \in \overline{N}$. By Lemma \ref{mark2}, we can add one marked point $r_{k} \in C_{n_{k}}$ and pass to a subsequence $n'_{k}$ of $n_{k}$ such that $E(f'_{0}\vert_{E}) + \sum_{j=1}^{l'}m_{j} + m_{0} = m$, where $\nu_{0} = e(f'_{0})$, $m_{j}\geq 2\bar{\varepsilon}$, and $m_{0}$ is either zero or at least $2\bar{\varepsilon}$. In the new family, \begin{equation*} RE' = \lim_{k \rightarrow \infty}E(f'_{k}) - E(f'_{0}) - (l+l') \bar{\varepsilon} - n' \bar{\varepsilon}/2 \end{equation*} where $n'$ is the number of new regular nodes at which energy concentrates. Note that $n'=n$ if $m_{0} \neq 0$ and $n'=n-1$ if $m_{0} = 0$, so the difference between the new and the old residual energy satisfies \begin{equation*} RE' - RE \leq - l' \bar{\varepsilon} + \bar{\varepsilon}/2 - E(f'_{0}\vert_{E}). \end{equation*} If $E(f'_{0}\vert_{E}) > 0$, then since $E(f'_{0}\vert_{E}) \geq 2\bar{\varepsilon}$, $RE' \leq RE - \bar{\varepsilon}$. If $E(f'_{0}\vert_{E}) = 0$ and $l' \geq 1$, then $RE' \leq RE - \bar{\varepsilon}/2$. Finally, if $E(f'_{0}\vert_{E}) = 0$ and $l' = 0$, then $m_{0} = m$ and all energy concentrates at $[0:1]$. But from \eqref{mark2-eq}, an amount $\bar{\varepsilon}$ of energy on $E \setminus D$ cannot concentrate at $[0:1]$, which contradicts $E(f'_{0}\vert_{E}) = 0$. So this case is impossible. Hence, in any case, $RE' \leq RE - \bar{\varepsilon}/2$. 
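The two cases combine into an explicit bound on the length of the marking process. Each marking step decreases the residual energy by at least $\bar{\varepsilon}/2$, while the residual energy of every family appearing in the process is nonnegative by Definition \ref{RE}. Hence, if $s$ marking steps are performed starting from residual energy $RE$, then \begin{equation*} 0 \leq RE - s\,\bar{\varepsilon}/2, \qquad \textrm{ so } \qquad s \leq 2\,RE/\bar{\varepsilon}. \end{equation*}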
\smallskip In conclusion, if $RE>0$, we mark either two points near a bubble point or one point near a regular nodal point and obtain $RE' \leq RE-\bar{\varepsilon}/2$. Since $RE$ is finite and decreases by at least $\bar{\varepsilon}/2$ at each step, this process must stop, which happens only when energy concentrates at non-regular nodes alone. This proves the theorem. \end{proof} \begin{proof} (Proof of Corollary \ref{main2}) The only thing we need to show is that all nodes of $C'_{0}$ are regular. Let $p'$ be a node of $C'_{0}$. Since the forgetful map $\Phi : \mathcal{C}' \rightarrow \mathcal{C}$ maps $C'_{0}$ to $C_{0}$, $\Phi(p') \in C_{0}$ is either a regular point or a nodal point. If $\Phi(p')$ is a regular point, $p'$ is a regular node by definition. If $\Phi(p')$ is a nodal point, by assumption it is a regular nodal point. So by Lemma \ref{new node regular}, $p'$ is a regular node. Hence we can apply Theorem \ref{main theorem}, and the conclusion follows from the fact that the singular set $S$ is empty. \end{proof} \begin{proof} (Proof of Corollary \ref{fixed domain}) Let $C_{k} = \Sigma$; then $C_{0} = \Sigma$, which has no nodes. The corollary then follows from Corollary \ref{main2}. \end{proof} \begin{proof} (Proof of Corollary \ref{sphere}) Choose a limit $C_{0}$ such that $C_{k} \rightarrow C_{0}$ in a family $\mathcal{C}$ of $(0,n)$ curves with $n \geq 3$. It is enough to show that all nodes of $C_{0}$ are regular. Note that if $n=3$, the Deligne-Mumford moduli space $\overline{\mathcal{M}}_{0,3}$ is trivial and there is no node in $C_{0}$. Assume $n \geq 4$. Let $p \in C_{0}$ be a node. Consider a forgetful map $\Phi : \mathcal{C} \rightarrow \overline{\mathcal{C}}$ that forgets $n-3$ marked points and collapses unstable components. Here $\overline{\mathcal{C}}$ is a family of $(0,3)$ curves, which is again trivial. So $\Phi(p)$ is a regular point and hence $p$ is a regular node. This proves the corollary. \end{proof} \section{Appendix - Marking points lemmas} \label{section8} This section describes the proofs of Lemmas \ref{mark} and \ref{mark2}. 
\subsection{Local properties} Assume the convergence of measures \begin{equation}\label{meas conv2-re} \mu_{k} \rightarrow \mu_{\infty} + m\delta_{p} \end{equation} where $\mu_{k},\mu_{\infty}$ are measures on $U \subset \mathbb{C}$, $\delta_{p}$ is the Dirac-delta measure at the origin and $m \geq 2\bar{\varepsilon}$. Throughout this section, $B(x,r) \subset U$ denotes a ball of radius $r$ in $U$ centered at $x \in U$, and $D \subset \mathbb{C}$ denotes the unit disk. \begin{lemma} There are $\delta_{k}$, $\varepsilon_{k}$, and a subsequence of $\mu_{k}$, still denoted by $\mu_{k}$, such that $\delta_{k}, \varepsilon_{k} \rightarrow 0$ and the following hold: \begin{equation} \int_{B(0,\delta_{0})} d\mu_{\infty} \leq \varepsilon_{0} = \frac{\bar{\varepsilon}}{4} \qquad \textrm{ and } \qquad \int_{B(0,\delta_{k})} d\mu_{\infty} \leq \varepsilon_{k} \label{subseq1} \end{equation} for all $k$. Moreover, given any $k$, for all $m$ with $1 \leq m \leq 2k$, \begin{equation} \left\lvert \int_{B(0,\delta_{m})} d\mu_{k} - d\mu_{\infty} - m \delta_{p} \right\rvert < \varepsilon_{m}. \label{subseq2} \end{equation} \end{lemma} \begin{proof} Choose $\delta_{0} > 0$ and $\varepsilon_{0} = \bar{\varepsilon}/4$ such that the first equation in \eqref{subseq1} holds. From \eqref{meas conv2-re}, given any $\varepsilon < \varepsilon_{0}$, $\delta < \delta_{0}$, there is a subsequence $\mu_{n_{k}}$ of $\mu_{k}$ such that \begin{equation} \label{subseq2'} \left\lvert \int_{B(0,\delta)} d\mu_{n_{k}} - d\mu_{\infty} - m \delta_{p} \right\rvert < \varepsilon \end{equation} for all $k$. Pick $\varepsilon_{k} \rightarrow 0$ with $\varepsilon_{k} \leq \varepsilon_{k-1}/2$. Pick $\delta_{k} \rightarrow 0$ with $\delta_{k} \leq \delta_{k-1}/2$ such that the second equation in \eqref{subseq1} holds. For $(\varepsilon_{1},\delta_{1})$, there exists a subsequence $\mu^{1}_{k}$ of $\mu_{k}$ such that Equation \eqref{subseq2'} holds with $(\varepsilon_{1},\delta_{1})$. 
For $(\varepsilon_{2},\delta_{2})$, there exists a further subsequence $\mu^{2}_{k}$ of $\mu^{1}_{k}$ such that Equation \eqref{subseq2'} holds with $(\varepsilon_{1},\delta_{1})$ and $(\varepsilon_{2},\delta_{2})$. Continuing in this way, we take the diagonal subsequence $\mu^{k}_{k}$ and finally rename $\mu_{k} = \mu^{2k}_{2k}$. Then, given any $k$, for all $1 \leq m \leq 2k$, Equation \eqref{subseq2} holds and the lemma is proved. \end{proof} Denote $B_{k} = B(0,\delta_{k})$. By the choice of $B_{k}$, we have \begin{equation} \label{mkBk} \lim_{k \rightarrow \infty} \int_{B_{k}} d\mu_{k} = m. \end{equation} For simplicity, we fix $k$ and denote $\mu_{k}$ simply by $\mu$. We first clarify the assumptions we will use. \begin{assume}\label{assume mu} Assume $\mu$ is a smooth finite-mass measure on a bounded set $U \subset \mathbb{C}$. By choosing $k$ large enough we may assume \begin{enumerate} \item \label{k cond1} $2\varepsilon_{k} + 2\varepsilon_{2k} < \bar{\varepsilon}$, \item \label{k cond2} $3 \delta_{2k-1} < \delta_{k}$, \item \label{k cond3} $E := \mu (B_{k}) > m - 2 \varepsilon_{k} > \bar{\varepsilon}$ by \eqref{subseq1} and \eqref{subseq2}, \item \label{k cond4} $(E - \bar{\varepsilon} - 2\varepsilon_{k} - 2\varepsilon_{2k})/2 > 8 (\varepsilon_{k} + \varepsilon_{2k})$. \end{enumerate} \end{assume} \begin{definition} Given $q \in B_{k}$ and $t \in (0,1)$, define $r_{t} = q + t/(1-t)$. Also define the cross ratio $R_{q,t} : U \rightarrow \mathbb{C}$ by \begin{equation} \label{CR def} R_{q,t}(x) = \frac{(x-q)}{(r_{t}-q)} = \frac{1-t}{t}(x-q) = (t^{-1}-1) (x-q). \end{equation} \end{definition} Note that for fixed $q$, \begin{itemize} \item as $t \rightarrow 0$, $r_{t} \rightarrow q$ and $R_{q,t}(x) \rightarrow \infty$ for all $x \neq q$. \item as $t \rightarrow 1$, $r_{t} \rightarrow \infty$ and $R_{q,t}(x) \rightarrow 0$ for all $x$. \end{itemize} \begin{lemma} \label{out measure1} Let $\mu$ be as in Assumption \ref{assume mu}. 
Given $q \in B_{k}$, there exists a unique $t=t_{q} \in (0,1)$ such that \begin{equation} \int_{R_{q,t_{q}}(B_{k}) \setminus D} (R_{q,t_{q}})_{*} d\mu = \bar{\varepsilon}. \end{equation} \end{lemma} \begin{proof} Define a continuous function $f : (0,1) \rightarrow [0,\infty)$ by \begin{equation*} f(t) = \int_{R_{q,t}(B_{k}) \setminus D} (R_{q,t})_{*} d\mu = \int_{A_{t}} d\mu \end{equation*} where $A_{t} = B_{k} \setminus {R_{q,t}}^{-1}(D) = \{x \in B_{k} : \lvert R_{q,t}(x) \rvert > 1\}$. Now for $x \neq q$, \begin{equation*} \frac{\partial}{\partial t} \lvert R_{q,t}(x) \rvert = -t^{-2} \left\lvert x-q \right\rvert < 0, \end{equation*} so $\{A_{t}\}$ is a family of sets strictly decreasing in $t$, hence $f$ is strictly decreasing. Note that \begin{align*} \lim_{t \rightarrow 0}f(t) &= \lim_{t \rightarrow 0}\int_{A_{t}} d\mu = \int_{B_{k} \setminus \{q\}} d\mu = E > \bar{\varepsilon}\\ \lim_{t \rightarrow 1}f(t) &= \lim_{t \rightarrow 1}\int_{A_{t}} d\mu = \int_{\emptyset} d\mu = 0 < \bar{\varepsilon}, \end{align*} hence there exists a unique $t_{q}$ such that $f(t_{q}) = \bar{\varepsilon}$. \end{proof} \begin{definition} For simplicity, denote $R_{q} = R_{q,t_{q}}$. \end{definition} \begin{lemma}\label{t cont} The assignment $q \mapsto t_{q}$ is continuous on $B_{k}$. \end{lemma} \begin{proof} Denote $f(t) = t/(1-t)$. Then $x \in R_{q,t}^{-1}(D)$ implies $\lvert x-q \rvert \leq f(t)$. Fix $q$ and $\varepsilon>0$. Since $f^{-1}$ is continuous, there is $\delta>0$ such that if $\lvert f(t_{q}) - f(t) \rvert \leq \delta$, then $\lvert t_{q} - t \rvert \leq \varepsilon$. Fix $q' \in B_{k}$ such that $\lvert q - q' \rvert \leq \delta$. Now it is enough to show that $\lvert t_{q} - t_{q'} \rvert \leq \varepsilon$. Define $t_{\pm}$ to be such that $f(t_{\pm}) = f(t_{q}) \pm \delta$. Since $f$ is increasing, $t_{-} < t_{+}$. We will show that \begin{equation*} R_{q',t_{-}}^{-1}(D) \subset R_{q,t_{q}}^{-1}(D) \subset R_{q',t_{+}}^{-1}(D). \end{equation*} To see this, for $x \in R_{q,t_{q}}^{-1}(D)$, $\lvert x-q \rvert \leq f(t_{q})$ implies \begin{equation*} \lvert x-q' \rvert \leq f(t_{q}) + \lvert q - q' \rvert \leq f(t_{q}) + \delta = f(t_{+}), \end{equation*} so $x \in R_{q',t_{+}}^{-1}(D)$. The other inclusion is similar. The inclusions give $\mu(B_{k} \setminus R_{q',t_{+}}^{-1}(D)) \leq \bar{\varepsilon} \leq \mu(B_{k} \setminus R_{q',t_{-}}^{-1}(D))$, and since the map $t \mapsto \mu(B_{k} \setminus R_{q',t}^{-1}(D))$ is decreasing and equals $\bar{\varepsilon}$ at $t_{q'}$, we have $t_{-} \leq t_{q'} \leq t_{+}$. Therefore $\lvert f(t_{q}) - f(t_{q'}) \rvert \leq \delta$, which implies $\lvert t_{q} - t_{q'} \rvert \leq \varepsilon$. \end{proof} \begin{definition}\label{F def} Denote $\nu_{q} = (R_{q})_{*} \mu$. Define $F : B_{k} \rightarrow \mathbb{C}$ by \begin{equation} F(q) = \int_{D} z \,d\nu_{q} (z). \end{equation} \end{definition} \begin{prop}\label{conti} The function $F$ in Definition \ref{F def} is continuous on $B_{k}$. \end{prop} \begin{proof} From Lemma \ref{t cont} and Equation \eqref{CR def}, $R_{q} = R_{q,t_{q}}$ depends continuously on $q \in B_{k}$. Hence the push-forward measure $\nu_{q} = (R_{q})_{*}\mu$ also depends continuously on $q$, and so does $F(q)$. \end{proof} \begin{prop}\label{zero} Let $F$ be as in Definition \ref{F def}. There exists $q_{k} \in B_{2k-1}$ such that $F(q_{k}) = 0$. \end{prop} This proposition is not trivial. For example, if $F(q) = (q+2\delta_{k})/3$, then $\lvert F(q) \rvert \geq \delta_{k}/3 > 0$ for all $q$, so $F$ would have no zero. To rule out such behavior, we need the following lemma. \begin{lemma}\label{angle} Let $F$ be as in Definition \ref{F def}. For any given $q \in \partial B_{2k-1}$, \begin{equation*} Re \left( \frac{F(q)}{-q} \right) > 0. \end{equation*} \end{lemma} \begin{proof} \begin{equation*} F(q) = \int_{D} z \,d\nu_{q}(z) = \int_{R_{q}^{-1}(D)} R_{q}(x) \,d\mu(x) = \frac{1-t_{q}}{t_{q}}\int_{R_{q}^{-1}(D)} (x-q) \,d\mu(x). \end{equation*} Denote $f(x) = x-q$. If $x \in B_{2k}$, then for $u=x/q$, \begin{equation*} Re \left( \frac{f(x)}{f(0)} \right) = Re (1-u) > 0 \end{equation*} because $\lvert u \rvert \leq 1/2$. 
Define sets $A,B,C$ by \begin{align*} A &:= \{ x \in R_{q}^{-1}(D) : x \in B_{2k}\}\\ B &:= \left\{ x \in R_{q}^{-1}(D) : x \not\in A \textrm{ and } Re \left(\frac{f(x)}{f(0)}\right) \geq 0\right\}\\ C &:= \left\{ x \in R_{q}^{-1}(D) : Re \left(\frac{f(x)}{f(0)}\right) < 0\right\}. \end{align*} Note that $\mu(A) + \mu(B) + \mu(C) = \mu(R_{q}^{-1}(D)) = E - \bar{\varepsilon}$ and $\mu(B) + \mu(C) \leq \mu(B_{k} \setminus B_{2k}) < 2\varepsilon_{k} + 2\varepsilon_{2k}$, by Equation \eqref{subseq2} for $m=k$ and $m=2k$. So we have $\mu(A) = E - \bar{\varepsilon} - \mu(B) - \mu(C) \geq E - \bar{\varepsilon} - 2 \varepsilon_{k} - 2 \varepsilon_{2k}$ and $\mu(C) \leq 2\varepsilon_{k} + 2\varepsilon_{2k}$. Now, \begin{align*} Re \left(\frac{F(q)}{-q} \frac{t_{q}}{1-t_{q}}\right) &= \int_{A \cup B \cup C} Re \left(\frac{f(x)}{f(0)} \right) d\mu(x)\\ &\geq \int_{A} Re\left( \frac{f(x)}{f(0)} \right) d\mu(x) + \int_{C} Re\left( \frac{f(x)}{f(0)} \right) d\mu(x)\\ &\geq \int_{A} \frac{\delta_{2k-1} - \delta_{2k}}{\delta_{2k-1}} d\mu - \int_{C} \frac{\delta_{2k-1} + 3\delta_{2k-1}}{\delta_{2k-1}} d\mu \\ &\geq \frac{1}{2} \mu(A) - 4 \mu(C) \geq \frac{1}{2} (E - \bar{\varepsilon} - 2\varepsilon_{k} - 2 \varepsilon_{2k}) - 8 (\varepsilon_{k} + \varepsilon_{2k}) > 0 \end{align*} by Assumption \eqref{k cond4}. Hence we get $Re \left( -F(q)/q \right) > 0$. \end{proof} \begin{proof} (Proof of Proposition \ref{zero}) Note that by Lemma \ref{angle}, $F(\partial B_{2k-1})$ is a closed curve with nonzero winding number around the origin: since $Re \left( F(q)/(-q) \right) > 0$, the segment from $F(q)$ to $-q$ never passes through $0$, so $F\vert_{\partial B_{2k-1}}$ has the same winding number as $q \mapsto -q$. So $F(B_{2k-1})$ contains $0$, which means that there exists $q \in B_{2k-1}$ such that $F(q)=0$. \end{proof} \smallskip Now we go back to the original sequence with subscript $k$. For $q_{k}$ in Proposition \ref{zero}, denote $t_{k}=t_{q_{k}}$, $r_{k} = r_{t_{k}}$ and $R_{k} = R_{q_{k}}$. \begin{lemma}\label{conv to C} $r_{k} \in B_{k}$. Also, $R_{k}(B_{k}) \rightarrow \mathbb{C}$ as $k \rightarrow \infty$. 
\end{lemma} \begin{proof} First we claim that for any $q \in B_{k}$, $B_{2k} \not\subset R_{q}^{-1}(D)$. Suppose not. Then we have \begin{equation*} \bar{\varepsilon} = \int_{B_{k}\setminus R_{q}^{-1}(D)} d\mu_{k} < \int_{B_{k}\setminus B_{2k}} d\mu_{k} \leq 2\varepsilon_{k} + 2\varepsilon_{2k} < \bar{\varepsilon} \end{equation*} which is a contradiction, proving the claim. So, there exists $x_{0} \in B_{2k}$ such that $x_{0} \not\in R_{q}^{-1}(D)$. Then $\lvert R_{q}(x_{0}) \rvert > 1$ and $\lvert x_{0} \rvert \leq \delta_{2k}$ imply that \begin{equation*} \frac{t_{q}}{1-t_q} \leq \lvert q \rvert + \delta_{2k}. \end{equation*} Therefore, \begin{equation*} \lvert r_{k} \rvert \leq \lvert q_{k} \rvert + \frac{t_{k}}{1-t_{k}} \leq 2 \lvert q_{k} \rvert + \delta_{2k} \leq 3\delta_{2k-1} \leq \delta_{k} \end{equation*} which proves the first claim. To show the second claim, it is enough to show that for any $R>0$, for all $k$ large enough, $R_{k}^{-1}(D_{R}) \subset B_{k}$ where $D_{R} \subset \mathbb{C}$ is a disk of radius $R$. Fix $R>0$ and choose $x \in R_{k}^{-1}(D_{R})$. Then we have \begin{equation*} \lvert x \rvert \leq \lvert q_{k} \rvert + R\frac{t_{k}}{1-t_{k}} \leq \delta_{2k-1} + R (\delta_{2k-1} + \delta_{2k}) \leq (1+2R)2^{-k+1}\delta_{k}. \end{equation*} Now choose $k$ large enough so that $(1+2R)2^{-k+1} \leq 1$. \end{proof} \subsection{Family properties} Now we are ready to prove Lemmas \ref{mark} and \ref{mark2}. \begin{proof} (Proof of Lemma \ref{mark}) The energy density measures $\mu_{k}$ of $f_k$ on a neighborhood $U \subset \mathbb{C}$ of $p$ satisfy $\mu_{k} \rightarrow \mu_{\infty} + m\delta_{p}$ as measures and $m \geq 2\bar{\varepsilon}$, where $\mu_{\infty}$ is the energy density measure of $f_0$. Choose $q_{k}$ as in Proposition \ref{zero} and $r_{k}$ as in Lemma \ref{out measure1}. We abuse the notation $q_{k}$ and $r_{k}$ to refer to the points $(q_{k},b_{k}),(r_{k},b_{k}) \in C_{k}$ in the regular chart, and mark them. 
Since $\lim_{k}q_{k} = \lim_{k}r_{k} = p$, by Lemma \ref{new family}, we have a new family of curves $\mathcal{C}'$ and a forgetful map $\Phi : \mathcal{C}' \rightarrow \mathcal{C}$ such that $C'_{k} = (C_{k},q_{k},r_{k}) \rightarrow C'_{0}$ in $\mathcal{C}'$, which proves (1). To see (2), we first describe a coordinate expression in the family $\mathcal{C}'$ near $E \simeq \mathbb{C}P^{1}$ which agrees with the coordinates of $\mathcal{C}$ near $p$ under the forgetful map $\Phi$. A more detailed description of these coordinates can be found in \cite{ACG}, Section 10.8. Equip a regular chart near $p$ with coordinates $(x,b) \in U \times B$, where $U \subset \mathbb{C}$ and $p = (0,0)$. $\Phi(\sigma), \Phi(\tau)$ are sections in $\mathcal{C}$ which meet at $p$. By adding $q$ to the base, we can see $\Phi(\sigma)$ as a marked section in this new family. Locally, we consider the chart $(x,q,b) \in U \times U \times B \rightarrow (q,b)$ and view $\Phi(\sigma)$ and $\Phi(\tau)$ as functions on $U \times B$ with values in $U$ given by $\Phi(\sigma)(q,b) = q$ and $\Phi(\tau)(q,b) = r$ for some functions $q,r$ on $B$. Note that for any fiber $C_{b}$ in $\mathcal{C}$, $\Phi^{-1}(C_{b})$ is a two-parameter subfamily of the family $\mathcal{C}'$, consisting of $(C_{b},q,r)$ and its limit case $q=r$, which is a $1$-dimensional subset of the nodal family whose fibers look like $C_{b} \cup \mathbb{C}P^{1}$, parametrized by $q$, which denotes the gluing position. Translate the coordinate from $x$ to $x' = x-q$ so that $\Phi(\sigma)$ is given by $\{x'=0\}$ and $\Phi(\tau)$ is given by $\{x' = t\}$ where $t=r-q$. Choose a homogeneous coordinate $[\lambda:\mu] \in \mathbb{C}P^{1}$. 
This gives local coordinates of $\mathcal{C}'$ near $E$, given by \begin{equation} \label{new chart1} ((x',t),q,b,[\lambda:\mu]) \in (U \times U) \times U \times B \times \mathbb{C}P^{1} \rightarrow (t,q,b) \in U \times U \times B \end{equation} with the equation \begin{equation} \label{eq in C'-1} x' \mu = t \lambda, \end{equation} and $\sigma$ and $\tau$ in $\mathcal{C}'$ can be written by the equations \begin{equation} \lambda = 0 \quad \textrm{ and } \quad \lambda = \mu \end{equation} respectively. Hence $\sigma$ and $\tau$ have coordinates $[0:1]$ and $[1:1]$ in $E$ respectively. Furthermore, by choosing a chart near the new node $p' = [1:0]$ in $E$ as $[\lambda : \mu] = [1:z'] \mapsto z'$ with $z' = \mu/\lambda$, Equation \eqref{eq in C'-1} can be written as \begin{equation} x' z' = t \end{equation} which is the same as a nodal chart. The point $(x-q,r-q,q,b,[\lambda:\mu])$ maps to $(x,b)$ under the forgetful map $\Phi$ and projects to $[\lambda:\mu] \in E$ under the local trivialization. Consider the chart $[z:1] \mapsto z$ away from $p'$. Note that $\Phi(C'_{k}) = C_{k}$ and $f'_{k} = f_{k} \circ \Phi : C'_{k} \rightarrow X$ is also a sequence of harmonic maps, and \begin{equation*} \tilde{f}_{k}(x) = f_{k}(x,b_{k}) = f'_{k}(x-q_{k},r_{k}-q_{k},q_{k},b_{k},[\lambda:\mu]) = \tilde{f}'_{k}(z) \end{equation*} with $z = \lambda/\mu =(x-q_{k})/(r_{k}-q_{k}) = R_{k}(x)$, so we have $\nu_{k} = (R_{k})_{*}(e(\tilde{f}_{k})) = e(\tilde{f}'_{k})$. Note that $\nu_{k}$ can be extended to the whole of $E$ by Lemma \ref{conv to C}. Since the choices of $q_{k}$ and $r_{k}$ come from Proposition \ref{zero} and Lemma \ref{out measure1} and the cross ratio is conformally invariant, Equations \eqref{mark-eq1} and \eqref{mark-eq2} follow. This proves (2). Now consider (3). 
By applying Lemma \ref{loc conv} again, after passing to a subsequence there is a finite set of bubble points $\{q_{1}, \ldots, q_{l'}\} \subset E \setminus \{p'\}$ such that $\nu_{k} \rightarrow e(f'_{0}) + \sum_{j}m_{j}\delta_{q_{j}}$ on $E \setminus \{p'\}$ with $m_{j} \geq \varepsilon'_{0}$. Here $f'_{0} : C'_{0} \rightarrow X$ is a limit of $f'_{k}$. By Equation \eqref{mark-eq1}, $q_{j} \in D$. Denote by $m_{\infty}$ the amount of energy concentration at $p'$; then we have \begin{equation} e(f'_{0})(E) + \sum_{j}m_{j} + m_{\infty} = m. \end{equation} For any compact set $K = \{[1:z'] : \lvert z' \rvert \geq \delta \} \subset \subset E \setminus \{p'\}$, define $B' = \{(x,b_{k}) \in C_{k} : x \in B_{k}\}$ and $K' = \{(x-q_{k},r_{k}-q_{k},q_{k},b_{k},[\lambda:\mu]) \in C'_{k} : [\lambda:\mu] \in K\}$. Then \begin{equation*} m_{\infty} \leq m - e(f'_{0})(K) - \sum_{j}m_{j} = \lim_{k \rightarrow \infty} \left(\mu_{k}(B_{k}) - \nu_{k}(K) \right) = \lim_{k \rightarrow \infty} E(f'_{k},\Phi^{-1}(B') \setminus K'). \end{equation*} We first show that, for a given $\delta>0$, $\Phi^{-1}(B') \setminus K' \subset B(p',\delta) \cap C'_{k}$ for all $k$ sufficiently large. Pick $u = (x-q_{k},r_{k}-q_{k},q_{k},b_{k},[\lambda:\mu]) \in \Phi^{-1}(B') \setminus K'$. Because $x,q_{k},r_{k} \in B_{k}$, we have $\lvert x-q_{k} \rvert, \lvert r_{k}-q_{k} \rvert, \lvert q_{k} \rvert, \lvert b_{k} \rvert \leq \delta$ for all $k$ sufficiently large. Moreover, $u \notin K'$ means $[\lambda:\mu] \notin K$, which implies $\lvert \mu/\lambda \rvert = \lvert z' \rvert < \delta$. So, $u \in B(p',\delta) \cap C'_{k}$ as desired. Next, we will show that for any $\varepsilon>0$, there is $\delta>0$ such that $E(f'_{k}, B(p',\delta) \cap C'_{k}) < \varepsilon$ for all $k$ sufficiently large. Fix $\varepsilon>0$ and assume $\delta<1$. By Equation \eqref{mark-eq1}, $E(f'_{k},\Phi^{-1}(B') \setminus K') \leq \bar{\varepsilon} \leq \varepsilon''_{0}/2$. 
Hence, for $\delta$ small enough, we have $E(f'_{k}, B(p',\delta) \cap C'_{k}) < \varepsilon''_{0}$. By definition $p'$ is a regular node, so by Lemma \ref{gap neck}, there is $\delta>0$ such that $\lim_{k \rightarrow \infty}E(f'_{k}, B(p',\delta) \cap C'_{k}) < \varepsilon$. So $m_{\infty} = 0$ and this proves the lemma. \end{proof} \begin{proof} (Proof of Lemma \ref{mark2}) As described before Lemma \ref{mark2}, we have energy density measures $\mu_{k}$ on $U = B_{\delta} \subset \mathbb{C}$ satisfying $\mu_{k} \rightarrow \mu_{\infty} + m\delta_{p}$ as measures and $m \geq 2\bar{\varepsilon}$, where $\mu_{\infty}$ is the energy density measure of $f_0$ on $U$. Choose $r_k$ as in Lemma \ref{out measure1} for $q=p$, which is the origin of $U$. Again we abuse the notation $r_k$ to refer to the point $(r_{k},t_{k}/r_{k},\tilde{b}_{k}) \in C_{k}$ in the nodal chart, and mark it. Here we need to check that $t_{k}/r_{k} \rightarrow 0$. Suppose that there is $c>0$ such that $\lvert t_{k}/r_{k} \rvert \geq c$ for all $k$. Since $\mu_{k} = 0$ on $B_{t_{k}/\delta}$ and $B_{c r_{k}/\delta} \subset B_{t_{k}/\delta}$, $\mu_{k} = 0$ on $B_{c r_{k}/\delta}$ for all $k$. Recall that $R_{k}(x) = CR_{p,r_{k}}(x)= x/r_{k}$. Choose $\delta$ small enough such that $R_{k}^{-1}(D) = B_{r_{k}} \subset B_{c r_{k}/\delta}$ for all $k$. Therefore $\mu_{k} = 0$ on $R_{k}^{-1}(D)$. Now the defining property of $r_{k}$ from Lemma \ref{out measure1} reads $\int_{B_{k} \setminus R_{k}^{-1}(D)} d\mu_{k} = \mu_{k}(B_{k}) = \bar{\varepsilon}$, which contradicts $\mu_{k}(B_{k}) \rightarrow m \geq 2\bar{\varepsilon}$. Therefore $t_{k}/r_{k} \rightarrow 0$. Since $\lim_{k}r_{k} = p$, by Lemma \ref{new family}, we have a new family of curves $\mathcal{C}'$ and a forgetful map $\Phi : \mathcal{C}' \rightarrow \mathcal{C}$ such that $C'_{k} = (C_{k},r_{k}) \rightarrow C'_{0}$ in $\mathcal{C}'$, which proves (1). 
To see (2), we first describe a coordinate expression in the family $\mathcal{C}'$ near $E \simeq \mathbb{C}P^{1}$ which agrees with the coordinates of $\mathcal{C}$ near the node $p$ under the forgetful map $\Phi$. For details, see \cite{ACG}, Section 10.8. Equip a nodal chart near $p$ with coordinates $(x,y,\tilde{b}) \in U_{1} \times U_{2} \times \tilde{B}$ such that $xy=t$, where $U_{i} \subset \mathbb{C}$ and $p = (0,0,0)$. The projection $\pi$ is locally given by $\pi(x,y,\tilde{b}) = (t,\tilde{b}) \in B$. $\Phi(\tau)$ is a section in $\mathcal{C}$ which passes through the node $p$. Using the nodal chart, we can see $\Phi(\tau)$ as a vector-valued function on $B$ given by $\Phi(\tau) = (r,r')$ for some functions $r,r'$ on $B$ such that $r r' = t$. Note that, for the nodal fiber $C_{0}$ in $\mathcal{C}$ with node $p$, $\Phi^{-1}(C_{0})$ is a one-parameter subfamily of the family $\mathcal{C}'$, consisting of $(C_{0},r)$ and its limit case where $r=p$, which looks like $C_{0} \cup \mathbb{C}P^{1}$. Choose a homogeneous coordinate $[\lambda:\mu] \in \mathbb{C}P^{1}$. This gives local coordinates of $\mathcal{C}'$ near $E$, given by \begin{equation}\label{new chart2} ((x,y,r,r'),\tilde{b},[\lambda:\mu]) \in (U_{1} \times U_{2} \times U_{1} \times U_{2}) \times \tilde{B} \times \mathbb{C}P^{1} \rightarrow (r,r',\tilde{b}) \in U_{1} \times U_{2} \times \tilde{B} \end{equation} with the equations \begin{equation} \label{eq in C'-2} \lambda r = \mu x \quad \textrm{ and } \quad \lambda y = \mu r' \end{equation} and $\tau$ in $\mathcal{C}'$ can be written by the equation \begin{equation} \lambda = \mu. \end{equation} So $\tau$ has coordinate $[1:1]$ in $E$. Now $E$ has two nodes, $p_{1} = [1:0]$ and $p_{2} = [0:1]$. Near $p_{1}$, choose a chart of $E$ given by $[1:z'] \mapsto z'$ with $z' = \mu/\lambda$. Then the first equation in \eqref{eq in C'-2} can be written as \begin{equation} x z' = r \end{equation} which is a nodal chart near $p_{1} = [1:0]$. 
On the other hand, near $p_{2}$, choose a chart of $E$ given by $[z:1] \mapsto z$ with $z = \lambda/\mu$. Then the second equation in \eqref{eq in C'-2} can be written as \begin{equation} y z = r' \end{equation} which is again a nodal chart near $p_{2} = [0:1]$. The point $(x,y,r,r',\tilde{b},[\lambda:\mu])$ maps to $(x,y,\tilde{b})$ under the forgetful map $\Phi$ and projects to $[\lambda:\mu] \in E$ under the local trivialization. Consider the chart $[z:1] \mapsto z$ away from both $p_{1},p_{2}$. Note that $\Phi(C'_{k}) = C_{k}$ and $f'_{k} = f_{k} \circ \Phi : C'_{k} \rightarrow X$ is also a sequence of harmonic maps, and \begin{equation*} (\pi_{1})^{*}f_{k}(x) = f_{k}(x,y,\tilde{b}_{k}) = f'_{k}(x,y,r_{k},t_{k}/r_{k},\tilde{b}_{k},[\lambda:\mu]) = \tilde{f}'_{k}(z) \end{equation*} with $z = \lambda/\mu = x/r_{k} = R_{k}(x)$, so we have $\nu_{k} = (R_{k})_{*}(\pi_{1})_{*}(e(f_{k})) = e(\tilde{f}'_{k})$. Note that $\nu_{k}$ can be extended to the whole of $E$ by Lemma \ref{conv to C}. Since the choice of $r_{k}$ comes from Lemma \ref{out measure1} and the cross ratio is conformally invariant, Equation \eqref{mark2-eq} follows. This proves (2). Now consider (3). By applying Lemma \ref{loc conv} again, after passing to a subsequence there is a finite set of bubble points $\{q_{1}, \ldots, q_{l'}\} \subset E \setminus \{p_{1}, p_{2}\}$ such that $\nu_{k} \rightarrow e(f'_{0}) + \sum_{j}m_{j}\delta_{q_{j}}$ on $E \setminus \{p_{1},p_{2}\}$ with $m_{j} \geq \varepsilon'_{0}$. Here $f'_{0} : C'_{0} \rightarrow X$ is a limit of $f'_{k}$. By Equation \eqref{mark2-eq}, $q_{j} \in D$ and $q_{j} \neq p_{2}$. Denote by $m_{0}$ and $m_{\infty}$ the amounts of energy concentration at $p_{2}$ and $p_{1}$, respectively. Then we have \begin{equation} e(f'_{0})(E) + \sum_{j}m_{j} + m_{0} + m_{\infty} = m. \end{equation} Since $p$ is a regular node, $p_{1}$ and $p_{2}$ are also regular by Lemma \ref{new node regular}. 
Hence $m_{0},m_{\infty}$ are either zero or at least $\varepsilon''_{0}$ by Lemma \ref{gap neck}. For any compact set $K \subset \subset E \setminus \{p_{1},p_{2}\}$, define $B' = \{(x,y,\tilde{b}_{k}) \in B(p,\delta) \cap C_{k} : xy=t_{k}, x \in B_{k}\}$ and $K' = \{(x,y,r_{k},t_{k}/r_{k},\tilde{b}_{k},[\lambda:\mu]) \in C'_{k} : [\lambda:\mu] \in K\}$. Then \begin{equation*} m_{0} + m_{\infty} \leq m - e(f'_{0})(K) - \sum_{j}m_{j} \leq \lim_{k \rightarrow \infty} \left(\mu_{k}(B_{k}) - \nu_{k}(K) \right) = \lim_{k \rightarrow \infty} E(f'_{k},\Phi^{-1}(B') \setminus K'). \end{equation*} We first show that, for any $\delta' < \delta$, \begin{align*} K_{1} &:= \{q \in \Phi^{-1}(B') \setminus K' : \lvert \mu/\lambda \rvert < \delta'\} \subset B(p_{1},\delta') \cap C'_{k},\\ K_{2} &:= \{q \in \Phi^{-1}(B') \setminus K' : \lvert \lambda/\mu \rvert < \delta'\} \subset B(p_{2},\delta) \cap C'_{k} \end{align*} for all $k$ sufficiently large. Let $u = (x,y,r_{k},t_{k}/r_{k},\tilde{b}_{k},[\lambda:\mu]) \in K_{1}$. Note that because $x,r_{k} \in B_{k}$, $\lvert x \rvert, \lvert r_{k} \rvert, \lvert \tilde{b}_{k} \rvert \leq \delta'$ for all $k$ sufficiently large. Moreover, $\lvert \mu/\lambda \rvert = \lvert z' \rvert <\delta'$ for all $k$ sufficiently large. Therefore $u \in B(p_{1},\delta') \cap C'_{k}$ as desired. On the other hand, let $u = (x,y,r_{k},t_{k}/r_{k},\tilde{b}_{k},[\lambda:\mu]) \in K_{2}$. As above, $\lvert t_{k}/r_{k} \rvert, \lvert \tilde{b}_{k} \rvert \leq \delta'$ for all $k$ sufficiently large. We also have $\lvert t_{k}/x \rvert = \lvert y \rvert \leq \delta$. Moreover, $\lvert \lambda/\mu \rvert = \lvert z \rvert < \delta'$ for all $k$ sufficiently large. Therefore $u \in B(p_{2},\delta) \cap C'_{k}$ as desired. Next, we will show that for any $\varepsilon>0$, there is $\delta'$ such that $E(f'_{k}, B(p_{1},\delta') \cap C'_{k}) \leq \varepsilon$ for all $k$ sufficiently large. Fix $\varepsilon>0$ and assume $\delta' < 1$. 
By Equation \eqref{mark2-eq}, $E(f'_{k},K_{1}) \leq \bar{\varepsilon} \leq \varepsilon''_{0}/2$. Hence, for $\delta'$ small enough, we have $E(f'_{k}, B(p_{1},\delta') \cap C'_{k}) < \varepsilon''_{0}$. Therefore, by Lemma \ref{gap neck}, there is $\delta'>0$ such that $\lim_{k \rightarrow \infty}E(f'_{k}, B(p_{1},\delta') \cap C'_{k}) < \varepsilon$. This shows $m_{\infty} \leq \lim_{k \rightarrow \infty}E(f'_{k},K_{1}) \leq \lim_{k \rightarrow \infty}E(f'_{k}, B(p_{1},\delta') \cap C'_{k}) < \varepsilon$ for any $\varepsilon>0$. So $m_{\infty} = 0$ and this proves the lemma. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} \label{Section:intro} Solar activity plays a significant role in influencing the interplanetary medium and space weather around Earth and all the other planets of the solar system \citep{Schwenn2006}. Remote-sensing instruments on-board heliophysics missions can provide a wealth of information on the Sun's activity, primarily via capturing the emission of light from the multi-layered solar atmosphere, thereby leading to the inference of various physical quantities such as magnetic fields, plasma velocities, temperature and emission measure, to name a few. \begin{figure*} \includegraphics[width=\textwidth]{Channels_Horizontal.pdf} \caption{Set of images to exemplify how degradation affects the AIA channels. The two sets are composed of seven images from different EUV channels. From left to right: AIA $94$~\AA, AIA $131$~\AA, AIA $304$~\AA, AIA $335$~\AA, AIA $171$~\AA, AIA $193$~\AA, and AIA $211$~\AA. The top row corresponds to images from May $13^{th}$, $2010$ and the bottom row shows images from August $31^{st}$, $2019$, with no degradation correction. The $304$~\AA~channel images are in log-scale due to the severe degradation.} \label{fig:autocalibrate_model_problem} \end{figure*} NASA currently manages the Heliophysics System Observatory (HSO), which consists of a group of satellites that constantly monitor the Sun, its extended atmosphere, space environments around Earth and other planets of the solar system \citep{HSO}. One of the flagship missions of HSO is the Solar Dynamics Observatory \citep[SDO, ][]{SDO_primary}. Launched in $2010$, SDO has been instrumental in monitoring the Sun's activity and providing a high volume of valuable scientific data every day with a high temporal and spatial resolution.
It has three instruments onboard: the Atmospheric Imaging Assembly \citep[AIA,][]{AIA}, which records high spatial and temporal resolution images of the Sun in the ultraviolet (UV) and extreme UV (EUV); the Helioseismic \& Magnetic Imager \citep[HMI,][]{HMI}, that provides maps of the photospheric magnetic field, solar surface velocity, and continuum filtergrams; and the EUV Variability Experiment \citep[EVE,][]{EVE}, which measures the solar EUV spectral irradiance. Over the past decade, SDO has played a central role in advancing our understanding of the fundamental plasma processes governing the Sun and space weather. This success can mainly be attributed to its open-data policy and a consistent high data-rate of approximately two terabytes of scientific data per day. The large volume of data accumulated over the past decade (over 12 petabytes) provides a fertile ground to develop and apply novel machine learning (ML) based data processing methods. Recent studies, such as predicting solar flares from HMI vector magnetic fields \citep{Bobra_2015}, creating high-fidelity virtual observations of the solar corona \citep[\citealt{salvatelli2019} \&][]{Cheung2019}, forecasting far side magnetograms from the Solar Terrestrial Relations Observatory \citep[STEREO, ][]{Kaiser_2008} EUV images \citep{Kim_NatAs_2019}, super-resolution of magnetograms \citep{jungbluth-2019-super}, and mapping EUV images from AIA to spectral irradiance measurements \citep{Szenicereaaw6548}, have demonstrated the immense potential of ML applications in solar and heliophysics. In this paper, we leverage the availability of such high-quality continuous observations from SDO and apply ML techniques to address the instrument calibration problem. One of the crucial issues that limit the diagnostic capabilities of the SDO-AIA mission is the degradation of sensitivity over time.
Sample images from the seven AIA EUV channels in Fig.~\ref{fig:autocalibrate_model_problem} show an example of such a deterioration. The top row shows the images observed during the early days of the mission, from 13 May 2010, and the bottom row shows the corresponding images observed more recently on 31 August 2019, scaled within the same intensity range. It is clear that the images in the bottom row appear to be significantly dimmer compared to their top row counterparts. In some channels, especially $304$~\AA\ and $335$~\AA\, the effect is pronounced. The dimming effect observed among the channels is due to the temporal degradation of EUV instruments in space, which is known to diminish the overall instrument sensitivity with time~\citep[e.g.,][]{BenMoussa_etal_2013}. The possible causes include either the out-gassing of organic materials in the telescope structure, which may deposit on the optical elements \citep{Jiao_2019}, or the decrease in detector sensitivity due to exposure to EUV radiation from the Sun. In general, first-principle models predicting the sensitivity degradation as functions of time and wavelength are not sufficiently well-constrained for maintaining the scientific calibration of such instruments. To circumvent this problem, instrument scientists have traditionally relied on empirical techniques, such as considering sources with known fluxes, the so-called "standard candles." However, no standard candles exist in the solar atmosphere at these wavelengths since the solar corona is continuously driven and structured by evolving magnetic fields, which cause localized and intermittent heating. This causes even the quiet Sun brightness in the EUV channels to vary significantly depending on the configuration of the small-scale magnetic fields \citep[][and the references therein]{2015A&A...581A..51S}. On the one hand, the Sun may not be bright enough to appear in the hotter EUV channels such as AIA 94~\AA.
On the other hand, active regions (ARs) have EUV fluxes that can vary by several orders of magnitude depending on whether it is in an emerging, flaring, or decaying state. Moreover, the brightness depends on the complexity of the AR's magnetic field \citep{2015LRSP...12....1V}. Finally, ARs in the solar corona can evolve on time scales ranging from a few minutes to several hours, leading to obvious difficulties in obtaining a standard flux for the purpose of calibration. Current state-of-the-art methods to compensate for this degradation rely on cross-calibration between AIA and EVE instruments. The calibrated measurement of the full-disk solar spectral irradiance from EVE is passed through the AIA wavelength (filter) response function to predict the integrated AIA signal over the full field of view. Later, the predicted band irradiance is compared with the actual AIA observations \citep{Boerner2013}. The absolute calibration of SDO-EVE is maintained through periodic sounding rocket experiments \citep{EVE_rocket} that use a near-replica of the instrument on-board SDO to gather a calibrated observation spanning the short interval of the suborbital flight (lasting a few minutes). A comparison of the sounding rocket observation with the satellite instrument observation provides an updated calibration, revealing long-term trends in the sensitivities of EVE and, thus, of AIA. Sounding rockets are undoubtedly crucial; however, the sparse temporal coverage (there are flights roughly every two years) and the complexities of inter-calibration are also potential sources of uncertainty in the inter-instrument calibration. 
Moreover, the inter-calibration analysis has long latencies, of months and sometimes years, between a flight and when the calibration can be updated, owing to the analysis of the data obtained during the flight. In addition, this kind of calibration is limited to observations from Earth and thus cannot easily be used to calibrate missions in deep space (e.g., STEREO). In this paper, we focus on automating the correction of the sensitivity degradation of different AIA wavebands by using AIA information exclusively and adopting a deep neural network \citep[DNN, ][]{goodfellow2016deep} approach, which exploits the spatial patterns and cross-spectral correlations among the observed solar features in multi-wavelength observations of AIA. We compare our approach with a non-ML method motivated by solar physics heuristics, which we call the baseline model. We evaluate the predicted degradation curves with the ones obtained through the sounding rocket cross-calibration described above. To the best of our knowledge, this is the first attempt to develop a calibration method of this kind.\footnote{We presented an early-stage result of this work as an extended abstract at the NeurIPS workshop on ML and Physical Sciences 2019 (which has no formal proceedings) \citep[NeurIPS 2019, ][]{neuberg2019} where we described some preliminary results in this direction. In this paper, we extend the abstract with full analyses and discussion of several important issues, such as the performance on the real degradation curve and the limitations of the presented models, that are both crucial to evaluate the applicability of this ML-based technique.} We believe that the approach developed in this work could potentially remove a major impediment to developing future HSO missions that can deliver solar observations from different vantage points beyond Earth's orbit. The paper is structured as follows: in Section \ref{Section:data}, we present and describe our dataset.
In Section \ref{section:methodology} we illustrate the technique and how it has been developed. Namely, in \S~\ref{section:formulation} we state the hypothesis and propose a formulation of the problem, in \S~\ref{section:convolutional} we present the CNN models, in \S~\ref{Section:Analysis} we describe the training process and the evaluation, in \S~\ref{section:inter_channel} we probe the multi-channel relationship and in \S~\ref{section:model-benchmark-understanding} we reconstruct the temporal degradation curve. Furthermore, in Section \ref{section:baseline} we present the baseline, followed by Section \ref{Section:Results} where we present and discuss the results. The concluding remarks are in Section \ref{section:summary}. \section{Data description and pre-processing} \label{Section:data} We use for this study the pre-processed SDO-AIA dataset from \citet[][hereafter referred to as SDOML]{SDOML}. This dataset is ML-ready to be used for any kind of application related to the AIA and HMI data, and it consists of a subset of the original SDO data ranging from $2010$ to $2018$. It comprises the $7$~EUV channels, $2$~UV channels from AIA, and vector magnetograms from HMI. The data from the two SDO instruments are temporally aligned, with cadences of $6$ minutes for AIA (instead of the original $12$ seconds) and EVE and $12$ minutes for HMI. The full-disk images are downsampled from $4096 \times 4096$ to $512 \times 512$ pixels and have an identical spatial sampling of $\thicksim$ $4\farcs8$ per pixel. 
In SDOML, the AIA images have been compensated for the exposure time and corrected for instrumental degradation over time using piecewise-linear fits to the V8 corrections released by the AIA team in November 2017.\footnote{Available at \url{https://aiapy.readthedocs.io/en/stable/generated/gallery/instrument\_degradation.html\#sphx-glr-generated-gallery-instrument-degradation-py}} These corrections are based on cross-calibration with SDO-EVE, where the EVE calibration is maintained by periodic sounding rocket underflights (including, in the case of the V8 corrections, a flight on 1 June 2016). Consequently, the resulting dataset offers images where changes in pixel brightness are directly related to the state of the Sun rather than instrument performance. In this paper, we applied a few additional pre-processing steps. First, we downsampled the SDOML dataset to $256\times256$ pixels from $512\times512$ pixels. We established that $256\times256$ is a sufficient resolution for the predictive task of interest (inference of a single coefficient), and the reduced size enabled quicker processing and more efficient use of the computational resources. Secondly, we masked the off-limb signal ($r>R_\odot$) to avoid possible contamination due to the telescope vignetting. Finally, we re-scaled the brightness intensity of each AIA channel by dividing the image intensity by a channel-wise constant factor. These factors represent the approximate average AIA data counts in each channel and across the period from 2011 to 2018 \citep[derived from][]{SDOML}, and this re-scaling is implemented to set the mean pixel values close to unity in order to improve the numerical stability and the training convergence of the CNN. Data normalization such as this is standard practice in NNs \citep{goodfellow2016deep}. The specific values for each channel are reported in Appendix~\ref{section:appendix_average}.
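The pre-processing steps above (downsampling, off-limb masking, channel-wise re-scaling) can be sketched as follows. This is a minimal illustration, not the released pipeline code: the downsampling method, the mask convention, and the scaling constants in \texttt{CHANNEL\_SCALE} are placeholder assumptions (the actual per-channel values are in the paper's appendix).

```python
import numpy as np

# Placeholder per-channel scaling constants, keyed by wavelength in
# Angstroms; illustrative values only, not the published ones.
CHANNEL_SCALE = {94: 0.5, 131: 1.0, 171: 150.0}

def preprocess(img, wavelength, radius_px=100.0):
    """Downsample a 512x512 image to 256x256 by 2x2 block averaging,
    zero out off-limb pixels (r > R_sun, given here in pixels of the
    downsampled frame), and divide by a channel-wise constant."""
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    yy, xx = np.mgrid[0:h // 2, 0:w // 2]
    cy, cx = (h // 2 - 1) / 2.0, (w // 2 - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    small[r > radius_px] = 0.0  # mask the off-limb signal
    return small / CHANNEL_SCALE[wavelength]
```
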
\section{Methodology} \label{section:methodology} \subsection{Formulation of the problem} \label{section:formulation} It is known that some bright structures in the Sun are observed across different wavelengths. Figure \ref{fig:morphology} shows a good example from $7$ April $2015$ of a bright structure in the center of all seven EUV channels from AIA. Based on this cross-channel structure, we establish a hypothesis divided into two parts. The first part is that there is a relationship between the morphological features and the brightness of solar structures in a single channel (e.g., typically, dense and hot loops over ARs). The second is that such a relationship between the morphological features and the brightness of solar structures can be found across multiple channels of AIA. We hypothesize that both these relationships can be used to estimate the dimming factors and that a deep learning model can automatically learn these inter- and cross-channel patterns and exploit them for accurately predicting the dimming factor of each channel. \begin{figure*} \includegraphics[width=\textwidth]{Morphology_AIA.pdf} \caption{A co-located set of images of the seven EUV channels of AIA to exemplify structures that are observed across different wavelengths. From left to right: AIA $94$~\AA, AIA $131$~\AA, AIA $304$~\AA, AIA $335$~\AA, AIA $171$~\AA, AIA $193$~\AA, and AIA $211$~\AA.} \label{fig:morphology} \end{figure*} To test our hypothesis, we consider a vector $\vec{C} = \{C_i, i\in [1,...,n]\}$ of multi-channel, synchronous SDO/AIA images, where $C_i$ denotes the $i$-th channel image in the vector, and a vector $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$, where $\alpha_i$ is the dimming factor independently sampled from the continuous uniform distribution on $[0.01, 1.0]$. We choose an upper bound value of $\alpha_i = 1$, since we only consider dimming of the images and not enhancements.
Further, we create the corresponding vector of dimmed images $\vec{D} = \{\alpha_i C_i, i\in [1,...,n]\}$. It is also to be noted that the dimming factors $\alpha_i$ are applied uniformly per channel and are not spatially dependent. The spatial dependence of the degradation is assumed to be accounted for by regularly updated flat-field corrections applied to AIA images. Our goal in this paper is to find a deep learning model $M: \vec{D} \rightarrow \vec{\alpha}$ that retrieves the vector of multi-channel dimming factors $\vec{\alpha}$ from the observed SDO-AIA vector $\vec{D}$. \subsection{Convolutional Neural Network Model} \label{section:convolutional} Deep learning is a very active sub-field of machine learning that focuses on specific models called deep neural networks (DNNs). A DNN is a composition of multiple layers of linear transformations and non-linear element-wise functions \citep{goodfellow2016deep}. One of the main advantages of deep learning is that it can learn from the data the best feature representation for a given task without the need to manually engineer such features. DNNs have produced state-of-the-art results in many complex tasks, including object detection in images \citep{he2016deep}, speech recognition \citep{amodei2016deep} and synthesis \citep{oord2016wavenet}, and translation between languages \citep{wu2016google}. A DNN expresses a differentiable function $F_{\vec\theta}: \mathcal{X} \to \mathcal{Y}$ that can be trained to perform complex non-linear transformations by tuning parameters $\vec{\theta}$ using gradient-based optimization of a loss (also known as objective or error) function $L(\vec{\theta}) = \sum_i l(F_{\vec\theta}(\vec{x}_i), \vec{y}_i)$ for a given set of inputs and desired outputs $\{\vec{x}_i, \vec{y}_i\}$. For the degradation problem summarized in Section~\ref{section:formulation}, we consider two CNN architectures \citep{lecun1995convolutional}.
The first architecture does not exploit the spatial dependence across multi-channel AIA images, therefore ignoring any possible relationship that different AIA channels might have, and it is designed to explore only the relationship across different structures in a single channel. This architecture is a test for the first hypothesis in Section~\ref{section:formulation}. The second architecture is instead designed to exploit possible cross-channel relationships while training, and it tests our second hypothesis, that solar surface features appearing across the different channels will make a multi-channel CNN architecture more effective than a single-channel CNN that only exploits inter-channel structure correlations. The first model considers a single channel as input in the form of a tensor with shape $1\times256\times256$ and has a single degradation factor $\alpha$ as output. The second model takes in multiple AIA channel images simultaneously as an input with shape $n\times256\times256$ and outputs $n$ degradation factors $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$, where $n$ is the number of channels as indicated in Fig.~\ref{fig:autocalibrate_CNN_arch}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{my_arch_one.png} \includegraphics[width=0.5\textwidth]{my_arch_multi.png} \caption{The CNN architectures used in this paper. At the top, the single-channel architecture, with a single-wavelength input, composed of two blocks of a convolutional layer, ReLU activation function and max pooling layer, followed by a fully connected (FC) layer and a final sigmoid activation function. At the bottom, the multi-channel architecture, with a multi-wavelength input, composed of two blocks of a convolutional layer, ReLU activation function and max pooling layer, followed by a fully connected (FC) layer and a final sigmoid activation function.
Figures constructed with \cite{haris_iqbal_2018_2526396}.} \label{fig:autocalibrate_CNN_arch} \end{figure} The single- and multi-channel architectures are described in Fig.~\ref{fig:autocalibrate_CNN_arch}. They both consist of two blocks of a convolutional layer followed by a ReLU (rectified linear unit) activation function \citep{Nair:2010:RLU:3104322.3104425} and a max pooling layer. These are followed by a fully connected (FC) layer and a final sigmoid activation function that is used to output the dimming factors. The first convolution block has 64 filters, while the second convolution block has 128. In both convolution layers, the kernel size is $3$, meaning the filters applied on the image are $3\times3$ pixels, and the stride is $1$, meaning that the kernel slides through the image 1 pixel per step. No padding is applied, i.e., no additional pixels are added at the border of the image, so each convolution slightly reduces the image size. The resulting total learnable parameters (LP) are $167,809$ for the single-channel model and $731,143$ for the multi-channel model. The final configurations of the models' architectures were obtained through a grid search among different hyperparameters and layer configurations. More details of the architectures can be found in Appendix \ref{section:appendix_archtectures}. We use the open-source software library PyTorch \citep{paszke_2017} to implement the training and inference code for the CNN. The source code to produce this paper is publicly available at \cite{autocal_code} and \url{https://github.com/vale-salvatelli/sdo-autocal_pub}. \subsection{Training Process} \label{Section:Analysis} The actual degradation factors $\alpha_i(t)$ (where $t$ is the time since the beginning of the SDO mission, and $i$ is the channel) trace a single trajectory in an $n$-dimensional space starting with $\alpha_i(t=0)=1$ $\forall$ $i\in[1,...,n]$ at the beginning of the mission. During training, we intentionally exclude this time-dependence from the model.
This is done by ($1$) using the SDOML dataset, which has already been corrected for degradation effects, ($2$) not assuming any relation between $t$ and $\vec{\alpha}$ and not using $t$ as an input feature, and ($3$) temporally shuffling the data used for training. As presented in Section \ref{section:formulation}, we degrade each set of multi-channel images~$\vec{C}$ by a unique $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$. We then devised a strategy such that from one training epoch to the next, the same set of multi-channel images can be dimmed by a completely independent set of $\vec{\alpha}$ dimming factors. This is a data augmentation and regularization procedure that allows the model to generalize and perform well in recovering dimming factors over a wide range of solar conditions. The training set comprises multi-channel images~$\vec{C}$ from the months of January to July from $2010$ to $2013$, sampled every six hours, amounting to a total of $18,970$ images in $2,710$ timestamps. The model was trained using 64 samples per minibatch, and the training has been performed for $1,000$ epochs. In minibatch training, we do not use the full dataset to compute the gradient and update the network's parameters/weights; instead, we compute the gradient and update the weights as the model is still going through the data. This procedure lowers the computational cost while still providing sufficiently accurate gradient estimates. As a consequence of our data augmentation strategy, after $1000$ epochs the model has been trained with $2,710,000$ unique sets of (input, output) pairs since we used a different set of $\vec{\alpha}$ each epoch. We used the Adam optimizer \citep{Optimizer} in our training with an initial learning rate of $0.001$, and the mean squared error (MSE) between the predicted degradation factor ($\alpha_P$) and the ground truth value ($\alpha_{GT}$) was used as the training objective (loss).
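The per-epoch augmentation and the MSE objective described above can be sketched as follows. This is an illustrative reconstruction, not the released training code: model, optimizer, and batching details are omitted, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def augmented_epochs(C, n_epochs):
    """Yield (dimmed images, target alphas) for each epoch.

    C has shape (n_channels, H, W). The dimming factors
    alpha_i ~ U(0.01, 1.0) are re-sampled afresh every epoch, so the
    same images produce new (input, target) pairs on every pass."""
    n = C.shape[0]
    for _ in range(n_epochs):
        alpha = rng.uniform(0.01, 1.0, size=n)
        yield alpha[:, None, None] * C, alpha

def mse(alpha_pred, alpha_gt):
    """Training objective: mean squared error between predicted and
    ground-truth dimming factors."""
    return float(np.mean((alpha_pred - alpha_gt) ** 2))
```

With $2,710$ timestamps and $1,000$ epochs, this resampling yields the $2,710,000$ unique (input, output) pairs quoted in the text.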
The test dataset, i.e., the sample of data used to provide an unbiased evaluation of a model fit on the training dataset, holds images obtained during the months of August to October between $2010$ and $2013$, again every six hours per day, totaling $9,422$ images over $1,346$ timestamps. The split by month between the training and test data has a two-fold objective: ($1$) it prevents the bias due to the variation in the solar cycle, thereby allowing the model to be deployed in future deep space missions forecasting $\alpha$ for future time steps, and ($2$) it ensures that the same image is never present in both the datasets (any two images adjacent in time will approximately be the same), leading to a more precise and comprehensive evaluation metric. \subsection{Toy Model Formulation to Probe the Multi-Channel Relationship} \label{section:inter_channel} Using the described CNN model, we tested the hypothesis on a toy dataset, which is simpler than the SDOML dataset. We tested if the physical relationship between the morphology and brightness of solar structures (e.g., ARs, coronal holes) across multiple AIA channels would help the model prediction. For this purpose, we created artificial solar images, in which a $2$D Gaussian profile is used (Equation \ref{E-relationship}) to mimic the Sun as an idealized bright disk with some center-to-limb variation: \begin{equation} \label{E-relationship} C_i(x,y) = A_i \exp{(-[x^2+y^2]{\sigma^{-2}})}, \end{equation} \noindent where $A_i$ is the amplitude, $\sigma$ is the characteristic width, and $x$ and $y$ are the image coordinates centered at $(0,0)$. $\sigma$ is sampled from a uniform distribution between $0$ and $1$. These images are not meant to be a realistic representation of the Sun. However, as formulated in Eq.~\ref{E-relationship}, they include two qualities we posit to be essential for allowing our auto-calibration approach to be effective.
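A minimal sketch of generating one such artificial multi-channel image from Eq.~\ref{E-relationship} is given below. The specific relations used here, a brightness--size relation $A_0 \propto \sigma$ and the cross-channel relation $A_i = A_0^{i}$, correspond to one of the configurations tested in the toy experiments; the coordinate range, image size, and lower bound on $\sigma$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_sun(n_channels=7, size=64):
    """Generate one toy multi-channel image following
    C_i(x, y) = A_i * exp(-(x^2 + y^2) / sigma^2),
    with A_0 proportional to sigma (brightness-size relation) and
    A_i = A_0 ** i (cross-channel relation). Returns (channels, sigma)."""
    sigma = rng.uniform(0.1, 1.0)   # lower bound avoids a degenerate profile
    A0 = sigma                      # linear brightness-size relation
    x = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    r2 = xx ** 2 + yy ** 2
    channels = [A0 ** i * np.exp(-r2 / sigma ** 2)
                for i in range(1, n_channels + 1)]
    return np.stack(channels), sigma
```

Breaking either relation (sampling $A_i$ or $\sigma$ at random instead) produces the degraded configurations compared in Table~\ref{table:toy_problem_metrics}.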
The first is the correlation of intensities across wavelength channels (i.e., ARs tend to be bright in multiple channels). The second is the existence of a relationship between the spatial morphology of EUV structures and their brightness. This toy dataset is designed so that we can independently test how the presence of (a) a relation between brightness $A_i$ and size $\sigma$, (b) a relation between the $A_i$ of the various channels, or both (a) and (b), influences performance. To evaluate this test, we will use the MSE loss and expect the presence of both (a) and (b) to minimize this loss. The test result of the multi-channel model with artificial solar images is shown in Table~\ref{table:toy_problem_metrics}. We can see that when $A_0 \propto \sigma$ (linear relation between size and brightness) and $A_i = A_0^i$ (i.e., dependence across channels; here $i$ superscript denotes $A_0$ raised to the $i$-th power), the CNN solution delivered minimal MSE loss (top-left cell). Eliminating the inter-channel relationship (i.e., each $A_i$ was randomly chosen) or the relation between brightness $A_i$ and size $\sigma$, the performance suffered, increasing the MSE loss. Ultimately, when both $A_i$ and $\sigma_i$ were randomly sampled for all channels, the model performed equivalently to random guessing/regressing (bottom-right cell), with the greatest loss of all tests. These experiments confirm our hypothesis and indicate that a multi-channel input solution will outperform a single-channel input model in the presence of relationships between the morphology of solar structures and their brightness across the channels. \begin{table} \centering \caption{The mean squared error (MSE) for all combinations proposed in Section \ref{section:inter_channel}. The top-left cell is for the scenario when there exists a cross-channel correlation and a relation between brightness and size of the artificial Sun.
The top-right cell shows the loss with a cross-channel correlation but without the relation between brightness and size. The bottom-left cell shows the loss when there is no cross-channel correlation but there is a relation between brightness and size. The bottom-right cell presents the loss when both parameters are randomly chosen.} \label{table:toy_problem_metrics} \begin{tabular}{|cc|c|l|} \hline & & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Brightness and size\\ correlation\end{tabular}} \\ \cline{3-4} & & Yes & No \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cross-channel\\ correlation\end{tabular}}} & Yes & 0.017 & 0.023 \\ \cline{2-4} \multicolumn{1}{|c|}{} & No & 0.027 & 0.065 \\ \hline \end{tabular} \end{table} \subsection{Reconstruction of the Degradation Curve using the CNN Models} \label{section:model-benchmark-understanding} In order to evaluate the model on a dataset different from the one used in the training process, we use both single-channel and multi-channel CNN architectures to recover the instrumental degradation over the entire period of SDO (from $2010$ to $2020$). To produce the degradation curve for both CNN models, we use a dataset equivalent to the SDOML dataset but without compensating the images for degradation\footnote{The SDOML dataset not corrected for degradation over time is available at \url{https://zenodo.org/record/4430801\#.X_xuPOlKhmE} } \citep{SDOML_degraded}, with data from 2010 to 2020. All other pre-processing steps, including masking the solar limb, re-scaling the intensity, etc., remain unchanged. The CNN's estimates of degradation are then compared to the degradation estimates obtained from cross-calibration with irradiance measurements, computed by the AIA team using the technique described in \citet{Boerner2013}.
The cross-calibration degradation curve relies on the daily ratio of the AIA observed signal to the AIA signal predicted by SDO-EVE measurements up through the end of EVE MEGS-A operations in May $2014$. From May $2014$ onwards, the ratio is computed using the FISM model \citep{Chamberlin2020} in place of the EVE spectra. FISM is tuned to SDO-EVE, so the degradation derived from FISM agrees with the degradation derived from EVE through $2014$. However, the uncertainty in the correction derived from FISM is greater than that derived from EVE observations, primarily due to the reduced spectral resolution and fidelity of FISM compared to SDO-EVE. While the EVE-to-AIA cross-calibration introduced errors of approximately $4\%$ (on top of the calibration uncertainty intrinsic to EVE itself), the FISM-to-AIA cross-calibration has errors as large as $25\%$. We examined both $V8$ and $V9$ of the cross-calibration degradation curve. The major change from $V8$ calibration (released in November $2017$, with linear extrapolations extending the observed trend after this date) to $V9$ (July $2020$) is based on the analysis of the EVE calibration sounding rocket flown on $18$ June $2018$. The analysis of this rocket flight resulted in an adjustment in the trend of all channels during the interval covered by the FISM model (from May $2014$ onwards), as well as a $20\%$ shift in the $171$~\AA\ channel normalization early in the mission. These changes become clearer when looking at Fig.~\ref{fig:degradation_curve} in Sec.~\ref{Section:Results}. The uncertainty of the degradation correction during the period prior to May $2014$, and on the date of the most recent EVE rocket flight, is dominated by the $\sim10\%$ uncertainty of the EVE measurements themselves. For periods outside of this (particularly periods after the most recent rocket flight), the uncertainty is a combination of the rocket uncertainty and the errors in FISM in the AIA bands (approximately $25\%$).
Moreover, we obtain and briefly analyze the feature maps from the second max pooling layer of the multi-channel model. A feature map is simply the output of one mathematical filter applied to the input. Looking at the feature maps, we expand our understanding of the model operation. This process helps shed light on the image processing and provides insight into the internal representations combining and transforming information from seven different EUV channels into the seven dimming factors. \section{Baseline Model} \label{section:baseline} We compare our DNN approach to a baseline motivated by the assumption that the EUV intensity outside magnetic ARs, i.e., the quiet Sun, is invariant in time \citep[a similar approach is also considered for the in-flight calibration of some UV instruments, e.g.][]{Schule1998}. A similar assumption in measuring the instrument sensitivity of the Solar \& Heliospheric Observatory \citep[SOHO, ][]{soho} CDS was also adopted by \citet{2010A&A...518A..49D}, where they assumed that the irradiance variation in the EUV wavelengths is mainly due to the presence of ARs on the solar surface and the mean irradiance of the quiet Sun is essentially constant over the solar cycle. Though there is evidence of small-scale variations in the intensity of the quiet Sun when observed in the transition region \citep{2015A&A...581A..51S}, their contribution is insignificant in comparison to their AR counterparts. We use this idea for our baseline model as described in this section. \begin{figure}[h] \centering \includegraphics[height=3.3in]{hist_demo.pdf} \caption{Histograms of the pixel values for the $304$~\AA~channel. In blue, the histogram for the reference image and, in red, the histogram for the dimmed image. The y-axis is the number of pixels, and the x-axis is the pixel intensity [$DN/px/s$].
The modes are marked with blue and red lines for the reference and dimmed images, respectively.} \label{fig:baseline_histogram} \end{figure} It is important to remark that we use exactly the same data pre-processing and splitting approach as the one used for the neural network model described in Sect.~\ref{Section:Analysis}. From the processed dataset, a set of reference images per channel, ${\vec{C}_{\rm ref}}$, is selected at time $t=t_{\rm ref}$. Since the level of solar activity continuously evolves in time, we only select the regions of the Sun that correspond to low activity, as discussed in the preceding paragraph. Furthermore, the activity level is decided based on co-aligned (with AIA) magnetic field maps from HMI. To define these regions, we first make a square selection with a diagonal of length $2R_\odot$ centered at $R=0$ of the solar images so as to avoid LOS projection effects towards the limb. We then apply an absolute global threshold value of 5 Mx~cm$^{-2}$ on the co-aligned HMI LOS magnetic field maps corresponding to $t=t_{\rm ref}$, such that only those pixels that have B$_{\mathrm{LOS}}$ less than the threshold are extracted, resulting in a binary mask with 1 corresponding to the pixels of interest and 0 elsewhere. This minimum chosen value of the magnetic flux density is close to the noise level of the HMI\_720s magnetograms \citep{2012SoPh..279..295L,2018ApJ...862...35B}. Finally, we use this mask to extract the co-spatial quiet Sun (less active) pixels from each AIA channel and compute the respective 1D histograms of the intensity values as shown in Fig.~\ref{fig:baseline_histogram}. Now, based on the assumption that the intensity of the quiet Sun area does not change significantly over time (as discussed in the preceding section), we choose to artificially dim these regions by multiplying them with a constant random factor between 0 and 1. Naturally, values close to 0 will make the images progressively dimmer.
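The masking and dimming steps above can be sketched as follows; synthetic arrays stand in for the HMI magnetogram and the AIA channel image, and the 5 Mx~cm$^{-2}$ threshold is the value quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
b_los = rng.normal(0.0, 20.0, size=(64, 64))             # mock HMI LOS magnetogram
aia = rng.lognormal(mean=2.0, sigma=0.5, size=(64, 64))  # mock AIA channel image

# Binary quiet-Sun mask: 1 where |B_LOS| is below the 5 Mx/cm^2 threshold.
quiet_mask = np.abs(b_los) < 5.0
quiet_pixels = aia[quiet_mask]            # co-spatial quiet-Sun intensities

# Artificially dim the quiet-Sun pixels by a random factor in (0, 1).
factor = rng.uniform(0.0, 1.0)
dimmed_pixels = quiet_pixels * factor
```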
The histograms for the dimmed and the original (undimmed) quiet Sun intensities for the AIA~304~\AA\ channel are shown in Fig.~\ref{fig:baseline_histogram}. The idea is to develop a non-machine learning approach that could be used to retrieve this dimming factor. From Fig.~\ref{fig:baseline_histogram} we find that both the dimmed and undimmed 1D histograms have a skewed shape, with a dominant peak at lower intensities and extended tails at higher intensities. Such a skewed distribution of the quiet Sun intensities has been reported by various studies in the past \citep[see][]{2015A&A...581A..51S}, where it has been modeled as either a sum of two Gaussians \citep{1976RSPTA.281..319R} or a single log-normal distribution \citep{1999ApJ...512..992G,2007A&A...468..695F}. Despite the increased number of free parameters in double-Gaussian fitting, \citet{2000A&A...362..737P} showed that the observed quiet Sun intensity distribution could be fitted significantly better with a single log-normal distribution. The skewed shape, such as the one shown for the 304~\AA\ channel, was also observed for all the other EUV channels, indicating that the criterion for masking the quiet Sun pixels described here is justified. We then compute the mode (most probable value) of both the undimmed and dimmed distributions and denote them by $I_{i,{\rm ref}}^{\rm mp}$ (where \textit{i} denotes the AIA channel under consideration and \textit{mp} stands for the modal value of the undimmed images) and $I^{\rm mp}_{i}$, the modal intensity value of the corresponding images dimmed by a dimming factor (say $\alpha_i$). These are indicated by blue and red vertical lines in Fig.~\ref{fig:baseline_histogram}.
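These modal values are the quantities the baseline works with; a minimal sketch on synthetic log-normal intensities (the distribution family reported in the studies cited above; sample sizes and bin counts here are illustrative) is:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.lognormal(mean=2.0, sigma=0.4, size=200_000)  # undimmed quiet Sun
true_alpha = 0.6
dim = ref * true_alpha                                  # dimmed counterpart

def histogram_mode(x, bins=400):
    """Most probable value: center of the tallest histogram bin."""
    counts, edges = np.histogram(x, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

# The ratio of the dimmed to the undimmed mode recovers the dimming factor.
alpha_est = histogram_mode(dim) / histogram_mode(ref)
```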
Subsequently, the dimming factor is obtained by computing the ratio between the two most probable intensity values according to the following equation: \begin{equation} \alpha_i := \frac{I^{\rm mp}_{i}}{I_{i, {\rm ref}}^{\rm mp}} \,. \end{equation} Since both distributions are essentially similar except for the dimming factor, we suggest that such a ratio is sufficient to retrieve $\alpha_i$ reliably, forming a baseline against which the neural network models are compared. The efficiency of the baseline in recovering the dimming factor is then evaluated according to the success rate metric, and the results for all channels are tabulated in Table~\ref{tab:autocalibrate_final_results}. \section{Results and Discussions} \label{Section:Results} \subsection{Comparing the performances of the baseline model with different CNN architectures} The results of the learning algorithm are binarized using five different thresholds: the absolute value of $0.05$ and relative values of $5\%$, $10\%$, $15\%$, and $20\%$. If the absolute difference between the predicted degradation factor ($\alpha_P$) and the ground truth degradation factor ($\alpha_{GT}$) is smaller than the threshold, the prediction is counted as a success; otherwise, it is not. We then evaluate the binarized results by using the success rate, which is the ratio of successful predictions to the total number of predictions. We chose different success rate thresholds to gauge the model, all of which are smaller than the uncertainty of the AIA calibration \citep[estimated as $28\%$ by ][]{AIA_calib_paper}. The baseline, single-channel, and multi-channel model results are summarized in Table~\ref{tab:autocalibrate_final_results}. The different colors denote different success rates: green for success rates greater than $90\%$, yellow for success rates between $80\%$ and $90\%$, and red for success rates lower than $80\%$.
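The success-rate metric described above can be sketched as follows (a hypothetical minimal implementation; the paper's thresholds are $0.05$ absolute and $5$--$20\%$ relative):

```python
def success_rate(predicted, ground_truth, tol, relative=False):
    """Fraction of predictions within `tol` of the ground truth.

    With relative=True the tolerance is a fraction of the true value
    (e.g. tol=0.10 for a 10% relative tolerance)."""
    hits = 0
    for a_p, a_gt in zip(predicted, ground_truth):
        limit = tol * abs(a_gt) if relative else tol
        hits += abs(a_p - a_gt) <= limit
    return hits / len(predicted)

preds = [0.52, 0.30, 0.90, 0.10]
truths = [0.50, 0.40, 0.88, 0.11]
rate = success_rate(preds, truths, tol=0.05)  # 3 of 4 within 0.05
```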
\begin{table*} \centering \caption{Results of the baseline and CNN models applied to all the EUV AIA channels. The Table is divided into three sections: Baseline, Single-Channel, and Multi-Channel model. From the left, the channel number, the success rates for the baseline, the success rates for the single-channel CNN model, and the success rates for the multi-channel CNN model. Each model performance is considered at different tolerance levels. At the bottom, the mean of the success rate across all the channels. The color green is for success rates greater than $90\%$, yellow for success rate between $80\%$ and $90\%$, and red is for success rate lower than $80\%$.} \label{tab:autocalibrate_final_results} \centering \begin{tabular}{cccccccccccccccc} \toprule \multirow{2}{*}{Channel} & \multicolumn{5}{c}{\parbox{5cm}{\centering Baseline}} & \multicolumn{5}{c}{\parbox{5cm}{\centering Single-Channel Model}} & \multicolumn{5}{c}{\parbox{5cm}{\centering Multi-Channel Model}} \\ \cmidrule(lr){2-6}\cmidrule(lr){7-11}\cmidrule(lr){12-16} & 0.05 & 5\% & 10\% & 15\% & 20\% & 0.05 & 5\% & 10\% & 15\% & 20\% & 0.05 & 5\% & 10\% & 15\% & 20\%\\ \midrule 94~\AA & \zz {32}\% & \zz{08}\% & \zz{18}\% & \zz{28}\% & \zz {40}\% & \zz {70}\% & \zz {37}\% & \zz {61}\% & \zz {78}\% & \zz {87}\% & \zz {82}\% & \zz {48}\% & \zz {73}\% & \zz {85}\% & \zz {92}\% \\ 131~\AA & \zz {76}\% & \zz{50}\% & \zz{73}\% & \zz{86}\% & \zz {96}\% & \zz {94}\% & \zz {72}\% & \zz {92}\% & \zz {98}\% & \zz {99}\% & \zz {99}\% & \zz {76}\% & \zz {94}\% & \zz {97}\% & \zz {99}\% \\ 171~\AA & \zz {58}\% & \zz{27}\% & \zz{48}\% & \zz{66}\% & \zz {85}\% & \zz {93}\% & \zz {70}\% & \zz {93}\% & \zz {97}\% & \zz {99}\% & \zz {84}\% & \zz {48}\% & \zz {72}\% & \zz {86}\% & \zz {93}\% \\ 193~\AA & \zz {38}\% & \zz{13}\% & \zz{27}\% & \zz{44}\% & \zz {53}\% & \zz {73}\% & \zz {41}\% & \zz {69}\% & \zz {85}\% & \zz {93}\% & \zz {90}\% & \zz {59}\% & \zz {85}\% & \zz {94}\% & \zz {98}\% \\ 211~\AA & \zz {31}\% & 
\zz{11}\% & \zz{21}\% & \zz{29}\% & \zz {39}\% & \zz {63}\% & \zz {30}\% & \zz {53}\% & \zz {71}\% & \zz {84}\% & \zz {76}\% & \zz {41}\% & \zz {68}\% & \zz {82}\% & \zz {92}\% \\ 304~\AA & \zz {86}\% & \zz{66}\% & \zz{89}\% & \zz{95}\% & \zz{100}\% & \zz {90}\% & \zz {65}\% & \zz {89}\% & \zz {97}\% & \zz {99}\% & \zz {94}\% & \zz {62}\% & \zz {86}\% & \zz {93}\% & \zz {96}\% \\ 335~\AA & \zz {38}\% & \zz{13}\% & \zz{29}\% & \zz{42}\% & \zz {51}\% & \zz {62}\% & \zz {31}\% & \zz {54}\% & \zz {69}\% & \zz {80}\% & \zz {73}\% & \zz {39}\% & \zz {65}\% & \zz {82}\% & \zz {91}\% \\ \textbf{Mean} & \textbf{\zz {51}\%} & \textbf{\zz {27}\%} & \textbf{\zz {43}\%} & \textbf{\zz {56}\%} & \textbf{\zz {66}\%} & \textbf{\zz {78}\%} & \textbf{\zz {50}\%} & \textbf{\zz {73}\%} & \textbf{\zz {85}\%} & \textbf{\zz {92}\%} & \textbf{\zz {85}\%} & \textbf{\zz {53}\%} & \textbf{\zz {77}\%} & \textbf{\zz {89}\%} & \textbf{\zz {94}\%} \\ \bottomrule \end{tabular} \end{table*} A detailed look at Table~\ref{tab:autocalibrate_final_results} reveals that for an absolute tolerance value of $0.05$, the best results for the baseline are $86\%$ ($304$~\AA) and $76\%$ ($131$~\AA), and a mean success rate of $\sim51\%$ across all channels. As we increase the relative tolerance levels, the mean success rate increases from $27\%$ (for $5\%$ relative tolerance) to $66\%$ (with $20\%$ relative tolerance) and with a $39\%$ success rate in the worst-performing channel ($211$~\AA). Investigating the performance of the CNN architecture with a single input channel and an absolute tolerance level of $0.05$, we find that this model performed significantly better than our baseline with much higher values of the metric for all the channels. The most significant improvement was shown by the $94$~\AA~ channel with an increase from $32\%$ in the baseline model to about $70\%$ in the single input CNN model, with an absolute tolerance of $0.05$. 
The average success rate jumped from $51\%$ in the baseline to $78\%$ in the single-channel model. The worst metric for the single-channel CNN architecture was recorded by the $211$~\AA\ channel, with a success rate of just $63\%$, which is still significantly better than its baseline counterpart ($31\%$). Furthermore, with a relative tolerance value of $15\%$, we find that the mean success rate is $85\%$ for the single-channel model, which increases to more than $90\%$ for a $20\%$ tolerance level. This is a promising result considering the fact that the error associated with the current state-of-the-art calibration techniques (sounding rockets) is $\sim25\%$. Finally, we report the results from the multi-channel CNN architecture in the last section of Table~\ref{tab:autocalibrate_final_results}. As expected, the performance in this case is the best of all the models, with significant improvements for almost all the EUV channels. Clearly, far fewer success rates fall in the red category compared to the former models, implying that the mean success rate is the highest across all tolerance levels. The multi-channel architecture recovers the degradation (dimming) factor for all channels with a success rate of at least $91\%$ for a relative tolerance level of $20\%$ and a mean success rate of $\sim94\%$. It is also evident that this model outperforms the baseline and the single-channel model for all levels of relative tolerance. For any given level of tolerance, the mean across all channels increased significantly. For example, with an absolute tolerance of $0.05$, the mean increased from $78\%$ to $85\%$, even changing its color classification. In addition, the success rate is consistently the worst for the $335$~\AA~and $211$~\AA~channels across all tolerances, whereas the performance of the $131$~\AA~channel is the best.
Looking at specific channels, we can see that $304$~\AA\ performs consistently well across all the models with little variation, which was not expected. Turning to $171$~\AA, it does well in the baseline and in the multi-channel model, but surprisingly it reaches its maximum performance in the single-channel model across all tolerances, with a remarkable $94\%$ success rate at a tolerance of $0.05$. In contrast to $171$~\AA, the $211$~\AA~and $335$~\AA~channels perform poorly in the baseline and single-channel models and improve significantly in the multi-channel model, as expected and hypothesized in this paper. Figure~\ref{fig:training_curve} shows the training and test MSE loss curves as they evolve with epoch. Based on the results from Table~\ref{tab:autocalibrate_final_results} and comparing the training and test loss curves in Fig.~\ref{fig:training_curve}, we can see that the model does not heavily overfit in the range of epochs utilized, and it presents stable generalization performance on the test set. We stopped the training before epoch 1000, as only marginal improvements were achieved on the test set over many epochs. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{training_plot.pdf} \caption{Evolution of the training and testing MSE loss through the epochs.} \label{fig:training_curve} \end{figure} Overall, the results show higher success rates for the CNN models, particularly for the multi-channel model, as predicted by the toy problem, and for higher tolerances. \subsection{Modelling Channel Degradation over Time} \label{sec:degradation} In this section, we discuss the results obtained when comparing the AIA degradation curves $V8$ and $V9$ with both single-channel and multi-channel CNN models. This comparison was performed using a dataset equivalent to the SDOML but with no correction for degradation, covering the data period from $2010$ to $2020$.
This tests both models on the real degradation suffered by AIA from $2010$ to $2020$. \begin{figure*} \centering \centering \includegraphics[width=0.98\textwidth]{degradation.pdf} \caption{Channel degradation over time. From top to bottom: channels $94$~\AA~(blue) and $131$~\AA~(yellow), $171$~\AA~(green) and $193$~\AA~(red), $211$~\AA~(purple) and $304$~\AA~(brown), and $335$~\AA~(magenta). The solid black (gray) curve is the degradation profile of AIA calibration release $V9$ ($V8$). The gray shaded area corresponds to the $25\%$ error of the degradation curve $V9$. The colored shaded areas are the standard deviation of the CNN models. The vertical black dashed line is the last available observation from EVE MEGS-A data, and the vertical gray dashed line is the last training date.} \label{fig:degradation_curve} \end{figure*} Figure~\ref{fig:degradation_curve} presents the results of our analysis for all the seven AIA EUV channels. In each panel, we show four quantities: the degradation curve $V9$ (solid black line), the degradation curve $V8$ (solid gray line), the predicted degradation from the single-channel model (dashed colored line), and that from the multi-channel model (solid colored line). The shaded gray band depicts the region covering the $25\%$ variation (error) associated with the $V9$ degradation curve, and the colored shaded areas are the standard deviation of the single- and multi-channel models. The dashed vertical line coincides with the last day of EVE MEGS-A instrument data (25 May 2014). It is important to note that MEGS-A was previously used for sounding-rocket calibration purposes, and its loss caused both the $V8$ and $V9$ degradation curves to become noisier thereafter. \citet{Szenicereaaw6548} used deep learning to facilitate a virtual replacement for MEGS-A.
Observing the different panels of Fig.~\ref{fig:degradation_curve}, we can see that even though we trained both the single- and multi-channel models with the SDOML dataset that was produced and corrected using the $V8$ degradation curve, both CNN models predict the degradation curves for each channel quite accurately over time, except for the $94$~\AA\ and $211$~\AA\ channels. However, the deviations of the predicted values for these two channels fall well within the $25\%$ variation of the $V9$ calibration curve. In fact, the CNN predictions agree even better with $V9$ than with the $V8$ calibration for most of the channels. This hints at the conclusion that the CNN is picking up on some actual information that is perhaps even more responsive to degradation than FISM. The latest degradation curve ($V9$) was released recently, in July $2020$, and the change from $V8$ to $V9$ may well have had an impact on the training of the models. Moreover, the more significant deviation of the $94$~\AA\ channel in the early stages of the mission is due to the fact that we limited our degradation factor to be less than one. From the predicted calibration curves computed from the single- and multi-channel models, we see that they have a significant overlap throughout the entire period of observation. The single-channel model predictions, however, have a more significant variation for the channels $211$~\AA, $193$~\AA~and $171$~\AA. For a systematic evaluation and a comparison among the results of the two models across channels, we calculated some goodness-of-fit metrics, and the results are shown in Table~\ref{tab:quantities_degradation}. \begin{table}[h] \centering \caption{Goodness-of-fit metrics for the single-channel and multi-channel models with reference to the $V9$ degradation curve.
The first metric is the Two-Sample Kolmogorov--Smirnov Test (KS), and the second metric is the Fast Dynamic Time Warping (DTW) distance.} \label{tab:quantities_degradation} \centering \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Channel} & \multicolumn{2}{c}{{\centering Single-Channel}} & \multicolumn{2}{c}{{\centering Multi-Channel}} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} & KS & DTW & KS & DTW\\ \midrule 94~\AA & 0.485 & 7.120 & 0.568 & 9.624\\ 131~\AA & 0.346 & 2.711 & 0.275 & 1.624\\ 171~\AA & 0.298 & 3.074 & 0.329 & 3.549\\ 193~\AA & 0.211 & 1.829 & 0.244 & 2.080\\ 211~\AA & 0.305 & 2.850 & 0.242 & 2.807\\ 304~\AA & 0.282 & 1.412 & 0.100 & 1.311\\ 335~\AA & 0.212 & 2.539 & 0.141 & 2.839\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:quantities_degradation} contains two different metrics for evaluating the goodness of fit of each CNN model with respect to the $V9$ degradation curve. The first is the Two-Sample Kolmogorov--Smirnov Test (KS), which determines whether two samples come from the same distribution \citep{two_ks}; the null hypothesis assumes that the two distributions are identical. The KS test has the advantage that the distribution of the statistic does not depend on the cumulative distribution function being tested. The second metric is the Fast Dynamic Time Warping \citep[DTW, ][]{fastDTW}, which measures the similarity between two temporal sequences that may not be of the same length. The latter is important since purely statistical tests can be overly sensitive when comparing two time series. DTW outputs a distance between the series; as a reference, the DTW distances between the $V8$ and $V9$ degradation curves for the different EUV channels are: $94$~\AA: $72.17$, $131$~\AA: $13.03$, $171$~\AA: $9.82$, $193$~\AA: $30.05$, $211$~\AA: $16.86$, $304$~\AA: $7.02$, and $335$~\AA: $5.69$.
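Both metrics can be sketched in NumPy; note that the paper uses the standard two-sample KS test and the FastDTW approximation, while the quadratic-time DTW below is a plain, unoptimized stand-in:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance with an |.| local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```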
Similar to Fig.~\ref{fig:degradation_curve}, we find in Table~\ref{tab:quantities_degradation} that the predictions from both the single-channel and multi-channel models overlap significantly, both in terms of the metric and the time evolution. Except for the $94$~\AA~channel, all others have very close metric values, well within a given level of tolerance. A low value of the KS test metric suggests that the predictions have a similar distribution to the observed $V9$ calibration curve, which also indicates the robustness of our CNN architecture. The KS test agrees well with the DTW, whose values are smaller than the reference values (as indicated earlier) between the $V8$ and the $V9$ calibration curves. Overall, the metric analysis for the goodness of fit between the predictions and the actual calibration curve ($V9$) shows that the CNN models perform remarkably well in predicting the degradation curves despite being trained only on the first three years of the observations. \subsection{Feature Maps} \label{sec:feature-maps} \begin{figure} [h!] \centering \includegraphics[width=0.8\linewidth]{reference.pdf} \includegraphics[width=\linewidth]{latent.pdf} \caption{Feature maps obtained from the last convolutional layer of our model. The top row shows a sample input in the AIA 193~\AA~channel, and the bottom row shows four representative feature maps out of the $128$ different feature maps from the final convolutional layer of the multi-channel NN model.} \label{fig:autocalibrate_activation_viz} \end{figure} As mentioned in Sect.~\ref{section:model-benchmark-understanding}, the feature maps are the result of applying the filters to an input image. That is, at each layer, the feature map is the output of that layer. In Fig.~\ref{fig:autocalibrate_activation_viz} we present such maps obtained from the output of the last convolutional layer of our CNN.
The top row shows a reference input image observed at $193$~\AA\ used in this analysis, with its intensity scaled between $0-1$ pixel units, and the bottom row shows $4$ representative feature maps (out of a total of $128$) with their corresponding weights. These maps are obtained after the final convolutional layer of the multi-channel model and represent the result of combining all seven EUV channels as input. The predicted $\alpha$ dimming factors from the model are given by the sigmoid activation function applied to a linear combination of these features. Such a mapping allows us to see that the network actually learned to identify the different features of full-disk solar images, such as the limb, the quiet Sun features, and the ARs. The reason for visualizing the feature maps for specific AIA images is to gain an understanding of which features detected by the model are ultimately useful in recovering the degradation or the dimming factors. \section{Concluding remarks} \label{section:summary} This paper reports a novel ML-based approach to auto-calibration that advances our comprehension of the cross-channel relationship among different EUV channels by introducing a robust method to correct for EUV instrument degradation over time. We began by formulating the problem and setting up a toy model to test our hypothesis. We then established two CNN architectures that consider multiple wavelengths as input to auto-correct for the on-orbit degradation of the AIA instrument onboard SDO. We trained the models using the SDOML dataset and further augmented the training set by randomly degrading images at each epoch. This approach ensured that the CNN model generalizes well to data not seen during the training, and we also developed a non-ML baseline against which to test and compare the performance of the CNN models.
With the best trained CNN models, we reconstructed the AIA multi-channel degradation curves of 2010--2020 and compared them with the sounding-rocket based degradation curves $V8$ and $V9$. Our results indicate that the CNN models significantly outperform the non-ML baseline model ($85\%$ vs. $51\%$ in terms of the success rate metric) for a tolerance level of $0.05$. In addition, the multi-channel CNN also outperforms the single-channel CNN ($85\%$ vs. $78\%$ success rate with an absolute $0.05$ threshold). This result is consistent with the expectation that correlations between structures in different channels, together with the size (morphology) and brightness of those structures, can be used to compensate for the degradation. To further understand the correlation between different channels, we used the concept of feature maps to shed light on this aspect and see how the filters of the CNNs were being activated. We saw that the CNNs learned representations that make use of the different features within solar images, but further work needs to be done in this respect to establish a more detailed interpretation. We also found that the CNN models reproduce the most recent sounding-rocket based degradation curves ($V8$ and $V9$) very closely and within their uncertainty levels. This is particularly promising, given that no time information has been used in training the models. For some specific channels, like $335$~\AA, the model reproduces the $V8$ curve instead of $V9$, since the SDOML dataset was corrected using the former. The single-channel model can perform nearly as well as the multi-channel model, although the multi-channel model presented a more robust performance when evaluated on the basis of success rates.
Lastly, this paper presents a unique possibility for auto-calibrating deep-space instruments such as the ones onboard the STEREO spacecraft, and the recently launched remote sensing instrument called the \textit{Extreme Ultraviolet Imager} \citep{2020A&A...642A...8R}, aboard the Solar Orbiter satellite \citep{2020A&A...642A...1M}, which are too far away from the Earth to be calibrated using a traditional method such as sounding rockets. The auto-calibration model could be trained using the first months of data from the mission, assuming the instrument is calibrated at the beginning of the mission. The data volume could be an issue, and different types of data augmentation could be used to overcome this problem, such as synthetic degradation and image rotation. We further envision that the technique presented here may also be adapted to imaging instruments or spectrographs operating at other wavelengths (e.g., hyperspectral Earth-oriented imagers) observed from different space-based instruments like \textit{IRIS} \citep[][]{2014SoPh..289.2733D}. \begin{acknowledgements} {This project was partially conducted during the 2019 Frontier Development Lab (FDL) program, a co-operative agreement between NASA and the SETI Institute. We wish to thank IBM for providing computing power through access to the Accelerated Computing Cloud, as well as NASA, Google Cloud and Lockheed Martin for supporting this project. L.F.G.S was supported by the National Science Foundation under Grant No. AGS-1433086. M.C.M.C. and M.J. acknowledge support from NASA’s SDO/AIA (NNG04EA00C) contract to the LMSAL. S.B. acknowledges the support from the Research Council of Norway, project number 250810, and through its Centers of Excellence scheme, project number 262622. This project was also partially performed with funding from the Google Cloud Platform research credits program. We thank NASA’s Living With a Star Program, of which SDO, with the AIA and HMI instruments on board, is a part.
CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), the University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). A.G.B. is supported by EPSRC/MURI grant EP/N019474/1 and by Lawrence Berkeley National Lab. The authors thank the anonymous referee for the comments.\\ Software: We acknowledge cuDNN \citep{cudnn} for CUDA processing; for data analysis and processing we used SunPy \citep[][]{Sunpy2020}, NumPy \citep{numpy}, Pandas \citep{pandas}, SciPy \citep{scipy}, scikit-image \citep{scikit-image} and scikit-learn \citep{scikit-learn}. Finally, all plots were made using Matplotlib \citep{matplotlib} and Astropy \citep{astropy:2018}}. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Head and neck cancers are among the most common cancers~\cite{CancerStatistics2020}. Extracting quantitative image bio-markers from PET and CT images has shown tremendous potential to optimize patient care, such as predicting disease characteristics \cite{lambin2012radiomics} \cite{vallieres2017radiomics}. However, it relies on an expensive and error-prone manual annotation process of Regions of Interest (ROI) to focus the analysis. Fully automatic segmentation methods for head and neck tumors in PET and CT images are in high demand because they would enable the validation of radiomics models on very large cohorts and with optimal reproducibility. The PET and CT modalities include complementary and synergistic information for tumor segmentation. Thus, the key is how to exploit this complementary information. Several methods have been proposed for joint PET and CT segmentation. Kumar et al. \cite{LungPET-CT} proposed a co-learning CNN to improve the fusion of complementary information in multi-modality PET-CT, which includes two modality-specific encoders, a co-learning component, and a reconstruction component. Li et al. \cite{PET-CT-CNN-CV} proposed a deep learning based variational method for non-small cell lung cancer segmentation. Specifically, a 3D fully convolutional network (FCN) was trained on CT images to produce a probability map. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image, and a split Bregman algorithm was used to minimize the variational model. Recently, Andrearczyk et al. \cite{MIDL20-PET-CT} used 2D and 3D V-Nets to segment head and neck tumors from PET and CT images. Results showed that using the two modalities yields a statistically significant improvement over using CT or PET images only.
\begin{figure} \centering \includegraphics[scale=0.4]{Ch5-PETCT-Img-Eg.pdf} \caption{Visual examples of a PET and CT image and the corresponding ground truth.} \label{fig:eg} \end{figure} Active contours \cite{kass1988snakes} \cite{caselles1997GAC} \cite{chan2001CV} have been one of the widely used segmentation methods since before the deep learning era. The basic idea is to formulate the image segmentation task as an energy functional minimization problem. According to the information used in the energy functional, active contours can be classified into three categories: edge-based active contours \cite{caselles1997GAC} that rely on image gradient information, region-based active contours \cite{li2008RSF} that rely on image-intensity region descriptors, and hybrid active contours \cite{HAC-TIP2020} \cite{zhang2008HAC} that use both image gradient and intensity information. In this short paper, we propose an automatic segmentation method for head and neck tumors from PET and CT images based on the combination of convolutional neural networks (CNNs) and hybrid active contours. Specifically, we first introduce a multi-channel 3D U-Net to segment the tumor from the concatenated PET and CT images. Then, we estimate the segmentation uncertainty by model ensembling and define a quality score to select the cases with high uncertainties. Finally, we develop a hybrid active contour model to refine the high-uncertainty cases. \section{Method} \subsection{CNN backbone} Our network backbone is the typical 3D U-Net \cite{UNet3d}. The number of features is 32 in the first block. In each downsampling stage, the number of features is doubled. The implementation is based on nnU-Net \cite{nnunet20}. In particular, the network input is configured with a batch size of 2. The patch size is $128\times128\times128$. The optimizer is stochastic gradient descent with an initial learning rate of 0.01 and a Nesterov momentum of 0.99.
To avoid overfitting, standard data augmentation techniques are used during training, such as rotation, scaling, adding Gaussian noise, and gamma correction. The loss function is the sum of the Dice loss and the TopK loss~\cite{LossOdyssey}. We train the 3D U-Net with five-fold cross validation. Each fold is trained on a TITAN V100 GPU for 1000 epochs. The training takes about 4 days. \subsection{Uncertainty quantification} We train five U-Net models with five-fold cross validation. During testing, we infer the test cases with the five trained models. Thus, each test case has five predictions. Let $p_i$ denote the prediction (probability) of the $i$-th model; the final segmentation $S$ can be obtained by \begin{equation} S = \frac{1}{5}\sum_{i=1}^5 p_i. \end{equation} Then, we compute the normalized surface Dice $NSD_i$ between each prediction and the final segmentation. Details and the code are publicly available at \url{http://medicaldecathlon.com/files/Surface_distance_based_measures.ipynb}. Finally, the uncertainty of the prediction is estimated by \begin{equation} Unc = 1 - \frac{1}{5}\sum_{i=1}^5 NSD_i. \end{equation} If a case has an uncertainty value over 0.2, it is selected for the subsequent refinement. \subsection{Refinement with hybrid active contours} This step aims to refine the segmentation results of the cases with high uncertainties by exploiting the complementary information among the CT images, PET images, and network probabilities. Basically, CT images can provide edge information, while PET images and network probabilities can provide location or region information.
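The ensemble-based selection rule above can be sketched as follows; for brevity, the ordinary Dice coefficient stands in for the normalized surface Dice used in the paper, and the $0.2$ threshold is the one quoted above:

```python
import numpy as np

def dice(a, b):
    """Ordinary Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ensemble_uncertainty(probs, thr=0.5):
    """probs: list of per-model probability maps for one test case."""
    mean_prob = np.mean(probs, axis=0)      # S = (1/5) sum_i p_i
    consensus = mean_prob > thr
    scores = [dice(p > thr, consensus) for p in probs]
    return 1.0 - float(np.mean(scores))     # Unc = 1 - mean agreement

# Five models that agree perfectly -> zero uncertainty, not selected.
probs = [np.full((4, 4, 4), 0.9) for _ in range(5)]
unc = ensemble_uncertainty(probs)
needs_refinement = unc > 0.2
```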
We propose the following hybrid active contour model \begin{equation} E(u) = E_{PET}(u) + E_{CT}(u) + E_{CNN}(u), \end{equation} where \begin{equation} \begin{aligned} E_{PET}(u; f_1, f_2) & = \int_\Omega \int_\Omega K(x,y)|I_{PET}(y)-f_1(x)|^2 u \,d\mathbf{y}\,d\mathbf{x}\\ & + \int_\Omega\int_\Omega K(x,y)|I_{PET}(y)-f_2(x)|^2 (1-u) \,d\mathbf{y}\,d\mathbf{x}, \end{aligned} \end{equation} \begin{equation} E_{CT}(u) = \sqrt{\frac{\pi}{\tau}}\int_\Omega \sqrt{g_{CT}}\,u\,G_\tau*(\sqrt{g_{CT}}(1-u))\,d\mathbf{x}, \end{equation} and \begin{equation} E_{CNN}(u; c_1, c_2) = \int_\Omega (P_{CNN}-c_1)^2u + (P_{CNN}-c_2)^2(1-u)\,d\mathbf{x}. \end{equation} Here $I$ denotes the image intensity, $K(x,y)$ is the Gaussian kernel function, $f_1, f_2$ are the locally fitted intensities inside and outside the segmentation contour, and $c_1, c_2$ are the average values of $P_{CNN}$ inside and outside the contour, respectively. $G_\tau$ is the Gaussian kernel, defined by \begin{equation} G_\tau(x) = \frac{1}{4\pi\tau}\exp\left(-\frac{|\mathbf{x}|^2}{4\tau}\right). \end{equation} The hybrid active contour model is solved by the iterative convolution-thresholding method; details can be found in \cite{wang2017JCP,ICTM-CV,ICTM-GAC}. \section{Experiments and results} \subsection{Dataset} We use the official HECKTOR dataset \cite{HECKTOR2021overview} to evaluate the proposed method. The training data comprise 201 cases from four centers (CHGJ, CHMR, CHUM and CHUS). The test data comprise 53 cases from another center (CHUV). Each case includes CT, PET and GTVt (primary Gross Tumor Volume) images in NIfTI format, as well as the bounding box location and patient information. We use the official bounding box to crop all the images, and resample them to an isotropic resolution of $1\,mm\times 1\,mm\times 1\,mm$. Specifically, we use third-order spline interpolation for the images and zero-order (nearest-neighbor) interpolation for the labels. Furthermore, we apply Z-score normalization (mean subtraction and division by the standard deviation) separately to each PET and CT image.
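The preprocessing steps above can be sketched as follows (a minimal NumPy sketch with illustrative function names; the zero-order resampling shown is the variant used for the labels, whereas the images would use third-order spline interpolation, e.g. `scipy.ndimage.zoom` with `order=3`):

```python
import numpy as np

def zscore(img):
    """Z-score normalization, applied separately to each PET/CT volume."""
    return (img - img.mean()) / (img.std() + 1e-8)

def resample_nearest(vol, old_spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Zero-order (nearest-neighbor) resampling to isotropic 1 mm spacing,
    as used for the label volumes."""
    old_shape = np.array(vol.shape)
    new_shape = np.round(old_shape * np.array(old_spacing) /
                         np.array(new_spacing)).astype(int)
    # Map each new index back to its nearest source index along each axis.
    idx = [np.minimum((np.arange(n) * o / n).astype(int), o - 1)
           for n, o in zip(new_shape, old_shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]
```

Nearest-neighbor interpolation is used for labels so that no new (fractional) label values are created.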
\subsection{Quantitative and qualitative results} Table~\ref{tab:test} and Figure~\ref{fig:seg} present the quantitative and qualitative results on the testing set, respectively. The proposed method ranked 2nd on the official leaderboard, very close to the 1st-place performance. The segmentation results have higher precision but lower recall, indicating that most of the segmented voxels are correct but some tumors are missed. \begin{table}[!h] \caption{Quantitative results on the testing set.}\label{tab:test} \centering \begin{tabular}{lcccc} \hline Participants & DSC & Precision & Recall & Rank \\ \hline andrei.iantsen & \textbf{0.759} & 0.833 & \textbf{0.740} & 1 \\ junma (\textbf{Ours}) & 0.752 & 0.838 & 0.717& 2 \\ badger & 0.735 & 0.833 & 0.702 & 3 \\ deepX & 0.732 & 0.785 & 0.732 & 4 \\ AIView\_sjtu & 0.724 & \textbf{0.848} & 0.670 & 5 \\ DCPT & 0.705 & 0.765 & 0.705 & 6 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[scale=0.4]{exp-fig.pdf} \caption{Visual examples of segmentation results from the testing set.} \label{fig:seg} \end{figure} \section{Conclusion} In this paper, we proposed a fully automatic segmentation method for head and neck tumors in CT and PET images, which combines modern deep learning methods and traditional active contours. Experiments on the official HECKTOR challenge dataset demonstrate the effectiveness of the proposed method. The main limitation of our method is the low recall, indicating that some lesions are missed in the segmentation results. Addressing this limitation to further improve performance will be future work. \section*{Acknowledgement} This project is supported by the National Natural Science Foundation of China (No. 11531005, No. 11971229).
The authors of this paper declare that the segmentation method they implemented for participation in the HECKTOR challenge did not use any pre-trained models or additional datasets other than those provided by the organizers. We also thank the HECKTOR organizers for the public dataset and for hosting this great challenge. \bibliographystyle{splncs04}
\section{\label{sec:level1} Introduction} The term quantum walk (QW) was first introduced in the seminal work by Aharonov et al.\ \citep{Aharonov_intro}. Since then, QWs have been studied from a computational point of view, for example in decision trees \citep{Farhi_path,Farhi_decision}, cellular automata \citep{Meyer_CA}, quantum search algorithms \citep{Shenvi_SA,Ambainis_SA,Kendon_SA} and universal quantum computing \citep{Childs_QC,Lovett_QC}. In addition, QWs have been used as a tool to explore topological phases \citep{Karski_cold,Zahringer_ion,Kitagawa_split,Berry_topo,Wang_exp_walk,Feder_topo}, photosynthetic energy transfer \citep{Mohseni_photo}, and Anderson localization and decoherence \citep{Schreiber_decoh,Crespi_anderson,Edge_decoh}. Several experiments with ions, atoms and photons \citep{Schmitz_ion,Broome_photon,Schreiber_photon,Alberti_exp_tom} have also been performed to realize QWs and to probe their signatures in these phenomena \citep{Obuse_topo,Rakovszky_split}. In a continuous-time quantum walk (CTQW), a quantum state is evolved by the unitary operator $U\left(t\right)={\rm exp}\left(-iHt\right)$ for some time $t$ \citep{Childs_qw_cw}, with the Hamiltonian $H$ governing the walk. This idea is inspired by classical Markov chains, where the role of $H$ is played by the adjacency matrix of the underlying graph. In a discrete-time quantum walk (DTQW) \citep{Meyer_CA}, by contrast, a quantum coin is used to guide the walker along a graph. The spread of the wave packet after $n$ steps is given by \citep{Ahlbrecht_packet_width} \begin{equation} \left\langle \Delta x^{2}\right\rangle =n^{2}\left(1-\left|\sin\left(\gamma/2\right)\right|\right), \end{equation} with $\gamma$ the coin angle. This reflects the ballistic nature of the QW, in contrast to the classical walk, which is diffusive with $\left\langle \Delta x^{2}\right\rangle =n$. Moreover, it was also shown that DTQWs are faster than CTQWs \citep{coin_faster}; for further details please see Ref.
\citep{Kempe_intro}, an excellent review on the subject. Much work has been done on DTQWs on graphs \citep{Aharonov_graphs} and 2D lattices \citep{Bru_cylinder}. In one dimension, the probability profile of the quantum walk spreads along the line at each step. In higher dimensions this distribution extends along all directions, depending on the protocol used for the walk. In a recent article \citep{Bru_cylinder}, the authors analyzed a quantum walker on a cylinder. They showed that the boundary conditions on the closed side of the cylinder imply several one-dimensional walks characterized by their coin angles. They used the conventional walk protocol and the marginal probability as a measure. When used on a ladder, this protocol keeps the walk on one side; see Sec.~IIA. On the other hand, in \citep{Omar_ent} the authors considered two walkers on a line, where entanglement and relative phases between the states influence the walk. Here, we explore a DTQW on a ladder, where the boundary conditions split the walk into effectively two one-dimensional components. The simple geometry of the ladder lets us observe and control the behavior of the individual components as well as the overall walk. Using the split-step protocol allows richer possibilities for the walk. We also use magnetization-like parameters, as discussed in Ref. \citep{Souza_mag}, and thermodynamic-like quantities associated with the walk, as in Refs. \citep{Romanelli_therm,ROMANELLI_therm1,Vallejo_therm}. This article is organized as follows. In Sec. II, we describe the setup of our model. In particular, we show that our choice of the split-step protocol provides more freedom for the walker. In Sec. III, we present our observations on the basis of a few simple parameters in analogy with paramagnetism. We conclude and summarize in the last section.
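The ballistic scaling quoted above, $\left\langle \Delta x^{2}\right\rangle = n^{2}(1-|\sin(\gamma/2)|)$, can be checked with a short simulation of the conventional coin-and-shift walk on a line (a minimal sketch; the grid size and the symmetric initial spin state are illustrative choices):

```python
import numpy as np

def dtqw_variance(steps, gamma):
    """Position variance of the conventional 1D walk (coin flip followed
    by a spin-dependent shift) after `steps` steps, for a symmetric
    localized initial state."""
    size = 2 * steps + 3            # large enough to avoid wrap-around
    mid = size // 2
    c, s = np.cos(gamma / 2), np.sin(gamma / 2)
    up = np.zeros(size, dtype=complex)
    down = np.zeros(size, dtype=complex)
    up[mid], down[mid] = 1 / np.sqrt(2), 1j / np.sqrt(2)   # symmetric start
    for _ in range(steps):
        up, down = c * up - s * down, s * up + c * down    # coin C(gamma/2)
        up, down = np.roll(up, 1), np.roll(down, -1)       # shift T
    p = np.abs(up) ** 2 + np.abs(down) ** 2
    x = np.arange(size) - mid
    return float((p * x ** 2).sum() - (p * x).sum() ** 2)
```

Doubling the number of steps roughly quadruples the variance, in contrast to the classical factor of two.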
\section{The choice of protocol} The state of a quantum walker is typically labeled by its position $\left|m\right\rangle $ and spin $\left|s\right\rangle $ degrees of freedom. Initially this state $\left|s,m\right\rangle =\left|s\right\rangle \otimes\left|m\right\rangle $ can be set to any position (usually taken to be localized) and spin (up, down or any mixture). Of course, these choices affect the final probability distribution. The state is then evolved through a given protocol for $N$ steps and the final state is projected over all positions to obtain the probability distribution of the walker \citep{Kempe_intro}. The conventional protocol for a 1DQW is a coin flip followed by the shift operator, $\hat{U}^{{\rm con}}=\hat{T}\hat{C}\left(\gamma/2\right)$. Here $\hat{C}$ is the coin operator with angle $\gamma\in\left[0,\pi\right]$. The general rotation of the coin allows for richer scenarios by changing the mixing between the up and down components, and hence gives more control over the walker. In this article we use the coin introduced in the experiments of Refs. \citep{Karski_cold,Zahringer_ion} and used for exploring topological states \citep{Kitagawa_split}, \begin{equation} \hat{C}\left(\gamma/2\right)=c\left|\uparrow\right\rangle \left\langle \uparrow\right|-s\left|\uparrow\right\rangle \left\langle \downarrow\right|+s\left|\downarrow\right\rangle \left\langle \uparrow\right|+c\left|\downarrow\right\rangle \left\langle \downarrow\right|, \end{equation} where $c=\cos\left(\gamma/2\right),s=\sin\left(\gamma/2\right)$. The translation operator $\hat{T}$ determines the step direction according to the spin of the walker, \begin{equation} \hat{T}=\sum_{m=-\infty}^{\infty}\left(\left|\uparrow\right\rangle \left\langle \uparrow\right|\otimes\left|m+1\right\rangle \left\langle m\right|+\left|\downarrow\right\rangle \left\langle \downarrow\right|\otimes\left|m-1\right\rangle \left\langle m\right|\right).
\end{equation} Another choice for the walker on a line is to split the walk into two half-steps using two coins. The unitary evolution for the split-step walk is $\hat{U}^{{\rm split}}=\hat{T}^{\downarrow}\hat{C}\left(\beta\right)\hat{T}^{\uparrow}\hat{C}\left(\alpha\right)$, where after the flip of the first coin one spin component is moved while the other is held. Then the next coin is flipped and the process is repeated for the other component. The half-shift operators are \begin{eqnarray} T^{\uparrow} & = & \sum_{m=-\infty}^{\infty}\left(\left|\uparrow\right\rangle \left\langle \uparrow\right|\otimes\left|m+1\right\rangle \left\langle m\right|+\left|\downarrow\right\rangle \left\langle \downarrow\right|\otimes\left|m\right\rangle \left\langle m\right|\right),\\ T^{\downarrow} & = & \sum_{m=-\infty}^{\infty}\left(\left|\uparrow\right\rangle \left\langle \uparrow\right|\otimes\left|m\right\rangle \left\langle m\right|+\left|\downarrow\right\rangle \left\langle \downarrow\right|\otimes\left|m-1\right\rangle \left\langle m\right|\right). \end{eqnarray} The one-dimensional protocol can be extended to two-dimensional quantum walks (2DQWs) with the coin operators replaced by the Grover coin or a product of Hadamard coins. This article uses one of the results given in the work by Bru et al.\ \citep{Bru_cylinder}, where the 2DQW on a cylinder was discussed and the boundary conditions were shown to imply a collection of 1DQWs with various group velocities. In Fig. 1, it is shown that the use of the conventional protocol in both directions keeps the walker on one side of the ladder, as in the alternate quantum walk (AQW). \begin{figure} \includegraphics[scale=0.3]{ladder} \includegraphics[scale=0.4]{few_steps} \caption{First three steps of the conventional (left) and split-step (right) walk on a ladder. For the conventional walk, the longer-side coin is set to $-\pi/4$ while for the shorter side it is $-\pi/8$. For the split-step walk, $\alpha=-\pi/4$ and $\beta=-\pi/2$.}
\end{figure} Our aim is to keep the walker on both sides of the ladder. In Fig. 1, we show a simple ladder graph placed with its longer side along the $y$ axis and its shorter side along the $x$ axis. For the longer side of the ladder, we choose the conventional protocol $\hat{U}_{y}=\hat{T}\hat{C}\left(-\pi/4\right)$ with an equal mixture of the up and down spin components. However, if we use the split-step walk, the walker can be kept on both sides of the ladder. The one-step unitary evolution on the ladder can then be \begin{equation} \hat{U}=\hat{U}_{y}^{{\rm con}}\hat{U}_{x}^{{\rm split}}=\hat{T}_{y}\hat{C}\left(\pi/4\right)\hat{T}_{x}^{\downarrow}\hat{C}\left(\beta/2\right)\hat{T}_{x}^{\uparrow}\hat{C}\left(\alpha/2\right). \end{equation} We now impose the boundary conditions on the shorter side of the ladder. This leads to the quasi-momentum quantization along the $x$ direction as $k_{x}=n\pi$. The translation operator takes two distinct values, $+1$ or $-1$, as $e^{\pm ik_{x}}=\left(-1\right)^{n}$. This represents two one-dimensional walkers moving along the $y$ direction. The unitary operators for the two walkers are now \begin{eqnarray} \hat{U_{1}} & = & \hat{T}\left(k\right)\hat{C}\left(\gamma_{1}/2\right),\\ \hat{U_{2}} & = & \hat{T}\left(k\right)\hat{C}\left(\gamma_{2}/2\right), \end{eqnarray} where $\gamma_{1}=\alpha+\beta-\pi/2$ and $\gamma_{2}=\gamma_{1}+\phi=\alpha-\beta+3\pi/2$. Note that the phase difference between the coins is $\phi=\pm\left(2\beta-2\pi\right)$. \section{Walk on the ladder} A variety of probability distributions can be produced by setting the coin angle to different values \citep{Panahiyan_stepcoin}. The dependence of the walk on the coin angle has an analogy with a paramagnet in a magnetic field \citep{Souza_mag}, where the role of the magnetic field is played by the Bloch vector. As the angle of the coin is varied, the probabilities of the up and down spin components change, as shown in Fig. 2. It can be seen from Eqs.
(\ref{eq:dens_mat}) that the difference of these probabilities is $1-\left|\sin\left(\gamma/2\right)\right|$ for the one-dimensional walk. This parameter gives a hint about the walker's distribution in position space \citep{Souza_mag}. In Fig. 2, the initial state is set at the center of the ladder on one side, with the spin in the up state. The coin angle is then chosen as $0,\pi/4,\pi/2$ and $\pi$. At $\gamma=0$ and $\pi$, this parameter takes its maximum and minimum values, respectively, and the walk is classical-like. In the former case, the walker moves without any spread, while in the latter it merely oscillates around its initial position. For the Hadamard case, the situation is like that of a fair classical coin: the probabilities of both the up and down components are equal and hence the distribution is well spread over all points. \begin{figure} \includegraphics[scale=0.4]{one_walk} \includegraphics[scale=0.5]{various_one_walks} \caption{(a) Entropy and probabilities for a single walker with initial coin state $\left(1,0\right)^{T}$. (b) One-dimensional walk for 31 steps for coin angles $0,\pi/4,\pi/2$ and $\pi$, from top to bottom respectively (darker colors indicate higher probabilities).} \end{figure} We now explore various walks on a ladder on the basis of these parameters. Since the walk is decomposed effectively into two one-dimensional components, we can associate three such parameters. The first two are the usual ones for each side and can be written down immediately from Eqs.
(\ref{eq:dens_mat}) as \begin{eqnarray} M_{1} & = & 1-\left|\sin\left(\gamma_{1}/2\right)\right|,\label{eq:m1}\\ M_{2} & = & 1-\left|\sin\left(\gamma_{2}/2\right)\right|.\label{eq:m2} \end{eqnarray} For the probability difference across the ladder, we define a third quantity as \begin{eqnarray} M & = & 1-\frac{1}{2}\left|\sin\left(\frac{\gamma_{1}}{2}\right)\right|-\frac{1}{2}\left|\sin\left(\frac{\gamma_{2}}{2}\right)\right|\nonumber \\ & = & \frac{1}{2}\left(M_{1}+M_{2}\right). \end{eqnarray} This turns out to be merely the average of $M_{1}$ and $M_{2}$. Of course, this is also the difference between the diagonal components of the full density matrix. \begin{figure} \includegraphics[scale=0.4]{mag_intu} \includegraphics[scale=0.5]{snc_walk} \caption{(a) $M_{1},M_{2}$ and $M$ vs $\beta$ for $\alpha=-\pi/4$. (b) Discriminant of the eigenvalues of the density matrices for the two components. (c) Identical walks for $\alpha=-\pi/4$ and $\beta=\pi/4,3\pi/4$.} \end{figure} For any value of $\alpha$, when $\beta$ is an odd multiple of $\pm\pi$, the walker is restricted to only one side of the ladder. It is also evident from the periodicity of Eqs. (\ref{eq:m1}) and (\ref{eq:m2}) that all three parameters are equal at these points. However, at $\beta=0$, the second coin is the identity and the split-step walk reduces to an alternating walk between the two sides of the ladder, see Fig. 1. As $\beta$ is changed from $\pm\pi$, the walk starts to split onto both sides of the ladder until either $M_{1}$ or $M_{2}$ vanishes ($\alpha\pm\beta=-\pi/2$) or is maximized ($\alpha\pm\beta=\pi/2$). At these points, the distributions on both sides of the ladder are identical, as shown in Fig.~3. The whole walk is dominated by one component, as the other one is classical-like. These two cases are similar to a 1DQW with coin angles $0$ and $\pi$, as in Fig. 2, except that the walkers are spread over all points.
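The special points above are easy to check numerically from the definitions of $M_1$, $M_2$ and $M$, using the angle relations $\gamma_1 = \alpha+\beta-\pi/2$ and $\gamma_2 = \alpha-\beta+3\pi/2$ of Sec. II (a minimal sketch):

```python
import numpy as np

def magnetizations(alpha, beta):
    """M1, M2 and their average M for the two effective 1D components."""
    g1 = alpha + beta - np.pi / 2       # gamma_1
    g2 = alpha - beta + 3 * np.pi / 2   # gamma_2
    m1 = 1 - np.abs(np.sin(g1 / 2))
    m2 = 1 - np.abs(np.sin(g2 / 2))
    return m1, m2, 0.5 * (m1 + m2)
```

For instance, at $\alpha=-\pi/4$ one finds $M_1=0$ when $\alpha+\beta=-\pi/2$ (i.e. $\beta=-\pi/4$) and $M_1=1$ when $\alpha+\beta=\pi/2$ (i.e. $\beta=3\pi/4$), and all three parameters coincide at $\beta=\pm\pi$.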
For the Hadamard cases, when $\alpha=\pm\pi/2$, such behavior cannot be observed, as the difference between $M_{1}$ and $M_{2}$ is zero for all values of $\beta$ and neither component takes over. A summary of these results is given in Table 1. \begin{figure} \includegraphics[scale=0.25]{m_info} \caption{Mutual information $I$ vs $\beta$ for $\alpha\in\left[-\pi,\pi\right]$ in $\pi/4$ intervals.} \end{figure} \begin{table}[b] \caption{Summary of various walk patterns.} \begin{ruledtabular} \begin{tabular}{lll} $\alpha\pm\beta$ & Magnetization & Walk patterns\\ \colrule $\alpha\pm0$ & $M_{1}=M_{2}=M$ & Alternating \\ $\alpha\pm\pi$ & $M_{1}=M_{2}=M$ & One-sided \\ $-\pi/2$ & $M_{1}$ or $M_{2}=0$ & Identical \\ $\pi/2$ & $M_{1}$ or $M_{2}$ is maximized & Identical \\ \end{tabular} \end{ruledtabular} \end{table} We remark that these arguments can also be based on other quantities. For example, the discriminants of the eigenvalues of the density matrices are of the form \begin{equation} D=\frac{\left|\cos\left(\gamma/4\right)\right|-\left|\sin\left(\gamma/4\right)\right|}{\left|\cos\left(\gamma/4\right)\right|+\left|\sin\left(\gamma/4\right)\right|}. \end{equation} In Fig.~3 we have plotted $D_{1}$ and $D_{2}$ for both components, where one can note the above-mentioned cases. Indeed, various thermodynamic-like properties, which are related to these variables, can be assigned to a quantum walker \citep{Romanelli_therm,ROMANELLI_therm1,Vallejo_therm}. We finally use the quantum mutual information \citep{Xue_ent} of the two components, \begin{equation} I=S\left(\rho_{1}\right)+S\left(\rho_{2}\right)-S\left(\rho\right), \end{equation} to distinguish these two types of identical distributions. Here $S\left(\rho_{1}\right)$, $S\left(\rho_{2}\right)$ and $S\left(\rho\right)$ are the von Neumann entropies associated with each component and the overall walk, respectively. The zeros of this expression indicate the angles at which the two components are independent.
As an example, $\alpha=-\pi/4$ and $\beta=\pm3\pi/4$ correspond to two independent walks, while at $\beta=0$ both distributions are fully dependent on each other, see Figs.~3 and 4. This may be because the walk is alternating at this value, so that each component is fully determined by the other. \section{Conclusion} We considered the simple situation of a quantum walker moving on a ladder. Using a specific mixed protocol for the walk, in contrast to conventional protocols, gives the walker an extra option to be on both sides of the ladder. The analysis is done on the basis of the probability differences of the up and down spin components. We have also shown that for particular choices of the two angles, the components can have almost identical probability profiles. As the walk evolves from one side of the ladder to the other, mutual information is transferred from one component to the other. We have shown that this transfer is zero when the probability difference between the up and down spin components across the ladder is maximum. These cases are simpler for a ladder, but generalizing to a cylinder will require more angles and hence may not be feasible. \begin{acknowledgments} We would like to thank Aeysha Khalique for useful discussions at the initial stage of this work. \end{acknowledgments}
\section*{Acknowledgements} T.R. Tan acknowledges support from the Sydney Quantum Academy. The authors thank A. Singh and T.F. Wohlers-Reichel for their contributions to the assembly and maintenance of the experimental setup. This work was partially supported by the Intelligence Advanced Research Projects Activity Grant No. W911NF-16-1-0070, the US Army Research Office Grant No. W911NF-14-1-0682, the Australian Research Council Centre of Excellence for Engineered Quantum Systems Grant No. CE170100009, and a private grant from H.\&A. Harley. \newline \noindent TRT and CE contributed equally to this work.
\begin{document} \thispagestyle{empty} \vspace*{0mm} \begin{center} {\large \bf A Note On \newline Exemplary Off-Shell Constructions Of 4D, $\bm {\cal N}$ = 2 Supersymmetry Representations \\[2pt] } \vskip0.3in {\large { Devin D.\ Bristow\footnote{devin.bristow@pepperdine.edu}$^{,a}$, John H.\ Caporaletti\footnote{jcapor@terpmail.umd.edu}$^{,b}$, Aleksander J.\ Cianciara\footnote{aleksander${}_-$cianciara@brown.edu}$^{,c,d}$, S.\ James Gates, Jr.\footnote{sylvester${}_-$gates@brown.edu}$^{,c,d}$, Delina Levine\footnote{dmlevine@terpmail.umd.edu}$^{,b}$, and Gabriel Yerger\footnote{Gabrielyerger@gmail.com}$^{,b}$ }} \\*[8mm] \emph{ \centering $^{a}$Pepperdine University, Natural Science Division \\[1pt] Malibu, CA 90263, USA, \\[12pt] $^{b}$Department of Physics, University of Maryland, \\[1pt] College Park, MD 20742-4111, USA \\[12pt] $^{c}$Brown University, Department of Physics, \\[1pt] Box 1843, 182 Hope Street, Barus \& Holley 545, Providence, RI 02912, USA, \\[4pt] and \\[4pt] $^{d}$Brown Center for Theoretical Physics, 340 Brook Street, Barus Hall, Providence, RI 02912, USA, } \\*[15mm] { ABSTRACT}\\[4mm] \parbox{142mm}{\parindent=2pc\indent\baselineskip=14pt plus1pt We continue the search for rules that govern when off-shell 4D, $\cal N$ = 1 supermultiplets can be combined to form off-shell 4D, $\cal N$ = 2 supermultiplets. We study the ${\mathbb S}_8$ permutations and Height Yielding Matrix Numbers (HYMN) embedded within the adinkras that correspond to these putative off-shell 4D, $\cal N$ = 2 supermultiplets.
Even though the HYMN definition was designed to distinguish between the raising and lowering of nodes in one-dimensional valise supermultiplets, they are shown to accurately select out which combinations of off-shell 4D, $\cal N$ = 1 supermultiplets correspond to off-shell 4D, $\cal N$ = 2 supermultiplets. Only the combinations of the chiral + vector and chiral + tensor are found to have valises in the same class. This is consistent with the well-known structure of 4D, $\cal N$ = 2 supermultiplets.} \end{center} \vfill \noindent PACS: 11.30.Pb, 12.60.Jv\\ Keywords: adinkra, supersymmetry \vfill \clearpage \section{Introduction} \label{sec:NTR0} There remain a few simple, unanswered questions in supersymmetry (SUSY). The simplest form of one such class of questions is, ``Given minimal representations of off-shell 4D, $\cal N $ = 1 supermultiplets, what combinations of these can be used as a basis for forming off-shell 4D, $\cal N $ = 2 supermultiplets?" We began our study of this question in a 2014 research investigation \cite{adnkKyeoh}; the current work represents a continuation along that line. In particular, we propose to use the information contained in the adinkra \cite{Adnk1} projections of these supermultiplets to one-dimensional supersymmetrical systems in order to answer this question. Before turning to the 4D, ${\cal N} $ = 2 theories it is useful to recall our 4D, ${\cal N}$ = 1 supermultiplets.
Every off-shell 4D, ${\cal N} $ = 1 supermultiplet reduced to one dimension leads to a set of matrices that satisfy the Garden algebra \cite{Garden} \begin{align} \begin{split} {\bm {{\rm L}}}{}_{{}_{{\rm I}}} \, {\bm {{\rm R}}}{}_{{}_{{\rm J}}} ~+~ {\bm {{\rm L}}}{}_{{}_{{\rm J}}} \, {\bm {{\rm R}}}{}_{{}_{{\rm I}}} & ~=~ 2\, \delta{}_{{}_{\rm {I \, J}}} \, {\bm {\rm{I}}}{}_{\rm d} ~~~, ~~~ {\bm {{\rm R}}}{}_{{}_{{\rm I}}} \, {\bm {{\rm L}}}{}_{{}_{{\rm J}}} ~+~ {\bm {{\rm R}}}{}_{{}_{{\rm J}}} \, {\bm {{\rm L}}}{}_{{}_{{\rm I}}} ~=~ 2\, \delta{}_{{}_{\rm {I \, J}}} \, {\bm {\rm{I}}}{}_{\rm d} ~~~, \end{split} \label{eq:GAlg1} \end{align} where the index I takes on values 1, $\dots $, 4 and d = 4p, where p is a positive integer, equal to one for minimal representations. Each of the matrices ${\bm {{\rm L}}}_{\, {{\rm I}}}$ takes the form \begin{equation} {\bm {{\rm L}}}_{\, {{\rm I}}} ~=~ {\bm {\cal S}}_{\, {{\rm I}}} \, {\bm {\cal P}}_{\, {{\rm I}}} ~~~, \label{eq:aas1} \end{equation} for each {\em {fixed}} value of I. Further, each matrix $ {\bm {\cal S}}_{\, {{\rm I}}}$ is diagonal and squares to the identity, and each matrix ${\bm {\cal P}}_{\, {{\rm I}}}$ describes a permutation. Thus, reduction to 1D provides a prescription for mapping supermultiplets onto elements of the permutation group. \section{Potentially `Colorful' Off-shell 4D, $\bm {\cal N} $ = 2 SUSY Multiplets} \label{sec:ColoR} In the work of \cite{adnkKyeoh}, by starting from pairs of the minimal off-shell 4D, $\bm {\cal N} $ = 1 chiral, tensor, and vector supermultiplets and their free actions, a second potential supersymmetry operator was constructed for the six different choices of pairings of the supermultiplets.
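As a concrete illustration of the Garden algebra (\ref{eq:GAlg1}), one can verify it numerically for a well-known d = 4 realization given by the quaternion left-multiplication matrices (a sketch only; these illustrative matrices are not the specific supermultiplet ${\bm {\rm L}}$-matrices listed in the next section):

```python
import numpy as np

# Left-multiplication matrices of 1, i, j, k on the quaternions: a familiar
# d = 4 signed-permutation realization of the Garden algebra (illustrative).
I4 = np.eye(4)
L = [
    I4,
    np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float),
    np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float),
    np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], float),
]
R = [m.T for m in L]  # for this real orthogonal realization, R_I = L_I^T

# Check L_I R_J + L_J R_I = 2 delta_IJ I_d (and likewise with L, R swapped).
for i in range(4):
    for j in range(4):
        assert np.allclose(L[i] @ R[j] + L[j] @ R[i], 2.0 * (i == j) * I4)
        assert np.allclose(R[i] @ L[j] + R[j] @ L[i], 2.0 * (i == j) * I4)
```

Each of these matrices is a diagonal sign matrix times a permutation matrix, matching the factorized form of Eq.\ (\ref{eq:aas1}).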
A ``representation label'' ${({\cal R})}$ was introduced to describe the six pairings: $(CC)$, $(CT)$, $(CV)$, $(TT)$, $(TV)$, and $(VV)$, where for example $(CC)$ would refer to the Chiral + Chiral supermultiplet and $(TV)$ would refer to the Tensor + Vector supermultiplet. For each value of the representation label the pairs of supermultiplets were reduced to one dimensional theories with extended supersymmetry. This led to eight ${\bm {\rm L}}$ matrices for each pairing. In the following, we list the ${\bm {\rm L}}$ matrices in forms that can readily be used to generate the factorization shown in Eq.\ (\ref{eq:aas1}). We factor \cite{permutadnk} the signed permutation matrices $$\left(\mathrm{L}_{\mathrm{I}}\right)_{i}^{\hat{k}}=\left(\mathcal{S}^{(\mathrm{I})}\right)_{i}^{~\hat{\ell}}\left(\mathcal{P}_{(\mathrm{I})}\right)_{\hat{\ell}}^{~\hat{k}}$$ for each fixed $I = 1,2,...,N$ where the first factor corresponds to a $d \times d$ diagonal matrix with only $\pm 1$ entries, and the second corresponds to a matrix representation of the permutation of d objects. 
The signed factor can then be rewritten in a (reversed) binary notation where $$\left(\mathcal{S}^{(\mathrm{I})}\right)_{i}^{\hat{\ell}}=\left[\begin{array}{cccc} (-1)^{b_{1}} & 0 & 0 & \cdots \\ 0 & (-1)^{b_{2}} & 0 & \cdots \\ 0 & 0 & (-1)^{b_{3}} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right] \quad \leftrightarrow \quad\left(\mathcal{R}_{\mathrm{I}}=\sum_{i=1}^{\mathrm{d}} b_{i} 2^{i-1}\right)_{b}$$ Using this notation, for ${({\cal R})}$ = $(CC)$ the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {\bm {\rm L}}{}_{1} ~=~ \left[\begin{array}{cc} (10)_b (243) & 0 \\ 0 & (10)_b (243) \\ \end{array}\right] ~~&,~~ {\bm {\rm L}}{}_{2} ~=~ \left[\begin{array}{cc} (12)_b (123) & 0 \\ 0 & (12)_b (123) \\ \end{array}\right] ~~, \cr {\bm {\rm L}}{}_{3} ~=~ \left[\begin{array}{cc} (6)_b (134) & 0 \\ 0 & (6)_b (134) \\ \end{array}\right] ~~~~~&,~~ {\bm {\rm L}}{}_{4} ~=~ \left[\begin{array}{cc} (0)_b (142) & 0 \\ 0 & (0)_b (142) \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{5} ~=~ \left[\begin{array}{cc} 0 & (15)_b (243) \\ (0)_b (243) & 0 \\ \end{array}\right] ~~&,~~ {\bm {\rm L}}{}_{6} ~=~ \left[\begin{array}{cc} 0 & (9)_b (123) \\ (6)_b (123) & 0 \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{7} ~=~ \left[\begin{array}{cc} 0 & (3)_b (134) \\ (12)_b (134) & 0 \\ \end{array}\right] ~~&,~~ {\bm {\rm L}}{}_{8} ~=~ \left[\begin{array}{cc} 0 & (5)_b (142) \\ (10)_b (142) & 0 \\ \end{array}\right] ~~~.
} \label{eq:CC} \end{equation} For ${({\cal R})}$ = $(CT)$, the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {\bm {\rm L}}{}_{1} ~=~ \left[\begin{array}{cc} (10)_b (243) & 0 \\ 0 & (14)_b (234) \\ \end{array}\right] ~~&,~~ {\bm {\rm L}}{}_{2} ~=~ \left[\begin{array}{cc} (12)_b (123) & 0 \\ 0 & (4)_b (124) \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{3} ~=~ \left[\begin{array}{cc} (6)_b (134) & 0 \\ 0 & (8)_b (132) \\ \end{array}\right] ~~~~~&,~~ {\bm {\rm L}}{}_{4} ~=~ \left[\begin{array}{cc} (0)_b (142) & 0 \\ 0 & (2)_b (143) \\ \end{array}\right] ~~~~\,~~, \cr {\bm {\rm L}}{}_{5} ~=~ \left[\begin{array}{cc} 0 & (11)_b (243) \\ (0)_b (234) & 0 \\ \end{array}\right] ~~\,~&,~~ {\bm {\rm L}}{}_{6} ~=~ \left[\begin{array}{cc} 0 & (13)_b (123) \\ (10)_b (124) & 0 \\ \end{array}\right] ~~~~, \cr {\bm {\rm L}}{}_{7} ~=~ \left[\begin{array}{cc} 0 & (7)_b (134) \\ (6)_b (132) & 0 \\ \end{array}\right] ~~~~\,~&,~~ {\bm {\rm L}}{}_{8} ~=~ \left[\begin{array}{cc} 0 & (1)_b (142) \\ (12)_b (143) & 0 \\ \end{array}\right] ~~~~~. 
} \label{eq:CT} \end{equation} For ${({\cal R})}$ = $(CV)$, the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {\bm {\rm L}}{}_{1} ~=~ \left[\begin{array}{cc} (10)_b (243) & 0 \\ 0 & (10)_b (1243) \\ \end{array}\right] ~~&,~~ {\bm {\rm L}}{}_{2} ~=~ \left[\begin{array}{cc} (12)_b (123) & 0 \\ 0 & (12)_b (23) \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{3} ~=~ \left[\begin{array}{cc} (6)_b (134) & 0 \\ 0 & (0)_b (14) \\ \end{array}\right] ~~~~~~~~&,~~ {\bm {\rm L}}{}_{4} ~=~ \left[\begin{array}{cc} (0)_b (142) & 0 \\ 0 & (6)_b (1342) \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{5} ~=~ \left[\begin{array}{cc} 0 & (2)_b (243) \\ (13)_b (1243) & 0 \\ \end{array}\right] ~~~~&,~~ {\bm {\rm L}}{}_{6} ~=~ \left[\begin{array}{cc} 0 & (4)_b (123) \\ (11)_b (23) & 0 \\ \end{array}\right] ~~~~~~, \cr {\bm {\rm L}}{}_{7} ~=~ \left[\begin{array}{cc} 0 & (14)_b (134) \\ (7)_b (14) & 0 \\ \end{array}\right] ~~~~~~\,~&,~~ {\bm {\rm L}}{}_{8} ~=~ \left[\begin{array}{cc} 0 & (8)_b (142) \\ (1)_b (1342) & 0 \\ \end{array}\right] ~~\,~~. 
} \label{eq:CV} \end{equation} For ${({\cal R})}$ = $(TT)$, the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {~~~~~~} {\bm {\rm L}}{}_{1} ~&=~ \left[\begin{array}{cc} n_+ (14)_b (234) & 0 \\ 0 & m_+ (14)_b (234) \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{2} ~ =~ \left[\begin{array}{cc} n_+ (4)_b (124) & 0 \\ 0 & m_+ (4)_b (124) \\ \end{array}\right] ~~, \cr {\bm {\rm L}}{}_{3} ~&=~ \left[\begin{array}{cc} n_+ (8)_b (132) & 0 \\ 0 & m_+ (8)_b (132) \\ \end{array}\right] ~~~~~,~~ {\bm {\rm L}}{}_{4} ~ =~ \left[\begin{array}{cc} n_+ (2)_b (143) & 0 \\ 0 & m_+ (2)_b (143) \\ \end{array}\right] ~~, \cr {\bm {\rm L}}{}_{5} ~&=~ \left[\begin{array}{cc} 0 & n_- (14)_b (234) \\ m_- (14)_b (234) & 0 \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{6} ~ =~ \left[\begin{array}{cc} 0 & n_- (4)_b (124) \\ m_- (4)_b (124) & 0 \\ \end{array}\right] ~~, \cr {\bm {\rm L}}{}_{7} ~&=~ \left[\begin{array}{cc} 0 & n_- (8)_b (132) \\ m_- (8)_b (132) & 0 \\ \end{array}\right] ~~~~~,~~ {\bm {\rm L}}{}_{8} ~ =~ \left[\begin{array}{cc} 0 & n_- (2)_b (143) \\ m_- (2)_b (143) & 0 \\ \end{array}\right] ~~, } \label{eq:TT} \end{equation} where \begin{equation} \eqalign{ m_{\pm} ~&=~ {\sqrt 2} \, \cos \left[ \fracm {\pi}4 (2 m \mp 1) \right] ~~~, ~~~ n_{\pm} ~=~ {\sqrt 2} \, \cos \left[ \fracm {\pi}4 (2 n \mp 1) \right] ~~~. 
} \label{eq:Fax} \end{equation} For ${({\cal R})}$ = $(TV)$, the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {~~~~~} {\bm {\rm L}}{}_{1} ~&=~ \left[\begin{array}{cc} n_+ (14)_b (234)& 0 \\ 0 & m_+ (10)_b (1243) \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{2} ~ =~ \left[\begin{array}{cc} n_+ (4)_b (124) & 0 \\ 0 & m_+ (12)_b (23) \\ \end{array}\right] \,~~, \cr {\bm {\rm L}}{}_{3} ~&=~ \left[\begin{array}{cc} n_+ (8)_b (132) & 0 \\ 0 & m_+ (0)_b (14) \\ \end{array}\right] ~~~~~~~~,~~ {\bm {\rm L}}{}_{4} ~ =~ \left[\begin{array}{cc} n_+ (2)_b (143) & 0 \\ 0 & m_+ (6)_b (1342) \\ \end{array}\right] ~, \cr {\bm {\rm L}}{}_{5} ~&=~ \left[\begin{array}{cc} 0 &n_- (14)_b (234) \\ m_- (10)_b (1243) & 0 \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{6} ~ =~ \left[\begin{array}{cc} 0 & n_- (4)_b (124) \\ m_- (12)_b (23) & 0 \\ \end{array}\right] ~\,~, \cr {\bm {\rm L}}{}_{7} ~&=~ \left[\begin{array}{cc} 0 & n_- (8)_b (132) \\ m_- (0)_b (14) & 0 \\ \end{array}\right] ~~~~~~~~,~~ {\bm {\rm L}}{}_{8} ~ =~ \left[\begin{array}{cc} 0 & n_- (2)_b (143) \\ m_- (6)_b (1342) & 0 \\ \end{array}\right] ~. 
} \label{eq:TV} \end{equation} For ${({\cal R})}$ = $(VV)$, the ${\bm {\rm L}}$-matrices can be presented as: \begin{equation} \eqalign{ {\bm {\rm L}}{}_{1} ~&=~ \left[\begin{array}{cc} n_+ (10)_b (1243) & 0 \\ 0 & m_+ (10)_b (1243) \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{2} ~ =~ \left[\begin{array}{cc} n_+ (12)_b (23) & 0 \\ 0 & m_+ (12)_b (23) \\ \end{array}\right] \,~~, \cr {\bm {\rm L}}{}_{3} ~&=~ \left[\begin{array}{cc} n_+ (0)_b (14) & 0 \\ 0 & m_+ (0)_b (14) \\ \end{array}\right] ~~~~~~~~~~~,~~ {\bm {\rm L}}{}_{4} ~ =~ \left[\begin{array}{cc} n_+ (6)_b (1342) & 0 \\ 0 & m_+ (6)_b (1342) \\ \end{array}\right] ~~, \cr {\bm {\rm L}}{}_{5} ~&=~ \left[\begin{array}{cc} 0 & n_- (10)_b (1243) \\ m_- (10)_b (1243) & 0 \\ \end{array}\right] ~~,~~ {\bm {\rm L}}{}_{6} ~=~ \left[\begin{array}{cc} 0 & n_- (12)_b (23) \\ m_- (12)_b (23) & 0 \\ \end{array}\right] ~~~~~, \cr {\bm {\rm L}}{}_{7} ~&=~ \left[\begin{array}{cc} 0 & n_- (0)_b (14) \\ m_- (0)_b (14) & 0 \\ \end{array}\right] ~~~~~~~~~~~,~~ {\bm {\rm L}}{}_{8} =~ \left[\begin{array}{cc} 0 & n_- (6)_b (1342) \\ m_- (6)_b (1342) & 0 \\ \end{array}\right] ~~. } \label{eq:VV} \end{equation} \@startsection{section}{1}{\z@{HYMN Control of `Colorful' Off-shell 4D, $\bm {\cal N} $ = 2 SUSY Multiplets} \label{sec:HYMNC0N} The real matrices ${\bm {{\rm L}}}_{\rm I}^{({\cal R})}$ and ${\bm {{\rm R}}}_{\rm I}^{({\cal R})}$ can be used to form $16$ $\times$ $16$ matrices using the definition \begin{equation} { {\Hat {\bm \gamma}}{}_{\rm I}^{({\cal R})} ~=~ \frac12 \, (\, {\bm \sigma}^1 \,+\, i {\bm \sigma}^2 \, ) \, \otimes \, {\bm {{\rm L}}}_{\rm I}^{({\cal R})} ~+~ \frac12 \, (\, {\bm \sigma}^1 \,-\, i {\bm \sigma}^2 \, ) \, \otimes \, {\bm {{\rm R}}}_{\rm I}^{({\cal R})} } ~~~. 
\label{DefGmm} \end{equation} for each of the six representations, and a corresponding matrix $ {\Hat {\bm {\cal C}}}^{({\cal R})}$ derived, via the formula shown below, from each ${\Hat {\bm \gamma}}{}_{\rm I}^{({\cal R})}$ representation, \begin{equation} {\Hat {\bm {\cal C}}}^{({\cal R})} = {\Hat {\bm \gamma}}_8^{({\cal R})} \, \cdots \, {\Hat {\bm \gamma}}_1^{({\cal R})} ~~~. \end{equation} This leads the way to HYMN values \cite{HYMN1,HYMN2}, which are the eigenvalues of the $ {\Hat {\bm {\cal C}}}{}^{({\cal R})}$ matrices associated with each supermultiplet. For even $N$, this matrix is diagonal, i.e., $${\Hat {\bm {\cal C}}}^{({\cal R})} = \left(\begin{array}{cc} L_{N} R_{N-1} \cdots R_{1} & 0 \\ 0 & R_{N} L_{N-1} \cdots L_{1} \end{array}\right)$$ The form of the $ {\Hat {\bm {\cal C}}}^{({\cal R})}$ matrices for each representation is shown below, \begin{equation} \eqalign{ {\Hat {\bm {\cal C}}}{}^{(CT)} &~=~ {\Hat {\bm {\cal C}}}{}^{(CV)} ~=~{\bm \sigma}^3\otimes \, {\bm {\rm{I}}}{}_8 ~~~, \cr {\Hat {\bm {\cal C}}}{}^{(CC)} &~=~ {\Hat {\bm {\cal C}}}{}^{(TT)} ~=~ {\Hat {\bm {\cal C}}}{}^{(TV)} ~=~ {\Hat {\bm {\cal C}}}{}^{(VV)} ~=~ \, {\bm {\rm{I}}}{}_{16} ~~~. } \label{EgnV} \end{equation} These results were derived by explicitly calculating all 16 $\times$ 16 matrices using computer codes. Algorithms were written in Python and Mathematica by two independent groups. The general process of finding HYMN values is described in \cite{HYMN1} (equations 4.28-4.31), and was carried out in the same fashion by both groups, with only syntactical differences. The Python package NumPy was used extensively for matrix operations and to encode the matrices using the NumPy array type. Each group also made use of LaTeX-formatted output commands for ease of cross-verification at intermediate steps. The results in Eq.
(\ref{EgnV}) show that the eigenvalues of $ {\Hat {\bm {\cal C}}}{}^{({\cal R})}$ split the six representations: $(CC)$, $(CT)$, $(CV)$, $(TT)$, $(TV)$, $(VV)$, into two classes. One class contains only $(CT)$ and $(CV)$, while the remaining four representations are all members of a second class. Next we calculate the anti-commutators of the matrices defined in Eq. (\ref{DefGmm}). We find for ${({\cal R})}$ = $(CT)$ and $(CV)$ \begin{equation} \left\{ \, {\Hat {\bm \gamma}}{}_{\rm I}^{({\cal R})} ~,~ {\Hat {\bm \gamma}}{}_{\rm J}^{({\cal R})} \, \right\} = 2\, \delta_{\rm {IJ}} \, {\bm {\rm{I}}}{}_{16} ~~~. \label{CLFF2} \end{equation} However for the $(CC)$, $(TT)$, $(TV)$, and $(VV)$ representations we find \begin{equation} \left\{ \, {\Hat {\bm \gamma}}{}_{\rm I} ^{({\cal R})}~,~ {\Hat {\bm \gamma}}{}_{\rm J}^{({\cal R})} \, \right\} = 2\, \delta_{\rm {IJ}} \, {\bm {\rm{I}}}{}_{16} ~+~ {\cal N}{}_{\rm {IJ}}{}^{\Hat \alpha \, ({\cal R})} \, {\bm {\kappa}}{}_{\Hat \alpha}^{({\cal R})} ~~~, \label{CLFF3} \end{equation} where the coefficients $ {\cal N}{}_{\rm {IJ}}{}^{\Hat \alpha \, ({\cal R})} $ and the sets of 16 $\times$ 16 matrices ${\bm {\kappa}}{}_{\Hat \alpha}{}^{({\cal R})}$ are defined in equations (7), (30), (31), (65), (66), (74), (75), (81), and (82) of the work \cite{adnkKyeoh} that began the line of inquiry. We thus see an alignment between the eigenvalue classes of ${\Hat {\bm {\cal C}}}^{({\cal R})}$ and whether the ${\bm {{\rm L}}}_{\rm I}{}^{({\cal R})}$ and ${\bm {{\rm R}}}_{\rm I}{}^{({\cal R})}$ matrices satisfy the condition for 1D SUSY shown in Eq. (\ref{eq:GAlg1}). The result in Eq.\ (\ref{CLFF3}) might imply that more component auxiliary fields would be needed in the cases of the $(CC)$, $(TT)$, $(TV)$, and $(VV)$ on-shell representations.
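The anticommutator and eigenvalue checks above are straightforward to reproduce numerically. The sketch below is ours (it is not the production code of the two groups mentioned above): it builds 16 $\times$ 16 matrices as in Eq.\ (\ref{DefGmm}) from an illustrative octet of real 8 $\times$ 8 L-matrices of the type constructed in \cite{GRana1}, with ${\bm {\rm R}}_{\rm I} = {\bm {\rm L}}_{\rm I}^{\rm T}$, then tests the Clifford condition of Eq.\ (\ref{CLFF2}) and reads off the spectrum of the ordered product.

```python
# Sketch (our illustration): build gamma-hat matrices as in Eq. (DefGmm)
# from an octet of real 8 x 8 L-matrices built out of Pauli tensor
# products, then test {gamma_I, gamma_J} = 2 delta_IJ I_16 and the
# eigenvalues of the ordered product gamma_8 ... gamma_1.
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Illustrative octet; each matrix is real, R_1 = L_1, R_I = L_I^T = -L_I
# for I >= 2.
L = [kron3(I2, I2, I2),
     1j * kron3(I2, s3, s2), 1j * kron3(s3, s2, I2),
     1j * kron3(I2, s1, s2), 1j * kron3(s1, s2, I2),
     1j * kron3(s2, I2, s1), 1j * kron3(s2, I2, s3),
     1j * kron3(s2, s2, s2)]
L = [M.real for M in L]            # every entry is real by construction
R = [L[0]] + [-M for M in L[1:]]   # R_I = L_I^T for this octet

sp = (s1 + 1j * s2) / 2            # sigma^+ projector
sm = (s1 - 1j * s2) / 2            # sigma^- projector
gamma = [np.kron(sp, l) + np.kron(sm, r) for l, r in zip(L, R)]

# Clifford condition, as in Eq. (CLFF2)
for i, gi in enumerate(gamma):
    for j, gj in enumerate(gamma):
        target = 2 * np.eye(16) if i == j else np.zeros((16, 16))
        assert np.allclose(gi @ gj + gj @ gi, target)

# Ordered product gamma_8 ... gamma_1: diagonal, with eigenvalues +/- 1
# in equal number, i.e. traceless for this octet.
C = np.linalg.multi_dot(gamma[::-1])
print(np.sort(np.linalg.eigvals(C).real), np.trace(C).real)
```

Since the octet satisfies the 1D SUSY condition, the product matrix comes out diagonal with eight $+1$ and eight $-1$ eigenvalues, the traceless pattern of the first class above.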
Therefore, the most elegant way to understand why only the $ {({\cal R})}$ = $(CT)$ and $(CV)$ 4D, $\cal N$ = 1 supermultiplets can describe full-fledged off-shell 4D, $\cal N$ = 2 supermultiplets is that only their adinkras provide a spinor representation of a Euclidean $\mathbb {SO}(8)$ group. It is either an extraordinary coincidence that the split of the six `exemplary' on-shell supermultiplets follows the exact same ratio as the split among the six dissected groups of ${\mathbb S}{}_4 $ \cite{pHEDRON}, or there is a deeper connection yet to be uncovered. It turns out that there is yet one more way to test the assertion that valid off-shell supermultiplets lead to a HYMN matrix that is traceless. In the works of \cite{GRana1,GRana2,GHIM}, which contain our earliest explorations of these issues, an algorithm is given for constructing L-matrices that is {\it {independent}} of which combination of $\cal N$ = 1 supermultiplets is used to form a valid $\cal N$ = 2 supermultiplet. One set of these L-matrices leads to an octet that contains the identity matrix and can be dubbed the `Diadem(8)' Octet (denoted by ${\cal D}$O(8)). This is the higher dimensional analogue\footnote{In the sense that its matrices are 8 $\times$ 8 matrices.} of the Klein 4-group (also known as the Klein Vierergruppe).
From the work in \cite{GRana1}, the elements of this octet are given by $$\begin{aligned} &\mathbf{L}_{1}=\mathbf{I}_{2 \times 2} \otimes \mathbf{I}_{2 \times 2} \otimes \mathbf{I}_{2 \times 2}=\mathbf{R}_{1}\\ &\mathbf{L}_{2}=i \mathbf{I}_{2 \times 2} \otimes \boldsymbol{\sigma}^{3} \otimes \boldsymbol{\sigma}^{2}=-\mathbf{R}_{2}\\ &\mathbf{L}_{3}=i \boldsymbol{\sigma}^{3} \otimes \boldsymbol{\sigma}^{2} \otimes \mathbf{I}_{2 \times 2}=-\mathbf{R}_{3}\\ &\mathbf{L}_{4}=i \mathbf{I}_{2 \times 2} \otimes \boldsymbol{\sigma}^{1} \otimes \boldsymbol{\sigma}^{2}=-\mathbf{R}_{4}\\ &\mathbf{L}_{5}=i \boldsymbol{\sigma}^{1} \otimes \boldsymbol{\sigma}^{2} \otimes \mathbf{I}_{2 \times 2}=-\mathbf{R}_{5}\\ &\mathbf{L}_{6}=i \boldsymbol{\sigma}^{2} \otimes \mathbf{I}_{2 \times 2} \otimes \boldsymbol{\sigma}^{1}=-\mathbf{R}_{6}\\ &\mathbf{L}_{7}=i \boldsymbol{\sigma}^{2} \otimes \mathbf{I}_{2 \times 2} \otimes \boldsymbol{\sigma}^{3}=-\mathbf{R}_{7}\\ &\mathbf{L}_{8}=i \boldsymbol{\sigma}^{2} \otimes \boldsymbol{\sigma}^{2} \otimes \boldsymbol{\sigma}^{2}=-\mathbf{R}_{8} \end{aligned}$$ and it leads to the result \begin{equation} {\Hat {\bm {\cal C}}}^{({\cal D}{\rm O}(8))} = {\Hat {\bm \gamma}}_8^{({\cal D}{\rm O}(8))} \, \cdots \, {\Hat {\bm \gamma}}_1^{({\cal D}{\rm O}(8))} ~=~{\bm \sigma}^3\otimes \, {\bm {\rm{I}}}{}_8 ~~~, \end{equation} showing it is indeed in the same class that also contains $(CT)$ and $(CV)$. \@startsection{section}{1}{\z@{4D, $\cal N$ = 1 SUSY and the Permutahedron} \label{sec:RecRev1} In a recent paper \cite{pHEDRON}, the relevance of a well-known mathematical concept, the permutahedron \cite{pHR0n1,pHR0n2,pHR0n3,pHR0n4}, was brought into focus. In particular, it was conjectured that the permutahedron for ${\mathbb S}{}_4 $, along with the weak Bruhat ordering \cite{BruHT}, provides a foundation for a representation theory of off-shell 4D, $\cal N$ = 1 SUSY theories. A representation of the ${\mathbb S}{}_4 $ permutahedron is shown in Fig. \ref{perm}.
Each of the listed subsets (dubbed quartets) contains four elements. Those four elements are shown in the same color on the permutahedron. \begin{figure}[h] \includegraphics[width=13cm]{perm.png} \centering \caption{Colored permutation addresses adorning the permutahedron} \label{perm} \end{figure} We remind the reader that of the subsets, only the $\{ {\cal P}{}_{[1]} \}$, $\{ {\cal P}{}_{[2]} \}$, and $\{ {\cal P}{}_{[3]} \}$ ones are associated, respectively, with the chiral, tensor, and vector supermultiplets. These were derived by a projection procedure carried out in the work of \cite{permutadnk}. Respectively, these are associated with the green, purple, and rust colored permutation elements. There is one special property of $\{ {\cal P}{}_{[6]} \}$ as it is the only subset that contains the identity permutation element. Regarding all the subsets as unordered, the following equations follow, $$ \{ {\cal P}{}_{[1]}\} ~=~ (132) \, \{ {\cal P}{}_{[6]}\} ~~~,~~~ \{ {\cal P}{}_{[2]}\} ~=~ (123) \, \{ {\cal P}{}_{[6]}\} ~~~,~~~ \{ {\cal P}{}_{[3]}\} ~=~ (23) \, \{ {\cal P}{}_{[6]}\} ~~~, $$ \begin{equation} \{ {\cal P}{}_{[4]}\} ~=~ (13) \, \{ {\cal P}{}_{[6]}\} ~~~,~~~ \{ {\cal P}{}_{[5]}\} ~=~ (12) \, \{ {\cal P}{}_{[6]}\} ~~~, \end{equation} and according to the use of a lexicographical ordering prescription $\{ {\cal P}{}_{[6]} \}$ is the ``smallest'' of the twenty-four permutations contained in ${\mathbb S}_4 $. The permutations in the subset $ \{ {\cal P}{}_{[6]}\}$ can also be represented as \begin{equation} \begin{array}{ccccccccc} {\bm {\cal P}}_{1} &=& \,{\bm {\rm I}}{}_{2} & \otimes &{\bm {\rm I}} {}_{2} & = & () & = & \langle 1 2 3 4 \rangle ~~~, \\ {\bm {\cal P}}_{2} &=&{\bm {\rm I}}{}_{2} & \otimes &{\bm \sigma}^1 & =& (12)(34) &= & \langle 2 1 4 3 \rangle ~~~, \\ {\bm {\cal P}}_{3} &=& {\bm \sigma}^1 & \otimes &{\bm {\rm I}}{}_{2} & =& (13)(24) & = & \langle 3 4 1 2 \rangle ~~~,
\\ {\bm {\cal P}}_{4} &=& {\bm \sigma}^1 & \otimes &{\bm \sigma}^1 &=& (14)(23) & = & \langle 4 3 2 1 \rangle ~~~. \\ \end{array} \label{N4dia} \end{equation} written successively in matrix, cycle, and one-line notations. The permutations in Eq.\ (\ref{N4dia}), which correspond to the supercharges that act on the $\{ {\cal P}{}_{[6]} \}$ supermultiplet, occur at two vertices of the base face and two vertices of the top face. In this sense, they can be seen as two-colored quartets. Further explanation of this statement is warranted, for which we turn to the findings of \cite{GRana1}, in particular appendix A. Given an arbitrary $\cal N$-extended theory with $\cal N$ matrices, we can always build an ($\cal N$-1)-extended theory by taking any set of ($\cal N$-1) matrices from the $\cal N$-extended theory. For the case of 4D, $\cal N$ = 1 SUSY, after dimensional reduction we end up with a 1D, $\cal N$ = 4 theory \cite{holography}. The process, which shows the ``interdimensionality'' of adinkras, is as follows. Reducing to 3D, we get an $\cal N$ = 2 scalar theory (chirality does not exist in 3D). Further reducing to 2D, we get a 2D, $\cal N$ = (1, 1) theory. Reduction once more leads to the 1D, $\cal N$ = 4 scalar theory. All of these theories have the same adinkras. Turning back to the present case of the 1D, $\cal N$ = 4 theory, it follows that we could use this $\cal N$ = 4 theory to build theories for $\cal N$ = 1, 2, or 3, by simply taking subsets of the quartets. In this particular way, the faces of the permutahedron for the 1D, $\cal N$ = 4 theory are seen to be composed of the supercharges that act on two different $\cal N$ = 2 supermultiplets (where each $\cal N$ = 2 supermultiplet is acted on by 2 of the matrices corresponding to the supercharges for the larger $\cal N$ = 4 supermultiplets). A similar argument is constructed in \cite{Toppan}, where the actions for 1D, $\cal N$ = 4 $\sigma$-models are computed with respect to the two inequivalent (2,8,6) multiplets.
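The two-colored quartet structure relies on the set in Eq.\ (\ref{N4dia}) genuinely closing into a Klein four-group. This is quick to confirm; the sketch below is ours (the helper name is an illustrative assumption).

```python
# Check that the quartet of Eq. (N4dia) forms a Klein four-group: every
# element is an involution and products of members stay in the quartet.
import numpy as np

I2 = np.eye(2, dtype=int)
s1 = np.array([[0, 1], [1, 0]])

P = [np.kron(I2, I2),   # ()        <1 2 3 4>
     np.kron(I2, s1),   # (12)(34)  <2 1 4 3>
     np.kron(s1, I2),   # (13)(24)  <3 4 1 2>
     np.kron(s1, s1)]   # (14)(23)  <4 3 2 1>

def index_of(M, quartet):
    """Return which quartet member M equals (hypothetical helper)."""
    for k, Q in enumerate(quartet):
        if np.array_equal(M, Q):
            return k
    raise ValueError("not in quartet")

for A in P:
    assert np.array_equal(A @ A, P[0])   # each element squares to identity
table = [[index_of(A @ B, P) for B in P] for A in P]
print(table)   # -> [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
```

The multiplication table is that of the Klein Vierergruppe: abelian, with every non-identity element of order two.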
There it is shown that imposing a fifth supersymmetry (with four supersymmetry generators already manifest) automatically induces full $\cal N$ = 8 off-shell invariance. This has an interesting implication for the 4D, ${\cal N} $ = 2 theories that are the target of our investigation in this work. In the future we seek to further understand the role that the permutahedron may play in picking out valid combinations of $\cal N$ = 1 supermultiplets to create valid $\cal N$ = 2 supermultiplets. The fact that permutations corresponding to the supercharges of valid supermultiplets of lower degree or extension may be formed by the square faces hints at a potential way to embed lower degree supercharges of supermultiplets (say of $\cal N$ = 1) into higher dimensional permutahedra (say of $\cal N$ = 2). This will be investigated in future works. \@startsection{section}{1}{\z@{ `Colorful' Off-shell 4D, $\bm {\cal N} $ = 2 SUSY Multiplets \& Their Explicit {$\mathbb S{}_8$} Permutations} \label{sec:S8perm} The skeptical reader may object that arguments based on the use of $ {\Hat {\bm \gamma}}{}_{\rm I}{}^{({\cal R})}$ have nothing to do with the permutahedron associated with ${\mathbb S}{}_8$. This is not so. As indicated by the result in Eq.\ (\ref{eq:aas1}), the permutation elements powerfully determine the forms of the matrices that are described by the ${\cal {GR}}(d, N)$ or Garden Algebra in Eq.\ (\ref{eq:GAlg1}). We believe this current work shows how the ${\cal {GR}}(8, 8)$ representation clearly impacts how off-shell 4D, $\cal N$ = 1 theories can be combined to become off-shell 4D, $\cal N$ = 2 theories. As a step toward enabling a deeper study of these issues, it is necessary to give a reformulation of the results in Eqs.\ (\ref{eq:CC}) - (\ref{eq:VV}). We find these can be recast as the following, where explicit dependences on the elements in ${\mathbb S}{}_8$ (in both cycle and one-line notation) appear.
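As an aid to reading these listings, recall that each entry is a binary sign prefix $(n)_b$ multiplying a permutation written in cycle notation. The decoding into an explicit signed permutation matrix can be sketched as follows (a sketch of ours; the helper names and the row/column convention are illustrative assumptions).

```python
# Decode an (n)_b sign prefix plus disjoint cycles into an 8 x 8 signed
# permutation matrix, using the (reversed) binary convention
# R = sum_i b_i 2^(i-1).  Helper names are ours.
import numpy as np

def sign_matrix(n, d=8):
    # bit b_i of n (least significant first) gives the sign (-1)^{b_i}
    return np.diag([(-1) ** ((n >> i) & 1) for i in range(d)])

def cycles_to_oneline(cycles, d=8):
    # image of each point under the product of disjoint cycles,
    # e.g. (243)(687) -> <1 4 2 3 5 8 6 7>
    img = list(range(1, d + 1))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            img[a - 1] = b
    return img

def perm_matrix(oneline):
    # one convention: column j carries the image of point j
    d = len(oneline)
    M = np.zeros((d, d), dtype=int)
    for j, i in enumerate(oneline):
        M[i - 1, j] = 1
    return M

# Example: the first (CC) entry, (170)_b (243)(687)
oneline = cycles_to_oneline([(2, 4, 3), (6, 8, 7)])
L1 = sign_matrix(170) @ perm_matrix(oneline)
print(oneline)   # -> [1, 4, 2, 3, 5, 8, 6, 7]
```

The recovered one-line form matches the $\langle \, 1 \, 4 \, 2 \, 3 \, 5 \, 8 \, 6 \, 7 \, \rangle$ entry of the $(CC)$ listing, and the decoded matrix is orthogonal, as a signed permutation must be.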
${ ({\cal R}) \,=\, (\bm{CC})}$ \begin{equation} \begin{array}{cccccc} ~{\bm {{\rm L}}}_1 & ~=~ & ~~(170)_b(243)(687) & ~=~ & ~~ (170)_b \langle \, 1 \, 4 \, 2 \, 3 \, 5 \, 8 \, 6 \, 7 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_2 & ~=~ & ~~(204)_b(123)(567) & ~=~ & ~~(204)_b \langle \, 2 \, 3 \, 1 \, 4 \, 6 \, 7 \, 5 \, 8 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_3 & ~=~ & ~~(102)_b(134)(578) & ~=~ & ~~(102)_b \langle \, 3 \, 2 \, 4 \, 1 \, 7 \, 6 \, 8 \, 5 \rangle & ~~~, \\ ~{\bm {{\rm L}}}_4 & ~=~ & ~~(0)_b(142)(586) & ~=~ & ~~(0)_b \langle \, 4 \, 1 \, 3 \, 2 \, 8 \, 5 \, 7 \, 6 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_5 & ~=~ & ~~(15)_b(15)(364728) & ~=~ & ~~(15)_b \langle \, 5 \, 8 \, 6 \, 7 \, 1 \, 4 \, 2 \, 3 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_6 & ~=~ & ~~(105)_b(163527)(48) & ~=~ & ~~(105)_b \langle \, 6 \, 7 \, 5 \, 8 \, 2 \, 3 \, 1 \, 4 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_7 & ~=~ & ~~(195)_b(26)(174538) & ~=~ & ~~ (195)_b \langle \, 7 \, 6 \, 8 \, 5 \, 3 \, 2 \, 4 \, 1 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_8 & ~=~ & ~~(165)_b(37)(182546) & ~=~ & ~~ (165)_b \langle \, 8 \, 5 \, 7 \, 6 \, 4 \, 1 \, 3 \, 2 \, \rangle &~~~, \\ \end{array} \label{cmcm} \end{equation} ${ ({\cal R}) \,=\, (\bm{CT})}$ $$ \begin{array}{cccccc} ~~~~~~~{\bm {{\rm L}}}_1 & ~=~ & ~~~(234)_b(243)(678) {~~~}&~~~~~~=~ & ~~ (234)_b \langle \, 1 \, 4 \, 2 \, 3 \, 5 \, 7 \, 8 \, 6 \, \rangle &\,~\,~, ~\,~ {\,} \\ ~~~~~~~{\bm {{\rm L}}}_2 & ~=~ & (76)_b(123)(568) &~~~~~~=~ & ~~ (76)_b \langle \, 2 \, 3 \, 1 \, 4 \, 6 \, 8 \, 7 \, 5 \, \rangle &, \\ ~~~~~~~{\bm {{\rm L}}}_3 & ~=~ & (134)_b(134)(576) &~~~~~~=~ & ~~ (134)_b \langle \, 3 \, 2 \, 4 \, 1 \, 7 \, 5 \, 6 \, 8 \rangle &, \\ ~~~~~~~{\bm {{\rm L}}}_4 & ~=~ & (32)_b(142)(587) &~~~~~~=~ & ~~ (32)_b \langle \, 4 \, 1 \, 3 \, 2 \, 8 \, 6 \, 5 \, 7 \, \rangle &, \\ \end{array} $$ \begin{equation} \begin{array}{cccccc} ~~~~{\bm {{\rm L}}}_5~~~ & ~=~ & ~~(11)_b(15)(28)(36)(47) & ~=~ & ~~ (11)_b \langle \, 5 \, 8 \, 6 \, 7 \, 1 \, 3 \, 4 \, 2 \, \rangle 
&~~~, \\ ~~~~{\bm {{\rm L}}}_6~~~ & ~=~ & ~~(173)_b(1648)(2735) & ~=~ & ~~ (173)_b \langle \, 6 \, 7 \, 5 \, 8 \, 2 \, 4 \, 3 \, 1 \, \rangle &~~~, \\ ~~~~{\bm {{\rm L}}}_7~~~ & ~=~ & ~~ (103)_b(1726)(3845) & ~=~ & ~~ (103)_b \langle \, 7 \, 6 \, 8 \, 5 \, 3 \, 1 \, 2 \, 4 \, \rangle &~~~, \\ ~~~~{\bm {{\rm L}}}_8~~~ & ~=~ & ~~(193)_b(1837)(2546) & ~=~ & ~~ (193)_b \langle \, 8 \, 5 \, 7 \, 6 \, 4 \, 2 \, 1 \, 3 \, \rangle &~~~, \\ \end{array} \end{equation} ${ ({\cal R}) \,=\, (\bm{CV})}$ \begin{equation} \begin{array}{cccccc} ~{\bm {{\rm L}}}_1 & ~=~ & ~~(170)_b (243)(5687) & ~=~ & ~~ (170)_b \langle \, 1 \, 4 \, 2 \, 3 \, 6 \, 8 \, 5 \, 7 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_2 & ~=~ & ~~(204)_b(123)(67) & ~=~ & ~~ (204)_b \langle \, 2 \, 3 \, 1 \, 4 \, 5 \, 7 \, 6 \, 8 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_3 & ~=~ & ~~(6)_b(134)(58) & ~=~ & ~~ (6)_b \langle \, 3 \, 2 \, 4 \, 1 \, 8 \, 6 \, 7 \, 5 \rangle &~~~, \\ ~{\bm {{\rm L}}}_4 & ~=~ & ~~(96)_b(142)(5786) & ~=~ & ~~ (96)_b \langle \, 4 \, 1 \, 3 \, 2 \, 7 \, 5 \, 8 \, 6 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_5 & ~=~ & ~~(210)_b(15283647) & ~=~ & ~~ (210)_b \langle \, 5 \, 8 \, 6 \, 7 \, 2 \, 4 \, 1 \, 3 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_6 & ~=~ & ~~(180)_b(1635)(27)(48) & ~=~ & ~~(180)_b \langle \, 6 \, 7 \, 5 \, 8 \, 1 \, 3 \, 2 \, 4 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_7 & ~=~ & ~~(126)_b(1738)(26)(45) & ~=~ & ~~(126)_b \langle \, 7 \, 6 \, 8 \, 5 \, 4 \, 2 \, 3 \, 1 \, \rangle &~~~, \\ ~{\bm {{\rm L}}}_8 & ~=~ & ~~(24)_b(18253746) & ~=~ & ~~(24)_b \langle \, 8 \, 5 \, 7 \, 6 \, 3 \, 1 \, 4 \, 2 \, \rangle &~~~, \\ \end{array} \end{equation} ${ ({\cal R}) \,=\, (\bm{TT})}$ $$ \eqalign{ {\bm {{\rm L}}}_1 ~&~=~ (238)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (234)(678) \cr ~&~=~ (238)_b \, \left[ \, \left( \fracm12 \right) \left( \, \,({\bm {{\rm I}}} + {\bm 
{\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 1 \, 3 \, 4 \, 2 \, 5 \, 7 \, 8 \, 6 \, \rangle ~~, \cr {\bm {{\rm L}}}_2 ~&~=~(68)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (124)(568) \cr ~&~=~ (68)_b \, \left[ \, \left( \fracm12 \right) \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 2 \, 4 \, 3 \, 1 \, 6 \, 8 \, 7 \, 5 \, \rangle \,~~~, \cr {\bm {{\rm L}}}_3 ~&~=~(136)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (132)(576) \cr ~&~=~ (136)_b \, \left[ \, \left( \fracm12 \right) \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 3 \, 1 \, 2 \, 4 \, 7 \, 5 \, 6 \, 8 \rangle ~~~,\cr {\bm {{\rm L}}}_4 ~&~=~(34)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (143)(587) \cr ~&~=~ (34)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 4 \, 2 \, 1 \, 3 \, 8 \, 6 \, 5 \, 7 \, \rangle ~~~~, \cr {\bm {{\rm L}}}_5 ~&~=~(238)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (15)(274638) \cr ~&~=~ (238)_b \, \left[ \, \left( \fracm12 \right) \left( \, \,({\bm {{\rm 
I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 5 \, 7 \, 8 \, 6 \, 1 \, 3 \, 4 \, 2 \, \rangle ~~, \cr {\bm {{\rm L}}}_6 ~&~=~(68)_b \, \left[ \, \left( \fracm12 \right) \left( \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (164528)(37) \cr ~&~=~ (68)_b \, \left[ \, \left( \fracm12 \right) \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 6 \, 8 \, 7 \, 5 \, 2 \, 4 \, 3 \, 1 \, \rangle \,~~~, \cr {\bm {{\rm L}}}_7 ~&~=~(136)_b \, \left[ \, \left( \fracm12 \right) \left( \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (172536)(48) \cr ~&~=~ (136)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 7 \, 5 \, 6 \, 8 \, 3 \, 1 \, 2 \, 4 \, \rangle ~\,~, } $$ \begin{equation} \eqalign{ {\bm {{\rm L}}}_8 ~&~\,=~(34)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (183547)(26) \cr ~&\,~=~ (34)_b \, \left[ \, \left( \fracm12 \right) \left( \, ({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 8 \, 6 \, 5 \, 7 \, 4 \, 2 \, 1 \, 3 \, \rangle ~~~~, \cr } \end{equation} ${ ({\cal R}) \,=\, (\bm{TV})}$ \begin{equation} \eqalign{ {\bm {{\rm L}}}_1 ~&~=~ (174)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, 
\otimes \, {{\bm {{\rm I}}}}_4 \, \right] (234)(5687) \cr ~&~=~ (174)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 1 \, 3 \, 4 \, 2 \, 6 \, 8 \, 5 \, 7 \, \rangle ~~, \cr {\bm {{\rm L}}}_2 ~&~=~(196)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (124)(67) \cr ~&~=~ (196)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 2 \, 4 \, 3 \, 1 \, 5 \, 7 \, 6 \, 8 \, \rangle \,~~~, \cr {\bm {{\rm L}}}_3 ~&~=~(8)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (132)(58) \cr ~&~=~ (8)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 3 \, 1 \, 2 \, 4 \, 8 \, 6 \, 7 \, 5 \rangle \,~~,\cr {\bm {{\rm L}}}_4 ~&~=~(98)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (143)(5786) \cr ~&~=~ (98)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 4 \, 2 \, 1 \, 3 \, 7 \, 5 \, 8 \, 6 \, \rangle \,~~~, \cr {\bm {{\rm L}}}_5 ~&~=~(174)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm 
I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (38)(46)(1527) \cr ~&~=~ (174)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 5 \, 7 \, 8 \, 6 \, 2 \, 4 \, 1 \, 3 \, \rangle ~~, \cr {\bm {{\rm L}}}_6 ~&~=~(196)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (16372845) \cr ~&~=~ (196)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 6 \, 8 \, 7 \, 5 \, 1 \, 3 \, 2 \, 4 \, \rangle \,~~~, \cr {\bm {{\rm L}}}_7 ~&~=~(8)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (17362548) \cr ~&~=~ (8)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 7 \, 5 \, 6 \, 8 \, 4 \, 2 \, 3 \, 1 \, \rangle \,~\, , \cr {\bm {{\rm L}}}_8 ~&~=~(98)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (35)(47)(1862) \cr ~&~=~ (98)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 8 \, 6 \, 5 \, 7 \, 3 \, 1 \, 4 \, 2 \, \rangle ~~\,~, \cr } \end{equation} ${ ({\cal R}) \,=\, (\bm{VV})}$ $$ \eqalign{ {\bm {{\rm L}}}_1 
~&~=~ (170)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (1243)(5687) \cr ~&~=~ (170)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 2 \, 4 \, 1 \, 3 \, 6 \, 8 \, 5 \, 7 \, \rangle ~~~~~~~, \cr {\bm {{\rm L}}}_2 ~&~=~(204)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (23)(67) \cr ~&~=~ (204)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 1 \, 3 \, 2 \, 4 \, 5 \, 7 \, 6 \, 8 \, \rangle ~~~~~~~, \cr {\bm {{\rm L}}}_3 ~&~=~(0)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (14)(58) \cr ~&~=~ (0)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 4 \, 2 \, 3 \, 1 \, 8 \, 6 \, 7 \, 5 \rangle ~~~~~~~~~~, } $$ \begin{equation} \eqalign{ {\bm {{\rm L}}}_4 ~&~=~(102)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (1342)(5786) \cr ~&~=~ (102)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_+ + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_+ \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] 
\langle \, 3 \, 1 \, 4 \, 2 \, 7 \, 5 \, 8 \, 6 \, \rangle ~~~~~~, \cr {\bm {{\rm L}}}_5 ~&~=~(170)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (1647)(2835) \cr ~&~=~ (170)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 6 \, 8 \, 5 \, 7 \, 2 \, 4 \, 1 \, 3 \, \rangle ~~~~~~, \cr {\bm {{\rm L}}}_6 ~&~=~(204)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (15)(27)(36)(48) \cr ~&~=~ (204)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 5 \, 7 \, 6 \, 8 \, 1 \, 3 \, 2 \, 4 \, \rangle ~~~~~~~, \cr {\bm {{\rm L}}}_7 ~&~=~(0)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (18)(26)(37)(45) \cr ~&~=~ (0)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 8 \, 6 \, 7 \, 5 \, 4 \, 2 \, 3 \, 1 \, \rangle ~~~~~~~~~~, \cr {\bm {{\rm L}}}_8 ~&~\,=~(102)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} - {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] (1746)(2538) \cr ~&~=~ (102)_b \, \left[ \, \left( \fracm12 \right) \, \left( \, \,({\bm {{\rm I}}} + {\bm {\sigma^3}})n_- + \,({\bm {{\rm I}}} 
- {\bm {\sigma^3}})m_- \, \right) \, \otimes \, {{\bm {{\rm I}}}}_4 \, \right] \langle \, 7 \, 5 \, 8 \, 6 \, 3 \, 1 \, 4 \, 2 \, \rangle ~~~~~~~. \cr } \end{equation} There are twenty-four vertices (= 4!) associated with the permutahedron of order four (for 4D, $ {\cal N} $ = 1 supersymmetry). This same argument implies that 4D, $ {\cal N} $ = 2 theories must have a permutahedron with 8! = 8 $\cdot$ 7 $\cdot$ 6 $\cdot$ 5 $\cdot$ 4! = (1,680) $\cdot$ (24) = 40,320 vertices. In fact, this polytope, the Permutahedron of order 8, is known as the hexipentisteriruncicantitruncated 7-simplex, or more simply, as the omnitruncated 7-simplex, depicted in Fig. \ref{7simplex} \begin{figure}[h] \includegraphics[width=8cm]{7simplex.png} \centering \caption{The permutahedron of order eight, known as the omnitruncated 7-simplex, which exists in 7 dimensions and has 40,320 vertices} \label{7simplex} \end{figure} \begin{equation} \begin{array}{cccccccccccc} {\bm {\cal P}}_{1} &=& \, {\bm {\rm I}}{}_{2}\ & \otimes &{\bm {\rm I}}{}_{2 } & \otimes &{\bm {\rm I}}{}_{2} &~=~ &()& ~=~& \langle 1 \, 2 \, 3 \, 4\, 5 \, 6 \, 7 \, 8 \, \rangle&~~~, ~~~ \\ {\bm {\cal P}}_{2} &=& {\bm {\rm I}}{}_{2} & \otimes &{\bm {\rm {I}}}{}_{2} & \otimes &{\bm \sigma}^1 & ~=~& (12)(34)(56)(78) & ~=~& \langle 2 \, 1 \, 4 \, 3\, 6 \, 5 \, 8 \,7 \, \rangle &~~~, ~~~ \\ {\bm {\cal P}}_{3} &=&{\bm {\rm I}}{}_{2} & \otimes &{\bm \sigma}^1 & \otimes & {\bm {\rm I}}{}_{2} &~=~ & (13)(24)(57)(68) & ~=~& \langle 3 \, 4 \, 1 \, 2\, 7 \, 8 \, 5 \, 6 \, \rangle&~~~, ~~~ \\ {\bm {\cal P}}_{4} &=& {\bm {\rm I}}{}_{2} & \otimes &{\bm \sigma}^1 & \otimes &{\bm \sigma}^1 &~=~ & (14)(23)(58)(67) &~=~ & \langle 4 \, 3 \, 2 \, 1\, 8 \, 7 \, 6 \,5 \rangle &~~~, ~~~ \\ {\bm {\cal P}}_{5} &=& {\bm \sigma}^1 & \otimes &{\bm {\rm I}}{}_{2} & \otimes &{\bm {\rm I}}{}_{2} &~=~ & (15)(26)(37)(48) & ~=~& \langle 5 \, 6 \, 7 \, 8\, 1 \, 2 \, 3 \, 4 \, \rangle &~~~, ~~~ \\ {\bm {\cal P}}_{6} &=& {\bm \sigma}^1 & \otimes &{\bm {\rm 
I}}{}_{2} & \otimes & {\bm \sigma}^1 &~=~ & (16)(25)(38)(47) &~=~ & \langle 6 \, 5 \, 8 \, 7\, 2 \, 1 \, 4 \, 3 \, \rangle &~~~, ~~~ \\ {\bm {\cal P}}_{7} &=& {\bm \sigma}^1 & \otimes &{\bm \sigma}^1 & \otimes &{\bm {\rm I}}{}_{2 } &~=~ & (17)(28)(35)(46) & ~=~& \langle 7 \, 8 \, 5 \, 6\, 3 \, 4 \, 1 \, 2 \, \rangle &~~~, ~~~ \\ {\bm {\cal P}}_{8} &=& {\bm \sigma}^1 & \otimes &{\bm \sigma}^1 & \otimes &{\bm \sigma}^1 & ~=~& (18)(27)(36)(45) & ~=~& \langle 8 \, 7 \, 6 \, 5\, 4 \, 3 \, 2 \, 1 \, \rangle &~~~. ~~~ \\ \end{array} \label{N8diaX} \end{equation} Several questions remain for future inquiry. If the faces of permutahedra can be interpreted as consisting of supercharges of lower degree supermultiplets, can we embed these supercharges of a lower degree permutahedron into a higher degree permutahedron, thereby creating a mechanism for generating higher $\cal N$ supermultiplets from lower $\cal N$ supermultiplets? Another pressing question to pursue in the context of permutahedra is the interpretation of the non-closure terms in Eq.\ (\ref{CLFF3}). Lastly, does the Bruhat weak ordering metric on the permutations play a role in the sorting done by the eigenvalues? Answering these questions will require: \newline \indent (a.) explicit knowledge of the arrangements of {\em {all}} 40,320 permutations in the ${\mathbb S}{}_8$ \newline \indent $~~~~~$ permutahedron, and \newline \indent (b.) the calculation of all two-point ``correlator'' matrices \cite{pHEDRON} between the \newline \indent $~~~~~$ permutations that appear in the six representations seen above, together with \newline \indent $~~~~~$ those associated with the permutations described in Eq.\ (\ref{N8diaX}). \vskip0.05in \noindent Thanks to modern coding and computing infrastructure, this is a surmountable problem. In a future paper these results will be reported.
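As a consistency check, the correspondence between the cycle notation and the bra (one-line) notation used in the tables above can be verified mechanically; a minimal Python sketch, assuming the convention that $\langle \, w(1) \, \cdots \, w(8) \, \rangle$ lists the images $i \to w(i)$:

```python
# Verify the cycle-notation <-> one-line-notation entries listed above.
# Convention assumed: <w(1) ... w(8)> encodes i -> w(i), and a cycle
# (a b c ...) sends a -> b, b -> c, ..., last -> a.
import math

def cycles_to_one_line(cycles, n=8):
    """Convert a product of disjoint cycles to one-line notation."""
    w = list(range(1, n + 1))           # start from the identity
    for cyc in cycles:
        for i, a in enumerate(cyc):
            w[a - 1] = cyc[(i + 1) % len(cyc)]
    return w

# Spot checks against the tables above.
assert cycles_to_one_line([(1, 2, 4, 3), (5, 6, 8, 7)]) == [2, 4, 1, 3, 6, 8, 5, 7]      # L_1
assert cycles_to_one_line([(2, 3), (6, 7)]) == [1, 3, 2, 4, 5, 7, 6, 8]                  # L_2
assert cycles_to_one_line([(1, 8), (2, 7), (3, 6), (4, 5)]) == [8, 7, 6, 5, 4, 3, 2, 1]  # P_8

# Vertex counts of the order-4 and order-8 permutahedra quoted in the text.
assert math.factorial(4) == 24 and math.factorial(8) == 40320
```

The same routine extends directly to enumerating all 40,320 elements of $ {\mathbb S}_8 $ for task (a.) above.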
\vspace{.05in} \begin{center} \parbox{4in}{{\it ``As the prerogative of Natural Science is to cultivate \\ $~~$ a taste for observation, that of Mathematics is, almost \\ $~~$ from the starting point, to stimulate the faculty of \\ $~~$ invention.'' \\ ${~}$ ${~}$ \\ ${~}$ }\,\,-\, J.\ J.\ Sylvester $~~~~~~~~~$} \parbox{4in}{ $~~$} \end{center} \noindent {\bf Acknowledgments}\\[.1in] \indent This research is supported in part by the endowment of the Ford Foundation Professorship of Physics at Brown University and the Brown Theoretical Physics Center. Additional acknowledgment is given for their participation in the 2020 SSTPRS (Student Summer Theoretical Physics Research Session) program by Devin Bristow, John H. Caporaletti, Aleksander Cianciara, Delina Levine, and Gabriel Yerger. \newpage
\section{Introduction} High-dimensional quantum bits hold great potential for quantum communication owing to their robustness in realistic noisy environments \cite{cerf2002security, erhard2020advances, cozzolino2019high}. Implementations that encode information in the transverse spatial modes of photons are especially promising due to the large Hilbert space they span \cite{molina2007twisted, erhard2018twisted}. In recent years, such implementations were successfully demonstrated in free space \cite{krenn2015twisted, sit2017high}. Meanwhile, multimode fiber-based technologies are expected to enable high-dimensional quantum communication without a line of sight, building on existing multimode fiber components and infrastructure \cite{loffler2011fiber,kang2012measurement,cozzolino2019air,alarcon2021few,valencia2020unscrambling,cao2020distribution,zhou2021high}. The leading approach for generating photons entangled in transverse spatial modes is spontaneous parametric down-conversion in bulk crystals \cite{mair2001entanglement}. However, coupling transversely entangled photons into fibers is extremely challenging, since it requires a precise mapping between the free-space transverse modes and the fiber guided modes. Indeed, most demonstrations of distributing spatially entangled photons through fibers are limited to coupling only two guided modes \cite{loffler2011fiber,kang2012measurement,cozzolino2019air,alarcon2021few}. Recently, distribution of a photon entangled in six spatial modes over a 2 meter-long fiber \cite{valencia2020unscrambling}, and in three spatial modes over a 1 km-long fiber, were demonstrated \cite{cao2020distribution}. However, these methods require accurate calibration, limiting implementations in real-life scenarios. An alternative to coupling free-space entangled photons into a fiber is to generate the photons inside the fiber by spontaneous four-wave mixing (SFWM).
Over the past two decades, generation of photon pairs by SFWM has been studied in multiple types of single-mode optical fibers \cite{park2020telecom}, including photonic crystal fibers \cite{sharping2004quantum, rarity2005photonic, fan2005efficient, cohen2009tailored}, dispersion-shifted fibers \cite{li2005optical, takesue20051, dyer2009high}, and birefringent fibers \cite{smith2009photon, lugani2020spectrally}. SFWM in multimode fibers was recently utilized for generating photons occupying high-dimensional transverse modes \cite{cruz2016fiber, rottwitt2018photon, guo2019generation}. Generating photon pairs in a superposition of multiple fiber modes requires precise analysis of the phase matching conditions that allow multiple SFWM processes in the same spectral channel \cite{pourbeyram2018photon, rottwitt2019quantum, ekici2020graded, goudreau2020theory}. These theoretical works predict that such photon pair sources can be tuned over a wide range of wavelengths, from the ultraviolet to the infrared and the telecommunication range. Experimentally, such phase matching conditions were recently studied for parametric amplification of weak signals \cite{shamsshooli2020toward}, but not in the spontaneous regime. Hence, correlations between pairs of photons generated in multiple fiber modes have not been measured to date. In this work, we propose and demonstrate a fiber source of photon pairs which occupy multiple fiber modes. Our measurements prove that the photons are correlated in the guided-mode basis, by mapping the modes the photons occupy to their arrival times at the end of a 1 km-long fiber. The 1 km-long fiber acts as an all-fiber in-line mode sorter, in contrast to the bulk free-space mode sorters that are typically used for measuring correlations between transverse modes \cite{fontaine2019laguerre, mair2001entanglement, krenn2014generation, berkhout2010efficient}.
Our in-line mode sorting configuration allows us to measure the two-dimensional (2D) histogram of the arrival times of the photons, which reveals that the photons occupy three guided modes of the fiber. By analyzing the histogram we obtain the two-photon modal decomposition and verify the spatial correlations of photon pairs generated in the multimode fiber. \section{Results} \subsection{Multimode Correlated Photon Source} Our source is based on coupling Ti:Sapphire mode-locked pulses (pulse duration $140 fs$, wavelength $\lambda_{pump}=695 nm$) into a few-mode fiber, as shown in Figure \ref{Fig1}. In SFWM, two pump photons are spontaneously annihilated and two photons, called signal and idler, are generated in two spectral channels ($\lambda_{s}=542 nm$, $\lambda_{i}=970 nm$). Each spectral channel is composed of many different spatial modes. The photons occupy the guided modes of the fiber, which can be approximated by the linearly polarized (LP) modes of a weakly guiding optical fiber. The state of the photons is determined by the phase matching conditions and can be written as: $|\Psi\rangle = \alpha|LP_{02}\rangle_{s}|LP_{01}\rangle_{i}+\beta|LP_{11}\rangle_{s}|LP_{11}\rangle_{i}$ where the subscript s (i) marks the mode of the signal (idler) photon and the coefficients $\alpha, \beta$ are determined by the nonlinear overlap integral (see Supplementary Equation 3). The term $|LP_{01}\rangle_{s}|LP_{02}\rangle_{i}$ is not present in the quantum state, as the mode $LP_{02}$ is not guided in our fiber at the wavelength of the idler photon. An extension of this scheme to higher dimensions and other spectral bands is presented in Supplementary Note 1. The photon pairs are generated mostly in the first few tens of centimeters of the fiber, after which the peak power of the pump pulse is too weak for SFWM due to its temporal spreading; for more information see Supplementary Note 2.
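The modal correlation encoded in $|\Psi\rangle$ can be illustrated with a short numerical sketch; the values of $\alpha$ and $\beta$ below are illustrative only (the physical coefficients follow from the overlap integral in Supplementary Equation 3):

```python
import numpy as np

# Two-photon state |Psi> = alpha |LP02>_s |LP01>_i + beta |LP11>_s |LP11>_i,
# written as a coefficient matrix C[m, n] over the guided-mode basis
# (LP01, LP11, LP02) of each photon. alpha, beta are illustrative values.
modes = ("LP01", "LP11", "LP02")
alpha, beta = 0.8, 0.6          # |alpha|^2 + |beta|^2 = 1 (illustrative)

C = np.zeros((3, 3))
C[modes.index("LP02"), modes.index("LP01")] = alpha
C[modes.index("LP11"), modes.index("LP11")] = beta

# Schmidt coefficients = singular values of C; more than one nonzero value
# means the state cannot factor into a product of single-photon modal states.
schmidt = np.linalg.svd(C, compute_uv=False)
schmidt_rank = int(np.sum(schmidt > 1e-12))
```

A Schmidt rank of 2 here reflects the modal (non-separable) correlation between signal and idler.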
To quantify the efficiency of the pair generation we use a $20 cm$ section of SMF-28 to measure the coincidence detection rate as a function of the average pump power, exhibiting a quadratic scaling as expected for a four-wave mixing process (Fig. 1b). The coincidence-to-accidental ratio we obtain for a pump power of 10mW is 850 (see Supplementary Note 3 for more details). In principle, to improve the coincidence rate we could use higher pump powers. Increasing the pump power, however, will also increase parasitic Raman scattering. In our system Raman scattering hardly adds noise since it is temporally separated from the generated photon pairs. However, the pump power is limited since the photon counts due to Raman scattering exceed the maximal count rate of our detectors ($\approx5MHz$). This limitation can be circumvented by using superconducting nanowire detectors with an order of magnitude higher maximal count rates ($\approx50MHz$), or by using in-line fiber Bragg gratings to filter out the pump light before the sorter, so that the pump does not generate Raman scattering along the 1 km-long fiber. \begin{figure}[H] \begin{centering} \includegraphics[width=\columnwidth]{Fig1.pdf} \par\end{centering} \caption{ \textbf{An all-fiber multimode source and mode sorter for photon pairs correlated in the fiber modes.} a) Ultrashort pulses of 140fs ($\lambda_{pump}=695nm$) are coupled into a 1 km-long fiber. Pump photons are spontaneously annihilated and pairs of signal and idler photons are generated at two different spectral channels $(\lambda_{s}=542nm,\lambda_{i}=970nm)$. At these wavelengths, the fiber (SMF-28) supports a few modes, where the modal distribution of the photon pairs is determined by the phase matching condition of the fiber. After the first few tens of centimeters, the temporal spread of the pump pulse prevents SFWM. In the next 1 km-long section of the fiber, the different modes are separated due to modal dispersion (inset).
Higher spatial modes arrive after lower spatial modes, and shorter wavelengths arrive after longer wavelengths. At the output of the fiber the signal and idler photons are spectrally separated by a dichroic mirror (DM), filtered by a bandpass filter (BPF), and their arrival times are registered using two single photon detectors and a time-to-digital converter (TDC). An electronic delay of $70 ns$ is introduced to the idler detector to compensate for the chromatic delay between the signal and idler photons. b) The experimentally measured coincidence rate as a function of the average pump power for a $20 cm$-long fiber, exhibiting quadratic scaling.} \label{Fig1} \end{figure} \subsection{Multimode Photon Sorter} Next, we use a 1 km-long section of the same fiber, which serves both as a photon-pair source and as a mode sorter of the fiber's guided modes. Due to modal Group Delay Dispersion (GDD), the arrival times of the photons at the end of the fiber depend on their modal distribution and their spectral channel, as depicted in Figure \ref{Fig1}. We can therefore map the arrival times of the photons to their modal decomposition, up to modal degeneracy in symmetric fiber cores. Although this sorting scheme is quite common in classical optics \cite{painchaud1992time}, it was only recently demonstrated at the single-photon level, for weak coherent pulses \cite{chandrasekharan2020observing}. Here we use the same principle for entangled photons. In our set-up, the temporal resolution is limited by the jitter of the avalanche photodiodes, which is $400 ps$. Since the GDD of our fiber is on the scale of $1 ns/km$, a 1 km-long fiber is sufficient to temporally separate the modes.
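The condition that the accumulated modal delay exceed the detector jitter fixes the minimum sorter length; a back-of-the-envelope sketch using the figures quoted above:

```python
# Mode-to-time mapping: with a modal group-delay difference of ~1 ns/km, the
# delay accumulated over a fiber of length L must exceed the detector jitter
# for the modes to be resolved. Numbers below are those quoted in the text.
GDD_NS_PER_KM = 1.0       # modal group-delay difference, ns/km
JITTER_NS = 0.4           # avalanche-photodiode timing jitter (400 ps)

def modal_separation_ns(length_km, gdd_ns_per_km=GDD_NS_PER_KM):
    """Temporal separation between adjacent mode groups after length_km."""
    return gdd_ns_per_km * length_km

def min_sorting_length_km(jitter_ns=JITTER_NS, gdd_ns_per_km=GDD_NS_PER_KM):
    """Shortest fiber for which the modal separation exceeds the jitter."""
    return jitter_ns / gdd_ns_per_km

# A 1 km fiber gives 1 ns of separation, comfortably above the 400 ps jitter.
```

The same estimate shows why lower-jitter detectors would permit shorter sorting fibers, as discussed later in the text.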
\subsection{Two-Photon Modal Distribution Measurement} To investigate the modal distribution of the two-photon state, we use the mode-to-time mapping and study the temporal two-photon probability $P(T_{s},T_{i})$, i.e., the probability of detecting a signal photon at time $T_{s}$ and an idler photon at time $T_{i}$. To this end, we plot the two-dimensional histogram of the arrival times after compensating for chromatic dispersion (Figure \ref{Fig2}(a)). Two correlation peaks are observed, corresponding to the delay between either $|LP_{02}\rangle_{s}$ and $|LP_{01}\rangle_{i}$ or between $|LP_{11}\rangle_{s}$ and $|LP_{11}\rangle_{i}$. Clearly, the two-photon probability is non-separable, indicating that the photons are correlated in the modal basis. To quantify the correlation of the two photons we post-select two arrival times for the signal $(T_{s}^{(1)},T_{s}^{(2)})$ and two arrival times for the idler $(T_{i}^{(1)},T_{i}^{(2)})$. The post-selected arrival times are chosen to maximize the Pearson correlation coefficient: $PCC=\sum_{k=1}^{2} \sum_{l=1}^{2} P(T_{s}^{(k)},T_{i}^{(l)})(T_{s}^{(k)}-\mu _ {T_{s}})(T_{i}^{(l)}-\mu _ {T_{i}})/(\sigma _{T_{s}}\sigma _{T_{i}})$, where $\mu _ {T_{s}},\mu _{T_{i}}$ are the mean arrival times of the signal and idler photons and $\sigma _{T_{s}},\sigma _{T_{i}}$ are their standard deviations. We obtain $PCC=0.51\pm0.012$, which indicates a strong correlation. The main source of correlation degradation in our system is the $400 ps$ jitter of the detectors, which causes a circular smearing of the histogram peaks. Another source of decorrelation is the uncertainty in the creation times of the pairs, which results in a diagonal spread of $\approx 200 ps$ that hardly affects the PCC between the chosen arrival times.
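The post-selected PCC defined above can be evaluated in a few lines of code; the $2\times2$ counts below are made up for illustration (in the experiment they are read off the measured histogram), and the sign of the PCC depends on how the arrival times are ordered:

```python
import numpy as np

def pearson_from_histogram(P, Ts, Ti):
    """Pearson correlation coefficient of a (post-selected) joint
    probability P[k, l] over signal/idler arrival times Ts[k], Ti[l],
    following the definition in the text."""
    P = np.asarray(P, dtype=float)
    P = P / P.sum()                        # normalize the post-selected block
    ps, pi = P.sum(axis=1), P.sum(axis=0)  # marginal distributions
    mu_s, mu_i = ps @ Ts, pi @ Ti
    var_s = ps @ (Ts - mu_s) ** 2
    var_i = pi @ (Ti - mu_i) ** 2
    cov = np.sum(P * np.outer(Ts - mu_s, Ti - mu_i))
    return cov / np.sqrt(var_s * var_i)

# Illustrative post-selected histogram (counts are invented, not measured):
# strong coincidences between the two mode-pair peaks.
Ts = np.array([3.5, 2.5])   # ns, signal arrival times (illustrative)
Ti = np.array([0.4, 0.9])   # ns, idler arrival times (illustrative)
counts = np.array([[90., 15.],
                   [10., 85.]])
pcc = pearson_from_histogram(counts, Ts, Ti)
```

With dominant counts on the paired arrival times, $|PCC|$ comes out well above 0.5, mirroring the strong correlation reported above.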
In principle, intermodal coupling can also add decorrelation; however, the PCC is sensitive only to mode mixing that occurs in the first few tens of centimeters of the fiber, because the arrival times of photons which experience mode coupling after a longer distance will differ from the post-selected times $(T_{s}^{(1)},T_{s}^{(2)}),(T_{i}^{(1)},T_{i}^{(2)})$. Thus, PCC degradation due to intermodal mixing, which is typically on the order of $20 dB/km$ \cite{kaliteevskiy2013two}, is negligible. \begin{figure}[H] \begin{centering} \includegraphics[width=\columnwidth]{Fig2.pdf} \par\end{centering} \caption{\textbf{Temporal two-photon probability.} (a) Histogram of the arrival times $T_{s},T_{i}$ of the signal and idler photons. The arrival times are measured relative to an electronic trigger from the pump laser which serves as a global clock, and after adding an electronic delay of 70ns to the idler detector to compensate for the chromatic delay between the signal and idler photons. The two off-diagonal peaks indicate that the two-photon state is not separable. We therefore conclude that the photons are correlated in the modal basis. The two peaks correspond to occupation of the modes $|LP_{02}\rangle_{s}|LP_{01}\rangle_{i}$ and $|LP_{11}\rangle_{s}|LP_{11}\rangle_{i}$, as verified by numerical computation of the fiber's modal group delays. (b),(c) Cross sections of the two-dimensional histogram along the lines marked in (a), emphasizing the modal correlations. For example, post-selecting events with an idler arrival time of $T_{i}^{(1)}=0.4ns$ (green solid curve) shows localization of the signal photon at $T_{s}^{(2)}=3.5ns$. In (b) the post-selection is on the signal photon, while in (c) it is on the idler photon.
The measured delay between the two peaks is $\Delta T_{s}=1ns$ for the signal photons and $\Delta T_{i}=0.5ns$ for the idler photons.} \label{Fig2} \end{figure} \subsection{Modal Group Delay Simulation} To show that the measured delays between the signal and idler photons match the expected delays for an SMF-28 fiber, we numerically calculated its modal group delays. We solve the scalar wave equation for an SMF-28 fiber, with a $4.2 \mu m$ core radius, a core-cladding index difference of $\Delta=0.33\%$, and a step-index profile with a typical dip shape. The modal delays of the $LP_{11},LP_{02}$ modes, relative to the fundamental mode, are presented in Figure \ref{Fig3}. We chose the fundamental mode as a reference to cancel the chromatic dispersion. At the signal's wavelength the delay between $LP_{02}$ and $LP_{11}$ is $\Delta T_s=1 ns$. At the idler's wavelength the delay between $LP_{01}$ and $LP_{11}$ is $\Delta T_i=0.5 ns$. These delays are in agreement with the temporal correlations found experimentally, supporting the mode-to-time mapping scheme. \begin{figure}[H] \begin{centering} \includegraphics[width=\columnwidth]{Fig3.pdf} \par\end{centering} \caption{ \textbf{Numerical computation of the modal delays of the $LP_{11}$ (blue curve) and the $LP_{02}$ (red curve) modes.} The delays are presented relative to the fundamental mode $LP_{01}$, to compensate for chromatic delay. For the signal photon at $\lambda_s=542nm$, the delay between the $LP_{02}$ and $LP_{11}$ modes is $\Delta T_{s}^{(th)}=1ns/km$, in agreement with the experimentally measured delays presented in Figure \ref{Fig2}(c).
For the idler photon at $\lambda_i=970nm$, the delay between the $LP_{11}$ and $LP_{01}$ modes is $\Delta T_{i}^{(th)}=0.5ns/km$, in agreement with the measured delays reported in Figure \ref{Fig2}(b).} \label{Fig3} \end{figure} \section{Discussion} In conclusion, we have demonstrated generation and sorting of correlated photon pairs occupying high-order modes of a commercially available fiber. The all-fiber configuration opens the door to implementing high-dimensional photonic quantum bits in fiber-based applications. For example, the mode-to-time mapping can potentially solve the challenge of scaling the number of required detectors with the number of fiber modes, an outstanding challenge for conventional mode sorters. Towards this end, it is necessary to improve the temporal resolution of the system, for example by using superconducting nanowire single-photon detectors with jitter times as low as a few picoseconds, together with faster electronics. This will allow sorting more transverse modes and using shorter fibers for the temporal mode sorter, which in turn will decrease the background noise caused by fluorescence and parasitic nonlinear processes in the fiber. In order to manipulate the photons coherently and apply projective measurements in two mutually unbiased bases, one can use multi-plane light converters (MPLC) \cite{lib2021reconfigurable,fontaine2019laguerre, fontaine2021hermite,krenn2014generation}. We note that by combining the all-fiber temporal sorter with an all-fiber wavefront modulator that we recently developed \cite{resisi2020wavefront}, it would be possible to demonstrate an all-fiber sorter in mutually unbiased bases, opening the door to all-fiber quantum communication protocols with high-dimensional quantum bits. Addressing these challenges will allow exploring applications of the all-fiber source and sorter. For example, using an in-line multimode fiber beam splitter one could split the photon pairs and route each photon to a different remote user.
Such a configuration is relevant for device-independent quantum key distribution, where an untrusted user (Charlie) distributes entangled photon pairs to Alice and Bob, who generate a secure key based on Bell measurements \cite{ekert1991quantum}. A more immediate application of the all-fiber source is quantum communication protocols that rely on sending both photons to the same target. Examples include quantum dense coding \cite{hu2018beating,guo2019advances}, high-capacity quantum key distribution \cite{cabello2000quantum,long2002theoretically} and direct quantum communication \cite{deng2003two}. \newline \section{Methods} \subsection{Experimental Setup} An optical fiber (SMF-28) is pumped by a Ti:Sapphire laser (Coherent Chameleon Ultra II, 680-1060nm, 140fs duration, 80MHz repetition rate). Before coupling to the fiber, the laser was filtered using a bandpass filter (Thorlabs FB700-40). The signal and idler photons were separated using a dichroic mirror (DM) with an edge at 925 nm (Semrock FF925-Di01). In each arm the pump beam was blocked using spectral filters. In the signal arm we employed a short-pass filter (Semrock BSP01-633R) and a bandpass filter (Semrock FF01-540). In the idler arm we employed a long-pass filter (Semrock BLP01-808R) and a bandpass filter (Semrock LL01-976). The signal and idler photons were coupled into two optical fibers (SMF-28) and detected using avalanche photodetectors (Excelitas SPCM-AQ4C), with a quantum efficiency of 50\% for the signal photons and 15\% for the idler photons. The arrival times of the photons were registered using a time-to-digital converter (Swabian Time Tagger 20). \section{Acknowledgments} The authors kindly thank Hagai Eisenberg and Avi Pe'er for many fruitful discussions and suggestions. This research was supported by the \textit{United States-Israel Binational Science Foundation (BSF)} (Grant No. 2017694).
KS and YB acknowledge the support of the Israeli Council for Higher Education, the Israel National Quantum Initiative (INQI) and the Zuckerman STEM Leadership Program. \end{multicols} \clearpage \maketitle \section{Supplementary Information} \subsection{Note 1: Scheme for higher-dimensional state generation at telecom and visible wavelengths} To generate high-dimensional states in a superposition of multiple fiber modes, it is required to find a phase matching condition that allows multiple spontaneous four-wave mixing processes in the same spectral channel. Here we show a concrete example of such a phase-matching condition, based on a commercially available graded-index (GRIN) fiber (OM4) spliced to a commercial Ytterbium mode-locked fiber laser. In addition to the possibility of generating high-dimensional quantum states, this scheme also allows generation of photons in the c-band. We start by presenting the phase matching condition that allows high-dimensional entanglement in OM4 fibers. In multimode fibers, one can find multiple modal configurations that satisfy the phase matching condition required for four-wave mixing. There are a few benefits to using GRIN fibers. The guided modes of a GRIN fiber propagate with nearly identical group velocities, and therefore nonlinear coupling among short pulses is achieved over much longer distances than in step-index fibers. More importantly for generating high-dimensional quantum bits, the parabolic refractive index profile of GRIN fibers yields degenerate mode groups with equally spaced propagation constants $\beta_n$ and a degeneracy that scales linearly with the group number $g_n$. It is therefore possible to obtain multiple combinations of guided modes that satisfy the phase matching condition for the same signal and idler frequencies $\omega_s, \omega_i$.
Explicitly, the dependence of the phase-matched signal and idler frequencies on the group number mismatch, defined by $G=-2g_{pump}+g_{idler}+g_{signal}$, is given by \cite{nazemosadat2016phase}: \begin{equation} \omega_{s,i} =\omega_p\pm\sqrt{\frac{\sqrt{2\Delta}G}{R\beta_p^{(2)}(\omega_p)}} \end{equation} where the $+ (-)$ sign corresponds to the signal (idler) frequency, $\omega_p$ is the pump frequency, $\beta_p^{(2)}(\omega_p)$ is the group-velocity dispersion parameter of the pump mode, and $R$, $\Delta$ are the core radius and the maximal refractive index difference between the core and the cladding, respectively. The dimension of the signal and idler photons therefore increases with the group number mismatch $G$. For example, for $G=2$ there are four different modal combinations that yield the same phase-matched frequencies, resulting in four-dimensional quantum states at $\omega_s$ and $\omega_i$. To confirm the above phase matching analysis, we numerically solve the multimode nonlinear Schrödinger equation (MM-NLSE) using the numerical solver developed in \cite{wright2017multimode}. The MM-NLSE is given by: \begin{equation} \frac{\partial A_{k}}{\partial z} = i\sum_{n} \frac{\beta_n^{(k)}}{n!}(i\frac{\partial}{\partial t})^n A_k + i\frac{n_2\omega_p}{c}\sum_{lmn}S_{klmn}A_lA_mA^*_n \end{equation} where $A_k(z,t)$ is the slowly varying amplitude of mode $k$, $z$ is the propagation axis along the fiber and $\beta_n^{(k)} = \partial^n\beta^{(k)}/\partial\omega^n$.
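The closed-form relation for $\omega_{s,i}$ above can also be evaluated directly, independently of the MM-NLSE simulation; a sketch with illustrative GRIN-fiber parameters ($R$, $\Delta$ and $\beta^{(2)}_p$ below are assumed values, so the resulting wavelengths are indicative only):

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def phase_matched_wavelengths(lambda_p, G, R, Delta, beta2):
    """Signal/idler wavelengths from the intermodal phase-matching relation
    omega_{s,i} = omega_p +/- sqrt(sqrt(2*Delta)*G / (R*beta2))."""
    omega_p = 2 * np.pi * C / lambda_p
    detuning = np.sqrt(np.sqrt(2 * Delta) * G / (R * beta2))
    omega_s, omega_i = omega_p + detuning, omega_p - detuning
    return 2 * np.pi * C / omega_s, 2 * np.pi * C / omega_i

# Illustrative parameters (assumed, not the measured OM4 values): core radius
# R, index contrast Delta, and normal GVD beta2 (s^2/m) at the 1040 nm pump.
lam_s, lam_i = phase_matched_wavelengths(
    lambda_p=1040e-9, G=2, R=25e-6, Delta=0.01, beta2=2.0e-26)

# Energy conservation 2*omega_p = omega_s + omega_i holds by construction,
# i.e. 1/lam_s + 1/lam_i == 2/lambda_p.
```

With accurate fiber dispersion data, the same expression reproduces the phase-matched wavelengths listed in Supplementary Table \ref{tab1}.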
The nonlinear coupling coefficients $S_{klmn}$ are given by the overlap of the transverse profiles of the guided modes $F_k(x,y)$: \begin{equation} S_{klmn} = \frac{\int dxdy F_k^*(x,y)F_n^*(x,y)F_m(x,y)F_l(x,y)}{\sqrt{\int dxdy |F_k(x,y)|^2 \int dxdy |F_l(x,y)|^2 \int dxdy |F_m(x,y)|^2 \int dxdy |F_n(x,y)|^2}} \end{equation} To obtain the phase-matched frequencies, we propagate a strong pump field in the fundamental mode of a GRIN fiber at $\lambda_p=1040nm$, together with a weak signal seed occupying all the guided modes of the fiber and all wavelengths shorter than $\lambda_p$. To simulate a concrete scheme, we use $140fs$ pulses with an energy of $0.1nJ$ per pulse, corresponding to commercially available Ytterbium mode-locked fiber lasers. Supplementary Figure \ref{SFig1} presents the spectrum at the output of a 10 cm-long fiber, exhibiting idler photons centered at $\lambda=1540nm$ that occupy four fiber modes. \begin{figure}[ht!] \begin{centering} \includegraphics[width=0.7\columnwidth]{spectrum_29_8.pdf} \par\end{centering} \caption{Power spectrum of intermodal four-wave mixing at the output of a 10 cm graded-index fiber pumped by $140fs$ pulses at $\lambda_p=1040nm$, obtained by numerical integration of the multimode nonlinear Schrödinger equation (MM-NLSE). To find the phase matching wavelengths, all the guided modes of the fiber at wavelengths shorter than $\lambda_p$ were excited.} \label{SFig1} \end{figure} In Supplementary Table \ref{tab1} we summarize the spontaneous four-wave mixing processes that are stimulated by the seed parameters given above, showing that four-dimensional quantum states can be generated in the c-band by pumping a GRIN fiber with a commercially available femtosecond fiber laser. \begin{table}[h!]
\centering \begin{tabular}{||c c c c c||} \hline Signal mode & Idler mode & $\lambda_{Signal}$ (nm) & $\lambda_{Idler}$ (nm) & G \\ \hline\hline $LP_{01}$ & $LP_{02}$ & 785 & 1540 & 2 \\ \hline $LP_{02}$ & $LP_{01}$ & 785 & 1540 & 2 \\ \hline $LP_{11a}$ & $LP_{11a}$ & 785 & 1540 & 2 \\ \hline $LP_{11b}$ & $LP_{11b}$ & 785 & 1540 & 2 \\ \hline \end{tabular} \caption{ Phase matching of intermodal four-wave mixing in a GRIN fiber (OM4). The pump is assumed to be in the $LP_{01}$ mode at $\lambda_p=1040nm$. At $\lambda_s=785nm$ and $\lambda_i=1540nm$ we get four types of intermodal processes, enabling the generation of a four-dimensional quantum state.}\label{tab1} \end{table} The spectral band of the generated photons depends on the pump wavelength. It is possible to generate spontaneous intermodal four-wave mixing up to telecom wavelengths, thanks to the fact that the phase matching conditions are nearly unaffected by the pump wavelength. For example, in the above table we present a process where the idler is generated in the telecom c-band. However, because the four-wave mixing process conserves energy, the second photon is generated at 785nm. Such a configuration is especially relevant when one wishes to send one photon in free space and the other photon in a fiber. \newpage \subsection{Note 2: Numerical calculation of the pair generation rate} To analyze the pair generation rate as a function of the fiber length, we numerically integrate the MM-NLSE as described in the previous section, now for the fiber and pump parameters used in our experiment. We propagate a strong pump pulse occupying the fundamental mode, along with weak white noise occupying all guided modes, which simulates seeding by vacuum fluctuations. For the pump field we assume $140 fs$, $0.1nJ$ pulses centered at $\lambda_p=695nm$, with a repetition rate of $80MHz$, corresponding to the pulses used in our experiment.
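These pulse parameters set the length scale over which SFWM is efficient; a quick numerical check (the $\beta_2$ value below is an assumed figure for silica near 695 nm):

```python
# Quick check of the pump-pulse scales used in the simulation. BETA2 is the
# group-velocity dispersion of silica near 695 nm (assumed value).
T0 = 140e-15          # pulse duration, s
E_PULSE = 0.1e-9      # pulse energy, J
BETA2 = 44000e-30     # GVD, s^2/m (assumed ~44,000 fs^2/m)

peak_power_W = E_PULSE / T0            # order-of-magnitude peak power, ~700 W
dispersion_length_m = T0**2 / BETA2    # L_D = T0^2 / beta2, ~0.45 m
```

The dispersion length of roughly half a meter is consistent with pair generation being confined to the first tens of centimeters of the fiber.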
For the vacuum fluctuations we assume zero-mean fields with nonzero variance, which corresponds to an energy of $\hbar\omega$ per spectral channel \cite{trajtenberg2020simulating}. The obtained pair rate at the output of the fiber as a function of the fiber length is presented in Supplementary Figure \ref{SFig2}, showing that most of the photons are generated in the first tens of centimeters of the fiber. We attribute most of the reduction in the pair rate to the pump dispersion, as the dispersion length for the pump pulses, defined by $L_D=T_0^2/\beta_2$, where $T_0=140fs$ is the pulse duration and $\beta_2=44000 fs^2/m$ is the group velocity dispersion parameter, is approximately 45cm. We further note that an upper bound on the fiber segment length over which the photons are generated can be estimated from the temporal two-photon probability presented in Figure 2 of the main text. Since the temporal signal-idler separation scales linearly with the distance they propagate in the fiber, generation at different positions along the fiber exhibits smearing of the two-photon probability peaks along its diagonal. Figure 2 exhibits a diagonal spreading of $\approx 200ps$. Since the measured signal-idler separation after 1km is $70ns$, the $200ps$ spreading sets an upper bound of a few meters on the segment length over which the pairs are generated. In practice, since other noise sources may contribute to the diagonal spreading of the two-photon probability, the actual segment length is most likely shorter than this upper bound. \begin{figure}[h!] \begin{centering} \includegraphics[width=0.7\columnwidth]{pairs_4_9.pdf} \par\end{centering} \caption{Pair generation rate obtained by numerical integration of the multimode nonlinear Schrödinger equation.
Here we propagate $0.1nJ$, 140fs long pump pulses at $\lambda_p=695nm$, together with white noise fields at $\lambda_{vacuum}<695nm$ corresponding to vacuum fluctuations, which have a zero mean and a nonzero variance corresponding to an energy of $\hbar\omega$ per spectral channel. } \label{SFig2} \end{figure} \subsection{Note 3: Coincidence-to-accidental ratio measurement} To quantify the coincidence-to-accidental ratio (CAR) we measured the coincidence histogram below for a pump average power of $P=10mW$, using a 20cm long section of SMF-28 fiber (Supplementary Figure \ref{SFig3}). The CAR is given by the ratio of the correlation peak to the highest correlation measured at a delay of an integer number of pump periods, yielding $CAR=850$. \begin{figure}[h!] \centering \includegraphics[width=0.7\columnwidth]{coins2.pdf} \caption{Coincidence histogram spanning a few times the separation between consecutive pump pulses, yielding a coincidence-to-accidental ratio of $CAR=850$ for a pump average power of $P=10mW$. } \label{SFig3} \end{figure} \clearpage \printbibliography \end{document}
\section{Introduction} Varieties of (pointed) residuated lattices are algebraic counterparts of substructural logics \cite{gjko}. In this paper the amalgamation property is investigated in classes of involutive commutative residuated lattices. Historically, amalgamations were first considered for groups in the form of amalgamated free products \cite{sch1927}. In residuated lattices the amalgamation property has been analyzed mostly in varieties in which the algebras are linear or semilinear (i.e., subdirect products of linearly ordered ones), or conic or semiconic. In addition, the investigated classes in the literature have mostly been either divisible and integral \cite{MMT} or idempotent \cite{weichen,GJM}. The scope of the present paper is to investigate the amalgamation property in some varieties of residuated lattices which are neither divisible nor integral nor idempotent. \\ We just mention here a few of the most recent related developments (see \cite{gjko,MMT} and the references therein for a more complete picture). As shown in \cite{GLT}, the varieties of semilinear residuated lattices and semilinear cancellative residuated lattices fail the amalgamation property. A comprehensive account of the interrelationships between algebraic properties of varieties and properties of their free algebras and equational consequence relations, including equivalences between different kinds of amalgamation properties and the Robinson property, the congruence extension property, and the deductive interpolation property, is given in \cite{MMT}. Also, the presence or failure of amalgamation is established there for some subvarieties of residuated lattices, notably for all subvarieties of commutative GMV-algebras.
As shown in \cite{GJM}, the variety generated by totally ordered commutative idempotent residuated lattices has the amalgamation property, and even an example of a non-commutative variety of idempotent residuated lattices that has the amalgamation property has been presented. The amalgamation property is quite rare for general varieties. In the present paper we show the failure of the amalgamation property for two classes of involutive (pointed) residuated lattices, namely, the classes of odd and even involutive totally ordered pointed commutative residuated lattices. An insight is gained, too: in these classes amalgamation fails for the same reason that it fails for discrete linearly ordered groups with positive normal homomorphisms \cite{ExCoAbLOG}. On that basis, we consider two subclasses containing only algebras which are idempotent-symmetric (to be defined below), and show that those two classes have the amalgamation property. The proofs make use of a representation theorem, in which odd or even totally ordered involutive commutative pointed residuated lattices are represented as direct systems of abelian $o$-groups equipped with some further structure \cite{JenRepr2020} (see Theorem~\ref{BUNCHalg_X} below). After some preliminaries on residuated lattices, amalgamation, and direct systems in Section~\ref{87GJHGsfgw}, the main construction and results are presented in Section~\ref{SECTamalg}. Combining our main result in Theorem~\ref{EzaTuTtI} with previous results, a strengthening of the amalgamation property, the transferable injection property, can be deduced for the variety generated by the class of odd, involutive, idempotent-symmetric, commutative residuated chains (Corollary~\ref{CORtarnsfINJ}). Our main result yields, as a by-product, a new algebraic proof for amalgamation in the classes of odd and even totally ordered Sugihara monoids (Corollary~\ref{kjGJd24}).
\section{Preliminaries}\label{87GJHGsfgw} Algebras will be denoted by bold capital letters, their underlying sets by the same regular letter unless otherwise stated. Let $\mathbf X=(X, \leq)$ be a poset. For $x\in X$ define the upper neighbor $x_\uparrow$ of $x$ to be the unique cover of $x$ if such exists, and $x$ otherwise. Define $x_\downarrow$ dually. A partially ordered algebra with a poset reduct will be called {\em discretely ordered} if for any element $x$, $x_\downarrow<x<x_\uparrow$ holds. An {\em FL$_e$-algebra}\footnote{Other terminologies for FL$_e$-algebras are: pointed commutative residuated lattices or pointed commutative residuated lattice-ordered monoids.} is a structure $\mathbf X=( X, \wedge,\vee, {\mathbin{*\mkern-9mu \circ}}, \ite{{\mathbin{*\mkern-9mu \circ}}}, t, f )$ such that $(X, \wedge,\vee )$ is a lattice, $( X,\leq, {\mathbin{*\mkern-9mu \circ}},t)$ is a commutative residuated monoid, and $f$ is an arbitrary constant, called the {\em falsum} constant. Commutative residuated lattices are the $f$-free reducts of FL$_e$-algebras. Sometimes the lattice operators will be replaced by their induced ordering $\leq$ in the signature, in particular, if an FL$_e$-chain is considered (that is, when the order is total). Being residuated means that there exists a binary operation $\ite{{\mathbin{*\mkern-9mu \circ}}}$, called the residual operation of ${\mathbin{*\mkern-9mu \circ}}$, such that $\g{x}{y}\leq z$ if and only if $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{z}\geq y$. This equivalence is called the adjointness condition, and (${\mathbin{*\mkern-9mu \circ}},\ite{{\mathbin{*\mkern-9mu \circ}}}$) is called an adjoint pair. Equivalently, for any $x,z$, the set $\{v\ | \ \g{x}{v}\leq z\}$ has a greatest element, and $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{z}$, the residuum of $x$ and $z$, is defined as this element: $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{z}:=\max\{v\ | \ \g{x}{v}\leq z\}$; this is called the residuation condition.
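As a concrete illustration of the residuation condition, consider the Sugihara monoid on the chain $\{-n,\dots,n\}$ (cf.\ Corollary~\ref{kjGJd24}): $\g{x}{y}$ is the argument of larger absolute value, and the minimum of the two in case of a tie, with unit $t=0$. The following sketch (purely illustrative, not part of the formal development) verifies the adjointness condition, together with commutativity and associativity, by exhaustive search:

```python
# Exhaustive check of the adjointness condition for the Sugihara
# monoid on the chain {-n, ..., n}: x * y is the argument of larger
# absolute value (the minimum on ties), the unit is t = 0, and the
# residual is computed as the largest v with x * v <= z.
# Illustrative sketch only.

n = 3
X = list(range(-n, n + 1))

def mult(x, y):
    if abs(x) > abs(y):
        return x
    if abs(y) > abs(x):
        return y
    return min(x, y)

def res(x, z):
    # residuation condition: res(x, z) = max{v : x * v <= z}
    return max(v for v in X if mult(x, v) <= z)

for x in X:
    assert mult(0, x) == x                       # 0 is the unit
    assert mult(x, x) == x                       # every element is idempotent
    assert res(x, 0) == -x                       # residual complement is -x
    for y in X:
        assert mult(x, y) == mult(y, x)          # commutativity
        for z in X:
            assert mult(mult(x, y), z) == mult(x, mult(y, z))
            # adjointness: x * y <= z  iff  y <= res(x, z)
            assert (mult(x, y) <= z) == (y <= res(x, z))
print("Sugihara monoid on {-3,...,3}: commutative residuated chain")
```

With $f=t=0$ the residual complement comes out as $\nega{x}=-x$, so $\nega{t}=t$ and the set of idempotent elements (here: all elements) is symmetric with respect to $t$; these finite chains thus exemplify the odd, involutive, idempotent-symmetric algebras discussed below.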
Being residuated implies that ${\mathbin{*\mkern-9mu \circ}}$ is lattice ordered. One defines the {\em residual complement operation} by $\nega{x}=\res{{\mathbin{*\mkern-9mu \circ}}}{x}{f}$ and calls an FL$_e$-algebra {\em involutive} if $\nega{(\nega{x})}=x$ holds. In this case $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{y}=\nega{(\g{x}{\nega{y}})}$ holds, and \begin{equation}\label{item_boundary_zeta} \mbox{ for $x\geq t$, $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}\leq x$ } \end{equation} holds if the order is total \cite[Lemma~2.1]{JenRepr2020}. Call the elements $x\geq t$ {\em positive}. An involutive FL$_e$-algebra is {\em odd} (or rank $0$) if the residual complement operation leaves the unit element fixed, that is, $\nega{t}=t$, and {\em even} (or rank $1$) if the unit element is the unique cover of its residual complement. In our subsequent discussion there are a few classes of algebras which will play a significant role. These are given distinguished notation as listed below. $$ \begin{array}{ll} \mathfrak C & \mbox{the class of totally ordered sets (chains)}\\ \mathfrak A^\mathfrak c & \mbox{the class of abelian $o$-groups}\\ \mathfrak A^\mathfrak l & \mbox{the class of abelian $\ell$-groups}\\ \mathfrak A^{\mathfrak c\mathfrak d} & \mbox{the class of discrete abelian $o$-groups}\\ \mathfrak I & \mbox{the class of involutive FL$_e$-algebras}\\ \end{array} $$ Adjunct to $\mathfrak I$, \begin{itemize}[-] \item the superscript $\mathfrak c$ means restriction to totally ordered algebras, \item the superscript $\mathfrak{sl}$ (semilinear) means restriction to algebras which are subdirect products of totally ordered ones, \item the subscript $\mathfrak 1$ means restriction to even (that is, rank $1$) algebras, \item the subscript $\mathfrak 0$ means restriction to odd (that is, rank $0$) algebras, \item the subscript $\mathfrak{symm}$ means restriction to those algebras in which $\nega{x}$ is idempotent whenever $x$ is idempotent. 
Call these algebras idempotent-symmetric\footnote{It takes a half-line proof to show that in such an algebra the residual complement of every negative idempotent element is a positive idempotent element. Therefore, such an algebra is idempotent-symmetric if and only if the following holds true: $x$ is idempotent if and only if $\nega{x}$ is idempotent. So this property captures that the set of idempotent elements is symmetric with respect to $t$.}. \end{itemize} For instance $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ refers to the class of odd involutive, idempotent-symmetric, FL$_e$-chains. \subsection{Amalgamation} Let $\mathfrak U$ be a class of algebras. Call $\langle \mathbf A, \mathbf B_1, \mathbf B_2, \iota_1, \iota_2 \rangle$ a V-formation in $\mathfrak U$ if $\mathbf A, \mathbf B_1, \mathbf B_2\in\mathfrak U$ and $\iota_k$ is an embedding of $\mathbf A$ into $\mathbf B_k$ ($k=1,2$). The V-formation can be amalgamated in $\mathfrak U$ if there exists a triple $\langle \mathbf C, \iota_3, \iota_4 \rangle$, called an amalgam of the V-formation, such that $\mathbf C\in\mathfrak U$ and $\iota_k$ is an embedding of $\mathbf B_k$ into $\mathbf C$ ($k=3,4$) with $\iota_3\circ\iota_1=\iota_4\circ\iota_2$, see Fig.~\ref{DefAmalg}. \begin{figure}[ht] \begin{diagram} & & \textbf{\textit{A}} & & \\ & \ldEmbed^{\iota_1} & & \rdEmbed^{\iota_2} & \\ \textbf{\textit{B$_1$}} & \rEmbed_{\iota_3} & \textbf{\textit{C}} & \lEmbed_{\iota_4} & \textbf{\textit{B$_2$}} \\ \end{diagram} \caption{Amalgamation} \label{DefAmalg} \end{figure} If every V-formation in $\mathfrak U$ can be amalgamated in $\mathfrak U$, then $\mathfrak U$ is said to have the amalgamation property. The triple is called a strong amalgamation of the V-formation if it amalgamates in such a way that $\iota_3(b_1)=\iota_4(b_2)$ implies $b_1\in\iota_1(A)$ and $b_2\in\iota_2(A)$.
If all V-formations in $\mathfrak U$ can be strongly amalgamated in $\mathfrak U$, then $\mathfrak U$ is said to have the strong amalgamation property. In order to keep the complexity of notation on a reasonable level, without loss of generality, we will always assume in the sequel that $\iota_1$ and $\iota_2$ are inclusion maps. \\ It is known that the classes $\mathfrak A^\mathfrak c$ and $\mathfrak A^\mathfrak l$ have the amalgamation property \cite{Gamag1,AmalgAbOrdG}, and the class $\mathfrak C$ has the strong amalgamation property \cite[Lemma 2.2]{AmalgAbOrdG}. One way of constructing an amalgam, in general, can be to consider the amalgamated free product (if it exists). For instance, for $\mathfrak A^\mathfrak l$ it was shown in \cite[Theorem~12.2.1]{TsiPowell} that for $\textbf{\textit{G}},\textbf{\textit{K}},\textbf{\textit{M}}\in\mathfrak A^\mathfrak l$ with embeddings $\iota_1 : \textit{G}\to \textit{K}$ and $\iota_2 : \textit{G}\to \textit{M}$, there exists the free product in $\mathfrak A^\mathfrak l$ of $\textbf{\textit{K}}$ and $\textbf{\textit{M}}$ with $\textbf{\textit{G}}$ amalgamated, denoted by $\textbf{\textit{K}}\ast_\textbf{\textit{G}} \textbf{\textit{M}}$, which is an algebra in $\mathfrak A^\mathfrak l$ such that \begin{enumerate}[start=1,label={ P\arabic*)}] \item\label{commuteREF} there exist embeddings $\iota_3$ and $\iota_4$ which make the upper part of the diagram in Fig.~\ref{amalgamationFIG} commute, and \item\label{UnivProp} for every $\textbf{\textit{P}}\in\mathfrak A^\mathfrak l$ and homomorphisms $f_1:\textbf{\textit{K}}\to\textbf{\textit{P}}$ and $f_2:\textbf{\textit{M}}\to\textbf{\textit{P}}$ which make the outer square commute, there is a unique homomorphism $f : \textbf{\textit{K}}\ast_\textbf{\textit{G}} \textbf{\textit{M}} \to \textbf{\textit{P}}$ which makes the two triangles in the lower part of the diagram in Fig.~\ref{amalgamationFIG} commute.
\end{enumerate} \begin{figure}[ht] \begin{diagram} & & \textbf{\textit{G}} & & \\ & \ldEmbed^{\iota_1} & & \rdEmbed^{\iota_2} & \\ \textbf{\textit{K}} & \rEmbed_{\iota_3} & \textbf{\textit{K}}\ast_\textbf{\textit{G}} \textbf{\textit{M}} & \lEmbed_{\iota_4} & \textbf{\textit{M}} \\ & \rdTo^{f_1} & \dDashto_f & \ldTo^{f_2} & \\ & & \textbf{\textit{P}} & & \\ \end{diagram} \caption{Amalgamated free product in $\mathfrak A^\mathfrak l$} \label{amalgamationFIG} \end{figure} As with all universal constructions, the amalgamated free product, if it exists, is unique up to a unique isomorphism. As shown in \cite[Theorem~12.2.2]{TsiPowell}, one way of constructing an amalgam in $\mathfrak A^\mathfrak c$ of a V-formation in $\mathfrak A^\mathfrak c$ is to consider its amalgamated free product in $\mathfrak A^\mathfrak l$ first, and then to extend its lattice order to a total one by the Szpilrajn extension theorem \cite{PartToTotal}. Since the extension can, in general, be done in more than one way, the obtained algebra is not unique, and thus there cannot exist the free product in $\mathfrak A^\mathfrak c$ of $\textbf{\textit{K}}$ and $\textbf{\textit{M}}$ with $\textbf{\textit{G}}$ amalgamated. This will make our construction somewhat more involved; see the part between (\ref{eLSo2}) and (\ref{eLSo2b}). \subsection{Direct systems and direct limits in $\mathfrak A^\mathfrak c$} A directed partially ordered set is a nonempty set together with a reflexive, antisymmetric, and transitive binary relation, with the additional property that every pair of elements has an upper bound. Let $\langle \alpha,\leq \rangle$ be a directed partially ordered set.
Let $\{\mathbf A_i\in\mathfrak U : i\in\alpha\}$ be a family of algebras of the same type and $f_{i\to j}$ be a homomorphism\footnote{Homomorphisms are understood in the corresponding setting.} for every $i,j\in\alpha$, $i\leq j$ with the following properties: \begin{enumerate}[(D1)] \item $f_{i\to i}$ is the identity of $\mathbf A_i$, and \item\label{Kompooot} $f_{i\to k}=f_{j\to k}\circ f_{i\to j}$ for all $i\leq j\leq k$. \end{enumerate} \noindent Then $\langle \mathbf A_i,f_{i\to j} \rangle$ is called a direct system of algebras in $\mathfrak U$ over $\alpha$. Sometimes we also write $\langle \mathbf A_i,f_{i\to j} \rangle_\alpha$. The direct limit of $\langle \mathbf A_i,f_{i\to j} \rangle$ is an algebra in $\mathfrak U$, denoted by $\underset{\longrightarrow}{\lim}\,\mathbf A_i$, along with canonical homomorphisms $\pi_i : \mathbf A_i\to \underset{\longrightarrow}{\lim}\,\mathbf A_i$ such that for all $i\leq j$, $\pi_j\circ f_{i\to j}=\pi_i$, and such that the following universal property holds: if $\mathbf A\in\mathfrak U$ and for every $i\in\alpha$, $\sigma_i:\mathbf A_i\to \mathbf A$ is a homomorphism satisfying $\sigma_j\circ f_{i\to j}=\sigma_i$ (for $i\leq j$) then there exists a unique homomorphism $\phi:\underset{\longrightarrow}{\lim}\,\mathbf A_i\to\mathbf A$ such that for every $i\in\alpha$, $\phi\circ\pi_i=\sigma_i$ holds. The direct limit can be represented as follows (also in the case of general algebras, see \cite[Exercise 1.4.23]{LORM}). Its universe is the disjoint union of the $A_i$'s modulo the following equivalence relation: if $x_i\in A_i$ and $x_j\in A_j$ then $x_i\sim x_j$ if there exists $k\in\alpha$, $i,j\leq k$ such that $f_{i\to k}(x_i)=f_{j\to k}(x_j)$; the canonical homomorphism $\pi_i$ sends each element to its equivalence class, and the algebraic operations on $\underset{\longrightarrow}{\lim}\,\mathbf A_i$ are defined such that these maps become homomorphisms.
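This quotient construction can be made concrete on a toy example; the following sketch (purely illustrative, not part of the formal development) models the direct system $\mathbb Z \stackrel{\cdot 2}{\longrightarrow} \mathbb Z \stackrel{\cdot 2}{\longrightarrow} \cdots$ over $\alpha=\mathbb N$, whose direct limit is the abelian $o$-group $\mathbb Z[1/2]$ of dyadic rationals, with an element of $A_i$ represented by the pair $(i,x)$:

```python
# Direct limit of the system  Z --x2--> Z --x2--> Z --> ...  over N:
# elements are pairs (i, x) of the disjoint union, identified when
# they agree after being mapped into a common later index.
# Illustrative sketch of the quotient construction.

def f(i, j, x):
    """Transition homomorphism f_{i->j}(x) = x * 2^(j-i), for i <= j."""
    return x * 2 ** (j - i)

def equivalent(a, b):
    """(i, x) ~ (j, y)  iff  f_{i->k}(x) = f_{j->k}(y) for k = max(i, j)."""
    (i, x), (j, y) = a, b
    k = max(i, j)
    return f(i, k, x) == f(j, k, y)

def add(a, b):
    """Group operation: map both representatives to a common index, then add."""
    (i, x), (j, y) = a, b
    k = max(i, j)
    return (k, f(i, k, x) + f(j, k, y))

pi = lambda i, x: (i, x)          # canonical map A_i -> lim A_i

# (0, 1) and (1, 2) both represent the integer 1 in the limit:
assert equivalent(pi(0, 1), pi(1, 2))
# pi_0(1) + pi_1(1) = "1 + 1/2" = pi_1(3):
assert equivalent(add(pi(0, 1), pi(1, 1)), pi(1, 3))
```

Here $(i,x)$ plays the role of $\pi_i(x)$, i.e., of the dyadic rational $x/2^i$, and two pairs are identified exactly when they agree after being pushed into a common later index, as in the equivalence relation above.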
It is easy to see that the direct limit, if it exists, is unique up to isomorphism, and that the direct limit of $\langle \mathbf A_i,f_{i\to j} \rangle_\alpha$ coincides with the direct limit of $\langle \mathbf A_i,f_{i\to j} \rangle_\beta$ for any cofinal subset $\beta$ of $\alpha$. \\ Direct limits exist in the class of abelian groups \cite[Section 11]{FuchsInfinite}. Direct limits exist in the class of partially ordered abelian groups with positive homomorphisms\footnote{Positive homomorphisms are abelian group homomorphisms that map positive elements to positive elements. Equivalently, they are order preserving: $x\leq y$ implies $f(x)\leq f(y)$.}, too: Let $\langle \mathbf A_i,f_{i\to j}\rangle_\alpha$ be a direct system of partially ordered abelian groups and positive homomorphisms. Let $\mathbf A$ be its abelian group direct limit and the $\pi_i$'s be the canonical homomorphisms. Then, as shown in \cite[Proposition 1.15]{PoAGwI}, $\mathbf A$ can be made into a partially ordered abelian group with positive cone $A^+=\cup_{i\in\alpha}\pi_i(A_i^+)$, and $\mathbf A$ together with the maps $\pi_i$ is a direct limit for $\langle \mathbf A_i,f_{i\to j}\rangle_\alpha$ in the class of partially ordered abelian groups. The same construction works in $\mathfrak A^\mathfrak c$. Indeed, when applied to abelian $o$-groups, it results in an abelian $o$-group: Consider $x\notin A^+$. Then there is some $i\in\alpha$ and $y\in A_i$ such that $x=\pi_i(y)$. Since $x\notin A^+$, $y\notin A_i^+$ follows. Hence $y^{-1}\in A_i^+$ since $\mathbf A_i$ is totally ordered. Therefore, $ x^{-1}= \pi_i(y)^{-1}=\pi_i(y^{-1}) \in A^+ $ and we are done. This leads to \begin{proposition} Direct limits exist in $\mathfrak A^\mathfrak c$. \qed \end{proposition} \noindent We are going to refer to homomorphisms between direct limits which are induced by homomorphisms between the corresponding direct systems.
If $\mathcal A=\langle \mathbf A_i,f_{i\to j} \rangle$ and $\mathcal B=\langle \mathbf B_i,g_{i\to j} \rangle$ are two direct systems over the same index set $\alpha$ then by a homomorphism $\Phi:\mathcal A\to\mathcal B$ is meant a system of homomorphisms $\Phi=\{\Phi_i:A_i\to B_i : i\in\alpha\}$ such that for every $i<j$ the diagram in Fig.~\ref{HomoM} \begin{figure}[ht] \begin{diagram} \textbf{\textit{A$_i$}} & \rTo_{\Phi_i} & \textbf{\textit{B$_i$}} \\ \dTo^{f_{i\to j}} & & \dTo_{g_{i\to j}} \\ \textbf{\textit{A$_j$}} & \rTo_{\Phi_j} & \textbf{\textit{B$_j$}} \\ \end{diagram} \caption{} \label{HomoM} \end{figure} is commutative. \begin{lemma}\cite[Theorem 11.2]{FuchsInfinite}\label{EZisMONO} If $\mathcal A$ and $\mathcal B$ are direct systems and $\Phi:\mathcal A\to\mathcal B$ is a homomorphism then there exists a unique homomorphism $\Phi^\star:\mathbf A^\star=\underset{\longrightarrow}{\lim}\,\mathbf A_i\to\mathbf B^\star=\underset{\longrightarrow}{\lim}\,\mathbf B_i$ such that for every $i\in\alpha$ the diagram in Fig.~\ref{HomoMO} \begin{figure}[ht] \begin{diagram} \textbf{\textit{A$_i$}} & \rTo_{\Phi_i} & \textbf{\textit{B$_i$}} \\ \dTo^{\pi_i} & & \dTo_{\pi_i} \\ \textbf{\textit{A$^\star$}} & \rTo_{\Phi^\star} & \textbf{\textit{B$^\star$}} \\ \end{diagram} \caption{} \label{HomoMO} \end{figure} is commutative. Moreover, $\Phi^\star$ is an embedding if all the $\Phi_i$'s are embeddings. \end{lemma} \subsection{A link between $\mathfrak I^\mathfrak c_0$, $\mathfrak I^\mathfrak c_1$ and some direct systems in $\mathfrak A^\mathfrak c$}\label{SEClinkkk} Theorem~\ref{BUNCHalg_X} is a representation theorem of odd and even involutive FL$_e$-chains by means of some direct systems of abelian $o$-groups. It establishes a one-to-one correspondence between the class $\mathfrak I^\mathfrak c_0\cup\mathfrak I^\mathfrak c_1$ containing all odd and even involutive FL$_e$-chains and the class of bunches of layer groups.
Therefore, a bunch of layer groups will also be referred to as the group representation of the algebra in question. In a bunch of layer groups $\langle \textbf{\textit{G$_u$}},\textbf{\textit{H$_u$}}, \varsigma_{u\to v} \rangle_{\langle \kappa_o, \kappa_J, \kappa_I, \leq_\kappa\rangle}$ of an algebra $\mathbf X$ there are three pairwise disjoint sets $\kappa_o$, $\kappa_J$, and $\kappa_I$, their union $\kappa$ is totally ordered by $\leq_\kappa$ such that $\kappa$ has a least element $t$. It can be seen whether the corresponding $\mathbf X$ is odd, or even with a non-idempotent falsum constant, or even with an idempotent falsum constant, by looking at whether $t$ is in $\kappa_o$, in $\kappa_J$, or in $\kappa_I$, respectively. This explains the role of Table~\ref{ThetaPsiOmega} in \begin{definition}\label{DEFbunch} \cite[Definition~6.1]{JenRepr2020} Call ${\mathcal X}=\langle \textbf{\textit{G$_u$}},\textbf{\textit{H$_u$}}, \varsigma_{u\to v} \rangle_{\langle \kappa_o, \kappa_J, \kappa_I, \leq_\kappa\rangle}$ a {\em bunch of layer groups}, where $(\kappa,\leq_\kappa)$ is a totally ordered set with least element $t$, the ordered triple $\langle \{t\}, \bar\kappa_J,\bar\kappa_I\rangle$ is a partition of $\kappa$, where $\bar\kappa_I$ and $\bar\kappa_J$ can also be empty, $\kappa_o$, $\kappa_J$, and $\kappa_I$ are defined by one of the rows of \begin{table}[ht] \begin{center} \begin{tabular}{c|c|cl} $\kappa_o$ & $\kappa_J$ & $\kappa_I$ \\ \hline \{t\} & $\bar\kappa_J$ & $\bar\kappa_I$ \\ \hline $\emptyset$ & $\bar\kappa_J\cup\{t\}$ & $\bar\kappa_I$ \\ \hline $\emptyset$ & $\bar\kappa_J$ & $\bar\kappa_I\cup\{t\}$ \\ \hline \end{tabular} \label{ThetaPsiOmega} \end{center} \end{table} \noindent $\textbf{\textit{G$_u$}}=(G_u,\preceq_u,\cdot_u,\ { }^{-1_u},u)$ is a family of abelian $o$-groups indexed by elements of $\kappa$, and \\ $\textbf{\textit{H$_u$}}=(H_u,\preceq_u,\cdot_u,\ { }^{-1_u},u)$ is a family of abelian $o$-groups indexed by elements of $\kappa_I$, such 
that $$ \mbox{ for $u\in\kappa_J$, $\textbf{\textit{G$_u$}}$ is discrete, } $$ $$ \mbox{ for $u\in\kappa_I$, $\textbf{\textit{H$_u$}}\leq\textbf{\textit{G$_u$}}$, } $$ and such that for every $u,v\in\kappa$, $u\leq_\kappa v$, there exists a homomorphism $\varsigma_{u\to v} : G_u\to G_v$ satisfying \begin{enumerate}[start=1,label={(G\arabic*)}] \item\label{(G1)} $\varsigma_{u\to u}=id_{G_u}$ and $\varsigma_{v\to w}\circ\varsigma_{u\to v}=\varsigma_{u\to w}$ \hfill (direct system property), \item\label{(G3)} for $v\in\kappa_I$, $\varsigma_{u\to v}$ maps into $H_v$, \item\label{(G2)} for $u\in\kappa_J$, $\varsigma_{u\to v}(u)=\varsigma_{u\to v}(u_{\downarrow_u})$. \end{enumerate} Call the \textbf{\textit{G$_u$}}'s and the \textbf{\textit{H$_u$}}'s the layer groups and layer subgroups of $\mathcal X$, respectively, call $\langle\kappa,\leq_\kappa\rangle$ the {\em skeleton} of $\mathcal X$, call $\langle \kappa_o, \kappa_J, \kappa_I \rangle$ the {\em partition} of the skeleton, and call $\langle \textbf{\textit{G$_u$}}, \varsigma_{u\to v} \rangle_\kappa$ the direct system of $\mathcal X$. Note that $\kappa$ can be recovered from its partition (and, ultimately, from $\mathcal X$) via $\kappa=\kappa_o\cup\kappa_J\cup\kappa_I$. \end{definition} \begin{remark}\label{IgyIsNezheto} Bunches of layer groups are direct systems of abelian $o$-groups over a totally ordered set, equipped with a few extra properties. \end{remark} The following theorem demonstrates a one-to-one correspondence in a constructive manner between the class $\mathfrak I^\mathfrak c_0\cup\mathfrak I^\mathfrak c_1$ and the class of bunches of layer groups.
Because of this, if $\mathbf X$ denotes the algebra corresponding to the bunch $\mathcal X$, then we also say that the \textbf{\textit{G$_u$}}'s and the \textbf{\textit{H$_u$}}'s are the layer groups and layer subgroups of $\mathbf X$, we call $\langle\kappa,\leq_\kappa\rangle$ the {\em skeleton} of $\mathbf X$, $\langle \kappa_o, \kappa_J, \kappa_I \rangle$ the {\em partition} of the skeleton, and call $\langle \textbf{\textit{G$_u$}}, \varsigma_{u\to v} \rangle_\kappa$ the direct system of $\mathbf X$. \begin{theorem}\label{BUNCHalg_X} {\cite[Theorem~8.1]{JenRepr2020}} \begin{enumerate}[label={(A)}] \item\label{errefere} Given an odd or an even involutive FL$_e$-chain $\mathbf X=(X,\leq,{\mathbin{*\mkern-9mu \circ}},\ite{{\mathbin{*\mkern-9mu \circ}}},t,f)$ with residual complement operation $\komp$, $$ \mathcal X_{\mathbf X}=\langle \textbf{\textit{G$_u$}},\textbf{\textit{H$_u$}}, \varsigma_{u\to v} \rangle_{\langle \kappa_o, \kappa_J, \kappa_I,\leq_\kappa\rangle} $$ is a bunch of layer groups, called the {\em bunch of layer groups of $\mathbf X$}, where $$ \kappa=\{\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x} : x\in X\}= \{u\geq t : u \mbox{ is idempotent} \} \mbox{ is ordered by $\leq$,} $$ $$ \bar\kappa_I=\{u\in \kappa\setminus\{t\} : \nega{u} \mbox{ is idempotent}\}, $$ $$ \bar\kappa_J=\{u\in \kappa\setminus\{t\} : \nega{u} \mbox{ is not idempotent}\}, $$ $\kappa_o$, $\kappa_J$, $\kappa_I$ are defined by \begin{table}[h] \begin{center} \begin{tabular}{l|l|l|lll} $\kappa_o$ \ \ \ \ \ \ \ \ & $\kappa_J$ & $\kappa_I$ & \\ \hline \{t\} & $\bar\kappa_J$ & $\bar\kappa_I$ & if $\mathbf X$ is odd\\ \hline $\emptyset$ & $\bar\kappa_J\cup\{t\}$ & $\bar\kappa_I$ & if $\mathbf X$ is even and $f$ is not idempotent \\ \hline $\emptyset$ & $\bar\kappa_J$ & $\bar\kappa_I\cup\{t\}$ & if $\mathbf X$ is even and $f$ is idempotent\\ \hline \end{tabular} \end{center} \end{table} \noindent for $u\in\kappa$, $$ \begin{array}{llll} \textbf{\textit{G$_u$}}&=&
(G_u,\leq,{\mathbin{*\mkern-9mu \circ}},\ { }^{-1},u) & \mbox{if $u\in\kappa$,}\\ \textbf{\textit{H$_u$}}&=& (H_u,\leq,{\mathbin{*\mkern-9mu \circ}},\ { }^{-1},u) & \mbox{if $u\in\kappa_I$,}\\ \end{array} $$ where $X_u=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\}$, $H_u=\{x\in X_u : \g{x}{\nega{u}}<x\}$, $\accentset{\bullet}H_u=\{\g{x}{\nega{u}} : x\in H_u\}$, \begin{equation}\label{DEFcsopi} G_u=\left\{ \begin{array}{ll} X_u & \mbox{if $u\notin\kappa_I$}\\ X_u\setminus\accentset{\bullet}H_u & \mbox{if $u\in\kappa_I$}\\ \end{array} \right. \end{equation} \begin{equation}\label{EzLeSzainVerZ} x^{-1}=\res{{\mathbin{*\mkern-9mu \circ}}}{x}{u}, \end{equation} and for $u,v\in\kappa$ such that $u<v$, $\varsigma_{u\to v} : G_u\to G_v$ is defined by $$ \varsigma_{u\to v}(x)= \g{v}{x} . $$ \end{enumerate} \begin{comment} for $u\in\kappa$, \begin{equation}\label{DEFcsopiUNIFORM} \textbf{\textit{G$_u$}}=(G_u,\leq,{\mathbin{*\mkern-9mu \circ}},\ { }^{-1},u) , \end{equation} where $G_u=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\} \setminus \{\g{x}{\nega{u}} : \g{x}{\nega{u}}<x\in X\}$, $x^{-1}=\res{{\mathbin{*\mkern-9mu \circ}}}{x}{u}$, for $u\in\kappa$, $$ H_u=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u, \ \g{x}{\nega{u}}<x\}, $$ and for $u,v\in\kappa$ with $u<v$, $\varsigma_{u\to v}$ is defined by $\varsigma_{u\to v}(x)=\g{v}{x}$ for $x\in G_u$. 
\end{comment} \bigskip \begin{enumerate}[label={(B)}] \item\label{BLGtoX} Given a bunch of layer groups $ \mathcal X=\langle \textbf{\textit{G$_u$}},\textbf{\textit{H$_u$}}, \varsigma_{u\to v} \rangle_{{\langle \kappa_o, \kappa_J, \kappa_I, \leq_\kappa\rangle}} $ with $\textbf{\textit{G$_u$}}=(G_u,\preceq_u,\cdot_u,\ { }^{-1_u},u)$ $$ \mathbf X_{\mathcal X}=(X,\leq,{\mathbin{*\mkern-9mu \circ}},\ite{{\mathbin{*\mkern-9mu \circ}}},t,\nega{t}) $$ is an involutive FL$_e$-chain with residual complement $\komp$, called the {\em involutive FL$_e$-chain of $\mathcal X$}, where $\kappa=\kappa_o\cup\kappa_J\cup\kappa_I$, for $u\in\kappa$, \begin{equation}\label{IkszU} X_u=\left\{ \begin{array}{ll} G_u & \mbox{ if $u\not\in\kappa_I$},\\ G_u\ \cup\, \accentset{\bullet}H_u & \mbox{ if $u\in\kappa_I$},\\ \end{array} \right. \end{equation} (where $\accentset{\bullet}H_u=\{\accentset{\bullet}{h} : h\in H_u\}$ is a copy of $H_u$ which is disjoint from $G_u$), \begin{equation}\label{EZazX} X=\displaystyle\dot\bigcup_{u\in \kappa}X_u , \end{equation} if $u\notin\kappa_I$ then $\leq_u\,=\,\preceq_u$, if $u\in\kappa_I$ then $\leq_u$ extends $\preceq_u$ to $X_u$ by letting \begin{equation}\label{KibovitettRendezesITTIS} \mbox{ $\accentset{\bullet} a<_u\accentset{\bullet} b$ and $x<_u\accentset{\bullet} a<_uy$ if $a,b\in H_u$, $x,y\in G_u$, $a\prec_u b$, $x\prec_u a\preceq_u y$, } \end{equation} for $v\in\kappa$, $\rho_v : X\to X$ is defined by \begin{equation}\label{P5} \begin{array}{lll} \rho_v(x)&=& \left\{ \begin{array}{ll} \varsigma_{u\to v}(x) & \mbox{ if $x\in G_u$ and $u<_\kappa v$},\\ x & \mbox{ if $x\in G_u$ and $u\geq_\kappa v$},\\ \end{array} \right. \\ \rho_v(\accentset{\bullet}x)&=& \left\{ \begin{array}{ll} \varsigma_{u\to v}(x) & \mbox{ if $\accentset{\bullet}x\in \accentset{\bullet}H_u$ and $\kappa_I\ni u<_\kappa v$},\\ \accentset{\bullet}x & \mbox{ if $\accentset{\bullet}x\in \accentset{\bullet}H_u$ and $\kappa_I\ni u\geq_\kappa v$},\\ \end{array} \right. 
\end{array} \end{equation} for $x\in X_u$ and $y\in X_v$, \begin{equation*}\label{RendeZesINNOVATIVAN} \mbox{$x<y$ iff $\rho_{u\vee v}(x)<_{u\vee v}\rho_{u\vee v}(y)$ or $\rho_{u\vee v}(x)=\rho_{u\vee v}(y)$ and $u<_\kappa v$,}\\ \end{equation*} for $u\in\kappa_I$, $h_u : X_u \to G_u$, \begin{equation}\label{canHOM} \begin{array}{ll} h_u(x)= x & \mbox{ if $x\in G_u$,}\\ h_u(\accentset{\bullet}x)= x & \mbox{ if $\accentset{\bullet}x\in \accentset{\bullet}H_u$,}\\ \end{array} \end{equation} for $x,y\in X_u$, \begin{equation}\label{uPRODigy} \gteu{x}{y}=\left\{ \begin{array}{ll} \accentset{\bullet}{\left({h_u(x)}\cdot_u{h_u(y)}\right)} & \mbox{ if $u\in\kappa_I$, $\g{h_u(x)}{h_u(y)}\in H_u$ and $\neg(x,y\in H_u)$}\\ {h_u(x)}\cdot_u{h_u(y)} & \mbox{ if $u\in\kappa_I$, $\g{h_u(x)}{h_u(y)}\notin H_u$ or $x,y\in H_u$}\\ x\cdot_u y& \mbox{ if $u\notin\kappa_I$}\\ \end{array} \right. , \end{equation} for $x\in X_u$ and $y\in X_v$, \begin{equation}\label{EgySzeruTe} \g{x}{y}=\gteM{u\vee v}{\rho_{u\vee v}(x)}{\rho_{u\vee v}(y)}, \end{equation} for $x\in X$, \begin{equation}\label{SplitNega} \begin{array}{lll} \nega{(\accentset{\bullet}x)}&=&\left\{ \begin{array}{ll} x^{-1} & \mbox{ \ \ \ if $u\in\kappa_I$ and $\accentset{\bullet}x\in \accentset{\bullet}H_u$}\\ \end{array} \right. \\ \nega{x}&=&\left\{ \begin{array}{ll} \accentset{\bullet}{\left(x^{-1}\right)} & \mbox{ if $u\in\kappa_I$ and $x\in H_u$}\\ x^{-1} & \mbox{ if $u\in\kappa_I$ and $x\in G_u\setminus H_u$}\\ x^{-1} & \mbox{ if $u\in\kappa_o$ and $x\in G_u$}\\ {x^{-1}}_\downarrow & \mbox{ if $u\in\kappa_J$ and $x\in G_u$}\\ \end{array} \right. 
, \end{array} \end{equation} for $x,y\in X$, \begin{equation}\label{IgYaReSi} \res{{\mathbin{*\mkern-9mu \circ}}}{x}{y}=\nega{(\g{x}{\nega{y}})}, \end{equation} $\ite{{\mathbin{*\mkern-9mu \circ}}}$ is the residual operation of ${\mathbin{*\mkern-9mu \circ}}$, $$ \mbox{ $t$ is the least element of $\kappa$, } $$ \begin{equation}\label{tLESZaz} \mbox{ the falsum $\nega{t}$ is the residual complement of $t$, } \end{equation} and is given by $$ \begin{array}{lll} \nega{t}&=&\left\{ \begin{array}{ll} \accentset{\bullet}{(t^{-1})} & \mbox{ if $t\in\kappa_I$}\\ t^{-1} & \mbox{ if $t\in\kappa_o$}\\ {t^{-1}}_\downarrow & \mbox{ if $t\in\kappa_J$}\\ \end{array} \right. . \end{array} $$ In addition, $$ \rho_v(x)=\g{v}{x} \ \ \mbox{for $v\in\kappa$ and $x\in X$}, $$ $\mathbf X_{\mathcal X}$ is odd if $t\in\kappa_o$, even with a non-idempotent falsum if $t\in\kappa_J$, and even with an idempotent falsum if $t\in\kappa_I$. \end{enumerate} \begin{enumerate}[start=1,label={(C)}] \item Items~\ref{errefere} and \ref{BLGtoX} describe a one-to-one correspondence between the class containing all odd and even involutive FL$_e$-chains and the class of bunches of layer groups: given a bunch of layer groups $\mathcal X$ it holds true that $\mathcal X_{({\mathbf X}_\mathcal X)}=\mathcal X$, and given an odd or even involutive FL$_e$-chain $\mathbf X$ it holds true that $\mathbf X_{(\mathcal X_\mathbf X)}=\mathbf X$. \qed \end{enumerate} \end{theorem} \section{Reduction of amalgamation in $\mathfrak I_0$ and $\mathfrak I_1$ to amalgamation in $\mathfrak C$ and some direct systems in $\mathfrak A^\mathfrak c$} Let $\mathbf X,\mathbf Y\in\mathfrak I_0$ or $\mathbf X,\mathbf Y\in\mathfrak I_1$. Lemma~\ref{HogyanLatszik} characterizes how \lq $\mathbf X$ is a subalgebra of $\mathbf Y$\rq\, can be seen from the respective group representations of $\mathbf X$ and $\mathbf Y$. The proof is an easy but tedious verification guided by Theorem~\ref{BUNCHalg_X}.
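The layer construction of Theorem~\ref{BUNCHalg_X}/\ref{BLGtoX} can be sanity-checked on a toy instance. The following Python sketch is our own minimal example, not part of the representation theory: two layers over the skeleton $0<1$, with $\kappa_I$ empty, layer $0$ carrying the $o$-group $(\mathbb{Z},+,\leq)$, layer $1$ the trivial group, and $\varsigma_{0\to 1}$ the collapsing map. It implements $\rho_v$ from (\ref{P5}) together with the ordering and the product of item \ref{BLGtoX}, and checks the identity $\rho_v(x)=\g{v}{x}$ stated at the end of that item.

```python
# Toy instance (our own, for illustration only) of the layer construction:
# elements are pairs (layer, value); layer 0 carries (Z, +, <=), layer 1 the
# trivial group {0}; kappa_I is empty, so no layer is split.

def sigma(u, v, g):
    """Transition map sigma_{u->v}; the only nontrivial one collapses Z to {0}."""
    return 0 if (u, v) == (0, 1) else g

def rho(v, x):
    """rho_v: push x = (layer, value) into layer v whenever layer < v."""
    u, g = x
    return (v, sigma(u, v, g)) if u < v else x

def less(x, y):
    """The order of the union: compare in the top layer, break ties by layer."""
    w = max(x[0], y[0])
    (_, a), (_, b) = rho(w, x), rho(w, y)
    return a < b or (a == b and x[0] < y[0])

def prod(x, y):
    """The monoidal operation: map both factors to the top layer, multiply there."""
    w = max(x[0], y[0])
    (_, a), (_, b) = rho(w, x), rho(w, y)
    return (w, a + b)  # both layer groups are written additively

xs = [(0, n) for n in range(-3, 4)] + [(1, 0)]  # a finite sample of the chain

# every element of layer 0 lies strictly below the unit of layer 1
assert all(less(x, (1, 0)) for x in xs if x[0] == 0)
# rho_v(x) equals v * x, as stated at the end of the theorem
assert all(prod((1, 0), x) == rho(1, x) for x in xs)
# commutativity and monotonicity of the product on the sample
assert all(prod(x, y) == prod(y, x) for x in xs for y in xs)
assert all(prod(x, z) == prod(y, z) or less(prod(x, z), prod(y, z))
           for x in xs for y in xs for z in xs if less(x, y))
```

The sketch deliberately omits the split layers of $\kappa_I$; modelling those would require carrying the marked copies $\accentset{\bullet}H_u$ as a third component.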
\begin{definition} Given two bunches of layer groups ${\mathcal X}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf X)}{4pt}}, \kappa_J^{\scaleto{(\mathbf X)}{4pt}}, \kappa_I^{\scaleto{(\mathbf X)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\rangle}$ and ${\mathcal Y}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_J^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_I^{\scaleto{(\mathbf Y)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}\rangle}$, we say that $\mathcal X$ is a sub-bunch of $\mathcal Y$, in notation $\mathcal X\leq\mathcal Y$, if the following conditions hold. \begin{enumerate}[start=1,label={(S\arabic*)}] \item\label{EzaKEETTo} $ \min_{\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}} \kappa^{\scaleto{(\mathbf X)}{4pt}} = \min_{\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}} \kappa^{\scaleto{(\mathbf Y)}{4pt}}$. \item\label{MindenkiResZe} $\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\subseteq\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}$, $\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\subseteq\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$, $\kappa_I^{\scaleto{(\mathbf X)}{4pt}}\subseteq\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$, and $\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\,\subseteq\,\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}$. 
\item\label{ReszalgebrasAGOK} If $u\in\kappa^{\scaleto{(\mathbf X)}{4pt}}$ then $\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$, \\ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ then $\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$, \\ if $u\in\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$ then the lower cover of the unit element of \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} coincides with the lower cover of the unit element of \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}. \item\label{ResTRIcTIon} For $u,v\in\kappa^{\scaleto{(\mathbf X)}{4pt}}$, $u<_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} v$, $\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} : G_u^{\scaleto{(\mathbf Y)}{4pt}}\to G_v^{\scaleto{(\mathbf Y)}{4pt}}$ is an extension of $\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} : G_u^{\scaleto{(\mathbf X)}{4pt}}\to G_v^{\scaleto{(\mathbf X)}{4pt}}$. \end{enumerate} \end{definition} \begin{lemma}\label{HogyanLatszik} Let $$\mathbf X=(X,\leq_X,\mathbin{\circ\mkern-7mu \cdot\mkern2mu},\ite{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}},t,f),$$ $$\mathbf Y=(Y,\leq_Y,{\mathbin{*\mkern-9mu \circ}},\ite{{\mathbin{*\mkern-9mu \circ}}},\rm{t},\rm{f})$$ be algebras both in $\mathfrak I^\mathfrak c_0$ or in $\mathfrak I^\mathfrak c_1$ along with their respective group representations $${\mathcal X}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf X)}{4pt}}, \kappa_J^{\scaleto{(\mathbf X)}{4pt}}, \kappa_I^{\scaleto{(\mathbf X)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\rangle},$$ $${\mathcal Y}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\langle 
\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_J^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_I^{\scaleto{(\mathbf Y)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}\rangle}.$$ Then $\mathbf X\leq\mathbf Y$ if and only if $\mathcal X\leq\mathcal Y$. \end{lemma} \begin{proof} Referring to Theorem~\ref{BUNCHalg_X}, let ${\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ and ${\kappa^{\scaleto{(\mathbf Y)}{4pt}}}=\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$, denote the monoidal operation of the layer group \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} by $\cdot_u$ and the monoidal operation of the layer group \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} by $\diamond_u$. Now, $\mathbf X\leq\mathbf Y$ is equivalent to the following six conditions: \begin{enumerate}[(B1)] \item\label{rESzhaLMaZ} $X\subseteq Y$, \item\label{OrderINGresZe} $\leq_X\,\subseteq\,\leq_Y$, \item\label{ReSTrict} $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}$ is the restriction of ${\mathbin{*\mkern-9mu \circ}}$ to $X$, \item\label{ReSTrictITE} $\ite{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}}$ is the restriction of $\ite{{\mathbin{*\mkern-9mu \circ}}}$ to $X$, \item\label{TkegyenlOEk} $t={\rm t}$, \item\label{FkegyenlOEk} $f={\rm f}$. \end{enumerate} By Theorem~\ref{BUNCHalg_X}, $\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf X)}{4pt}}={\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\{u\geq_X t : u \mbox{ is idempotent}\}$ and $\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\,=\,\leq_X$ over ${\kappa^{\scaleto{(\mathbf X)}{4pt}}}$. Hence $\min_{\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}}(\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf X)}{4pt}})=t$.
It follows likewise that $\min_{\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}}(\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Y)}{4pt}})=\rm t$. Therefore, condition \ref{EzaKEETTo} is equivalent to \ref{TkegyenlOEk}. \medskip (I) Assume $\mathbf X\leq\mathbf Y$. \ref{MindenkiResZe}: Let, for instance, $u\in\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$; the proof of the other two cases is similar. By Table~\ref{ThetaPsiOmega}, either $u=t$ and $\mathbf X\in\mathfrak I^{\mathfrak c}_{\mathfrak 1, \neg\mathfrak{id}}$ in which case $u=\rm t$ and $\mathbf Y\in\mathfrak I^{\mathfrak c}_{\mathfrak 1, \neg\mathfrak{id}}$ (since being even and idempotent are inherited by subalgebras) thus yielding $u\in\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$ by Table~\ref{ThetaPsiOmega}, or $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}_J$ is strictly positive and idempotent and its residual complement is not idempotent (in $\mathbf X$), hence $u$ is strictly positive and idempotent and its residual complement is not idempotent in $\mathbf Y$ either (since $\mathbf X\leq\mathbf Y$), thus yielding $u\in\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$ by Table~\ref{ThetaPsiOmega}. Finally, $\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\,\subseteq\,\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}$ holds since by Theorem~\ref{BUNCHalg_X}, $\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}$ is the restriction of $\leq_X$ to ${\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\{u\geq_X t : u \mbox{ is idempotent}\}$ and $\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}$ is the restriction of $\leq_Y$ to ${\kappa^{\scaleto{(\mathbf Y)}{4pt}}}=\{u\geq_Y \rm t : u \mbox{ is idempotent}\}$, and both idempotency and the underlying ordering are inherited by subalgebras. \ref{ReszalgebrasAGOK}: Let $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$. Then $u\in{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$ by \ref{MindenkiResZe}.
Since $\mathbf X\leq\mathbf Y$, we denote the operations of $\mathbf X$ by ${\mathbin{*\mkern-9mu \circ}}$, $\ite{{\mathbin{*\mkern-9mu \circ}}}$, $\kompM{{\mathbin{*\mkern-9mu \circ}}}$, too. \\ If $u\not\in\kappa_I$ then by (\ref{DEFcsopi}), $G_u^{\scaleto{(\mathbf X)}{4pt}}=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\}$ and $G_u^{\scaleto{(\mathbf Y)}{4pt}}=\{x\in Y : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\}$, hence $\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$ is obvious from $\mathbf X\leq\mathbf Y$. \\ If $u\in\kappa_I$ then by (\ref{DEFcsopi}), $G_u^{\scaleto{(\mathbf X)}{4pt}}=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\}\setminus\{\g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}} : x\in X, \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u, \, \g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}}<x\}$ and $G_u^{\scaleto{(\mathbf Y)}{4pt}}=\{x\in Y : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u\}\setminus\{\g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}} : x\in Y, \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u, \, \g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}}<x\}$. If $a\in G_u^{\scaleto{(\mathbf X)}{4pt}}$ then $a\in X$, $\res{{\mathbin{*\mkern-9mu \circ}}}{a}{a}=u$ holds. Since $X\subseteq Y$, $a\in Y$ holds, too, and thus assuming $a\notin G_u^{\scaleto{(\mathbf Y)}{4pt}}$ would mean that there exists $x\in Y$ such that $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u$, $\g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}}<x$, and $a=\g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}}$. 
But then $x\overset{(\ref{EzLeSzainVerZ})}{=}\res{{\mathbin{*\mkern-9mu \circ}}}{(\res{{\mathbin{*\mkern-9mu \circ}}}{x}{u})}{u}=\res{{\mathbin{*\mkern-9mu \circ}}}{(\negaM{{\mathbin{*\mkern-9mu \circ}}}{\g{x}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}})}}{u}=\res{{\mathbin{*\mkern-9mu \circ}}}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{a}}{u}$ is in $X$ since so are $a$ and $u$, yielding $a\notin G_u^{\scaleto{(\mathbf X)}{4pt}}$, a contradiction. \\ If $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ then by (\ref{DEFcsopi}), $H_u^{\scaleto{(\mathbf X)}{4pt}}=\{x\in X : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u, \, \g{x}{\nega{u}}<x\}$ and $H_u^{\scaleto{(\mathbf Y)}{4pt}}=\{x\in Y : \res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}=u, \, \g{x}{\nega{u}}<x\}$, hence $\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$ is obvious from $\mathbf X\leq\mathbf Y$. \\ The lower cover of the unit element of \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} is $\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}$. Indeed, for any $t\leq x<u$ it holds true by (\ref{item_boundary_zeta}) that $\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}<u$ and hence it is not in the universe of \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}. For any $\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}<x\leq t$ it holds true that $t\leq\negaM{{\mathbin{*\mkern-9mu \circ}}}{x}<u$, and since $\res{{\mathbin{*\mkern-9mu \circ}}}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{x}}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{x}}=\negaM{{\mathbin{*\mkern-9mu \circ}}}{\left(\g{\negaM{{\mathbin{*\mkern-9mu \circ}}}{x}}{\negaM{{\mathbin{*\mkern-9mu \circ}}}{(\negaM{{\mathbin{*\mkern-9mu \circ}}}{x})}}\right)} =\negaM{{\mathbin{*\mkern-9mu \circ}}}{\left(\g{\negaM{{\mathbin{*\mkern-9mu \circ}}}{x}}{x}\right)} =\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}$, the previous case applies.
It follows likewise that the lower cover of the unit element of \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} is $\negaM{{\mathbin{*\mkern-9mu \circ}}}{u}$, and hence they are equal since $\mathbf X\leq\mathbf Y$. \ref{ResTRIcTIon}: For $u,v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, $u<_{\kappa^{\scaleto{(\mathbf X)}{4pt}}} v$, and $x\in G_u$ it holds true by Theorem~\ref{BUNCHalg_X} that $ \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} (x) = \gpont{x}{v} \overset{\ref{ReSTrict}}{=} \g{x}{v} = \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} (x) $. \medskip (II) Assume that conditions \ref{EzaKEETTo}-\ref{ResTRIcTIon} are satisfied. \ref{rESzhaLMaZ}: By \ref{MindenkiResZe}, \begin{equation}\label{ReSzeKAPPAnak} {\kappa^{\scaleto{(\mathbf X)}{4pt}}} := \kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf X)}{4pt}} \subseteq \kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Y)}{4pt}} =: {\kappa^{\scaleto{(\mathbf Y)}{4pt}}} \end{equation} follows. By \ref{ReszalgebrasAGOK} it holds true that $\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$ if $u\in\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$, and $\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}\leq\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$, hence by (\ref{IkszU}), \begin{equation}\label{XuRESZEyU} \mbox{ $X_u\subseteq Y_u$ holds for $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$.
} \end{equation} Therefore, $X\overset{(\ref{EZazX})}{=}\displaystyle\dot\bigcup_{u\in {\kappa^{\scaleto{(\mathbf X)}{4pt}}}}X_u \overset{(\ref{ReSzeKAPPAnak})}{\subseteq} \displaystyle\dot\bigcup_{u\in {\kappa^{\scaleto{(\mathbf X)}{4pt}}}}Y_u\subseteq\displaystyle\dot\bigcup_{u\in {\kappa^{\scaleto{(\mathbf Y)}{4pt}}}}Y_u\overset{(\ref{EZazX})}{=}Y$. \medskip \ref{OrderINGresZe}: From the first two lines in \ref{ReszalgebrasAGOK}, ${\preceq_G}_u\subseteq{\preceq_K}_u$ holds for $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$. Therefore, by Theorem~\ref{BUNCHalg_X}/\ref{BLGtoX}, ${\leq_G}_u\subseteq{\leq_K}_u$ follows if $u\in\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$, see the line before (\ref{KibovitettRendezesITTIS}). If $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ then a moment's reflection using the first two lines in \ref{ReszalgebrasAGOK} and (\ref{KibovitettRendezesITTIS}) shows that ${\leq_G}_u\subseteq{\leq_K}_u$ holds in this case, too. Summing up, \begin{equation}\label{ReSZreNDeZes} \mbox{ for $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, ${\leq_G}_u\subseteq{\leq_K}_u$ holds. } \end{equation} Now, let $x,y\in X$ such that $x\in G_u$, $y\in G_v$, $u,v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$. Then by (\ref{RendeZesINNOVATIVAN}) $x\leq_X y$ is equivalent to \begin{equation}\label{ElobbIgy} \mbox{ $\rho_{G,u\vee v}(x)\, {\leq_G}_{u\vee v}\, \rho_{G,u\vee v}(y)$ except if $u>_{\kappa^{\scaleto{(\mathbf X)}{4pt}}} v$ and $\rho_{G,u\vee v}(x)=\rho_{G,u\vee v}(y)$. } \end{equation} Using the first two lines in \ref{ReszalgebrasAGOK} it is immediate from (\ref{P5}) that \begin{equation}\label{RoKAVAext} \mbox{ for every $w\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, ${\rho_K}_w$ is an extension of ${\rho_G}_w$.
} \end{equation} Hence it follows, using (\ref{ReSZreNDeZes}) and (\ref{ElobbIgy}) that $$ \mbox{ $\rho_{K,u\vee v}(x)\, {\leq_K}_{u\vee v}\, \rho_{K,u\vee v}(y)$ except if $u>_{\kappa^{\scaleto{(\mathbf X)}{4pt}}} v$ and $\rho_{K,u\vee v}(x)=\rho_{K,u\vee v}(y)$, } $$ which implies $x\leq_Y y$. \medskip \ref{ReSTrict}: Referring to the first two lines in \ref{ReszalgebrasAGOK}, it follows from (\ref{canHOM}) that \begin{equation}\label{ExTENSionIZACO} \mbox{ ${h_K}_u$ is an extension of ${h_G}_u$. } \end{equation} For $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$ and $x,y\in X_u$, it follows that $u\overset{(\ref{ReSzeKAPPAnak})}{\in}{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$ and $x,y\overset{(\ref{XuRESZEyU})}{\in}Y_u$, and $$ \begin{array}{lll} \gpontu{x}{y}& \overset{(\ref{uPRODigy})}{=} &\left\{ \begin{array}{ll} \accentset{\bullet}{\left({{h_G}_u(x)}\cdot_u{{h_G}_u(y)}\right)} & \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$, $\g{{h_G}_u(x)}{{h_G}_u(y)}\in H_u^{\scaleto{(\mathbf X)}{4pt}}$ and $\neg(x,y\in H_u^{\scaleto{(\mathbf X)}{4pt}})$}\\ {{h_G}_u(x)}\cdot_u{{h_G}_u(y)} & \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$, $\g{{h_G}_u(x)}{{h_G}_u(y)}\notin H_u^{\scaleto{(\mathbf X)}{4pt}}$ or $x,y\in H_u^{\scaleto{(\mathbf X)}{4pt}}$}\\ x\cdot_u y& \mbox{ if $u\notin\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$}\\ \end{array} \right. \\ &\overset{(\ref{ExTENSionIZACO}), \ref{MindenkiResZe}}{=}&\left\{ \begin{array}{ll} \accentset{\bullet}{\left({{h_K}_u(x)}\cdot_u{{h_K}_u(y)}\right)} & \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$, $\g{{h_K}_u(x)}{{h_K}_u(y)}\in H_u^{\scaleto{(\mathbf Y)}{4pt}}$ and $\neg(x,y\in H_u^{\scaleto{(\mathbf Y)}{4pt}})$}\\ {{h_K}_u(x)}\cdot_u{{h_K}_u(y)} & \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$, $\g{{h_K}_u(x)}{{h_K}_u(y)}\notin H_u^{\scaleto{(\mathbf Y)}{4pt}}$ or $x,y\in H_u^{\scaleto{(\mathbf Y)}{4pt}}$}\\ x\cdot_u y& \mbox{ if $u\notin\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$}\\ \end{array} \right. 
\\ &\overset{(\ref{uPRODigy})}{=}& \gteu{x}{y} \end{array} $$ Summing up, \begin{equation}\label{ReSZsZoRZas} \mbox{ for $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}_u$ is the restriction of $\teu$ to $X_u$. } \end{equation} Now, let $x,y\in X$ such that $u,v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, $x\in X_u$, $y\in X_v$. Since $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}$ is commutative, we can safely assume $u\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} v$. Therefore, $ \gpont{x}{y} \overset{(\ref{EgySzeruTe})}{=} \gtepontM{v}{{\rho_G}_v(x)}{{\rho_G}_v(y)} \overset{(\ref{RoKAVAext})\, (\ref{ReSZsZoRZas})}{=} \gteM{v}{{\rho_K}_v(x)}{{\rho_K}_v(y)} \overset{(\ref{EgySzeruTe})}{=} \g{x}{y} $ holds. \medskip \ref{ReSTrictITE}: Referring to (\ref{IgYaReSi}), it is sufficient to prove that $\kompM{{\mathbin{*\mkern-9mu \circ}}}$ is an extension of $\kompM{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}}$. Since by the first row of \ref{ReszalgebrasAGOK} the inverse operation in \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} is an extension of the inverse operation in \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, for $x\in X$, \begin{equation}\label{SplitNegaKKK} \begin{array}{lll} \negaM{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}}{(\accentset{\bullet}x)}&\overset{(\ref{SplitNega})}{=}&\left\{ \begin{array}{ll} \ x^{-1_{G_u}} \ \ \ \ \ \ \ = \ \ x^{-1_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}} & \mbox{ \ \ \ \ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ and $\accentset{\bullet}x\in \accentset{\bullet} H_u^{\scaleto{(\mathbf X)}{4pt}}$}\\ \end{array} \right.
\\ \negaM{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}}{x}&\overset{(\ref{SplitNega})}{=}&\left\{ \begin{array}{llll} \accentset{\bullet}{\left(x^{-1_{G_u}}\right)} &=& \accentset{\bullet}{\left(x^{-1_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}}\right)}& \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ and $x\in H_u^{\scaleto{(\mathbf X)}{4pt}}$}\\ x^{-1_{G_u}}&=& x^{-1_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}} & \mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ and $x\in G_u\setminus H_u^{\scaleto{(\mathbf X)}{4pt}}$}\\ x^{-1_{G_u}}&=& x^{-1_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}} & \mbox{ if $u\in\kappa_o^{\scaleto{(\mathbf X)}{4pt}}$ and $x\in G_u$}\\ {x^{-1_{G_u}}}_{\downarrow_{G_u}}&=& {x^{-1_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}}}_{\downarrow_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}} & \mbox{ if $u\in\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$ and $x\in G_u$}\\ \end{array} \right. . \end{array} \end{equation} In the last row we also used that $\downarrow_{G_u^{\scaleto{(\mathbf Y)}{4pt}}}$ is extension of $\downarrow_{G_u}$, as it readily follows from the last condition in \ref{ReszalgebrasAGOK}. Referring to the first three items in \ref{MindenkiResZe} and first two items in \ref{ReszalgebrasAGOK}, it follows, respectively, that $$ \left\{ \begin{array}{lll} u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}} &\mbox{ and }& \accentset{\bullet}x\in \accentset{\bullet} H_u^{\scaleto{(\mathbf Y)}{4pt}} \\ u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}} &\mbox{ and }& x\in H_u^{\scaleto{(\mathbf Y)}{4pt}} \\ u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}} &\mbox{ and }& x\in G_u^{\scaleto{(\mathbf Y)}{4pt}}\setminus H_u^{\scaleto{(\mathbf Y)}{4pt}} \\ u\in\kappa_o^{\scaleto{(\mathbf Y)}{4pt}} &\mbox{ and }& x\in G_u^{\scaleto{(\mathbf Y)}{4pt}} \\ u\in\kappa_J^{\scaleto{(\mathbf Y)}{4pt}} &\mbox{ and }& x\in G_u^{\scaleto{(\mathbf Y)}{4pt}}\\ \end{array} \right. , $$ where in the middle row we also used the second condition from \ref{ReszalgebrasAGOK}. 
Therefore, by (\ref{SplitNega}), the expression in (\ref{SplitNegaKKK}) is equal to $$ \left\{ \begin{array}{ll} \negaM{{\mathbin{*\mkern-9mu \circ}}}{(\accentset{\bullet}x)} &\mbox{ if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ and $\accentset{\bullet}x\in \accentset{\bullet}H_u^{\scaleto{(\mathbf X)}{4pt}}$} \\ \negaM{{\mathbin{*\mkern-9mu \circ}}}{x} &\mbox{ if $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$ and $x\in G_u$} \\ \end{array} \right. . $$ \medskip \ref{FkegyenlOEk}: Obvious, since the falsum constant is the residual complement of the unit element (see (\ref{tLESZaz})), $t=\rm t$ (see \ref{TkegyenlOEk}), and $\kompM{{\mathbin{*\mkern-9mu \circ}}}$ is an extension of $\kompM{\mathbin{\circ\mkern-7mu \cdot\mkern2mu}}$ (see the proof of \ref{ReSTrictITE}). \end{proof} Corollary~\ref{HogyanLatszikCOR} shows how \lq $\mathbf X$ is embeddable into $\mathbf Y$\rq\, can be seen from the respective group representations of $\mathbf X$ and $\mathbf Y$. Since by Theorem~\ref{BUNCHalg_X} the universe of the skeleton and the universe of the layer groups are subsets of the universe of the algebra, any mapping $\iota$ from $\mathbf X$ to $\mathbf Y$ induces a mapping $\iota_\kappa$ on the skeleton of $\mathbf X$ and, for every layer $u$, a mapping $\iota_u$ on the $u^{\rm th}$-layer group \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} of $\mathbf X$. \begin{corollary}\label{HogyanLatszikCOR} Assume the hypotheses of Lemma~\ref{HogyanLatszik}. Then $\iota : \mathbf X\to\mathbf Y$ is an embedding if and only if the following conditions hold.
\begin{enumerate}[start=1,label={(E\arabic*)}] \item $\iota_\kappa:=\iota_{|_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}}$ is an embedding\footnote{Injective order preserving mapping.} of the skeleton of $\mathcal X$ into the skeleton of $\mathcal Y$, which preserves the least element and the partition, \item\label{E2LesZEz} For every $u\in\kappa^{\scaleto{(\mathbf X)}{4pt}}$, $\iota_u:=\iota_{|_{\textbf{\textit{G$^{\scaleto{(\mathbf X)}{3pt}}_u$}}}}$ is an ($o$-group) embedding of the $u^{\rm th}$-layer group \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} of $\mathcal X$ into the $\iota_\kappa(u)^{\rm th}$-layer group \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_{\iota_\kappa(u)}$}} of $\mathcal Y$ \footnote{A consequence of this can be phrased as \lq\lq amalgamation must be done layerwise\rq\rq.} such that \begin{enumerate} \item[-] for every $u,v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, $u<v$ the diagram in Fig.~\ref{HOMO_NocsaKK} commutes, \begin{figure}[ht] \begin{diagram} \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & \rEmbed_{\iota_u} & \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_{\iota_\kappa(u)}$}} \\ \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}}} & & \dTo_{\varsigma_{{\iota_\kappa(u)}\to {\iota_\kappa(v)}}^{\scaleto{(\mathbf Y)}{4pt}}} & \\ \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}} & \rEmbed_{\iota_v} & \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_{\iota_\kappa(v)}$}} \\ \end{diagram} \caption{} \label{HOMO_NocsaKK} \end{figure} \item[-] if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ then $\iota_u$ maps \textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}} into \textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_{\iota(u)}$}}, \item[-] if $u\in\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$ then $\iota_u$ preserves the cover of the unit element $u$ of \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}. 
\end{enumerate} \end{enumerate} \end{corollary} \begin{proof} For the last statement we mention that preserving the (upper) cover of the unit element is equivalent to preserving the lower cover of it, since these are the inverses of one another. \end{proof} \section{Extension of direct systems} For the main construction in the proof of Theorem~\ref{EzaTuTtI} we shall need the following lemma, which we have not been able to find in the literature. It states the extendability of direct systems over chains to larger chains. The proof of Lemma~\ref{ExtDirSyst} works for direct systems of algebras of any kind over chains. Here we state it only for abelian $o$-groups. \begin{lemma}\label{ExtDirSyst} \begin{enumerate}[(1)] \item Let $\langle \alpha,\leq \rangle$ be a subalgebra of the totally ordered set $\langle \beta,\leq \rangle$. Every direct system $\langle \mathbf A_u, f_{u\to v} \rangle$ of abelian $o$-groups over $\alpha$ can be extended to a direct system $\langle \mathbf B_u, g_{u\to v} \rangle$ of abelian $o$-groups over $\beta$, by letting for $s\in\beta$, \begin{equation}\label{EQtheWAYyouDOit} \mathbf B_s=\underset{\underset{i\in\alpha, i\leq s}{\longrightarrow}}{\lim}\, \mathbf A_i , \footnote{$\underset{\underset{i\in\alpha, i\leq s}{\longrightarrow}}{\lim}\, \mathbf A_i$ denotes the direct limit of the direct system $\langle \mathbf A_u, f_{u\to v} \rangle$ over (restricted to) the index set $\{i\in\alpha, i\leq s\}$.} \end{equation} and for $i,j\in\beta$, $i<j$, \begin{equation}\label{eLSo3} g_{i\to j}= \left\{ \begin{array}{ll} f_{i\to j} & \mbox{if $i\in\alpha$}\hfill\mbox{(extension)}\\ \phi_{i\to j} & \mbox{if $i\in\beta\setminus\alpha$, $j\in\alpha$}\\ id_{i\to j} & \mbox{if $i,j\in\beta\setminus\alpha$ and $\nexists w\in\alpha$ such that $i<w<j$}\\ \pi_{w\to j}\circ\phi_{i\to w} & \mbox{if $i,j\in\beta\setminus\alpha$ and $\exists w\in\alpha$ such that $i<w<j$}\\ \end{array} \right. .
\end{equation} \item\label{Dir2} It holds true that \begin{equation}\label{EqDir2} \mathbf B_s=\left\{ \begin{array}{ll} \underset{\underset{i\in\beta, i\leq s}{\longrightarrow}}{\lim}\, \mathbf A_i & \mbox{if $s\in\alpha$}\\ \underset{\underset{i\in\beta, i<s}{\longrightarrow}}{\lim}\, \mathbf A_i & \mbox{if $s\in\beta\setminus\alpha$}\\ \end{array} \right. \end{equation} \end{enumerate} \end{lemma} \begin{proof} The first line of (\ref{EqDir2}) holds since $\{i\in\alpha, i\leq s\}$ is cofinal in $\{i\in\beta, i\leq s\}$ if $s\in\alpha$ and hence their direct limits coincide, whereas the second line of (\ref{EqDir2}) holds since for $s\in\beta\setminus\alpha$, $\{i\in\alpha, i\leq s\}=\{i\in\alpha, i<s\}$. This ends the proof of \ref{Dir2}. It is immediate from the definition of the direct limit that for $s\in\alpha$, $\mathbf B_s\cong\mathbf A_s$ (extension), so for $s\in\alpha$ we shall identify $\mathbf B_s$ with $\mathbf A_s$, and thus for every $i\in\alpha$, $i\leq s$ it holds true that $\pi_{i\to s}$, the canonical homomorphism from $\mathbf A_i$ to $\mathbf B_s$, is equal to $f_{i\to s}$. Hence for $i\in\alpha$, we shall write $\pi_{i\to s}$ instead of $f_{i\to s}$ in the sequel, and rewrite the direct system property as $\pi_{j\to k}\circ\pi_{i\to j}=\pi_{i\to k}$ $(i,j,k\in\alpha)$. From the property of canonical homomorphisms, this identity also holds for $i,j\in\alpha$ and $k\in\beta\setminus\alpha$, hence it holds true that \begin{equation}\label{PiIsComm} \pi_{j\to k}\circ\pi_{i\to j}=\pi_{i\to k} \ \ (i,j\in\alpha, k\in\beta) . \end{equation} For $\beta\setminus\alpha\ni u<v\in\alpha$ let $\phi_{u\to v}$ denote the unique homomorphism from $\mathbf B_u$ to $\mathbf A_v$ guaranteed by the universal property based on the family $\{ f_{i\to v} : i\in\alpha, i\leq u\}$. Then it holds true that \begin{equation}\label{PhiIsCOMM} \phi_{j\to k}\circ\pi_{i\to j}=\pi_{i\to k} \ \ (i,k\in\alpha, j\in\beta\setminus\alpha) .
\end{equation} The fourth row of (\ref{eLSo3}) is well-defined. Indeed, if $i<z<j$ holds for some $z\in\alpha$ then since $\alpha$ is a chain we may assume $w<z$. Then for any $k\in\alpha$, $k<i$ it holds true that $ (\pi_{w\to z}\circ\phi_{i\to w})\circ\pi_{k\to i} = \pi_{w\to z}\circ(\phi_{i\to w}\circ\pi_{k\to i}) \overset{(\ref{PhiIsCOMM})}{=} \pi_{w\to z}\circ\pi_{k\to w} \overset{(\ref{PiIsComm})}{=} \pi_{k\to z} $, and hence by the unicity in the universal property we obtain \begin{equation}\label{MegKerulom} \pi_{w\to z}\circ\phi_{i\to w}=\phi_{i\to z} . \end{equation} Now, $ \pi_{w\to j}\circ\phi_{i\to w} \overset{(\ref{PiIsComm})}{=} (\pi_{z\to j}\circ\pi_{w\to z})\circ\phi_{i\to w} = \pi_{z\to j}\circ(\pi_{w\to z}\circ\phi_{i\to w}) \overset{(\ref{MegKerulom})}{=} \pi_{z\to j}\circ\phi_{i\to z} $ follows, so we are done. It remains to verify that $\langle \mathbf B_i, g_{i\to j} \rangle$ is a direct system over $\beta$, that is, to prove $$ g_{j\to k}\circ g_{i\to j}=g_{i\to k} $$ for $i,j,k\in\beta$, $i<j<k$ (in case of coincidence of any of the indices it is straightforward). \begin{itemize} \item[-] If $i,j\in\alpha$ then $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} \pi_{j\to k}\circ \pi_{i\to j} \overset{(\ref{PiIsComm})}{=} \pi_{i\to k} \overset{(\ref{eLSo3})}{=} g_{i\to k} $. \item[-] If $i,k\in\alpha$ and $j\in\beta\setminus\alpha$ then $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} \phi_{j\to k}\circ \pi_{i\to j} \overset{(\ref{PhiIsCOMM})}{=} \pi_{i\to k} \overset{(\ref{eLSo3})}{=} g_{i\to k} $. \item[-] If $i\in\alpha$, $j,k\in\beta\setminus\alpha$ and $\exists w\in\alpha$ such that $j<w<k$ then $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} (\pi_{w\to k}\circ\phi_{j\to w})\circ \pi_{i\to j} = \pi_{w\to k}\circ(\phi_{j\to w}\circ \pi_{i\to j}) \overset{(\ref{PhiIsCOMM})}{=} \pi_{w\to k}\circ\pi_{i\to w} \overset{(\ref{PiIsComm})}{=} \pi_{i\to k} \overset{(\ref{eLSo3})}{=} g_{i\to k} $.
\item[-] Assume $i\in\alpha$, $j,k\in\beta\setminus\alpha$ and $\nexists w\in\alpha$ such that $j<w<k$. Now $\mathbf B_j\cong\mathbf B_k$ follows from the last condition and (\ref{EQtheWAYyouDOit}), denote the related isomorphism by $id_{j\to k}$. From the definition of direct limits, $id_{j\to k}\circ\pi_{z\to j}=\pi_{z\to k}$ holds for any $z\in\alpha$, $z\leq j$. In particular, $id_{j\to k}\circ\pi_{i\to j}=\pi_{i\to k}$. Therefore $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} id_{j\to k}\circ \pi_{i\to j} = \pi_{i\to k} \overset{(\ref{eLSo3})}{=} g_{i\to k} $. \item[-] If $j,k\in\alpha$ and $i\in\beta\setminus\alpha$ then $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} \pi_{j\to k}\circ \phi_{i\to j} \overset{(\ref{MegKerulom})}{=} \phi_{i\to k} \overset{(\ref{eLSo3})}{=} g_{i\to k} $. \item[-] If $j\in\alpha$ and $i,k\in\beta\setminus\alpha$ then $ g_{j\to k}\circ g_{i\to j} \overset{(\ref{eLSo3})}{=} \pi_{j\to k}\circ \phi_{i\to j} \overset{(\ref{eLSo3})}{=} g_{i\to k} $. \end{itemize} \end{proof} Call $\langle \mathbf B_u, g_{u\to v} \rangle_\beta$ the closure of $\langle \mathbf A_u, f_{u\to v} \rangle_\alpha$ to $\beta$. To simplify notation, and since it is an extension, we shall denote the closure of $\langle \mathbf A_u, f_{u\to v} \rangle_\alpha$ to $\beta$ by $\langle \mathbf A_u, f_{u\to v} \rangle_\beta$. \section{Amalgamation in $\mathfrak I^\mathfrak c_0$ and $\mathfrak I^\mathfrak c_1$}\label{SECTamalg} The amalgamation property will be investigated in $\mathfrak J^\mathfrak c_\mathfrak 0$, $\mathfrak J^\mathfrak c_\mathfrak 1$, $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$, $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$, and $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ in this section. As it may be expected from Section~\ref{SEClinkkk}, first we will need some basic facts about partially ordered abelian groups ($po$-groups). 
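Before turning to $po$-groups, we mention that the closure construction of Lemma~\ref{ExtDirSyst} can be checked computationally on a small example. The following Python sketch is our own toy instance, not part of the proof: $\alpha=\{0,2\}$ is extended to $\beta=\{0,1,2,3\}$, with $\mathbf A_0=\mathbf A_2=\mathbb{Z}$ and $f_{0\to 2}(n)=2n$; then $\mathbf B_1\cong\mathbf A_0$ and $\mathbf B_3\cong\mathbf A_2$, the maps $g_{i\to j}$ follow the four cases of (\ref{eLSo3}), and the direct system identity $g_{j\to k}\circ g_{i\to j}=g_{i\to k}$ holds.

```python
# Toy instance (our own) of the closure of a direct system: alpha = {0, 2}
# inside beta = {0, 1, 2, 3}, with A_0 = A_2 = Z and f_{0->2}(n) = 2n.
# B_1 is the direct limit over {0} (a copy of A_0); B_3 is the limit over
# {0, 2} (a copy of A_2).  The maps g follow the four cases of the lemma.

double = lambda n: 2 * n
ident = lambda n: n

beta = [0, 1, 2, 3]
g = {
    (0, 1): ident,   # i in alpha: canonical map pi_{0->1} into B_1 ~ A_0
    (0, 2): double,  # i in alpha: f_{0->2}
    (0, 3): double,  # i in alpha: canonical map pi_{0->3} into B_3 ~ A_2
    (1, 2): double,  # i outside alpha, j in alpha: phi_{1->2}
    (1, 3): double,  # i, j outside alpha, w = 2 between: pi_{2->3} o phi_{1->2}
    (2, 3): ident,   # i in alpha: canonical map pi_{2->3}
}

# the direct system identity g_{j->k} o g_{i->j} = g_{i->k} holds throughout
for i in beta:
    for j in beta:
        for k in beta:
            if i < j < k:
                assert all(g[j, k](g[i, j](n)) == g[i, k](n)
                           for n in range(-5, 6))
```

Since there are no consecutive indices of $\beta\setminus\alpha$ in this example, the third case of (\ref{eLSo3}) (the isomorphism $id_{i\to j}$) does not occur here.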
The positive cone of a $po$-group is the set of its elements greater than or equal to the unit element of the group. Its strict positive cone is its positive cone without the unit element. If $\leq$ is the ordering of the abelian $po$-group $\mathbf G$ then we also express it by saying that $\leq$ is a partial order of $\mathbf G$. It is readily seen from \cite[Theorem 2 on page 13]{fuchs} that \begin{proposition}\label{PRstrictP} $P$ is the positive cone of an abelian $po$-group $\mathbf G$ if and only if $P\cap P^{-1}$ contains only the unit element and $PP\subseteq P$. $Q$ is its strict positive cone if and only if $Q\cap Q^{-1}=\emptyset$ and $QQ\subseteq Q$. \end{proposition} An easy consequence of Proposition~\ref{PRstrictP} is \begin{lemma}\label{genLEQs} If $P_1$ and $P_2$ are the positive cones of two partial orders of the abelian group $\mathbf G$, then \begin{enumerate}[i] \item there exists the smallest partial order of $\mathbf G$ which contains both $P_1$ and $P_2$ if and only if $P_1\cap {P_2}^{-1}=\{t\}$, where $t$ denotes the unit element. \item \label{MasoDIkkkk} If this is the case then its positive cone is $P_1\cup P_2\cup P_1P_2$. \end{enumerate} \end{lemma} \begin{proof} The positive cone of a partial order of $\mathbf G$ which extends both $P_1$ and $P_2$ must also contain $P_1P_2$ since the product of two positive elements must be positive. On the other hand, we claim that $P_1\cup P_2\cup P_1P_2$ is the positive cone of a partial order of $\mathbf G$ if and only if $P_1\cap{P_2}^{-1}=\{t\}$. That is, $P_1\cup P_2\cup P_1P_2$ is closed under multiplication, and $(P_1\cup P_2\cup P_1P_2)\cap(P_1\cup P_2\cup P_1P_2)^{-1}=\{t\}$. Using Proposition~\ref{PRstrictP} and that the multiplication is commutative, $(P_1\cup P_2\cup P_1P_2)(P_1\cup P_2\cup P_1P_2)=P_1P_1\cup P_1P_2\cup P_1P_1P_2\cup P_2P_1\cup P_2P_2\cup P_2P_1P_2\cup P_1P_2P_1\cup P_1P_2P_2\cup P_1P_2P_1P_2\subseteq P_1\cup P_2\cup P_1P_2$.
In addition, $ (P_1\cup P_2\cup P_1P_2)\cap(P_1\cup P_2\cup P_1P_2)^{-1}= (P_1\cup P_2\cup P_1P_2)\cap(P_1^{-1}\cup P_2^{-1}\cup (P_1P_2)^{-1}) = (P_1\cap P_1^{-1})\cup(P_1\cap P_2^{-1})\cup(P_1\cap (P_1P_2)^{-1})\cup(P_2\cap P_1^{-1})\cup(P_2\cap P_2^{-1})\cup(P_2\cap (P_1P_2)^{-1})\cup(P_1P_2\cap P_1^{-1})\cup(P_1P_2\cap P_2^{-1})\cup(P_1P_2\cap (P_1P_2)^{-1}) = \{t\} \cup(P_1\cap P_2^{-1}) \cup(P_1\cap (P_1P_2)^{-1}) \cup(P_1\cap P_2^{-1})^{-1}\cup\{t\} \cup(P_2\cap (P_1P_2)^{-1}) \cup (P_1\cap (P_1P_2)^{-1})^{-1} \cup(P_2\cap (P_1P_2)^{-1})^{-1} \cup(P_1P_2\cap (P_1P_2)^{-1}) $. Now, if $P_1\cap{P_2}^{-1}=\{t\}$ then the latter is equal to $ \{t\}\cup\{t\}\cup(P_1\cap (P_1P_2)^{-1})\cup\{t\}\cup\{t\}\cup(P_2\cap (P_1P_2)^{-1})\cup(P_1\cap (P_1P_2)^{-1})^{-1}\cup(P_2\cap (P_1P_2)^{-1})^{-1}\cup(P_1P_2\cap (P_1P_2)^{-1}) = \{t\} \cup(P_1\cap (P_1P_2)^{-1}) \cup(P_1\cap (P_1P_2)^{-1})^{-1} \cup(P_2\cap (P_1P_2)^{-1}) \cup(P_2\cap (P_1P_2)^{-1})^{-1} \cup(P_1P_2\cap (P_1P_2)^{-1}) $, and here every term in brackets equals $\{t\}$; we verify only one of them, e.g., $P_1\cap (P_1P_2)^{-1}=\{t\}$. Let $a\in P_1\cap (P_1P_2)^{-1}$. Then $a=a_1^{-1}a_2^{-1}$ where $a,a_1\in P_1$ and $a_2\in P_2$. Therefore, $a_2=a^{-1}a_1^{-1}\in P_1^{-1}$ holds, and it implies $a_2\in P_2\cap P_1^{-1}$, hence $a_2=t$ holds. It follows that $a=a_1^{-1}\in P_1\cap P_1^{-1}$. Hence $a=t$. All the other terms can be handled similarly. On the other hand, if $ \{t\} \cup(P_1\cap P_2^{-1}) \cup(P_1\cap (P_1P_2)^{-1}) \cup(P_1\cap P_2^{-1})^{-1}\cup\{t\} \cup(P_2\cap (P_1P_2)^{-1}) \cup (P_1\cap (P_1P_2)^{-1})^{-1} \cup(P_2\cap (P_1P_2)^{-1})^{-1} \cup(P_1P_2\cap (P_1P_2)^{-1}) = \{t\}$ then $P_1\cap P_2^{-1}$ cannot exceed $\{t\}$, and since $P_1$ and $P_2$ are positive cones, both $P_1$ and $P_2^{-1}$ contain $\{t\}$. Hence $P_1\cap P_2^{-1}=\{t\}$. \end{proof} The major steps of the construction of Theorem~\ref{EzaTuTtI} below are illustrated in Figures~\ref{C1}--\ref{C3}.
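Lemma~\ref{genLEQs} can also be sanity-checked numerically. The following Python sketch is not part of the proof: the ambient group $\mathbb Z^2$ (written additively, with unit $t=(0,0)$) and the two axis cones are illustrative choices, and all names (\texttt{in\_P1}, \texttt{in\_comb}, etc.) are ad hoc. It verifies the cone axioms of Proposition~\ref{PRstrictP} and the compatibility condition $P_1\cap P_2^{-1}=\{t\}$ on a finite window.

```python
from itertools import product

# Ambient abelian group: Z^2 written additively, with unit t = (0, 0).
t = (0, 0)
def add(x, y): return (x[0] + y[0], x[1] + y[1])
def neg(x): return (-x[0], -x[1])

# Two illustrative positive cones: P1 orders along the first axis only,
# P2 along the second axis only.
def in_P1(x): return x[1] == 0 and x[0] >= 0
def in_P2(x): return x[0] == 0 and x[1] >= 0

window = list(product(range(-3, 4), repeat=2))

# Cone axioms of the proposition: PP subset of P, and P meet P^{-1} = {t}.
def is_cone(in_P, pts):
    closed = all(in_P(add(x, y)) for x in pts for y in pts
                 if in_P(x) and in_P(y))
    antisym = all(x == t for x in pts if in_P(x) and in_P(neg(x)))
    return closed and antisym

assert is_cone(in_P1, window) and is_cone(in_P2, window)

# Compatibility condition of the lemma: P1 meet P2^{-1} = {t}.
assert [x for x in window if in_P1(x) and in_P2(neg(x))] == [t]

# Here P1 u P2 u P1P2 works out to the product-order cone: x = (a, b) is a
# sum p + q with p in P1, q in P2 exactly when a >= 0 and b >= 0.
def in_comb(x): return x[0] >= 0 and x[1] >= 0
assert all(in_comb(x) == any(in_P1(p) and in_P2(add(x, neg(p)))
                             for p in [(a, 0) for a in range(0, 4)])
           for x in window)
assert is_cone(in_comb, window)  # the smallest common extension is a partial order
```

In this toy instance the combined cone is the product order on $\mathbb Z^2$, in accordance with item (\ref{MasoDIkkkk}) of the lemma.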
\begin{figure}[ht] \begin{center} \includegraphics[width=1.2in, frame]{e1} \includegraphics[width=1.2in, frame]{e2} \includegraphics[width=1.2in, frame]{e3} \caption{The V-formation, the V-formation layerwise} \label{C1} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=1.2in, frame]{e4} \includegraphics[width=1.2in, frame]{e5} \caption{Closures to $\kappa^{\scaleto{(\mathbf W)}{4pt}}$, extension of the induced embeddings to $\kappa^{\scaleto{(\mathbf W)}{4pt}}$} \label{C2} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=1.2in, frame]{e6} \includegraphics[width=1.2in, frame]{e7} \includegraphics[width=1.2in, frame]{e8} \caption{Layerwise amalgamation in $\mathfrak A^\mathfrak l$, in $\mathfrak A^\mathfrak c$, and amalgamation in $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ and in $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$} \label{C3} \end{center} \end{figure} \begin{theorem}\label{EzaTuTtI} \begin{enumerate} \item $\mathfrak J^\mathfrak c_\mathfrak 0$ and $\mathfrak J^\mathfrak c_\mathfrak 1$ fail the amalgamation property. \item $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ and $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ have the amalgamation property. \item $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ and $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ fail the strong amalgamation property. \end{enumerate} \end{theorem} \begin{proof} (1) Denote by $\mathbbm 1$ the trivial (one-element) group and $\leq\,=\{\langle 0,0 \rangle,\langle 0,1 \rangle,\langle 1,1 \rangle\}$.
For any discrete abelian $o$-group \textbf{\textit{G$_1$}}, the involutive FL$_e$-chains given by $$ \mbox{ $\langle \{\mathbbm 1,\textbf{\textit{G$_1$}}\}, \emptyset, \varsigma_{0\to 1} \rangle_{\langle \{0\}, \{1\}, \emptyset,\leq\rangle}$ and $\langle \{\textbf{\textit{G$_1$}}\},\emptyset, \emptyset \rangle_{\langle \emptyset, \{1\}, \emptyset,\{\langle 1,1 \rangle\}\rangle}$ } $$ are from $\mathfrak J^\mathfrak c_\mathfrak 0$ or $\mathfrak J^\mathfrak c_\mathfrak 1$, respectively; see the table in Theorem~\ref{BUNCHalg_X}. Note that in both cases $1$ is in the $\kappa_J$ part of the partition, and in the $\kappa_J$ part of the partition the related abelian $o$-groups are discrete (cf. Definition~\ref{DEFbunch}). If we had an amalgam for a V-formation using algebras of this form, then according to Corollary~\ref{HogyanLatszikCOR}, on the $1$-layer it would yield an amalgam with an abelian $o$-group $\mathbf G$ which is discrete, too (since the induced embedding on the skeletons preserves the partition and therefore $\mathbf G$ is in the $\kappa_J$-layer of its layer group representation, too) and with normal and positive embeddings (since in a $\kappa_J$-layer the related embeddings must preserve the cover of the unit element, too, cf. the last item in Corollary~\ref{HogyanLatszikCOR}). However, it has been shown in \cite[Lemma 12]{ExCoAbLOG} that $\mathfrak A^{\mathfrak c\mathfrak d}$ with normal and positive embeddings does not have the amalgamation property, a contradiction. \bigskip (2) Let $\mathbf X,\mathbf Y,\mathbf Z\in\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ (or $\mathbf X,\mathbf Y,\mathbf Z\in\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$) and consider the V-formation in Fig.~\ref{Vformation}.
\begin{figure}[ht] \begin{diagram} & & \mathbf X & & \\ & \ldEmbed^{\iota_1} & & \rdEmbed^{\iota_2} & \\ \mathbf Y & & & & \mathbf Z \\ \end{diagram} \caption{} \label{Vformation} \end{figure} \noindent Let $$ \mbox{ ${\mathcal X}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf X)}{4pt}}, \kappa_J^{\scaleto{(\mathbf X)}{4pt}}, \kappa_I^{\scaleto{(\mathbf X)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\rangle}$}, $$ $$ \mbox{ ${\mathcal Y}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_J^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_I^{\scaleto{(\mathbf Y)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}\rangle}$, } $$ $$ \mbox{ ${\mathcal Z}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf Z)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf Z)}{4pt}}, \kappa_J^{\scaleto{(\mathbf Z)}{4pt}}, \kappa_I^{\scaleto{(\mathbf Z)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf Z)}{3pt}}}\rangle}$ } $$ be the respective layer group representations of $\mathbf X$, $\mathbf Y$, and $\mathbf Z$ (in the extended sense, see Remark~\ref{IgyIsNezheto}). Without loss of generality, we may assume that $\iota_1$ and $\iota_2$ are inclusion maps. Hence their induced mappings will be inclusion maps, too; see below.
By Corollary~\ref{HogyanLatszikCOR}, \begin{enumerate}[start=1,label={(C\arabic*)}] \item\label{kjHkjgjghgfghffFfjGJ34} the induced (order preserving) inclusion maps, see Fig.~\ref{NocsaKKwELSO}, \begin{figure}[ht] \begin{diagram} & & \mathbf {\kappa^{\scaleto{(\mathbf X)}{4pt}}} & & \\ & \ldEmbed^{\iota_{1,\kappa}} & & \rdEmbed^{\iota_{2,\kappa}} & \\ {\kappa^{\scaleto{(\mathbf Y)}{4pt}}} & & & & {\kappa^{\scaleto{(\mathbf Z)}{4pt}}} \end{diagram} \caption{} \label{NocsaKKwELSO} \end{figure} \noindent preserve the least element and the partition of $\kappa$, \item\label{C2leszEZ} denoting the induced ($o$-group) inclusion maps of $\iota_1$ and $\iota_2$ for every $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$ by $\iota_{1,u}$ and $\iota_{2,u}$, respectively, \begin{enumerate}[(a)] \item the diagram in Fig.~\ref{NocsaKK55} commutes for every $u,v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, $u<v$, \begin{figure}[ht] \begin{diagram} \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \lEmbed_{\iota_{1,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & \rEmbed_{\iota_{2,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}} & & \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}}} & & \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}} & \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}} & \lEmbed_{\iota_{1,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}} & \rEmbed_{\iota_{2,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_v$}} \\ \end{diagram} \caption{} \label{NocsaKK55} \end{figure} \item if $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ then $\iota_{1,u}$ and $\iota_{2,u}$ preserve the layer subgroups, that is, $\iota_{1,u}(\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}})\subseteq\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$, $\iota_{2,u}(\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}})\subseteq\textbf{\textit{H$^{\scaleto{(\mathbf Z)}{4pt}}_u$}}$,
\end{enumerate} \end{enumerate} Notice that the last condition of Corollary~\ref{HogyanLatszikCOR} can be omitted here since $\kappa_J^{\scaleto{(\mathbf X)}{4pt}}=\emptyset$ holds by the $\mathfrak{symm}$ subclass restriction for $\mathbf X$. We shall construct an amalgam of $\mathtt V$ in $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ (or in $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$) as follows. There exists a strong amalgamation of Fig.~\ref{NocsaKKwELSO} in $\mathfrak C$ as shown in Fig.~\ref{NocsaKKw}. \begin{figure}[ht] \begin{diagram} & & \mathbf \kappa^{\scaleto{(\mathbf X)}{4pt}} & & \\ & \ldEmbed^{\iota_{1,\kappa}} & & \rdEmbed^{\iota_{2,\kappa}} & \\ \kappa^{\scaleto{(\mathbf Y)}{4pt}} & \rEmbed^{\nu_1} & \kappa^{\scaleto{(\mathbf W)}{4pt}} & \lEmbed^{\nu_2} & \kappa^{\scaleto{(\mathbf Z)}{4pt}} \end{diagram} \caption{} \label{NocsaKKw} \end{figure} To simplify notation, we shall assume that $\nu_1$ and $\nu_2$ are inclusion maps, that is, \begin{equation}\label{KaMuEtLa} {\kappa^{\scaleto{(\mathbf X)}{4pt}}}={\kappa^{\scaleto{(\mathbf Y)}{4pt}}}\cap{\kappa^{\scaleto{(\mathbf Z)}{4pt}}}, \ {\kappa^{\scaleto{(\mathbf W)}{4pt}}}={\kappa^{\scaleto{(\mathbf Y)}{4pt}}}\cup{\kappa^{\scaleto{(\mathbf Z)}{4pt}}} . \end{equation} \bigskip Let \begin{equation}\label{PaRTitiOn} \mbox{ $\kappa_o^{\scaleto{(\mathbf W)}{4pt}}=\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf Z)}{4pt}}$, $\kappa_J^{\scaleto{(\mathbf W)}{4pt}}=\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}$, $\kappa_I^{\scaleto{(\mathbf W)}{4pt}}=\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}$.
} \end{equation} Let $$ \begin{array}{lllll} \langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} & \mbox{be the closure of } & \langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf X)}{4pt}}} & \mbox{to} & {\kappa^{\scaleto{(\mathbf W)}{4pt}}},\\ \langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} & \mbox{be the closure of } & \langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf Y)}{4pt}}} & \mbox{to} & {\kappa^{\scaleto{(\mathbf W)}{4pt}}},\\ \langle \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} & \mbox{be the closure of } & \langle \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf Z)}{4pt}}} & \mbox{to} & {\kappa^{\scaleto{(\mathbf W)}{4pt}}}.\\ \end{array} $$ We extend the family $\{\iota_{1,u} : u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}\}$ of embeddings to a family $\{\iota_{1,u} : u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}\}$ of embeddings as follows. 
\begin{comment} Recall from Lemma~\ref{ExtDirSyst} that $$ \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}}= \left\{ \begin{array}{ll} \underset{\underset{u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}, u\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v}{\longrightarrow}}{\lim}\, \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & \mbox{if $v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$}\\ \underset{\underset{u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}, u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v}{\longrightarrow}}{\lim}\, \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & \mbox{if $v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}\setminus{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$}\\ \underset{\underset{u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}, u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v}{\longrightarrow}}{\lim}\, \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & \mbox{if $v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}\setminus{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$}\\ \end{array} \right. , $$ $$ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}}= \left\{ \begin{array}{ll} \underset{\underset{u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}, u\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v}{\longrightarrow}}{\lim}\, \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \mbox{if $v\in{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$}\\ \underset{\underset{u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}, u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v}{\longrightarrow}}{\lim}\, \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \mbox{if $v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}\setminus{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$}\\ \end{array} \right. . 
$$ \end{comment} For $v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, let $\iota_{1,v}$ be the unique homomorphism mapping \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}} to \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}}, guaranteed by the universal property, based on the family of homomorphisms $$\{\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}\circ\iota_{1,u} : G_u^{\scaleto{(\mathbf X)}{4pt}} \to G_v^{\scaleto{(\mathbf Y)}{4pt}} \ | \ {\kappa^{\scaleto{(\mathbf X)}{4pt}}}\ni u\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v\}.$$ Clearly, whenever $v\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$, the map $\iota_{1,v}$ thus defined coincides with the $\iota_{1,v}$ in \ref{C2leszEZ}, so this definition is an extension, and the $\iota_{1,v}$'s thus defined make the left square of the diagram in Fig.~\ref{NocsaKK55} commute for every ${\kappa^{\scaleto{(\mathbf X)}{4pt}}}\ni u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$. Moreover, by Lemma~\ref{ExtDirSyst}, they make the left square of the diagram in Fig.~\ref{NocsaKK55} commute even for every ${\kappa^{\scaleto{(\mathbf W)}{4pt}}}\ni u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$. In addition, $\iota_{1,v}$ is an embedding, too: since the elements of $\{\iota_{1,u} : {\kappa^{\scaleto{(\mathbf X)}{4pt}}}\ni u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v\}$ are all embeddings, the unique homomorphism which makes the left square of the diagram in Fig.~\ref{NocsaKK55} commute for every ${\kappa^{\scaleto{(\mathbf X)}{4pt}}}\ni u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$ is an embedding by Lemma~\ref{EZisMONO}. Likewise, we extend the family $\{\iota_{2,u} : u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}\}$ of embeddings in \ref{C2leszEZ} to a family $\{\iota_{2,u} : u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}\}$ of embeddings. Summing up, the diagram in Fig.~\ref{NocsaKK55} commutes for every $u,v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, $u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$.
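The two mechanisms used above, the composition law \ref{(G1)} of a direct system and the commutation of the squares in Fig.~\ref{NocsaKK55}, can be illustrated on a toy example. In the following Python sketch (not part of the construction; the chain $0<1<2$, the multiplication-type transition maps, and the multiplication-by-$5$ embeddings are ad hoc illustrative choices) every layer group is $\mathbb Z$ and both properties are checked on a range of sample elements.

```python
# Toy direct system over the chain 0 < 1 < 2: every layer group is Z,
# the transition homomorphisms are injective multiplication maps.
sigma = {(0, 1): lambda x: 2 * x,
         (1, 2): lambda x: 3 * x}
# Direct-system law (G1): sigma_{u->w} = sigma_{v->w} o sigma_{u->v}.
sigma[(0, 2)] = lambda x: sigma[(1, 2)](sigma[(0, 1)](x))
assert all(sigma[(0, 2)](x) == 6 * x for x in range(-10, 11))

# A family of embeddings between two copies of the system (here: x -> 5x);
# each square  iota_v o sigma_{u->v} = sigma_{u->v} o iota_u  must commute,
# mirroring the commuting squares of the figure.
iota = {u: (lambda x: 5 * x) for u in (0, 1, 2)}
for (u, v), s in sigma.items():
    assert all(iota[v](s(x)) == s(iota[u](x)) for x in range(-10, 11))
```

The squares commute here simply because integer multiplication is commutative; in the construction above, commutation is instead forced by the unicity part of the universal property of the direct limit.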
\bigskip For every $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$ consider the free product in $\mathfrak A^\mathfrak l$ of \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} and \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} with \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} amalgamated: let \begin{equation} \label{eLSo2} \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}}=\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}\ast_{\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}} \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} , \end{equation} equipped with (abelian $\ell$-group) embeddings $\iota_{3,u}$ and $\iota_{4,u}$ according to Fig.~\ref{HomoMOUZT}. \begin{figure}[ht] \begin{diagram} & & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & & \\ & \ldEmbed^{\iota_{1,u}} & & \rdEmbed^{\iota_{2,u}} & \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \rEmbed_{\iota_{3,u}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} & \lEmbed_{\iota_{4,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \end{diagram} \caption{} \label{HomoMOUZT} \end{figure} For $u,v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, $u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$, let $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ be the unique (abelian $\ell$-group) homomorphism which makes the two squares in the middle of Fig.~\ref{amalgamationTwiceFIGpre} commute.
\begin{figure}[ht] \begin{diagram} & & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & & \\ & \ldEmbed^{\iota_{1,u}} & & \rdEmbed^{\iota_{2,u}} & \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \rEmbed_{\iota_{3,u}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} & \lEmbed_{\iota_{4,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}} & & \dDashto{\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}} & & \dTo_{\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}} \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}} & \rEmbed_{\iota_{3,v}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_v$}} & \lEmbed_{\iota_{4,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_v$}} \\ & \luEmbed_{\iota_{1,v}} & & \ruEmbed_{\iota_{2,v}} & \\ & & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}} & & \\ \end{diagram} \caption{} \label{amalgamationTwiceFIGpre} \end{figure} To see that $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ is well-defined, that is, to infer the existence and the unicity of $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ from the universal property described in \ref{UnivProp}, we have to show $ \iota_{3,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}\circ\iota_{1,u} = \iota_{4,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}\circ\iota_{2,u} $.
Using that $\iota_{1,u}$, $\iota_{2,u}$, $\iota_{1,v}$, $\iota_{2,v}$ are embeddings, \ref{ResTRIcTIon} implies $\iota_{1,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}}=\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}\circ\iota_{1,u}$ and $\iota_{2,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}}=\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}\circ\iota_{2,u}$, and hence $\iota_{3,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}\circ\iota_{1,u}=\iota_{3,v}\circ\iota_{1,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \overset{\ref{commuteREF}}{=} \iota_{4,v}\circ\iota_{2,v}\circ\varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}}=\iota_{4,v}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}\circ\iota_{2,u}$, so we are done. Next we prove that $ \langle \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle $ is a direct system of abelian $\ell$-groups over $\kappa^{\scaleto{(\mathbf W)}{4pt}}$. To see that the $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$'s satisfy the direct system property, see \ref{(G1)}, we proceed as follows. Referring to the unicity in the universal property \ref{UnivProp}, to prove $$\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}=\varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$$ for $u,v,w\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, $u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}w$, it suffices to prove that $\varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ makes the lower two squares in Fig.~\ref{amalgamationTwiceFIG2} commute.
\begin{figure}[ht] \begin{diagram} \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \rEmbed_{\iota_{3,u}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} & \lEmbed_{\iota_{4,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}} & & \dDashto{\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}} & & \dTo_{\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}} \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}} & \rEmbed_{\iota_{3,v}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_v$}} & \lEmbed_{\iota_{4,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_v$}} & \\ \dTo^{\varsigma^{\scaleto{(\mathbf Y)}{4pt}}_{v\to w}} & & \dDashto{\varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}} & & \dTo_{\varsigma_{v\to w}^{\scaleto{(\mathbf Z)}{4pt}}} \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_w$}} & \rEmbed_{\iota_{3,w}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_w$}} & \lEmbed_{\iota_{4,w}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_w$}} \\ \uTo^{\varsigma^{\scaleto{(\mathbf Y)}{4pt}}_{u\to w}} & & \uDashto{\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}} & & \uTo_{\varsigma_{u\to w}^{\scaleto{(\mathbf Z)}{4pt}}} \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \rEmbed_{\iota_{3,u}} & \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} & \lEmbed_{\iota_{4,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \end{diagram} \caption{} \label{amalgamationTwiceFIG2} \end{figure} That is, $ \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}\circ \iota_{3,u} = \iota_{3,w}\circ\varsigma^{\scaleto{(\mathbf Y)}{4pt}}_{u\to w} $ and $ \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}\circ \iota_{4,u} = \iota_{4,w}\circ\varsigma_{u\to w}^{\scaleto{(\mathbf Z)}{4pt}} $ hold. We prove the first one; the other can be proven analogously.
Using that the upper four little squares in Fig.~\ref{amalgamationTwiceFIG2} commute, $ \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}\circ \iota_{3,u} = \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}\circ \iota_{3,v}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} = \iota_{3,w}\circ\varsigma^{\scaleto{(\mathbf Y)}{4pt}}_{v\to w}\circ \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \overset{\ref{(G1)}}{=} \iota_{3,w}\circ\varsigma^{\scaleto{(\mathbf Y)}{4pt}}_{u\to w} $ holds, so we are done. \bigskip Our next aim is to derive a direct system $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ of abelian $o$-groups from the direct system $ \langle \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ of abelian $\ell$-groups such that for every $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$ the diagram in Fig.~\ref{amalgamationTwiceFIG} commutes.
\begin{figure}[ht] \begin{diagram} & & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}} & & \\ & \ldEmbed^{\iota_{1,u}} & & \rdEmbed^{\iota_{2,u}} & \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} & \rEmbed_{\iota_{3,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} & \lEmbed_{\iota_{4,u}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_u$}} \\ \dTo^{\varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}}} & & \dDashto{\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}} & & \dTo_{\varsigma_{u\to v}^{\scaleto{(\mathbf Z)}{4pt}}} \\ \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}} & \rEmbed_{\iota_{3,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} & \lEmbed_{\iota_{4,v}} & \textbf{\textit{G$^{\scaleto{(\mathbf Z)}{4pt}}_v$}} \\ & \luEmbed_{\iota_{1,v}} & & \ruEmbed_{\iota_{2,v}} & \\ & & \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_v$}} & & \\ \end{diagram} \caption{} \label{amalgamationTwiceFIG} \end{figure} The aim will be achieved by changing only the underlying orderings. Let $\mathcal P$ be the set of families $\{P_u : u\in \kappa^{\scaleto{(\mathbf W)}{4pt}} \}$ where each $P_u$ is a partial order which extends the lattice order of \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} such that $ \langle \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $, where the ordering of \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} is replaced by $P_u$, is a direct system of abelian $po$-groups. It has just been shown that $\mathcal P$ is nonempty. For $p,q\in\mathcal P$, $p=\{P_u : u\in \kappa^{\scaleto{(\mathbf W)}{4pt}} \}$, $q=\{Q_u : u\in \kappa^{\scaleto{(\mathbf W)}{4pt}} \}$, we let $p\preceq q$ if for every $u\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$, $P_u\subseteq Q_u$.
Then $\preceq$ is a partial ordering of $\mathcal P$, and since the union of chains of partial orders (ordered by inclusion) is a partial order, too, the union of any chain in $\langle\mathcal P,\preceq\rangle$ is in $\mathcal P$. It follows from Zorn's lemma that $\langle\mathcal P,\preceq\rangle$ has a maximal element $m=\{\leq^{\scaleto{(\mathbf W)}{4pt}}_u : u\in \kappa^{\scaleto{(\mathbf W)}{4pt}} \}$. For every $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$, denote the related positive cone by $P^{\scaleto{(\mathbf W)}{4pt}}_u$ and the strict positive cone by $Q^{\scaleto{(\mathbf W)}{4pt}}_u$. For $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$, replace the ordering of \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_u$}} by $\leq^{\scaleto{(\mathbf W)}{4pt}}_u$, and denote the obtained abelian $po$-group by \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}. By the definition of $\mathcal P$, $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ is a direct system of abelian $po$-groups. We claim that $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ is, in fact, a direct system of abelian $o$-groups. Its proof amounts to proving that for every $i\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$, $\leq^{\scaleto{(\mathbf W)}{4pt}}_i$ is a total order on \textit{G$^{\scaleto{(\mathbf W)}{4pt}}_i $}. Assume, for {\em reductio ad absurdum}, that there is $s\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$ such that $$ \mbox{ $\leq^{\scaleto{(\mathbf W)}{4pt}}_s$ is not total.} $$ Our plan is to construct an element of $\mathcal P$ which is strictly larger than $m$, thus obtaining a contradiction.
To that end, we first show that in the direct system $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ the preimage of any strictly positive element is strictly positive. More formally, \medskip\noindent $\bullet$ for $u,v\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$, $u\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$ it holds true that $$(\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}})^{-1}(Q^{\scaleto{(\mathbf W)}{4pt}}_v)\subseteq Q^{\scaleto{(\mathbf W)}{4pt}}_u.$$ Consider, for $u\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$, $$ \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u = \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u \cup\{t^{\scaleto{(\mathbf W)}{4pt}}_u\} , $$ where $ \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u = \left\{ x_u\in \textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$} : \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_u) \in Q^{\scaleto{(\mathbf W)}{4pt}}_v \mbox{ for some $\kappa^{\scaleto{(\mathbf W)}{4pt}}\ni v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}u$} \right\} . $ \medskip\noindent $-$ Notice that since the case $v=u$ is included in the definition of $\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u$ it holds true, for $u\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$, that $$ P^{\scaleto{(\mathbf W)}{4pt}}_u \subseteq \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u . $$ \medskip\noindent $-$ Next we claim that for $u\in \kappa^{\scaleto{(\mathbf W)}{4pt}}$, $\bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u$ (as a positive cone) makes the group reduct of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} a $po$-group. By Proposition~\ref{PRstrictP} it suffices to prove that \begin{enumerate}[(i)] \item $\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u\cap{\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u}^{-1}=\emptyset$ and \item $\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u \subseteq \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u$.
\end{enumerate} (i) Assume $a\in\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u\cap{\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u}^{-1}$. Then $a,a^{-1}\in\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u$. There exist $j,k\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} u$ such that $\varsigma_{u\to j}^{\scaleto{(\mathbf W)}{4pt}}(a)$ and $\varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a^{-1})$ are strictly positive (in their respective algebras). Without loss of generality we may assume $j\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} k$. It follows that $ \varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a) = \varsigma_{j\to k}^{\scaleto{(\mathbf W)}{4pt}} ( \varsigma_{u\to j}^{\scaleto{(\mathbf W)}{4pt}}(a)) $ is positive since $\varsigma_{u\to j}^{\scaleto{(\mathbf W)}{4pt}}(a)$ is positive and $\varsigma_{j\to k}^{\scaleto{(\mathbf W)}{4pt}}$ is a positive homomorphism. Since $\varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a)$ is positive and $\varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a^{-1})$ is strictly positive, $\varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(aa^{-1})= \varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a) \varsigma_{u\to k}^{\scaleto{(\mathbf W)}{4pt}}(a^{-1})$ is strictly positive, a contradiction, since unit elements are mapped into unit elements. \\ (ii) Let $a,b\in\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u$. Then there exist $v,w\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}u$ such that $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(a)$ and $\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(b)$ are strictly positive. Without loss of generality we may assume $v\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}w$. It follows that $ \varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(a) = \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}} ( \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(a)) $ is positive since $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(a)$ is positive and $\varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}}$ is a positive homomorphism.
Since $\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(a)$ is positive and $\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(b)$ is strictly positive, $\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(ab)= \varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(a) \varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(b)$ is strictly positive, ensuring $ab\in\bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u$. \medskip\noindent $-$ Finally, we claim that if $x_u\in \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u$ then $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_u)\in \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ holds for $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}u$. Indeed, the case $v=u$ being obvious, we may assume $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}u$. Since $x_u\in \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u$, either $x_u=t^{\scaleto{(\mathbf W)}{4pt}}_u$ in which case the statement obviously holds, or there exists $w\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}u$ such that $\varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(x_u)\in Q^{\scaleto{(\mathbf W)}{4pt}}_w$. 
If $w\leq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$ then $ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_u) = \varsigma_{w\to v}^{\scaleto{(\mathbf W)}{4pt}} ( \varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(x_u) ) \in \varsigma_{w\to v}^{\scaleto{(\mathbf W)}{4pt}} ( Q^{\scaleto{(\mathbf W)}{4pt}}_w ) \subseteq P^{\scaleto{(\mathbf W)}{4pt}}_v \subseteq \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_v $ follows since $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ is a direct system and $ P^{\scaleto{(\mathbf W)}{4pt}}_v \subseteq \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_v $, respectively, whereas if $w>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}v$ then $ \varsigma_{v\to w}^{\scaleto{(\mathbf W)}{4pt}} ( \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_u) ) = \varsigma_{u\to w}^{\scaleto{(\mathbf W)}{4pt}}(x_u) \in Q^{\scaleto{(\mathbf W)}{4pt}}_w $ implies $ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_u) \in \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_v \subseteq \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_v $. \medskip\noindent We have just proved that $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ modified by extending, for $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$, the positive cone $P^{\scaleto{(\mathbf W)}{4pt}}_u$ of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} to $\bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u$ is a direct system of abelian $po$-groups. Since $m$ is maximal, for $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$, $ P^{\scaleto{(\mathbf W)}{4pt}}_u = \bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u $ and hence $ Q^{\scaleto{(\mathbf W)}{4pt}}_u = \bar{Q}^{\scaleto{(\mathbf W)}{4pt}}_u $ follow. Therefore, the preimage of any strictly positive element is strictly positive, so our claim is settled.
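\medskip\noindent To see the mechanism of the cones $\bar{P}^{\scaleto{(\mathbf W)}{4pt}}_u$ in isolation, consider the following toy instance (an illustration of ours, written additively; it is not part of the construction above): take the index set $\{0,1\}$ with $0\leq 1$, let both groups be $\mathbb{Z}$ with $\varsigma_{0\to 1}$ the identity map, order the upper copy by the natural order of $\mathbb{Z}$ (so $Q_1=\{n\in\mathbb{Z}:n>0\}$), and order the lower copy trivially (so $P_0=\{0\}$ and $Q_0=\emptyset$). Then
$$
\bar{Q}_0=\{n\in\mathbb{Z} : \varsigma_{0\to 1}(n)\in Q_1\}=\{n\in\mathbb{Z}:n>0\},
\qquad
\bar{P}_0=\bar{Q}_0\cup\{0\}=\{n\in\mathbb{Z}:n\geq 0\},
$$
that is, $\bar{P}_0$ enlarges the trivial cone $P_0$ exactly by pulling back positivity along $\varsigma_{0\to 1}$, and the identity map remains positive with respect to the enlarged cones.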
\medskip\noindent We are ready to construct an element of $\mathcal P$ which is strictly larger than $m$. \medskip\noindent $\bullet$ First we show that $P^{\scaleto{(\mathbf W)}{4pt}}_s$ can be extended into a total order of the group reduct of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_s$}}. By \cite[Corollary 13 on page 39]{fuchs} any partial order of an abelian group can be extended to a total order of the group if and only if the group is torsion-free. But \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_s$}} is torsion-free since \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_s$}} is an abelian $\ell$-group, $\ell$-groups are known to be torsion-free (see e.g. \cite[Corollary 0.1.2.]{lgroups}), and \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_s$}} shares its multiplication with \textbf{\textit{L$^{\scaleto{(\mathbf W)}{4pt}}_s$}}. Hence there exists an extension $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ of $P^{\scaleto{(\mathbf W)}{4pt}}_s$ which makes the group reduct of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_s$}} an abelian $o$-group. \medskip For $\kappa^{\scaleto{(\mathbf W)}{4pt}}\ni v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$ let \begin{equation}\label{EQnagyobbaKRa} \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v = \varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}} (\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s) . \end{equation} \medskip\noindent $\bullet$ We claim that for all $\kappa^{\scaleto{(\mathbf W)}{4pt}}\ni v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ (as a positive cone) makes the group reduct of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} a $po$-group. (i) Let $ a\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v \cap ({\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v})^{-1} $. Then $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(a_s)=a$ and $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(b_s)=a^{-1}$, for some $a_s,b_s\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$. 
Now $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}({b_s}^{-1})=a$ follows, hence ${b_s}^{-1}\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$, too. Since $b_s,{b_s}^{-1}\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ and $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ is a positive cone, it holds true that $b_s=t^{\scaleto{(\mathbf W)}{4pt}}_s$. Hence $a^{-1}=t^{\scaleto{(\mathbf W)}{4pt}}_v$ and therefore, $a=t^{\scaleto{(\mathbf W)}{4pt}}_v$. \\ (ii) Let $ a,b\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v $. Then $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(a_s)=a$ and $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(b_s)=b$, for some $a_s,b_s\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$. Since $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ is a positive cone, it holds true that $a_s b_s\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ and hence $ ab= \varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(a_s)\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(b_s) = \varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(a_s b_s) \in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v $. \medskip\noindent $\bullet$ Next, we claim that $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ modified by replacing, for $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, the partial order $\leq^{\scaleto{(\mathbf W)}{4pt}}_v$ of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} by the partial order given by (the positive cone) $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ is a direct system of abelian $po$-groups. We need to prove the positivity of the $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$'s. 
\\ - If $u,v<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$ then $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ is positive since the orderings of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} and \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} have not been changed in the direct system $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $. \\ - Assume $u<_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$ and $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$. Since $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s\supseteq P^{\scaleto{(\mathbf W)}{4pt}}_s$, $\varsigma_{u\to s}^{\scaleto{(\mathbf W)}{4pt}}$ is positive, so we may assume $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$. Because of the definition of $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ in (\ref{EQnagyobbaKRa}), $\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}$ is positive, too. Therefore, $ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} = \varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}} \circ \varsigma_{u\to s}^{\scaleto{(\mathbf W)}{4pt}} $ is positive. \\ - Assume $u\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$. Let $a\in\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_u$. There exists $x_s\in\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ such that $a=\varsigma_{s\to u}^{\scaleto{(\mathbf W)}{4pt}}(x_s)$, hence $ \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(a)= \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}(\varsigma_{s\to u}^{\scaleto{(\mathbf W)}{4pt}}(x_s)) = \varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_s) $ is in $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ by (\ref{EQnagyobbaKRa}).
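\medskip\noindent A minimal instance of the total-order extension used above (an illustration of ours, independent of the construction): the group $\mathbb{Z}^2$ (written additively) is torsion-free, and its coordinatewise partial order, with positive cone $P=\{(x,y):x\geq 0 \mbox{ and } y\geq 0\}$, extends to the lexicographic total order, with positive cone
$$
\widetilde{P}=\{(x,y):x>0\}\cup\{(0,y):y\geq 0\}\supseteq P .
$$
Indeed, $\widetilde{P}\cap(-\widetilde{P})=\{(0,0)\}$, $\widetilde{P}+\widetilde{P}\subseteq\widetilde{P}$, and for every $(x,y)\in\mathbb{Z}^2$ either $(x,y)\in\widetilde{P}$ or $-(x,y)\in\widetilde{P}$, so $\widetilde{P}$ makes $\mathbb{Z}^2$ an abelian $o$-group.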
\medskip\noindent Although we constructed $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ such that $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s\supseteq P^{\scaleto{(\mathbf W)}{4pt}}_s$, and also $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s\supset P^{\scaleto{(\mathbf W)}{4pt}}_s$ follows by the indirect assumption (since $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ is total and $P^{\scaleto{(\mathbf W)}{4pt}}_s$ is not), $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v\supseteq P^{\scaleto{(\mathbf W)}{4pt}}_v$ does not necessarily hold for $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$. Therefore, the direct system of the previous point is not necessarily larger than $m$. Our next aim is to construct one which is, by considering the partial order generated by $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ and $P^{\scaleto{(\mathbf W)}{4pt}}_v$ at those indices. To that end \medskip\noindent $\bullet$ first we claim that for $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, $(\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v)^{-1}\cap P^{\scaleto{(\mathbf W)}{4pt}}_v=\{t^{\scaleto{(\mathbf W)}{4pt}}_v\}$. Assume $ t^{\scaleto{(\mathbf W)}{4pt}}_v\neq a\in(\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v)^{-1}\cap P^{\scaleto{(\mathbf W)}{4pt}}_v $. Since $a^{-1}\in\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ there exists $x_s\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$ such that $a^{-1}=\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_s)$. Therefore, $a=\varsigma_{s\to v}^{\scaleto{(\mathbf W)}{4pt}}(x_s^{-1})$ holds. Since $a\in Q^{\scaleto{(\mathbf W)}{4pt}}_v$ and since the preimage of any strictly positive element is strictly positive, it follows that $x_s^{-1}\in Q^{\scaleto{(\mathbf W)}{4pt}}_s$. Since $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s\supseteq P^{\scaleto{(\mathbf W)}{4pt}}_s$, $x_s^{-1}$ is strictly positive in $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$, too, contradicting $x_s\in \widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$.
\medskip\noindent Therefore, by Lemma~\ref{genLEQs}, for $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, there exists the smallest partial order (denote its positive cone by $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_v$) of the group reduct of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} which contains both $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ and $P^{\scaleto{(\mathbf W)}{4pt}}_v$, and let $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_s=\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_s$. Since the homomorphisms of $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ are positive (with respect to the system of its own positive cones $P^{\scaleto{(\mathbf W)}{4pt}}_i$) and are also positive with respect to the system of positive cones where for $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, $P^{\scaleto{(\mathbf W)}{4pt}}_u$ is replaced by $\widetilde{P}^{\scaleto{(\mathbf W)}{4pt}}_u$, it is obvious from claim~\ref{MasoDIkkkk} of Lemma~\ref{genLEQs} that the homomorphisms of $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ are positive, too, with respect to the systems of positive cones where for $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, $P^{\scaleto{(\mathbf W)}{4pt}}_v$ is replaced by $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_v$. Therefore, $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ modified by replacing, for $v\geq_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, the positive cone $P^{\scaleto{(\mathbf W)}{4pt}}_v$ of \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} by $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_v$ is a direct system of abelian $po$-groups. 
Since for $v>_{\kappa^{\scaleto{(\mathbf W)}{4pt}}}s$, $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_v\supseteq P^{\scaleto{(\mathbf W)}{4pt}}_v$, and since $\widehat{P}^{\scaleto{(\mathbf W)}{4pt}}_s\supset P^{\scaleto{(\mathbf W)}{4pt}}_s$, this contradicts the maximality of $m$ in $\mathcal P$. \bigskip\noindent We have shown that $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ is a direct system of abelian $o$-groups. Obviously, for every $u\in\kappa^{\scaleto{(\mathbf W)}{4pt}}$ the diagram in Fig.~\ref{amalgamationTwiceFIG} commutes and all its arrows are embeddings, since the universes of the respective algebras and the mappings have not been changed. \medskip For $u\in\kappa_I^{\scaleto{(\mathbf W)}{4pt}}$ let \begin{equation}\label{eLSo2b} \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}}=\textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} , \end{equation} and let $$ \mathcal W= \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf W)}{4pt}}, \kappa_J^{\scaleto{(\mathbf W)}{4pt}}, \kappa_I^{\scaleto{(\mathbf W)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf W)}{3pt}}}\rangle} . $$ First we verify that $\mathcal W$ is a bunch of layer groups. \begin{itemize} \item We have already seen that $\langle{\kappa^{\scaleto{(\mathbf W)}{4pt}}},\leq_{\kappa^{\scaleto{(\mathbf W)}{3pt}}}\rangle$ is a totally ordered set.
\item Since by \ref{kjHkjgjghgfghffFfjGJ34} it holds true that $\min{\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\min{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}=\min{\kappa^{\scaleto{(\mathbf Z)}{4pt}}}$, and since by the construction ${\kappa^{\scaleto{(\mathbf W)}{4pt}}}={\kappa^{\scaleto{(\mathbf Y)}{4pt}}}\cup{\kappa^{\scaleto{(\mathbf Z)}{4pt}}}$, it follows that ${\kappa^{\scaleto{(\mathbf W)}{4pt}}}$ has a least element, too, and $\min{\kappa^{\scaleto{(\mathbf W)}{4pt}}}=\min{\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\min{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}=\min{\kappa^{\scaleto{(\mathbf Z)}{4pt}}}$. \item We check that (\ref{PaRTitiOn}) defines a partition of ${\kappa^{\scaleto{(\mathbf W)}{4pt}}}$. By (\ref{PaRTitiOn}) and (\ref{KaMuEtLa}), $ \kappa_o^{\scaleto{(\mathbf W)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf W)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf W)}{4pt}}= (\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf Z)}{4pt}}) \cup (\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}) \cup (\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}) = (\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}\cup \kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup \kappa_I^{\scaleto{(\mathbf Y)}{4pt}} )\cup (\kappa_o^{\scaleto{(\mathbf Z)}{4pt}}\cup \kappa_J^{\scaleto{(\mathbf Z)}{4pt}}\cup \kappa_I^{\scaleto{(\mathbf Z)}{4pt}} ) = \kappa^{\scaleto{(\mathbf Y)}{4pt}}\,\cup\,\kappa^{\scaleto{(\mathbf Z)}{4pt}} = {\kappa^{\scaleto{(\mathbf W)}{4pt}}} $. These three are pairwise disjoint, too. Assume, on the contrary, that $u$ is in, say, $\kappa_I^{\scaleto{(\mathbf W)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf W)}{4pt}}$. 
Then by distributivity, $ u \overset{(\ref{PaRTitiOn})}{\in} (\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_I^{\scaleto{(\mathbf Z)}{4pt}})\cap(\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}) = (\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Y)}{4pt}})\cup (\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Z)}{4pt}})\cup (\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Y)}{4pt}})\cup (\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}) = \emptyset\cup (\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Z)}{4pt}})\cup (\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Y)}{4pt}})\cup \emptyset \subseteq {\kappa^{\scaleto{(\mathbf Y)}{4pt}}}\cap{\kappa^{\scaleto{(\mathbf Z)}{4pt}}} = {\kappa^{\scaleto{(\mathbf X)}{4pt}}}=\kappa_I^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$. 
If $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf X)}{4pt}}$ then $u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf Y)}{4pt}}$ and $u\in\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cup\kappa_o^{\scaleto{(\mathbf Z)}{4pt}}$ holds by \ref{kjHkjgjghgfghffFfjGJ34}, and hence $u\notin\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$ and $u\notin\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}$ follows, respectively, contradicting $u\in(\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Z)}{4pt}})\cup(\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Y)}{4pt}})$, whereas if $u\in\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$ then $u\in\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$ and $u\in\kappa_J^{\scaleto{(\mathbf Z)}{4pt}}$ holds by \ref{kjHkjgjghgfghffFfjGJ34}, and hence $u\notin\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$ and $u\notin\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}$ follows, contradicting $u\in(\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Z)}{4pt}})\cup(\kappa_I^{\scaleto{(\mathbf Z)}{4pt}}\cap\kappa_J^{\scaleto{(\mathbf Y)}{4pt}})$, too. \item For $u\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} given by (\ref{eLSo2}), and for $u\in\kappa_I^{\scaleto{(\mathbf W)}{4pt}}$, \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}} given by (\ref{eLSo2b}) are abelian $o$-groups and obviously $\textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}}\leq\textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}$ holds for $u\in\kappa_I^{\scaleto{(\mathbf W)}{4pt}}$. \item We have already seen that for $u,v\in{\kappa^{\scaleto{(\mathbf W)}{4pt}}}$, $u<v$, $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ is a homomorphism mapping \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} to \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}}.
\item We have already seen that $ \langle \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}} \rangle_{\kappa^{\scaleto{(\mathbf W)}{4pt}}} $ is a direct system of abelian $o$-groups. \item As for \ref{(G3)}, it is obvious that for $v\in\kappa_I^{\scaleto{(\mathbf W)}{4pt}}$, $\varsigma_{u\to v}^{\scaleto{(\mathbf W)}{4pt}}$ maps \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} into \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_v$}}, since it maps into \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}}, and \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_v$}} and \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_v$}} coincide. \end{itemize} Finally, we prove that the involutive FL$_e$-algebra $\mathbf W$ of $\mathcal W$ is an amalgam of $\mathtt V$ in $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ (or in $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$). \begin{figure}[ht] \begin{diagram} & & \mathbf X & & \\ & \ldEmbed^{\iota_1} & & \rdEmbed^{\iota_2} & \\ \mathbf Y & \rEmbed_{\iota_3} & \mathbf W & \lEmbed_{\iota_4} & \mathbf Z \\ \end{diagram} \caption{Amalgam of $\mathtt V$} \label{DefAmalg} \end{figure} That $\mathbf Y$ is embeddable into $\mathbf W$ is verified, using Corollary~\ref{HogyanLatszikCOR}, as follows. First, by (\ref{PaRTitiOn}), the inclusion map is an order embedding from ${\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$ into ${\kappa^{\scaleto{(\mathbf W)}{4pt}}}$ which preserves the least element and the partition. Second, as shown above, for every $u\in{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$, there exists an embedding $\iota_{3,u}$ of \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} into \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} such that for every $u,v\in{\kappa^{\scaleto{(\mathbf Y)}{4pt}}}$, $u<v$ the top left square in Fig.~\ref{amalgamationTwiceFIG2} is commutative.
Third, if $u\in\kappa_I^{\scaleto{(\mathbf Y)}{4pt}}$ then \textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} is mapped into \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}}, since $\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}\leq\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}$ and \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}} is mapped into \textbf{\textit{G$^{\scaleto{(\mathbf W)}{4pt}}_u$}} which is equal to \textbf{\textit{H$^{\scaleto{(\mathbf W)}{4pt}}_u$}} by (\ref{eLSo2b}). The last item in Corollary~\ref{HogyanLatszikCOR} does not apply since $\mathbf Y\in\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}\cup\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ implies $\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}=\emptyset$, so we are done. Analogously we can verify that $\mathbf Z$ is embeddable into $\mathbf W$, too. Let $x\in X$. Since by (\ref{EZazX}) and (\ref{DEFcsopi}), the universe of any algebra in $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}\cup\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ is the disjoint union of the universes of its layer groups and layer subgroups, there exists a unique $u\in{\kappa^{\scaleto{(\mathbf X)}{4pt}}}$ such that $x$ is an element of $\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}$ or there exists a unique $u\in\kappa_I^{\scaleto{(\mathbf X)}{4pt}}$ such that $x$ is an element of $\accentset{\bullet}{\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}}$. In the former case $\iota_3(\iota_1(x)) =\iota_{3,u}(\iota_{1,u}(x))\overset{Fig.\ref{amalgamationTwiceFIG}}{=}\iota_{4,u}(\iota_{2,u}(x))= \iota_4(\iota_2(x))$ holds and we are done. In the latter case, referring to the row before (\ref{DEFcsopi}), there exists an element $y$ of $\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}$ such that $x=\gX{y}{\negaM{\teX}{u}}$, and hence $\iota_3(\iota_1(x))=\iota_3(\iota_1(\g{y}{\negaM{\teX}{u}}))= \gZ{\iota_3(\iota_1(y))}{\negaM{\teZ}{\iota_3(\iota_1(u))}}$ follows.
Since $y$ and $u$ are elements in $\textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}}$, just like in the previous case, the latter is equal to $ \gZ{\iota_4(\iota_2(y))}{\negaM{\teZ}{\iota_4(\iota_2(u))}} =\iota_4(\iota_2(\g{y}{\negaM{\teX}{u}})) =\iota_4(\iota_2(x))$, and we are done, too. \bigskip (3) It is known that the strong amalgamation property fails in $\mathfrak A^\mathfrak c$ \cite{Cherri}. Consider a V-formation $\mathtt V=\langle \mathbf A, \mathbf B_1, \mathbf B_2, \iota_1, \iota_2 \rangle$ in $\mathfrak A^\mathfrak c$ for which no strong amalgam exists in $\mathfrak A^\mathfrak c$. $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$: Assume that $\mathtt V$ (where $\mathbf A, \mathbf B_1, \mathbf B_2$ are viewed as idempotent-symmetric odd involutive FL$_e$-chains, and $\iota_1, \iota_2$ are viewed as embeddings in $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$) has a strong amalgam $\langle \mathbf C, \iota_3, \iota_4 \rangle$ in $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$. Then, by \ref{E2LesZEz} in Corollary~\ref{HogyanLatszikCOR}, $\iota_3$ and $\iota_4$ induce a strong amalgam $\langle \mathbf D, \iota_3|_D, \iota_4 |_D \rangle$ of $\mathbf B_1$ and $\mathbf B_2$ with common subgroup $\mathbf A$ in $\mathfrak A^\mathfrak c$, where $\mathbf D$ is the layer group of $\mathbf C$ corresponding to the unit element of $\mathbf C$. This is a contradiction. The $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ case can be treated similarly.
Let $\leq\,=\{\langle 0,0 \rangle,\langle 0,1 \rangle,\langle 1,1 \rangle\}$ and consider the involutive FL$_e$-chains $X_\mathbf A$, $X_{\mathbf B_1}$, $X_{\mathbf B_2}$, given by the group representations $ \langle \{\mathbbm 1,\textbf{\textit{A}}\}, \emptyset, \varsigma_{0\to 1} \rangle_{\langle \emptyset,\emptyset,\{0,1\}, \leq\rangle} $, $ \langle \{\mathbbm 1,\textbf{\textit{B$_1$}}\}, \emptyset, \varsigma_{0\to 1} \rangle_{\langle \emptyset,\emptyset,\{0,1\}, \leq\rangle} $, and $ \langle \{\mathbbm 1,\textbf{\textit{B$_2$}}\}, \emptyset, \varsigma_{0\to 1} \rangle_{\langle \emptyset,\emptyset,\{0,1\}, \leq\rangle} $, respectively. These are even since the unit element $0$ is in the third partition element, cf.\ the last sentence before item~C of Theorem~\ref{BUNCHalg_X}. These are idempotent-symmetric because the second partition element is empty. By Corollary~\ref{HogyanLatszikCOR}, using the identity map between the trivial groups, $\iota_1$ and $\iota_2$ can be extended to embeddings of the V-formation $\mathtt V_1$ in Fig.~\ref{KiterjedD}. \begin{figure}[ht] \begin{diagram} & & X_\mathbf A & & \\ & \ldEmbed^{\langle id,\iota_1\rangle} & & \rdEmbed^{\langle id,\iota_2\rangle} & \\ X _\mathbf {B_1} & & & & X _\mathbf {B_2} \\ \end{diagram} \caption{} \label{KiterjedD} \end{figure} Again, if we assume the existence of a strong amalgam $\langle X_\mathbf C, \iota_3, \iota_4 \rangle$ of $\mathtt V_1$ in $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$ then by Corollary~\ref{HogyanLatszikCOR}, the $1$-level group of $X_\mathbf C$ along with the induced embeddings of $\iota_3$ and $\iota_4$ (on the $1$-level) would yield a strong amalgam of $\mathtt V$ in $\mathfrak A^\mathfrak c$, a contradiction.
\end{proof} As pointed out in \cite{MaMe}, totally ordered Sugihara monoids are either odd or even\footnote{There, the term \lq disconnected\rq\, is used for \lq even\rq, and the term \lq connected\rq\, is used for \lq odd\rq.}: Indeed, $\nega{t}=f$ and since $f$ is idempotent, $t=\nega{f}=\res{{\mathbin{*\mkern-9mu \circ}}}{f}{f}\geq f$ holds on the one hand. Now, $f<x<t$ would imply $f<\nega{x}<t$, and hence $\g{x}{\nega{x}}\geq\g{\min(x,\nega{x})}{\min(x,\nega{x})}=\min(x,\nega{x})>f$ contradicts residuation. Moreover, in Sugihara monoids every element is idempotent, hence obviously, the residual complement of a positive idempotent element is idempotent, too, thus ensuring that totally ordered Sugihara monoids belong to either $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$ or $\mathfrak I^{\mathfrak c}_{\mathfrak 1,\mathfrak{symm}}$. Denote by $\mathbbm 1$ the trivial (one-element) group. Since the group representation of totally ordered even or odd Sugihara monoids is of the form $\langle \mathbbm 1_u, \mathbbm 1_u, \varsigma^{u\to v} \rangle_{\langle\emptyset,\emptyset,\kappa,\leq_\kappa\rangle}$ or $\langle \mathbbm 1_u, \mathbbm 1_u, \varsigma^{u\to v} \rangle_{\langle\{t\},\emptyset,\kappa\setminus\{t\},\leq_\kappa\rangle},$ respectively \cite{JenRepr2020}, in every layer the groups in the induced V-formation are trivial, and the construction of Theorem~\ref{EzaTuTtI} produces the trivial group as their amalgamation in every layer. Hence the amalgamation, as done in this paper, of a V-formation of totally ordered even or odd Sugihara monoids is also a totally ordered even or odd Sugihara monoid, respectively. Summing up, a by-product of the construction in Theorem~\ref{EzaTuTtI} is Corollary~\ref{kjGJd24} (cf.~\cite{MaMe}, where the same conclusion is reached by quantifier elimination in first-order theories). \begin{corollary}\label{kjGJd24} The classes of odd and even totally ordered Sugihara monoids have the amalgamation property. 
\end{corollary} Recall that $\mathfrak J^\mathfrak{sl}_\mathfrak 0$ and $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ are the classes of the semilinear members of $\mathfrak J_\mathfrak 0$ and $\mathfrak I_{\mathfrak 0,\mathfrak{symm}}$, respectively. These are varieties and are generated by $\mathfrak J^\mathfrak c_\mathfrak 0$ and $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$, respectively. Indeed, $\mathfrak J^\mathfrak{sl}_\mathfrak 0$ is clearly a variety and the class of its chains is $\mathfrak J^\mathfrak c_\mathfrak 0$. By Theorem~\ref{BUNCHalg_X}, $\{\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x}:x\in X\}$ is the set of positive idempotent elements of the involutive FL$_e$-algebra $\mathbf X$, hence idempotent-symmetry of involutive FL$_e$-algebras can be captured by imposing the identity $\g{\nega{(\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x})}}{\nega{(\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x})}}=\nega{(\res{{\mathbin{*\mkern-9mu \circ}}}{x}{x})}$ or its simpler equivalent $\g{\g{x}{\nega{x}}}{\g{x}{\nega{x}}}=\g{x}{\nega{x}}$. Therefore, also $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ is a variety, and the class of its chains is $\mathfrak I^{\mathfrak c}_{\mathfrak 0,\mathfrak{symm}}$. Finally, a variety of residuated lattices is semilinear if and only if it is generated by its chains \cite[Proposition]{introFUZZY}. \\ As shown in \cite[Corollary 44 and Theorem 49]{MMT}, respectively, any variety of commutative pointed residuated lattices has the congruence extension property, and a variety of semilinear residuated lattices satisfying the congruence extension property has the amalgamation property if the class of its chains has the amalgamation property. Therefore, $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ has the amalgamation property. A strengthening of the amalgamation property is the transferable injections property.
A class of algebras $\mathfrak U$ has the transferable injections property if whenever $\mathbf A, \mathbf B, \mathbf C \in \mathfrak U$, $\iota_1$ is an embedding of $\mathbf A$ into $\mathbf B$, and $\iota_2$ is a homomorphism from $\mathbf A$ into $\mathbf C$, there exist an algebra $\mathbf D\in \mathfrak U$, a homomorphism $\iota_3$ from $\mathbf B$ into $\mathbf D$, and an embedding $\iota_4$ from $\mathbf C$ into $\mathbf D$ such that $\iota_3\circ \iota_1 = \iota_4\circ \iota_2$. As shown in \cite[Corollary 44]{MMT}, any variety of commutative pointed residuated lattices has the amalgamation property iff it has the transferable injections property. \begin{corollary}\label{CORtarnsfINJ} The variety $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ has the transferable injections property. \end{corollary} \begin{comment} \section{Densification} A variety $\mathtt V$ is called semilinear if every subdirectly irreducible algebra in $\mathtt V$ is a chain. Equivalently, $\mathtt V$ is semilinear if and only if its semantic consequence relation is the same as the semantic consequence relation of its chains \cite{HorcikAlgSem}. Standard algebras for a mathematical fuzzy logic L are the ones from the corresponding variety which universe is the real unit interval $[0,1]$. A mathematical fuzzy logic L enjoys strong standard completeness if the following conditions are equivalent for each formula $\varphi$ and theory $T$: \begin{enumerate} \item $T\vDash_L \varphi$, \item for each standard L-algebra $\mathbf A$ and each $\mathbf A$-model $e$ of $T$, $e$ is an $\mathbf A$-model of $\varphi$. \end{enumerate} There are two weaker alternatives for defining standard completeness of L. The same definition as above but confining to {\em finite} theories yields the definition of finite strong standard completeness, whereas by setting $T=\emptyset$ we obtain the definition of (weak) standard completeness. 
A possible way of proving strong standard completeness, as introduced in \cite{JMstcompl}, is to prove that any countable L-chain is embeddable into a standard L-chain. Knowing the facts that $\rm{IUL}^{fp}$-chains constitute an algebraic semantics of $\mathbf{IUL}^{fp}$, and that $\rm{IUL}^{fp}$-chains are exactly non-trivial bounded odd involutive FL$_e$-chains, we shall prove in this paper that any non-trivial countable, bounded odd involutive FL$_e$-chain embeds into an odd involutive FL$_e$-chain over the real unit interval $[0,1]$ such that its top element is mapped into $1$ and its bottom element is mapped into $0$. To this end, the main tool will be the representation theorem of odd involutive FL$_e$-chains in \cite{JenRepr2020}. The finite strong standard completeness of $\mathbf{IUL}^{fp}$ has been proved in \cite{JS_FSSC}, and that proof relies on the representation theorem for the subclass of odd involutive FL$_e$-chains, where the number of idempotent elements of the algebra is finite \cite{JS_Hahn,Jenei_Hahn_err}. \subsection{Densification in $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0}$ and $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$} The method of proving standard completeness via densification and a subsequent Dedekind-MacNeille completion has been introduced in \cite{JMstcompl}. \bigskip\bigskip A chain with at least two elements is called dense if for any two distinct elements of it there is a third one strictly in between them. If for some $a<b$ there is no $c$ such that $a<c<b$ then we call the pair $a,b$ a gap. A chain $\mathbf Y$ fills the gap $x_1,x_2$ of the chain $\mathbf X$ if there exists an embedding $\iota: X\to Y$ and $y\in Y$ such that $\iota(x_1)<y<\iota(x_2)$. A nontrivial variety $\mathtt V$ is said to be {\em densifiable} if every gap of every chain in $\mathtt V$ can be filled by another chain in $\mathtt V$. 
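\medskip\noindent As a bare illustration of these notions (ours, ignoring any algebraic structure on the chains): every pair $n,n+1$ is a gap of the chain $\mathbb{Z}$, and the chain $\mathbb{Z}$ fills its own gap $0,1$ via the order embedding $\iota:\mathbb{Z}\to\mathbb{Z}$, $\iota(n)=2n$, since $\iota(0)=0<1<2=\iota(1)$.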
\begin{theorem}\label{EzaTuTtIDENSIFIABLE} The varieties $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0}$ and $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ are densifiable. \end{theorem} \begin{proof} Adapt all notations of Theorem~\ref{BUNCHalg_X} with appropriate superscripts $^{\scaleto{(\mathbf X)}{4pt}}$ and $^{\scaleto{(\mathbf Y)}{4pt}}$, corresponding to the group representations ${\mathcal X}$ and ${\mathcal Y}$, and the algebras $\mathbf X$ and $\mathbf Y$, respectively. Let $\mathbf X\in\mathfrak I^{\mathfrak c}_{\mathfrak 0}$ and let ${\mathcal X}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf X)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf X)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf X)}{4pt}}, \kappa_J^{\scaleto{(\mathbf X)}{4pt}}, \kappa_I^{\scaleto{(\mathbf X)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}}\rangle}$ be its group representation. Assume $x_1,x_2\in X$, $$x_1<^{\scaleto{(\mathbf X)}{4pt}}x_2,$$ where $x_1$ is in the $u$-layer $X_u^{\scaleto{(\mathbf X)}{4pt}}$ and $x_2$ is in the $v$-layer $X_v^{\scaleto{(\mathbf X)}{4pt}}$. We shall insert a new element $a$ into the skeleton of $\mathbf X$ \lq just below $v$\rq: let $\langle \kappa^{\scaleto{(\mathbf Y)}{3pt}}, \leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}} \rangle$ be the totally ordered set, which is the extension of $\langle \kappa^{\scaleto{(\mathbf X)}{3pt}}, \leq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} \rangle$ by a new element $a$ such that $x>_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}} a$ if and only if $x\geq_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} v$. Let $\kappa^{\scaleto{(\mathbf Y)}{3pt}}$ inherit the partition of $\kappa^{\scaleto{(\mathbf X)}{3pt}}$ and put $a$ into $\kappa_I^{\scaleto{(\mathbf Y)}{3pt}}$. 
Set \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_a$}} = \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_v$}}, \textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_a$}}=\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_a$}}, $\varsigma_{a\to v}^{\scaleto{(\mathbf Y)}{4pt}}=id_{\textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_{v}$}}}$, $\varsigma_{i\to a}^{\scaleto{(\mathbf Y)}{4pt}}=\varsigma_{i\to v}^{\scaleto{(\mathbf Y)}{4pt}}$ (for $i<_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} v$), $\varsigma_{a\to i}^{\scaleto{(\mathbf Y)}{4pt}}= \varsigma_{v\to i}^{\scaleto{(\mathbf Y)}{4pt}}$ (for $i>_{\kappa^{\scaleto{(\mathbf X)}{3pt}}} v$). A moment's reflection shows that ${\mathcal Y}=\langle \textbf{\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_u$}},\textbf{\textit{H$^{\scaleto{(\mathbf Y)}{4pt}}_u$}}, \varsigma_{u\to v}^{\scaleto{(\mathbf Y)}{4pt}} \rangle_{\langle \kappa_o^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_J^{\scaleto{(\mathbf Y)}{4pt}}, \kappa_I^{\scaleto{(\mathbf Y)}{4pt}},\leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}}\rangle}$ is a bunch of layer groups (cf.\ Definition~\ref{DEFbunch}); denote its algebra by $\mathbf Y$. By Corollary~\ref{HogyanLatszikCOR}, $\mathbf X$ is embeddable into $\mathbf Y$, witnessed by $\iota_\kappa: \kappa^{\scaleto{(\mathbf X)}{3pt}}\to\kappa^{\scaleto{(\mathbf Y)}{3pt}}$, $u\mapsto u$, and the \lq identity\rq\, mappings $\iota_u:\textit{G$^{\scaleto{(\mathbf X)}{4pt}}_u$}\to\textit{G$^{\scaleto{(\mathbf Y)}{4pt}}_{u}$}$ ($u\in\kappa^{\scaleto{(\mathbf X)}{3pt}}$). In the sequel, for every $u\in\kappa^{\scaleto{(\mathbf X)}{3pt}}$ we extend these mappings to be the \lq identity\rq\,mappings $\iota_u : X^{\scaleto{(\mathbf X)}{4pt}}_u\to X^{\scaleto{(\mathbf Y)}{4pt}}_{u}$ of the corresponding layers. Denote the (unique) element in $X^{\scaleto{(\mathbf Y)}{4pt}}_{a}$ which corresponds to the element $\iota_v(x_2)$ in $X^{\scaleto{(\mathbf Y)}{4pt}}_{v}$ by $r$. 
That is, let $r\in X^{\scaleto{(\mathbf Y)}{4pt}}_{a}$ such that $\rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r)=\iota_v(x_2)$. By the above embedding of $\mathbf X$ into $\mathbf Y$, $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2)$ follows. We shall conclude by showing that $$ \iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} r <^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2). $$ Since $\rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r)=\iota_v(x_2) \overset{(\ref{P5})}{=} \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_v(x_2))$, $r \leq^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2)$ follows by (\ref{RendeZesINNOVATIVAN}). Since $r$ is in $X^{\scaleto{(\mathbf Y)}{4pt}}_{a}$, $\iota_v(x_2)$ is in $X^{\scaleto{(\mathbf Y)}{4pt}}_{v}$ and the layers are pairwise disjoint, $r <^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2)$ follows, as stated. To prove $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} r$ first assume that $ u >_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}} v $. Then $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}}_u \rho_{u}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_v(x_2))$ follows from $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2)$ via (\ref{RendeZesINNOVATIVAN}). Since $ \rho_{u}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_v(x_2)) = \rho_{u}^{\scaleto{(\mathbf Y)}{4pt}}(\rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r)) \overset{(\ref{RhOLESzEZwww})}{=} u\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y}(v\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} r) = (u\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y}v)\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} r \overset{v<u}{=} u\mathbin{\mathbin{*\mkern-9mu \circ}}_Y r \overset{(\ref{RhOLESzEZwww})}{=} \rho_{u}^{\scaleto{(\mathbf Y)}{4pt}}(r) $, also $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}}_u \rho_{u}^{\scaleto{(\mathbf Y)}{4pt}}(r) $ holds which by (\ref{RendeZesINNOVATIVAN}) implies $ \iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} r $, as stated. Second, assume that $ u \leq_{\kappa^{\scaleto{(\mathbf Y)}{3pt}}} v $. 
Then $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) \leq^{\scaleto{(\mathbf Y)}{4pt}}_v \iota_v(x_2)$ follows from $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} \iota_v(x_2)$ via (\ref{RendeZesINNOVATIVAN}). Since $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) \overset{(\ref{RhOLESzEZwww})}{=} v \mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} \iota_u(x_1) \overset{a<v}{=} (v \mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} a)\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} \iota_u(x_1) = v \mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} (a\mathbin{{\mathbin{*\mkern-9mu \circ}}_Y} \iota_u(x_1)) \overset{(\ref{RhOLESzEZwww})}{=} \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\rho_{a}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1))) $, also $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\rho_{a}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)))\leq^{\scaleto{(\mathbf Y)}{4pt}}_v \iota_v(x_2)= \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r) $ holds. If $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) = \iota_v(x_2) (=\rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r)) $ then $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} r$ follows by (\ref{RendeZesINNOVATIVAN}), as stated, whereas if $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) <^{\scaleto{(\mathbf Y)}{4pt}}_v \iota_v(x_2)$ holds then since $\rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}$ preserves the order, it follows that $ \rho_{a}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) \leq^{\scaleto{(\mathbf Y)}{4pt}}_a r$ (indeed, $\rho_{a}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) >^{\scaleto{(\mathbf Y)}{4pt}}_a r$ would imply $ \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1)) = \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(\rho_{a}^{\scaleto{(\mathbf Y)}{4pt}}(\iota_u(x_1))) \geq^{\scaleto{(\mathbf Y)}{4pt}}_v \rho_{v}^{\scaleto{(\mathbf Y)}{4pt}}(r) = \iota_v(x_2) $, a contradiction), and hence $\iota_u(x_1) <^{\scaleto{(\mathbf Y)}{4pt}} r$ follows by (\ref{RendeZesINNOVATIVAN}), as stated. 
\medskip To prove that $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ admits densification, notice that since in the previous proof the new element which was added to the skeleton of $\mathbf X$ went to the $\kappa_I$ partition of $\mathbf Y$, the construction of $\mathbf Y$ from $\mathbf X$ did not introduce any new element into the $\kappa_J$ partition of $\mathbf X$. Therefore, if $\kappa_J^{\scaleto{(\mathbf X)}{4pt}}$ is empty, then $\kappa_J^{\scaleto{(\mathbf Y)}{4pt}}$ is empty, too. Therefore, the resulting algebra $\mathbf Y$ is in $\mathfrak I^{\mathfrak{sl}}_{\mathfrak 0,\mathfrak{symm}}$ if so is $\mathbf X$. \end{proof} For a variety, densifiability is sufficient for densification, as shown in \begin{proposition}\cite[Proposition 2.2]{BaldiTerui} Let $L$ be a language of algebras and $\mathtt V$ a densifiable variety of type $L$. Then every chain $\mathbf X$ of cardinality $\delta>1$ is embeddable into a dense chain of cardinality $\delta+\aleph_0+|L|$. \end{proposition} Therefore, a corollary of Theorem~\ref{EzaTuTtIDENSIFIABLE} is \begin{corollary}\label{EzaTuTtIDENSIFIABLEchains} If $\mathbf X\in\mathfrak I^{\mathfrak{c}}_{\mathfrak 0}$ or $\mathbf X\in\mathfrak I^{\mathfrak{c}}_{\mathfrak 0,\mathfrak{symm}}$ is countable (finite or countably infinite) then it can be embedded into a countable dense chain $\mathbf Y$ in $\mathfrak I^{\mathfrak{c}}_{\mathfrak 0}$ or $\mathfrak I^{\mathfrak{c}}_{\mathfrak 0,\mathfrak{symm}}$, respectively. \end{corollary} As for the MacNeille completion step, adapt the notation of Corollary~\ref{EzaTuTtIDENSIFIABLEchains}. 
We may safely assume that $\mathbf Y$ is bounded: if not, then by involutivity both the top and the bottom elements are missing, so we can first add a new top element acting as a zero (annihilator) for the monoidal operation, and then a new bottom element acting as a zero for the extended monoidal operation (thus annihilating the new top element, too), thereby obtaining a bounded odd involutive FL$_e$-chain in which $\mathbf Y$ embeds. Bounded countable dense chains have been shown to be isomorphic to $\mathbb Q\cap[0,1]$ by Cantor, hence $\mathbf X$ can be embedded into an odd involutive FL$_e$-chain $\mathbf Z$ over $\mathbb Q\cap[0,1]$. By denoting the monoidal operation of $\mathbf Z$ by ${\mathbin{*\mkern-9mu \circ}}$, $$ \gpont{a}{b}=\sup_{x\in\mathbb Q\cap[0,1], x<a}\sup_{y\in\mathbb Q\cap[0,1], y<b}\g{x}{y} \ \ \ \ \mbox{(for $a,b\in[0,1]$)} $$ results in a commutative, associative, residuated (equivalently left-continuous) extension $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}$ of ${\mathbin{*\mkern-9mu \circ}}$ over $[0,1]$ (exactly like in \cite[Theorem 3.2]{JMstcompl}). Obviously, being odd is inherited by $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}$. Being involutive is inherited, too, since the residual complement operation being involutive is known to be equivalent to being strictly increasing, and the strictly increasing nature of the residual complement operation is clearly inherited using that $\mathbb Q\cap[0,1]$ is dense in $\mathbb R\cap[0,1]$. Hence we have obtained \begin{theorem} The variety $\mathfrak I^{\mathfrak{c}}_{\mathfrak 0}$ is strongly standard complete. \end{theorem} \begin{corollary} Involutive uninorm logic with fixed point is strongly standard complete. \end{corollary} 
We remark that the property $\mathfrak{symm}$ is not inherited by $\mathbin{\circ\mkern-7mu \cdot\mkern2mu}$, in general: as a counterexample, the $o$-group $\mathbb Z\lex\mathbb R$ is in $\mathfrak I^{\mathfrak{c}}_{\mathfrak 0,\mathfrak{symm}}$. It is not difficult to see that its MacNeille completion is $\PLPII{\mathbb Z}{\mathbb R}$ (see \cite[Definition 4.2]{JS_Hahn} for its definition or \cite[Example 3.1]{JS_Hahn} for its analytic description), and it is not in $\mathfrak I^{\mathfrak{c}}_{\mathfrak 0,\mathfrak{symm}}$. \end{comment}
\section{Fast Degree Reduction with RandomColorTrial}\label{sec:oneshot} This section is devoted to the derivation of the key property of {\RCT} that our algorithms rely on -- fast degree reduction. Recall that in {\RCT}, every node $v$ picks a color $c_v\in \Psi(v)$ independently, uniformly at random, and calls {\textsc{TryColor}}($v$, $c_v$) (Alg.~\ref{alg:trycolor}). We use $N(v)$ to denote the set of uncolored neighbors of node $v$ in $G$. We begin by observing that, generally, the algorithm provides a constant coloring rate, that is, in a large enough subset, a constant fraction of nodes are colored after a single application. \begin{lemma}\label{lem:oneshotslowdecrease} Let $S$ be a subset of at least $c\log n$ nodes, each node $v\in S$ having a palette of size $|\Psi(v)|\ge c\log n$, for a large enough constant $c>0$. After a single application of {\RCT}, at least $|S|/32$ nodes in $S$ are colored, w.h.p. \end{lemma} \begin{proof} It is shown in \cite[Lemma 5.4]{BEPSv3} that under the conditions of our lemma, if $|N(v)|\ge (c/2)\log n$ holds for all nodes in $S$, then at least $|S|/16$ nodes in $S$ are colored in {\RCT}, w.p. $1-n^{-c/1024}-n^{-c/32+1}$. To prove our lemma, we partition $S$ into the two subsets $S_1=\{v\in S : |N(v)|\ge |\Psi(v)|/2\}$ and $S_2=S\setminus S_1$. Let $S'$ be the larger of the two; note that $|S'|\ge (c/2)\log n$. If $S'=S_1$, then by the reasoning above, for a large enough $c$, it holds w.h.p. that $|S'|/16\ge |S|/32$ nodes in $S$ are colored. Otherwise, $|\Psi(v)|> 2|N(v)|$ holds, for every node $v\in S'$. It is easy to see that in this case, every node in $S'$ is successfully colored w.p. at least 1/2, even under an adversarial assignment of colors to its neighbors; hence Chernoff bound (\ref{eq:chernoffless}) implies that w.p. $1-n^{-c/16}$, at least $|S'|/4\ge |S|/8$ nodes in $S'$ are colored. This completes the proof. 
\end{proof} \iffalse \begin{proof} As assumed, we have $|\Psi(v)|>c\log n$, for each node $v\in S$, and by \Cref{L:BEPS}, w.p.\ $1-\exp(-(c/64)\log n)$, $v$ has at least $|\Psi(v)|/8$ colors in its palette that are not chosen by neighbors. Let $\mathcal{E}_v$ be the latter event; if $c$ is large enough, $\mathcal{E}_S=\cap_{v\in S} \mathcal{E}_v$ also holds w.h.p. Conditioned on $\mathcal{E}_S$, each node in $S$ is successfully colored w.p.\ at least $1/8$, \emph{irrespective of the outcome for its neighbors}, and the number of nodes that are colored is at least $|S|/8$ in expectation. Applying Chernoff bound (\ref{eq:chernoffless}), we have that w.p.\ $1-\exp(-\Omega(|S|))$, at least $|S|/9$ nodes in $S$ are colored, which implies the claim, since $|S|\ge c\log n$, for a large enough $c>0$. \end{proof} \fi Next, we show that if nodes have large enough slack, then {\RCT} reduces the degrees of nodes very rapidly. We state the following lemma in a general form, as it is also used in \Cref{sec:densecoloring} for a more involved color trial process. \begin{lemma}[fast degree reduction]\label{lem:basiconeshot} Let $H$ be a subgraph of $G$. For a vertex $v\in H$, let $d_v$ be its degree and $s_v\ge 2d_v$ be a parameter such that $s_v\ge c\log n$, for a large enough constant $c>0$. Assume the nodes in $H$ execute a randomized coloring algorithm where in every round $i$, each uncolored node picks a color different from its neighbors' choices and makes it its permanent color w.p.\ at least $1-|N_i(v)|/s_v$, irrespective of the color choices of other nodes, where $N_i(v)$ is the set of uncolored neighbors of $v$ in $H$ at the beginning of round $i$. After $O(\log\log s^*)$ iterations, every node $v$ in $H$ has at most $O((s_v/s^*)\log n)$ uncolored neighbors in $H$, where $s^*=\min_w s_w$. \end{lemma} \begin{proof} Let $n_v^i$ be an upper bound (specified below) on the size of $N_i(v)$. Initially, we have $n_v^1=d_v$. 
As we assumed, node $v$ is colored in iteration $i$ w.p.\ at least $1-n_v^i/s_v$, irrespective of the outcome for other nodes. For each node $u$, let $Z_u=1$ if $u$ is not colored in iteration $i$ and $Z_u=0$ otherwise. We have $Pr[Z_u=1]\le n_u^i/s_u$. Consider a node $v$, and let $Z=\sum_{u\in N_i(v)}Z_u$. We have $ \mathbb{E}[Z]\le \sum_{u\in N_i(v)} n_u^i/s_u\le n_v^i M_i, $ where $M_i=\max_u (n_u^i/s_u)$. Since the probability bound on $Z_u$ above holds irrespective of other $Z_{u'}$, we apply Chernoff bound (\ref{eq:chernoffmore}) to get \[ Pr\left[\sum_{u\in N_i(v)}Z_u>n_v^iM_i + c'\log n\right]\le n^{-c'/4}\ , \] for any constant $c'>4$. Hence it holds w.h.p.\ that $\sum_{N_i(v)}Z_u\le n_v^iM_i + c'\log n$, and we can set $n_v^{i+1}= n_v^iM_i + c'\log n$. For every node $w$, since $n_w^i/s_w\le M_i\le 1$, we have \[ \frac{n_w^{i+1}}{s_w} = \frac{n_w^iM_i}{s_w} + \frac{c'\log n}{s_w}\le M_i^{2} + \frac{c'\log n}{s^*}\le M_i^{3/2} + \frac{c'\log n}{s^*}\ . \] Taking the maximum over all nodes $w$ yields the recursion $M_{i+1}\le M_i^{3/2} + c'\log n/s^*$. Let $i_0=O(1)$ be the first index where the recursion holds and $r=c'\log n/s^*$. Note that $M_{i_0}\le 1/4$: we have $M_1\le 1/2$, since $s_v\ge 2d_v$, and $O(1)$ applications of the recursion bring the bound below $1/4$. Since $M_i,r< 1$, we have: \[ M_{i+1}\le M_i^{3/2} + r\le M_{i-1}^{(3/2)^2} + r^{3/2} + r\le M_{i_0}^{(3/2)^{i-O(1)}} + \sum_{j=0}^{i} r^{(3/2)^j}=M_{i_0}^{(3/2)^{i-O(1)}} + \frac{O(\log n)}{s^*}\ . \] Thus, after $i=O(\log\log s^*)$ iterations we have $M_{i}=\frac{O(\log n)}{s^*}$, i.e., $n_v^i= \frac{O(s_v\log n)}{s^*}$, w.h.p. \end{proof} \section{Step 1: Initial Slack Generation}\label{sec:slack} Slack generation is based on the following idea. Take a node $v$, and let $u,w$ be two of its neighbors. Let $u,w$ each choose a random color from its palette. If either $u$ or $w$ has a palette that significantly differs from $\Psi(v)$, then it, say $u$, is likely to choose a color not in $\Psi(v)$, and the slack of $v$ gets increased, if $u$ retains its color. 
Otherwise, both $u$ and $w$ have many common colors with $v$, and so there is a fair chance that they select the same color, again increasing the slack of $v$, if they retain their colors. This increase of slack can be attributed to the pair $u,w$. It is possible to also ensure that nodes are likely to retain the color they pick, by combining random coloring with node sampling. This leads us to the algorithm called {\textsc{SlackGeneration}}, which consists of an execution of {\RCT} in a randomly sampled subgraph $G[S]$. Each node independently joins $S$ w.p.\ $p=1/20$. The following key result, which has appeared in various reincarnations in~\cite{EPS15,HSS18,CLP20,AlonAssadi}, formalizes the intuition above, showing that {\textsc{SlackGeneration}} converts sparsity into slack. We note that none of the known variants is suitable for our setting: either they are not adapted for list coloring or the dependence on sparsity is sub-optimal. The \emph{local sparsity} $\zeta_v$ of a vertex $v$ is defined as $\zeta_v=\frac{1}{\Delta}\left({\Delta \choose 2}-m(N(v))\right)$, where for a set $X$ of vertices, $m(X)$ denotes the number of edges in the subgraph $G[X]$. Roughly speaking, $\zeta_v$ is proportional to the number of ``missing'' edges in the neighborhood of $v$; note that $0\le \zeta_v< \Delta/2$. \begin{lemma}\label{lem:sparseImpliesSlack} Assume each vertex $v$ has a palette $\Psi(v)$ of size $\Delta+1$. Let $v$ be a vertex with $\zeta_v\ge c\log n$, for a large enough constant $c>0$. There is a constant $c_1>0$ such that after {\textsc{SlackGeneration}}, $v$ has slack $Z \ge c_1\zeta_v$, w.h.p. \end{lemma} \begin{proof} Let $\zeta=\zeta_v$. We may assume w.l.o.g. that $|N(v)|\ge \Delta-\zeta/2\ge 3\Delta/4$, as otherwise $v$ has slack $\zeta/2$, and we are done. Let $X\subseteq {\binom{N(v)}{2}}$ be the set of pairs $\{u,w\}$ s.t. $\{u,w\}$ is not an edge. 
Under our assumption, $N(v)$ contains ${\Delta-\zeta/2\choose 2}$ pairs of nodes, of which at most ${\Delta\choose 2}-\zeta\Delta$ are edges, hence $|X|\ge {\Delta-\zeta/2\choose 2}-{\Delta\choose 2}+\zeta\Delta>\zeta\Delta/2$. A node $u$ is \emph{activated} if it is sampled in $S$. Note that every node is independently activated w.p.\ $p=1/20$. The rest of the proof is conditioned on the high probability event that for every node $v$, there are at most $(4/3)p\Delta=\Delta/15$ nodes activated in $N(v)$ (which easily follows by an application of Chernoff bound (\ref{eq:chernoffmore})). We use $V(X)=\{u : \exists w, \{u,w\}\in X\}$ to denote the vertices that are in at least one pair in $X$. Let $X_1\subseteq X$ be the subset of pairs $\{u,w\}$, s.t. at least one $z\in \{u,w\}$ satisfies \begin{equation}\label{eq:lowinter} |\Psi(z)\cap\Psi(v)|<(9/10)(\Delta+1)\ , \end{equation} and let $X_2=X\setminus X_1$. The case $|X_1|\ge |X|/2$ is easier to handle. Let $V_1\subseteq V$ be the set of vertices that are present in a pair in $X_1$ and satisfy (\ref{eq:lowinter}). Since $|X_1|\ge |X|/2\ge \zeta\Delta/4$ and each $w\in V_1$ can give rise to at most $\Delta$ pairs, $|V_1|\ge \zeta/4$. By a Chernoff bound, there is a subset $V'_1$ of at least $\zeta p/5$ nodes activated in $V_1$, w.h.p.\ (where we use the assumption that $\zeta/\log n$ is large enough). Let $w\in V'_1$. As assumed, $|\Psi(w)\setminus \Psi(v)|> \Delta/10$, and there are at most $\Delta/15$ activated neighbors of $w$. Thus, $w$ chooses a color $c\notin\Psi(v)$ and retains it, even when conditioned on arbitrary color choices of its activated neighbors, w.p.\ at least $1/10-1/15=1/30$. Thus, we can apply Chernoff bound (\ref{eq:chernoffless}), to obtain that at least $p\zeta/160$ nodes in $V_1$ choose a color that is not in $\Psi(v)$, w.h.p.\ (again, we use the assumption that $\zeta/\log n$ is large enough). This gives the claimed slack to $v$. 
Therefore, we continue with the assumption that $|X_2|\ge |X|/2$, or for simplicity, that $X=X_2$ and $|X|\ge \zeta\Delta/2$. Thus, for every pair $\{u,w\}\in X$, $|\Psi(w)\cap\Psi(v)|\ge (9/10)\Delta$, and $|\Psi(u)\cap\Psi(v)|\ge (9/10)\Delta$, and by Obs.~\ref{obs:deltaintersections}, \begin{equation}\label{eq:slackgen1} |\Psi(u)\cap \Psi(w)|\ge (4/5)\Delta\ . \end{equation} Let $Z$ be the number of colors in $\Psi(v)$ that are picked by at least one pair of activated vertices in $X$ and are permanently selected by all these activated neighbors. For each pair $\{u,w\}\in X$, let $\mathcal{E}_1$ be the event that $u$ and $w$ are activated and pick the same color $c$ which is not picked by any other node in $N(u)\cap N(w)$, and let $\mathcal{E}_2$ be the event that the colors picked by $u$ and $w$ are not picked by any other node in $V(X)$. Let $Y_{u,w}$ be the binary random variable that is the indicator of the event $\mathcal{E}_1\cap \mathcal{E}_2$. Note that $Z\ge Y=\sum_{\{u,w\}\in X} Y_{u,w}$. First, we show that $Pr[Y_{u,w}=1]=\Omega(1/\Delta)$, which implies that $\mathbb{E}[Z]\ge \mathbb{E}[Y]=\Omega(|X|/\Delta)=\Omega(\zeta)$. Note that $Pr[Y_{u,w}=1]=Pr[\mathcal{E}_1\cap \mathcal{E}_2]=Pr[\mathcal{E}_1] \cdot Pr[\mathcal{E}_2\mid \mathcal{E}_1]$. Note that $Pr[\mathcal{E}_2\mid \mathcal{E}_1]$ is the probability that no node in $V(X)\setminus (\{u,w\}\cup N(u)\cup N(w))$ picks the color $c$ picked by $u$ and $w$. Since each node is independently activated w.p.\ $p$, the probability that a node picks $c$ is $1/(\Delta+1)$, and $|V(X)|\le\Delta$, we have \[ Pr[\mathcal{E}_2\mid \mathcal{E}_1]\ge (1-p/(\Delta+1))^{\Delta}=\Omega(1)\ . \] Next, consider $Pr[\mathcal{E}_1]$. By our assumption above, for every pair of vertices $u,w$, at most $3p\Delta<\Delta/5$ vertices are activated in $N(u)\cup N(w)$. 
By (\ref{eq:slackgen1}), $|\Psi(u)\cap\Psi(w)|\ge (4/5)\Delta$; hence, there is a set $S_{u,w}$ of at least $(3/5)\Delta$ colors in $\Psi(u)\cap \Psi(w)$ that are not selected by a neighbor in $N(u)\cap N(w)$. Note that $\mathcal{E}_1$ is implied by both $u$ and $w$ being activated and choosing a color from $S_{u,w}$, hence we have, as claimed, \[ Pr[\mathcal{E}_1]\ge p^2\cdot \frac{|S_{u,w}|}{|\Psi(u)|}\cdot \frac{1}{|\Psi(w)|}=\Omega(1/\Delta)\ . \] It remains to show that $Z$ is concentrated around its mean. Let $T$ be the number of colors in $\Psi(v)$ that are picked by at least one pair $(u,w)\in X$, and $D$ be the number of colors in $\Psi(v)$ that are picked by at least one pair $(u,w)\in X$ but are not retained by at least one of them. Note that $Z=T-D$. Moreover, note that both $T$ and $D$ are functions of the activation r.v.'s and the random color picks of the vertices in $V(X)\cup N(V(X))$, and as such they are (i) $\Theta(1)$-certifiable: for each color picked in $T$, there are 2 nodes in $V(X)$ that ``can certify'' for it (for $D$, an additional one that picked the same color), and (ii) $\Theta(1)$-Lipschitz: changing the color/activation of one vertex can affect $T$ and $D$ by at most 2. We need to bound $\mathbb{E}[T]$ (which implies the same bound for $D$, as $D\le T$). For a given color $c$ and two nodes $u,w\in V(X)$, the probability that $u$ and $w$ both pick $c$ is at most $1/(\Delta+1)^{2}$. By the union bound, the probability that $c$ is picked by a pair is at most $|X|/(\Delta+1)^2$. There are $\Delta+1$ colors in $\Psi(v)$, so the expected number of colors that are picked by at least one pair is $\mathbb{E}[T]\le |X|/(\Delta+1)<\zeta$. Moreover, since $\zeta=\Omega(\log n)$, we have $\mathbb{E}[Z]=\Omega(\zeta)=\Omega(\mathbb{E}[T])$ and $\mathbb{E}[Z]>c_3\sqrt{\mathbb{E}[T]}$, for a large enough constant $c_3$. 
Applying Lemma~\ref{lem:talagrand}, and using these relations, we have \[ Pr\left[|T-\mathbb{E}[T]|\ge \mathbb{E}[Z]/10\right]\le \exp\left(-\Theta(1)\frac{(\mathbb{E}[Z]/10-O(\sqrt{\mathbb{E}[T]}))^2}{\mathbb{E}[T]}\right)\le \exp(-\Omega(\zeta))\ . \] Since $\zeta\ge c_2\log n$, for a large enough constant $c_2$, we have that $|T-\mathbb{E}[T]|<\mathbb{E}[Z]/10$, w.h.p. Similarly, $|D-\mathbb{E}[D]|<\mathbb{E}[Z]/10$, w.h.p. Putting these together, we see that w.h.p., $Z=T-D\ge \mathbb{E}[T]-\mathbb{E}[D]-\mathbb{E}[Z]/5=(4/5)\cdot\mathbb{E}[Z]=\Omega(\zeta)$. This completes the proof. \end{proof} We apply Lemma~\ref{lem:sparseImpliesSlack} to establish slack for nodes of various densities. Sparse nodes obtain slack $\Omega(\Delta)$, while more dense nodes have slack depending on the \emph{external degree} and \emph{antidegree}. We use a connection of antidegree and external degree to local sparsity, originally observed in \cite{HKMN20} for distance-2 coloring. \begin{lemma} Let $\eta,\varepsilon\le 1/3$. For every node $v\in V_{sparse}$, $\zeta_v\ge(\eta^2/4)\Delta$. For every node $v\in C=C_i$ with antidegree $a(v)$ and external degree $e(v)$, it holds that $\zeta_v\ge (1-2\varepsilon)e(v)$ and $\zeta_v\ge (1-3\varepsilon)a(v)/2$. \end{lemma} \begin{proof} Let $v\in V_{sparse}$. Since $v$ is not $\eta$-dense, there are fewer than $(1-\eta)\Delta$ nodes $u\in N(v)$ satisfying $|N(u)\cap N(v)|\ge (1-\eta)\Delta$. We assume that $d=|N(v)|\ge (1-\eta/5)\Delta$, as otherwise $\zeta_v\ge (\eta/10)\Delta$ follows from the definition of $\zeta_v$. Thus, $v$ has at least $(4\eta/5)\Delta$ neighbors $u$, each having at least $(4\eta/5)\Delta$ non-neighbors in $N(v)$; therefore $m(N(v))\le {\Delta\choose 2}-(8/25)\eta^2\Delta^2$, and $\zeta_v\ge (8\eta^2/25)\Delta\ge (\eta^2/4)\Delta$. Let us consider a node $v\in C=C_i$. Each external neighbor $w\in N(v)\setminus C$ has at most $\varepsilon\Delta$ neighbors in $C$ (by the definition of ACD). 
Thus, since $|N(v)\cap C|\ge (1-\varepsilon)\Delta$, $w$ contributes at least $(1-2\varepsilon)\Delta$ non-edges to $G[N(v)]$. In total, the external neighbors contribute at least $e(v)(1-2\varepsilon)\Delta$ non-edges in $G[N(v)]$, which must be at most $\zeta_v \Delta$; hence, $\zeta_v\ge (1-2\varepsilon)e(v)$. Next, observe that for every node $u\in C\setminus N(v)$, $|N(u)\cap N(v)\cap C|\ge (1-3\varepsilon)\Delta$, since $|N(u)\cap C|,|N(v)\cap C|\ge (1-\varepsilon)\Delta$ and $|C|\le (1+\varepsilon)\Delta$; hence, there are at least $a(v) \cdot (1-3\varepsilon)\Delta$ edges between $N(v)$ and $C\setminus N(v)$. On the other hand, by the definition of sparsity, at most $2\zeta_v\Delta$ edges can exit $N(v)$. Thus, $\zeta_v\ge (1-3\varepsilon)a(v)/2$. \end{proof} The two lemmas above immediately imply the following one. \begin{lemma} \label{lem:sparseGetsSlack} After {\textsc{SlackGeneration}}, every node $v\in V_{sparse}$ has slack $\Omega(\Delta)$, and every node $w\in C_i$ with antidegree $a(w)$ and external degree $e(w)$ such that $a(w)+e(w)\ge c\log n$, for a large enough constant $c>0$, has slack $\Omega(e(w)+a(w))$, w.h.p. \end{lemma} \section{Step 2: Coloring Sparse Nodes}\label{sec:sparsecoloring} We show that given the $(\varepsilon,\eta)$-ACD, the maximum degree of the graph induced by uncolored nodes in $V_{sparse}$ can be reduced to $O(\log n)$ with $O(\log \log \Delta)$ executions of {\RCT} in $G[V_{sparse}]$, after which we can apply {\textsc{ColorSmallDegreeNodes}} to finish coloring $G[V_{sparse}]$. We call this procedure {\textsc{ColorSparseNodes}}. After slack generation, each node in $V_{sparse}$ has slack $\Omega(\Delta)$ (see Sec.~\ref{sec:slack}). \Cref{lem:oneshotslowdecrease} shows that in a few iterations, the slack of every node in $V_{sparse}$ becomes larger than its uncolored degree, after which \Cref{lem:basiconeshot} applies, to show that the degrees of nodes rapidly decrease to $O(\log n)$. 
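As a sanity check of this degree-reduction behavior, the following Python sketch (our own illustration, not part of the formal development; all function names are ours) simulates synchronous applications of {\RCT} on a random graph in which every node starts with slack at least twice its degree:

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi graph G(n, p) as an adjacency dict."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def random_color_trial(adj, palette, color, rng):
    """One synchronous round of RandomColorTrial: every uncolored node v
    picks c_v uniformly from Psi(v) and keeps it iff c_v conflicts neither
    with a neighbor's pick nor with a neighbor's permanent color."""
    uncolored = [v for v in adj if v not in color]
    pick = {v: rng.choice(sorted(palette[v])) for v in uncolored}
    for v in uncolored:
        if all(pick.get(u) != pick[v] and color.get(u) != pick[v]
               for u in adj[v]):
            color[v] = pick[v]
    for v in list(color):      # keep the lists proper: a taken color
        for u in adj[v]:       # disappears from neighboring palettes
            palette[u].discard(color[v])

rng = random.Random(1)
adj = random_graph(300, 0.05, rng)
# palettes of size 2*deg(v) + Theta(log n), i.e., slack s_v >= 2 d_v
palette = {v: set(range(2 * len(adj[v]) + 20)) for v in adj}
color = {}
for _ in range(6):
    random_color_trial(adj, palette, color, rng)
```

In runs of this sketch, a handful of rounds leaves only a small fraction of nodes uncolored, mirroring the rapid degree drop that \Cref{lem:basiconeshot} formalizes; the simulation is sequential and centralized, unlike the distributed algorithm.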
\begin{lemma}[Coloring sparse nodes] \label{lem:coloringSparse} After {\textsc{SlackGeneration}}, {\textsc{ColorSparseNodes}} colors all vertices in $V_{sparse}$ in $O(\log \log \Delta + T)$ rounds, w.h.p., where $T$ is the time needed to $(deg+1)$-list color an $n$-vertex graph with maximum degree $O(\log^4 n)$ in a colorspace of size $\poly(n)$. \end{lemma} \begin{proof} Assume that $\Delta\ge c\log n$, for a large enough constant $c>0$, as otherwise $G$ can be colored in $T$ rounds, using {\textsc{ColorSmallDegreeNodes}}. After slack generation, each node $v\in V_{sparse}$ has slack $s_v\ge s^*= c'\Delta$, for a constant $c'>0$ (\Cref{lem:sparseGetsSlack}). Let $w\in V_{sparse}$ and let $S$ be the set of uncolored neighbors of $w$ in $G[V_{sparse}]$. Since each node $v\in S$ has a palette of size $|\Psi(v)|\ge s_v\ge cc'\log n$, Lemma~\ref{lem:oneshotslowdecrease} implies that after $O(1)$ rounds, there are at most $s_w/2$ uncolored nodes remaining in $S$, w.h.p. By the union bound, this holds for all nodes $w\in V_{sparse}$, w.h.p. For the subsequent rounds, we lower bound the probability for a node $v$ to get colored, in order to apply Lemma~\ref{lem:basiconeshot}. In every round $i$, node $v$ picks a uniformly random color from $\Psi(v)$; hence, conditioned on any colors selected by its $t_i$ participating neighbors, $v$ selects a different color w.p.\ at least $(|\Psi(v)| - t_i)/|\Psi(v)|= 1-t_i/|\Psi(v)|\ge 1-t_i/s_v$, using $s_v\le |\Psi(v)|$. Note that the probability bound on getting colored holds in each iteration as the slack of a node never decreases; hence, we can indeed apply \Cref{lem:basiconeshot} with parameters $s_v=s^*=c'\Delta$ (also recall that $s_v\ge cc'\log n$). The lemma implies that in $O(\log\log \Delta)$ rounds, the maximum degree of $G[V_{sparse}]$ reduces to $O(\log n)$, w.h.p. The remaining nodes in $V_{sparse}$ can be colored in $T$ rounds, using {\textsc{ColorSmallDegreeNodes}}. 
\end{proof} \section{Step 3: Coloring Dense Nodes}\label{sec:densecoloring} We assume here that the $(\varepsilon,\eta)$-ACD $V_{sparse},C_1,\dots,C_k$ of $G$, with $\varepsilon=1/3$ and $\eta=\varepsilon/108$, is given (computed in Alg.~\ref{alg:main}, line~\ref{st:mainacd}), where each almost-clique $C=C_i$ has a designated leader node $w_{C}$ (e.g., the node with minimum ID), as well as a clique overlay (computed in Alg.~\ref{alg:main}, line~\ref{st:overlay}). We further assume that {\textsc{SlackGeneration}} has been executed (line~\ref{st:mainslack}), and Lemma~\ref{lem:sparseGetsSlack} applies. In this section we describe algorithm {\textsc{ColorDenseNodes}} (Alg.~\ref{alg:dense}) that colors the vertices in almost-cliques $C_1,\ldots,C_k$. The high level idea is to use the slack of nodes to reduce the degrees of the subgraph induced by uncolored dense nodes to $\poly \log(n)$, and then color the remaining vertices via {\textsc{ColorSmallDegreeNodes}} (\Cref{thm:smallDegree}). We prove the following result. \begin{lemma}[Coloring dense nodes] \label{lem:coloringDense} After {\textsc{SlackGeneration}}, {\textsc{ColorDenseNodes}} colors all vertices in $C_1,\dots,C_k$ in $O(T+\log \log n + \log^2\log\Delta)$ rounds, w.h.p., where $T$ is the time needed to $(deg+1)$-list color an $n$-vertex graph with maximum degree $O(\log^4 n)$ in a colorspace of size $\poly(n)$. \end{lemma} We present the algorithm in Sec.~\ref{ssec:denseAlg}, prove that all dense nodes are colored in Sec.~\ref{ssec:degreeReduction}, and complete the proof of \Cref{lem:coloringDense} by proving that the algorithm can be implemented efficiently in the {$\mathsf{CONGEST}$\xspace} model in Sec.~\ref{ssec:randomGreedyImplementation}. \emph{We assume} that $\Delta=\omega(\log^4 n)$, as otherwise {\textsc{ColorSmallDegreeNodes}} gives the lemma. 
\subsection{Algorithm Description} \label{ssec:denseAlg} Towards reducing the degrees of nodes, the plan is to first reduce the external degree of each node to $O(\log n)$, and then reduce the size of each almost-clique. If there were no edges within the almost-cliques, one could reduce the external degree of nodes by applying {\RCT} $O(\log\log\Delta)$ times, as we did for coloring sparse nodes (\Cref{sec:sparsecoloring}), using the fact that each node has slack proportional to its external degree (by \Cref{lem:sparseGetsSlack}). There are two obstacles to this. First, the external degrees of nodes can be very different, which is problematic when applying the arguments from \Cref{sec:oneshot}. Second, unfortunately, there are many edges in almost-cliques. To overcome the first obstacle, we partition each almost-clique into layers of carefully chosen sizes and handle them sequentially. This ensures that although nodes have different external degrees, they have (to some extent) similar slacks within the subgraph induced by each layer, which is provided by the uncolored neighbors in subsequent layers that are left to be colored later. To overcome the second obstacle, we assign the nodes in each almost-clique random colors from their palettes \emph{in an arbitrary fixed node order}, ensuring that each node gets a color different from its predecessors (see Alg.~\ref{alg:synchtrial}). A fast implementation of this procedure in the {$\mathsf{CONGEST}$\xspace} model poses certain technical challenges. The idea is to collect the palettes of nodes of an almost-clique into the leader node, which can then choose the candidate colors and send them back to the nodes. This cannot be done quickly, even with fast communication via clique overlays. Instead, we show that it suffices to send a large enough random subset of each palette, and the similar slack of nodes provided by partitioning also comes in handy here. 
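The order-based candidate assignment described above (each node receives a random color from its palette that differs from the colors of all its predecessors) can be sketched as follows; the function name and data layout are illustrative assumptions, and the actual procedure operates on sampled sub-palettes rather than full palettes:

```python
import random

def assign_candidates(order, subpalettes, rng):
    """Sketch of the leader-side assignment: process nodes in a fixed
    order and give each a uniformly random color from its (sub)palette
    that no earlier node received. Returns None for a node whose
    sub-palette is exhausted (shown unlikely for large sub-palettes)."""
    used, candidates = set(), {}
    for v in order:
        free = sorted(subpalettes[v] - used)
        candidates[v] = rng.choice(free) if free else None
        if candidates[v] is not None:
            used.add(candidates[v])
    return candidates
```

By construction, two nodes of the same almost-clique never receive equal candidate colors, so conflicts when trying the candidates can only arise along external edges.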
After reducing the external degree, a few applications of the color-assignment-and-trial-in-cliques procedure described above suffice to reduce the number of neighbors of each node in the given layer to $\poly \log(n)$. This is achieved due to the conflict-free (within each almost-clique) assignment of candidate colors, the low external degrees, and the large slack. Algorithm {\textsc{ColorDenseNodes}} is formally described in Alg.~\ref{alg:dense}. In line~\ref{st:partition}, we partition each almost-clique $C=C_i$ into $t+1= O(\log\log\Delta)$ \emph{layers} $R_0^C, R_1^C, \ldots, R_t^C$ that are processed iteratively. The partitioning is done probabilistically, where each vertex independently joins layer $R_i^C$ w.p.\ $p_i$. We describe the probability distribution below. Throughout this section, let $R_i=\bigcup_{j=1}^kR_i^{C_j}$ denote the set of all vertices of layer $i$. The terms $R_i,R_i^C$ will always denote the corresponding sets of \emph{uncolored} nodes, that is, nodes that are permanently colored are automatically removed from these sets. \begin{algorithm}[H] \caption{{\textsc{ColorDenseNodes}} } \label{alg:dense} \begin{algorithmic}[1] \STATE Partition the uncolored nodes of each almost-clique $C$ into layers $R^C_0, R^C_1, \ldots, R^C_t$, where each such node joins $R^C_i$ with probability $p_i$, independently of other nodes. \label{st:partition} \STATE{\textbf{for}} $O(\log\log n)$ iterations \textbf{do} {\RCT} in $R_0$.\label{st:lglgnoneshot} \STATE {\textsc{ColorSmallDegreeNodes}} in $G[R_0]$.\label{st:smalldeg1} \FOR {$i=0,\dots,t-1$} \label{st:densemainloop} \STATE{\textbf{for}} $O(1)$ iterations \textbf{do} {\RCT} in $R_i$. \label{st:oneShotFor2} \STATE{\textbf{for}} $O(\log\log \Delta)$ iterations \textbf{do} {\textsc{SynchronizedColorTrial}} in $R_i$.\label{st:core} \ENDFOR \STATE {\textsc{ColorSmallDegreeNodes}} in $G[V\setminus V_{sparse}]$.
\label{st:smalldeg3} \end{algorithmic} \end{algorithm} The purpose of {\RCT} in line \ref{st:lglgnoneshot}, followed by {\textsc{ColorSmallDegreeNodes}}, is to reduce the size of $R_0^C$ by a logarithmic factor (Lemma~\ref{lem:rcisize}), which is needed for efficient communication using the clique overlay. When processing each layer, we start with $O(1)$ applications of {\RCT}, with the purpose of increasing the slack by a constant factor (used in \Cref{lem:colorpickprobability}). The main action happens in line~\ref{st:core}, where $O(\log\log\Delta)$ applications of {\textsc{SynchronizedColorTrial}} (described below) reduce the size of layer $i$, so that each node in $V$ has $O(\log^2 n)$ neighbors in $R_i$. After this we can invoke {\textsc{ColorSmallDegreeNodes}} to finish coloring $V\setminus V_{sparse}$. The subgraph induced by the last layer $R_t$ has small degree, so it is handled by {\textsc{ColorSmallDegreeNodes}} directly. Alg.~\ref{alg:dense} is executed on all almost-cliques in parallel (in particular, each layer is processed in parallel) and all claims (in particular those in Section~\ref{ssec:degreeReduction}) hold for all almost-cliques. Finally, let us describe the probability distribution for partitioning. Let $t'=\lceil\log_{3/2}\log_{\log n}\sqrt{\Delta}\rceil$, and let $t\le 3\log\log \Delta$ be as specified below. Let $p_1=1/\log^{3/2} n$, and, for $2\le i\le t'$, let $p_i=p_{i-1}^{3/2}$. For $t'<i\le t$, let $p_i=\sqrt{p_{i-1}/\Delta}$, where $t$ is the largest value such that $p_t\ge c\log n/\Delta$, for a sufficiently large constant $c>0$. Finally, let $p_0=1 - \sum_{i=1}^t p_i$. We let $\Lambda_i=\Delta p_i$ denote the (roughly) expected size of $R_i^C$. The following observation contains all the properties of the probability distribution that we need. \begin{observation}\label{obs:lambdaprops} Let $n>16$.
For every $c>0$, there is a $c'>0$ such that if $\Delta>c'\log^4 n$, then: \begin{minipage}{0.5\textwidth} \begin{enumerate}[label=(\roman*),noitemsep] \item $t$ is well defined, and $t'<t\le 3\log\log\Delta$, \item $\Lambda_0\ge \Delta/4$, \item $c\log n \le \Lambda_t\le c^2\log^2 n$ \end{enumerate} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{enumerate}[label=(\roman*),noitemsep] \setcounter{enumi}{3} \item $\Lambda_i\ge c\log n$, $0\le i\le t$ \item $\Lambda_i\le\Lambda_{i+1}^2$, $0\le i<t$, and \item $\Lambda_i\le\sqrt{\frac{\Delta \Lambda_{i+1}}{\log n}}$, $0<i<t$\label{obsi:sending} \end{enumerate} \end{minipage} \end{observation} \begin{proof} \begin{enumerate}[label=(\roman*)] \item By the definition of $t'$, we have $p_{t'-1}\ge \Delta^{-1/2}$, implying that $p_{t'}\ge \Delta^{-3/4}>c\log n /\Delta$, for $\Delta\ge c^4\log^4 n$. Thus $t$ is well defined; we have $t>t'$, and by the definition of $t$, $p_t\Delta\geq c\log n$. Also note that $p_{t'}\Delta\le \sqrt{\Delta}$. For $t\ge i>t'$, we have $p_i\Delta=\sqrt{p_{i-1}\Delta}$ and $p_t\Delta\ge c\log n$; hence $t-t'\le \log\log\Delta$. Since $t'\le 2\log\log\Delta$, we obtain (i). \item Note that for $1\le i\le t'$, we have $p_i=(\log n)^{-(3/2)^i}$, and since $\log n> 4$, it holds that $\sum_{i\le t'} p_i<1/4$. For $i>t'$, we have that $p_i<1/\sqrt{\Delta}$, and since $t-t'\le \log\log\Delta$, we have that $\sum_{i>t'}p_i<\log\log\Delta/\sqrt{\Delta}\le 1/2$, if $\Delta\ge 16$; hence, $p_0=1-\sum_{i=1}^t p_i\ge 1/4$, and $\Lambda_0\ge \Delta/4$. \item By the definition of $t$, we have $p_t\ge c\log n/\Delta$, and $\sqrt{p_t/\Delta}<c\log n/\Delta$. These imply that $c\log n\le \Lambda_t< c^2\log^2 n$. \item Follows from the fact that $(p_i)$ is a decreasing sequence, and $p_t\ge c\log n/\Delta$. \item As observed above, $p_{t'}\ge \Delta^{-3/4}$. The latter holds for $i<t'$ as well.
Then, $p_{i}\ge \Delta^{-3/4}$ implies that $(p_i\Delta)^{4/3}\ge\Delta^{1/3}$ holds for all $0\leq i\leq t'$. Using this in the last inequality, we obtain, for $t'> i>0$, \[ \Lambda_i=p_{i}\Delta =p_{i+1}^{2/3}\Delta= (p_{i+1}\Delta)^{2/3}\cdot \Delta^{1/3}\le (p_{i+1}\Delta)^{2}=\Lambda_{i+1}^2\ . \] For $t>i\ge t'$, we have $\Lambda_i=\Lambda_{i+1}^2$, by the definition of $p_i$. For $i=0$, we have $\Lambda_0/\Lambda_1^2\le\log^3 n/\Delta<1$. \item For $0<i<t'$, we have \[ \Lambda_{i}^2=\Delta^2 p_i^2 =\Delta^2 p_{i+1}^{4/3}=\Delta\Lambda_{i+1}\cdot p_{i+1}^{1/3}\le \Delta\Lambda_{i+1}/\log n ,\] since $p_{i+1}= \log^{-(3/2)^{i+1}}n\leq \log^{-3} n$ holds for $i\ge 1$. For $t>i\ge t'$, we have $p_i\le \Delta^{-1/2}$, hence $\Lambda_i\le \sqrt{\Delta}$. Moreover, since $\Lambda_{i+1}=\sqrt{\Lambda_{i}}$, it suffices to prove $\Lambda_{i}\le \sqrt{\Delta/\log n}\cdot \Lambda_i^{1/4}$, which reduces to $\Lambda_i\le (\Delta/\log n)^{2/3}$. The latter holds for $\Lambda_i\le \sqrt{\Delta}$, since $\Delta>\log^4 n$. \qedhere \end{enumerate} \end{proof} \paragraph{Synchronized Color Trial.} In {\textsc{SynchronizedColorTrial}} (see Alg.~\ref{alg:synchtrial}), each vertex of $R_i^C$ is assigned a random candidate color distinct from those of other nodes in $R_i^C$, which it then tries via {\textsc{TryColor}}. To achieve this, each vertex of $R_i^C$ selects a random subset $P(v)$ of its palette $\Psi(v)$ and sends $P(v)$ to the clique leader $w_C$, who then locally processes the nodes in $R_i^C$ in an arbitrary order and assigns each node $v$ a random color from its set $P(v)$ that is different from the colors assigned to previous nodes. We let $|P(v)|=\Pi_i=c_P\max(1,|R_i^C|/\Lambda_{i+1})\log n$, where $c_P>0$ is a large enough constant, specified in \Cref{lem:colorpickprobability}.
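As a numerical sanity check, the layer-probability sequence defined above can be generated directly from its definition; in the sketch below, the parameter values and the constant $c$ are hypothetical:

```python
import math

def layer_probabilities(Delta, logn, c=1.0):
    """Generate p_1..p_t per the definition: p_1 = log^{-3/2} n,
    p_i = p_{i-1}^{3/2} for i <= t', then p_i = sqrt(p_{i-1}/Delta),
    stopping at the largest t with p_t >= c*log(n)/Delta; p_0 takes
    the remaining probability mass. Parameters are illustrative."""
    t_prime = math.ceil(math.log(math.log(math.sqrt(Delta), logn), 1.5))
    p = [None, logn ** -1.5]                 # index i holds p_i
    for _ in range(2, t_prime + 1):
        p.append(p[-1] ** 1.5)
    while True:
        nxt = math.sqrt(p[-1] / Delta)
        if nxt < c * logn / Delta:           # t is maximal with p_t >= c log n / Delta
            break
        p.append(nxt)
    p[0] = 1 - sum(p[1:])
    return p

p = layer_probabilities(10**9, 32)           # e.g. log n = 32, Delta = 10^9
```

For such values one can check the claims of the observation numerically, e.g.\ $p_0\ge 1/4$ and $c\log n\le \Lambda_t\le c^2\log^2 n$ for $\Lambda_i=p_i\Delta$.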
\begin{algorithm}[H]\caption{{\textsc{SynchronizedColorTrial}} (executed in $R_i^C$, for all $C$ in parallel)} \label{alg:synchtrial} \begin{algorithmic}[1] \STATE Leader $w_C$ computes $|R_i^C|$ and sends it to all nodes in $R_i^C$. \STATE Each node $v\in R^C_i$ sends a uniformly random subset $P(v)\subseteq\Psi(v)$ of size $|P(v)|= \Pi_i$ to $w_{C}$. \STATE $w_C$ processes the nodes in $R_i^C$ in an arbitrary order $v_1,v_2\dots$, where $v_j$ is assigned a candidate color $c_j$ chosen uniformly at random from $ P(v_j) \setminus \{c_1, c_2, \ldots, c_{j-1}\}$. \STATE $w_C$ sends each node $v_j$ its candidate color $c_j$. \STATE {\textsc{TryColor}}($v_j$, $c_j$) in $G[R_i]$, for all $j\ge 1$. \label{st:trying} \end{algorithmic} \end{algorithm} We show in \Cref{ssec:randomGreedyImplementation} that one iteration of {\textsc{SynchronizedColorTrial}} can be implemented in the {$\mathsf{CONGEST}$\xspace} model in $O(1)$ rounds, using the \emph{clique overlay} computed in step~\ref{st:overlay} of Alg.~\ref{alg:main}. Note that this is nearly trivial to do in the $\mathsf{LOCAL}$\xspace model, as any node in $R_i^C$ is within distance $2$ from the clique leader $w_C$. Further, each node can send its whole palette $\Psi(v)$ to the leader, which simplifies the analysis. \subsection{Degree Reduction for Dense Nodes} \label{ssec:degreeReduction} We now prove that $O(\log\log \Delta)$ repetitions of {\textsc{SynchronizedColorTrial}} (line~\ref{st:core}) reduce the external degree of each node in $R_i$ to $O(\log n)$, and 2 more iterations reduce the number of neighbors of each node $u\in V$ in $R_i$ to $O(\log^2 n)$. We need the following notation. For $v\in V$, let $r_i(v)=|N(v)\cap R_i|$ denote its number of uncolored neighbors in $R_i$. For $v\in R_i^C$, let $e_i(v)$ denote its \emph{external degree} -- its number of uncolored neighbors in $R_i\setminus R_i^C$ -- and $a_i(v)$ its \emph{antidegree} -- the number of nodes in $R_i^C$ that are not adjacent to $v$.
Note that $e_i(v), r_i(v)$ and $a_i(v)$ may change during the execution of the algorithm, but they can only decrease. We begin by bounding the size of $R^C_i$, as well as various degrees of nodes, restricted to $R_i^C$, immediately after the partitioning. \begin{lemma}\label{lem:nbrsinri} Let $C$ be an almost-clique, and $0\le i\le t$. After line~\ref{st:partition} of Alg.~\ref{alg:dense}, it holds w.h.p.\ that: (i) $|R_i^C|=\Theta(\Lambda_i)$, (ii) $r_i(u)=O(\Lambda_i)$, for $u\in V$, (iii) $|N(v)\cap R_i^C|=\Theta(\Lambda_i)$ and $e_i(v)=O(\Lambda_i)$, for $v\in C$. \end{lemma} \begin{proof} Initially, $|C|=(1\pm \varepsilon)\Delta$, and each node $v\in C$ has at least $(1-\varepsilon)\Delta$ neighbors in $C$. Before partitioning, a node in $C$ can only be colored in {\textsc{SlackGeneration}}, where it participates with probability $p=\frac{1}{20}$. Thus, every node $v\in C$ joins $R^C_i$ w.p.\ $p'_i\ge \frac{19p_i}{20}$, and similarly, every node $v\in V\setminus V_{sparse}$ joins $R_i$ independently w.p.\ $p'_i$. The expected size of $R_i^C$, as well as the expected number of neighbors of a vertex $v\in C$ in $R^C_i$ is $\Theta(p_i\Delta)=\Theta(\Lambda_i)$. Similarly, the expected number of neighbors of a node $u\in V$ in $R_i$ is $O(p_i\Delta)=O(\Lambda_i)$. Since $\Lambda_i>c\log n$, for a large enough constant $c$ (Obs.~\ref{obs:lambdaprops}), all claims follow by the Chernoff bound (\ref{eq:chernoffless}). (Note that $e_i(v)\le r_i(v)$.) \end{proof} The following lemma highlights how many ``free'' colors each node in $R_i^C$ has, due both to its slack and to the fact that, while layer $i$ is being processed, its neighbors in $R_{i+1}^C$ stay uncolored. \begin{lemma}[candidate color assignment]\label{lem:colorpickprobability} Let $0\le i<t$.
If $\Pi_i=c_P\max(1,|R_i^C|/\Lambda_{i+1})\log n$, for a large enough constant $c_P>0$, then in every iteration of {\textsc{SynchronizedColorTrial}} in layer $i$, each node $v\in R_i^C$ has a palette of size at least $|R_i^C|+3e_i(v)+\Omega(\Lambda_{i+1})$ and is assigned a candidate color from $P(v)$, w.h.p., even if conditioned on an arbitrary color assignment to other nodes in $R_i^C$. \end{lemma} \begin{proof} Let $v\in C$ be a node. Let $a_0$ and $e_0$, respectively, denote the antidegree and external degree of $v$ before slack generation (line~\ref{st:mainslack} in Alg.~\ref{alg:main}). By Lemma~\ref{lem:sparseGetsSlack}, after slack generation, $v$ has slack at least $c(a_0+e_0)$, for a constant $c>0$, w.h.p. By Lemma~\ref{lem:nbrsinri}, $v$ also has at least $c_1\Lambda_{i+1}$ uncolored neighbors in $R_{i+1}^C$, for a constant $c_1>0$, w.h.p.; hence, $v$ initially has a palette of size $|\Psi(v)|\ge |N(v)\cap R_i^C| + c(a_0+e_0) + c_1\Lambda_{i+1}$. If $a_0+e_0=\Omega(\log n)$, with a large enough coefficient, then, by Lemma~\ref{lem:oneshotslowdecrease}, after $O(1)$ executions of {\RCT} (line~\ref{st:oneShotFor2} in Alg.~\ref{alg:dense}), we have $a_i(v)<ca_0$ and $e_i(v)<(c/3)e_0$, w.h.p., and the palette size in each subsequent round is \begin{align} |\Psi(v)|&\ge |N(v)\cap R_i^C| + c(a_0+e_0) + c_1\Lambda_{i+1}\ge |N(v)\cap R_i^C|+a_i(v)+3e_i(v)+c_1\Lambda_{i+1}\notag\\ &\ge |R_i^C| + 3e_i(v) + c_1\Lambda_{i+1}\ .\label{eq:palettesizeslack} \end{align} If, on the other hand, $a_0+e_0=O(\log n)$, then (\ref{eq:palettesizeslack}) still holds, with a different constant $c'_1=c_1/4$ in front of $\Lambda_{i+1}$, since by Obs.~\ref{obs:lambdaprops}, $\Lambda_{i+1}=\Omega(\log n)$, with a large enough coefficient, which we can choose so that $c_1\Lambda_{i+1}>4(a_0+e_0)$, and hence $c_1\Lambda_{i+1}\ge c_1\Lambda_{i+1}/4 + 3(a_0+e_0)$. Let $\mathcal{C}$ be the set of colors that the leader assigns to the nodes in $R_i^C$ preceding $v$.
It follows from (\ref{eq:palettesizeslack}) that for any such set $\mathcal{C}$, the probability that $P(v)\subseteq \mathcal{C}$ is at most \[ \left(\frac{|R_i^C|}{|R_i^C| + c_1\Lambda_{i+1}}\right)^{|P(v)|}=\left(1-\frac{c_1\Lambda_{i+1}}{|R_i^C|+c_1\Lambda_{i+1}}\right)^{|P(v)|}\le \exp\left(-h\right)\ , \] where $h=c_1|P(v)|\Lambda_{i+1}/(c_1\Lambda_{i+1}+|R_i^C|)>c_3\log n$, for a large enough constant $c_3>0$, since $|P(v)|=\Pi_i=c_P\max(1,|R_i^C|/\Lambda_{i+1})\log n$, for a large enough constant $c_P>0$. Thus, it holds w.h.p.\ that the leader assigns $v$ a color from $P(v)\setminus \mathcal{C}$. \end{proof} \begin{lemma}\label{lem:exdegreeSmall} Let $i<t$ and assume $R_0,\dots,R_{i-1}$ have been colored. After $O(\log\log \Delta)$ iterations of {\textsc{SynchronizedColorTrial}}, it holds for every node $v\in R_i^C$ that $e_i(v)=O(\log n)$, w.h.p. \end{lemma} \begin{proof} The goal is to apply Lemma~\ref{lem:basiconeshot} with a suitable subgraph $H$. To obtain $H$ from $G[R_i]$, each node $v\in R_i^C$ removes all but the edges to nodes in $R_i\setminus R_i^C$. Note that the uncolored degree of a vertex in $H$ corresponds to its (uncolored) external degree in $G[R_i]$. When restricted to $H$, the algorithm (line~\ref{st:core} in Alg.~\ref{alg:dense}) is still a valid coloring algorithm, since nodes in $V_{sparse}\cup \bigcup_{j\neq i} R_j$ do not participate, and pairs of nodes within the same almost-clique $C$ are never assigned the same color. To apply Lemma~\ref{lem:basiconeshot} we need to show that in iteration $j$ of {\textsc{SynchronizedColorTrial}} each vertex $v$ gets colored with probability at least $1-|N_j(v)|/s_v$ irrespective of the color choices of other nodes, for a suitable choice of $s_v$, where $N_j(v)$ is the set of uncolored neighbors of $v$ in $H$ in iteration $j$. We condition the rest of the proof on the high probability event that every node in $R_i$ in every iteration is assigned a candidate color (due to \Cref{lem:colorpickprobability}).
Let $v\in R_i^C$ be a vertex, and consider a fixed iteration of {\textsc{SynchronizedColorTrial}}. Let $\mathcal{C}$ be the (random) set of candidate colors assigned to the nodes of $R_i^C$ preceding $v$. In the rest of the paragraph, we condition on an arbitrary outcome of $\mathcal{C}$. By the assumption above, node $v$ is assigned a candidate color $c_v\in P(v)\setminus \mathcal{C}$. It follows from the randomness of $P(v)$ and the random choice of the candidate color $c_v$, that $c_v$ is uniformly distributed in $\Psi(v)\setminus \mathcal{C}$. Let $\mathcal{C'}$ be the set of colors assigned to the external neighbors of $v$. It follows that even when conditioned on arbitrary $\mathcal{C'}$, $v$ is permanently colored, i.e., $c_v\in \Psi(v)\setminus (\mathcal{C}\cup \mathcal{C'})$ holds, w.p.\ at least $1-|\mathcal{C'}|/|\Psi(v)\setminus \mathcal{C}|$. We know that $|\mathcal{C}|<|R_i^C|$, $|\mathcal{C'}|\leq|N_H(v)|$, and $|\Psi(v)|\ge |R_i^C|+3e_i(v)+\Omega(\Lambda_{i+1})$ (from \Cref{lem:colorpickprobability}). Thus, $v$ gets permanently colored w.p.\ at least \begin{equation}\label{eq:cliquesuccprob} 1-\frac{|\mathcal{C'}|}{|\Psi(v)\setminus \mathcal{C}|}\geq 1-\frac{|N_j(v)|}{2e_i(v)+\Omega(\Lambda_{i+1})}=1-\frac{|N_j(v)|}{s_v}\ , \end{equation} irrespective of the candidate color assignments of other nodes in $G$, where $s_v=2e_i(v)+\Omega(\Lambda_{i+1})$. Since also $d_v=|N_j(v)|= e_i(v)\le s_v/2$ and $s_v=\Omega(\Lambda_{i+1})=\Omega(\log n)$, with a large enough constant factor (provided by Obs.~\ref{obs:lambdaprops}), Lemma~\ref{lem:basiconeshot} applies with parameters $d_v$ and $s_v$. Note that $s_v\in O(\Lambda_i)\cap \Omega(\Lambda_{i+1})$, so after $O(\log\log \min_v s_v)=O(\log\log \Delta)$ iterations, each node $v\in R_i$ has $e_i(v)=O(\max_{u,w\in R_i} (s_u/s_w)\cdot \log n)=O(\Lambda_{i+1}\log n)$, w.h.p.
Now, we bound $e_i(v)$ further by applying \Cref{lem:basiconeshot} again with a smaller upper bound on $\max_{u,w\in R_i} (s_u/s_w)$: Let us replace $s_v$ by $\min(s_v, O(\Lambda_{i+1}\log n))$; note that we still have $s_v\ge 2d_v=2e_i(v)$, as well as $s_v=\Omega(\Lambda_{i+1})$, and hence, $\max_{u,w\in R_i}(s_u/s_w)=O(\log n)$. Applying \Cref{lem:basiconeshot} again with the same reasoning and parameters $d_v=e_i(v)$ and $s_v$, we see that after $O(\log\log\Delta)$ more iterations, $e_i(v)=O(\log^2 n)$, w.h.p. A final application of \Cref{lem:basiconeshot} with $d_v=e_i(v)$ and $s_v=O(\log^2 n)$ implies $e_i(v)=O(\log n)$, in $O(\log\log\log n)$ more iterations, w.h.p. \end{proof} \begin{lemma}\label{lem:degreeSmall} Let $i<t$ and assume $R_0,\dots,R_{i-1}$ have been colored, and for every node $v\in R_i$, $e_i(v)=O(\log n)$. After 2 iterations of {\textsc{SynchronizedColorTrial}}, it holds, for every node $u\in V$, that $r_i(u)=O(\log^2 n)$, w.h.p. \end{lemma} \begin{proof} We condition on the high probability events that for each node $u\in V$, $r_i(u)=O(\Lambda_i)$ (\Cref{lem:nbrsinri}), each $v\in R_i^C$ has palette size $|\Psi(v)|\ge |R_i^C| + 3e_i(v)+\Omega(\Lambda_{i+1})$ and receives a candidate color in every iteration (\Cref{lem:colorpickprobability}). Consider a node $u\in V$ and an arbitrary iteration of {\textsc{SynchronizedColorTrial}}. For every neighbor $v\in N(u)\cap R_i$, let $X_v$ be a binary random variable that is 1 iff $v$ stays uncolored in the iteration. As shown in (\ref{eq:cliquesuccprob}), $\Pr[X_v=1]\le e_i(v)/(2e_i(v)+\Omega(\Lambda_{i+1}))=O(\log n/\Lambda_{i+1})$, even when conditioned on an adversarial choice of candidate colors for nodes in $R_i\setminus \{v\}$, and hence, when conditioned on arbitrary values of $X_w$, $w\in R_i\setminus \{v\}$ (since \emph{trying a color} is deterministic).
Recalling that $r_i(u)=O(\Lambda_i)$, we have $E\left[\sum_{v\in N(u)\cap R_i} X_v\right]=\frac{O(\Lambda_i\log n)}{\Lambda_{i+1}}$, and since the latter is $\Omega(\log n)$, an application of Chernoff bound (\ref{eq:chernoffmore}) implies that after the iteration, $r_i(u)=|N(u)\cap R_i|=\sum_{v\in N(u)\cap R_i} X_v=\frac{O(\Lambda_i\log n)}{\Lambda_{i+1}}$, w.h.p. By the same reasoning ($s_v$ does not decrease), after another iteration, $r_i(u)=O(\max(1, \Lambda_i\log n/\Lambda^2_{i+1})\log n)$, w.h.p., which is in $O(\log^2 n)$, by Obs.~\ref{obs:lambdaprops}, (v). \end{proof} \subsection{CONGEST Implementation and Proof of Lemma~\ref{lem:coloringDense}} \label{ssec:randomGreedyImplementation} All steps in Alg.~\ref{alg:dense}, except for the candidate color assignment, can be implemented in {$\mathsf{CONGEST}$\xspace} by design. The computation and distribution of $|R_i^C|$ in {\textsc{SynchronizedColorTrial}} can be done in $O(1)$ rounds using standard aggregation tools and the fact that $C$ has diameter $2$ (Lemma~\ref{lem:acdproperties}). It remains to show that nodes can indeed send their sets $P(v)$ to their leader in $O(1)$ {$\mathsf{CONGEST}$\xspace} rounds, using the clique overlays of almost-cliques. Recall that colored nodes automatically leave $R_i, R_i^C$, reducing their size. \begin{lemma}\label{lem:rcisize} After line \ref{st:smalldeg1} of Alg.~\ref{alg:dense}, $|R_0^C|<\Delta/\log^2 n$, w.h.p. \end{lemma} \begin{proof} Initially, $|R_0^C|=O(\Delta)$. By Lemma~\ref{lem:oneshotslowdecrease}, after each application of {\RCT} in Step \ref{st:lglgnoneshot}, the number of nodes in $R_0^C$ of degree $\Omega(\log n)$ decreases by a constant factor, w.h.p. Hence, after $O(\log \log n)$ iterations, it decreases by a factor $O(\log^2 n)$, w.h.p. Since we also color (and remove) the small degree vertices in step~\ref{st:smalldeg1}, the size of $R_0^C$ shrinks to $\Delta/\log^2 n$, w.h.p. 
\end{proof} \begin{lemma}\label{lem:runtime} W.h.p., in each iteration of {\textsc{SynchronizedColorTrial}} in $R_i^C$, $i<t$, each node $v$ succeeds in sending a sub-palette $P(v)$ of size $\Pi_i$ to leader $w_C$ in $O(1)$ rounds. \end{lemma} \begin{proof} We use the clique overlay of almost-clique $C$, provided by \Cref{thm:congestedclique}, together with Lenzen's scheme~\cite{Lenzen13}. To this end, we need to ensure that each node sends and receives $O(\Delta)$ colors, and that it indeed has $\Pi_i= c_P\max(1,|R_i^C|/\Lambda_{i+1})\log n$ colors in its palette. The latter follows from Lemma~\ref{lem:colorpickprobability}: each node $v\in R_i^C$ has palette size at least $|R_i^C|+\Omega(\Lambda_{i+1})=\Omega(\max(1,|R_i^C|/\Lambda_{i+1})\log n)$, w.h.p., where the last bound holds as long as $\Lambda_{i+1}>c\log n$, for a large enough constant $c$ (provided by Obs.~\ref{obs:lambdaprops}). Consider a fixed iteration of {\textsc{SynchronizedColorTrial}}. Note that $\Pi_i=O(\Delta)$, since $|R_i^C|<(1+\varepsilon)\Delta$ and $\Lambda_{i+1}=\Omega(\log n)$, as observed above; hence, it remains to show that $w_C$ has to receive only $O(\Delta)$ colors. There are $|R_i^C|$ uncolored nodes in $C$, each sending $c_P \max(1,|R_i^C|/\Lambda_{i+1})\log n$ colors to $w_C$. If $|R_i^C|<\Lambda_{i+1}$, then $w_C$ has to receive $O(|R_i^C|\log n)=O(\Delta)$ colors, since by Lemma~\ref{lem:rcisize}, $|R_0^C|<\Delta/\log^2 n$, and by Lemma~\ref{lem:nbrsinri}, $|R_i^C|=O(\Lambda_i)=O(\Delta/\log n)$, for $i>0$, w.h.p. Otherwise, $w_C$ has to receive $O(|R_i^C|^2\log n/\Lambda_{i+1})$ colors, which again is in $O(\Delta)$: for $i>0$, we have $\Lambda_i^2\log n/\Lambda_{i+1}\le \Delta$, by Obs.~\ref{obs:lambdaprops},~\ref{obsi:sending}, while for $i=0$, we have, by Lemma~\ref{lem:rcisize}, $|R_0^C|<\Delta/\log^2 n$, w.h.p., and $\Lambda_1=\Delta/\log^{3/2} n$, so $|R_0^C|^2\log n/\Lambda_1<\Delta$. This completes the proof.
\end{proof} \begin{proof}[Proof of Lemma~\ref{lem:coloringDense}] \Cref{lem:exdegreeSmall,lem:degreeSmall} imply that w.h.p., after processing layer $i<t$, each node $u\in V$ has at most $O(\log^2 n)$ neighbors in $R_i$. Similarly, \Cref{lem:nbrsinri} implies that every node $u\in V$ has $O(\Lambda_t)=O(\log^2 n)$ neighbors in $R_t$, w.h.p. Altogether, after processing all layers, every node $u\in V$ has at most $O(t\log^2 n)=O(\log\log\Delta \cdot \log^2 n)$ neighbors in $V\setminus V_{sparse}$; hence, {\textsc{ColorSmallDegreeNodes}} (line~\ref{st:smalldeg3}) successfully colors $G[V\setminus V_{sparse}]$, w.h.p. Partitioning into layers can be done without communication. The preprocessing in line~\ref{st:lglgnoneshot} takes $O(\log\log n)$ rounds, followed by $T$ rounds for {\textsc{ColorSmallDegreeNodes}}, which is applied on a graph with $n$ nodes, maximum degree $\poly \log(n)$ and color space size $\poly(n)$. We iterate through $O(\log\log \Delta)$ layers, and each layer makes $O(1)$ iterations of {\RCT} (each taking $O(1)$ rounds), followed by $O(\log\log\Delta)$ iterations of {\textsc{SynchronizedColorTrial}} (each taking $O(1)$ rounds). Finally, we apply {\textsc{ColorSmallDegreeNodes}} to a graph with $n$ nodes, maximum degree $\poly\log(n)$ and color space size $\poly(n)$. Thus, the runtime is in $O\big(\log\log n + (\log\log \Delta)^2 +T\big)$. \end{proof} \section{Computation of ACD and Clique Overlay }\label{sec:ACD} \subsection{ACD Computation in Congest} \label{ssec:acdComputation} A distributed algorithm \emph{computes an ACD} if each $C_i$ has a unique AC-ID that is known to all nodes in $C_i$, and each node in $V_{sparse}$ knows it is in $V_{sparse}$. The goal of the remainder of this section is to prove the following lemma. 
\begin{lemma}[ACD computation] \label{lem:ACD} There is an $O(1)$-round randomized {$\mathsf{CONGEST}$\xspace} algorithm that, given a graph $G=(V,E)$ with maximum degree $\Delta\ge c\log^{2}n$, for a sufficiently large constant $c$, computes an $(\varepsilon,\eta)$-ACD with $\varepsilon=1/3$ and $\eta=\varepsilon/108$, w.h.p. \end{lemma} The algorithm {\textsc{ComputeACD}}, described in Alg.~\ref{alg:acd}, computes an ACD, using a bootstrapping-style procedure. We begin by sampling a subset $S$ of vertices, where each vertex is independently sampled w.p.\ $1/\sqrt{\Delta}$ (line~\ref{st:acdsample}). The idea is to use the whole graph to relay information between the $S$-nodes, in order to efficiently find out which pairs of $S$-nodes belong to the same almost-clique (to be constructed); each node $v\in V\setminus S$ then decides whether it joins such an $S$-almost-clique $C$, depending on the size of $N(v)\cap C$. To construct the $S$-almost-cliques, each node $v\in V$ chooses a random $S$-neighbor (ID) and broadcasts it to its neighbors (line~\ref{st:acdgossip}). A key observation is that if two $S$-nodes are similar, i.e., share many neighbors, then they will likely receive each other's IDs many times, which allows them to detect similarity (with some error), as well as friend edges (lines~\ref{st:acdsimilar}--\ref{st:acdfriend}). The $S$-nodes that have detected many friend edges are likely to be dense, and join a set $S_{dense}$ (line~\ref{st:acddense}). Roughly speaking, the connected components induced by $S_{dense}$ and detected friend edges form the $S$-almost-cliques. Each node in $S_{dense}$ selects the minimum ID of an $S_{dense}$-neighbor and sends it to the other neighbors, thus proposing to form an almost-clique with that ID (line~\ref{st:acdsendmin}). Each node in $V$ that receives the same ID from many neighbors also joins the almost-clique with that ID (line~\ref{st:formclique}).
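The detection steps rest on a counting argument: the w.h.p.\ reception counts for a $\delta$-friend edge and for a non-$4\delta$-friend edge fall on opposite sides of the threshold $(1-2\delta)\sqrt{\Delta}/2$ used in the similarity-detection step. This deterministic sketch checks the margins for hypothetical values of $\Delta$ and $\delta$:

```python
import math

def similarity_margins(Delta, delta):
    """Compare the w.h.p. reception-count bounds from the analysis:
    a delta-friend edge yields at least (1-delta)^2 * sqrt(Delta)/2
    receptions, a non-4delta-friend edge fewer than
    (1+delta)*(1-4delta) * sqrt(Delta)/2; both are compared against the
    detection threshold (1-2delta) * sqrt(Delta)/2."""
    s = math.sqrt(Delta)
    friend_low = (1 - delta) ** 2 * s / 2
    threshold = (1 - 2 * delta) * s / 2
    nonfriend_high = (1 + delta) * (1 - 4 * delta) * s / 2
    return friend_low, threshold, nonfriend_high

lo, th, hi = similarity_margins(10**6, 1/81)  # delta = eps/27 with eps = 1/3
```

Since the gap between the two counts is a constant fraction of $\sqrt{\Delta}=\Omega(\log n)$, a Chernoff bound separates them w.h.p.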
We prove that each connected component $C$ that contains a sufficiently dense node has the desired properties (w.h.p.): $|C|\approx \Delta$, and each node $v\in C$ has $\approx \Delta$ neighbors in $C$. The components not satisfying the required properties are moved into $V_{sparse}$ (line~\ref{st:acdfilter}), as they do not contain a dense node (w.h.p.). Throughout this section we let $\delta=\varepsilon/27<1/80$, and require $\Delta\geq c\log^{2}n$, for a constant $c>0$, as in Lemma~\ref{lem:ACD}. Our claims hold for smaller $\delta$, provided that $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. For any $\gamma\in [0,1]$, we let $F_{\gamma}$ be the set of $\gamma$-friend edges in $G$, and let $G_{F_{\gamma}}=(V,F_{\gamma})$ be the graph on vertex set $V$ with edge set $F_{\gamma}$. \begin{algorithm}[H]\caption{{\textsc{ComputeACD}} ($\delta\in (0,1/80)$)}\label{alg:acd} \begin{algorithmic}[1] \STATE \textbf{Sample} Each node selects itself into $S$ independently w.p.\ $1/\sqrt{\Delta}$ and notifies its neighbors.\label{st:acdsample} \STATE \textbf{Gossip} Each node $v\in V$ selects uniformly at random one ID of an $S$-neighbor (if it has any) and forwards it with probability $\min(1, |S\cap N(v)|/(2\sqrt{\Delta}))$ to each of its $S$-neighbors.\label{st:acdgossip} \STATE \textbf{Detect similar nodes} Each $S$-node $v$ decides it is \emph{approximately $\delta$-similar} to node $u$ if $v$ receives the ID of $u$ at least $(1-2\delta)\sqrt{\Delta}/2$ times.\label{st:acdsimilar} \STATE \textbf{Detect friendship edges} The set $F\subseteq E$ consists of the edges between vertices in $S$ where at least one endpoint identified the other one as approximately $\delta$-similar.\label{st:acdfriend} \STATE \textbf{Detect density} Each $S$-node with more than $(1-2\delta)\sqrt{\Delta}$ incident $F$-edges joins a set $S_{dense}$.\label{st:acddense} \STATE{Each node $v\in S_{dense}$ selects the minimum ID of a node $u\in S_{dense}$ such that $\{u,v\}\in F$ and broadcasts it to
its neighbors in $G$.}\label{st:acdsendmin} \STATE{\textbf{Form cliques} Any vertex $v\in V$ (including vertices in $S$) that receives the same ID at least $(1-11\delta)\sqrt{\Delta}$ times adopts this ID as its AC-ID, otherwise $v$ joins $V_{sparse}$.\label{st:formclique}} \STATE{\textbf{Remove bad cliques} Use the AC-ID node $w$ of each connected component $C$ as a leader node to ensure: If $|C|<(1-\delta)\Delta$ or there is a node $u\in C$ with $|N(u)\cap C|<(1-27\delta)\Delta$ then all nodes in $C$ join $V_{sparse}$.}\label{st:acdfilter} \end{algorithmic} \end{algorithm} In the following analysis, we let $H=(S_{dense}, F\cap (S_{dense}\times S_{dense}))$ denote the subgraph induced by dense nodes ($S_{dense}$) and friendship edges ($F$). We first show that vertices in $S$ detect friend edges between them (with some error). \begin{lemma}[$S$-friend edges are detected] \label{lem:friendsDetected} Let $\delta< 1/4$ and $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. W.h.p., (1) any $\delta$-friend edge between vertices in $S$ is contained in $F$, and (2) any edge in $F$ is a $4\delta$-friend edge. \end{lemma} \begin{proof} Let $u,v\in S$, and let $R_{u,v}$ be the set of common neighbors of $u$ and $v$. The rest of the proof is conditioned on the event that every node in $G$ has at most $(1+\delta)\sqrt{\Delta}$ neighbors in $S$. By Lemma~\ref{lem:nbrhdpreserved}, this holds w.h.p. (using the bound on $\Delta$). Given this, every node in $R_{u,v}$ independently sends the ID of $u$ to $v$ w.p.\ $1/(2\sqrt{\Delta})$. Thus, the expected number of times $v$ receives the ID of $u$ is $|R_{u,v}|/(2\sqrt{\Delta})$. \textbf{(1):} Assume that $\{u,v\}$ is a $\delta$-friend edge. It holds that $|R_{u,v}|\ge (1-\delta)\Delta$, and hence $v$ receives the ID of $u$ at least $(1-\delta)\sqrt{\Delta}/2$ times in expectation, and at least $(1-\delta)^2\sqrt{\Delta}/2\ge (1-2\delta)\sqrt{\Delta}/2$ times, w.h.p., by a Chernoff bound (\ref{eq:chernoffless}).
Claim (1) follows by a union bound over all $\delta$-friend edges.
\textbf{(2):} Now, assume that $\{u,v\}$ is not a $4\delta$-friend edge. Then $|R_{u,v}|< (1-4\delta)\Delta$, and by a similar application of (\ref{eq:chernoffmore}), $v$ receives the ID of $u$ less than $(1+\delta)|R_{u,v}|/(2\sqrt{\Delta})< (1+\delta)(1-4\delta)\sqrt{\Delta}/2\le (1-2\delta)\sqrt{\Delta}/2$ times, w.h.p. Similarly, $u$ receives the ID of $v$ less than $(1-2\delta)\sqrt{\Delta}/2$ times, w.h.p. Claim (2) now follows by a union bound over all edges that are not $4\delta$-friend edges.
\end{proof}
In line~\ref{st:acddense} of Alg.~\ref{alg:acd}, $S$-nodes use the number of incident friend edges to detect whether they are dense. The next lemma provides the guarantees of this step.
\begin{lemma}[Density detection in $S$]\label{lem:Sdense} Let $\delta<1/4$ and $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. W.h.p., (1) if $v\in S$ is $\delta/2$-dense then it is contained in $S_{dense}$, and (2) if $v\in S$ is not $4\delta$-dense then it is not contained in $S_{dense}$. \end{lemma}
\begin{proof}
\textbf{(1):} Let $v$ be a $\delta/2$-dense node and $R_v\subseteq N(v)$ be the set of its $\delta/2$-friends. We know that $|R_v|\ge (1-\delta/2)\Delta$. By Lemma~\ref{lem:friendsDetected}, all edges $\{u,v\}$ with $u\in R_v\cap S$ are in $F$, w.h.p. By Lemma~\ref{lem:nbrhdpreserved} (using the bound on $\Delta$), $|R_v\cap S|\ge (1-\delta)|R_v|/\sqrt{\Delta}\ge (1-\delta)(1-\delta/2)\sqrt{\Delta}\ge (1-2\delta)\sqrt{\Delta}$, and hence $v$ is in $S_{dense}$, w.h.p. Claim (1) now follows by a union bound over all $\delta/2$-dense nodes.
\textbf{(2):} Let $v$ be a node that is not $4\delta$-dense. Let $R_v\subseteq N(v)$ be the set of $4\delta$-friends of $v$. It holds that $|R_v|< (1-4\delta)\Delta$. By Lemma~\ref{lem:nbrhdpreserved} (using the bound on $\Delta$), $|R_v\cap S|<(1+\delta)(1-4\delta)\sqrt{\Delta}\le (1-3\delta)\sqrt{\Delta}$, w.h.p.
By Lemma~\ref{lem:friendsDetected}, for $u\in (N(v)\setminus R_v)\cap S$ the edge $\{u,v\}$ is not in $F$, w.h.p. Hence, $v$ has at most $|R_v\cap S|< (1-3\delta)\sqrt{\Delta}$ incident edges in $F$, and hence $v$ does not join $S_{dense}$. Claim (2) now follows by a union bound over all nodes that are not $4\delta$-dense.
\end{proof}
We will use the following observation from~\cite{ACK19} in the proof of Lemma~\ref{lem:delta4}.
\begin{observation}[\cite{ACK19}] \label{obs:manyFriends} Let $\gamma<1/2$, let $v$ be a $\gamma$-dense vertex, and let $S_v\subseteq N(v)$ be the set of at least $(1-\gamma)\Delta$ neighbors of $v$ along $\gamma$-friend edges. Every vertex in $S_v$ is $2\gamma$-dense. \end{observation}
\begin{proof} Let $w,w'\in S_v$. Since the edges $\{v,w\},\{v,w'\}$ are $\gamma$-friend edges, $|N(w)\cap N(v)|\ge (1-\gamma)\Delta$ and $|N(w')\cap N(v)|\ge (1-\gamma)\Delta$. By Obs.~\ref{obs:deltaintersections}, $|N(w)\cap N(w')|\ge (1-2\gamma)\Delta$. Similarly, since $|S_v|\ge (1-\gamma)\Delta$ and $|N(w)\cap N(v)|\ge (1-\gamma)\Delta$, we have $|N(w)\cap S_v|\ge (1-2\gamma)\Delta$. Thus, every node in $S_v$ has at least $(1-2\gamma)\Delta$ neighbors in $S_v$, all $2\gamma$-similar to it, i.e., it has at least $(1-2\gamma)\Delta$ $2\gamma$-friends, as required. \end{proof}
In the next lemmas we show that the computed components satisfy the properties of an ACD.
\begin{lemma}\label{lem:delta4} Let $\delta<1/20$ and $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. Let $v$ be a $\delta/4$-dense node in $V$ and $R_v$ the set of its $\delta/4$-friends. All vertices in $R_v\cup \{v\}$ output the same AC-ID, w.h.p.
\end{lemma}
\begin{proof}
Fix $v'\in R_v$ and introduce the following sets:
\begin{align*} R_{v'} & =R_v\cap N(v')\\ S_{v,v'} & =R_{v'}\cap S\subseteq N(v)\cap N(v')\cap S \end{align*}
By Obs.~\ref{obs:manyFriends}, all nodes in $R_v$ are $\delta/2$-dense, and as all nodes in $S_{v,v'}$ are contained in $R_v$, Lemma~\ref{lem:Sdense}~(1) implies that $S_{v,v'}\subseteq S_{dense}$. Thus, all nodes in $S_{v,v'}$ participate in the ID broadcasting in Step 6. Let $u$ be the node of minimum ID in $N_H(S_{v,v'})$, and let $X\subseteq S_{v,v'}$ be the set of nodes that have node $u$ as a neighbor in $H$. By the definition of $u$ (no node in $X$ has a neighbor with a smaller ID in $H$), all nodes in $X$ forward $u$'s ID to $v$ and $v'$ in Step 6. In the remainder of the proof, we show that $|X|\ge (1-11\delta)\sqrt{\Delta}$, w.h.p. The claim then follows by a union bound over all $v'$.
Note that $|R_v|\ge (1-\delta/4)\Delta$ and $|N(v')\cap N(v)|\ge (1-\delta/4)\Delta$ (since $v,v'$ are $\delta/4$-friends), hence Obs.~\ref{obs:deltaintersections} implies that $|R_{v'}|\ge (1-\delta/2)\Delta$. Let $w\in S_{v,v'}\subseteq R_v$ be a neighbor of $u$ in $H$, i.e., $\{u,w\}\in F$ ($w$ exists by the definition of $u$). Since $w$ is a $\delta/4$-friend of $v$, $|N(w)\cap N(v)|\geq (1-\delta/4)\Delta$ holds, and since $|R_{v'}|\ge (1-\delta/2)\Delta$, Obs.~\ref{obs:deltaintersections} implies that $|N(w)\cap R_{v'}|\ge (1-\delta)\Delta$. Therefore, by Lemma~\ref{lem:nbrhdpreserved} (using the bound on $\Delta$), $|N(w)\cap S_{v,v'}|\ge (1-\delta)^2\sqrt{\Delta}\ge (1-2\delta)\sqrt{\Delta}$. Also by Lemma~\ref{lem:nbrhdpreserved}, $|N(w)\cap S|\le (1+\delta)\sqrt{\Delta}$. Edge $\{u,w\}$ is contained in $F$ (by the definition of $u$ and $w$), and by Lemma~\ref{lem:friendsDetected}~(2), it is a $4\delta$-friend edge, w.h.p. Thus, $|N(w)\cap N(u)|\ge (1-4\delta)\Delta$.
By Lemma~\ref{lem:nbrhdpreserved}, $|N(w)\cap N(u)\cap S| \ge (1-\delta)(1-4\delta)\sqrt{\Delta}\ge (1-5\delta)\sqrt{\Delta}$. Applying Obs.~\ref{obs:deltaintersections} to sets $C=N(w)\cap S$, $A=N(u)$ and $B=S_{v,v'}$, we see that $|N(w)\cap N(u)\cap S_{v,v'}|\ge (1-8\delta)\sqrt{\Delta}$. From Lemma~\ref{lem:nbrhdpreserved}, $|N(u)\cap S|\le (1+\delta)\sqrt{\Delta}$. Let $T$ be the set of neighbors of $u$ along $F$-edges. Since $u$ is in $S_{dense}$, we have $|T|\ge (1-2\delta)\sqrt{\Delta}$. Applying Obs.~\ref{obs:deltaintersections} with $C=N(u)\cap S_{v,v'}$, $A=T$, and $B=S_{v,v'}$, we see that $u$ is adjacent to at least $(1-11\delta)\sqrt{\Delta}$ nodes in $S_{v,v'}$, along edges belonging to $F$, i.e., $|X|\geq (1-11\delta)\sqrt{\Delta}$.
\end{proof}
\begin{lemma}\label{lem:commonw} Let $\delta<1/13$ and $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. Let $C$ be a component with AC-ID $w$, and $v\in C$. W.h.p., $|N(v)\cap N(w)|\ge (1-13\delta)\Delta$. \end{lemma}
\begin{proof} Let $v$ be a node such that there is a subset $S_v\subseteq N(v)\cap S_{dense}$ of size at least $(1-11\delta)\sqrt{\Delta}$ that broadcast the same ID $w$ in Step 6. Thus, $w$ and $v$ have at least $(1-11\delta)\sqrt{\Delta}$ common neighbors in $S$, and hence $|N(v)\cap N(w)|\ge (1-13\delta)\Delta$, as otherwise by Lemma~\ref{lem:nbrhdpreserved} (using the bound on $\Delta$), $|N(v)\cap N(w)\cap S|<(1+\delta)|N(v)\cap N(w)|/\sqrt{\Delta}<(1-11\delta)\sqrt{\Delta}$ (using $\delta<1/13$), w.h.p.
\end{proof}
\begin{lemma}[ACD forming]\label{lem:ACDFormation} Let $\delta<1/29$ and $\Delta>(c/\delta^4)\log^2 n$, for a large enough constant $c>0$. The following hold w.h.p.:
\begin{enumerate}[label=(\arabic*)]
\item every $\delta/4$-dense node in $V$ adopts an AC-ID. For every component $C$ that contains a $\delta/4$-dense node,
\item $|C|\ge (1-\delta/4)\Delta$,
\item every node $v\in C$ has at least $(1-27\delta)\Delta$ neighbors in $C$,
\item $|C|\le (1+25\delta)\Delta$.
\end{enumerate}
\end{lemma}
\begin{proof}
\textbf{(1), (2):} These follow immediately from Lemma~\ref{lem:delta4}.
\textbf{(3):} Let $C$ be the component with AC-ID $w$, and $v\in C$ be an arbitrary node in it. By Lemma~\ref{lem:commonw}, $|N(v)\cap N(w)|\ge (1-13\delta)\Delta$ holds w.h.p. Let $u\in C$ be a $\delta/4$-dense node in $C$. We know from Lemma~\ref{lem:delta4} that the set $R_u$ of at least $(1-\delta/4)\Delta$ neighbors of $u$ is contained in $C$. Since $|N(u)\cap N(w)|\ge (1-13\delta)\Delta$ and $|N(v)\cap N(w)|\ge (1-13\delta)\Delta$, we have $|R_u\cap N(w)|\ge (1-14\delta)\Delta$, and hence $|N(v)\cap N(w)\cap R_u|\ge (1-27\delta)\Delta$. The latter implies the claim since $R_u\subseteq C$.
\textbf{(4):} Following the proof of (3), let $w$ be the AC-ID of a component $C$ whose size we want to bound. By the definition of the algorithm, every node in $C$ is within distance 2 from $w$. Let $A=N(w)$ and $B=(N(A)\setminus A)\cap C$. Note that $C\subseteq A\cup B$. By Lemma~\ref{lem:commonw}, for each node $v\in C$, $|N(v)\cap N(w)|>(1-13\delta)\Delta$, w.h.p. Thus, for each node $u\in A$, $|N(u)\cap A|\ge (1-13\delta)\Delta$ and hence $|N(u)\cap B|<13\delta\Delta$. On the other hand, for each node $v\in B$, $|N(v)\cap A|\ge (1-13\delta)\Delta$. The former implies that the number of edges between $A$ and $B$ is at most $13\delta\Delta|A|$, while the latter implies that it is at least $(1-13\delta)\Delta|B|$. Hence, $|B|<\frac{13\delta}{1-13\delta} |A|<25\delta|A|$ which implies the claim.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:ACD}]
The output is well-defined: Every node $v$ adopts at most one AC-ID, since by Lemma~\ref{lem:nbrhdpreserved}, $|N(v)\cap S|\le (1+\delta)\sqrt{\Delta}$, and $(1-11\delta)\sqrt{\Delta}>(1+\delta)\sqrt{\Delta}/2$, for $\delta<1/23$. The main properties follow immediately from the algorithm definition and Lemma~\ref{lem:ACDFormation}, recalling that $\delta=\frac{\varepsilon}{27}$. Steps 1 to 6 clearly take $O(1)$ rounds.
Step 7 takes $O(1)$ rounds, since by the definition of the algorithm, every node in a component with AC-ID $w$ is within distance 2 from $w$; hence, $w$ can aggregate the size of $C$ and the degrees of nodes within $C$ in $O(1)$ rounds.
\end{proof}
\begin{remark}
The constants in Lemma~\ref{lem:acdproperties} can be improved by deriving the properties claimed there directly from our algorithm. However, that would make the exposition more complicated.
\end{remark}
\subsection{Clique Overlay for Almost-Cliques}\label{sec:congestedcliqueoverlay}
Consider a $\Delta$-clique $K_{\Delta}$ with a set of \emph{routing requests}, where each node has $O(\Delta)$ messages to send to or receive from other nodes in $K_\Delta$ (a node can have several messages addressed to the same destination). In his celebrated work, Lenzen~\cite{Lenzen13} designed an algorithm that allows an execution of any given set of routing requests in $O(1)$ {$\mathsf{CONGEST}$\xspace} rounds. In order to achieve a similar result for almost-cliques, we simulate a clique over a given almost-clique. Recall that an almost-clique has diameter 2 (\Cref{lem:acdproperties}). A \emph{clique overlay} of an almost-clique $C$ is a collection $O$ of length-2 paths, containing a path between any pair of non-adjacent nodes in $C$. The \emph{congestion} of an overlay $O$ is the maximum, over the edges $e$ of $G$, of the number of occurrences of $e$ in $O$. A distributed algorithm computes an overlay $O$ of an almost-clique $C$ if for every non-edge $\{u,v\}$ in $C$, nodes $u,v$ know the node $w$ that forwards messages from $v$ to $u$ and vice versa in $O$. Given a clique overlay of congestion 2 for an almost-clique $C$, we can combine it with Lenzen's algorithm to execute in $O(1)$ rounds any given set of routing requests where each node sends and receives $O(\Delta)$ messages.
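To make the overlay and congestion definitions concrete, here is a minimal centralized Python sketch (ours, for illustration only; the function name build_overlay and the greedy relay choice are assumptions, not the distributed procedure of the next theorem). It assigns a relay node to every non-edge of an almost-clique and reports the congestion of the resulting overlay.

```python
from itertools import combinations

def build_overlay(C, adj):
    """Assign to every non-edge {u, v} of the almost-clique C a relay w
    (a common neighbor of u and v inside C), greedily preferring relays
    whose two edges are least loaded so far, and return the overlay
    together with its congestion (the maximum number of 2-paths that
    cross any single edge)."""
    overlay = {}   # frozenset({u, v}) -> relay node w
    load = {}      # frozenset edge -> number of 2-paths using it
    for u, v in combinations(sorted(C), 2):
        if v in adj[u]:
            continue  # adjacent pairs need no relay
        relays = adj[u] & adj[v] & C  # candidate relays for the 2-path
        w = min(relays, key=lambda r: load.get(frozenset({u, r}), 0)
                                    + load.get(frozenset({v, r}), 0))
        overlay[frozenset({u, v})] = w
        for e in (frozenset({u, w}), frozenset({v, w})):
            load[e] = load.get(e, 0) + 1
    return overlay, (max(load.values()) if load else 0)
```

On a clique minus a perfect matching, for instance, every non-edge finds a relay and no edge is reused more than twice, matching the congestion-2 guarantee discussed above.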
We give a randomized algorithm, called {\textsc{ComputeCliqueOverlay}}, that, for a given almost-clique $C$, computes a clique overlay with congestion 2 in $O(\log\log n)$ rounds. The main idea is to model the construction of an overlay as a list coloring problem, where missing edges of $C$ are the vertices of a graph, while the nodes of $C$ are the colors: a vertex $uv$ picking color $w$ means $w$ is used in the overlay to forward messages from $u$ to $v$ and back. As we note below, the runtime can be improved by using slightly more involved algorithms.
\begin{theorem}\label{thm:congestedclique} Assume that $\varepsilon\le 1/15$ and $\varepsilon\Delta>c\log^2 n$, for a large enough constant $c>0$. There is an $O(\log\log n)$-round $\mathsf{CONGEST}$\xspace algorithm that, for any almost-clique $C$ with a leader node $w_C$, computes a clique overlay $O$ of congestion 2. \end{theorem}
\begin{proof}
We assume the nodes in $C$ know the IDs of all non-neighbors in $C$. This can be achieved via standard aggregation methods, e.g., by forming a breadth-first-search tree rooted at the leader $w_C$, which aggregates subtree sizes and enumerates the nodes from $1$ to $|C|$. As a result, each node knows its new ID, as well as $|C|$, and in one additional round of communication, also knows the IDs of its neighbors; hence, it also knows the IDs of its non-neighbors.
We reduce the construction of a congestion-2 overlay to a coloring problem with high bandwidth. We form a graph $H$ with a vertex $uv$ for each non-edge $\{u,v\}$ in $G[C]$. Vertices $uv$, $wt$ of $H$ are adjacent if $\{u,v\} \cap \{w,t\} \ne \emptyset$. We refer here to vertices of $G$ as \emph{nodes} to distinguish them from \emph{vertices} of $H$. The colorspace of our coloring problem is the set $[|C|]$ of recomputed node IDs in $C$. The initial palette $\Psi(uv)$ of a vertex $uv$ is the set $N(u)\cap N(v) \cap C$ of common neighbors of $u$ and $v$ in $C$.
Observe that solving the constructed list coloring instance for $H$ yields a congestion-2 clique overlay $O$ for $C$. Namely, each non-edge (vertex) $uv$ is assigned a 2-path via some node $w$ (color), and each edge $\{u,w\}$ of $C$ is only used in 2-paths of color $u$ or $w$, and so it appears at most twice in $O$. Further note that every vertex in $H$ has degree at most $2\varepsilon\Delta$, while the palette size is at least $(1-3\varepsilon)\Delta$ (using Obs.~\ref{obs:deltaintersections}). As a result, assuming that $\varepsilon\le 1/15$, each vertex has slack three times larger than its degree, that is, we have a \emph{sparse node coloring} instance, with a caveat: the computation needs to be done in $G[C]$.
We have each vertex $uv$ of $H$ \emph{handled} by the incident node $u$ with the higher ID. The handler $u$ has imperfect information about the palette: during the algorithm, it maintains an \emph{apparent palette} $\Psi'(uv)=\{w\in N(u)\cap C : uw\notin O\}$, which contains some unusable colors, as $u$ does not know which of its neighbors $w\in N(u)$ are adjacent to $v$ ($\Psi'(uv)\setminus\Psi(uv)$ consists of those neighbors that are not adjacent to $v$). We let the handler nodes simulate {\RCT} in $H$, with the apparent palettes of vertices, but only retain colors that belong to the original palettes.
The execution is done in a sequence of pairs of rounds. In the first round of each pair, node $u$ picks a uniformly random color $c_{uv}$ from the palette $\Psi'(uv)$ of each vertex $uv$ it handles. If the same color is sampled for two or more vertices $uv,uv',\dots$ then they are not colored in this round. For each remaining $uv$, $u$ sends the ID of $v$ to $c_{uv}$ (note that $u$ sends a single message to $c_{uv}$). In the second round of the pair, consider a node $w$ that receives node IDs $v_1,v_2,\dots,v_t$ from neighbors $u_1,u_2,\dots,u_t$.
We assume that for each $i$, $w$ is adjacent to both $u_i$ and $v_i$; otherwise $w$ is not a usable color for $u_iv_i$, and ignores that pair. If $t=1$, $w$ broadcasts $u_1,v_1$ to its neighbors, indicating that $u_1v_1$ is colored with $w$, otherwise $w$ broadcasts a message indicating that color $w$ was not taken in the given round. All handler nodes update their palettes accordingly.
It is easy to see that the above correctly simulates {\RCT} in $H$ with apparent palettes. A vertex $uv$ is colored in a given round if its random color lands in $\Psi(uv)$ and is different from the colors picked by its neighbors; hence, if $d_0,d$ denote the number of uncolored neighbors of $uv$ at the beginning of the algorithm and in the current round, respectively, then $uv$ is colored w.p.
\[ \frac{|\Psi'(uv)|-|\Psi'(uv)\setminus\Psi(uv)|-d}{|\Psi'(uv)|}\ge 1-\frac{d_0+d}{|\Psi'(uv)|}\ge 1/2\ , \]
where the last inequality follows from the observation that each vertex has slack $3d_0$ (and slack never decreases). Moreover, the probability bound clearly holds even when conditioned on adversarial candidate color choices of neighbors. Thus, since $\varepsilon\Delta=\Omega(\log^2 n)$, with a large enough coefficient, we can apply Chernoff bound~(\ref{eq:chernoffmore}) to show that while there are at least $\frac{\varepsilon\Delta}{3\log n}$ uncolored vertices, the number of uncolored vertices decreases by a constant factor in each iteration, w.h.p.; therefore, in $O(\log\log n)$ rounds there are at most $\frac{\varepsilon\Delta}{3\log n}$ vertices remaining to be colored, w.h.p. To finish coloring, in one more round, each node $u$ tries $3\log n$ colors in parallel, for each vertex $uv$ it handles.
The probability of success of each trial is at least $1/2$, by the same analysis as above, since we can view parallel trials by the same vertex as trials by $3\log n$ different vertices; even so, each vertex will have degree bounded by $2\varepsilon\Delta$, and slack at least three times that, as before. Thus, each parallel trial for a given vertex succeeds w.p. at least $1/2$, irrespective of the outcome of other trials (including the trials for other vertices). Since there are at least $3\log n$ trials per vertex, Chernoff bound~(\ref{eq:chernoffless}) implies that each vertex is successfully colored, w.h.p.
\end{proof}
\begin{remark}
In the proof above, the ability of a node $u$ to locally resolve conflicts between the candidate color picks by the vertices $uv$ it handles makes the coloring problem more like a $\mathsf{LOCAL}$\xspace coloring, since every vertex can now try many colors in parallel. This suggests that the multi-trial technique of~\cite{SW10} can be applied to get an $O(\log^* n)$-time algorithm. A quicker improvement can be obtained by using a variant of \Cref{lem:basiconeshot}, which, given that all nodes have equal initial slack $\Omega(\Delta)$, reduces their degrees to $O(\Delta/\log n)$ in $O(\log\log\log n)$ rounds. Note that we need to use parallel trials here too, in order to suppress the failure probability caused by the difference between the apparent and real palettes.
\end{remark}
\section{Coloring Small Degree Graphs}\label{sec:smalldegree}
In this section, we show how to $(deg+1)$-list color a graph in $O(\log\Delta)+\poly\log\log(n)$ rounds. This result is efficient in the small degree case, i.e., when the maximum degree is bounded by $\poly\log(n)$, with the runtime reducing to $O(\log^5\log n)$ rounds. We call the obtained algorithm {\textsc{ColorSmallDegreeNodes}}.
\begin{theorem}\label{thm:smallDegree} Let $H$ be a subgraph of $G$ with maximum degree $\Delta_H$.
Assume that each node $v$ of degree $d_v$ in $H$ has a palette $\Psi(v)\subseteq [U]$ of size $|\Psi(v)|\ge d_v+1$ from a colorspace of size $U=\poly(n)$. There is an $O(\log\Delta_H+\log^5\log n)$-round randomized algorithm that w.h.p. colors $H$ by assigning each node $v$ a color from its palette. \end{theorem}
Similar results are known in the $\mathsf{LOCAL}$\xspace model \cite{BEPSv3,RG19,GGR20} and the $\mathsf{CONGEST}$\xspace model \cite{Ghaffari2019,GGR20}, where the latter has a slightly worse runtime. We provide a (mostly) self-contained proof of the result, meanwhile fixing an error in the state-of-the-art $\mathsf{CONGEST}$\xspace algorithm (see Remark~\ref{rem:smallError}) and slightly improving the runtime, thus matching the runtime of the state-of-the-art in the $\mathsf{LOCAL}$\xspace model.
Our algorithm consists of four steps: \emph{Shattering}, \emph{Network Decomposition}, \emph{Colorspace Reduction} and \emph{Cluster Coloring}. These four steps have been used in a similar (but not identical) manner in \cite{Ghaffari2019}. Colorspace reduction is an adaptation of a result in \cite{HKMN20}, and the shattering part stems from \cite{BEPSv3}.
\paragraph{High Level Overview} The \emph{shattering} part consists of $O(\log \Delta_H)$ iterations of {\RCT}. This results in a partial coloring of the graph, after which the uncolored parts of $H$ form connected components of size at most $N=\poly\log(n)$, w.h.p., as shown in \cite{BEPSv3}. In the remainder of the algorithm, such components are processed in parallel and independently from each other. In the \emph{network decomposition} part, each component is partitioned into $\Gamma_1,\ldots,\Gamma_c$ collections of \emph{clusters}, with $c=O(\log\log n)$.
The important properties of this decomposition are: (i) the clusters in each $\Gamma_i$ are independent, i.e., there is no edge between the nodes of any two distinct $Q, Q'\in\Gamma_i$, (ii) each cluster $Q\in \Gamma_i$ \emph{essentially} has a $\poly\log\log(n)$ diameter. By Property (i), all clusters in $\Gamma_i$ can be colored in parallel without interference, while Property (ii) is used to do that efficiently. The algorithm that we use to color the clusters has a runtime depending on the size of the colorspace. In order to make the coloring procedure efficient, we perform \emph{colorspace reduction}, where we deterministically compute a function $f_Q$ that maps the color palettes of all nodes in $Q$ to a much smaller colorspace of size $\poly(N)$, while preserving the palette sizes. In the new colorspace, each color can be represented with $O(\log\log n)$ bits. This step is run on all clusters in all $\Gamma_i$ in parallel.
In the \emph{cluster coloring} part, we iterate through the groups $\Gamma_1,\ldots,\Gamma_c$. When processing $\Gamma_i$, all clusters $Q\in \Gamma_i$ are colored in parallel. Note that we need to solve a $(deg+1)$-list coloring problem on $Q$: all colors of previously colored neighbors in a cluster in $\Gamma_1,\ldots,\Gamma_{i-1}$ are removed from the palettes of nodes in $Q$. In the last step, we solve the list coloring problem on $Q$ by simulating $O(\log n)$ parallel and independent instances of the simplest coloring algorithm: iterate {\RCT} for $O(\log N)$ rounds; note that the size of $Q$ is upper bounded by $N=\poly\log(n)$. The failure probability of each instance is $1/\poly(N)$, and w.h.p.\ (in $n$), at least one of these instances is successful~\cite[Sec. 4]{Ghaffari2019}. After executing all instances for $O(\log N)$ iterations, the nodes in $Q$ agree on a successful instance and get permanently colored with the colors chosen in that instance.
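The cluster-coloring step above can be sketched in centralized Python as follows; the helper names rct_instance and color_cluster are ours, and the fixed instance and round counts stand in for the $O(\log n)$ parallel instances and $O(\log N)$ iterations of the actual algorithm.

```python
import random

def rct_instance(nodes, adj, lists, rounds, rng):
    """One instance of iterated random color trial: every uncolored node
    tries a uniformly random color from its remaining list and keeps it
    if no neighbor tried the same color in this round (colors taken by
    already-colored neighbors were removed from the list).  Returns a
    complete proper coloring, or None if some node stays uncolored."""
    palette = {v: set(lists[v]) for v in nodes}
    color = {}
    for _ in range(rounds):
        trials = {v: rng.choice(sorted(palette[v]))
                  for v in nodes if v not in color}
        for v, c in trials.items():
            if all(trials.get(u) != c for u in adj[v]):
                color[v] = c
                for u in adj[v]:
                    palette[u].discard(c)  # neighbors drop the used color
        if len(color) == len(nodes):
            return color
    return None

def color_cluster(nodes, adj, lists, instances=20, rounds=40, seed=1):
    """Run independent instances (the algorithm above runs O(log n) of
    them in parallel for O(log N) rounds) and keep the first success."""
    for i in range(instances):
        coloring = rct_instance(nodes, adj, lists, rounds,
                                random.Random(seed + i))
        if coloring is not None:
            return coloring
    return None
```

With $(deg+1)$-size lists a node's remaining palette never empties, so each instance succeeds with constant probability per node per round, mirroring the $1/\poly(N)$ failure bound cited above.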
\smallskip The color space reduction is necessary for sending the random color trials of several parallel instances in one $O(\log n)$-bit $\mathsf{CONGEST}$\xspace message. Alternatively, one could replace the cluster coloring procedure with the deterministic $(deg+1)$-list coloring algorithm from \cite{BKM19}. However, its runtime depends logarithmically on the color space, and one would need the same color space reduction to obtain a $\poly\log\log(n)$ round algorithm. The runtime of our approach is dominated by the time to compute a network decomposition. \paragraph{Shattering} Barenboim, Elkin, Pettie, and Su \cite{BEPSv3} showed that $O(\log \Delta_H)$ rounds are sufficient to reduce the $(deg+1)$-list coloring problem on an $n$-node graph $H$ to several independent $(deg+1)$-list coloring instances on subgraphs with $N=\polylog(n)$ nodes. In the remaining parts, we focus on handling one such subgraph. Let us detail the algorithm of \cite{BEPSv3}, to demonstrate its simplicity (the analysis is nontrivial though). By \cite[Lemma 5.3]{BEPSv3}, after $O(\log\Delta_H)$ iterations of {\RCT} in $H$, the maximum size of a connected component of uncolored nodes in $H$ is $O(\Delta_H^2\log n)$, w.h.p.\ One can now partition the uncolored vertices into two groups (depending on the uncolored degree) that need to be list-colored one after the other. In one of the groups, $O(\log\Delta_H)$ more iterations of {\RCT} suffice to reduce the maximum size of an uncolored connected component to $N=\poly\log(n)$, while in the other group, uncolored components are bounded in size by $N$, by design. The analysis is independent of the colorspace, and this shattering procedure can be executed in $\mathsf{CONGEST}$\xspace. The two groups are processed sequentially, but inside each group, all uncolored components are colored in parallel. Notice that there are no conflicts between them, as all communication happens within individual components. 
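A small sketch of the post-shattering bookkeeping, under the assumption that the partial coloring is given explicitly (the helper name uncolored_components is ours): the components of uncolored nodes are extracted by breadth-first search and can then be processed independently, as described above.

```python
from collections import deque

def uncolored_components(nodes, adj, colored):
    """Return the connected components of the subgraph induced by the
    uncolored nodes; after shattering, each such component has size
    poly(log n) w.h.p. and is colored independently of the others."""
    remaining = set(nodes) - set(colored)
    components = []
    while remaining:
        root = remaining.pop()
        comp, queue = {root}, deque([root])
        while queue:  # BFS restricted to uncolored nodes
            v = queue.popleft()
            for u in adj[v]:
                if u in remaining:
                    remaining.remove(u)
                    comp.add(u)
                    queue.append(u)
        components.append(comp)
    return components
```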
Henceforth, we fix such a component $\ensuremath{L}$ and describe the coloring algorithm in $\ensuremath{L}$. By omitting excess colors, we can also assume that the list size of each vertex $v$ is upper bounded by $d_{\ensuremath{L}}(v)+1\leq |\ensuremath{L}|\leq N$.
\paragraph{Network decomposition} To color a component $\ensuremath{L}$, we first compute a \emph{network decomposition} of $\ensuremath{L}$, using a deterministic algorithm from \cite{GGR20}. To highlight that all (uncolored) components of $G$ are handled in parallel and independently, we formulate all results in this section from the viewpoint of a graph $\ensuremath{L}$.
\begin{definition} A \emph{network decomposition} of a graph $\ensuremath{L}$, with weak diameter $d$ and $c$ colors, consists of a partition of $\ensuremath{L}$ into vertex-induced subgraphs or \emph{clusters} $S_1,\dots,S_t$ and a coloring of the clusters with $c$ colors, such that: 1. For any $i\neq j$, if $S_i$ and $S_j$ have the same color, then there is no edge with one endpoint in $S_i$ and the other in $S_j$, 2. for each $i$ and $u,v\in S_i$, the distance from $u$ to $v$ in $\ensuremath{L}$ is bounded by $d$. \end{definition}
In a network decomposition with $c$ colors, let $\Gamma_1,\ldots,\Gamma_c$ denote the collections of clusters colored with the colors $1$ to $c$, respectively.
\begin{theorem}\cite{GGR20}\label{thm:netdec} Let $\ensuremath{L}$ be a $k$-node graph where each node has a $b=\Omega(\log k)$-bit ID. There is a deterministic algorithm that computes a network decomposition of $\ensuremath{L}$ with $O(\log k)$ colors and weak diameter $O(\log^2 k)$, in $O(\log^5 k + (\log^* b) \log^4 k)$ rounds of the $\mathsf{CONGEST}$\xspace model with $b$-bit messages. Moreover, for each cluster $Q$, there is a Steiner tree $T_Q$ with radius $O(\log^2 k)$ in $\ensuremath{L}$, for which the set of terminal nodes is $Q$. Each vertex of $\ensuremath{L}$ is in $O(\log k)$ Steiner trees of each color.
\end{theorem}
Aggregation procedures can be efficiently pipelined in the computed network decomposition.
\begin{lemma}[(simplified), \cite{GGR20}] \label{lem:pipelining} Let $\ensuremath{L}$ be a communication graph on $n$ vertices. Suppose that each vertex of $\ensuremath{L}$ is part of some cluster $Q$ such that each such cluster has a rooted Steiner tree $T_Q$ of diameter at most $R$ and each node of $\ensuremath{L}$ is contained in at most $P$ such trees. Then, in $O(P + R)$ rounds of the $\mathsf{CONGEST}$\xspace model with $b$-bit messages for $b \geq P$, we can perform the following operations for all clusters in parallel: broadcast, convergecast, minimum, summation and bit-wise maximum.
\end{lemma}
For the precise definition of broadcast, convergecast, minimum and summation, we refer to \cite{GGR20}. In the \emph{bit-wise maximum} of a cluster $Q$, each vertex $v\in Q$ has a bit string of $\ell=O(b)$ bits $s_1(v),\ldots,s_{\ell}(v)$, and we say $Q$ computes the bit-wise maximum if a designated leader node in $Q$ knows the bit string $s_1,\ldots,s_{\ell}$, where $s_i=\max_{v\in Q}s_i(v)$. While the computation of the bit-wise maximum is not explicitly mentioned in \cite{GGR20}, it follows from their claim that a function $\circ$ can be computed in the claimed runtime if it is associative and $p_i(x_1\circ \dots \circ x_k)$ can be computed from $p_i(x_1), \ldots, p_i(x_k)$, where $p_i$ denotes the $i$ leftmost (or rightmost) bits of a string. Both properties clearly hold for the bit-wise maximum.
We apply Theorem~\ref{thm:netdec} to each component $\ensuremath{L}$. Since $\ensuremath{L}$ has $N=\poly\log(n)$ nodes and $O(\log n)$-bit IDs, we obtain a network decomposition with $O(\log\log n)$ colors and weak diameter $O(\log^2\log n)$, in $O(\log^5\log n)$ time, and each vertex is in at most $O(\log\log n)$ Steiner trees of each color.
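As a sanity check of the bit-wise maximum primitive, the following centralized Python sketch (the function name is ours) aggregates bit strings bottom-up along a rooted (Steiner) tree; for 0/1 entries the bit-wise maximum coincides with the bit-wise OR.

```python
def bitwise_max_convergecast(children, values, root):
    """Compute the bit-wise maximum of the bit strings held by the tree
    nodes (bit strings encoded as ints, so for 0/1 bits the bit-wise
    maximum is the bit-wise OR), aggregating bottom-up toward the root
    as a convergecast along the tree would."""
    result = values[root]
    for child in children.get(root, ()):
        result |= bitwise_max_convergecast(children, values, child)
    return result
```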
\paragraph{Colorspace Reduction} We map the colorspace $[U]$ to a smaller colorspace that is comparable to the size $N$ of an uncolored component. This is described in the following lemma, which is a reformulation of a similar lemma from~\cite{HKMN20}.
\begin{lemma}\label{lem:ColorSpaceReduction} Let $N>\log n$ be an upper bound on the number of vertices and the diameter of some subgraph $H$ of a connected communication network $G$. Let the list of each node $v$ have size $|\Psi(v)|\le N$, and the colorspace have size $U=\poly(n)$. There is a constant $c_0>0$ such that all vertices of $H$ learn the same mapping $f:[U]\to [N^{c_0}]$ s.t. for each vertex $v\in H$, $|f(\Psi(v))|=|\Psi(v)|$, by executing $O(\log N)$ iterative summation and broadcast primitives in $H$. \end{lemma}
\begin{proof}
Let the constant $c_0>0$ be such that $(N^{c_0}/2)^{N^{c_0-5}/2}>U$. Such a constant exists, since $U=\poly(n)$ and $N>\log n$. Let $N^{c_0}/2< p \le N^{c_0}$ be a prime, which exists due to Bertrand's postulate. The reduction is given by a random hash function, which is then derandomized using the method of conditional expectations. We start by describing this in the centralized setting, and then comment on how to do it distributively.
Let $d=\lceil p/N^5\rceil$. To each color $\alpha\in [U]$, we assign a unique polynomial $\psi_\alpha\in \mathbb{F}_p[x]$ of degree at most $d$. This can be done since the number of polynomials of degree at most $d$ over $\mathbb{F}_p$ is $p^{d+1}>(N^{c_0}/2)^{N^{c_0-5}/2}>U$. To obtain the color reduction $f:[U]\to [N^{c_0}]$ we want to choose a ``good'' function from the set $\{f_g : g\in \mathbb{F}_p\}$, where
\[f_g : [U] \rightarrow \mathbb{F}_p\text{, and } f_g(\alpha)=\psi_\alpha(g).\]
In other words, we fix a distinct polynomial $\psi_{\alpha}$ for each color $\alpha\in [U]$, and for each $g\in \mathbb{F}_p$ we obtain a ``candidate color reduction'' $f_g$ by mapping color $\alpha\in [U]$ to the evaluation of its polynomial $\psi_{\alpha}$ at $g$.
The function $f_g$ is \emph{good} if it preserves the list sizes of all nodes $u$ of $H$: For each node $u$ and $g\in \mathbb{F}_p$, let $X_u(g)$ be 0 if $|f_g(\Psi(u))|=|\Psi(u)|$, and 1, otherwise. A function $f_g$ is \emph{good} if $\sum_{u\in H} X_u(g)=0$; the latter implies that for each node $u$, $|f_g(\Psi(u))|=|\Psi(u)|$, as required. First, let us show that a random function $f_g$, corresponding to a uniformly random choice of $g$, is good with significant probability.
\begin{claim} \label{lem:randReduction} If $g\in \mathbb{F}_p$ is chosen uniformly at random, then $\mathbb{E}\big[\sum_{u\in H} X_u\big]\le N^{-2}$. \end{claim}
\begin{proof} Consider a node $u$. For two distinct colors $\alpha,\beta\in \Psi(u)$, the polynomials $\psi_\alpha$ and $\psi_\beta$ intersect in at most $d$ points; hence, if $g$ is sampled uniformly, the probability that $f_g(\alpha)=\psi_\alpha(g)=\psi_\beta(g)=f_g(\beta)$ is at most $d/p\le 2N^{-5}$. Thus, the probability that any two colors in $\Psi(u)$ map to the same element of $\mathbb{F}_p$ is at most $\binom{|\Psi(u)|}{2}\cdot d/p<(N^2/2)\cdot 2N^{-5}=N^{-3}$. The claim follows by linearity of expectation, since the sum is over at most $N$ elements, each having expectation at most $N^{-3}$.
\renewcommand{\qed}{\ensuremath{\hfill\blacksquare}}
\end{proof}
\renewcommand{\qed}{\hfill \ensuremath{\Box}}
We assume that the elements of $\mathbb{F}_p$ are numbered from 0 to $p-1$, and we choose the element $g$ by choosing its bits independently: we flip $\ell=\lceil\log_2 p\rceil$ unbiased coins, $b_1,\dots,b_\ell$, and let $g$ be the number with the binary representation $b_1\dots b_\ell$. Let $Y$ be a binary random variable that is 1 if $g\ge p$, and 0, otherwise. Conditioned on $Y=0$, $g$ is uniformly distributed in $\mathbb{F}_p$. Let us re-define $X_u(g)$ to be 0 also when $g\ge p$. Note that we still have $\mathbb{E}[\sum_{u\in H}X_u]= \mathbb{E}[\sum_{u\in H}X_u \mid Y=0]\Pr[Y=0]\le 1/N^2$, since we only make $X_u$ smaller in some cases.
We also have $\mathbb{E}[Y]\le 1/2$; hence, assuming $N\ge 3$, \begin{equation}\label{eq:randomclred} \mathbb{E}\left [Y+\sum_{u\in H}X_u(g)\right]\le 1/2+ 1/N^2\le 2/3\ . \end{equation} In order to find a good function $f_g$, it suffices to find $b_1\dots b_\ell$ such that $Y+\sum_{u\in H}X_u<1$: since $Y+\sum_{u\in H}X_u$ is a non-negative integer, this forces $Y=0$ and $X_u=0$ for all $u$. In order to derandomize (\ref{eq:randomclred}), we fix the bits $b_1,\dots,b_\ell$ inductively, with (\ref{eq:randomclred}) as the base case. Given $b_1,\dots,b_i\in \{0,1\}$, for $i\ge 0$, such that $\mathbb{E}[Y+\sum_{u\in H}X_u(g)\mid b_1,\dots,b_i]\le 2/3$, we select $b_{i+1}\in \{0,1\}$ such that $\mathbb{E}[Y+\sum_{u\in H}X_u(g)\mid b_1,\dots,b_i,b_{i+1}]\le 2/3$. The existence of such a value follows from the inductive hypothesis, using the law of total expectation (conditioning on $b_{i+1}$). After fixing all bits, we have a deterministic value $g$ for which $Y+\sum_{u\in H}X_u(g)\le 2/3$. It remains to see how to compute a good function distributively. We do this in $\ell$ phases, where we fix $b_i$ in phase $i$, assuming the bits $b_1,\dots,b_{i-1}$ have been fixed, and each node knows those bits. To this end, each node $u$ computes $\mathbb{E}[X_u\mid b_1,\dots,b_{i-1},0]$ and $\mathbb{E}[X_u\mid b_1,\dots,b_{i-1},1]$, with precision $N^{-5}$. Note that such values fit in a single message of size $O(\log n)$. These values are aggregated in a leader node, i.e., a leader node learns values $v_b=\mathbb{E}\left[\sum_{u\in H}X_u\mid b_1,\dots,b_{i-1},b\right]\pm N^{-4}$, for $b=0,1$, computes $v_b'=\mathbb{E}\left[Y+\sum_{u\in H}X_u\mid b_1,\dots,b_{i-1},b\right]\pm N^{-4}$, and chooses $b_i=b$ such that $v_b'\le v'_{1-b}$. The overall error accumulated throughout the $\ell$ phases is at most $\ell N^{-4}\le N^{-3}$. This implies that when all $\ell$ bits are fixed, we have $Y+\sum_{u\in H}X_u<2/3+N^{-3}<1$; hence, the computed function is good. It takes $O(\ell)=O(\log N)$ summation and broadcast operations to compute the color reduction.
\end{proof} To obtain a color space reduction, i.e., a function $f_Q$, for each cluster $Q$ of the network decomposition, that maintains the list sizes, we apply the algorithm from Lemma~\ref{lem:ColorSpaceReduction} to all clusters in all $\Gamma_1,\ldots,\Gamma_c$ in parallel. Since we run it on $O(\log\log n)$ color classes in parallel, and a node can be in $O(\log\log n)$ Steiner trees per color class, a node can be in $P=O(\log^2\log n)$ Steiner trees of all clusters. Using Lemma~\ref{lem:pipelining}, the $O(\log N)=O(\log\log n)$ iterative summation and broadcast primitives of Lemma~\ref{lem:ColorSpaceReduction} can be implemented in the Steiner trees of diameter at most $D=O(\log^2\log n)$ in $O(\log N\cdot (D+P))=O(\log^3\log n)$ rounds. \paragraph{Cluster Coloring (by \cite{Ghaffari2019})} \begin{algorithm}[H] \caption{Cluster Coloring} \label{alg:smalldeg} \begin{algorithmic}[1] \STATE{Given}: network decomposition $\Gamma_1,\ldots, \Gamma_c$, color space reduction $f_Q$, for each cluster $Q$ \FOR{$i=1,\dots,c$} \FORALL{clusters $Q\in \Gamma_i$ in parallel} \STATE{\textbf{Update palettes:} Remove colors used by permanently colored neighbors (in $G$)} \STATE{\textbf{Map lists:}} Use $f_Q:[U]\rightarrow [N^{c_0}]$ to map remaining lists to a smaller color space \FOR{$O(\log N)=O(\log\log n)$ iterations and $O(\log n)$ parallel instances} \STATE \textbf{Simulate} {\RCT} \ENDFOR \STATE \textbf{Agree on a successful instance} and permanently color each node in $Q$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} While the update of the lists takes place in the original color space, the simultaneous instances of {\RCT} work in a smaller color space of size $\poly(N)$, and each of its colors can be represented with $O(\log N)=O(\log \log n)$ bits. In fact, the color trials of one round of $O(\log n)$ instances use $O(\log n\cdot \log\log n)$ bits and can be done in $O(\log \log n)$ rounds. 
Thus, the $O(\log N)=O(\log\log n)$ rounds in total can be simulated in $O(\log^2 \log n)$ rounds. Each of these instances is successful with probability $1/\poly(N)$ and w.h.p.\ (in $n$), at least one of them is successful~\cite[Sec. 4]{Ghaffari2019}. A node can locally determine which instances are successful, i.e., in which instances it gets colored. For node $v$, let $s_1(v)\cdots s_{\ell}(v)$ be the indicator string in which $s_i(v)$ indicates whether instance $i$ was successful for node $v$. With a bit-wise minimum (a logical AND over the nodes' indicator strings), a leader can determine an instance that is, w.h.p., successful for all nodes in the cluster, and can broadcast it to all nodes. Updating lists and applying the color space reduction can be done in one round, simulating the instances takes $O(\log^2\log n)$ rounds, and agreeing on a successful instance takes $O(\log^2\log n)$ rounds, due to the diameter of the Steiner trees and Lemma~\ref{lem:pipelining}. Thus, the total runtime by iterating over all color classes of the network decomposition is $O(\log^3\log n)$. \begin{proof}[Proof of Theorem~\ref{thm:smallDegree}] By design, after executing all four steps, all nodes of the graph are properly colored. The shattering part takes $O(\log \Delta_H)=O(\log \log n)$ rounds. Computing the network decomposition takes $O(\log^5\log n)$ rounds. The color space reduction runs on all clusters in parallel and takes $O(\log^3\log n)$ rounds. The cluster coloring part takes $O(\log^2\log n)$ rounds per color class of the network decomposition and $O(\log^3\log n)$ rounds in total. \end{proof} \begin{remark} A slightly worse runtime of $O(\log^6\log n)$ rounds can be obtained by using the algorithm of \cite{BKM19} to list color the clusters. Without the pipelining (\Cref{lem:pipelining}) to speed up aggregation within clusters, the runtime increases further but remains $\poly\log\log(n)$.
\end{remark} \begin{remark}\label{rem:smallError} The high level structure of the algorithm for \Cref{thm:smallDegree} is similar to the one in \cite{Ghaffari2019}, using the updated intermediate procedures to compute a network decomposition from \cite{GGR20} and the color reduction from \cite{HKMN20}. Unfortunately, the $O(\log \Delta)+2^{O(\log\log n)}$ algorithm in \cite{Ghaffari2019} has a mistake in the design of its color space reduction. To reduce the color space, \cite{Ghaffari2019} maps the original color lists to a smaller color space $[p]$, where $p$ is a fixed (and deterministically chosen) prime in $[N^4,2N^4]$. Color $x$ is mapped to $h_{a,b}(x)=a\cdot x + b\bmod p$ with randomly chosen $a$ and $b$. The crucial but incorrect claim in the paper states that the probability for two colors $x$ and $x'$ to be hashed to the same color over the randomness of $a$ and $b$, i.e., the probability of the event that $h_{a,b}(x)=h_{a,b}(x')$ holds, is at most $1/N$. But colors $x$ and $x'$ are mapped to the same color whenever $x=x'\bmod p$, regardless of the choice of $a$ and $b$. In contrast (besides other changes in the design), in the core part of \Cref{lem:ColorSpaceReduction}, we fix one distinct polynomial $h_x(y)$ per color $x$ and evaluate it at a randomly chosen $y\in \mathbb{F}_p$ to obtain a color in the smaller color space $[p]$. The $O(\log\Delta +\log^6\log n)$ algorithm from \cite{GGR20} uses the techniques of \cite{Ghaffari2019} in a black box manner, including the erroneous color space reduction. \end{remark} Interestingly, the color space dependence also plays a role for deterministic algorithms whose complexity is expressed as $f(\Delta)+O(\ensuremath{\log^*} n)$; see the discussion in \cite{MT20} that compares the color space dependent results of \cite{MT20} with the related color space independent results of \cite{FHK}.
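The failure mode described in \Cref{rem:smallError}, and the way the polynomial scheme of \Cref{lem:ColorSpaceReduction} avoids it, can be reproduced in a few lines. The following Python sketch uses toy parameters (a small prime $p=97$ instead of the actual range $[N^4,2N^4]$, and degree $d=2$), so it illustrates the two hash families rather than the distributed algorithm itself:

```python
# Toy contrast between the two color space reductions discussed above.
# Parameters are illustrative, not the N^{c_0} bounds from the lemma.

def linear_hash(a, b, x, p):
    return (a * x + b) % p

def poly_hash(x, g, d, p):
    # psi_x has the base-p digits of x as coefficients (distinct per color),
    # evaluated at g over F_p via Horner's rule.
    coeffs = [(x // p**i) % p for i in range(d + 1)]
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * g + c) % p
    return acc

p, d = 97, 2
x, y = 5, 5 + 3 * p      # distinct colors with x ≡ y (mod p)

# The linear hash maps x and y together for *every* choice of (a, b): the
# randomness of (a, b) cannot avoid the collision.
assert all(linear_hash(a, b, x, p) == linear_hash(a, b, y, p)
           for a in range(1, p) for b in range(p))

# The polynomials psi_x, psi_y differ, so they agree on at most d points of
# F_p; almost every evaluation point g separates the two colors.
good = [g for g in range(p) if poly_hash(x, g, d, p) != poly_hash(y, g, d, p)]
assert len(good) >= p - d
```

For the linear hash, every one of the $(p-1)\cdot p$ choices of $(a,b)$ collides on the congruent pair, whereas the two polynomials agree on at most $d$ evaluation points of $\mathbb{F}_p$.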
\section{Concentration Bounds} We use the following variant of Chernoff bounds for dependent random variables, which is obtained, e.g., as a corollary of Lemma 1.8.7 and Thms. 1.10.1 and 1.10.5 in~\cite{Doerr2020}. \begin{lemma}[Generalized Chernoff]\label{chernoff} Let $X_1,\dots,X_r$ be binary random variables, and $X=\sum_i X_i$. \begin{enumerate} \item If $\Pr[X_i=1\mid X_1=x_1,\dots,X_{i-1}=x_{i-1}]\le q$, for all $i\in [r]$ and $x_1,\dots,x_{i-1}\in \{0,1\}$ with $\Pr[X_1=x_1,\dots,X_{i-1}=x_{i-1}]>0$, then for any $\delta\in (0,1)$, \begin{equation}\label{eq:chernoffless} \Pr[X\le(1-\delta)qr]\le \exp(-\delta^2qr/2)\ . \end{equation} \item If $\Pr[X_i=1\mid X_1=x_1,\dots,X_{i-1}=x_{i-1}]\ge q$, for all $i\in [r]$ and $x_1,\dots,x_{i-1}\in \{0,1\}$ with $\Pr[X_1=x_1,\dots,X_{i-1}=x_{i-1}]>0$, then for any $\delta>0$, \begin{equation}\label{eq:chernoffmore} \Pr[X\ge(1+\delta)qr]\le \exp(-\min(\delta^2,\delta)qr/3)\ . \end{equation} \end{enumerate} \end{lemma} We will often use the following simple corollary of bounds (\ref{eq:chernoffless})--(\ref{eq:chernoffmore}). \begin{lemma}\label{lem:nbrhdpreserved} Let $S$ be a randomly sampled subset of vertices in $G$, each sampled independently, with probability $q$. Let $\gamma\in (0,1)$ and $\alpha>0$ be such that $\gamma^2\alpha q\Delta\ge c\log n$, for a sufficiently large constant $c>1$. For every subset $T\subseteq V(G)$ of vertices of size $|T|=\alpha\Delta$, we have $(1-\gamma)q\alpha\Delta\le |T\cap S|\le (1+\gamma)q\alpha\Delta$, w.h.p. \end{lemma} We will also use the following variant of Talagrand's inequality. A function $f(x_1, \dots, x_n)$ is called \emph{$c$-Lipschitz} iff changing any single $x_i$ can affect the value of $f$ by at most $c$. Additionally, $f$ is called \emph{$r$-certifiable} iff whenever $f(x_1, \dots , x_n) \ge s$, there exist at most $rs$ variables $x_{i_1}, \dots, x_{i_{rs}}$ such that knowing the values of these variables certifies $f\ge s$. \begin{lemma}[Talagrand's Inequality II~\cite{molloy2013coloring}]\label{lem:talagrand} Let $X_1, \dots , X_n$ be $n$ independent random variables and $f(X_1, \dots , X_n)$ be a $c$-Lipschitz $r$-certifiable function. For any $b \ge 1$, \[ \Pr (|f - \mathbb{E}[f]| > b+60c\sqrt{r\mathbb{E}[f]}) \le 4 \exp\left(-\frac{b^2}{8c^2r\mathbb{E}[f]}\right)\ . \] \end{lemma} \section{Introduction} In the distributed vertex coloring problem, we are given an $n$-node network graph $G=(V,E)$, and the goal is to properly color the nodes of $G$ by a distributed algorithm: The nodes of $G$ are autonomous agents that interact by exchanging messages with their neighbors in synchronous communication rounds. At the end, every node needs to output its color in the computed vertex coloring. The standard version of the problem asks for a coloring with $\Delta+1$ colors, where $\Delta$ is the maximum degree of $G$; the objective is thus to match what can be achieved by a simple sequential greedy algorithm. Distributed coloring has been intensively studied for over 30 years. The problem has been used as a prototypical example to study distributed symmetry breaking in graphs, and it certainly is at the very core of the general area of distributed graph algorithms, e.g., \cite{barenboimelkin_book}.
\vspace*{-3mm} \paragraph{Distributed Coloring, State of the Art.} The first paper to explicitly study the distributed coloring problem was a seminal paper by Linial~\cite{linial87}, which effectively also started the area of distributed graph algorithms. Already then, it was known from simple randomized algorithms for the parallel setting~\cite{alon86,luby86} that, \emph{with randomization}, the distributed $(\Delta+1)$-coloring problem can be solved in only $O(\log n)$ communication rounds. In fact, even one of the simplest conceivable randomized distributed coloring algorithms solves the problem in $O(\log n)$ rounds~\cite{johansson99}. The algorithm always maintains a partial proper coloring and operates in $O(\log n)$ synchronous phases. In each phase, each uncolored node chooses a uniformly random color among the colors not already used by some neighbor. A simple analysis shows that each uncolored node can keep its random color with a constant probability, which leads to the $O(\log n)$ runtime bound. This most basic random coloring step will also play an important role in our paper and we will therefore refer to it as {\textsc{RandomColorTrial}} in the following. Classically, distributed coloring was studied in a variant of the message passing model known as the $\mathsf{LOCAL}$\xspace model, where in each round, nodes are allowed to exchange messages of arbitrary size. Over the years, the main challenges have been to understand the deterministic complexity of the $(\Delta+1)$-coloring problem (e.g., \cite{barenboimelkin_book,barenboimE10,BarenboimEK14,Barenboim16,FHK,BEG18,Kuhn20}) and to understand to what extent $o(\log n)$-time randomized distributed coloring algorithms exist. In fact, these two questions are closely related~\cite{chang16exponential}.
In a recent breakthrough, Rozho\v{n} and Ghaffari~\cite{RG19} showed that $(\Delta+1)$-coloring (and many other important problems~\cite{SLOCAL17,FOCS18-derand}) can deterministically be solved in $\polylog(n)$ time. Combined with the astonishing recent progress on randomized algorithms~\cite{BEPSv3,EPS15,HSS18,CLP20}, this in particular gives randomized $\polyloglog(n)$-time algorithms, with the best complexity known being $O(\log^5\log n)$~\cite{GGR20}. With the complexity of distributed coloring in the powerful $\mathsf{LOCAL}$\xspace model being quite well understood, it may now be within reach to also understand the complexity in the much more realistic $\mathsf{CONGEST}$\xspace model, where in each round, every node is only allowed to exchange $O(\log n)$ bits with each of its neighbors. Many early distributed coloring algorithms work directly in the more restricted $\mathsf{CONGEST}$\xspace model, but the recent highly efficient randomized algorithms of \cite{EPS15,HSS18,CLP20} unfortunately make quite heavy use of the power of the $\mathsf{LOCAL}$\xspace model. It seems unclear whether and to what extent their ideas can be applied in the $\mathsf{CONGEST}$\xspace model. The best randomized $(\Delta+1)$-coloring algorithm known in the $\mathsf{CONGEST}$\xspace model has a round complexity of $O(\log\Delta + \log^6\log n)$~\cite{BEPSv3,Ghaffari2019,GGR20}. Note that as a function of the number $n$ of nodes alone, this algorithm still has a running time of $O(\log n)$, which is no faster than the simple 30-year-old methods. Given all the recent progress on distributed coloring, arguably one of the most important open questions regarding this classic distributed problem is the following. \smallskip \begin{center} \begin{minipage}{0.9\textwidth} \it Is there a randomized algorithm in the $\mathsf{CONGEST}$\xspace model that solves the $(\Delta+1)$-vertex coloring problem in time $o(\log n)$ or even in time $\polyloglog(n)$?
\end{minipage} \end{center} \paragraph{Our Main Contribution.} We answer this question in the affirmative and give a randomized $(\Delta+1)$-coloring algorithm in the $\mathsf{CONGEST}$\xspace model, which is \textbf{as fast as} the best known algorithm for the problem in the $\mathsf{LOCAL}$\xspace model. As our main result, we prove the following theorem. \begin{theorem}[simplified]\label{thm:mainResult} There is a randomized distributed algorithm, in the $\mathsf{CONGEST}$\xspace model, that solves any given instance of the $(\Delta+1)$-list coloring problem in any $n$-node graph with maximum degree $\Delta$ in \textbf{$O(\log^5\log n)$} rounds, with high probability. \end{theorem} Note that our algorithm even works for the more general $(\Delta+1)$-list coloring problem, where every node is initially given an arbitrary list of $\Delta+1$ colors, and the objective is to find a proper vertex coloring such that each node is colored with one of the colors from its list. Our algorithm follows the paradigm of breaking the graph into sparse and dense parts and processing them separately, which has been the only successful approach for sublogarithmic complexity in the $\mathsf{LOCAL}$\xspace model~\cite{HSS18,CLP20}. By working in the much more restricted $\mathsf{CONGEST}$\xspace model, however, we are forced to develop general techniques based on more basic principles. We show that, under some conditions, the progress guarantee of {\RCT} is exponentially better than suggested by its basic analysis. Our analysis extends to a general class of random coloring algorithms akin to {\RCT}. For coloring dense parts, however, this has to be combined with additional techniques to deal with the major challenge of congestion. In the following, we first give a high-level overview of what is known about randomized coloring algorithms in the $\mathsf{LOCAL}$\xspace model, then briefly discuss the state of the art in the $\mathsf{CONGEST}$\xspace model.
In \Cref{sec:technicaloverview}, we overview existing techniques that are relevant to our algorithm, explain in more detail why it is challenging to use existing ideas in the $\mathsf{CONGEST}$\xspace model, discuss how we overcome the major challenges, and summarize the algorithm and the technical ideas of the paper. The core technical part of the paper starts with \Cref{sec:main}, where it is also outlined. \paragraph{History of Randomized Coloring in the $\mathsf{LOCAL}$\xspace Model.} The first improvement over the simple $O(\log n)$-time algorithms of \cite{alon86,luby86,johansson99} appeared in \cite{SW10}, where the authors show that by trying several colors in parallel, the $(\Delta+1)$-coloring problem can be solved in $O(\log\Delta +\sqrt{\log n})$ rounds. A similar result had previously been proven for coloring with $O(\Delta)$ colors in \cite{KSOS06}. Subsequently, the \emph{graph shattering technique}, first developed for constructive Lov\'{a}sz Local Lemma algorithms~\cite{beck1991algorithmic}, was introduced to the area of distributed graph algorithms by \cite{BEPSv3}. Since each node is colored with constant probability in each iteration of {\textsc{RandomColorTrial}}, $O(\log \Delta)$ iterations suffice to make the probability of a given node remaining uncolored polynomially small in $\Delta$. This ensures that afterwards, all remaining connected components of uncolored nodes are of $\polylog n$ size, as shown by \cite{BEPSv3}, and they are then typically colored by a deterministic algorithm. The deterministic complexity of coloring $N$-node graphs in the $\mathsf{LOCAL}$\xspace model was at the time $2^{O(\sqrt{\log N})}$~\cite{panconesi1992improved}, but has recently been improved to $O(\log^5 N)$~\cite{RG19,GGR20}. As a result, the time complexity of \cite{BEPSv3} improved from $O(\log\Delta)+2^{O(\sqrt{\log\log n})}$ to $O(\log\Delta+\log^5 \log n)$.
We remark that it was shown in \cite{chang16exponential} that for distributed coloring and related problems, the randomized complexity on graphs of size $n$ is lower bounded by the deterministic complexity on graphs of size $\sqrt{\log n}$. Hence, in some sense, the graph shattering technique is necessary. All further improvements on randomized distributed coloring concentrated on the ``preshattering'' part, i.e., on coloring each node with probability $1-1/\poly(\Delta)$ so that the uncolored nodes form components of size $\polylog n$. An important step towards a sublogarithmic preshattering phase was taken by Elkin, Pettie, and Su~\cite{EPS15}, who colored graphs satisfying a specific local sparsity property. Following this, Harris, Schneider, and Su~\cite{HSS18} achieved preshattering in time $O(\sqrt{\log\Delta}+\log\log n)$, resulting in a $(\Delta+1)$-coloring algorithm with time complexity $O(\sqrt{\log n})$ (in terms of $n$ alone). The algorithm is based on a decomposition of the graph into locally sparse nodes to which the algorithm of \cite{EPS15} can be applied and into dense components that have a constant diameter such that computations within these components can be carried out in a centralized brute-force manner in the $\mathsf{LOCAL}$\xspace model. Finally, Chang, Li, and Pettie~\cite{CLP20} gave a hierarchical version of the algorithm of \cite{HSS18}, bringing the preshattering complexity all the way down to $O(\log^* n)$. This leads to the current best randomized $(\Delta+1)$-coloring algorithm known with time complexity $O(\log^5\log n)$.
\paragraph{State of the Art in the $\mathsf{CONGEST}$\xspace Model.} While the simple \textsc{RandomColorTrial}{} algorithm and thus the $O(\log\Delta)$ preshattering phase of \cite{BEPSv3} clearly also work in the $\mathsf{CONGEST}$\xspace model, the other randomized coloring algorithms discussed above all use the additional power of the $\mathsf{LOCAL}$\xspace model to various extents, and if one aims to achieve similar results for the $\mathsf{CONGEST}$\xspace model, one faces a number of challenges. The fastest known deterministic algorithm for the remaining $\polylog(n)$-size components is based on decomposing the graph into clusters of small diameter (as defined in \cite{awerbuch89}), where in the $\mathsf{LOCAL}$\xspace model, each cluster can be colored in a brute-force way, by collecting its topology at a single node. Luckily, this issue has already been solved: The fastest known network decomposition algorithms of \cite{RG19,GGR20} can directly be applied in the $\mathsf{CONGEST}$\xspace model, and by using techniques developed in \cite{censor2017derandomizing,Ghaffari2019,BKM19,HKMN20}, one can efficiently solve the coloring problem in each cluster in the $\mathsf{CONGEST}$\xspace model. This implies that the $(\Delta+1)$-coloring problem can be solved in $O(\log\Delta)+\polyloglog(n)$ rounds. In order to obtain a sublogarithmic-time $(\Delta+1)$-coloring algorithm in the $\mathsf{CONGEST}$\xspace model, the challenge therefore is to develop an efficient preshattering algorithm. In \Cref{sec:technicaloverview}, we discuss how the preshattering techniques used by \cite{HSS18,CLP20} exploit the power of the $\mathsf{LOCAL}$\xspace model, describe the challenges that arise in the $\mathsf{CONGEST}$\xspace model, and explain how we tackle those challenges. \section{Technical Overview} \label{sec:technicaloverview} Although extremely simple, {\textsc{RandomColorTrial}} lies at the heart of most known efficient randomized coloring algorithms.
Each uncolored node $v$ picks a uniformly random color $c$ from its current \emph{palette} -- colors from its list $\Psi(v)$ that are not already used by its neighbors -- and then executes {\textsc{TryColor}}($v,c$) (\Cref{alg:trycolor}). Barenboim, Elkin, Pettie and Su \cite{BEPSv3} show that w.h.p., the \emph{uncolored degree} (i.e., the degree of the subgraph induced by uncolored nodes) of every vertex of degree $\Omega(\log n)$ goes down by a \textit{constant factor} in each iteration of {\textsc{RandomColorTrial}}, and within $O(\log \Delta)$ steps, each node has degree $O(\log n)$. \begin{algorithm}[H]\caption{{\textsc{TryColor}} (vertex $v$, color $c_v$)}\label{alg:trycolor} \begin{algorithmic}[1] \STATE Send $c_v$ to $N(v)$, receive the set $T=\{c_u : u\in N(v)\}$. \STATE{\textbf{if}} $c_v\notin T$ \textbf{then} permanently color $v$ with $c_v$. \STATE Send/receive permanent colors, and remove the received ones from $\Psi(v)$. \end{algorithmic} \end{algorithm} Our main technical contribution is a \emph{much faster} algorithm that partially colors the input graph so that the uncolored degree reduces to $\poly\log n$. Once the degree is reduced, one can rely on efficient $\poly\log\log n$-round algorithms: such algorithms are well known in the $\mathsf{LOCAL}$\xspace model \cite{BEPSv3,RG19,GGR20}, and we discuss their $\mathsf{CONGEST}$\xspace counterparts at the end of this section. Our degree reduction follows the by-now standard approach of partitioning the graph into \emph{sparse} and \emph{dense} parts to be colored separately, with most of the action in the dense part. To our knowledge, this approach is the only one known to have led to sublogarithmic algorithms, even in $\mathsf{LOCAL}$\xspace~\cite{HSS18,CLP20}. (It has also been useful in other models of computing, e.g., sublinear algorithms and Massively Parallel Computing~\cite{ParterSu,CFGUZ19,ACK19}.) 
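For concreteness, one round of {\textsc{RandomColorTrial}} combined with {\textsc{TryColor}} can be simulated sequentially as follows. This is a toy sketch: the graph, the random seed, and the round cap are illustrative choices of ours and play no role in the algorithm's analysis.

```python
# Minimal sequential simulation of RandomColorTrial: palettes start as
# {0, ..., Delta} and shrink as neighbors permanently take colors,
# mirroring TryColor's message exchange.
import random

def random_color_trial(adj, palettes, color):
    """One synchronous round over the whole graph."""
    uncolored = [v for v in adj if color[v] is None]
    # Every uncolored node announces a uniformly random palette color.
    trial = {v: random.choice(sorted(palettes[v])) for v in uncolored}
    for v in uncolored:
        # Keep the color iff no neighbor announced it or already holds it.
        if all(trial.get(u) != trial[v] and color[u] != trial[v]
               for u in adj[v]):
            color[v] = trial[v]
    for v in uncolored:
        if color[v] is None:
            # Remove colors permanently taken by neighbors from the palette.
            palettes[v] -= {color[u] for u in adj[v] if color[u] is not None}

def color_graph(adj, delta, max_rounds=10_000):
    """Repeat RandomColorTrial until every node is colored."""
    color = {v: None for v in adj}
    palettes = {v: set(range(delta + 1)) for v in adj}
    rounds = 0
    while any(color[v] is None for v in adj):
        if rounds >= max_rounds:
            raise RuntimeError("did not finish")
        random_color_trial(adj, palettes, color)
        rounds += 1
    return color, rounds
```

With $\Delta+1$ colors, a node's palette always exceeds its uncolored degree, so the palette never empties and each round succeeds for every node with constant probability, as in the analysis of \cite{BEPSv3}.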
Algorithms in the $\mathsf{LOCAL}$\xspace model profit from the fact that nodes can try many colors simultaneously, or can simulate any centralized algorithm in a small diameter dense part by collecting all the input to one node. These trivialities in $\mathsf{LOCAL}$\xspace become major challenges in $\mathsf{CONGEST}$\xspace. In the remainder of this section, we describe how we partition the graph and process sparse and dense parts in $\mathsf{CONGEST}$\xspace. \paragraph{Almost-Clique Decompositions.} Inspired by Reed~\cite{Reed98}, Harris, Schneider, and Su \cite{HSS18} introduced an important structure, the \emph{almost-clique decomposition} (ACD). In the variation we use, it is a partition of $V$ into $V_{sparse}, C_1, C_2, \ldots, C_k$, where $V_{sparse}$ contains sparse nodes, while each $C_i$ is an \emph{almost-clique} with the property that, for a given parameter $\epsilon<1$, each node in $C_i$ is adjacent to at least $(1-\epsilon)\Delta$ other nodes in $C_i$, $C_i$ contains at most $(1+\epsilon)\Delta$ nodes, and $C_i$ has diameter at most 2. The ACD has been used widely in coloring algorithms in different models \cite{ParterSu,CFGUZ19,ACK19,HKMN20,AlonAssadi}. An ACD can be computed in $O(1)$ rounds in the $\mathsf{LOCAL}$\xspace model rather easily, since, roughly speaking, pairs of nodes can deduce from their common neighborhood size whether they belong to the same almost-clique or not. Assadi, Chen and Khanna \cite{ACK19} gave a method that was used in \cite{HKMN20} to compute an ACD in $O(\log n)$ rounds in $\mathsf{CONGEST}$\xspace. Their argument is based on random sampling and requires $\Omega(\log n)$ time. We overcome this \textbf{first obstacle} towards an $o(\log n)$ $\mathsf{CONGEST}$\xspace algorithm by showing that such a decomposition can be computed in $O(1)$ rounds via a bootstrapping-style procedure (see \Cref{ssec:acdComputation}).
We begin by sampling a subset $S$ of vertices, where each vertex is independently sampled w.p.\ $1/\sqrt{\Delta}$. The idea is to use the whole graph to relay information between the $S$-nodes, in order to efficiently find out which pairs of $S$-nodes belong to the same almost-clique (to be constructed). To this end, each node $v\in V$ chooses a random $S$-neighbor (ID) and broadcasts it to its neighbors. A key observation is that if two $S$-nodes share many neighbors, they will likely receive each other's IDs many times, which allows them to detect the similarity of their neighborhoods. The rest of the nodes join the structure created by the $S$-nodes to form the partition. \smallskip After computing the ACD, our algorithm can then color the sparse nodes in $V_{sparse}$ and then the dense nodes in $V_{dense}=C_1\cup\dots\cup C_k$ separately. \paragraph{Coloring Sparse Nodes and Slack Generation.} Schneider and Wattenhofer \cite{SW10} showed that if there are enough colors in the palettes, the coloring can be extremely quick: e.g., $O(\Delta + \log^{1.1} n)$-coloring in $O(\log^* \Delta)$ rounds of $\mathsf{LOCAL}$\xspace. In this case, the nodes have plenty of \emph{slack}: the difference $S$ between their palette size and their degree is then $\Omega(\Delta)$. Slack is a property that never decreases (but can increase, which only makes the problem easier). Suppose we have slack $S \ge \delta \Delta$, while degrees are clearly bounded by $D \le \Delta$. After $O(\log (1/\delta))$ iterations of {\textsc{RandomColorTrial}}, the nodes have degree at most $D \le S/2$. From then on, each node can try $S/(2D)$ random colors in parallel, which has a failure probability of only $\exp(-S/(2D))$. Thus, the degrees go down by an \emph{exponential factor}, $\exp(S/(2D))$, and since the slack is unchanged, the ratio $S/D$ increases as a tower function, resulting in $O(\log^* \Delta)$ time complexity.
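The progress of these multiple color trials can be illustrated with an idealized numeric sketch that iterates only the expected degree bound: trying about $S/(2D)$ colors per round shrinks a degree bound $D$ by a factor of roughly $\exp(S/(2D))$ while the slack $S$ stays fixed. The function name and parameters below are ours, and concentration and constant factors are ignored:

```python
# Idealized sketch: with fixed slack S and current degree bound D <= S/2,
# one round of multiple color trials shrinks D to about D * exp(-S/(2D)).
import math

def rounds_to_degree_one(D, S):
    """Count idealized rounds until the degree bound drops below 1."""
    rounds = 0
    while D > 1.0:
        D *= math.exp(-S / (2 * D))   # exponential degree drop per round
        rounds += 1
    return rounds
```

For instance, with $S=1000$ and an initial bound $D=500$ the sketch finishes in three rounds, far fewer than the roughly $\log_2 500 \approx 9$ halving rounds that the basic constant-factor analysis of {\textsc{RandomColorTrial}} would suggest, reflecting the tower-function growth of $S/D$.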
Elkin, Pettie, and Su \cite{EPS15} showed that a single execution of {\textsc{RandomColorTrial}}, combined with node sampling, actually generates sufficient slack in \emph{sparse graphs}. More precisely, if the induced neighborhood graph $G[N(v)]$ of a node $v$ contains at most $(1-\tau)\binom{\Delta}{2}$ edges, for some $\tau=\Omega(\log n/\Delta)$, then after this {\textsc{SlackGeneration}} step, node $v$ has slack $\Omega(\tau\Delta)$. The reason is that two non-adjacent neighbors of $v$ have a good chance of being colored with the same color, increasing the slack: the palette size goes down by only one, while the degree goes down by two. The coloring of the sparse nodes in \cite{HSS18,CLP20} relies on both ingredients: First, a single execution of {\textsc{RandomColorTrial}} produces slack of $\Omega(\varepsilon^2 \Delta)$; then the unlimited bandwidth of the $\mathsf{LOCAL}$\xspace model is used to run a version of the multiple color trials \cite{SW10,EPS15,CLP20} described above with $\delta=\Theta(\varepsilon^2)$ in $O(\log (1/\varepsilon) + \log^* \Delta)$ rounds. This works if $\Delta=\Omega((\log^2 n)/\varepsilon)$, and otherwise, the graph can be colored in $O(\log (1/\varepsilon))+\poly\log\log n$ rounds \cite{BEPSv3,GGR20}. The \textbf{second obstacle} is that the multiple trials of \cite{SW10} require large bandwidth and cannot be performed in $\mathsf{CONGEST}$\xspace. We overcome this by showing that the uncolored degree can be reduced to $O(\log n)$ via $O(\log\log \Delta)$ iterations of the very simple {\textsc{RandomColorTrial}}. The probability of vertex $v$ remaining uncolored after iteration $i$ is about $d_i(v)/|\Psi_i(v)|$, where $d_i(v)$ is its degree, and $\Psi_i(v)$ is its palette. With a global upper bound $D_i$ on $d_i(v)$ and a global lower bound $S$ on the slack as a proxy for palette size, this becomes at most $D_i/S$.
In the next iteration $i+1$, $D_{i+1} \le D_i^2/S$; hence, the ratio $D_i/S$ satisfies the recurrence $D_{i+1}/S \le (D_i/S)^2$. Thus, the uncolored degree goes down very quickly, and in only $O(\log\log\Delta)$ steps, we are left with a low-degree ($O(\log n)$) graph. \medskip \noindent \textbf{\Cref{lem:basiconeshot}} (fast degree reduction, simplified, informal)\textbf{.} \emph{Suppose after each iteration $i$ of a coloring algorithm, every node remains uncolored with probability at most $D_i/S$, even if the random bits of other nodes are adversarial, for some $D_i = \Omega(\log n)$ upper bounding the uncolored degree in that iteration and $S \ge 2 D_0$. Then, the sequence $\{\log(S/D_i)\}_i$ grows geometrically. In particular, after $O(\log\log \Delta)$ rounds, the uncolored degree becomes $O(\log n)$. } \medskip This powerful observation (in a more general form) will also be crucial for coloring dense nodes. \paragraph{Coloring Dense Nodes.} Harris, Schneider and Su \cite{HSS18} use the small diameter property of each almost-clique $C$ to coordinate the coloring choices within $C$. A single leader node gathers the palettes of all nodes in $C$ and then simulates a sequential version of {\RCT} on a \emph{random ordering} of $C$ to ensure that the nodes within $C$ choose different colors. This has the advantage that only \emph{external neighbors} of a node $v$, i.e., the neighbors outside of $C$, conflict with $v$'s choice, reducing the failure probability to $e(v)/|\Psi(v)|$, where $e(v)$ denotes the \emph{external degree} of $v$. By the ACD definition, this external degree is initially at most $\epsilon \Delta$, while the palette size $|\Psi(v)|$ is $\sim \Delta$, implying a probability to remain uncolored of $O(\epsilon)$. In order to reduce the uncolored degree, they repeat these synchronized color trials.
Their main effort is to show that the ratio $E/D$ stays (not much worse than) $\epsilon$ in each repetition, where $E$ is a global upper bound on external degrees and $D$ is a global lower bound on the palette sizes. Thus, if $\epsilon$ is chosen small (subconstant), the dense nodes are colored fast, while if $\epsilon$ is large, the sparse nodes in $V_{sparse}$ are colored fast. The best tradeoff is found for $\epsilon = \exp(-\Theta(\sqrt{\log \Delta}))$, which yields a time complexity of $O(\sqrt{\log \Delta})+\poly\log\log n=O(\sqrt{\log n})$ for coloring sparse and dense nodes. We use a similar synchronized version of {\RCT} as \cite{HSS18} but we allow the leader to process the nodes in an arbitrary order. The crux of it is shown as Alg.~\ref{alg:synchtrial2}, where $X \subseteq V_{dense}$ is a subset of uncolored dense nodes that we apply it to (see later), and $X^C=X\cap C$ is its share in each almost-clique. At a high level, our $\mathsf{CONGEST}$\xspace algorithm has a similar overall structure, with synchronized color selection of nodes within each almost-clique, while the analysis is quite different. \begin{algorithm}[H]\caption{{\textsc{SynchronizedColorTrial}} ($\mathsf{LOCAL}$\xspace version, informal)} \label{alg:synchtrial2} \begin{algorithmic}[1] \STATE Each node $v\in X^C$ sends its palette $\Psi(v)$ to its leader $w_{C}$. \STATE $w_C$ processes the nodes in $X^C$ in an \emph{arbitrary order} $v_1,v_2\dots$, where $v_j$ is assigned a candidate color $c_j$ chosen uniformly at random from $\Psi(v_j) \setminus \{c_1, c_2, \ldots, c_{j-1}\}$. \STATE $w_C$ sends each node $v_j$ its candidate color $c_j$. \STATE {\textsc{TryColor}}($v_j$, $c_j$), for all $j\ge 1$ in $G[X]$.
\end{algorithmic} \end{algorithm} One crucial difference of our work from~\cite{HSS18} is showing that dense nodes are colored fast even with \emph{constant} $\varepsilon$, which effectively eliminates the need to balance the choice of $\varepsilon$ between sparse and dense node coloring. Initially, we only care about reducing the external degree. To this end, we focus on the ratio $e(v)/S_v$, where $S_v$ is the slack of node $v$, and we show that it follows the same progression as the ratio $d_v/S_v$ did for the sparse nodes. This builds on the following key structural insight (which only holds for constant $\varepsilon$): \medskip \noindent \textbf{\Cref{lem:sparseGetsSlack}} (simplified)\textbf{.} \emph{After {\textsc{SlackGeneration}}, every node $w$ with external degree $e(w) = \Omega(\log n)$ has slack $\Omega(e(w))$. } \medskip In contrast, \cite{HSS18} does not derive any (initial) slack for dense nodes. \smallskip We would now want to apply \Cref{lem:basiconeshot} to shrink the external degrees in $O(\log\log \Delta)$ iterations of synchronized {\textsc{RandomColorTrial}}, but we need to deal first with the heterogeneity in slack and external degrees between different nodes: although our formal version of \Cref{lem:basiconeshot} is more robust to heterogeneous slack, it still requires a non-trivial global lower bound on the slack. One way to achieve this would be to partition $C$ into groups of roughly equal slack and color those separately. Instead, we put aside a subset $C' \subset C$ of nodes to be colored later, where $|C'| = \Omega(\sqrt{|C|})$.\footnote{We use a more subtle partitioning in the implementation in the $\mathsf{CONGEST}$\xspace model.} Each node $v$ in $R_0^C = C - C'$, the set that we color first, has a lower bound $|\Psi(v)| \ge |C'|$ on palette size throughout the execution of the algorithm. We can view this as \emph{effective slack} $ES_v = \Omega(\max(e(v),\sqrt{|C|}))$.
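To make the leader's role in Alg.~\ref{alg:synchtrial2} concrete, here is a minimal centralized Python sketch (the node names and palettes are toy data, and the $\mathsf{CONGEST}$\xspace communication is abstracted away). By construction, the candidate colors handed out within one almost-clique are pairwise distinct, which is why only external neighbors can make a node's trial fail.

```python
import random

def synchronized_color_trial(order, palettes):
    """Leader-side candidate assignment: process nodes in an arbitrary order,
    giving each one a color drawn uniformly from its palette minus the
    candidates already handed out in this almost-clique."""
    candidates, taken = {}, set()
    for v in order:
        choices = sorted(palettes[v] - taken)
        if choices:  # in the real algorithm, palettes are large enough
            c = random.choice(choices)
            candidates[v] = c
            taken.add(c)
    return candidates

random.seed(1)
palettes = {v: set(range(6)) for v in "abcd"}   # toy palettes of size 6
cand = synchronized_color_trial(list("abcd"), palettes)
# All four nodes got candidates, and no two candidates collide inside the clique.
print(len(cand), len(set(cand.values())))  # 4 4
```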
The simplified version of \Cref{lem:basiconeshot} given earlier is extended in the full version to allow for different values among the nodes. It gives a geometric progression in terms of $\log ES_v/e(v)$, so that after $O(\log\log \Delta)$ iterations, the external degree of each node is down to $O(\log n)$, while the effective slack is $\Omega(\sqrt{|C|})$. After three more rounds, each node remains uncolored with probability at most $(O(\log n)/\sqrt{|C|})^3 = O(\log n)/|C|$, resulting in a low-degree subgraph. We are left with the subgraph $C'$, which we solve recursively, setting aside another subset $C''$ with $|C''|=\Omega(|C|^{1/4})$ and coloring $R_1^C = C' - C''$ next. We therefore form \emph{layers} $R_1^C, R_2^C, \ldots$ that are colored iteratively, which adds another $\log\log \Delta$ factor to the runtime. As we shall see, these layers have another advantage that makes $\mathsf{CONGEST}$\xspace implementation possible. We summarize our algorithm for dense nodes in Alg.~\ref{alg:dense2}. \begin{algorithm}[H] \caption{{\textsc{ColorDenseNodes}} ($\mathsf{LOCAL}$\xspace version, informal)} \label{alg:dense2} \begin{algorithmic}[1] \STATE Partition the uncolored nodes of each almost-clique $C$ into layers $R^C_0, R^C_1, \ldots, R^C_t$ (TBD) \FOR {$i=0,\dots,t-1$} \STATE{\textbf{for}} $O(1)$ iterations \textbf{do} {\RCT} in $R_i$. \label{st:RCTreduce} \STATE{\textbf{for}} $O(\log\log \Delta)$ iterations \textbf{do} {\textsc{SynchronizedColorTrial}}($R_i$). \ENDFOR \STATE {\textsc{ColorSmallDegreeNodes}} in $G[V\setminus V_{sparse}]$. \end{algorithmic} \end{algorithm} We call {\textsc{RandomColorTrial}} (in line~\ref{st:RCTreduce}) in order to decrease the external degree $e(v)$ from $O(S_v)$ to at most half the slack $S_v/2$, as needed to apply \Cref{lem:basiconeshot}. Even in this simplified form, there are still several daunting challenges in implementing our algorithm in $\mathsf{CONGEST}$\xspace.
In particular, \textbf{the main obstacle} is the communication requirements for the leader of each almost-clique to learn the palettes of the nodes, $\Theta(\Delta^2)$ messages in $\mathsf{CONGEST}$\xspace, of which only $m=O(\Delta)$ can be received per round and only when sent by $m$ \emph{distinct} neighbors. Our first step towards overcoming this obstacle is \emph{sparsifying} the palette $\Psi(v)$, by transmitting only a random subset to the leader. This was used earlier in a congested clique algorithm of Parter and Su \cite{ParterSu}, though on a subgraph that was already polynomially smaller. The presence of the random put-aside subset $C' = \cup_{i\ge 1} R_i^C$ of nodes means that it suffices to transmit only an $O(\log n / |C'|)$-fraction of the palette, and yet {\textsc{SynchronizedColorTrial}} will have enough randomness in the choice of candidate colors of nodes in $R_0^C = C - C'$. In general, when coloring layer $i$, the size of the palette subset to be forwarded is $O(|R_i^C|)\cdot O(\log n )/\sum_{j > i} |R_{j}^C| = O(\log n \cdot |R_i^C|/|R_{i+1}^C|)\ll\Delta$, and the total amount of palette information transmitted to the leader is $O(\log n \cdot |R_i^C|^2/|R_{i+1}^C|)$ colors. We form the layer sizes so that this is always bounded by $O(\Delta)$. Note that $O(\Delta)$ colors can be directly sent to the leader only when they come from $O(\Delta)$ distinct neighbors, which is not the case in our setting: we need to quickly re-route the messages. This issue is resolved in a \emph{clique} by a well-known routing strategy of Lenzen \cite{Lenzen13} that allows us to satisfy arbitrary communication patterns, as long as each node sends/receives $O(\Delta)$ messages. To make Lenzen's result usable in an almost-clique $C$, we compute a \emph{clique overlay} on $C$ that allows for a simulation of all-to-all communication, a \emph{congested clique}, with constant-factor overhead.
\medskip \noindent\textbf{\Cref{thm:congestedclique} }(simplified)\textbf{.} \emph{There is an $O(\log\log n)$-round $\mathsf{CONGEST}$\xspace algorithm that for any almost-clique $C$, computes a clique overlay, which simulates all-to-all communication in $C$ with constant-factor runtime overhead.} \medskip Since every almost-clique $C$ has diameter at most 2, constructing such an overlay corresponds to finding a relay node $w$ for each non-edge $uv$ in $G[C]$, in a way that no edge is adjacent to many relays. We reduce this problem to a coloring problem on a graph with vertices corresponding to non-edges $uv$ in $G[C]$, and solve it with similar ideas as the coloring of sparse nodes. Chang, Li, and Pettie \cite{CLP20} build on the argument of \cite{HSS18} and form a hierarchy of almost-clique decompositions, with epsilons ranging from constant to $\Delta^{-1/10}$. They divide the nodes into layers, further partition them into blocks, which are then grouped together into six separate subsets that are tackled with slightly different variations of {\textsc{SynchronizedColorTrial}}. In a tour de force, they show how to reduce them in only a constant number of steps to the setting where their extension of the multi-trials of \cite{SW10} can take over. The argument is quite delicate, as they must also ensure along the way that \emph{not too many} nodes get colored in a round. While it is not impossible \emph{a priori} to implement it in $\mathsf{CONGEST}$\xspace, it is likely to be much more difficult than our approach and not likely to result in a significantly faster method. The approach of \cite{CLP20} utilizes the unbounded communication bandwidth of the $\mathsf{LOCAL}$\xspace model in an additional way. The leader also gathers the topology of the almost-clique, in addition to the palettes of the nodes. We deal with this by conservatively assuming that $C$ is fully connected when it comes to candidate color assignment in {\textsc{SynchronizedColorTrial}}.
Namely, we bound the \emph{anti-degree} of each node of $C$, or its number of \emph{non-neighbors} within $C$, by the slack of the node, as shown in the full version of \Cref{lem:sparseGetsSlack}. The effect of this is roughly equivalent to doubling the external degree, which does not change the fundamentals. We believe that our approach yields a simpler and more direct way of obtaining a $\poly\log\log (n)$-round algorithm for coloring dense nodes, even in the $\mathsf{LOCAL}$\xspace model. \paragraph{Coloring Small Degree Graphs \& Postshattering.} \label{ssec:postshattering} After we have reduced the uncolored degree to $\Delta'=\poly\log n$, vertices have already lost colors from their initial palette, due to colored neighbors; thus, the remaining problem is a \emph{$(deg+1)$-list coloring problem}, where the nodes can have different degrees, and the list of a node of degree $d$ has size at least $d+1$. Hence, even though our general result (\Cref{thm:mainResult}) only solves the $(\Delta+1)$-list coloring problem, it is essential that the following theorem solves the (possibly harder) $(deg+1)$-list coloring. We use the corresponding algorithm, called {\textsc{ColorSmallDegreeNodes}}, to color all remaining vertices in $O(\log\Delta'+\log^5\log n)=O(\log^5\log n)$ rounds. \smallskip \textbf{\Cref{thm:smallDegree}} (simplified)\textbf{.} \emph{$(deg+1)$-list coloring in a graph with $n$ nodes and maximum degree $\Delta'$ can be solved in $O(\log \Delta'+\log^5\log n)$ $\mathsf{CONGEST}$\xspace rounds, w.h.p. } \smallskip In the $\mathsf{LOCAL}$\xspace model, an algorithm with the same runtime is known \cite{BEPSv3,RG19,GGR20}, and in the $\mathsf{CONGEST}$\xspace model, an $O(\log \Delta'+\log^6\log n)$-round algorithm is claimed in \cite{Ghaffari2019,GGR20}.
We present a similar (but not identical) algorithm that fixes an error in the design of a subroutine in \cite{Ghaffari2019,GGR20} (see Remark~\ref{rem:smallError}) and improves the runtime to match the one in $\mathsf{LOCAL}$\xspace. Our algorithm is obtained by an improved combination of previously known ideas. It is based on the shattering framework of \cite{BEPSv3}, which uses $O(\log \Delta')$ iterations of {\RCT} to reduce the problem to coloring connected components of \emph{size} $N=\poly\log n$, the \emph{network decomposition} algorithm from \cite{GGR20}, which partitions each such component into clusters of small \emph{diameter} ($\poly\log\log n$), and a $\mathsf{CONGEST}$\xspace algorithm from \cite{Ghaffari2019}, for list coloring a single cluster. Interestingly, the latter algorithm is also based on \textsc{RandomColorTrial}{}. It simulates $O(\log n)$ independent instances in parallel, where each instance runs for $O(\log N)$ iterations. One of these instances is likely to color all vertices of the cluster, and the nodes use the small diameter to agree on a successful instance. To simulate many instances in parallel, it is essential that many color trials can be sent in one $O(\log n)$-bit message. For this, the nodes in a cluster compute a mapping of the colors in nodes' palettes to a smaller color space of size $\poly N$ (cf. \cite{HKMN20}): each color then can be represented with $O(\log\log n)$ bits, and $O(\log n/\log\log n)$ color trials fit in a single $\mathsf{CONGEST}$\xspace message. The bottleneck of our approach is the computation of the network decomposition, which is also the current bottleneck for faster randomized (and deterministic) algorithms in the $\mathsf{LOCAL}$\xspace model. \section{Top Level Algorithm and Paper Outline}\label{sec:main} Our main algorithm also serves as an outline of the remainder of the paper. 
\begin{algorithm}[H] \caption{Main Algorithm Outline} \label{alg:main} \begin{algorithmic}[1] \STATE{\textbf{if}} $\Delta=O(\log^4 n)$ \textbf{then} {\textsc{ColorSmallDegreeNodes}} (Sec.~\ref{sec:smalldegree}) and \textbf{return} \STATE {\textsc{ComputeACD}} Compute an $(\varepsilon,\eta)$-ACD $V_{sparse}, C_1,\ldots, C_k$ of $G$ with $\varepsilon=\frac{1}{3}$, $\eta=\frac{\varepsilon}{108}$ (Sec.~\ref{sec:ACD})\label{st:mainacd} \STATE {\textsc{ComputeCliqueOverlay}} for each $C_i$, $1\le i\le k$ (Sec.~\ref{sec:congestedcliqueoverlay})\label{st:overlay} \STATE Step 1: {\textsc{SlackGeneration}} (Secs.~\ref{sec:oneshot} and~\ref{sec:slack})\label{st:mainslack} \STATE Step 2: {\textsc{ColorSparseNodes}} Color remaining sparse nodes in $G$ (Sec.~\ref{sec:sparsecoloring}) \STATE Step 3: {\textsc{ColorDenseNodes}} Color remaining dense nodes in $G$ (Sec.~\ref{sec:densecoloring}) \end{algorithmic} \end{algorithm} The formal statement of our main result is as follows. \medskip \noindent\textbf{Theorem~\ref{thm:mainResult}.} \emph{ Let $G$ be the input graph with $n$ vertices and maximum degree $\Delta$, where each vertex $v$ has a list $\Psi(v)\subseteq [U]$ of $|\Psi(v)|=\Delta+1$ available colors from a colorspace of size $U=\poly n$. There is a randomized algorithm that in $O(\log^5\log n)$ rounds in the $\mathsf{CONGEST}$\xspace model, w.h.p.\footnote{With high probability, that is, with probability at least $1-n^{-c}$, for any constant $c\ge 1$.} computes a coloring of $G$ such that each vertex $v$ gets a color from its list $\Psi(v)$.} \emph{The runtime can also be stated as $O(T+\log\log n+\log^2\log\Delta)$, where $T$ is the time needed to $(deg+1)$-list color an $n$-vertex graph with maximum degree $O(\log^4 n)$ in a colorspace of size $\poly n$.} \begin{proof}[Proof sketch] If $\Delta=O(\log^4 n)$, then we directly apply algorithm {\textsc{ColorSmallDegreeNodes}} from \Cref{sec:smalldegree}, to color $G$ in $T$ rounds. Assume that $\Delta=\omega(\log^4 n)$.
After the ACD computation ($O(1)$ rounds), any node is either in $V_{sparse}$ or in one of the almost-cliques $C_1,\ldots,C_k$; thus, any node that is not colored in {\textsc{SlackGeneration}} ($O(1)$ rounds) gets colored either in {\textsc{ColorSparseNodes}} ($O(\log\log \Delta + T)$ rounds, cf. \Cref{lem:coloringSparse}) or in {\textsc{ColorDenseNodes}} ($O(\log\log n +\log^2\log\Delta + T)$ rounds, cf. \Cref{lem:coloringDense}), and all nodes are colored at the end, w.h.p. The overlay computation takes $O(\log\log n)$ rounds. The total runtime is dominated by the last step. \end{proof} \section{Definitions and Notation} We use $n$ and $\Delta$ to denote the number of nodes and the maximum degree of the input graph $G$. We let $d_v$, $\Psi(v)$, and $N(v)$ (or $N_H(v)$, in a subgraph $H$) denote the degree, palette, and the neighborhood, respectively, of a node $v$. We often use $s_v$ for a lower bound on the slack (defined below) of node $v$. For simplicity of exposition, we assume in our analysis that the degree, neighborhood, and palette of a node are always updated, by removing colored neighbors or colors permanently taken by neighbors from consideration (which can be done in a single round). \begin{definition}[similarity, friends, density] ~ Let $\varepsilon>0$ and $G=(V,E)$ be a graph with maximum degree $\Delta$. Nodes $u,v\in V$ are \emph{$\varepsilon$-similar} if $|N(v)\cap N(u)|\ge (1-\varepsilon)\Delta$, and are \emph{$\varepsilon$-friends}, if in addition $\{u,v\}\in E(G)$. Node $u\in V$ is \emph{$\varepsilon$-dense} if it has $(1-\varepsilon)\Delta$ friends. \end{definition} \begin{definition}[Almost-Clique Decomposition (ACD)]\label{def:acd} Let $G=(V,E)$ be a graph and $\varepsilon,\eta\in (0,1)$.
A partition $V=V_{sparse}\cup C_1\cup \ldots\cup C_k$ of $V$ is an \emph{$(\varepsilon,\eta)$-almost-clique decomposition} for $G$ if: \begin{compactenum} \item $V_{sparse}$ does not contain an $\eta$-dense node\ , \item For every $i\in [k]$, $(1-\varepsilon)\Delta\le |C_i|\le (1+\varepsilon)\Delta$\ , \item For every $i\in [k]$ and $v\in C_i$, $|N(v)\cap C_i|\ge (1-\varepsilon)\Delta$\ . \end{compactenum} \label{D:acd} \end{definition} We refer to $C_i$ as \emph{$\varepsilon$-almost-cliques}, omitting $\varepsilon$ when clear from the context. For a node in an almost clique $C_i$, the \emph{antidegree} -- the number of non-neighbors in $C_i$ -- and \emph{external degree} -- the number of neighbors in other almost cliques $C_j$, $j\neq i$ -- are key parameters that we will use. \begin{definition} The \emph{external degree} $e(v)$ of a node $v\in C_i$ is $e(v)=|N(v)\cap \cup_{j\neq i} C_j|$. The \emph{antidegree} $a(v)$ of a node $v\in C_i$ is $a(v)=|C_i\setminus N(v)|$. \end{definition} \Cref{lem:acdproperties} states useful properties of almost-cliques that easily follow from the ACD definition. In \Cref{sec:ACD}, we show how to compute ACD, for some constants $\varepsilon,\eta$, in $O(1)$ rounds. \begin{observation}\label{obs:deltaintersections} Let $a,b,c\in (0,1)$. Let $A,B,C$ be three sets. If $|A\cap C|\ge (1-a)\Delta$, $|B\cap C|\ge (1-b)\Delta$, and $|C|\le (1+c)\Delta$, then $|A\cap B\cap C|\ge (1-a-b-c)\Delta$. \end{observation} \begin{lemma}[ACD properties]\label{lem:acdproperties} Let $C\in \{C_1,\dots,C_k\}$ and $u,v\in C$. It holds that: (i) $u$ and $v$ are $3\varepsilon$-similar, (ii) $C$ has diameter 1 or 2, (iii) $u$ is $3\varepsilon$-dense, (iv) $e(u)\le \varepsilon\Delta$, and (v) for every $w\in C'\neq C$, $|N(u)\cap N(w)|\leq 2\varepsilon\Delta$. 
\end{lemma} \begin{proof} By the definition of ACD, $|N(u)\cap C|,|N(v)\cap C|\ge (1-\varepsilon)\Delta$, and $|C|\le (1+\varepsilon)\Delta$; hence, by Obs.~\ref{obs:deltaintersections}, $|N(u)\cap N(v)\cap C|\ge (1-3\varepsilon)\Delta$. This implies (i) and (ii) (since $u$ and $v$ have a common neighbor in $C$). (iii) follows from (i) and the bound $|N(u)\cap C|\ge (1-\varepsilon)\Delta$. (iv) follows from Def.~\ref{D:acd}, 3, using $d_v\le \Delta$. For (v), consider $u\in C$ and $w\in C'$. Since $d_u,d_w\le\Delta$ and $|N(u)\cap C|,|N(w)\cap C'|\ge (1-\varepsilon)\Delta$, we have $|N(u)\setminus C|,|N(w)\setminus C'|\le \varepsilon\Delta$. As $C\cap C'=\emptyset$, every common neighbor of $u$ and $w$ lies in $N(u)\setminus C$ or in $N(w)\setminus C'$, so $|N(u)\cap N(w)|\le 2\varepsilon\Delta$. \end{proof} \begin{definition}[slack] \label{def:slack} ~Let $v$ be a node with a color palette $\Psi(v)$ in a subgraph $H$ of $G$. The \emph{slack} of $v$ in $H$ is the difference $|\Psi(v)|-d$, where $d$ is the number of uncolored neighbors of $v$ in $H$. \end{definition}
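A toy illustration of \Cref{def:slack} as a Python sketch (the palette and neighborhood data are made up):

```python
def slack(palette, neighbors, colored):
    """Slack of a node in the current subgraph: palette size minus the
    number of still-uncolored neighbors. Colors permanently taken by
    neighbors are assumed to have been removed from the palette already."""
    uncolored = [u for u in neighbors if u not in colored]
    return len(palette) - len(uncolored)

# A node with 6 remaining colors and 5 neighbors, 2 of which are already colored:
palette_v = {0, 1, 2, 5, 8, 9}
neighbors_v = ["a", "b", "c", "d", "e"]
print(slack(palette_v, neighbors_v, colored={"a", "d"}))  # 6 - 3 = 3
```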
\section{Introduction} \IEEEPARstart{C}{aching} networks can reduce the routing costs for accessing contents by caching the requested contents as close to the requesting users as possible. Prevailing caching networks include content delivery networks (CDN)~\cite{CDN,CDN2}, information-centric networks (ICN)~\cite{YehICN2014}, femtocell networks~\cite{femtocell}, web caching networks~\cite{DSR}, and peer-to-peer networks~\cite{p2p}. There has been extensive previous work (e.g., \cite{poularakis2018distributed, YehSigmetrics, ao2015distributed}) on how to optimally allocate contents to available caches. However, most existing work assumes that cache nodes are altruistic and cooperate with each other to optimize an overall network performance objective. In practice, cache nodes may belong to different entities \cite{Milking}. For example, in wireless community mesh networks such as Google WiFi \cite{afanasyev2010usage} and Guifi \cite{vega2012topology}, individual users contribute their wireless routers (as caches) to the community. On the Internet, different operators and providers deploy their own caching infrastructures and services. Examples include AT$\&$T Content Delivery Network Service, Google Global Cache, Netflix Open Connect, and Akamai.\footnote{In this paper, we treat one provider as one cache node. Another example is a caching network where each provider owns multiple cache nodes. In such a case, we can model the interactions among multiple cache nodes belonging to the same provider as a coalitional game.} In caching networks where different entities operate their own caches, cache nodes may behave selfishly to maximize their own benefits. For example, in multi-hop wireless community mesh networks \cite{draves2004routing}, a cache node has an incentive to cache the content items to minimize its own routing cost, which may not always maximize the social welfare. This motivates us to study the selfish caching behaviors through a game-theoretic approach.
To the best of our knowledge, this is the first paper to examine \emph{selfish caching games} on arbitrary directed graphs with heterogeneous content popularity. We focus on the pure strategy Nash equilibrium (PSNE),\footnote{The main reason for implementing PSNE in practice is simplicity \cite{ValidUtilityGame}.} and address two fundamental questions. \emph{First, is a PSNE guaranteed to exist in any selfish caching game?} \emph{Second, if a PSNE exists, does it have a guaranteed efficiency in terms of social welfare?} The short answers to the above two questions are ``No'' and ``No''. In other words, the selfish caching game does not always admit a PSNE. Even if a PSNE exists, its efficiency in terms of social welfare can be very poor. In this paper, we characterize the conditions under which (i) a PSNE exists, and (ii) a PSNE has a guaranteed efficiency. We characterize the efficiency of PSNE by the \emph{price of anarchy} (PoA), which is the ratio of the social welfare achieved by the worst PSNE to that achieved by a socially optimal strategy \cite{CSgame, roughgarden2002bad}. The analysis of PSNE and PoA takes into account the asymmetric and node-specific interdependencies among cache nodes, which reflect the network topology and content request patterns. Our analysis will help the network designer understand when the network behaves with certain performance guarantees, and how to create these conditions in the network. We analyze the selfish caching game in two scenarios. We first consider a scenario where all contents have equal sizes, which corresponds to practical applications such as video-on-demand services using harmonic broadcasting that divide each video into segments of equal size \cite{juhn1997harmonic}.
We then consider a scenario where contents have unequal sizes, which corresponds to practical applications such as video streaming services over HTTP (e.g., Netflix and Hulu) that split each video into segments of lengths from 2 to 10 seconds \cite{huang2012confused}. Our primary contributions are: \begin{itemize} \item \emph{Selfish Caching Game}: To the best of our knowledge, this is the \emph{first} work that studies the selfish caching game on directed graphs with arbitrary topologies and heterogeneous content popularity. \item \emph{Pure Strategy Nash Equilibrium (PSNE)}: For selfish caching games with equal-sized content items, we first show that a PSNE does not always exist. We then show that a PSNE exists if the network does not have a mixed request loop, i.e., a directed loop in which each edge is traversed by at least one content request. Furthermore, we propose a polynomial-time algorithm to find a PSNE for the selfish caching game with no mixed request loop. \item \emph{Price of Anarchy}: We show that the PoA in general can be arbitrarily poor if we allow arbitrary content request patterns. Furthermore, adding extra cache nodes can make the PoA worse, a phenomenon which we call the \emph{cache paradox}. However, when cache nodes have homogeneous request patterns, we show that the selfish caching game is an $\alpha$-scalable valid utility game and the PoA is bounded in arbitrary-topology caching networks. \item \emph{Approximate PSNE}: For selfish caching games with unequal-sized content items, each node's payoff maximization problem is NP-hard. When cache nodes have limited computational capability, we show that their selfish caching behaviors lead to an approximate PSNE with bounded PoA in certain cases of interest. \end{itemize} The rest of the paper is organized as follows. In Section \ref{sec:literature}, we review related literature. In Section \ref{sec:model}, we introduce our system model. 
In Section \ref{sec:SCG}, we model the selfish caching game and analyze the PSNE. In Section \ref{sec:PoA}, we study the PoA. In Section \ref{sec:Approx}, we analyze selfish caching games with unequal-sized content items. In Section \ref{sec:simu}, we provide simulation results. We conclude in Section \ref{sec:conclusion}. \section{Related Work}\label{sec:literature} There has been a rich body of previous work on caching, many of which are summarized in an excellent recent survey \cite{Survey}. In the following, we introduce related work regarding caching optimization and selfish caching game, respectively. \textbf{Caching Optimization.} There is considerable recent literature on a variety of caching optimization problems, including proactive caching \cite{shukla2017hold, tadrous2016joint}, optimal caching under queuing models \cite{YuanyuanInfocom2020, KellyCache2019}, optimal caching under unknown content popularities \cite{garetto2015efficient, zhang2018coded}, distributed adaptive algorithms for optimal caching \cite{poularakis2018distributed, YehSigmetrics, ao2015distributed, DrCache2018}, caching at the edges \cite{zhao2018red, li2018hierarchical, zhao2018collaborative, kwak2018hybrid, cao2018optimal}, TTL (time-to-live) caches \cite{FerragutSIGMETRICS2016, DehghanTTLton2019}, optimal caching in evolving networks \cite{qin2018content}, joint caching and routing optimization \cite{dehghan2015complexity, amble2011content, StratisJSAC2018}, optimal cache partitioning \cite{chu2016allocating}, and collaborative caching \cite{gharaibeh2016provably, shin2017t, rahimzadeh2017svc, yu2016enhancing, Lui, maille2015impact}. All the above work assumes that all cache nodes aim to maximize the social welfare. \textbf{Selfish Caching Game.} There are several papers which study selfish caching behaviors in simple settings. 
In \cite{SelfishCaching}, Chun \emph{et al.} study the selfish caching game on undirected graphs with a single content item, assuming homogeneous content popularity across users. In \cite{MarketSharing}, Goemans \emph{et al.} study the content market sharing game, where users get rewards for caching content items. The paper assumes that any node which caches a requested item can serve the request with the same cost, without considering network topology. The authors in \cite{DSR} and \cite{DSR2} study a distributed selfish replication game in an undirected complete graph, where the distance between any two nodes is the same. In \cite{CSR}, Gopalakrishnan \emph{et al.} study the capacitated selfish replication game in an undirected network, where users are equally interested in a set of content items. The analysis in the above literature is applicable to undirected graphs, and some are restricted to homogeneous content popularity. In this work, we study the selfish caching game on directed graphs with arbitrary topologies and heterogeneous content popularity. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{fig/Model} \caption{A caching network with $|V|=8$ nodes and $|\mathcal{I}|=2$ content items, where node 4 (node 7, respectively) is the designated server of item 1 (item 2, respectively). The request forwarding paths are fixed in our model. For example, the path of node 5 requesting item 1 is $p^{(5,1)}=(5,6,4)$, and the path of node 3 requesting item 2 is $p^{(3,2)}=(3,6,5,7)$. }\label{fig:Model} \end{figure} \section{System Model}\label{sec:model} We consider a network of selfish caches, represented by a directed caching graph $G(V,E)$ with an \emph{arbitrary topology}, where $V$ is the set of cache nodes and $E$ is the set of bidirectional edges which enable ARQ with asymmetric edge costs (see an example in Figure \ref{fig:Model}).
Each cache node requests one or more content items (e.g., movies) from the set $\mathcal{I}=\{1,\ldots,|\mathcal{I}|\}$. For each content item $i\in \mathcal{I}$, there is a fixed set of \emph{designated server nodes} $\mathcal{D}^i \subseteq V$, $|\mathcal{D}^i|>0$, that store $i$ in their permanent storage (outside of their caches).\footnote{For example, designated server nodes can be content providers' caches.} We consider equal-sized content items in Sections \ref{sec:model}--\ref{sec:PoA}, which correspond to applications such as video-on-demand services using harmonic broadcasting that divide each video into segments of equal size \cite{juhn1997harmonic}.\footnote{Without loss of generality, we normalize the size of each item to be one.} We will consider the case of unequal-sized items in Section \ref{sec:Approx}. \subsection{Caching Strategies} Each node $s\in V$ has a cache of capacity $c_s \in \mathbb{N}$, i.e., node $s$ can store exactly $c_s$ equal-sized content items. We denote the caching strategy of node $s\in V$ by $\boldsymbol{x}_s =\{x_{si}: \forall i\in \mathcal{I}\} \in\{0,1\}^{|\mathcal{I}|}$, where $$x_{si} \in\{0,1\}, \mbox{ for all } i\in\mathcal{I},$$ indicates whether node $s$ stores content item $i$, and satisfies \begin{equation*} \textstyle \sum_{i\in\mathcal{I}} x_{si} \leq c_s, \mbox{ for all } s\in V. \end{equation*} We let $\boldsymbol{x}_{-s}=\{\boldsymbol{x}_1,\ldots, \boldsymbol{x}_{s-1},\boldsymbol{x}_{s+1},\ldots, \boldsymbol{x}_{|V|}\}$ denote the caching strategy of nodes other than node $s$, and let $\boldsymbol{x}=\{\boldsymbol{x}_s,\boldsymbol{x}_{-s} \}$ denote the global caching strategy. Given $\boldsymbol{x}_s$, we let $Z_s=\{i:x_{si}=1,i\in\mathcal{I}\}$ denote the set of items cached by node $s\in V$. \subsection{Content Requests} We describe each content request by a pair $(s,i)$, where the request source\footnote{We consider a request source to be a point of aggregation which combines many network users. 
While a single user may request a given content item only once over a time period, an aggregation point is likely to submit many requests for a given content item over a time period.} $s\in V$ requests content item $i\in\mathcal{I}$. We assume that each request $(s,i)$ arrives according to a stationary ergodic process \cite{jiang2018convergence, PanigraphyPoisson2018} with arrival rate $\lambda _{(s,i)} \geq 0$ for all $s \in V$ and $i \in \mathcal{I}$, which reflects \emph{heterogeneous content popularity} across items and request nodes.\footnote{We consider selfish caching behaviors under complete information, where cache nodes know all other nodes' content request patterns \cite{Milking,SelfishCaching,MarketSharing}. Specifically, cache nodes can estimate content request patterns through historical information or long-term learning \cite{shukla2017hold}.} Request $(s,i)$ is forwarded over a pre-determined fixed \emph{request forwarding path}\footnote{As in named data networks, we assume that the request forwarding path is determined on a longer timescale than caching. We also consider selfish caching behaviors under complete information, where cache nodes know the request forwarding paths \cite{Milking,SelfishCaching,MarketSharing}.} $p^{(s,i)}$, from request source $s$ to one of content item $i$'s designated server nodes in $\mathcal{D}^i$. Specifically, the path $p^{(s,i)}$ of length $K \leq |V|$ is a sequence $(p_1,\ldots,p_K)$ of nodes $p_k\in V$ such that $p_1=s$, $p_K\in\mathcal{D}^i$, and $(p_k,p_{k+1})\in E$ for all $k\in\{1,\ldots,K-1\}$. We require that $p^{(s,i)}$ contains no loops ($p_k\neq p_l$ for all $1\leq k< l\leq K$) and no node other than the terminal node on $p^{(s,i)}$ is a designated server for content item $i$ ($p_k\not\in\mathcal{D}^i$ for all $1\leq k<K$). For request $(s,i)$, we let $V_{(s,i)}=\{v:v\in p^{(s,i)}, v\neq s, v\notin \mathcal{D}^i \}$ denote the set of intermediate nodes on path $p^{(s,i)}$.
We let $V_s=\cup_{i\in\mathcal{I}}V_{(s,i)}$ denote the set of intermediate nodes on all the request forwarding paths of node $s$.\footnote{Note that each cache node can play some or all of the following roles: a designated server of content items, a source of requests, and an intermediate node on request forwarding paths.} Request $(s,i)$ travels along path $p^{(s,i)}$ until either (i) the request reaches a node $v\in p^{(s,i)}$ such that node $v$ caches content item $i$, i.e., $x_{vi}=1$, or (ii) if $x_{vi}=0$ for all $v\in p^{(s,i)}\setminus \{p_K\}$, the request reaches $p_K\in \mathcal{D}^i$. Having found the closest copy of content item $i$, the network generates a \emph{response message} carrying the requested content item $i$. The response message is propagated in the reverse direction along the request forwarding path, i.e., from the closest node with content item $i$ back to the request source node $s$.\footnote{In this paper, we assume that forwarding and transmission follow standard network protocols. In some settings, forwarding and transmission incur a service cost to the cache node due to the consumption of transmit power and communication resources.
We will consider such costs in future work.} \begin{table}[t] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{Key Notation} \begin{tabular}{l l } \hline $G(V,E)$ & Caching graph, with nodes in $V$ and edges in $E$ \\ $c_s$ & Cache capacity of node $s\in V$ \\ $w_{uv}$ & Cost on edge $(u,v)\in E$ \\ $\mathcal{I}$ & Set of content items \\ $\mathcal{D}^i$ & Set of designated servers for content item $i \in \mathcal{I}$ \\ $(s,i)$ & Request for item $i$ from node $s$ \\ $\lambda_{(s,i)}$ & Arrival rate of request $(s,i)$ \\ $p^{(s,i)}$ & Request forwarding path of request $(s,i)$ \\ $V_{(s,i)}$ & The set of intermediate nodes on path $p^{(s,i)}$\\ $V_{s}$ & The set of intermediate nodes on node $s$' paths\\ $x_{si}$ & Caching strategy of node $s\in V$ for item $i\in \mathcal{I}$\\ $\boldsymbol{x}_s$ & Caching strategy of node $s\in V$ \\ $Z_s$ & The set of content items cached by node $s\in V$\\ $\boldsymbol{x}$ & Global caching strategy of all nodes \\ $h_{(s,i)}$ & The routing cost to serve request $(s,i)$ \\ $h_s$ & The routing cost of node $s$ \\ $g_s$ & The caching gain of node $s$ \\ $G$ & The aggregate caching gain in the network \\ \hline \end{tabular} \label{table:Notation} \end{table} \subsection{Routing Costs}\label{sec:RoutingCosts} Transferring a content item across edge $e=(u,v)\in E$ incurs a cost (e.g., delay or financial expense) denoted by $w_{uv}\geq 0$.\footnote{We do not model the congestion effect on each edge. How to jointly consider cost and throughput issues is an interesting open problem.} Since the size of each request message is small relative to that of a content item, we assume that costs are only due to content item transfers and that the costs of forwarding requests are negligible \cite{YehSigmetrics}.
To serve request $(s,i)$, the routing cost depends on the caching decision $x_{si}$ of the request source node $s$, as well as the caching decisions $x_{vi}, \forall v\in V_{(s,i)}$, of all the intermediate nodes on the request forwarding path $p^{(s,i)}$. Specifically, the routing cost of transferring item $i$ over the reverse direction of $p^{(s,i)}$ is \begin{equation*} \begin{aligned} &\textstyle h_{(s,i)}\left(x_{si},\{x_{vi}:v\in V_{(s,i)}\} \right)\\ =& \textstyle \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} \prod_{k'=1}^{k}\left( 1-x_{p_{k'}i} \right)\\ =& \textstyle \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} \left( 1-x_{si} \right) \prod_{k'=2}^{k}\left( 1-x_{p_{k'}i} \right). \end{aligned} \end{equation*} Note that $h_{(s,i)}(\cdot)$ includes the cost on edge $(p_{k+1},p_k)$, i.e., $w_{p_{k+1}p_k}$, if and only if none of the nodes from $p_1$ to $p_k$ on path $p^{(s,i)}$ has cached content item $i$. For example, in Figure \ref{fig:Model}, $p^{(3,2)}=(3,6,5,7)$ and the routing cost of request $(3,2)$ depends on $x_{32}$, $x_{62}$, and $x_{52}$. If $(x_{32},x_{62},x_{52})=(0,0,1)$, then $h_{(3,2)}(x_{32},x_{62},x_{52})=w_{63}+w_{56}$. \subsection{Selfish Caching Behavior} Each selfish cache node $s\in V$ seeks a caching strategy to optimize its own benefit, i.e., to minimize the aggregate expected cost for serving all its own requests, calculated as follows: \begin{equation}\label{eq:aggregatecost} \begin{aligned} & h_s\left(\boldsymbol{x}_s, \{\boldsymbol{x}_v: v\in V_s\} \right)\\ =& \sum_{i\in\mathcal{I}}\lambda_{(s,i)} \cdot h_{(s,i)}\left(x_{si},\{x_{vi}:v\in V_{(s,i)}\} \right) . \end{aligned} \end{equation} For notational simplicity, we write $h_s(\cdot)$ as $h_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s})$. In the absence of caching, i.e., $\boldsymbol{x}=\boldsymbol{0}$, the aggregate expected cost of node $s$ is: \begin{equation*} h_s(\boldsymbol{0})=\sum_{i\in\mathcal{I}}\lambda_{(s,i)} \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k}.
\end{equation*} We define the \emph{caching gain} of node $s$ as \begin{equation}\label{eq:CachingGain} g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s})=h_s(\boldsymbol{0})-h_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}). \end{equation} Intuitively, the caching gain is the cost reduction enabled by caching. Since $h_s(\boldsymbol{0})$ is a constant, minimizing the aggregate expected cost in~\eqref{eq:aggregatecost} is equivalent to maximizing the caching gain in~\eqref{eq:CachingGain}. Hence, the caching gain in~\eqref{eq:CachingGain} serves as node $s$' payoff function. \begin{figure}[t] \centering \includegraphics[width=0.33\textwidth]{fig/NoNErequestweight} \caption{An example where PSNE does not exist. The caching network has $|V|=5$ nodes and $|\mathcal{I}|=2$ content items, where node 4 (node 5, respectively) is the designated server of item 1 (item 2, respectively). The cache capacity is $1$ at each node. The request arrival rates satisfy $\lambda_{(v,i)}=\lambda_i, \forall v\in V, i\in \mathcal{I}$, where $\lambda_1=10$ and $\lambda_2=14$. The request forwarding paths are fixed, for example, $p^{(3,1)}=(3,2,4)$.}\label{fig:NoNE} \end{figure} \section{Selfish Caching Game}\label{sec:SCG} In this section, we model the interactions among selfish cache nodes by a selfish caching game on directed graphs. We construct an example where the pure strategy Nash equilibrium (PSNE) does not exist for such a game. We then identify a condition under which a PSNE exists, and propose a polynomial-time algorithm to find a PSNE under this condition.
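As a concrete check of the routing-cost and caching-gain definitions above, consider the following minimal sketch. The node labels follow the example of Figure \ref{fig:Model}; the numeric weights are hypothetical:

```python
def routing_cost(path, w, cached):
    """h_{(s,i)} from the text: sum the reverse-edge weight w[(p_{k+1}, p_k)]
    for every hop reached while no node p_1..p_k on the path caches item i.
    `cached[v]` plays the role of x_{vi}; missing nodes default to 0."""
    cost, miss = 0.0, 1.0
    for k in range(len(path) - 1):
        miss *= 1.0 - cached.get(path[k], 0)    # factor (1 - x_{p_k i})
        cost += w[(path[k + 1], path[k])] * miss
    return cost

def caching_gain_one_request(path, w, cached):
    """g_s restricted to a single request: cost with no caching minus the
    cost under `cached`, mirroring g_s = h_s(0) - h_s(x) summed over requests."""
    return routing_cost(path, w, {}) - routing_cost(path, w, cached)
```

With $p^{(3,2)}=(3,6,5,7)$ and $(x_{32},x_{62},x_{52})=(0,0,1)$, the sketch reproduces $h_{(3,2)}=w_{63}+w_{56}$, as in the example above.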
\subsection{Game Modeling} We define the selfish caching game as follows: \begin{game}[Selfish Caching Game on Directed Graphs] $ $ \begin{itemize} \item Players: the set $V$ of cache nodes on the caching graph; \item Strategies: the caching strategy $\boldsymbol{x}_s=\{x_{si}: \forall i\in\mathcal{I}\}$ for each cache node $s\in V$, where $x_{si}\in\{0,1\}$ and $\sum_{i\in\mathcal{I}}x_{si} \leq c_s$; \item Payoffs: the caching gain $g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s})$ for each $s\in V$. \end{itemize} \end{game} Since the selfish caching game is a finite game, there exists at least one mixed strategy Nash equilibrium (including pure strategy Nash equilibrium as a special case). However, since it is difficult to implement random caching strategies in practical caching networks, we focus on analyzing pure strategy Nash equilibria in this paper, as defined below. \begin{definition}[Pure Strategy Nash Equilibrium] A pure strategy Nash equilibrium of the selfish caching game is a caching strategy profile $\boldsymbol{x}^{\rm NE}$ such that for every cache node $s\in V$, \begin{equation} g_s(\boldsymbol{x}_s^{\rm NE},\boldsymbol{x}_{-s}^{\rm NE}) \geq g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}^{\rm NE}), \mbox{ for all feasible } \boldsymbol{x}_s. \end{equation} \end{definition} \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{fig/NoNErequestAPP} \caption{An example where there is no mixed request loop. The request forwarding path is $p^{(3,1)}=(3,1,2,4)$. }\label{fig:NoNEAPP} \end{figure} \subsection{An Example with No PSNE} In the following, we first show that the PSNE does not always exist. \begin{theorem}\label{theo:NoNE} There exists a selfish caching game for which the pure strategy Nash equilibrium does not exist. \end{theorem} \begin{proof} Figure \ref{fig:NoNE} is an example with no PSNE. For node 4 (node 5, respectively), caching item 2 (item 1, respectively) is its dominant strategy. 
Now we analyze the selfish behaviors of nodes 1, 2, and 3. It is easy to verify that for all 8 feasible caching strategy profiles, there always exists one cache node that can improve its caching gain by changing its caching strategy unilaterally. For example, if all three nodes cache item 1, then node 3 has an incentive to cache item 2 to improve its caching gain, assuming that the other two nodes do not change their caching strategies. Hence, there is no strategy profile in which every node attains its maximum payoff given that the other nodes keep their strategies fixed, and thus the PSNE does not exist. \end{proof} \subsection{Existence of a PSNE} Deciding the existence of a PSNE for games on graphs is NP-hard in general \cite{wang2014belief}. However, we identify a condition under which a PSNE of the selfish caching game exists and can be found in polynomial time. To proceed, we first introduce the definition below. \begin{definition}[Mixed Request Loop] A mixed request loop on a directed graph is a directed loop $(p_1,p_2,\ldots,p_K,p_{K+1}=p_1)$ involving $3 \leq K \leq |V|$ nodes, where $p_k\in V$ for $1 \leq k \leq K$, $p_k\neq p_l$ for all $1\leq k< l\leq K$, and at least one content request traverses edge $(p_k,p_{k+1})\in E$ for all $1 \leq k \leq K$. \end{definition} In Figure \ref{fig:NoNE}, $(1,3,2,1)$ forms a mixed request loop, where requests for item 1 traverse edge $(3,2)$, and requests for item 2 traverse edges $(2,1)$ and $(1,3)$. Note that a loop on a graph is not always a mixed request loop. For example, $(1,3,2,1)$ in Figure \ref{fig:NoNEAPP} is a loop. However, the request forwarding path is $p^{(3,1)}=(3,1,2,4)$ rather than $(3,2,4)$, meaning no request traverses edge $(3,2)$. Hence, loop $(1,3,2,1)$ is not a mixed request loop. In other words, we can avoid mixed request loops by properly choosing the request forwarding paths.
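The no-mixed-request-loop condition can be checked mechanically: collect the edges traversed by at least one request forwarding path and test for a directed cycle. A minimal DFS-based sketch (the paths below are hypothetical; note that the definition above additionally requires the cycle to visit at least three distinct nodes, which this sketch does not enforce):

```python
def traversed_edges(paths):
    """Edges that carry at least one request, over all forwarding paths."""
    return {(p[k], p[k + 1]) for p in paths for k in range(len(p) - 1)}

def has_request_cycle(paths):
    """Detect a directed cycle among traversed edges via three-colour DFS."""
    adj = {}
    for u, v in traversed_edges(paths):
        adj.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {}
    def dfs(u):
        colour[u] = GRAY
        for v in adj.get(u, []):
            c = colour.get(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True
        colour[u] = BLACK
        return False
    return any(colour.get(u, WHITE) == WHITE and dfs(u) for u in adj)
```

For paths loosely modeled on Figure \ref{fig:NoNE}, the traversed edges $(3,2)$, $(2,1)$, $(1,3)$ close a cycle, while the single path $(3,1,2,4)$ of Figure \ref{fig:NoNEAPP} does not.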
Next, we will show that a PSNE exists in the selfish caching game on caching graphs with no mixed request loop.\footnote{Note that the absence of mixed request loops is a sufficient (but not necessary) condition for a PSNE to exist.} \begin{theorem}\label{theo:Existence} A PSNE always exists in the selfish caching game on caching graphs with no mixed request loop. \end{theorem} \begin{proof} See Appendix A. \end{proof} Theorem \ref{theo:Existence} holds in caching networks with arbitrary topologies and heterogeneous content popularity. We prove the existence\footnote{The selfish caching game generally admits multiple PSNEs, depending on system parameters such as edge weights and request arrival rates.} of the PSNE by finding a PSNE in polynomial time. \subsection{Polynomial-Time Algorithm to Find a PSNE} In this section, we present a polynomial-time algorithm to find a PSNE for the selfish caching game. Specifically, for each selfish caching game, we can define a \emph{state graph} \cite{MarketSharing} as follows. Recall that given node $s$' caching strategy $\boldsymbol{x}_s$, the set $Z_s=\{i:x_{si}=1, i\in\mathcal{I}\}$ is the set of content items cached by node $s\in V$. Hence, we can use $\boldsymbol{x}=\{\boldsymbol{x}_s: \forall s\in V\}$ and $Z=\{Z_s: \forall s\in V\}$ interchangeably to represent the caching strategy profile. \begin{definition}[State Graph \cite{MarketSharing}] A state graph is a directed graph where each vertex corresponds to a strategy profile $Z$. There is a directed arc from vertex $Z$ to vertex $Z'$ with label $v$ if the only difference between $Z$ and $Z'$ is the strategy of player $v$ and the payoff of player $v$ in $Z$ is strictly less than its payoff in $Z'$. \end{definition} A PSNE corresponds to a vertex on the state graph without any outgoing arc, i.e., a sink. Hence, identifying a PSNE of the selfish caching game is equivalent to identifying a sink on the corresponding state graph.
We propose a polynomial-time algorithm (Algorithm \ref{algo:FindNESG} \cite{MarketSharing}) to find a sink on the state graph. The algorithm proceeds in rounds. The first round starts at the vertex $Z=\emptyset$, corresponding to the strategy profile where none of the cache nodes caches any content item (Line 1 of Algorithm \ref{algo:FindNESG}). In each round, the first arc traversed on the state graph corresponds to an \emph{add arc}, where a player, say $s$, changes from $Z_s$ to $Z_s \cup \{i^\ast\}$. Intuitively, player $s$ adds only one content item $i^\ast$ to its cache, where we select $i^\ast$ among all content items not currently in $Z_s$ to maximize player $s$' caching gain (Lines 3-4 of Algorithm \ref{algo:FindNESG}). After the first arc, subsequent arcs in the same round correspond to \emph{change arcs}. Specifically, a change arc corresponds to a player, say $v$, replacing $Z_v$ by $Z_v\cup \{j\} \setminus \{t\}$, where $j \notin Z_v$ and $t\in Z_v$. Intuitively, player $v$ replaces content item $t$ with content item $j$ if $g_v(Z_v\cup \{j\} \setminus \{t\},Z_{-v})> g_v(Z_v,Z_{-v})$ (Lines 5-7 of Algorithm \ref{algo:FindNESG}). When the current vertex on the state graph has no outgoing change arcs, the round ends. If the vertex where a round ends has an outgoing add arc, a new round starts; otherwise, it is a sink and the algorithm terminates. Such a sink corresponds to a PSNE.
\begin{algorithm}[t] \LinesNumbered \SetAlgoLined \begin{small} \KwIn{$G(V,E), \mathcal{I}, w_{uv}, \forall (u,v)\in E, \lambda_{(s,i)}$ and $p^{(s,i)}$, for all $s\in V, i\in \mathcal{I}$} \KwOut{$Z^{\rm NE}$} Set $Z=\emptyset$\; \Repeat{$\forall s\in V$ satisfies $|Z_s|=c_s$}{ Randomly pick a node $s\in V$ where $|Z_s|<c_s$\; Add item $i^\ast$ where $i^\ast \in \arg \max_{i\in\mathcal{I}\setminus Z_s} g_s(Z_s \cup \{i\},Z_{-s})$ to node $s$, i.e., $Z_s \gets Z_s \cup \{i^\ast\}$\; \While{$\exists v\in V, j\notin Z_v, t\in Z_v$, such that $g_v(Z_v\cup \{j\} \setminus \{t\},Z_{-v})> g_v(Z_v,Z_{-v})$} { Set $Z_v \gets Z_v\cup \{j\} \setminus \{t\}$\; } } Set $Z^{\rm NE} = Z$\; \end{small} \caption{Find PSNE on State Graph \cite{MarketSharing}} \label{algo:FindNESG} \end{algorithm} In the following theorem, we show that Algorithm \ref{algo:FindNESG} can find a sink on the state graph in polynomial time. \begin{theorem}\label{theo:ExistencePolytime} For the selfish caching game on arbitrary-topology caching graphs with no mixed request loop, Algorithm \ref{algo:FindNESG} computes a PSNE in polynomial time by traversing a path of length at most $|V||\mathcal{I}|^2(|V|-2)^2$ on the corresponding state graph. \end{theorem} \begin{proof} See Appendix B. \end{proof} Note that for any given selfish caching game, Algorithm \ref{algo:FindNESG} does not require the construction of the whole state graph. At any given vertex of the state graph, Algorithm \ref{algo:FindNESG} only requires one to find the next arc to traverse, which takes $\mathcal{O}(|V|)$ time. Hence, the total maximum running time of Algorithm \ref{algo:FindNESG} is $\mathcal{O}(|V|^2|\mathcal{I}|^2(|V|-2)^2)$. Furthermore, different random choices of the next arc to traverse in Algorithm \ref{algo:FindNESG} correspond to different outcomes if there is more than one PSNE in the selfish caching game. 
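A minimal sketch of this add-and-swap procedure, with the caching-gain evaluation abstracted behind a user-supplied `gain` callable (all names are illustrative; the sketch assumes more items than cache slots, and termination is guaranteed by Theorem \ref{theo:ExistencePolytime} only on graphs with no mixed request loop):

```python
def _profitable_swap(nodes, items, Z, gain):
    """Return one profitable one-for-one swap (a change arc), or None."""
    for v in nodes:
        for j in items - Z[v]:
            for t in Z[v]:
                Znew = {**Z, v: (Z[v] - {t}) | {j}}
                if gain(v, Znew) > gain(v, Z):
                    return v, Znew[v]
    return None

def find_psne(nodes, items, capacity, gain):
    """Sketch of Algorithm 1: Z maps each node to its cached item set.
    `gain(s, Z)` returns node s' caching gain under profile Z."""
    Z = {s: set() for s in nodes}
    while any(len(Z[s]) < capacity[s] for s in nodes):
        s = next(v for v in nodes if len(Z[v]) < capacity[v])
        # add arc: insert the single item maximising s' caching gain
        best = max(items - Z[s], key=lambda i: gain(s, {**Z, s: Z[s] | {i}}))
        Z[s].add(best)
        # change arcs: apply profitable swaps until none remain
        swap = _profitable_swap(nodes, items, Z, gain)
        while swap is not None:
            v, Zv = swap
            Z[v] = Zv
            swap = _profitable_swap(nodes, items, Z, gain)
    return Z
```

Since swaps preserve cache sizes and items are only added when a node is below capacity, every intermediate profile is feasible.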
Since each cache node maximizes its own benefit, a PSNE of the selfish caching game does not in general optimize the social welfare. We will quantify the efficiency of the Nash equilibria in terms of social welfare next. \section{Price of Anarchy}\label{sec:PoA} To evaluate the efficiency of Nash equilibria, we analyze the price of anarchy (PoA) \cite{CSgame}, i.e., the ratio of the social welfare achieved by the worst Nash equilibrium to that achieved by a socially optimal strategy. In this paper, we define the social welfare as the aggregate caching gain in the network. Specifically, the social welfare maximization problem is \begin{equation}\label{prob:maxCG} \begin{aligned} \displaystyle \mbox{max}~ & ~~ \textstyle G(\boldsymbol{x}) \triangleq \sum_{s\in V} g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) \\ \mbox{s.t.}~ & ~~ \textstyle \sum_{i\in\mathcal{I}} x_{si} \leq c_s, ~x_{si} \in\{0,1\}, ~ \forall s\in V, i\in\mathcal{I} . \end{aligned} \end{equation} Problem \eqref{prob:maxCG} is NP-hard \cite{YehSigmetrics}. It is challenging to calculate the socially optimal solution and analyze the PoA in general. In the following, we first show that the PoA can be arbitrarily poor if arbitrary content request patterns are allowed. We then identify a cache paradox, in which adding extra cache nodes can make the PoA worse. Under reasonable constraints on request patterns and paths, however, we can show that the PoA is bounded in general caching networks. Furthermore, for given caching networks with known network topology and parameters, we can derive a better bound on the PoA. \subsection{An Example with an Arbitrarily Poor PoA}\label{sec:BadPoA} Next, we show that the PoA can be arbitrarily close to $0$, indicating that selfish caching behaviors can lead to unboundedly poor performance in terms of social welfare. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{fig/PoA2} \caption{An example where PoA approaches $0$.
Consider a caching network with $|V|=3$ nodes where node 3 is the designated server of the content items in set $\mathcal{I}=\{1,2,\ldots,I\}$. The request arrival rates at node 1 satisfy $\lambda_{(1,1)}>0$ and $\lambda_{(1,i)}= \lambda_{(1,1)}-\epsilon>0$ for $i\in \mathcal{I}\setminus \{1\}$, where $\epsilon > 0$. Node 2 does not generate requests, i.e., $\lambda_{(2,i)} = 0, \forall i\in \mathcal{I}$. The cache capacities are $c_1=1$ and $c_2=I-1$.}\label{fig:PoA2} \end{figure} \begin{lemma} There exists a selfish caching game for which the PoA is arbitrarily close to $0$. \end{lemma} \begin{proof} We construct an example where the PoA approaches $0$, as shown in Figure \ref{fig:PoA2}. In this example, the socially optimal caching strategy is for node 1 to cache content item 1 and for node 2 to cache content items $2$ to $I$. The optimal social welfare, i.e., aggregate caching gain, is\footnote{The superscript ``SO'' represents socially optimal.} \begin{equation*} \textstyle G^{\rm SO}=\lambda_{(1,1)}(w_{21}+w_{32})+\sum_{i=2}^I \lambda_{(1,i)}w_{32}. \end{equation*} There may exist more than one PSNE. One is that node 1 caches item 1 and node 2 caches none of the content items (since node $2$ has no requests of its own). The social welfare achieved by this PSNE is $\textstyle G^{\rm NE}=\lambda_{(1,1)}(w_{21}+w_{32}).$ We have \begin{equation*} \textstyle \frac{G^{\rm NE}}{G^{\rm SO}} =\frac{1}{1+\sum_{i=2}^I\frac{\lambda_{(1,i)}w_{32}}{\lambda_{(1,1)}(w_{21}+w_{32})} }. \end{equation*} When $w_{32} \gg w_{21}$ and $\epsilon \to 0$, we have $\frac{\lambda_{(1,i)}w_{32}}{\lambda_{(1,1)}(w_{21}+w_{32})} \to 1$ and $\frac{G^{\rm NE}}{G^{\rm SO}} \to \frac{1}{I}$, which goes to $0$ as $I$ grows large. Since the PoA measures the worst-case ratio between the social welfare of a PSNE and that of the socially optimal solution, the PoA is no larger than ${G^{\rm NE}}/{G^{\rm SO}}$ and hence can be arbitrarily close to $0$.
\end{proof} \subsection{Cache Paradox} In practice, one way to improve the aggregate caching gain in the network is to add extra cache nodes. However, we identify the following cache paradox. \begin{lemma} In the selfish caching game, adding extra cache nodes can make the PoA worse. \end{lemma} \begin{proof} Consider a caching network with two nodes in Figure \ref{fig:Paradox} (left subfigure), where node 2 is the designated server for two content items. Assume $c_1=1$ and $\lambda_{(1,1)}>\lambda_{(1,2)}>0$. At the equilibrium, node 1 caches item 1, which is also socially optimal. Hence, $PoA = 1$. Now we add an extra cache node, i.e., node 3 (see the right subfigure in Figure \ref{fig:Paradox}), where $c_3=1$ and $\lambda_{(3,1)}=\lambda_{(3,2)}=0$. Assume $w_{21}=w_{31}+w_{23}$, $w_{31}>0$, and $w_{23}>0$. Then one equilibrium is that node 1 caches item 1 and node 3 caches nothing. However, the socially optimal strategy is that node 1 caches item 1 and node 3 caches item 2. Hence, the PoA with node 3 satisfies $$PoA'=\frac{\lambda_{(1,1)} (w_{31}+w_{23}) }{ \lambda_{(1,1)} (w_{31}+w_{23})+\lambda_{(1,2)} w_{23} } < 1.$$ Intuitively, adding node 3 does not change the social welfare achieved at the equilibrium, but increases the optimal social welfare, and hence makes the PoA worse. \end{proof} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig/Paradox} \caption{An example where adding an extra cache node makes the PoA worse.}\label{fig:Paradox} \end{figure} \subsection{Bound on PoA} In this section, we show that under reasonable constraints on request patterns and paths, the selfish caching game belongs to a class of games that we call \emph{$\alpha$-scalable valid utility games,} and the PoA is bounded by the length of the longest request forwarding path in the network. Recall that $Z_s=\{i:x_{si}=1,i\in\mathcal{I}\}$ represents the set of content items cached by node $s\in V$.
For convenience, we express the caching gain of node $s$ as $g_s(Z_s,Z_{-s})$, and the aggregate caching gain of the network as $G(Z)=\sum_{s\in V}g_s(Z_s,Z_{-s})$. We first define \emph{valid utility games}, introduced by Vetta \cite{ValidUtilityGame}, for general games not restricted to selfish caching games. \begin{definition}[Valid Utility Game \cite{ValidUtilityGame}] A game (with social function $\gamma(\cdot)$ and individual payoff functions $f_s(\cdot), \forall s\in V$)\footnote{Note that the social function can be any objective that the network aims to optimize, and may not be the summation of individual players' payoff functions.} is a valid utility game if the following three properties are satisfied: \begin{enumerate} \item The social function $\gamma(\cdot)$ is non-decreasing and submodular. Mathematically, for every content item $i\in \mathcal{I}$ and for any subsets $Z, Z'$ such that $Z \subseteq Z'$, \begin{align} & \gamma(Z) \leq \gamma(Z'), \label{eq:prop21} \\ & \gamma(Z \cup \{i\}) - \gamma(Z) \geq \gamma(Z' \cup \{i\}) - \gamma(Z'). \label{eq:prop22} \end{align} \item The sum of players' payoff functions $f_s(\cdot)$ for any strategy profile $\boldsymbol{x}$ is no larger than the social function $\gamma(\cdot)$: \begin{equation}\label{eq:prop1} \textstyle \sum_{s\in V} f_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) \leq \gamma(\boldsymbol{x}). \end{equation} \item The payoff of a player is no less than the difference between the social function when the player participates and that when it does not participate: \begin{equation}\label{eq:prop3} f_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) \geq \gamma(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) - \gamma(\boldsymbol{0},\boldsymbol{x}_{-s}). \end{equation} \end{enumerate} \end{definition} Vetta \cite{ValidUtilityGame} proved that the PoA of a valid utility game is bounded by $2$.
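Monotonicity \eqref{eq:prop21} and submodularity \eqref{eq:prop22} can be verified by brute force on small instances. A sketch for an arbitrary set function (exponential in the ground set, so illustrative only; the test functions used are toy examples, not the caching gain of a specific network):

```python
from itertools import combinations

def subsets(ground):
    """All subsets of the ground set, as frozensets."""
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_monotone_submodular(gamma, ground):
    """Check, for every pair Z <= Z', that gamma is non-decreasing and that
    marginal gains are diminishing: gamma(Z+i) - gamma(Z) >= gamma(Z'+i) - gamma(Z')."""
    for Z in subsets(ground):
        for Zp in subsets(ground):
            if Z <= Zp:
                if gamma(Z) > gamma(Zp):        # violates monotonicity
                    return False
                for i in ground:                # violates diminishing returns
                    if gamma(Z | {i}) - gamma(Z) < gamma(Zp | {i}) - gamma(Zp):
                        return False
    return True
```

For instance, a coverage function passes both checks, while $\gamma(Z)=|Z|^2$ is monotone but not submodular.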
In the following, we define a new class of games called \emph{$\alpha$-scalable valid utility games,} which generalizes the notion of valid utility games. \begin{definition}[$\alpha$-Scalable Valid Utility Game] A game is an $\alpha$-scalable valid utility game if it satisfies the first two properties above, i.e., \eqref{eq:prop21}, \eqref{eq:prop22}, and \eqref{eq:prop1}, and the payoff of a player is no less than $1/\alpha$ times the difference between the social function when the player participates and that when it does not participate, for a positive constant $\alpha$: \begin{equation}\label{eq:svug} f_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) \geq \frac{1}{\alpha} \cdot \Big[ \gamma(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) - \gamma(\boldsymbol{0},\boldsymbol{x}_{-s}) \Big]. \end{equation} \end{definition} Note that the valid utility game is a special case of the $\alpha$-scalable valid utility games with $\alpha=1$. We show that the selfish caching game is an $\alpha$-scalable valid utility game with $\alpha=\max_{v\in V,i\in\mathcal{I}}|p^{(v,i)}|-1$ when the following two properties are satisfied. \begin{definition}[Homogeneous Request Pattern Property]\label{assum:arrival} The request arrival processes for content item $i\in\mathcal{I}$ at different nodes are the same, i.e., \begin{equation}\label{eq:homolambda} \lambda_{(s,i)}=\lambda_i, \forall s \in V, i\in \mathcal{I}. \end{equation} \end{definition} The homogeneous request pattern property implies that each content item has a global popularity. Note that even under the homogeneous request pattern property, the popularity of different content items can differ, i.e., we may have $\lambda_i \neq \lambda_j$ for $i\neq j$, $i,j \in \mathcal{I}$. \begin{figure}[t] \centering \includegraphics[width=0.26\textwidth]{fig/PathOverlap} \caption{An example that satisfies the path overlap property.
Here, $p^{(2,1)}=(2,3)$ and $p^{(1,1)}=(1,2,3)$.}\label{fig:PathOverlap} \end{figure} \begin{definition}[Path Overlap Property]\label{assum:path} If node $s$ is on path $p^{(v,i)}$, then starting from node $s$, path $p^{(v,i)}$ overlaps with path $p^{(s,i)}$, i.e., \begin{equation}\label{eq:pathoverlap} s \in p^{(v,i)} \Rightarrow p^{(s,i)} \subseteq p^{(v,i)}. \end{equation} \end{definition} Figure \ref{fig:PathOverlap} shows an example that satisfies the path overlap property. Note that the path overlap property is naturally satisfied when each node chooses a unique shortest path to fetch content items. \begin{theorem}\label{theo:scalablevalid} The selfish caching game with the homogeneous request pattern and path overlap properties on caching graphs with no mixed request loop is an $\alpha$-scalable valid utility game where \begin{equation} \textstyle \alpha=\max_{v\in V,i\in\mathcal{I}}|p^{(v,i)}|-1. \end{equation} \end{theorem} \begin{proof} See Appendix C. \end{proof} In the following theorem, we show that when the selfish caching game is an $\alpha$-scalable valid utility game, the PoA is bounded by the length of the longest request forwarding path in the network. \begin{theorem}\label{theo:PoAalpha} When the selfish caching game is an $\alpha$-scalable valid utility game, the PoA satisfies \begin{equation}\label{eq:PoAalpha} PoA \geq \frac{1}{1+\alpha} = \frac{1}{\max_{v\in V,i\in\mathcal{I}}|p^{(v,i)}|}. \end{equation} \end{theorem} \begin{proof} See Appendix D. \end{proof} The PoA bound decreases with $\alpha$. The intuition is that as the length of a request forwarding path increases, the selfish behaviors of the intermediate nodes on that path affect more nodes along it. The above performance guarantee holds for general caching networks with an arbitrary topology. However, given a caching network with a known topology and network parameters, we can further exploit the network structure and derive a better bound on the PoA.
This is achieved by characterizing the discrete curvature of the social function, as discussed next. \begin{figure}[t] \centering \includegraphics[width=0.36\textwidth]{fig/PoA1} \caption{An example to calculate the value of $\delta(G)$. We assume $\lambda_{(v,i)}=\lambda_i, \forall v\in V, i\in\mathcal{I}$. According to \eqref{eq:deltaGcalculation}, we have $\delta(G)=\frac{1}{1+w_{21}/w_{32}} \in [0,1].$ When $w_{21}/w_{32} \to 0$, we have $\delta(G) \to 1$; when $w_{21}/w_{32} \to \infty$, we have $\delta(G) \to 0$. }\label{fig:delta} \end{figure} \subsection{PoA and the Discrete Curvature of the Social Function} To understand how the discrete curvature \cite{ValidUtilityGame} of the social function affects our PoA analysis, we first introduce the discrete derivative. For a set function $G(\cdot)$, we define the discrete derivative at $Y$ in the direction $Z$ as $$ G'_Z(Y)=G(Y \cup Z) - G(Y) .$$ We define the \emph{discrete curvature} of a non-decreasing, submodular social function $G(\cdot)$ to be \begin{equation}\label{eq:deltaGcalculation} \delta(G)=\max_{s\in V:G'_{Z_s}(\emptyset) > 0} \frac{G'_{Z_s}(\emptyset)-G'_{Z_s}(\mathcal{I}^{|V|}-Z_s)}{G'_{Z_s}(\emptyset)} \in [0,1], \end{equation} where $\mathcal{I}^{|V|}-Z_s$ represents the caching strategy profile (which can be infeasible) under which node $s$ caches the content items in set $\mathcal{I}\setminus Z_s$ while all other nodes cache all content items in set $\mathcal{I}$. Figure \ref{fig:delta} shows an example of calculating the value of $\delta(G)$. Given the discrete curvature of the social function, we can obtain a better bound on the PoA of the selfish caching game.
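The discrete derivative and the curvature $\delta(G)$ in \eqref{eq:deltaGcalculation} can be computed directly on small instances. A sketch over placements encoded as (node, item) pairs (an illustrative encoding; the social functions used below are toy examples rather than the caching gain of a specific network):

```python
def discrete_curvature(G, nodes, items, Z):
    """delta(G) from the curvature definition: for each node s with a positive
    marginal at the empty profile, compare that marginal with the marginal of
    Z_s at the (possibly infeasible) profile where s holds the complementary
    items and every other node holds all items. Z maps node -> cached item set."""
    def dG(S, Y):                         # discrete derivative G'_S(Y)
        return G(Y | S) - G(Y)
    best = 0.0
    for s in nodes:
        Zs = frozenset((s, i) for i in Z[s])
        base = dG(Zs, frozenset())
        if base <= 0:
            continue
        comp = frozenset((s, i) for i in items - Z[s]) | \
               frozenset((v, i) for v in nodes if v != s for i in items)
        best = max(best, (base - dG(Zs, comp)) / base)
    return best
```

A modular (additive) social function has curvature $0$, while a function that only counts distinct items attains the other extreme, $\delta(G)=1$.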
\begin{theorem}\label{theo:deltaG} Given the discrete curvature $\delta(G)$ of the social function, for any selfish caching game with the homogeneous request pattern and path overlap properties on caching graphs with no mixed request loop, the PoA satisfies \begin{equation}\label{eq:PoAdeltaG} PoA \geq \frac{1}{\alpha+\delta(G)}. \end{equation} \end{theorem} \begin{proof} See Appendix E. \end{proof} The PoA bound decreases with $\delta(G)$. The intuition is that under a larger $\delta(G)$, the selfish behavior of a cache node has a greater impact on the achieved social welfare. Since $\delta(G)\in [0,1]$ exploits the curvature property of the given network structure, the performance guarantee in \eqref{eq:PoAdeltaG} is better than the one in \eqref{eq:PoAalpha}. \section{Selfish Caching Games with Unequal-Sized Items}\label{sec:Approx} In this section, we analyze more general selfish caching games with unequal-sized items, which correspond to practical applications such as video streaming services over HTTP (e.g., Netflix and Hulu) that split each video into segments of length 2 to 10 seconds \cite{huang2012confused}. We show that the caching gain maximization problem for each cache node is NP-hard. We further generalize the model by considering that each cache node has limited computational capability to solve its caching gain maximization problem, which may lead to an approximate PSNE. We analyze the existence and efficiency of an approximate PSNE under these two generalizations. \subsection{Game Modeling} Let $L_i$ denote the size of content item $i$, for all $i\in\mathcal{I}$.
We define the selfish caching game with unequal-sized items as follows: \begin{game}[Selfish Caching Game with Unequal-Sized Items]\label{gameapp} $ $ \begin{itemize} \item Players: the set $V$ of cache nodes on the caching graph $G(V,E)$; \item Strategies: the caching strategy $\boldsymbol{x}_s=\{x_{si}: \forall i\in\mathcal{I}\}$ for each cache node $s\in V$, where $x_{si}\in\{0,1\}$ and $\sum_{i\in\mathcal{I}}L_ix_{si} \leq c_s$; \item Payoffs: the caching gain $g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s})$ for each $s\in V$. \end{itemize} \end{game} In the following, we will show that for each cache node $s\in V$, given fixed $\boldsymbol{x}_{-s}$, its caching gain maximization problem is equivalent to a knapsack problem. For node $s \in V$, the caching gain in \eqref{eq:CachingGain} can be equivalently written as \begin{equation}\label{eq:gsapp} \begin{aligned} g_s(\boldsymbol{x}_s,\boldsymbol{x}_{-s}) &= \sum_{i\in \mathcal{I}} \lambda_{(s,i)} \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} ( 1-\prod_{k'=2}^k(1-x_{p_{k'}i}) ) \\ & + \sum_{i\in \mathcal{I} } x_{si} \cdot \lambda_{(s,i)} \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} \prod_{k'=2}^k(1-x_{p_{k'}i}). \end{aligned} \end{equation} Given fixed $\boldsymbol{x}_{-s}$, the first term in \eqref{eq:gsapp} is a constant, while the second term in \eqref{eq:gsapp} depends on $\boldsymbol{x}_s$. Define weight \begin{equation}\label{eq:weightqsi} \textstyle q_{si}(\boldsymbol{x}_{-s})=\lambda_{(s,i)} \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} \prod_{k'=2}^k(1-x_{p_{k'}i}). \end{equation} Intuitively, $q_{si}(\boldsymbol{x}_{-s})$ represents the routing cost for request $(s,i)$ under $\boldsymbol{x}_{-s}$ if $x_{si}=0$.
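With these weights, node $s$' best response is a 0/1 knapsack over items with values $q_{si}(\boldsymbol{x}_{-s})$ and sizes $L_i$. A textbook density-greedy $1/2$-approximation can be sketched as follows (this is one standard variant, shown with hypothetical inputs; Lines 4-12 of Algorithm \ref{algo:FindNECloud} may differ in details):

```python
def knapsack_half_approx(values, sizes, capacity):
    """Greedy 1/2-approximation for the 0/1 knapsack: pack items in order of
    value density, then return the better of the greedy pack and the single
    most valuable item that fits on its own."""
    order = sorted(values, key=lambda i: values[i] / sizes[i], reverse=True)
    pack, used = set(), 0
    for i in order:
        if used + sizes[i] <= capacity:
            pack.add(i)
            used += sizes[i]
    single = max((i for i in values if sizes[i] <= capacity),
                 key=lambda i: values[i], default=None)
    if single is not None and values[single] > sum(values[i] for i in pack):
        return {single}
    return pack
```

The max with the best single item is what lifts the plain greedy pack to a $1/2$ guarantee when every item fits in the cache by itself.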
Given fixed $\boldsymbol{x}_{-s}$, the caching gain maximization problem of node $s$ is equivalent to the following knapsack problem: \begin{equation}\label{prob:knapsack} \begin{aligned} & \max_{\boldsymbol{x}_s} ~\sum_{i\in \mathcal{I} } x_{si} \cdot q_{si}(\boldsymbol{x}_{-s}) \\ & ~~\mbox{s.t.} ~~~~~ \textstyle ~\sum_{i\in\mathcal{I}}L_ix_{si} \leq c_s, x_{si}\in\{0,1\}, \forall i\in \mathcal{I}. \end{aligned} \end{equation} Solving the knapsack problem \eqref{prob:knapsack} is NP-hard.\footnote{When $L_i=L_j, \forall i,j\in \mathcal{I}$, problem \eqref{prob:knapsack} is a max-weight knapsack problem, which is easy to solve and corresponds to the scenario with equal-sized items in Sections \ref{sec:model}--\ref{sec:PoA}.} In practice, each cache node has limited computational capability in a short time period (e.g., minutes or hours for which the request patterns remain unchanged \cite{garetto2015efficient}), and can only solve the knapsack problem \eqref{prob:knapsack} approximately, obtaining a solution $\hat{\boldsymbol{x}}_s=\{\hat{x}_{si}:\forall i\in \mathcal{I}\}$. There is extensive literature on polynomial-time approximation algorithms for the knapsack problem \cite{KnapsackBook}. We present one such algorithm in Lines 4-12 of Algorithm \ref{algo:FindNECloud}, which achieves a $1/2$ approximation ratio (see Section 9.4.2 of \cite{KnapsackBook}). Now we consider the general case where cache nodes obtain only a ${1}/{\beta}$-approximate solution with $\beta >1$ for problem \eqref{prob:knapsack}. This leads to a $\beta$-approximate PSNE of Game \ref{gameapp}. \subsection{Existence of an Approximate PSNE} A $\beta$-approximate PSNE is a strategy profile in which no player can improve its caching gain by more than a factor of $\beta$ by unilaterally changing its strategy.\footnote{An alternative notion of approximate PSNE (see, e.g., \cite{ApproximateNEadditive}) is based on an additive error, rather than a multiplicative one.
Our definition is equally natural, and indeed more in line with the notion of price of anarchy in game theory \cite{CSgame, ValidUtilityGame, ApproximateNE}.} \begin{definition}[$\beta$-Approximate PSNE \cite{ApproximateNE}] A pure strategy profile $\boldsymbol{x}^{\beta-NE}$ is a $\beta$-approximate PSNE if no player can find an alternative pure strategy with a payoff that is more than $\beta$ times its current payoff. That is, for any player $s\in V$, \begin{equation} g_s(\boldsymbol{x}_s',\boldsymbol{x}_{-s}^{\beta-NE}) \leq \beta \cdot g_s(\boldsymbol{x}_s^{\beta-NE},\boldsymbol{x}_{-s}^{\beta-NE}), \mbox{ for all feasible } \boldsymbol{x}_s'. \end{equation} \end{definition} Next, we show that a $\beta$-approximate PSNE exists when the following property is satisfied: \begin{definition}[Cloud Property]\label{assum:cloud} All content items are stored in the same designated server node, i.e., \begin{equation}\label{eq:cloud} |\mathcal{D}^i|=1 \mbox{ and } \mathcal{D}^i=\mathcal{D}^j, \forall i\neq j, i,j\in\mathcal{I}. \end{equation} Furthermore, for each cache node $s\in V$, its request forwarding path for different content items is the same, i.e., \begin{equation}\label{eq:samepath} p^{(s,i)}=p^s, \forall i\in\mathcal{I} . \end{equation} \end{definition} In practice, a network in which all content items are stored in the cloud server satisfies \eqref{eq:cloud}. Note that \eqref{eq:samepath} is naturally satisfied when each node chooses the unique shortest path to fetch content items. Furthermore, \eqref{eq:samepath} holds automatically in a tree topology. In the following, we show that a $\beta$-approximate PSNE exists in Game \ref{gameapp} with the cloud property and the path overlap property in \eqref{eq:pathoverlap}.\footnote{The cloud property and the path overlap property are sufficient conditions for existence.
Analyzing the necessary and sufficient conditions for the existence of an (approximate) PSNE on graphs is an open problem, and we will consider it in future work.} Note that in a caching graph satisfying the cloud property and the path overlap property, there is no mixed request loop. \begin{theorem}\label{theo:ExistenceBeta} A $\beta$-approximate PSNE always exists in Game \ref{gameapp} with the cloud property and the path overlap property. \end{theorem} \begin{proof} See Appendix F. \end{proof} The result holds in arbitrary-topology networks with heterogeneous content popularity. However, the complexity of finding an approximate PSNE may grow exponentially with the number of nodes and their strategies in general. \subsection{Polynomial-Time Algorithm to Find an Approximate PSNE} In this section, we propose a polynomial-time algorithm to find an approximate PSNE of Game \ref{gameapp}. With the cloud property in \eqref{eq:cloud} and \eqref{eq:samepath}, and given designated server node $u$, the caching gain of each cache node $s\in V$ depends not only on $\boldsymbol{x}_s$ but also on $\{\boldsymbol{x}_v: v\in V_s\}$, where $V_s=\{v: v\in p^s, v\neq s, v\neq u\}$ is the set of intermediate nodes on node $s$'s request forwarding path $p^s$. We group nodes with the same number of intermediate nodes into one set, i.e., denote the set of nodes with $|V_s|=m$ by $\mathcal{V}^m=\{s\in V: |V_s|=m\}$, where $0 \leq m \leq |V|-2$. Note that since node $u$ stores all content items, its caching strategy does not affect other nodes. If the path overlap property in \eqref{eq:pathoverlap} is satisfied, we know that if node $v\in V_s$, then $s\notin V_v$. That is, if node $v$ is on path $p^s$, then node $s$ is not on path $p^v$. Hence, for each node $s\in \mathcal{V}^m$ with $m$ intermediate nodes, every intermediate node $v\in V_s$ has a smaller number of intermediate nodes, i.e., $|V_v|<m$.
This motivates us to find the equilibrium strategies for nodes in sets $\mathcal{V}^m, 0 \leq m \leq |V|-2,$ in increasing order of $m$. \begin{algorithm}[t] \LinesNumbered \SetAlgoLined \begin{small} \KwIn{$G(V,E), \mathcal{I}, w_{uv}, \forall (u,v)\in E, \lambda_{(s,i)}$ and $p^{(s,i)}$, for all $s\in V, i\in \mathcal{I}$} \KwOut{$\boldsymbol{x}^{\beta-NE}$} Classify nodes into sets $\mathcal{V}^m$ for $0 \leq m \leq |V|-2$\; \For{$m=0:|V|-2$}{ \For{$s\in \mathcal{V}^m$}{ Set $\hat{\boldsymbol{x}}_s=\boldsymbol{0}$, i.e., $\hat{x}_{si}=0, \forall i\in \mathcal{I}$\; Relax problem \eqref{prob:knapsack} to a linear programming problem by relaxing $\boldsymbol{x}_s\in \{0,1\}^{|\mathcal{I}|}$ to $\widetilde{\boldsymbol{x}}_s\in [0,1]^{|\mathcal{I}|}$\; Compute an optimal solution $\widetilde{\boldsymbol{x}}_s^{\ast}$ of the LP-relaxation\; Set $I_s=\{i:\widetilde{x}_{si}^{\ast}=1\}$ and $F_s=\{i:0<\widetilde{x}_{si}^{\ast}<1\}$\; \eIf{$\sum_{i\in I_s}q_{si}(\boldsymbol{x}_{-s})>\max_{i\in F_s}q_{si}(\boldsymbol{x}_{-s})$}{ Set $\hat{x}_{si}=1, \forall i\in I_s$\; }{ Set $\hat{x}_{sj}=1$ for $j=\arg \max_{i\in F_s}q_{si}(\boldsymbol{x}_{-s})$\; } } } Set $\boldsymbol{x}^{\beta-NE}=\hat{\boldsymbol{x}}$\; \end{small} \caption{Find $\beta$-Approximate PSNE} \label{algo:FindNECloud} \end{algorithm} We propose a polynomial-time algorithm (Algorithm \ref{algo:FindNECloud}) to find a $\beta$-approximate PSNE. Specifically, we find the equilibrium strategies of nodes in sets $\mathcal{V}^m$ for $0 \leq m \leq |V|-2$ sequentially (Lines 1-2 of Algorithm \ref{algo:FindNECloud}). For example, for node $s\in \mathcal{V}^0$ such that $V_s=\emptyset$ (Line 3 of Algorithm \ref{algo:FindNECloud}), its $\beta$-approximate equilibrium strategy $\boldsymbol{x}_s^{\beta-NE}$ is the $\beta$-approximate solution to problem \eqref{prob:knapsack}, calculated by Lines 4-12 of Algorithm \ref{algo:FindNECloud}.
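Under the assumption that the LP-relaxation in Lines 5-7 is solved by the classical greedy-by-density rule (so that the fractional optimum takes a prefix of the items sorted by $q_{si}/L_i$ plus at most one fractional item), the per-node step in Lines 4-12 can be sketched in Python as follows (the data layout is illustrative, not the paper's implementation):

```python
def half_approx_knapsack(q, L, cap):
    """Sketch of Lines 4-12 of Algorithm 1: keep the better of the set I_s
    of fully packed items and the best fractional item in F_s, which yields
    a 1/2 approximation for the knapsack problem.

    q   : dict item -> weight q_{si}(x_{-s});  L : dict item -> size L_i
    cap : cache capacity c_s.  Returns the set of items node s caches.
    """
    order = sorted(q, key=lambda i: q[i] / L[i], reverse=True)
    taken, room, frac = [], cap, None
    for i in order:
        if L[i] <= room:        # item is fully packed in the LP optimum (I_s)
            taken.append(i)
            room -= L[i]
        elif room > 0:          # the single fractional item (F_s)
            frac = i
            break
    if frac is not None and q[frac] > sum(q[i] for i in taken):
        return {frac}
    return set(taken)
```

The returned set is the approximate best response $\hat{\boldsymbol{x}}_s$; comparing $\sum_{i\in I_s}q_{si}$ against $\max_{i\in F_s}q_{si}$ is precisely what guarantees the $1/2$ approximation ratio, since the LP optimum upper-bounds the integral optimum by the sum of those two quantities.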
Note that for a node $s\in\mathcal{V}^0$, $q_{si}(\boldsymbol{x}_{-s})=q_{si}$ is a constant independent of other nodes' strategies. For node $s\in \mathcal{V}^m$ with $1 \leq m \leq |V|-2$ (Line 3 of Algorithm \ref{algo:FindNECloud}), its equilibrium strategy is the $\beta$-approximate solution to problem \eqref{prob:knapsack}, calculated by Lines 4-12 of Algorithm \ref{algo:FindNECloud}.\footnote{Lines 4-12 of Algorithm \ref{algo:FindNECloud} achieve a $1/2$ approximation ratio of problem \eqref{prob:knapsack}, and hence $\beta=2$. Note that $\beta$ is identical across all nodes.} Its equilibrium strategy depends only on the caching strategies of the nodes $v\in V_s$ in its intermediate node set with $|V_v| < m$, and hence $q_{si}(\boldsymbol{x}_{-s})=q_{si}(\{\boldsymbol{x}_{v}^{\beta-NE}:v\in V_s\})$. We continue this sequential process until all nodes decide their $\beta$-approximate equilibrium caching strategies. The resulting caching strategy profile is a $\beta$-approximate PSNE of Game \ref{gameapp}. In the following theorem, we show that Algorithm \ref{algo:FindNECloud} can find a $\beta$-approximate PSNE of Game \ref{gameapp} in polynomial time. \begin{theorem}\label{theo:ExistenceAPP} For Game \ref{gameapp} with the cloud property and the path overlap property, Algorithm \ref{algo:FindNECloud} computes a $\beta$-approximate PSNE in $\mathcal{O}(|V||\mathcal{I}|)$ time. \end{theorem} \begin{proof} See Appendix G. \end{proof} We next analyze the PoA of Game \ref{gameapp}. \subsection{Price of Anarchy} We show that the PoA for the $\beta$-approximate PSNE is bounded under the homogeneous request pattern property in \eqref{eq:homolambda}.
\begin{theorem}\label{theo:PoAapp} For Game \ref{gameapp} with the cloud property, the path overlap property, and the homogeneous request pattern property, the PoA for the $\beta$-approximate PSNE satisfies \begin{equation}\label{eq:PoAbetaapp} PoA^{\beta} \geq \frac{1}{1+\alpha \cdot \beta} = \frac{1}{1+ \beta \cdot \left( \max_{v\in V,i\in\mathcal{I}}|p^{(v,i)}| -1\right)}. \end{equation} \end{theorem} \begin{proof} See Appendix H. \end{proof} The performance guarantee holds for arbitrary caching networks. However, due to cache nodes' limited computational capabilities, the guarantee for the approximate PSNE in \eqref{eq:PoAbetaapp} is worse than the one in \eqref{eq:PoAalpha} for the equal-sized item case. \section{Simulation Results}\label{sec:simu} \begin{figure*}[t] \centering \begin{minipage}[t]{0.49 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/Abilene} \caption{Abilene network.}\label{fig:Abilene} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.49 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/AbileneAdd} \caption{Abilene network with extra nodes.}\label{fig:AbileneAdd} \end{minipage} \end{figure*} \begin{figure*}[t] \centering \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/HomoLambda_Abilene10} \caption{$G(\cdot)$ vs. $c_v$, under both heterogeneous and homogeneous request patterns.}\label{fig:HomoLambda_Abilene10} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_Abilene10} \caption{$G(\cdot)$ vs. $c_v$, under different no. 
of Type-II nodes $N_{II}$.}\label{fig:IdleNodes_Abilene10} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1.024\textwidth]{fig/100TrialsArrow} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. $c_v$, under different $N_{II}$, for 100 trials.}\label{fig:100Trials} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.25 \linewidth} \centering \includegraphics[width=1.1\textwidth]{fig/ImpactExtraNodesTwoServerEnlarged} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. no. of extra cache nodes, under different $c_v$.}\label{fig:ImpactExtraNodesTwoServer} \end{minipage} \end{figure*} We perform simulations on networks including the Abilene network shown in Figure \ref{fig:Abilene}, the GEANT network shown in Figure \ref{fig:GEANT}, and the Grid topology shown in Figure \ref{fig:Grid}. Simulation results show that the performance of Nash equilibria improves with the cache capacity at each node, while it degrades with the number of nodes that do not generate content requests. Furthermore, adding extra cache nodes to the existing network can make the performance of Nash equilibria worse. \textbf{Upper Bound of the Optimal Social Welfare.} Since the social welfare maximization problem \eqref{prob:maxCG} is NP-hard, we calculate an upper bound for the optimal social welfare. Specifically, we relax problem \eqref{prob:maxCG} by relaxing the binary caching strategy $\boldsymbol{x}=\{x_{si}\in\{0,1\}: \forall s\in V,i\in\mathcal{I}\}$ to be a continuous caching probability strategy $\boldsymbol{\phi}=\{\phi_{si}\in[0,1]: \forall s\in V,i\in\mathcal{I}\}$ where $\sum_{i\in\mathcal{I}}\phi_{si}\leq c_s,\forall s\in V$, while keeping the objective function unchanged. 
The relaxed problem is \begin{equation}\label{prob:relaxedG} \textstyle \mbox{max} ~ G(\boldsymbol{\phi})~\mbox{s.t.} ~ \textstyle \sum_{i\in\mathcal{I}} \phi_{si} \leq c_s, ~\phi_{si} \in [0,1], ~ \forall s\in V, i\in\mathcal{I} . \end{equation} The relaxed objective function $G(\boldsymbol{\phi})$ is not concave, so \eqref{prob:relaxedG} is not a convex optimization problem. We approximate $G(\boldsymbol{\phi})$ by $L(\boldsymbol{\phi})$ below \cite{YehSigmetrics}: \begin{equation*} L(\boldsymbol{\phi})= \sum_{s\in V, i\in\mathcal{I}}\lambda_{(s,i)} \sum_{k=1}^{|p^{(s,i)}|-1}w_{p_{k+1}p_k} \min\left\{ 1,\sum_{k'=1}^k \phi_{p_{k'}i} \right\}. \end{equation*} Note that $L(\boldsymbol{\phi})$ is concave, so we can solve the following convex optimization problem in polynomial time: \begin{equation}\label{prob:maxL} \textstyle \mbox{max} ~ L(\boldsymbol{\phi}) ~ \mbox{s.t.} ~ \textstyle \sum_{i\in\mathcal{I}} \phi_{si} \leq c_s, ~\phi_{si} \in [0,1], ~ \forall s\in V, i\in\mathcal{I} . \end{equation} We have the following result: \begin{lemma}\label{lemma:SocialWelfareUB} Let $\boldsymbol{x}^\ast$, $\boldsymbol{\phi}^\ast$, and $\boldsymbol{\phi}^{\ast\ast}$ be the optimal solutions to problems \eqref{prob:maxCG}, \eqref{prob:relaxedG}, and \eqref{prob:maxL}, respectively. Then: \begin{equation} G(\boldsymbol{x}^\ast) \leq G(\boldsymbol{\phi}^\ast) \leq L(\boldsymbol{\phi}^{\ast}) \leq L(\boldsymbol{\phi}^{\ast\ast}) . \end{equation} \end{lemma} \begin{proof} See Appendix I. \end{proof} Hence, $L(\boldsymbol{\phi}^{\ast\ast})$ serves as an upper bound for the optimal social welfare $G(\boldsymbol{x}^\ast)$.
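To make the bound computable, note that each $\min\{1,\cdot\}$ term in $L(\boldsymbol{\phi})$ can be replaced by an auxiliary variable with two linear upper bounds, turning \eqref{prob:maxL} into a linear program. The following Python sketch does this with SciPy on a small hypothetical instance (all node labels, edge costs, and request rates are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: cache nodes 0 and 1, designated server node 2.
# Node 0 forwards requests along path (0, 1, 2), node 1 along (1, 2).
items = [0, 1]
cap = {0: 1.0, 1: 1.0}                    # cache capacities c_s
w = {(1, 0): 2.0, (2, 1): 5.0}            # edge costs w_{p_{k+1} p_k}
paths = {0: [0, 1, 2], 1: [1, 2]}         # common path p^s (cloud property)
lam = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 2.0}

# Variables: all phi_{si}, followed by one t per hop of each request, where
# t replaces min{1, sum_{k'<=k} phi_{p_k' i}} in L(phi).
phi = {(s, i): n for n, (s, i) in
       enumerate((s, i) for s in paths for i in items)}
hops = [((s, i), k) for (s, i) in lam for k in range(len(paths[s]) - 1)]
n = len(phi) + len(hops)

obj = np.zeros(n)                         # linprog minimizes, so negate L
A, b = [], []
for m, ((s, i), k) in enumerate(hops):
    p, t = paths[s], len(phi) + m
    obj[t] = -lam[(s, i)] * w[(p[k + 1], p[k])]
    r1 = np.zeros(n)
    r1[t] = 1.0
    A.append(r1); b.append(1.0)           # t <= 1
    r2 = np.zeros(n)
    r2[t] = 1.0
    for kp in range(k + 1):
        r2[phi[(p[kp], i)]] -= 1.0
    A.append(r2); b.append(0.0)           # t <= sum_{k'<=k} phi_{p_k' i}
for s in paths:                           # capacity: sum_i phi_{si} <= c_s
    r = np.zeros(n)
    for i in items:
        r[phi[(s, i)]] = 1.0
    A.append(r); b.append(cap[s])

res = linprog(obj, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, 1)] * n)
upper_bound = -res.fun                    # L(phi**) >= G(x*)
print(round(upper_bound, 3))
```

Because all objective coefficients $\lambda_{(s,i)}w_{p_{k+1}p_k}$ are nonnegative, each auxiliary variable equals its $\min\{1,\cdot\}$ term at the optimum, so the LP value coincides with the maximum of $L(\boldsymbol{\phi})$.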
We define $\bar{G}(\boldsymbol{x}^{\rm SO})=L(\boldsymbol{\phi}^{\ast\ast})$.\footnote{The superscript ``SO'' represents socially optimal.} In the following, we first perform simulations for the case with equal-sized content items and show the results in Figures \ref{fig:HomoLambda_Abilene10}--\ref{fig:IdleNodes_Ratios_Grid}, which validate the existence of a PSNE in Theorem \ref{theo:Existence} and the PoA analysis in Theorem \ref{theo:PoAalpha}. We then perform simulations for the case with unequal-sized content items and show the results in Figures \ref{fig:HomoLambda_AbileneApp} -- \ref{fig:ImpactExtraNodesApp}, which validate the existence of an approximate PSNE in Theorem \ref{theo:ExistenceBeta} and the PoA analysis of the approximate PSNE in Theorem \ref{theo:PoAapp}. \textbf{Experiment Setup for the Abilene Network.} For the Abilene network shown in Figure \ref{fig:Abilene}, we take all edge costs from the Abilene network configuration \cite{AbileneTopology}.\footnote{We assume that the edge costs are symmetric.} We consider a set $\mathcal{I}=\{1,\ldots,10\}$ of content items \cite{YehSigmetrics}, where node 1 is the designated server of the first 6 content items and node 2 is the designated server of the remaining 4 content items. Each node chooses the shortest path to fetch every content item, following which there is no mixed request loop on the graph. We generate the arrival rates $\lambda_{(s,i)}, \forall s\in V, i\in \mathcal{I}$ uniformly at random in the interval $[0,10]$. \begin{figure*}[t] \centering \begin{minipage}[t]{0.25 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/GEANT} \caption{GEANT network.}\label{fig:GEANT} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/HomoLambda_GEANT} \caption{$G(\cdot)$ vs. 
$c_v$, under both heterogeneous and homogeneous request patterns.}\label{fig:HomoLambda_GEANT} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_GEANT} \caption{$G(\cdot)$ vs. $c_v$, under different no. of Type-II nodes $N_{II}$.}\label{fig:IdleNodes_GEANT} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_Ratios_GEANT} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. $c_v$, under different $N_{II}$.}\label{fig:IdleNodes_Ratios_GEANT} \end{minipage} \end{figure*} \begin{figure*}[t] \centering \begin{minipage}[t]{0.2 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/Grid} \caption{Grid topology.}\label{fig:Grid} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.245 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/HomoLambda_Grid} \caption{$G(\cdot)$ vs. $c_v$, under both heterogeneous and homogeneous request patterns.}\label{fig:HomoLambda_Grid} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.245 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_Grid} \caption{$G(\cdot)$ vs. $c_v$, under different no. of Type-II nodes $N_{II}$.}\label{fig:IdleNodes_Grid} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.245 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_Ratios_Grid} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. 
$c_v$, under different $N_{II}$.}\label{fig:IdleNodes_Ratios_Grid} \end{minipage} \end{figure*} \textbf{Results in the Abilene Network.} Figure \ref{fig:HomoLambda_Abilene10} shows the aggregate caching gain $G(\boldsymbol{x}^{\rm NE})$ and $\bar{G}(\boldsymbol{x}^{\rm SO})$ under different cache capacities at each node,\footnote{We show the results for the case where cache nodes may have different cache capacities in Appendix J.} for the case with heterogeneous request patterns $\lambda_{(s,i)}$ (the upper two curves) and for the case with homogeneous request patterns $\lambda_{(s,i)}=\lambda_i, \forall s\in V, i\in \mathcal{I}$ (the lower two curves)\footnote{We take $\lambda_i=\sum_{s\in V}\lambda_{(s,i)}/|V|$ given the heterogeneous $\lambda_{(s,i)}, \forall s\in V,i\in\mathcal{I}$.}, respectively. We can see that the gap between $G(\boldsymbol{x}^{\rm NE})$ and $\bar{G}(\boldsymbol{x}^{\rm SO})$ under homogeneous $\lambda_i$ is smaller than the gap under heterogeneous $\lambda_{(s,i)}$. Thus, the homogeneous request pattern leads to better relative performance achieved by selfish caching behaviors in the Abilene network. In practice, some cache nodes are intermediate routers which do not request any content items. We define nodes with positive request rates as Type-I nodes (with a total number $N_{I}$), and nodes with no requests as Type-II nodes (with a total number $N_{II}$). We show the impact of $N_{II}$ in Figure \ref{fig:IdleNodes_Abilene10}. We can see that the gap between $G(\boldsymbol{x}^{\rm NE})$ and $\bar{G}(\boldsymbol{x}^{\rm SO})$ decreases with the cache capacity at each node, while the gap increases with $N_{II}$. This implies that the impact of the selfish behaviors is mitigated when the cache resource increases, and the selfish behaviors of Type-II nodes degrade the (relative) performance of Nash equilibria (since the selfish Type-II nodes will not cache content items at equilibrium).
To understand the impact of the randomness of the request arrival rates $\lambda_{(s,i)}$, we perform simulations on $100$ sets of randomly generated $\{\lambda_{(s,i)}: \forall s\in V, i\in \mathcal{I}\}$, and show the average ratios $G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ of the $100$ trials in Figure \ref{fig:100Trials}, where the error bars represent the standard deviations. Consistent with our observations from Figure \ref{fig:IdleNodes_Abilene10}, the performance of the Nash equilibria increases with the cache capacity, while it decreases with $N_{II}$. In practice, one direct way to improve the aggregate caching gain in the network is to add extra cache nodes. To examine the impact of extra caches on the performance of Nash equilibria, we sequentially add node 12, node 13, and so on up to node 21, as shown in Figure \ref{fig:AbileneAdd}. We show the ratio $G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ with different numbers of extra nodes in Figure \ref{fig:ImpactExtraNodesTwoServer}. We can see that adding more extra caches makes the PoA worse. The reason is that adding extra cache nodes can improve the optimal social welfare, while it cannot improve the social welfare achieved by Nash equilibria due to the selfish nature of cache nodes. Hence, the ``relative'' performance of the Nash equilibria (measured in terms of PoA) decreases. \textbf{Results in the GEANT Network.} We perform simulations on the GEANT network shown in Figure \ref{fig:GEANT}. We consider a set $\mathcal{I}=\{1,\ldots,20\}$ of content items. We generate the cost on each edge uniformly at random from the interval $[1, 100]$. We show the performance corresponding to selfish behaviors in Figures \ref{fig:HomoLambda_GEANT}--\ref{fig:IdleNodes_Ratios_GEANT}.
As in the Abilene network, the homogeneous request pattern leads to better (relative) performance achieved by selfish caching behaviors, and the ratio $G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ increases with $c_v$ and decreases with $N_{II}$. \begin{figure*}[t] \centering \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/HomoLambda_AbileneApp} \caption{$G(\cdot)$ vs. $c_v$, under both heterogeneous and homogeneous request patterns.}\label{fig:HomoLambda_AbileneApp} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1\textwidth]{fig/IdleNodes_AbileneApp} \caption{$G(\cdot)$ vs. $c_v$, under different no. of Type-II nodes $N_{II}$.}\label{fig:IdleNodes_AbileneApp} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.23 \linewidth} \centering \includegraphics[width=1.024\textwidth]{fig/IdleNodes_Abilene_Ratios_App} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. $c_v$, under different $N_{II}$.}\label{fig:IdleNodes_Abilene_Ratios_App} \end{minipage} \begin{minipage}[t]{0.005 \linewidth} ~ \end{minipage} \begin{minipage}[t]{0.25 \linewidth} \centering \includegraphics[width=1.08\textwidth]{fig/ImpactExtraNodesApp} \caption{$G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ vs. no. of extra cache nodes, under different $c_v$.}\label{fig:ImpactExtraNodesApp} \end{minipage} \end{figure*} \textbf{Results in the Grid Topology.} We perform simulations on the Grid topology shown in Figure \ref{fig:Grid}. We consider a set $\mathcal{I}=\{1,\ldots,16\}$ of content items, and generate the cost on each edge uniformly at random from the interval $[1, 100]$. We show the performance corresponding to selfish behaviors in Figures \ref{fig:HomoLambda_Grid}--\ref{fig:IdleNodes_Ratios_Grid}.
In contrast to the Abilene and GEANT networks, we observe in Figure \ref{fig:HomoLambda_Grid} that the homogeneous request pattern leads to a larger aggregate caching gain but a smaller ratio $G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ than that under the heterogeneous request pattern. As in the Abilene and GEANT networks, $G(\boldsymbol{x}^{\rm NE})/\bar{G}(\boldsymbol{x}^{\rm SO})$ increases with $c_v$ and decreases with $N_{II}$. \textbf{Results for the Scenario with Unequal-Sized Items.} We perform simulations in the Abilene network for the case where content items have different sizes, with results shown in Figures \ref{fig:HomoLambda_AbileneApp}--\ref{fig:ImpactExtraNodesApp}. We assume that the sizes of the $|\mathcal{I}|=10$ content items are $\boldsymbol{L}=\{0.2,0.4,0.6,0.8,1.0,1.2,1.4,1.6,1.8,2.0\},$ and node 1 is the designated server of all 10 content items. We compare the performance achieved by the approximate Nash equilibria of Game \ref{gameapp} with that achieved by the socially optimal solution. Figure \ref{fig:HomoLambda_AbileneApp} shows that the homogeneous request pattern leads to larger gaps between $G(\boldsymbol{x}^{\rm NE})$ and $\bar{G}(\boldsymbol{x}^{\rm SO})$, and hence worse performance achieved by selfish caching behaviors at the approximate Nash equilibrium. Figures \ref{fig:IdleNodes_AbileneApp}--\ref{fig:ImpactExtraNodesApp} show that the gap between $G(\boldsymbol{x}^{\rm NE})$ and $\bar{G}(\boldsymbol{x}^{\rm SO})$ decreases with the cache capacity at each node, while the gap increases with $N_{II}$. Furthermore, adding more extra caches makes the PoA worse. \section{Conclusion}\label{sec:conclusion} In this paper, we analyze selfish caching games on directed graphs, which can yield arbitrarily bad performance. We show that a PSNE exists and can be found in polynomial time if there is no mixed request loop, and that mixed request loops can be avoided by properly choosing the request forwarding paths.
We then show that although the cache paradox occurs, i.e., adding extra cache nodes does not improve the performance of the PSNE, the PoA is bounded in arbitrary-topology networks under the homogeneous request pattern property and the path overlap property. We further show that the selfish caching game with unequal-sized items admits an approximate PSNE with bounded PoA in special cases. There are several interesting directions to explore in the future, such as analyzing the impact of congestion effects on each edge, the joint caching and routing decisions of selfish nodes, privacy issues, the dynamic selfish caching game under incomplete information, and the coalitional game for a caching network with multiple cache providers, where each provider owns several cache nodes. \bibliographystyle{IEEEtran}